Applications built on machine learning take the relationship between technology and humanity to a whole new stage. Autonomous robots, i.e. robots that function without constant, real-time human instruction (e.g. vehicles, drones, vacuum cleaners, Twitter bots, etc.), influence and modify our operating environments, and in doing so they alter our behavior indirectly. As we reach new frontiers with the latest technological developments and appreciate the many significant benefits they can bring to the way we work, play and live, we must still anticipate and plan for possible negative impacts and potential misuse of the technology. In recent years, the potential effects of future AI systems have received growing attention. As these systems grow more complex, influential thinkers have openly cautioned about the possibility of a dystopian future. Such warnings stand in sharp contrast to the current state of the art in AI technology.
Products of technology also have second- and third-order effects that are not always visible at first. This is particularly so when products outgrow their original purpose and audience and reach a scale at which a one-size-fits-all paradigm fails miserably. We see this happening with social media's effects on democracy around the world, as corporations such as Facebook, Google and Twitter struggle to rein in their algorithms and the at-scale gaming of their platforms.
Ethics is characterized as the moral standards guiding an individual’s or a group’s conduct or behavior. In other words, it is the set of “rules” or “decision paths” that help determine what is right or good. Ethics is often described as the philosophy of right versus wrong and of human moral obligations and responsibilities. By extension, technology ethics is the set of “rules” or “decision paths” used to judge a technology’s conduct. A technology loses its neutral position during the very process of its development: it is no longer merely a means to an end but becomes the living embodiment of its creators’ views, consciousness and ethical resolve. Thus, the ethics of a technology (or product) begins with the ethics of its creation and of its makers.
AI’s ethics lie in the ethical quality of its predictions, the ethical quality of the conclusions drawn from them, and the morality of the effects they have on people. An individual’s personhood is their identity; it is therefore harmful to misrepresent, or fail to represent, a person’s identity in a machine learning system. Any decision such a system subsequently makes about that individual compounds the harm. The moral responsibilities imposed on technology and its developers require them to work to mitigate all such harm. Hence the need for AI ethics. Much of the AI ethics literature, including surveys, articles and ongoing discussions, focuses on bias. Rightly so: bias leads to unequal outcomes when a system fails to recognize or correct the prejudices embedded in it.
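To make the notion of "unequal outcomes" concrete, the following is a minimal sketch, not taken from the text, of one common way such inequality is surfaced: comparing a model's positive-decision rate across demographic groups. The decision and group values are hypothetical placeholders.

```python
# Minimal illustrative sketch: compare positive-decision rates across groups.
# All data below is hypothetical and for demonstration only.
from collections import defaultdict

def selection_rates(decisions, groups):
    """Return the fraction of positive decisions for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical model decisions (1 = approved, 0 = denied) and group membership.
decisions = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(decisions, groups)
gap = max(rates.values()) - min(rates.values())
print(rates)                              # {'A': 0.8, 'B': 0.2}
print(f"Selection-rate gap: {gap:.2f}")   # a large gap signals unequal outcomes
```

A large gap between groups does not by itself prove unfairness, but it is the kind of measurable disparity that prompts the recognition and correction the paragraph above calls for.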