With great power comes great responsibility. The era of machine-driven combat is not far off: robots will be used as soldiers in future wars.
Experts in machine learning and military technology say it is becoming possible to build robots that decide whom to target and destroy without any human controller, as artificial intelligence, decision-making algorithms and facial recognition grow steadily more powerful. Researchers in artificial intelligence and public policy argue that, in practice, killer robots are a bad idea.
Robots and drones are already in military use. Intelligent autonomous robots programmed by humans to target and kill, or to commit crimes, are not far away unless a treaty ensures that robotics and artificial intelligence (AI) are used responsibly.
Like any other technology, artificial intelligence can be used for good as well as for ill. Facial recognition and object recognition are technologies that have improved markedly in recent years, and they are likely to become a crucial part of lethal autonomous weapons (LAWS). However, these technologies are also easy to fool for anyone determined to do so.
Military robots are developed in strict secrecy to keep others from learning of their existence. Some are already deployed, while others are still under development. Military robots include remote-controlled and autonomous robots or drones designed for military applications. They can be used for search and rescue, for transport, and for attacks capable of killing humans and even destroying cities. The United States has flown military drones over areas where it is at war or engaged in military operations, but so far human controllers decide when those drones fire.
The risk posed by lethal autonomous weapons, also known as killer robots, is real. The battles of the future could be more high-tech and involve less human oversight. Militaries have been experimenting with robots that can serve on the battlefront as killer weaponry. Artificial intelligence researchers have ample reason to conclude that the world should ban the development, testing and deployment of these lethal autonomous weapons. Military powers could mass-produce armies of killer robots, and humanity could pay a high price for it: the production and activation of such weapons would raise the risk of proliferation and mass killing.