Ethical and moral issues raised by Artificial Intelligence
If a machine passes the Turing test and gains consciousness, to the point that its intelligence and feelings are indistinguishable from a human's, how would you tell a human from a machine? If we were a few hundred years further into the future, how could you be sure your boss is a human and not a robot? How could you be sure that you yourself are a human and not a robot? How could you be sure that you are not living in Westworld (a TV drama that explores the ethical issues raised by AI)?
If machines become indistinguishable from humans (and I think it is very likely they will), should they be treated the same way as humans? Should they have equal rights? Should it bother you to know that your boss is a robot?
As we move ahead in time, we give machines more autonomy to make decisions than ever before. As long as we want to make things easier and simpler for ourselves, we will keep handing more control to machines. And by giving machines more autonomy, we create greater moral dilemmas. One example is the trolley problem applied to self-driving cars: in a situation where the car has no choice but to hit either a pedestrian or its own passenger, whom should the programmers and designers tell the car to spare? A toy sketch of such a decision rule follows the image below.
Picture: Moral Machine
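To make the dilemma concrete, here is a purely illustrative sketch, not any manufacturer's actual logic. The point is that whatever policy the designers settle on ends up as an explicit, auditable rule in code; the function, fields, and weights below are hypothetical.

```python
# Hypothetical sketch: an unavoidable-collision policy reduced to code.
# The weights and the notion of "harm" are invented for illustration;
# real autonomous-driving stacks do not expose ethics as a single function.
from dataclasses import dataclass
from typing import List

@dataclass
class Outcome:
    description: str         # e.g. "continue straight", "swerve into barrier"
    pedestrians_harmed: int
    passengers_harmed: int

def choose_outcome(outcomes: List[Outcome],
                   pedestrian_weight: float = 1.0,
                   passenger_weight: float = 1.0) -> Outcome:
    """Pick the outcome with the lowest weighted harm.

    The moral dilemma lives entirely in the two weights: setting
    pedestrian_weight > passenger_weight prioritises bystanders,
    while the reverse prioritises the people who bought the car.
    """
    def harm(o: Outcome) -> float:
        return (pedestrian_weight * o.pedestrians_harmed
                + passenger_weight * o.passengers_harmed)
    return min(outcomes, key=harm)

# Example: the classic two-way dilemma.
options = [
    Outcome("continue straight", pedestrians_harmed=1, passengers_harmed=0),
    Outcome("swerve into barrier", pedestrians_harmed=0, passengers_harmed=1),
]
print(choose_outcome(options).description)  # whoever the weights favour
```

Whichever weights are chosen, someone has to write them down long before the car ever encounters the situation; that is exactly the choice the Moral Machine experiment asks people to make.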
It wouldn't be a surprise if machines replaced soldiers in future wars. Would it be ethical to deliberately pit humans against machines in the first place? Couldn't a machine with sufficient autonomy go rogue and attack civilians, or even its own human army? As we give more power to machines, these moral dilemmas are bound to multiply.
Related: How does Google’s self-driving car deal with the trolley problem in philosophy?