Ethics: Finding the True North For AI

“The word ‘good’ has many meanings. For example, if a man were to shoot his grandmother at a range of five hundred yards, I should call him a good shot, but not necessarily a good man.” ― G. K. Chesterton. This quotation steers my thinking toward the anomaly at the heart of my thoughts about ethics in AI: good is relative. In fact, it is a measure of intensity.

I love the Merriam-Webster definition of ethics: the discipline dealing with what is good and bad and with moral duty and obligation. What are the moral obligations of an AI and its creators? How well does it declare its moral obligations and duties? How well does it satisfy those obligations and duties towards humans?

Now, like any other technology, AI knows no boundaries, and a global effort is needed to govern its growth as it becomes ever more powerful and intertwined with society. But before we look at the ethical questions, what does AI actually do? The BBC helps me put it simply.

Personal electronic devices and accounts (like our phones or social media) use AI to learn more about us and the things that we like. One example is entertainment services such as Netflix, which use the technology to understand what we like to watch and to recommend other shows based on what they learn.

AI can make video games more challenging by studying how a player behaves, and home assistants like Alexa and Siri also rely on it. In healthcare, AI is used not only for research but also to take better care of patients through improved diagnosis and monitoring. It has uses in transport too: driverless cars are AI in action, and the technology is used extensively in the aviation industry (for example, in flight simulators).

Farmers can use AI to monitor crops and conditions and to make predictions, which helps them to be more efficient. You only have to look at what some of these AI robots can do to see how advanced the technology is and to imagine the many other jobs it could take on. In other words, AI is reaching into every sphere of our lives.

So what are some of the ethical issues around AI?

How will AI affect our humanity? Artificially intelligent bots are becoming better and better at modeling human conversations and relationships. According to the World Economic Forum, in 2015 a bot named Eugene Goostman won the Turing Challenge for the first time. In this challenge, human raters used text input to chat with an unknown entity, then guessed whether they had been chatting with a human or a machine. Eugene Goostman fooled more than half of the human raters into thinking they had been talking to a human being.

Some critics argue that AI applications cannot, by definition, simulate genuine human empathy, and that the use of AI technology in fields such as customer service or psychotherapy is deeply misguided. A few experts are also troubled that AI researchers (and some philosophers) are willing to view the human mind as nothing more than a computer program (a position now known as computationalism), which they say implies that AI research devalues human life. Underlying these worries is a fear that deepening dependence on machine-driven systems will cost people their independence and free will.

Talk of independence brings us to the security concern. Most AI tools are, and will remain, in the hands of companies striving for profit or governments striving for power. Values and ethics are often not baked into the digital systems that make people’s decisions for them. These systems are globally networked and not easy to regulate or rein in. And it is not just about laws and regulations, because laws can be bent or broken. Privacy concerns will only be adequately addressed when integrity and ethical practice are built in.

It sometimes seems that the march of the machines is all about compromising human relevance. What happens after the end of jobs? Many see AI as augmenting human capacities, but some predict the opposite: that people’s deepening dependence on machine-driven networks will erode their ability to think for themselves, take action independently of automated systems, and interact effectively with others.

Ankit Rathi’s narration of the loss of jobs in the wake of AI tells the story: “The relationship between automation and employment has always been complicated. While automation eliminates old jobs, it also creates new jobs through micro-economic and macro-economic effects. Unlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence.”

“The Economist states that ‘the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution’ is ‘worth taking seriously’. Many futurists warn that these jobs may be automated in the next couple of decades and that many of the new jobs may not be ‘accessible to people with average capability’, even with retraining. Economists point out that in the past technology has tended to increase rather than reduce total employment, but acknowledge that ‘we’re in uncharted territory’ with AI.”

These are just highlights of some of the ethical issues in AI. Ethics aren’t ethics if you bend them to suit yourself. Finding the true north for AI is essential if augmenting human performance is the main agenda. Achieving this means ensuring that AI does not make worse mistakes than humans would, and, since they lack human judgment, autonomous weapons should not be allowed to make kill decisions. We need to keep learning, and our focus should not be surpassing human intelligence but augmenting its potential.
