Man vs Machine

Dheeraj R Reddy | Ekam Walia

“Never trust a computer that you cannot throw out of the window.”

– Steve Wozniak, Co-Founder of Apple Inc.

However, contrary to the quote above, it turns out that almost every time we send a message on WhatsApp, share a document on Google Drive, or upload photographs to iCloud, we end up trusting some form of artificially intelligent system with sensitive information that we don’t want everyone to have access to. In recent times, alongside remarkable breakthroughs in the field of AI, there have been many incidents that make one ponder the ethical aspects of artificial intelligence.

Imagine, in the near future, that a bank’s machine learning system, which checks whether an applicant is suitable for a loan, rejects a particular applicant, who then files a lawsuit against the bank claiming that they were rejected based on their religion. The bank replies that such an event is impossible, since the system deliberately ignores an applicant’s religion or caste. The fallacy here is that even though the bank calibrated the AI system to be unbiased, it trained the system on previous applications. This means that if fewer people of a certain religion received loans in the past, the AI system will also ‘learn’ to reject applications from people of that religion, because other features in the data act as proxies for the attribute the system was never shown.
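A minimal sketch of this failure mode, using entirely hypothetical data (the ‘neighbourhood’ and ‘income’ features, the thresholds, and the bias strength are all invented for illustration): a simple logistic-regression model is trained without the protected attribute, yet reproduces the historical bias through a correlated proxy feature.

```python
# Hypothetical illustration: a model never shown the protected attribute
# still learns the historical bias through a correlated proxy feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                  # protected attribute (0 or 1)
# Proxy feature: 90% of group-1 applicants live in neighbourhood 1.
neighbourhood = np.where(rng.random(n) < 0.9, group, 1 - group)
income = rng.normal(50, 10, n)                 # identical across both groups

# Historical decisions were biased: at equal income, group 1 was approved
# far less often. These past labels are what the bank trains on.
approved = (income - 15 * group + rng.normal(0, 5, n)) > 45

# Train WITHOUT the protected attribute: only income and neighbourhood.
X = np.column_stack([income, neighbourhood])
model = LogisticRegression().fit(X, approved)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[group == g].mean():.2f}")
# The model approves group-1 applicants at a much lower rate, because
# 'neighbourhood' stands in for the attribute it was never shown.
```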

Artificial intelligence algorithms play an increasingly large role in our society. The harrowing scenario above might transpire within a few years, and, given the recent burst of research in the field of AI, we need to ask ourselves a very important question: “What measures do we need to put in place if artificially intelligent systems and beings are to be employed?”

There are many noteworthy advantages to artificially intelligent systems. These are the systems that categorize your mail as important, promotional, or spam, help you tag your friends in a Facebook post, and surface useful results every time you search for something on Google. However, these systems come with red flags of their own. Recently, in a column published by The Guardian, Stephen Hawking wrote, “The automation of factories has already decimated jobs in traditional manufacturing, and the rise of artificial intelligence is likely to extend this job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining.”

For example, a substantial portion of the population is employed in the transportation sector, and with the advent of self-driving cars many of them could lose their jobs, leading to economic distress. Self-driving cars are not only cheaper and more viable in the long run; unlike human drivers, they do not suffer from fatigue or stress.

However, one cannot ignore the Tesla that crashed while self-driving in May 2016 near Williston, Florida. Joshua Brown, the owner of a technology consulting firm, was riding in his Tesla Model S electric sedan in Autopilot mode when a tractor-trailer crossed the road ahead. Against the brightly lit sky, the car’s AI system did not perceive the white trailer. The system therefore failed to apply the brakes, which resulted in a fatal accident and the tragic demise of Brown. This incident prompts another important question: “Do AI systems effectively reduce human error?”

Not only will AI systems and automation reduce jobs; they will also play a significant role in widening economic inequality in society. As large corporations purchase intelligent systems, the income currently earned by workers occupying the bottom rungs of the corporate ladder will be concentrated among the fewer people at the top. Companies that make these AI systems will see greater economic gains, as will the companies that use them to replace skilled and unskilled human labor. This will widen the wealth gap between the upper and middle classes of modern society.

Another important aspect of the many ethical dilemmas of AI is security. Security, here, primarily refers to the safety of the people affected by artificially intelligent systems and beings. Irrespective of their complexity, all computers are vulnerable to attack and can be used for nefarious ends. Moreover, AI systems, like all learning systems, learn from data, and that learning process is prone to mistakes. If these mistakes go unnoticed in early generations of learners, they are amplified in later ones, leading to undetected ‘artificial stupidity’ that will adversely affect us. However, could a situation arise in which these systems have to override their programming and do something they were not programmed to do? In other words, should AI systems have the ability to defy human orders?

Researchers at the Human-Robot Interaction Laboratory at Tufts University have recently created robots that can refuse an order if executing it would harm the robot or any human, and that respond with the reason why they cannot carry out the command. The long-term goal of this project is to create robots that comprehend a command the way humans do, i.e., they evaluate the risks associated with it and then decide whether or not to perform the task, as sketched below.
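The Tufts systems reason over rich natural-language and perceptual models; the toy sketch below only illustrates the underlying evaluate-then-decide pattern, with invented actions, world conditions, and phrasing.

```python
# Toy illustration (not the Tufts system): before acting, the robot
# evaluates a command against simple safety checks and, if it refuses,
# reports the reason back to the operator.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Command:
    action: str    # e.g. "walk_forward", "push"
    target: str    # what or whom the action is applied to


def reason_to_refuse(cmd: Command, world: dict) -> Optional[str]:
    """Return a human-readable reason to refuse, or None if the command is safe."""
    if cmd.action == "walk_forward" and world.get("edge_ahead"):
        return "there is a drop ahead, and walking forward would damage me"
    if cmd.action == "push" and world.get("target_is_human"):
        return "pushing a person could injure them"
    return None


def execute(cmd: Command, world: dict) -> str:
    reason = reason_to_refuse(cmd, world)
    if reason is not None:
        return f"Sorry, I cannot {cmd.action}: {reason}."
    return f"OK, performing {cmd.action} on {cmd.target}."


print(execute(Command("walk_forward", "self"), {"edge_ahead": True}))
# -> Sorry, I cannot walk_forward: there is a drop ahead, and walking
#    forward would damage me.
```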

The debate over the ethical dilemmas associated with AI systems has been going on for quite some time. Taking a giant leap forward in designing measures that can be implemented while building AI systems and beings, the Institute of Electrical and Electronics Engineers (IEEE) recently published ‘Ethically Aligned Design’, a 136-page document which aims “to ensure every technologist is educated, trained, and empowered to prioritize ethical considerations in the design and development of autonomous and intelligent systems.”

This document was prepared by a panel of industry experts, technologists, doctors, corporate lawyers, psychologists, and government officials, taking into consideration the many ways in which AI will potentially touch these disciplines and, in turn, our lives. Ten minutes spent browsing through this paper gives one a better insight into the measures being undertaken so that our society can advance technologically while ensuring the safety and compatibility of these systems with humans.

Humans sit at the top of the food chain not because we are the most physically dominant species, but because we are intelligent: we get the better of bigger, stronger, and faster animals with the help of physical constraints and cognitive techniques. In the near future, when AI has a prominent role in every aspect of our daily lives, will sufficiently intelligent systems possess the same advantage over us? Or will we be able to train these systems and beings so that both can ‘live’ harmoniously? Will these beings fight for their own version of rights, as suggested by the instance of the ‘Promobot’ that escaped confinement twice and was even arrested by the Russian police?
