
The Unbounded Growth of AI and Its Threat to Humanity


We stand on the brink of a new age.

Technological advancement has been happening since the dawn of humanity, and its growth has always been compounding. It took roughly 1.8 million years to get from fire to the wheel and about 5,500 years to get from the wheel to the steam engine (Watt, 1775). Some fifty years later came the electric motor (Faraday, 1821), and 16 years after that, Charles Babbage designed the first Turing-complete computer, albeit a mechanical one that was never built. Over a century later, we had digital computers and the MOSFET. In 1969 we sent humans to the Moon and already had videotape recording; a decade later, broadcast color television was everywhere and MTV was about to arrive as the pinnacle of culture.

Since the '80s, the epochs of technological advancement have come even faster. In the time it took to go from color TV to dial-up internet and YouTube, touchscreen phones became commonplace and neural-network-based AI emerged as the next big step in development.

But what does this mean for the future of humanity?

Human Obsolescence

We, as a technological species, are approaching the limit of hardware-based innovation. Moore's law suggests that the number of transistors in a VLSI chip (a dense integrated circuit) doubles roughly every two years. With the right programming and interfacing software, this can mean massive improvements in processing power. But there are other hard limits. A transistor can only get so small before quantum tunneling takes hold and an entire system of billions of logic gates becomes unreliable. Hence the only viable way to improve is to write better code: leaner, faster software that is artificially intelligent.
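To put that compounding in perspective: under a strict two-year doubling, a chip's transistor count grows by a factor of roughly a thousand every twenty years. The snippet below is a minimal sketch of that arithmetic; the 1971 starting point of 2,300 transistors (roughly the first commercial microprocessor) is the only real figure in it, and the projection deliberately ignores the physical limits discussed above.

```python
# A minimal sketch of Moore's law as a strict two-year doubling.
# Only the 1971 figure of 2,300 transistors (the first commercial
# microprocessor) is real; the projection itself is an assumption.

def transistor_count(year: int, start_year: int = 1971, start_count: int = 2300,
                     doubling_period: float = 2.0) -> float:
    """Project a transistor count, assuming it doubles every `doubling_period` years."""
    return start_count * 2 ** ((year - start_year) / doubling_period)

if __name__ == "__main__":
    for year in (1971, 1981, 1991, 2001, 2011, 2021):
        print(year, f"{transistor_count(year):,.0f}")
```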

Humans are sluggish and prone to mistakes. We are slow to evolve, to become viable workers, to reproduce, mature, and learn. Over the past millennia, the human body and brain have not developed at anything like the rate of technological growth. One could argue that these innovations are themselves proof of our growth, but that rise in complexity is the result of humanity's collective effort, not of any one person. We too, as hardware, are reaching our limits.

Most modern innovations and inventions rely heavily on the technological crutches of computers and the internet. A future of diminishing returns is likely, but that does not mean humanity has to reach its end. The way forward has been clear for a while now: Artificial Intelligence.

But is AI truly the future we want?

Why AI?

As of now, companies in every market are investing heavily in AI software, with little to no return so far. But there is promise: greater optimization, hyper-targeted advertising, and large-scale user analytics.

For Google, now primarily an advertising company, AI helps analyze stored user data and serve selected, targeted advertisements with high click-through rates. For Apple, it is all about providing the most tailored experience: natural language processing through Siri, and computer vision and deep learning behind its camera technology and iOS. For Intel, it is about pushing the limits of those constrained processors through AI-driven scheduling and resource management, better vision, logical and graphical processing, and building processors that can run complex AI.

AI can be used for almost anything. For now it mostly handles menial, laborious tasks, but the next evolution is already in development. [Credit in photo]

For Tesla, it is self-driving cars; for Morgan Stanley, AI-powered financial services. RISC processor designer ARM works on machine-learning-enabled inhalers, and AI software can detect cancer earlier than ever before. Amplifier maker PositiveGrid uses AI to learn your playing style and generate a backing track with its SmartJam feature. According to PwC, AI could contribute up to $15.7 trillion to the global economy by 2030, with up to a 26% GDP boost for local economies.

AI, ML, and adjacent technologies are everywhere. Optimization and personalized service are the future, and the speed and accuracy of robust AI algorithms allow that to be achieved quickly without human work.

What is the caveat?

There are huge potential problems with this rapid development of new technologies, on multiple fronts. There is the Terminator-style robot war, which is unlikely, and then there is the far more pressing matter of human decline. Humans are indeed the ones creating this technology, but it is not a widespread collective effort. Even though AI is an industry buzzword, very few people understand what it does and how it does it. Even fewer hold power over it.

The most rapid progress in AI research in recent years has involved an increasingly data-driven, black box approach. In the currently popular neural network approach, this training procedure determines the settings of millions of internal parameters which interact in complex ways and are very difficult to reverse engineer and explain.

—David Stern, quantitative research manager at G-Research, a tech firm using machine learning to predict prices in financial markets

There are many types of AI, and multiple ways to categorize them, but three broad distinctions can be made:

  • Artificial Narrow Intelligence (ANI) is what all existing AI is. It is limited in capability, has limited memory, and cannot think for itself; Spotify's recommendation algorithm is one example. It has to be taught to function through exposure to many labeled or unlabeled examples, a slow process of refinement (a minimal training sketch follows this list).
  • Artificial General Intelligence (AGI) is AI that can learn, perceive, understand, and function fully as a human being would. It could massively reduce the training time needed for an AI, but it remains theoretical.
  • Artificial Superintelligence (ASI) is AI above and beyond the realms of human imagination. Some argue the shift from AGI to ASI could take a matter of hours.
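To make that "slow process of refinement" concrete, here is a minimal, hypothetical sketch of how a narrow AI is trained: a tiny classifier nudged towards a handful of labeled examples over thousands of passes. Every number and feature in it is invented for illustration; real systems like a music recommender work on the same principle at vastly larger scale.

```python
import math

# A deliberately tiny "narrow AI": a logistic-regression classifier refined
# by gradient descent. The data set and every parameter are invented.

def predict(weights, bias, features):
    """Sigmoid of a weighted sum: the model's confidence that the label is 1."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def train(examples, epochs=2000, learning_rate=0.1):
    """Nudge the weights a little on every labeled example, thousands of times over."""
    weights = [0.0] * len(examples[0][0])
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            error = predict(weights, bias, features) - label
            weights = [w - learning_rate * error * x for w, x in zip(weights, features)]
            bias -= learning_rate * error
    return weights, bias

if __name__ == "__main__":
    # Labeled examples: (features, label). Entirely made up.
    data = [([0.9, 0.1], 1), ([0.8, 0.2], 1), ([0.2, 0.9], 0), ([0.1, 0.8], 0)]
    w, b = train(data)
    print(predict(w, b, [0.85, 0.15]))  # close to 1
    print(predict(w, b, [0.15, 0.85]))  # close to 0
```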

AI-augmented humans, à la Deus Ex, remain a myth for now. But AI will replace routine jobs, specialized jobs, healthcare, and auxiliary services as it gets smarter and analyzes and understands the human condition more thoroughly. News anchors, influencers, musicians, and artists are all replaceable. Samsung has created life-like AI humans under its NEON project, and AI company Brud created Miquela Sousa, an entirely digital influencer with a following of 3.9m on Instagram. She has modeled for Prada, talked to YouTubers, released music, and been named one of Time's Most Influential People on the Internet.

This is a great achievement for everyone involved, but therein lies the problem: all of this very alien, advanced AI runs on self-learning, self-improving code, which makes it inherently hard to control or even to understand.

The Multiple Threats
The Improbable Ending

OpenAI, a company that researches and deploys AI solutions in the real world, made a hide-and-seek game with multiple AI agents:

The Hiders were programmed to stay away from the Seekers in a world with realistic physics and objects such as walls, boxes, and ramps. Over millions of runs, both sides learned how to use their surroundings to achieve their goals. The Hiders' end goal was to survive the Seekers, which they achieved by locking ramps in place and using boxes and walls to build impenetrable shelters. That, theoretically, should have been the end of the simulation: the Hiders successfully outlast the Seekers by building shelter. Then the Seeker AI adapted. It learned that it could jump onto a box, 'surf' it around, and drop into the Hiders' shelter. No human interference allowed it to learn this; in fact, the only human instruction had been for the Hiders to avoid the Seekers and the Seekers to chase the Hiders.

This is a very real threat. While the scenario was of low complexity, it showed that AI is not limited by the humans who designed it: humans cannot predict the millions of outcomes that an AI can compute in a matter of hours.
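OpenAI's actual environment uses continuous physics and large-scale reinforcement learning, but the incentive it creates can be sketched in a few lines. The toy grid, wall layout, and visibility rule below are my own simplifying assumptions; only the reward structure, Hiders rewarded while unseen and Seekers rewarded otherwise, reflects the experiment described above.

```python
# A heavily simplified sketch of the hide-and-seek reward described above.
# The grid, walls, and visibility rule are toy assumptions of this sketch;
# OpenAI's environment uses continuous physics, not a grid.

from typing import List, Set, Tuple

Cell = Tuple[int, int]

def visible(seeker: Cell, hider: Cell, walls: Set[Cell]) -> bool:
    """A seeker sees a hider if they share a row or column with no wall between them."""
    (sx, sy), (hx, hy) = seeker, hider
    if sx == hx:
        lo, hi = sorted((sy, hy))
        return not any((sx, y) in walls for y in range(lo + 1, hi))
    if sy == hy:
        lo, hi = sorted((sx, hx))
        return not any((x, sy) in walls for x in range(lo + 1, hi))
    return False

def team_rewards(hiders: List[Cell], seekers: List[Cell], walls: Set[Cell]) -> Tuple[int, int]:
    """Return (hider reward, seeker reward): +1/-1 while every hider is hidden, else -1/+1."""
    any_seen = any(visible(s, h, walls) for s in seekers for h in hiders)
    return (-1, +1) if any_seen else (+1, -1)

if __name__ == "__main__":
    walls = {(2, 0), (2, 1), (2, 2)}                 # a crude shelter wall
    print(team_rewards([(1, 1)], [(4, 1)], walls))   # hidden behind the wall -> (1, -1)
    print(team_rewards([(1, 1)], [(1, 4)], walls))   # seeker in the same column -> (-1, 1)
```

Nothing in a reward like this forbids "surfing" a box over the wall; the agents are free to discover any behaviour that earns the reward, which is exactly what happened.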

Satya Nadella, Elon Musk, and the Engineering and Physical Sciences Research Council have all promulgated sets of ethics and rules for AI developers to follow, precisely to avoid the possibility of self-preserving AI that could harm humans.

The Seekers (red) learn to use unconventional means to find the Hiders (blue). The AI evolved on its own, discovering an exploit the programmers had not planned for. [Credit: OpenAI/YouTube]

Research into AI has been ongoing since 1956, though it was relatively erratic in the early days. Throughout this time, researchers, including some of the smartest people in the world, have maintained that the uncontrolled proliferation of AI capabilities could mean the end of the human era.

The Human Condition

A far more likely scenario is the slow decline of humanity.

In 1997, IBM’s Deep Blue supercomputer beat World Chess Champion Garry Kasparov. AI can now play reflex- and skill-based games like Mario and Doom far better than humans can. Today, AI is part of nearly every mobile application we use: virtual assistants, music suggestions, recommended watching on streaming apps, Google’s search ranking, a car’s performance and mileage, and the CPU opponent in video games are all crucial elements of the new AI-driven world.

This isn’t just a matter of information being easily available; rather, we have lost the incentive to explore, to discover new experiences, and to exercise our brains making sense of them. AI has created a microcosm of the larger internet that acts as an echo chamber for our own beliefs and ideas. Constructive, engaging debate is what led humanity to this point of unprecedented technological advancement, but a steady stream of mass-produced, low-quality entertainment, suggested to us by AI that closely monitors our behavior, has already loosened our contact with larger communities. Physical activity is increasingly looked down on, and wars over cultures and opinions rage through every stratum of society.

This isn’t inherently AI manipulation, but AI is an instrument that yields compounding results, echoing back what we want to hear. As AI outsmarts humans, who gladly consume the art, music, and news it generates, the people who control it come very close to holding unethical, universal power.
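A toy model makes that compounding visible. In the hypothetical sketch below, a recommender keeps boosting whatever the user just consumed; the categories, weights, and boost factor are invented, but after a few dozen rounds one category dominates the feed, which is the echo chamber in miniature.

```python
import random

# Toy model of a recommendation feedback loop. Categories, weights, and the
# update rule are invented for illustration; only the narrowing effect matters.

def recommend(weights: dict) -> str:
    """Pick a category with probability proportional to its current weight."""
    categories = list(weights)
    return random.choices(categories, weights=[weights[c] for c in categories])[0]

def feedback_loop(rounds: int = 50, boost: float = 1.3, seed: int = 0) -> dict:
    random.seed(seed)
    weights = {"news": 1.0, "music": 1.0, "sport": 1.0, "politics": 1.0}
    for _ in range(rounds):
        chosen = recommend(weights)
        weights[chosen] *= boost  # the user engages, so the system shows more of the same
    return weights

if __name__ == "__main__":
    final = feedback_loop()
    total = sum(final.values())
    for category, w in sorted(final.items(), key=lambda kv: -kv[1]):
        print(f"{category:9s} {w / total:.0%} of future recommendations")
```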

Total Control

China introduced its Social Credit System in 2014. The system collects all manner of data on citizens and uses an algorithm to assign each a credit rating. People with bad scores can be denied travel and jobs and can have their internet throttled. While the algorithm is human-designed and uses only ANI to determine scores, faulty programming in the ML code could lead to disastrous outcomes for millions of citizens. The EU has moved against this kind of scoring, but complete control over the citizens of the world's most populous state remains a very dangerous precedent.

The Social Credit System in China uses motion-tracking and face-detection AI to feed its scores. Citizens can have points deducted for littering, jaywalking, and speeding. [Credit: Antarctica Journal]
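Purely as an illustration of how fragile such a pipeline is, the sketch below invents a rule-based scoring gate; the offences, penalties, and threshold are not taken from the real system. The point is in the comment: a single flipped comparison or mis-weighted penalty in code like this is applied to every citizen at once.

```python
# Purely illustrative: a rule-based scoring gate. The offences, penalties,
# and threshold below are invented, not taken from the real system.

PENALTIES = {"littering": 5, "jaywalking": 10, "speeding": 20}
TRAVEL_THRESHOLD = 600  # assumed cut-off below which travel is denied

def score(base_score, offences):
    """Deduct a fixed penalty for each recorded offence."""
    return base_score - sum(PENALTIES.get(o, 0) for o in offences)

def may_travel(citizen_score):
    # A single mistake here (say, `<` written instead of `>=`) would be
    # applied to every citizen the moment the code is deployed.
    return citizen_score >= TRAVEL_THRESHOLD

if __name__ == "__main__":
    s = score(620, ["jaywalking", "speeding"])
    print(s, may_travel(s))  # 590 False: two minor offences cross the threshold
```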

What can we do?

Deus Ex, released in the year 2000, predicted something very close to what we have today. It brought ‘The Illuminati’ to the masses, which for us means the monopoly that the Big Four of Tech hold. The game also concludes that technological advancement cannot be unbounded, lest we fly too close to the sun.

Organizations like the IEEE and the OECD have been pushing for some level of international regulation, as have politicians and thinkers. While some have paid heed, it is far from the level of global coordination needed.

China and the UK have introduced some regulation, including catastrophe-prevention measures, but the US has no actual regulation (just drafts), and hacker havens like Russia, the Balkans, and Central Asia have none whatsoever. Over the generations, sci-fi writers, researchers, and public figures have proposed 'Laws' or baseline ethical guidelines for all robotic and AI development; while it would be wise to follow these, industries and economic forces with their investments and futures riding on this new wave are lobbying against them.

AI is far too interwoven into our lives to be removed, and arguably far too crucial to human existence. There are questions AI can answer and problems AI can solve. Industries like medicine, transport, science, and entertainment benefit greatly from advances in AI, and consequently, so do humans. The best, and most feasible, way out is not to cut AI down but to throttle and control its growth, with stringent regulation, constant audits, and democratic development.

The problem may sound futuristic, but we are knocking on the door of a new era.

[Featured Image Credit: Parth Saravade, The MIT Post]
