The Real Implications of AI
October 17, 2018
More than 70% of Americans are concerned that artificial intelligence will lead to “robots taking over.” Fears that AI machines will replace the human workforce or that robots will develop superintelligence and rebel are propagated throughout the media and pop culture. Stephen Hawking warned that artificial intelligence could “spell the end of the human race,” and Elon Musk has called it “our biggest existential threat.” Musk has even suggested that “there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”
Is he right?
First, we should define what artificial intelligence is; there are two main concepts. The kind we see most often in science fiction is general, or strong, AI. Strong AI aims to “build systems that think but also explain how humans think.” But in real life, applications of artificial intelligence have not yet manifested in robots or computers capable of independent human thought.
Narrow, or weak, AI is used to “build systems that can behave like humans, [but] the results will tell us nothing about how humans think.” This form of AI is typically “designed for specific tasks,” not to encompass all human reasoning. In this vein, we already interact with AI on a daily basis — whether or not we realize it. 
For instance, navigation services and ride-sharing companies use machine learning to estimate precise arrival times. Online retailers often use AI to predict products you might be interested in, and video streaming services use it to recommend personalized choices. Gmail uses AI to detect spam messages. Facebook and Snapchat use facial recognition software to tag friends or add filters. And voice assistants like Siri and Alexa are now commonplace.
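To make “narrow AI” concrete, here is a minimal sketch of the kind of statistical text classifier that underlies spam filtering. This is not Gmail’s actual system; the handful of training messages below is invented, and a real filter learns from millions of examples, but the principle is the same: the program learns word patterns from labeled data rather than “thinking” about email.

```python
# A minimal sketch of the kind of narrow AI behind spam filtering.
# NOT Gmail's actual system; the tiny training set is invented for
# illustration, and a real filter trains on millions of messages.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical labeled examples: 1 = spam, 0 = not spam
messages = [
    "WIN a FREE prize now, click here",
    "Lowest price guaranteed, act now",
    "Lunch tomorrow at noon?",
    "Here are the notes from today's meeting",
]
labels = [1, 1, 0, 0]

# Turn each message into word counts, then fit a Naive Bayes classifier
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)
model = MultinomialNB().fit(X, labels)

# Classify a new, unseen message
test = vectorizer.transform(["Click here for a free prize"])
print(model.predict(test))  # [1] -> flagged as spam
```

Notice that nothing here resembles human reasoning: the model simply counts words. That is exactly what makes weak AI effective at one task and useless at everything else.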
In the past decade, AI has been used to significantly improve products and increase businesses’ efficiencies. In industry especially, applications of AI have continued to grow, and they have generally served the social good.
However, fear and uncertainty regarding AI and its potential have resulted in a call for increased government regulation. This is a concern for industry leaders, as legislative restrictions could slow or even halt innovation just when this technology has begun to flourish. Legislation would cause industry costs to rise, and ultimately society could “lose out on many socially and economically enriching innovations.” Many argue that the ethics committees in place, including the Partnership on AI, founded by Amazon, Apple, DeepMind, Facebook, Google, IBM, and Microsoft, do enough to self-regulate the artificial intelligence movement.
But there are motives to regulate AI beyond the fear of an AI takeover. Artificial intelligence is not perfect, and in some cases it has had significantly troubling ethical consequences.
In order for machine learning to be effective, it needs to interpret big data, and the acquisition of that data raises serious privacy concerns for Americans. Amazon can make product recommendations by storing the search histories of hundreds of millions of users. If you keep your location services on, GPS applications track your location and travel in order to refine arrival times. Facebook can track your external browsing history, likes, posts, and messages while you’re logged in, in order to show you targeted ads (and we know how that data can be misused). Many believe that Alexa, Google Home, or even Facebook is listening to and recording conversations in order to gather personal data. While the General Data Protection Regulation (GDPR), which requires companies to clarify their privacy policies and imposes penalties for data misuse, was implemented by the EU earlier this year, there is no similar legislation in the US.
As unbiased as artificial intelligence appears, its code is written by humans, who are inherently biased. Most AI systems are also built on top of earlier code, models, and data, so biases and errors introduced at any stage remain in the system. In 2016, an algorithm used by investigators and in trials to determine the “risk” that criminal defendants would commit future crimes was found to be racially biased. In response, New York recently introduced the “algorithmic accountability bill,” which prohibits algorithmic discrimination.
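A toy simulation can show how this happens even when no one codes bias deliberately. In the hypothetical sketch below, the sensitive attribute is never given to the model, but the historical labels were skewed against one group and a correlated proxy feature (think zip code) remains, so the trained model reproduces the disparity. All of the data here is synthetic.

```python
# Illustrative sketch with invented data: a model trained on biased
# historical labels reproduces that bias even when the sensitive
# attribute is excluded, because a correlated proxy feature remains.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)            # sensitive attribute (never a feature)
proxy = group + rng.normal(0, 0.3, n)    # e.g., neighborhood, correlated with group
merit = rng.normal(0, 1, n)              # legitimate signal

# Historical labels were skewed against group 1
biased_labels = (merit - 0.8 * group + rng.normal(0, 0.5, n) > 0).astype(int)

X = np.column_stack([proxy, merit])      # the model never sees `group` directly
model = LogisticRegression().fit(X, biased_labels)

rates = [model.predict(X[group == g]).mean() for g in (0, 1)]
print(f"positive rate, group 0: {rates[0]:.2f}, group 1: {rates[1]:.2f}")
# The gap persists: bias in the data survives in the model.
```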
A further concern is that various forms of AI can pose physical dangers to humans. Most existing AI regulation concerns the increasingly popular market for autonomous vehicles. At present, the software is not perfect, and there have been multiple accidents and fatalities during testing. A major ethical concern for self-driving cars is the “trolley problem”: situations in which an accident is inevitable but the AI has multiple courses of action to choose from (e.g., hit a pedestrian, or swerve off the road and injure the driver). Currently, there is no distinct body of legislation for driverless cars (or any other artificial intelligence that could be dangerous), a gap that scholars argue will likely have to be filled.
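The point is easier to see in code: a planning system that ranks possible maneuvers has to assign an explicit cost to each kind of harm, so the “trolley problem” answer is effectively a parameter someone chooses. The sketch below is deliberately simplified and entirely hypothetical; real autonomous-vehicle planners are far more complex, but they too must encode such trade-offs somewhere.

```python
# A deliberately simplified, hypothetical sketch of why the "trolley
# problem" is a design decision: a planner that scores candidate
# maneuvers must assign explicit costs to each outcome. The maneuvers,
# harm estimates, and weights below are all invented.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    pedestrian_harm: float  # expected harm to pedestrians (0..1)
    occupant_harm: float    # expected harm to vehicle occupants (0..1)

def cost(m: Maneuver, w_pedestrian: float, w_occupant: float) -> float:
    # Someone has to choose these weights; that choice IS the ethics.
    return w_pedestrian * m.pedestrian_harm + w_occupant * m.occupant_harm

options = [
    Maneuver("brake in lane", pedestrian_harm=0.9, occupant_harm=0.1),
    Maneuver("swerve off road", pedestrian_harm=0.0, occupant_harm=0.6),
]

best = min(options, key=lambda m: cost(m, w_pedestrian=1.0, w_occupant=1.0))
print(best.name)  # with equal weights: "swerve off road"
```

Changing the weights changes the car’s choice, which is precisely why many argue the law, not individual manufacturers, should decide them.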
Other potential threats identified by academics include artificially intelligent weapons (AI drones already exist), more efficient hacking and phishing, automated propaganda and censorship, and “robot swarms,” large numbers of autonomous robots pursuing the same goal.
This dystopian future is not inevitable, but we likely need both individual and federal regulation for some forms of AI in order to protect our rights and civil liberties, avoid systematic bias, and prevent harm or wrongdoing.
To quote Jurassic Park, we do not want to end up saying, “Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.”
Student Blog Disclaimer
The views expressed on the Student Blog are the author’s opinions and don’t necessarily represent the Penn Wharton Public Policy Initiative’s strategies, recommendations, or opinions.