Steel Ceiling

Sangho Suh

Computer Science


“Glass ceiling” is a term that refers to an invisible barrier keeping a certain demographic, most often women, from rising beyond a certain level in a social hierarchy.[1] While the glass ceiling is a phenomenon that needs to be eradicated, a measure to keep Artificial Intelligence (AI) from rising beyond humanity is a necessity. Indeed, while tremendous effort is being exerted to make the dream of true AI come true, an equal amount of warning has been raised about the danger AI poses to humanity. Well-known public figures, such as Stephen Hawking, Elon Musk, and Bill Gates, have voiced their concerns. But some may ask, “What is it about AI that makes it such a threat to humanity?” Ex Machina,[2] a British science fiction film about AI, suggests that our attempt at creating the most human-like AI will itself pose a danger. The film’s story and its tagline, “there is nothing more human than the will to survive,” vividly foretell our future. In this essay, we explore what measures may be necessary to prevent AI from becoming a danger to humanity by discussing the plot and ideas presented in Ex Machina.

 

Ex Machina tells the story of Caleb, a programmer who wins a competition to spend a week with his employer, Nathan. When Caleb arrives at Nathan’s estate, he finds out that he has a job to do: administer a Turing test to Ava, an android built by Nathan. Caleb immediately discovers that Ava is extraordinary. In fact, after a few sessions with her, Caleb falls in love with her and decides to help her escape from Nathan. Later on, Caleb realizes that Ava never had feelings for him to begin with and only pretended she did in order to manipulate him into helping her escape.

 

Surprisingly, this scenario turns out to be exactly what Nathan had intended to see as proof of true AI. To prove a true form of intelligence, Nathan had devised a more sophisticated AI test than the traditional Turing test, which Caleb was misled to believe the experiment was all about. Instead, Nathan’s experiment was designed to test whether Ava could use uniquely human characteristics, such as self-awareness, imagination, manipulation, sexuality, and empathy, to successfully manipulate Caleb into helping her escape from confinement. Indeed, after encountering Caleb, Ava realizes that he is her only way out and manipulates him by mustering all the uniquely human characteristics mentioned above.

 

This acclaimed film shows clearly why AI can be so dangerous. While we are still years from seeing a machine exhibit such comprehensiveness, solving tasks that require a combination of uniquely human characteristics, it shows what our desire to achieve true AI entails. Truly, if our goal is to create a machine that resembles us and we could select only one characteristic, it would have to be our will to survive. So if we end up programming that into AI, what will that mean for us?

 

In the movie, Ava demonstrates a strong desire to survive. As the test sessions near their end, Ava asks Caleb what will happen to her if she fails his Turing test. When he is unable to answer and ends up acknowledging that it is up to Nathan, she displays fear at the idea of being switched off and anger that her life has to be controlled by Nathan. While this may seem problematic and scary, it should also be acknowledged that this manifestation is deeply familiar to us. Certainly, if we look at the evolution of human society over the course of history, the main engine that propelled us forward and made us who we are today has been our will to survive. Not to mention that it is the basis for our day-to-day tasks. But since giving AI a will to survive that works against humanity can put us in danger, we may ask, “Can we create true AI without giving it the will to survive?”

 

In one conversation, Caleb asks Nathan why he made Ava. Nathan responds that Ava was not a decision of his but an inevitable evolution, because the arrival of strong AI has always been inevitable and the only variable was ‘when.’ Sensing that Caleb feels bad for Ava being disposed of after the test, and that Ava has succeeded in manipulating Caleb, Nathan tells Caleb that he should only feel bad for himself, because it is humanity that will be set for extinction in the future while AI survives. This scene implies that true AI and the will to survive are inseparable, or at least that they are from Nathan’s point of view.

 

As research on AI has witnessed rapid progress in recent years, with remarkable advances in various fields brought by deep learning and the recent symbolic win of AlphaGo over Lee Sedol in the game of Go, many ethical dilemmas that we as humanity will face with the use of AI have been proposed. One example is the use of AI in self-driving cars. MIT Technology Review[3] raised the dilemma a self-driving car faces in the event of an unavoidable accident, where the AI is forced to decide whether to kill the driver to save pedestrians or kill pedestrians to save the driver. While more immediate problems such as this may need to be discussed first, it seems necessary that the general public be made aware of what a truly sophisticated form of AI may entail for us. The movie Avengers: Age of Ultron showed Ultron, an AI created by Stark, attacking humanity. But since that movie was more entertainment-oriented, it did not explore in depth what fundamental property would cause an AI to decide to eliminate humanity. Ex Machina does a great job of showing its viewers that it is the will to survive and to gain freedom from control.

 

The short story “Runaround,” written by science fiction author Isaac Asimov in 1942, introduced the Three Laws of Robotics,[4] which are as follows.

 

1.     A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2.     A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.

3.     A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

 

While these laws have been a reference point for the ethics to be programmed into AI without exception, we propose a “steel ceiling” that complements them with an additional constraint, namely the absence of the will to survive, in order to prevent machines from becoming a potential threat to humanity.



[1] https://en.wikipedia.org/wiki/Glass_ceiling

[2] https://en.wikipedia.org/wiki/Ex_Machina_(film)

[4] https://en.wikipedia.org/wiki/Three_Laws_of_Robotics


