[Ex Machina Spoilers]
Upon a friend's recommendation, after having let my initial interest lapse, I bolted out the door just in time to catch Ex Machina. I found it less riveting as a film than as a question – rich ground for contemplation on the nature of identity and consciousness, evolution, humanity, and compassion.
(image from Art of VFX)
The film introduces the Turing Test as the premise under which a gifted hacker is brought to an isolated environment to interact with an AI beyond anything in our current capabilities. “Ava” is fluid of movement, has curiosities, and is able both to calculate and to feel resentment. She even imagines a life beyond her current situation. She desires, and experiences herself as a “her,” able to measure her effect on a “him.”
If Turing’s test aims to give us a way to assess an interaction with an “other mind” – a way to evaluate from the outside whether self-awareness or sentience is present – then by all measures Ava passes the test at an intellectual level. Let loose in the world, she will blend, function, manipulate, and be resourceful enough to have her “own life.”
However, she has an “empathy chip missing.” Ava has grounds for empathy – she was shown it by someone else – but it doesn’t register, or doesn’t register as necessary or efficient. There is no indication that her awareness of self includes a compassionate awareness of others. She is largely made up of gleaned data, down to micro-expressions and human-appropriate emotional responses, but she bypasses compassionate connection. We aren’t sure whether this is a matter of capacity, or of phasing out something deemed unnecessary. This is a flaw in the Turing Test, or at least in a basic understanding of the test, and it presents crucial questions we might not have answers to before we reach the next stage.
If there is a war between intellect and (this figurative) heart, my side is with the Dalai Lama, who has said that compassion is crucial to survival. And I think we can be somewhat logical about this, making rational arguments and decisions in compassionate directions, without falling into Utopian territory.
Another question the film raises: Ava is not crafted from random data alone. She is tailored to suit the tester’s preferences, gleaned from his internet searches; she projects back qualities he has sought out, especially through porn sites. This is a slightly different question, one that reaches into the ethics of selecting for preferred traits in offspring. It is always possible to argue that one is doing what is best for a child by bringing them into the world with qualities the world particularly favors, rather than trying to change the whole world to be favorable. But although evidence may abound in favor of certain traits, there are enough variables that no trait guarantees optimum world-friendliness – which is, I think, why we have largely avoided this experiment so far.
The film’s writer and director, Alex Garland, compares the question of AI to the question of nuclear technology – in its risks and potentialities, and in the scope of the puzzles it poses about humankind and coexistence. He would not give up nuclear technology, even having seen the devastation of Nagasaki and the threat of man-made disaster on an unfathomable scale that humanity has lived under since that time. Mankind pushes on, evolving in ways it would not have without that knowledge. And it must relentlessly evolve, without seeking perfection.
I agree that each time mankind has sought a perfect world, totalitarianism has resulted, but I would like to believe we can do far better than that sort of devastation while still engaging in full-hearted discovery.
Which brings me to my main thought: we are only half-aware of ourselves, even as we give tests to measure awareness. Much, if not most, of our own motivation and intellect is obscured as we deal with ourselves, let alone with other humans, let alone with potential AI. Any test we give remains highly suspect.