Designing machine intelligence — inspiration from the Silver Screen (part 2)


This is part 2 in a series that draws inspiration from the silver screen as a way to ‘peek into the future’ of artificial intelligence. In other words: AI movies as a way to prototype machine intelligence and its possible impact on human life.

Ava and Caleb

Ex Machina (spoilers)

A great movie, visually stunning, that constantly leaves you wondering: who’s the bad guy?

In short: a programmer named Caleb is invited to his boss-inventor’s home, where he has to judge whether the inventor’s newly created robot can trick him into developing feelings for her, as a kind of advanced Turing test. Secretly, the inventor (Nathan) has given the robot the assignment to trick Caleb so she can escape.

It’s up to the viewer to decide whether this AI has true intelligence and human(-like) feelings. At the end, Ava (the robot) leaves Caleb (the human she pretended to fall in love with) locked in a room where he will presumably die.

What I found very interesting is that she asks Caleb “will you stay here?” and he nods. I understood his nod as “I don’t understand what you mean, but I trust you”. In my mind, Ava doesn’t really care whether Caleb stays, dies, or walks free: she pretends to have empathy but doesn’t actually have it. That’s why she asks him if he wants to stay: she can finish her assignment regardless of what happens to Caleb, so it’s up to him. Because she is so ‘human-like’ in appearance and behavior, we don’t expect her to behave so coldly and rationally. We were tricked into thinking her intelligence was the same as a human’s, but it is not.

A wise lesson for UX design in relation to AI: when you pretend your service has ‘human-like’ intelligence, you create the expectation of human behavior and capabilities. Every time your service cannot fulfill that promise (or kills people in cold blood), it looks like a failure. Perhaps it’s better to design the machine intelligence to be visibly different from human intelligence, to underscore that difference?
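To make that idea concrete, here is a minimal sketch (all names and the API are hypothetical, not from any real service) of what designing for ‘machine-ness’ could look like in a conversational product: the reply surfaces its confidence and its limits instead of imitating human small talk.

```python
# Hypothetical sketch: a reply format that signals "machine" rather than
# "human" by exposing confidence and capability limits to the user.

from dataclasses import dataclass


@dataclass
class MachineReply:
    answer: str
    confidence: float           # 0.0-1.0, deliberately shown to the user
    out_of_scope: bool = False  # a plain flag instead of a human-style apology


def render(reply: MachineReply) -> str:
    """Render a reply that underscores, rather than hides, machine-ness."""
    if reply.out_of_scope:
        return "This request is outside my capabilities."
    return f"{reply.answer} (confidence: {reply.confidence:.0%})"


print(render(MachineReply("The meeting is at 15:00.", 0.92)))
# -> The meeting is at 15:00. (confidence: 92%)
```

The point of the sketch is not the specific format, but that the interface never promises human judgment it cannot deliver.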

There may be a different reason why Ava left Caleb.

Transcendence

Transcendence (spoilers)

A researcher is shot, and before he dies, his mind is uploaded into a machine intelligence. From that moment on, the philosophical question becomes: is the human still there? Or is it a machine intelligence using the memories, thoughts, and ideas of this human? Although the movie shows the benefits of an ‘enhanced intelligence’ (diseases get cured, technology gets a huge boost, etc.), there is no denying that it all happens at an alarming rate. People panic and decide to disable the intelligence before it decides to ‘harm them’. Are these people Luddites? (Is it okay that they are?)

The premise of this movie boils down to the same question as Ex Machina’s: can we be comfortable with an intelligence that thinks almost, but not exactly, like us?

Take-aways

As UX Designers, how can we make sure technological advancements do not become disconcerting? Is there a way people could have been comfortable with the machine intelligence from Transcendence?

Maybe it’s better if the machine intelligence has less agency and ‘the humans’ are more in control. Or perhaps the machine intelligence should be modeled on something other than humans, so our users don’t expect human behavior?
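As a rough illustration of that first option, here is a minimal sketch (hypothetical, not from any real product) of a human-in-the-loop pattern: the machine may only propose actions, and a human must approve each one before it runs.

```python
# Hypothetical sketch: the AI proposes, the human disposes.


def approve(description: str) -> bool:
    """Ask the human operator to approve a proposed action."""
    answer = input(f"AI proposes: {description!r}. Approve? [y/N] ")
    return answer.strip().lower() == "y"


def run_agent(proposals: list[str]) -> None:
    """Execute only the proposals the human explicitly approves."""
    for action in proposals:
        if approve(action):
            print(f"Executing: {action}")
        else:
            print(f"Skipped: {action}")


if __name__ == "__main__":
    run_agent(["send the weekly report", "delete old backups"])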

Images in this post are not under a Creative Commons license.