In case you missed it, one current fixation with artificial intelligence tech involves creating realistic imitations of human beings, living or dead. These representations of people’s voices, faces, and mannerisms are alarmingly precise when wielded by skillful hands (and hilariously offensive when not).
We marvel at being able to hear, see, and interact with these outputs. That possibility is genuinely amazing and deserves its moment in the sun.
Yet, the uncanny valley remains: Even with top-tier AI models undergoing extensive (and shockingly expensive) training, the result is often so off that even children can spot it within a few seconds. This gap actually seems to widen as the fidelity of these creations increases. The more human qualities we try to imitate, the less “human” those creations become.
Something deep down tells us that it’s not right. We know it’s a fake. We see the man behind the curtain. We instinctually sense that it’s not to be trusted. We may choose to lie to ourselves, hoping that it might actually be real, just like us.
But it’s not.
Invention, it must be humbly admitted, does not consist in creating out of void, but out of chaos; the materials must, in the first place, be afforded: it can give form to dark, shapeless substances, but cannot bring into being the substance itself.
— Mary Wollstonecraft Shelley, Frankenstein
Part of creating something that mimics real life is accepting the truth of reality: Humanity is so singular, so precise, that we feel a connection with other humans that transcends anything else in life. Humans are not the product of endless evolution cycles; we are not randomly assembled; we cannot be replicated. Each fragment of our existence is intelligently designed.
Yet, humans have a long-standing track record of creating self-defeating prophecies. The loop goes something like this:
- Dream of possible futures
- Find useful materials to refine
- Create shapes that bear our image
- Approach those shapes as if they were alive
- Wonder why they ignore, betray, and destroy our dreams
Regardless of your position on any technological milestone, there will always be a categorical separation between creature and creator. We will always feel strange when standing next to Frankenstein’s monster.
When we attempt to replicate humanity in some lower form, we are starting with raw materials that automatically disqualify the end result from having a meaningful connection with us.
It’s just not possible to build something that feels human unless you start with humanity itself.
Any sufficiently advanced technology is indistinguishable from magic
— Arthur C. Clarke
I find a particular irony in the label “artificial intelligence.” These tools are powerful, the limits seem endless; I’m sure that humans felt similar tensions when electricity was first harnessed. Each significant advancement feels like the dawn of a new era, making us question what we know about our present.
While we may not understand how to properly wield this newfound power, we do know that it’s not magic. It may feel like magic at times – exchanges with ChatGPT are darn near insightful – but we know the truth, deep down. This is why it’s shocking when someone tries to anthropomorphize AI just a little too much. It’s just a little too perfect, just a little too polite, just a little too…inhuman. Everyone else can see that something has gone wrong.
The good news? Humans are the best solution to our felt need for imperfection.
We have good days, we have bad days, we get lost in thought, we tell dumb jokes, we make someone laugh when we know they want to scream. We live our lives for any number of reasons that influence others around us. We sacrifice what we have to help one another, even if we don’t know them personally.
We live, we love, we suffer loss.
Then we die.
It is appointed and destined for all men to die once
— Hebrews 9:27 (AMP)
Imitating humanity is not something new that arrived with the dawn of AI. It’s not at all wrong to portray a likeness. But we must draw the line when attempting to fill in blanks that were meant to be left void.
So where do we draw the lines?
- Representing someone truthfully has traditionally been permissible, provided that full consent was granted by that person or their estate.
- Children imagine interactions with others, but their innocence is their greatest defense, as they create with intentions centered on empathy.
- Using revisionism or fiction to knowingly alter what someone said or did has always been dismissed as inflammatory, disingenuous, or both.
On that premise, here are three examples that can be weighed and measured:
- Brown Revisited: AI-generated voices trained on Supreme Court recordings, used to “read” actual transcripts from legal proceedings. Audio only, no fictitious information, clearly used to educate and inform. (great!)
- Agatha Christie: Fully synthetic human likeness created from historical material, used to “teach” writing classes for the BBC. Family consent, using source material, but “crafted by Christie scholars” for sale. (groan?)
- Christopher Pelkey: Lip-synced photo and AI-trained voice of a deceased victim, used to read a statement in court at the sentencing of his murderer. Fabricated by his sister, used for impact, totally original writing. (gross…)
There are untold numbers of new examples being created every day. The spectrum of applications for these technologies becomes more expansive with each new idea. We can envision futures where these tools afford all types of useful applications; do we stop to ask whether we should continue forward at all?
But as we wade deeper, we are also more readily swept along with the current. Have we already crossed that line?
And in the naked light, I saw
Ten thousand people, maybe more
People talking without speaking
People hearing without listening
People writing songs that voices never shared
And no one dared
Disturb the sound of silence
— Simon & Garfunkel, The Sound of Silence
We must constantly remind our fellow humans that there are underlying systems connecting us to one another. These signals are electrical, emotional, existential. They cannot be replicated using any raw materials except humanity itself.
Creating something in our image only works if we stick a big fat sign on it to remind everyone of the “artificial intelligence” inside. What happens when we forget? What should be done to those who knowingly remove that disclaimer?
Any time we anthropomorphize an object, we risk becoming overly attached to it. I’m the worst when it comes to hollering back at the robot in my phone because it didn’t know what I meant. But isn’t part of the problem that I expected it should know what to do with my random inputs?
It’s not human. It’s just code. It’s not really listening.
The true soldier fights not because he hates what is in front of him, but because he loves what is behind him.
— G.K. Chesterton
On a hopeful note: There is a clear path forward and the answer is not to throw out AI tech. We should never wholesale abandon great research, innovations, or advancements in our shared experience as creators. Humanity will benefit greatly from these leaps forward.
Yet, we cannot continue on our current trajectory. Resurrecting the dead or animating a likeness using falsified information is just plain wrong. This is an ethical issue that betrays the very essence of what it means to be…us.
It means we must reject anything that strips dignity from our humanity, no matter how useful it seems. The long-term cost is always greater than the short-term gain unless we build upon a stable foundation.
So the next time that uncanny valley appears, lean into it. Highlight the differences and contrast it with reality. Dissent with your unique voice, your personal experience, your humanity.
Your fellow humans deserve it.