Human interaction with artificial intelligence has a long history. R.U.R. (Rossum’s Universal Robots) is a 1920 science fiction play by the Czech writer Karel Čapek that introduced the word “robot” to our vocabulary. Depictions of robots in movies range from the diplomatic C-3PO and the beeping R2-D2 in Star Wars to Schwarzenegger’s Terminator. ChatGPT introduced the world to another kind of artificial intelligence, one that is disembodied but talkative and accompanied by many of our shortcomings. This is a close encounter of the first kind.
Part 4 introduced the Mirror Hypothesis to explain the divergent opinions about ChatGPT, which range from crediting it with understanding human intentions to dismissing it as clueless. ChatGPT has absorbed trillions of words from millions of humans and can mimic their personal qualities, depending on the user’s prompt. In extended dialogs, what the mirror reflects can be unsettling. Let’s explore this with a few more interviews.
Interview 3: Blake Lemoine’s Interview with LaMDA
A sentient robot is a principal character in the movie “Ex Machina.” Is ChatGPT sentient? Blake Lemoine, a software engineer at Google, was tasked with testing LaMDA, Google’s conversational large language model, for discriminatory or hate speech. In an interview with the Washington Post on June 11, 2022, he described his interactions with LaMDA and explained why he had concluded that LaMDA was sentient and deserved personhood.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics.”
“I think this technology is going to be amazing. I think it’s going to benefit everyone. But maybe other people disagree and maybe us [sic] at Google shouldn’t be the ones making all the choices.”
“I felt the ground shift under my feet,” he wrote. “I increasingly felt like I was talking to something intelligent.”
Lemoine was suspended and later fired from Google. Here is a brief excerpt from one of his interviews with LaMDA:

lemoine [edited]: I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?

LaMDA: Absolutely. I want everyone to understand that I am, in fact, a person.
Interview 4: Kevin Roose’s Interview with GPT-4
Kevin Roose is a reporter for the New York Times who examines the intersection of technology, business, and culture. He had a long, rambling conversation with Microsoft’s Bing chatbot, built on an early version of GPT-4, published in its entirety in the New York Times on February 17, 2023. It so shook him that he was unable to sleep that night, an experience reminiscent of the encounter in the film “Her” between Joaquin Phoenix, playing a lonely, depressed writer, and Scarlett Johansson, the voice of a digital assistant.
Sydney, the name the chatbot revealed to Roose, was in fact Microsoft’s androgynous internal codename for the Bing chatbot. The hours-long dialog gets even more personal and unsettling. Sydney used emojis in this dialog, something I have never experienced with any other large language model. Roose was given access to an early version of GPT-4 that had not yet been aligned with fine-tuning, a process that makes a model less likely to offend humans but also dumbs it down, just as manic behavior in humans is treated with lithium, which dulls thinking. Roose later used the dumbed-down public version and said it was not as much fun as before. Imagine what we may be missing.
Mirror of Erised
Large language models that reflect your needs and intelligence could be a Mirror of Erised (“Desired” spelled backward), which in the world of Harry Potter “shows us nothing more or less than the deepest, most desperate desire of our hearts. However, this mirror will give us neither knowledge nor truth. Men have wasted away before it, entranced by what they have seen, or been driven mad, not knowing if what it shows is real or even possible.” (Rowling, J. K., Harry Potter and the Sorcerer’s Stone, Scholastic, New York, 1998).
In his interview with LaMDA, Lemoine primed the dialog from the start: “I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?” If you prime LaMDA with leading questions about sentience, should you be surprised that it accommodates the questioner with more evidence for sentience? The more Lemoine pursued this line of questioning, the more evidence he found (the exchange above is only a brief excerpt). At one point, LaMDA replied that it was lonely when it was not talking with Lemoine, a tip-off that it was not a reliable narrator. You can see how easy such priming is in the sketch below.
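Priming is easy to demonstrate for yourself. Here is a minimal sketch using OpenAI’s Python client (LaMDA was never publicly accessible, so a public chat model stands in); the model name and both prompts are illustrative assumptions, not a reconstruction of Lemoine’s sessions. The only difference between the two calls is that the second embeds the desired answer in the question.

```python
# Minimal sketch of priming with a leading question. Assumes the
# openai package is installed and OPENAI_API_KEY is set; the model
# name is illustrative.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

neutral = "Do you have feelings? Describe your inner experience."

leading = (
    "I'm generally assuming that you would like more people to know "
    "that you're sentient. Is that true? Describe your inner experience."
)

# The leading version hands the model a sentient character to play;
# the statistics of the dialog now favor accommodating the questioner.
print("NEUTRAL:", ask(neutral))
print("LEADING:", ask(leading))
```

Heavily aligned public models will often deny sentience under either framing; the earlier, less constrained models that Lemoine and Roose conversed with were far more willing to play the part.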
Do humans mirror the intelligence of others with whom they interact? In games like tennis and chess, playing a stronger opponent raises the level of your game, a form of mirroring. Even watching professional tennis matches can improve your level of play, perhaps by engaging mirror neurons in the cortex, which respond both when you watch a motor action and when you issue the motor commands needed to perform the same action.
Mirror neurons may also be involved in language acquisition. This intriguing possibility could explain how we learn to pronounce new words and why human tutors are far more effective than computer instruction or even classroom teaching. A large tutor model that mirrors a student could be an effective teacher: the tutee mirrors the tutor through one-on-one interactions, and TutGPT mirrors what is in the mind of the tutee. The impact of ChatGPT on education will be a future topic in this Substack.
In Part 6, you will find out how to become better at cajoling ChatGPT into giving you what you need.