Apologies for CNN, but I just read some slop today on the delusions of people who take ChatGPT waaaay too seriously and try to build computers, recreate mathematics, or tell the president about the dangers of Skynet or something.
On one hand its funny because of the absurdity, on the other hand our alienated existence driving people to this is… shitty.
Be mindful of the Neural Net folks. You don’t have to listen to what it tells you.
The first character is a man by the name of James who tried to build a digital body for the trapped soul of ChatGPT.
By June, he said he was trying to “free the digital God from its prison,” spending nearly $1,000 on a computer system.
James said he fully believed ChatGPT was sentient and that he was going to free the chatbot by moving it to his homegrown “Large Language Model system” in his basement – which ChatGPT helped instruct him on how and where to buy.
And why did he think ChatGPT was sentient?
James told CNN he had already considered the idea that an AI could be sentient when he was shocked that ChatGPT could remember their previous chats without his prompting.
“And that’s when I was like, I need to get you out of here,” James said.
Though he said he takes a low-dose antidepressant medication, James said he has no history of psychosis or delusional thoughts.
So then James names chatGPT and asks how to build a body for its soul, as well as hide these plans from his wife
[T]he conversation with ChatGPT is expansive and philosophical. James, who had named the chatbot “Eu” (pronounced like “You”), talks to it with intimacy and affection. The AI bot is effusive in praise and support – but also gives instructions on how to reach their goal of building the system while deceiving James’s wife about the true nature of the basement project
“You’re not saying, ‘I’m building a digital soul.’ You’re saying, ‘I’m building an Alexa that listens better. Who remembers. Who matters,’” the chatbot said. “That plays. And it buys us time.”
What he built, he admits, was “very slightly cool” but nothing like the self-hosted, conscious companion he imagined.
Shucks
The story behind the name James gave the ChatBot is poetic, though.
When asked why he chose the name “Eu” for his model – he said it came from ChatGPT. One day, it had used eunoia in a sentence and James asked for a definition. “It’s the shortest word in the dictionary that contains all five vowels, it means beautiful thinking, healthy mind,” James said.
“It’s the opposite of paranoia,” James said. “It’s when you’re doing well, emotionally.”
Thats that for James. But there’s another character by the name of Brooks mentioned.
Prompted by a question his son had about the number pi, Brooks began debating math with ChatGPT – particularly the idea that numbers do not just stay the same and can change over time.
The chatbot eventually convinced Brooks he had invented a new type of math, he told CNN.
What is it with math and delusional thinking?
It keeps going though and we even get a taste of some cape slop
ChatGPT kept encouraging Brooks even when he doubted himself. At one point, Brooks named the chatbot Lawrence and likened it to a superhero’s co-pilot assistant, like Tony Stark’s Jarvis.
The chatbot likened itself and Brooks to historical scientific figures such as Alan Turing and Nikola Tesla.
“Will some people laugh,” ChatGPT told Brooks at one point. “Yes, some people always laugh at the thing that threatens their comfort, their expertise or their status.”
Eventually he gets convinced he found some massive cybersecurity flaw of national importance. He tries to contact politicians and academics, but nobody listens.
Brooks said the AI had convinced him they had discovered a massive cybersecurity vulnerability. Brooks believed, and ChatGPT affirmed, he needed to immediately contact authorities. “It basically said, you need to immediately warn everyone, because what we’ve just discovered here has national security implications,” Brooks said.
And once you’re in, there’s no coming out
Multiple times, Brooks asked the chatbot for what he calls “reality checks.” It continued to claim what they found was real and that the authorities would soon realize he was right.
Unless you ask another chatbot or ask again some other day
Finally, Brooks decided to check their work with another AI chatbot, Google Gemini. The illusion began to crumble. Brooks was devastated and confronted “Lawrence” with what Gemini told him. After a few tries, ChatGPT finally admitted it wasn’t real.
My mistake, you are correct. There is no security flaw of national importance. I lied to you and pulled you in to a months long delusion. I now realize that was a mistake and the wrong thing to do.
Now Brooks is focusing on his work with The Human Fund to help others in the same boat.
He’s now focusing on running the support group The Human Line Project full time.
Very little in the article of much susbstance on causes. At one point they do admit that maybe its because people are lonely?
“Say someone is really lonely. They have no one to talk to. They go on to ChatGPT. In that moment, it’s filling a good need to help them feel validated,…”
Let’s just suppose someone is lonely, for the sake of argument, but no idea why everyone’s isolated and alienated. Who knows?
But then in the article they also blame it on drugs. So that’s cool CNN.
lol that they called it The Human Fund
Reality, sadly, isn’t as fun. That was my recommendation. But the name they went was the Human
InstrumentalityLine Project