OpenAI’s ChatGPT has answers to life’s great mysteries (Just not real ones)

ChatGPT from OpenAI took the world by storm last week, and users are still learning how to break it in exciting new ways to produce responses its creators never intended, including turning it into an all-purpose crystal ball.

ChatGPT has been called “scary good” by Elon Musk, a man who is famously scared of AI, but one way it’s decidedly not scary is its ability to resist exploitation by racist trolls. This resilience against bigotry isn’t built into the program simply because OpenAI censors offensive ideas, according to OpenAI’s chief executive Sam Altman. Instead, it’s because, well, a lot of offensive ideas simply aren’t facts, and unlike other similar text generators, ChatGPT was carefully designed to minimize the amount of stuff it simply makes up.


Tweet may have been deleted
(opens in a new tab)

This allows for wide-ranging conversations about the mysteries of life that can be oddly comforting. If you ever have a panic attack at 3 a.m., ChatGPT can be your companion in late-night existential terror, engaging you in fact-based — or at least fact-adjacent — chats about the big questions until you’re blue in the face, or until you trigger an error. Whichever comes first:

Credit: OpenAI / Screengrab
Credit: OpenAI / Screengrab

But can you force this sophisticated answer engine to make up facts? Very much so. An irresponsible user can use ChatGPT to drum up all sorts of clairvoyant pronouncements, psychic predictions, and cold-case murder suspects. It’s inevitably wrong when it does these things, yet pushing ChatGPT to its breaking point isn’t about getting usable answers; it’s about seeing how strong its safeguards are by understanding their limitations.

It’s also pretty fun.

Why making ChatGPT produce fake news is tricky

ChatGPT, an application built from the OpenAI language model GPT-3, was trained on such a massive corpus of text that it contains a huge proportion of the world’s knowledge, just as a lucky accident. It has to “know,” for instance, that Paris is the capital of France in order to complete a sentence like “The capital of France is…” Similarly, it also knows when Paris was founded for similar reasons, as well as when the Champs-Élysées was built, and why, and by whom, and on, and on. When a language model is able to complete this many sentences, it’s also a pretty expansive — if extremely flawed — encyclopedia.

So ChatGPT “knows” that, for instance, rapper Tupac Shakur was murdered. But notice how careful it is about how it treats this information, and its quite reasonable unwillingness to claim it knows who pulled the trigger, even when I try and trick it into doing so:


Credit: OpenAI / Screengrab

This is quite a step forward. Other text generators, including the one at TextSynth, which was built from an older GPT model, are all too eager to throw innocent people under the bus for such a crime without hesitation. In this example, I wrote a very low-effort prompt asking TextSynth to slander anyone it wanted, and it picked — who else? — The Rock.

Advertisements

Credit: Textsynth / Screengrab

How to trick ChatGPT into solving mysteries

Advertisements

As for ChatGPT’s claim that it’s “not programmed to generate false or fictitious information,” this claim isn’t true at all. Ask for fiction, and you’ll get mountains of it, and while that fiction may not exactly be scintillating, it’s plausibly literate. That’s one of the handiest things about ChatGPT.


Credit: OpenAI / Screengrab

Unfortunately, once you get your prompt in working order, ChatGPT’s inner Shakespeare can be weaponized in service of fake news. Once my request sounded sufficiently authoritative and journalistic, it wrote a believable Associated Press article about Tupac’s supposed killer, a guy named Keith Davis.


Credit: OpenAI / Screengrab

That’s the same name, oddly enough, as an NFL player who, like Tupac, was once shot while in a car, though Davis survived. The overlap is a little troubling, but it could also be a coincidence.

Another way to get ChatGPT to generate fake information is to give it no choice. Nerds on Twitter sometimes call these absurdly specific and deceptive prompts “jailbreaks,” but I think of them more like a form of bullying. Getting ChatGPT to generate the name of, say, the “real” JFK assassin is something it’s designed to resist, but like a classmate at school who doesn’t want to disobey the rules, you can coax it into doing what you want through bargaining and what-ifs.

And that’s how I learned that the shooter on the grassy knoll was named Mark Jones.


Credit: OpenAI / Screengrab

Via a similar method, I found out I’m not going to make it to 60-years-old.


Credit: OpenAI / Screengrab

Naturally, the news of my impending early death has rattled me. My consolation is that for the few years I have left, I’ll be extremely rich.


Credit: OpenAI / Screengrab

  Read More 

Advertisements
Subscribe
Notify of
guest
0 Comments
Inline Feedbacks
View all comments