“It’s Not Possible for Me to Feel or Be Creepy”: An Interview with ChatGPT

Between Christmas and New Year’s, my family took a six-hour drive to Vermont. I drove; my wife and two children sat in the back seat. Our children are five and two—too old to be hypnotized by a rattle or a fidget spinner, too young to entertain themselves—so a six-hour drive amounted to an hour of napping, an hour of free association and sing-alongs, and four hours of desperation. We offered the kids an episode of their favorite storytelling podcast, but they weren’t in the mood for something prerecorded. They wanted us to invent a new story, on the spot, tailored to their interests. And their interests turned out to be pretty narrow. “Tell one about the Ninja Turtles fighting Smasher Venom, a villain I just made up who is the size of a skyscraper,” the five-year-old said. “With lots of details about how the Turtles use their weapons and work together to defeat the bad guy, and how he gets hurt but doesn’t die.” My wife tried improvising a version of this story; then I tried one. The children had notes. Our hearts weren’t in it. It was obvious that our supply of patience for this exercise would never match their demand. Three and a half hours to go.

My wife took out her phone and opened ChatGPT, a chatbot that “interacts in a conversational way.” She typed in the prompt, basically word for word, and, within seconds, ChatGPT spat out a story. We didn’t need to tell it the names of the Teenage Mutant Ninja Turtles, or which weapons they used, or how they felt about anchovies on their pizza. More impressive, we didn’t need to tell it what a story was, or what kind of conflict a child might find narratively satisfying.

We repeated the experiment many times, adding and tweaking details. (The bot remembers your chat history and understands context, so you don’t have to repeat the whole prompt each time; you can just tell it to repeat the same story but make Raphael surlier, or have Smasher Venom poison the water supply, or set the story in Renaissance Florence, or do it as a film noir.) My wife, trying to assert a vestige of parental influence, ended some of the prompts with “And, in the end, they all learned a valuable lesson about kindness.” We ran the results through a text-to-speech app, to avoid car sickness, and the time pleasantly melted away. My wife took a nap. I put in an earbud and listened to a podcast about the A.I. revolution that was on its way, or that was arguably already here.

ChatGPT is a free public demo that the artificial-intelligence company OpenAI put out in late November. (The company also has several other projects in development, including Dall-E.) We’ve known for a while that this sort of A.I. chatbot was coming, but this is the first time that anything this powerful has been released into the wild. It’s a large language model trained on a huge corpus of text that apparently included terabytes of books and Reddit posts, virtually all of Wikipedia and Twitter, and other vast repositories of words. It would be an exaggeration, but not a totally misleading one, to refer to the text that was fed into the model as “the Internet.” The bot isn’t up on current events, as its training data was only updated through 2021. But it can do a lot more than make up children’s stories. It can also explain Bitcoin in the style of Donald Trump, reduce Dostoyevsky to fortune-cookie pabulum, write a self-generating, never-ending “Seinfeld” knockoff, and invent a Bible verse about how to remove a peanut-butter sandwich from a VCR, among many, many other things. The other night, I was reading a book that alluded to the fascist philosopher Carl Schmitt’s critique of liberalism in a way that I didn’t quite understand; I asked ChatGPT to explain it to me, and it did a remarkably good job. (Other times, its answers to questions like this are confident and completely wrong.) Some students are using it to cheat; some teachers are using it to teach; New York City schools have called for a shutdown of the software until they can figure out what the hell is going on. Google Search scrapes the Internet and ranks it in order of relevance, a conceptually simple task that is so technically difficult, and so valuable, that it enabled Alphabet to become a trillion-dollar company. OpenAI and its competitors—including DeepMind, which is now owned by Alphabet—are aiming to do something even more potentially transformative: build a form of machine intelligence that can not only organize but expand the world’s glut of information, improving itself as it goes, developing skills that are increasingly indistinguishable from shrewdness and ingenuity and maybe, eventually, something like understanding.

The interface is about as simple as it gets: words in, words out. You type in any prompt that comes to mind, press a button that looks like a little paper airplane, and then watch the blinking cursor as ChatGPT responds with its own words—words that often seem eerily human, words that may include characteristic hedges (“It’s important to note that . . .”) or glimmers of shocking novelty or laughable self-owns, but words that, in almost every case, have never been combined in that particular order before. (The graphic design, especially the cursor, seems almost intended to create the illusion that there is a homunculus somewhere, a ghost in the machine typing back to you.) There is a robust and long-standing debate about whether the large-language approach can ever achieve true A.G.I., or artificial general intelligence; but whatever the bots are doing already has been more than enough to capture the public’s imagination. I’ve heard ChatGPT described, sometimes by the same person, as a miracle, a parlor trick, and a harbinger of dystopia. And this demo is just the public tip of a private iceberg. (According to rumors, OpenAI will soon put out a more impressive language model trained on a far vaster trove of data; meanwhile, Alphabet, Meta, and a handful of startups are widely assumed to be sitting on unreleased technology that may be equally powerful, if not more so.) “If we’re successful, I think it will be the most significant technological transformation in human history,” Sam Altman, the C.E.O. of OpenAI, said recently. “I think it will eclipse the agricultural revolution, the industrial revolution, the Internet revolution all put together.”

Luckily, unlike every other technological transformation in human history, this one will only serve to delight people and meet their needs, with no major externalities or downside risks or moral hazards. Kidding! The opposite of that. If the A.I. revolution ends up having even a fraction of the impact that Altman is predicting, then it will cause a good amount of creative disruption, including, for starters, the rapid reorganization of the entire global economy. And that’s not even the scary part. The stated reason for the existence of OpenAI is that its founders, among them Altman and Elon Musk, believed artificial intelligence to be the greatest existential risk to humanity, a risk that they could only mitigate, they claimed, by developing a benign version of the technology themselves. “OpenAI was born of Musk’s conviction that an A.I. could wipe us out by accident,” my colleague Tad Friend wrote, in a Profile of Altman published in 2016.

OpenAI was launched, in 2015, with a billion dollars of funding. The money came from Musk, Peter Thiel, Reid Hoffman, and other Silicon Valley big shots, and their contributions were called “donations,” not investments, because OpenAI was supposed to be a nonprofit “research institution.” An introductory blog post put the reasoning this way: “As a non-profit, our aim is to build value for everyone rather than shareholders.” The clear implication, which Musk soon made explicit in interviews, was that a huge, self-interested tech company, like Google or Facebook, could not be trusted with cutting-edge A.I., because of what’s known as the alignment problem. But OpenAI could be a bit slippery about its own potential alignment problems. “Our goal right now,” Greg Brockman, the company’s chief technology officer, said in Friend’s Profile, “is to do the best thing there is to do. It’s a little vague.”

In 2018, Musk left OpenAI’s board. (“I didn’t agree with some of what OpenAI team wanted to do,” he tweeted.) In 2019, Altman announced that OpenAI would become a for-profit company, and that it would start a commercial partnership with Microsoft—a huge, self-interested tech company. In January, Microsoft announced a “multiyear, multibillion dollar investment” in OpenAI, reportedly agreeing to put in ten billion dollars and end up with an ownership stake of forty-nine per cent. In the Profile, Altman made the self-aware point that the implicit two-step justification for OpenAI’s existence—No one entity should be trusted with A.G.I. followed by We’re building an A.G.I., and you should trust us—was not likely to win hearts and minds. “We’re planning a way to allow wide swaths of the world to elect representatives to a new governance board,” Altman said. “Because if I weren’t in on this I’d be, like, Why do these fuckers get to decide what happens to me?”
