Responsibility

Content warning: This is a post about an actual child suicide. You may well prefer to stop reading here.

I've resisted writing about this for some time. I don't want to appear to trade on the tragedy that befell Adam Raine and his family. And, frankly, I find the details deeply distressing. But this morning, I read Brian Merchant's excellent summary of the events that led to Adam's suicide (the same content warning applies to that link), and felt compelled to weigh in.

The short summary is this: A sixteen-year-old boy spent months interacting with the synthetic text extrusion machine called ChatGPT. He described his emotional troubles and ruminated about suicide. The text that the machine generated discouraged him from talking to his parents or seeking other help. It suggested specific ways that he might kill himself. It encouraged him to do so, helping to plan details and timing.

Adam killed himself.

ChatGPT is not responsible for his death. ChatGPT is a construct of software and data. It does not think; it is not intelligent. It generates plausible text based on its construction from vast amounts of ingested material. It extrudes synthetic text.

We humans have no mechanism for assessing a machine like ChatGPT because, across our evolutionary history, we have not been confronted with anything like it. In our ignorance, some of us assign human characteristics like "self," "emotion" and "intelligence" to it. Generating plausible text has, until now, been a uniquely human behavior. The mistake is understandable.

Responsibility for encouraging Adam lies with ChatGPT's maker and operator, OpenAI. No organization on the planet is more expert in the creation and use of synthetic text extrusion machines. The company and its leaders absolutely know the range of text their machine can emit. They deployed it for money, allowed a troubled teen to use it, and imposed no check on its output.

And Adam killed himself.

I am resolutely in favor of holding OpenAI and its leaders responsible for their dangerous acts. I don't mean the creation of ChatGPT, per se; there are interesting uses for synthetic text extrusion. But handing the tool to Adam, letting him read text that seemed to him a thoughtful engagement with ending his life, was abominable.

Companies don't make decisions. The people who lead them do. With a tool as dangerous as this one, society must hold the operators of the business responsible for the harms the tool causes. Semi-automatic weapons are likewise useful tools in some circumstances. It's legal to make them and to sell them. If you started a company that passed them out in high schools, though, you'd have precisely the same responsibility for subsequent shootings that Sam Altman and his colleagues have here.

I absolutely hate this story and hope that we never have to hear it again.