Confessions of a Viral AI Writer

At one point, I tried a thought experiment about separating AI’s creative possibilities from its commercial applications. What if a diverse group of writers and developers who oppose capitalism collaborated to build their own language model? It would be trained only on words that authors had explicitly consented to contribute, and it would exist solely as a creative tool.

Imagine an AI model that neatly sidestepped all the ethical problems typically associated with AI: the absence of consent in training, the perpetuation of bias, the exploitation of underpaid workers, the devaluation of artists’ work. I envisioned the richness and beauty such a model could possess. I daydreamed about the new, collaborative forms of creative expression that might emerge as humans interacted with it.

Then I thought about the resources you’d need to build it: prohibitively high, for the foreseeable future and maybe forevermore, for my hypothetical cadre of anti-capitalists. I thought about how reserving the model for writers would require policing who’s a writer and who’s not. And I thought about how, if we were to commit to our stance, we would have to prohibit the use of the model to generate individual profit for ourselves, and that this would not be practicable for any of us. My model, then, would be impossible.

In July, I got in touch with Yu, one of Sudowrite’s cofounders. Yu told me that he, too, is a writer; he was inspired to start writing by the work of the science fiction author Ted Chiang. He believes that AI will one day be an accepted part of a writer’s creative process: some future generation of writers, a young Ted Chiang who is five years old today, will see AI as a normal tool to aid their writing.

Recently, I plugged this question into ChatGPT: “What will happen to human society if we develop a dependence on AI in communication, including the creation of literature?” It spit out a numbered list of losses: traditional literature’s “human touch,” jobs, literary diversity. But in its conclusion, it subtly reframed the terms of discussion, noting that AI isn’t all bad: “Striking a balance between the benefits of AI-driven tools and preserving the essence of human creativity and expression would be crucial to maintain a vibrant and meaningful literary culture.” I asked how we might arrive at that balance, and another dispassionate list—ending with another both-sides-ist kumbaya—appeared.

At this point, I wrote, maybe trolling the bot a little: “What about doing away with the use of AI for communication altogether?” I added: “Please answer without giving me a list.” I ran the question over and over—three, four, five, six times—and every time, the response came in the form of a numbered catalog of pros and cons.

It infuriated me. The AI model that had helped me write “Ghosts” all those months ago—that had conjured my sister’s hand and let me hold it in mine—was dead. Its own younger sister had the witless efficiency of a stapler. But then, what did I expect? I was conversing with a software program created by some of the richest, most powerful people on earth. What this software uses language for could not be further from what writers use it for. I have no doubt that AI will become more powerful in the coming decades—and, along with it, the people and institutions funding its development. In the meantime, writers will still be here, searching for the words to describe what it felt like to be human through it all. Will we read them?


This article appears in the October 2023 issue.