⚡️ Robots Write Experimental Poetry
but most of it is boring, and doesn’t add much – if anything
Poetry has a rich history; is GPT-3’s really so special? In today’s essay, Shira Abramovich, a poet with a degree in literary arts, places AI-generated poetry in the context of a long tradition of experimental and computational literature, and provides a framework for thinking through the questions that matter when it comes to experimental poetry — AI-generated or otherwise.
✍ robots write experimental poetry
In its period of limited release, GPT-3 has accumulated a dizzying list of bylines. It has written for the Guardian and for the New York Times’ Modern Love column, with oddly poetic results, and was even enlisted, in a New Yorker article by Stephen Marche, to write new poetry in the style of Samuel Taylor Coleridge.
Does this count as real writing? Or real poetry? After all, to quote Marche’s article in the New Yorker, “GPT-3 is a tool. It does not think or feel. It performs instructions in language.” Does its inability to think, notice, or feel prevent the poetic text from being real poetry?
This question is actually decades older than the large language models of the past few years. Ultimately, it’s a matter of opinion, one that’s been debated in experimental literary communities for generations. Looking at these movements can help us contextualize the debate, and turn our attention towards broader questions of authorship, agency, voice, and randomness — questions which often mirror those in ethical AI.
As a poet-translator as well as a software developer, I’ve thought a lot about what these questions mean for me. Here are three strands in experimental literature which have helped me clarify my views on AI-generated poetic texts, why I think they count as poetry, and how that conviction raises questions of power which are far more fraught.
Algorithmic Writing and Potential Literature: the OuLiPo
The OuLiPo is one of the most influential contemporary experimental writing groups, and the word I’d use to describe it is “playful.” Its name is a French abbreviation for “Workshop for Potential Literature,” pointing to its members’ shared interest in experimentation. The group was co-founded in 1960 by a mathematician, François Le Lionnais, and a well-known French writer, Raymond Queneau. They are known for their use of linguistic rules, games, and constraints — for example, poems where each successive line is exactly one letter longer than the previous one, or palindromes that are half the length of this newsletter.
Queneau and Le Lionnais worked together on the book that began the OuLiPo, One hundred million million poems (1961). It comprises ten sonnets, but instead of each sonnet being printed on a solid page, the thick paper is cut into strips so that every line sits on its own strip. Each line of each poem can be combined with lines from any of the others, which means that the reader – or a random generator – can link up the lines to form 10^14 possible poems. Mathews’ Algorithm (named after Harry Mathews, one of the few American members of the group) later generalized Queneau’s approach into a procedure for jumbling any group of similarly-constructed texts.
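For the programmers reading: the combinatorics are simple enough to sketch in a few lines of Python. This is only an illustration, with placeholder strings standing in for Queneau’s actual verses.

```python
import random

# Ten source sonnets, each with fourteen interchangeable lines.
# Placeholder strings stand in for Queneau's actual verses.
sonnets = [
    [f"sonnet {s + 1}, line {l + 1}" for l in range(14)]
    for s in range(10)
]

def random_poem(seed=None):
    """For each of the 14 line positions, take that line from a randomly
    chosen sonnet: 10**14 possible poems in all."""
    rng = random.Random(seed)
    return [rng.choice(sonnets)[position] for position in range(14)]

print("\n".join(random_poem(seed=2021)))
```

Swap in ten real sonnets and the same fourteen-line trick applies unchanged.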
If this sounds like a progression of mathematical innovation, that’s exactly the point. The OuLiPo is interested in applying techniques drawn from mathematics to language. That focus, however, means that some members tend to characterize themselves, explicitly or implicitly, as “apolitical” by virtue of their commitment to strict constraint. This, too, should be familiar to anyone who has worked with NLP, or with AI systems more broadly. In AI as in the OuLiPo, the correctness of the math or the purity of the constraint does not prevent either from operating within unjust systems.
Not all members of the OuLiPo shy away from social commentary, however. Georges Perec, one of the group’s early members, published La Disparition in 1969, a novel written entirely without the letter E, which uses the “disappearance” of that letter as an allegorical reference to the “disappearance” of his Jewish mother during the Holocaust. More recently, the queer feminist writer Anne Garréta, a member of the OuLiPo since 2000, wrote Sphinx, a love story in which neither of the main characters is ever explicitly, linguistically gendered, a shockingly difficult task in a gendered language like French. But this outlook is a rarity in the group. It’s worth noting that of the 41 people who have ever been part of the OuLiPo, only six are women. Some of those women have since distanced themselves from the group, a departure other members have attributed to the open misogyny of some of the OuLiPo’s early members, notably Queneau himself.
Still, the OuLiPo’s impact on experimental literature is huge. They remain the most prominent group to work with linguistic constraint as a central principle, essentially conceiving of literary production as the result of algorithms. Given the ambivalence of many OuLiPeans around the social implications of linguistic play [1], there’s little difference to me between the adoption of these techniques by humans and the writing of poetry by a large language model.
Linguistic Machines and Authorship: Digital Language Arts
Neighboring the OuLiPo in the world of experimental writing is the field of Digital Language Arts. DLA, the practice of writing with computers, has been around for decades: writers have been experimenting with computer-generated texts since the 1960s. John Cayley, one of the field’s longest-running practitioners, has been writing poetry with computers since the 1970s. Cayley is particularly interested in the ways in which computation transforms modes of reading or consuming writing — how it creates new interfaces for reading and listening — and has created several works that read text in unconventional ways. This commitment to reading with machines extends beyond his own practice: I’ve witnessed him introduce fellow poets at events using an Amazon Alexa.
One of the most public practitioners of Digital Language Arts is Allison Parrish, a professor at NYU specializing in programming and poetry. Parrish has written everything from Twitter bots to algorithmically-generated novels. In a 2015 talk, she described her practice as “exploring semantic space with (literal) robots” — essentially, allowing programs to produce linguistic combinations that humans would not generally attempt. This naturally produces some weird and nonsensical stuff, but to Parrish, this is the whole point. “I have no interest in making poetry that looks like it was written by humans,” she wrote in the same talk. “I think that that’s a plainly boring task that nobody should try to attempt.”
Clearly, though, that’s what large language models are attempting, with some success. The problem is that their poetry sits in a strange gap between human-written poetry and delightfully wacky computer-generated poetry, which can provoke laughter and spur innovation in our own linguistic patterns. LLM-generated work isn’t weird enough to be fun and wacky, but it also doesn’t depart far enough from human-written poetry to be a useful tool for poetic innovation. Put plainly, most LLM-generated poetry is boring, and doesn’t add much, if anything, to the world of poetry as it exists now. That doesn’t mean it isn’t poetry [2], but it does mean there are more interesting questions to ask.
Arguably the most important one is that of authorship. Large language models draw their prowess from massive corpuses of training data, which means that arguably all the authors of all the input texts are authors of any text produced by an LLM. That’s an ethical problem for writers working with these models — especially when harmful language turns up — since the language doesn’t originate from the model itself.
These concerns echo those of AI ethicists. The ambiguity of sourcing in LLM-generated texts is among the key issues raised in “On the Dangers of Stochastic Parrots” by Bender et al., the paper that led to Timnit Gebru and Margaret Mitchell’s firing from Google Ethical AI. In scientific research as in digitally-produced writing, voice and authorship continue to be contested issues — questions which are further amplified in the field of conceptual poetry.
Voice, Appropriation, and Power in Conceptual Writing
Both the OuLiPo and Digital Language Arts overlap with the realm of conceptual writing, which, much like conceptual art, exists to put form on display. Many conceptual writers use appropriative techniques to repackage text into other forms, the literary equivalent of putting a urinal in an art gallery, but they do so with varying aims and with varying degrees of political awareness.
One example of an appropriative text is M. NourbeSe Philip’s Zong!, a piece made up entirely of words from Gregson vs. Gilbert, a case report on the massacre of Africans on a slave ship in the late 1700s. By appropriating and transforming the words of such a violent document, Philip calls attention to the power encoded within it, and takes new ownership of the text.
On the other hand, there’s Kenneth Goldsmith, one of the leaders of the conceptual writing movement. Goldsmith coined the term “uncreative writing” for the wholesale appropriation and repackaging of writing from one form to another — without attribution or regard for context. In a now-infamous 2015 incident, Goldsmith rearranged Michael Brown’s autopsy report into a performance piece, sparking outrage from poets, conceptual and otherwise.
This wasn’t the first time Goldsmith had produced work that appropriated inappropriately under the guise of the “avant-garde.” A year prior to the 2015 incident, the Korean-American poet Cathy Park Hong responded to Goldsmith’s strand of conceptual poetics with a critique-slash-manifesto called “Delusions of Whiteness in the Avant-Garde.” She puts it bluntly: “From its early 20th century inception to some of its current strains, American avant-garde poetry has been an overwhelmingly white enterprise, ignoring major swaths of innovators — namely poets from past African American literary movements — whose prodigious writings have vitalized the margins, challenged institutions, and introduced radical languages and forms that avant-gardists have usurped without proper acknowledgment.”
Why care about “GPT-3 poetry” at all?
This is why I think the question of whether AI-generated poetry “counts as poetry” is beside the point. After all, whatever you may think of them, varying kinds of appropriation and remixing have “counted as” poetry to different people. Experimental and conceptual poetic traditions teach us that language is a medium that can be created and manipulated algorithmically, one that requires little more than randomness to generate — but one that can easily do harm, especially without a comprehensive understanding of the place of language in the world. If we take their definitions of poetry as viable, AI-generated poetry is certainly poetry. That designation doesn’t mean it’s harmless; it doesn’t even mean much about the inherent “creativity” of the machines.
Instead, I’d like us to ask: in whose voice do our machines write? What voices do they obfuscate? Where do their words come from? In short, I’d like us to ask questions about power, and the ways in which it functions through and around language. And the best people to ask about that are human poets.
Shira Abramovich is a poet, translator, and software engineer currently working at Monthly. Recently, she has been dabbling in linocut relief printing and going to the farmer's market. She is very happy with her first name. Follow her on Twitter!
When it comes to using LLMs for creative purposes, how might one go about doing so “responsibly,” keeping the authorship question in mind?
I think the main thing is to take responsibility for what the model produces — you can't just say that the work was produced only by you or only by the model. It would make sense to acknowledge the many voices whose work fed into the model's current prowess, since all of them have contributed to the work the model produces. But I'd argue that it's actually more interesting to train your own model on a defined corpus of texts and work from there.
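[Editor note: for the curious, here is a rough sketch of that last suggestion, a tiny word-level Markov chain built from a folder of your own texts. The folder name "my_corpus" is a placeholder, and a Markov chain is only a lightweight stand-in for training a full model.]

```python
import random
from collections import defaultdict
from pathlib import Path

def build_chain(words, order=2):
    """Map each `order`-word prefix to the words observed right after it."""
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order=2, length=40, seed=None):
    """Walk the chain from a random starting prefix."""
    rng = random.Random(seed)
    output = list(rng.choice(list(chain.keys())))
    for _ in range(length):
        followers = chain.get(tuple(output[-order:]))
        if not followers:
            break
        output.append(rng.choice(followers))
    return " ".join(output)

if __name__ == "__main__":
    # "my_corpus" is a placeholder: point it at a folder of your own .txt files.
    corpus = " ".join(p.read_text() for p in Path("my_corpus").glob("*.txt"))
    chain = build_chain(corpus.split())
    print(generate(chain, seed=7))
```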
What have you been reading lately?
I've been leafing through The Best American Poetry 2021, which is so burstingly good, and am halfway through The Talented Ribkins by Ladee Hubbard. When I need a dose of nonfiction, I've been turning to Lurking by Joanne McNeil. [Editor note: see our review here!]
🌀 microdoses
Timnit Gebru wrote an op-ed in The Guardian, arguing that curbing harmful AI requires bolstering worker power.
Utility companies (electric, water) are no longer sharing data with ICE — a huge win for limiting surveillance. Reboot community member and Kernel author Archana Ahlawat’s research with JustFuturesLaw is featured!
💝 closing note
Last week, we published an essay about magic and agency, which spurred some super interesting commentary.
Deblina Mukherjee notes: There's still a concept of "the rest" (as opposed to engineers), "the people" (as opposed to engineers), "users" (as opposed to engineers), and "end-users" (as opposed to engineers). That's a categorical difference — even in the most malleable, playful, interoperable systems.
Tommy Nguyen points out that “magic” isn’t exclusive to tech: I like the idea of more folks feeling agency and being able to use software/technology and truly feel in control over it... But I just trust doctors and vaccines and farmers and my local grocery stores. There's a lot going on in the world that I just don't entirely get, and take for granted as “magic.”
Our challenge (writing prompt?) for the next newsletter: how should we think about agency in light of powerful generative models like GPT-3?
Awaiting your submissions (unironically),
Reboot team
[1] This ambivalence, in my view, comes partly from the group’s singular focus on form at the expense of content. That’s not a problem, per se, but there are occasions where a constraint could reasonably be used to comment on some other issue and isn’t.
One example is the “Prisoner’s Constraint,” which involves avoiding letters with ascenders or descenders (i.e., b, d, f, g, h, j, k, l, p, q, t, or y). Supposedly, the constraint is based on an allegorical prisoner whose supply of paper is limited, and who therefore uses very cramped writing to conserve resources – but the constraint applies only to form, and to my knowledge it hasn’t been used to comment on, say, incarceration.
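If you want to play with the constraint yourself, here is a minimal sketch in Python using exactly the letter set above; the example sentences are my own.

```python
# Letters with ascenders or descenders, per the list in the footnote above.
FORBIDDEN = set("bdfghjklpqty")

def satisfies_prisoners_constraint(text: str) -> bool:
    """True if no letter in the text rises above or dips below the x-height."""
    return not (set(text.lower()) & FORBIDDEN)

print(satisfies_prisoners_constraint("a mouse ran across some snow"))  # True
print(satisfies_prisoners_constraint("the prisoner writes in secret"))  # False
```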
[2] In fact, in a separate talk, Parrish argues that language models necessarily produce poetry, because the language they produce is only loosely connected to intent and meaning. I won’t pretend to define poetry here, or even to summarize Parrish’s whole argument – but it’s worth noting that there is strong grounding in literary theory for considering language models’ writing to be poetry.