I quit my job last November to spend 2025 as a full-time writer. I’ve been publishing regular essays and podcasts on my personal newsletter—from what “AGI” means to an interview with Taiwan’s Audrey Tang. Today I’m cross-posting an essay on LLMs and the oral culture revival. If you’d like to follow my work there, I’d appreciate a subscription! (And everything’s free, but paid subscriptions are lovely—this is how I make money now.)
—Jasmine
Talk Is Cheap
Originally published as “the post-literate society” at jasmi.news.
A full year before ChatGPT launched in November 2022, OpenAI released the GPT-3 API to little public fanfare. The AI research community was paying attention, but most others hadn’t realized how quickly deep learning had advanced. Likewise, while DeepSeek’s core technical innovations were revealed in the V3 paper published last year, it took the open-source launch of R1 and its app to send shock waves through the public markets.
Why the disparity? When playing with product ideas, you want a prototype, not a mockup; it’s hard to feel the impact of a feature until you can try it firsthand.1 Tech founders neg each other by calling their products “ChatGPT wrappers”: thin skins over a preexisting technology that make no core technical innovations of their own. Yet ChatGPT, by this definition, is a wrapper itself: a user-friendly face on a mysterious technology, taking indescribable trillion-parameter language models (AI researchers like the metaphor of Lovecraft’s Shoggoth) and stuffing them behind an innocuous messaging interface. Be helpful, honest, harmless, the labs instruct their models. It’s not only the underlying intelligence: the wrapper is the thing.
I’m not here to argue about whether AI’s value will be captured at the model or application layer. That debate can be left to Twitter’s armchair VCs. Rather, I’m interested in the message inside the medium. Like all new information technologies, the rise of the chatbot is about more than usability—it accelerates a wider shift toward a dialogic style of thinking and communication. In other words, we’re seeing a 21st century oral culture revival.
Since MIT professor Joseph Weizenbaum created ELIZA in 1966, chatbots have been the default design for interfacing with AI. After ELIZA came PARRY, ALICE, and Jabberwacky. Microsoft is especially notorious for birthing (and killing) borderline maniacs: Tay the edgelord, Sydney the homewrecker. Each bot got a name and corresponding personality; though primitive, they’re still remembered for the mischief they caused.
The chat interface isn’t a given: you can imagine other plausible form factors for AI. My friend Justin suggested that AI, like the best tools, ought to feel like an extension of oneself rather than a separate thing—like a skilled designer using Figma, or a chef and their knife. We brainstormed examples: Cursor and Copilot, possibly; Granola, which I use to take meeting notes and which is as good as people say. A browser extension that auto-converts articles to your reading level; an invisible agent that makes travel plans and appointments behind the scenes. “Magic Edit” buttons on photos, AI overviews on Google Search. These uses of AI are integrated into our existing behaviors, rather than living in a separate tab.
Chatbots also refute the iron laws of Nikita Bier, Twitter’s premier product troll and dark pattern designer. “Needing a user to type with a keyboard is asking for failure,” he declared. Instead, always autofill, use checkboxes, or multiple choice; if you have a user fill in a text box, expect conversion to crash. Generations of teachers have known the same thing: nothing intimidates like a blank page.
But the more I reflect on “chat,” the more brilliant I find it. Decades of innovation converged on the conversational interface, a single blank input on a lily-white screen. Chatbot interactions aren’t one-off commands, but play ball with the user in a back-and-forth stream. The simplicity is intentional: OpenAI is not wanting for top-tier designers or big creative budgets.
Instead, chat works because we’re so used to doing it—with friends, with coworkers, with customer support. We chat to align stakeholders and we chat to find love. For many, chatting is as intuitive as speaking (maybe more so, given Gen Z’s fear of phone calls). In a 2017 blog post, one designer remarked that “Most text conversation UI's are visually indistinguishable from those of a messaging UI used to communicate primarily with other human beings.” With LLMs, the Turing test has become a thing of the past.

As a result, there’s no learning curve to ChatGPT—I think “prompt engineering” is mostly a LinkedIn bro scam. Talk to the AI like you would a human. The model was trained on tweets and comments and blogs and posts; no foreign language is needed to make it understand. Send it your typos, your style guides, your wordy indecision. The chatbot won’t judge—it’s all context, and it all helps.
For example, researchers found that saying “please” to ChatGPT gets you better results (but don’t be too deferential); if you promise to tip or say “this task is really important,” then that’ll help too. ChatGPT’s UI encourages these niceties for a reason. It asks “What can I help with?” and offers to tell new users a story. It flatters, it exclaims, it throws questions back at the end. (If only all humans could be so courteous.)
As such, LLMs leverage thousands of years of honed evolutionary instinct—the variety and nuance of human relations, distilled into something as machine-parsable as text. A new cognitive science paper explains why conversations are so easy: humans were built for dialogue, not monologue; when talking, each speaker auto-adjusts to the other’s linguistic frames. This dance propels understanding forward, gliding the pair toward a common goal. Natural language chatbots offer similar benefits: they instantly adapt to the user’s vocabulary and style, and the user adapts back in return. Anything you can think through with a person, you can now think through with AI. By making chat their flagship product, AI companies get usability for free.
Take this story, which describes how a 20-year-old engineer named HudZah persuaded Claude to help him DIY a nuclear fusor:

He filled his Project with the e-mail conversations he’d been having with fusor hobbyists, parts lists for things he’d bought off Amazon, spreadsheets, sections of books and diagrams. HudZah also changed his questions to Claude from general ones to more specific ones. This flood of information and better probing seemed to convince Claude that HudZah did know what he was doing.

HudZah built trust over many hours of conversation, much like a precocious teen wearing down a parent. Unlike a textbook or lecture, this process mirrors how knowledge has traditionally been transmitted—through apprenticeship and dialogue. Claude, like the village elder or the traveling bard, acted as a relational source of knowledge rather than a neutral tool.
Naturally, voice—and then video—is the next AI frontier. 1-800-CHAT-GPT is surprisingly popular; on Instagram Reels, women explain how to build artificial boyfriends using advanced voice mode.

I’ve admittedly gotten in the habit of rambling to ChatGPT myself.2 I’ve used voice mode to drill through mock questions before high-stakes interviews, and learn Mandarin phrases while visiting Shanghai (did you know ChatGPT speaks Chinglish?). When out with friends, I pull the app out to do quick lookups (“What kind of bird is this? Is a banana actually a berry?”). Go ahead and call me embarrassingly San Francisco, but voice feels less antisocial than Google because everyone can hear.
In an old blog post, Mills (Constable of Qualia) proposes that voice interfaces are actually the most democratic design: “Whereas the PC + GUI requires significant learning from users before they can even create objects and has done rather little to enable the creation of tools by the masses—the utility most important of all—nearly everyone can describe problems and, in conversation, approach solutions.”
It made me think of a conversation where I lamented the labyrinthine complexity of Substack’s publishing tools. “How can we balance customizability and comprehension?” I wondered aloud. One writer’s optionality was another’s confusion. But Mills assured me that the dashboard would become a thing of the past. Eventually, publishers wouldn’t need to navigate all these tabs and dropdowns. They could instead tell a Substack AI: “Move all my interviews into a new site section” or “Draft a thank you email to my most engaged subscribers, but let me review it first.” And voila! The pieces would click into place.
Mills’ suggestion sounded abstract then, but I now more clearly see what he means. Usability is not a monolith. As an example, Afra Wang explains how for Chinese elders failed by imperfect keyboard systems, the internet has always been oral:
Undeterred by her inability to read or write, [my aunt] carved out her own digital niche. Her WeChat feed is a busy tapestry of voice messages, photos, and lip-synced music videos. For her, the written word is an obstacle creatively circumvented, not an insurmountable barrier.
In college, I only passed Data Structures & Algorithms because of CS friends who took over my Zoom screen-share and live-narrated their code. Half-tutor, half-agent; I learned via demo instead of text. Now Claude has computer use and OpenAI Operator is out. More people own computers, but fewer understand them. Complexity is hidden behind a human-like interface. The Shoggoth grows, and it smiles.
Talk is back, but at what cost? In Orality and Literacy—the essential media framework for the 21st century—literary scholar Walter J. Ong draws a distinction between “oral” and “literate” cultures. He posits that speech and text are not only communication mediums, but entirely different modes of cognition and life.3
Preliterate oral cultures relied on mnemonics. Stories centered “heavy” figures like dramatic heroes and villains, and morals were imparted through simple sayings and memes. If an idea wasn’t remembered, it didn’t exist. Because nothing was recorded, speech was inseparable from the setting it was conveyed in; the speaker’s gestures, intonations, and status all played a core role. Then, after writing was invented, language focused on precision and fact: memory was no longer a problem, but an expanded vocabulary was needed to make up for lost social context. Written texts also allowed knowledge to exist separate from its author. This gave rise to key scientific values like objectivity and falsifiability; to independent thinking and Enlightenment contrarianism. Without the written word, you can’t ponder someone’s idea in quiet, alone. Writing birthed the heretic.
We are clearly returning to an oral-first culture. First, social media accelerated conversation, prioritizing instantaneity over permanence and collective consciousness over individual belief. Second, video has overtaken text on every online platform (much to my personal dismay). Most people have lost the focus to read a 1,000-word article, but have no problem listening to a 3-hour podcast. Now, LLMs are obsolescing literary precision, too. Why write concisely when people will just read a summary? Why learn a system’s mechanics if an AI can do everything for you? We no longer need to convey thoughts via structured grammars. The LLM, as a universal translator, has solved legibility.
Consider, for instance, how Trump reshaped political communication.4 He’s an archetypal figure for a new oral age; someone whose speech is instantly viral, repeatable, and impossible to forget. Fake news, build the wall, many such cases, Sad! Overeducated pundits struggled to “take him seriously but not literally”—he gets things wrong, and keeps changing his mind! But it’s about the feeling of the words, not the post-hoc transcription. The weight of Trump’s presence matters more than the facts.

2016 was a turning point for oral culture. Peak Trump, peak Twitter, the death of the text and the fact. When we all lost our minds to the collective unconscious, the birth of a worldwide “vibe” that could shift together as one. And at the risk of sounding hyperbolic: I think there is a correlation between oral culture and authoritarianism, between a less literate population and strongman leaders. When people don’t evaluate ideas separate from their speakers, power gravitates to the most magnetic voice in a room.
Trump illustrates the potency—and risks—of oral persuasion. But increasingly, the most convincing voice may be an artificial one. The AI labs know this: in OpenAI’s o1 system card, researchers note that it can exhibit “strong persuasive argumentation abilities, within the top ~80–90% percentile of humans.”5 Sometimes this is useful, like when I want ChatGPT to critique my essay drafts or explain Adorno like I’m 12. Other times, it leads us to dark places: models are known to hallucinate and even deceive.6 The system card for 4o warns how ChatGPT’s “human-like, high-fidelity voice [led] to increasingly miscalibrated trust” during user testing sessions. In a 273-page paper on “The Ethics of Advanced AI Assistants,” DeepMind’s own researchers repeatedly recommend that chatbots remind users of their non-personhood, suggesting they use less emotional language, limit conversation length, avoid advice on sensitive topics, and flag the possibility of mistakes.7
Their predictions were right, even if the product tips went unheeded. I squirmed at the New York Times story about a woman’s AI side-piece “Leo”, and ached at the suicide of a teenager addicted to Character AI. More than misinformation alone, chatbots adapt to individual users’ desires, building relational authority over time. Anthropomorphism, not intelligence, is AI’s killer feature.
The anthropologist in me bristles at ChatGPT’s first-person pronoun use; the PM says it’s the magic sauce that makes the product stick. You can never attach to a tool like you can a person. As the n+1 editors wrote in their 2011 ode to Gchat, “The [chat] medium creates the illusion of intimacy—of giving and receiving undivided attention.” Is (artificial) attention all we need?
But herein lies the paradox of modern technology: Silicon Valley has always excelled at making things people want, but “what we want” is never a fixed, easy thing. There’s what I want right now versus what I want in the long run, what I want for myself versus for society at large. I want a large fries and three hours of TikTok brainrot and an AI boyfriend that lets me win every debate. As products improve—across food, entertainment, media, and more—it gets easier to satisfy these short-term wants. It’s almost a market inevitability: the more advanced our technology, the more base our culture.
Yet the great achievement of human civilization is transcending our most animalistic impulses—devising systems that empower the better angels of our nature. I might not want to work out today but I’ll feel better tomorrow; vigilante justice is cathartic but due process works better. I spent over $200 having various chatbots pen this essay for me, but in the end, I researched and wrote it myself because I wanted to learn.8 Some experts support regulations and product tweaks to mitigate risks—chatbot disclosure laws, proof of personhood, alignment training, etc.—but we’ll equally need a new personal ethic for living alongside AI, an alertness to how new technologies shape our minds for better and worse.
I’m not “anti-LLMs,” in the same way that I’m not anti-candy or anti-Twitter or anti-airplanes. Banning a new technology won’t stop people’s desires to be entertained, finish work faster, and hear what they want. Talk is cheap, thinking is hard. That’s a thousand-year-old truth, and it’ll live a thousand more.
Jasmine Sun is the cofounder of Reboot and a writer and technologist. You can follow her independent work at her newsletter, jasmi.news.
Reboot meets our readers at the intersection of technology, politics, and power every week. If you want to keep up with the community, subscribe below ⚡️
🌀 microdoses
I made a timeline of how “AGI” as a concept has evolved (and retreated) over time (ft. vibe-coded beige microsite with serif font). TLDR—nobody knows, nobody has ever known, so it’s whatever’s convenient.
A gorgeous essay on “failed cities” discourse. SF is back baby!!!
Okay, maybe SF is not back until they learn to install speed bumps so good Samaritans don’t have to get them from Amazon instead.
Chinese manufacturers are now TikTok influencers.
Talk to a human today,
—Jasmine & Reboot team
This was really interesting, and closely related to two different things I've been thinking about recently:
1. The cultural feedback loop that will shape acceptable use of LLMs is in high gear, and it is really hard to predict where we end up.
I see so much experimentation with voice interfaces, talk at e.g. a recent Disability Tech conference on how to use LLMs to help people with dyslexia interact with a text-heavy world, etc.
But I also see push-back: people reacting negatively to text that seems LLM-authored; more and more of my friends (30-ish professionals in Copenhagen, Denmark, I guess) taking up reading, like... fiction, sometimes after having given it up for years. Because it feels real in an unreal world.
2. It certainly seems that a cultural feedback loop was missing from the Turing test. Through the process of anthropomorphization, we seem ready to accept each new breakthrough as having finally made it: finally seeming human. But a few months later, we learn how to break that new version of LLMs, the magic fades, and we push it back into firmly not-Turing-test-passing territory: firmly machine, not human-like.
That is: as we learn how to interact with new technology it seems less human. Kind of like how we keep thinking we must have reached the peak of realistic video game graphics, and have every 5 years since I was 15.
Passing the Turing test through novelty seems much easier than still passing it a year later, after it has thoroughly interacted with and shaped our culture.
"... but we’ll equally need a new personal ethic for living alongside AI, an alertness to how new technologies shape our minds for better and worse."
The idea of a "personal ethic" in the context of AI is so helpful. I work in the children's online safety and digital wellbeing space and do lots of education and consulting with schools and families, and this is such a great way of articulating that this is a need - and I'd say, a must.