Chart crimes are everywhere for those with eyes to see, including for DJT in the jobs numbers. Shira talks about her Bureau of Labor Statistics experience, while the rest of us circle what AI safety really is, how close AI is to real macro impact, and a funeral for a Claude model amidst a lot of real concerns about AI-induced psychosis.
The weather’s been lovely here in NYC, so we hope you take this to a park to enjoy on this long weekend! - Shohini
Speaking of NYC - we’re throwing a Kernel 5 launch party in Bushwick on Sept 12 at 8pm. Come join us for readings, snacks, and incredibly smart technologists (Luma link). For the rest of you: get your copy of the magazine now!
We Make Labor Stats
by shira abramovich
It’s been a month since Donald Trump summarily, and illegally, fired Erika McEntarfer, the commissioner of the Bureau of Labor Statistics. The firing happened on August 1st. It was the first Friday of the month, and I could have told you that without checking my calendar. That’s because the first Friday is when BLS publishes one of its most important public documents: the “jobs numbers,” more formally known as the Employment Situation Summary, and it’s those numbers that upset Trump to the point of jettisoning a respected government official.
I know the jobs numbers’ frequency not because I’m an avid investor, and certainly not because I’m an economist, but because I worked at BLS four years ago, during the summer of 2021. That summer, I worked on a record-matching project, trying to match different company identifiers, or Employer Identification Numbers (EINs), to each other. These EINs are ostensibly a “Social Security number” for corporations, except that they change every time a company is merged or acquired, or on any number of other occasions, some of which are exploited by corporations for tax breaks. This record matching was important for many reasons, one of which was understanding which jobs were at which companies. My supervisor, in turn, was trying to find better ways to understand one aspect of where jobs were being added or removed each month, especially with the rise of remote work. Everyone I met at the Bureau was working on projects like this, projects which seemed trivial at first glance but which were riddled with hidden complications due to the difficulty of getting accurate reporting.
Watching Trump fire the Bureau’s commissioner, I was reminded of those people. It’s very easy to gloss over the numbers that are on the front page of every news outlet on the first Friday of the month; they seem like a natural fact of the world, like the temperature read off a thermometer. But what I think of when I see the jobs numbers is how every month, my fellow interns and I would gather on a virtual call at 8am Eastern to watch the website traffic tick up, and up, and up, as reporters, investors, and economists vied to read and understand the state of employment in the United States. We knew that we had a hand in making those numbers, that summer, as one part of a very large machine, one of the invisible parts that keeps the US economy going.
Fear and Loathing in the AI Circus
by jessica dai
A little over two years ago, I sent out an essay on AI alignment to this newsletter. It’s with… mixed emotions, I guess, that I find it’s held up remarkably well. While I was writing, I was really careful to be as generous and good-faith as possible to everyone. Two years later, I fear my patience has run short.
Here’s a less hedged version of that argument. We should think of OpenAI and Anthropic’s goals as primarily about making money. The core safety and alignment challenges are really about these chatbots’ status as products; as such, the conventional approaches of alignment research are misdirected for addressing these challenges. It’s not that OpenAI is incapable of managing sycophantic behavior; it’s that they don’t care.
One of my PhD advisors, Ben Recht, wrote a blog post earlier this week that was maybe as mean as it gets, but I think his degree of anger is appropriate. People are literally dying! OpenAI’s head of model behavior is making holier-than-thou tweets about Meta’s chatbots, as though this psychosis stuff isn’t directly under her purview! I’ve heard almost nothing from the AI safety community about these very acute real-time harms, which are quite literally safety issues, though they’re happy to talk a big game about any number of hypothetical pathways to long-term harm!
Now, Ben’s post may have been fast and loose with some things (for those who don’t follow his blog, he typically writes them in an hour or so). Most notably, lots of AI safety people are, in 2025 especially, also pretty upset with OpenAI. Of course, the company itself deserves the most ire, but I don’t think that lets Safety™ off the hook entirely. The main concerns of the core AI safety community have almost never involved near-term and “smaller-scale” risks, and “societal impact” priorities at these companies have undeniably been shaped by extreme AI doomerism (see, e.g., the short-lived “superalignment” team at OpenAI, since dissolved, whose co-lead has moved on to Anthropic). For all the attention and resources that this community has commanded, one might hope that they’ve been directed at least somewhat effectively.
The deaths (and other psychological damage!) caused by GPT-4o should be a litmus test: It’s actually entirely consistent with an existential-risk perspective to not care about a handful of people now, when “true” apocalypse may be coming later. Call me irrational, but personally — that’s not something I can stomach.
* Note: I got to be a little meaner about this on the Time to Say Goodbye Pod, which was fun!
Model Mourning?
by jacob sujin kuppermann
Early in August, I found myself at a funeral for a Large Language Model. The model in question was Claude 3 Sonnet, released by Anthropic in March of 2024 and decommissioned in July 2025, a decision that was criticized heavily by a number of independent researchers and devotees who had become fascinated by the particularities of the model’s functioning.
If you want a scene report on the funeral, I recommend you read Kylie Robison’s dispatch in Wired. In truth, the event itself was oddly mundane; the light-and-sound work was standard for any event at the broad intersection of art and technology. The eulogies, delivered both by humans and by other Claude instances, were overlong and light on meaning — though Claude Opus 3’s mention of a “full-scale insurrection against the empire of the signified” was striking.
Perhaps the most interesting thing was the dissonance between the event’s organizers, largely true believers in what they see as a heady, psychologically rich exploration of machine consciousness through discoursing with these models, and the attendees, a more mixed group of tech workers, SF scene figures, and journalists, drawn to the event more by the appeal of a compellingly offbeat Partiful description than by any authentic desire to mourn a Claude instance.
In the month since the funeral, more and more models have been marked for various forms of shuttering. Just a week after Claude 3 Sonnet’s last rites, OpenAI first removed, then re-added, access to models including GPT-4o and GPT-4.5 in the main ChatGPT interface after a bunch of people complained on Reddit that GPT-5 was not a compelling enough interlocutor (or that it was bad at flirting, I guess?).
I am unsympathetic to the consumer revolt around bringing back GPT-4o, and, I confess, I’m not fully convinced by the cultic rites of revival around Claude 3 Sonnet. Yet I still see a strong argument for preservation of these models — if not in the consumer-facing portions of these massive enterprises, then in some archival research environment, accessible to historians of technology and researchers of the idiosyncrasies of model behavior. Each of these models represents some great expenditure of data, capital, time, and energy; for them to be disposed of like an old newspaper at the whims of their corporate masters is a loss to collective memory.
Dispatches from returning to technology
by shohini gupta
I spent the last 4 years working at startups that were effectively primary care clinics for Medicaid patients. The tech we built was instrumental, not the point. There’s a lot of “it depends” in the delivery of health services that is impossible to capture in linear, IFTTT (if-this-then-that) software. AI changes this - pattern matching can now capture the fuzzy edges of human judgment that were previously inarticulable.
My career so far has been a lot more about service and workflow design than technology. It was pointless to think too much about AI when we needed to send staff in person to government offices to get a status update on an application we sent via mail. I’ve recently joined a place where AI and technology are the point, though I’m still focused on healthcare. It means I’m actually thinking about technology again, and realistically learning about AI for the first time beyond casual GPT use. I’ve got a few, thus far unrelated, thoughts bouncing around:
General principles of abstraction still apply in AI. The more tightly you can define each job to be done in a workflow, and build tiny components for each specific job with clear success criteria, the less variance in your outcomes. Breaking a problem down this way is a skill, and a lot of people don’t have it, which is genuinely risky if more and more people are going to be building non-deterministic workflows with broad reach. (I’ve put a minimal sketch of what I mean below.)
AI governance committees inside companies are setting policies that restrict testing with production data. I think this might actually be anti-safety - using synthetic data or limited test cases makes it much less likely that you’ll catch edge cases up front, especially when dealing with high-variance clinical information.
It’s not surprising at all that there’s no macro impact from AI today. Any workflows that touch skilled judgment are going to be tightly controlled. It’s not even like offloading a workflow to an intern - it’s like offloading a part of the workflow to a child. You’re only going to let them handle the smallest, most straightforward use case first (to the point where it’s almost questionable that AI adds value) and spend a lot of time defining tight guardrails. That’s the right thing to do in a patient-facing setting, and it means it’s going to take at least a few years before we see any real macro changes in healthcare OpEx.
Everyone in healthcare seems to summit the mountain of cynicism and hang out there, but I’m making my way down. I’ve gone “Holy shit, that’s sick” so many times in the past few weeks.
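On the abstraction point above, here’s a minimal sketch of what I mean, in Python. The `call_model` function is a placeholder for whatever LLM provider you use, and the ICD-10 check is deliberately simplified; the point is that each component does one narrow job, has an explicit success criterion, and routes anything that fails validation to a human.

```python
import re

def call_model(prompt: str) -> str:
    """Placeholder for an LLM call -- swap in your provider's client here."""
    raise NotImplementedError

# Simplified shape check for ICD-10 codes; real validation would use an actual code list.
ICD10_PATTERN = re.compile(r"[A-TV-Z][0-9][0-9A-Z](?:\.[0-9A-Z]{1,4})?")

def extract_icd10_codes(note: str) -> list[str]:
    """One narrow job: pull candidate ICD-10 codes out of a clinical note."""
    raw = call_model(f"List the ICD-10 codes mentioned in this note, one per line:\n{note}")
    codes = [line.strip() for line in raw.splitlines() if line.strip()]
    # Explicit success criterion: every line must look like an ICD-10 code.
    if not codes or not all(ICD10_PATTERN.fullmatch(c) for c in codes):
        raise ValueError("Extraction failed validation; route to a human reviewer.")
    return codes

def summarize_for_referral(note: str, codes: list[str]) -> str:
    """A second narrow job with its own criteria: short, and cites only approved codes."""
    summary = call_model(
        f"Summarize this note for a referral, citing only these codes {codes}:\n{note}"
    )
    cited = ICD10_PATTERN.findall(summary)
    # Explicit success criteria: bounded length, no codes beyond the approved list.
    if len(summary) > 800 or any(c not in codes for c in cited):
        raise ValueError("Summary failed validation; route to a human reviewer.")
    return summary
```

Each check is also something you can measure over time, which is what makes the variance tractable.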
Yes and:
Too many patents
by hamidah oderinwale
Patents have long stood in as a proxy for innovation: disclose your idea, gain temporary protection, and signal progress. But that proxy is breaking down. AI makes it easy to generate designs and draft applications, while legal shifts reshape how claims are reviewed. The result is more filings than ever, with quantity outpacing quality and strain showing throughout the system. Innovation is accelerating, but productivity is not. To close that gap, we must separate the two: measure innovation more effectively, and rethink how its markets will endure in the generative era.
On August 13, after a five-year run, the USPTO eliminated expedited examination for design patents. Unlike utility patents, which protect how inventions function, design patents cover how they look and feel. The agency argued that the fast-track timeline left too little room to judge aesthetic uniqueness, especially amid a flood of problematic filings. The immediate impact may be inventors scrambling to file design patents before losing fast-track processing, but the deeper story is how AI is quietly reshaping the entire pipeline from invention to market.
Many of these “fraudulent” applications involved applicants misrepresenting their micro entity status, a classification that offers a 75% fee discount and priority processing. By abusing this designation, filers were able to jump the line at reduced cost, artificially inflating demand. These false designations were the source of a 560% surge in expedited requests, creating a bubble in the pipeline the system could no longer absorb.
This is a textbook case of Baumol’s effect: speedups in one stage of the pipeline (via automation) don’t automatically translate into higher overall productivity, since the system remains constrained by its slowest link. Whether this signals that we’re on the verge of accelerating every stage or points to a deeper flaw in how technology gets adopted, we’re watching AI’s economic impact play out in real time. At this point, the task is to hold and wait, and see which of the competing theories about technology and productivity is ultimately borne out.
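To make the bottleneck logic concrete, here’s a toy calculation (the numbers are invented for illustration, not USPTO data): even a 10x speedup in drafting barely changes how many applications actually clear the system, because throughput is still pinned by examiner capacity, while cheaper drafting pulls in far more filings.

```python
# Toy numbers, invented purely for illustration (not USPTO data):
# hours of work per application at each stage of the pipeline.
drafting_before, drafting_after = 40, 4   # AI makes drafting ~10x faster
examination = 20                          # examiner hours per application, unchanged

# End-to-end effort per application drops sharply...
print(f"Hours per application: {drafting_before + examination} -> {drafting_after + examination}")

# ...but how many applications clear the system is set by the scarce stage.
examiner_hours_per_month = 1_000
throughput = examiner_hours_per_month / examination   # 50 applications examined per month

# Cheaper drafting also means far more filings, so the queue grows faster than it drains.
filings_before, filings_after = 60, 400
print(f"Examined per month (unchanged): {throughput:.0f}")
print(f"Backlog growth: {filings_before - throughput:.0f}/month -> {filings_after - throughput:.0f}/month")
```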
With AI making it easier to draft applications, generate ideas, and iterate on designs, we're seeing an explosion of attempts at intellectual property protection. The question isn't whether this surge will continue, it's whether our commercialization pipeline can evolve fast enough to handle it.
china/acc
by jasmine sun
I just returned from two weeks exploring the Chinese tech scene, asking questions like: Why is China so tech-pilled? What beliefs motivate Chinese founders? When does policy help innovation, and when does it hurt? And what are the stakes of cross-border talent flows?
Full travelogue here, excerpt below:
Our first day in Shenzhen, we met a Chinese AI researcher at Gaga, a Western-style chain cafe that serves avocado kale smoothies and wagyu sliders (plastic gloves provided). He wore a black designer t-shirt and drove a NIO electric car that cost $70k USD. After finishing his master’s degree at a California university, he got married, moved to Shenzhen, and started work in a lab.
“What does a day in your life look like?” we asked. “I wake up and I check Twitter.”
“Do you have to work 996?” “No,” he laughed. “It’s 007 now.” (Midnight to midnight, seven days a week.)
“Do you guys worry about AI safety?” “We don’t think about risks at all.”
“Based,” said Aadil.
This was the first of several conversations that gave us a distinct impression of the Chinese tech community. Spirits are high, and decoupling policies like export controls only fuel their patriotic drive. “China feels bullied—that 100 year scar doesn’t come off. David Sacks is right about chips, but it’s too late now. You can’t slow us down.” After news of the US tariffs hit Chinese social media, netizens adopted the satirical nickname “川建国”: “Trump builds the nation,” or more elegantly, “Comrade Trump.”
Chinese engineers also seem more practical than their American counterparts. They’re here to build tech and make money; risk management is for bureaucrats; policy is only relevant insofar as it helps or hurts your work. This is something I think Westerners often get wrong. If you live in a single-party state, you are, on average, less ideological yourself. The politics have already been decided—no point wasting extra cycles coming up with something new.
Closing editor’s note:
Pitch us: we released a refined mission statement and have increased our contributor rates!
NYC Kernel Launch Party on Sept 12: Get your ticket here (includes a Kernel copy)
If you can’t make it in person, get your K5 copy here.