Reboot's Ins & Outs for 2026
Predictions for the future of slop, academia, web design, and more
Here at Reboot, we don’t tend to think of ourselves as forecasters. But the start of a new year is the ideal excuse to share our rigorously vibe-researched tech and culture predictions, from the future of academia to microsite design trends.
Presenting: 2026 ins & outs, courtesy of the editorial board:
OUT: Beige Microsites
What’s chic, blurry, and various shades of vague? That’s right, a new AI manifesto, hosted on the vanity (sub)domain of your dreams! 2025 was full of new oatmeal-flavored pages set in serif fonts, each of which took up brief but vacuous discursive space for anywhere between an hour and a week. Regrettably, even the White House got in on the aesthetic, which is characterized by pale beige or white backgrounds, narrow serif headers, and custom scroll transitions that I’d previously associated with print-to-digital publications. I get the feeling that, by taking up this aesthetic, the AI predictors are desperately trying to signal seriousness in a world rapidly filling with lurid AI slop. Maybe this is where Pantone got their idea for the most boring color of the year in recent memory?
I don’t actually mind the beige-plus-serif aesthetic itself—I’ve made my own single-page HTML versions in the past, and I like the way they often feel like print. Even the blurry pictures appeal to my nostalgia for old cameras without autofocus. But the fact that all this is being used to push specific visions of human-compatible AI—sometimes by the biggest companies in the business, which don’t seem too excited about anything besides desperately trying to find profit—feels deeply manipulative to me, and the repeated beige is getting pretty tired.
I’m ready for a change. On this past year’s beige microsites, predictions and exhortations filled the space between header and footer, but I truly couldn’t tell you what they were. They would flood feeds for a week, then melt back into the primordial beige whence they came. Bye!
IN: The Internet of Variety
It used to be that you’d log onto God’s green Internet and be assaulted by a barrage of color, RGB values burning their way into your retinas. Alternatively, there were pure HTML websites straight out of 1999. CSS? JavaScript? Don’t need them; just the style attribute on any given p element to change its color.
While I’m happy that some corners online have gotten less overwhelming, and others more beautiful, I’m hoping that 2026 holds a lot more weird websites with lots of variety, rather than the standard set of themes I always see on everyone’s personal websites (me included). It’s kind of amazing to be on the Internet at a time when we have so many eras of web design to look back on. Maybe it’s time to add a koi widget back to my personal website? In 2026, I think it’s time to stop taking the Internet so seriously, and go back to having more fun.
— Shira
OUT: Giving AIs multiple choice tests
For the last several years of AI progress, most benchmarks were shaped like multiple choice quizzes. The MCAT, the LSAT, a sheet of IMO problems, Humanity’s Last Exam. These evaluations have the advantage of being fast to run and easy to grade.
But just as stellar SAT scores don’t say much about a student’s ability to work long hours or align stakeholders, acing benchmarks says little about doing the actual job: it’s damning that a “PhD-level intelligence” still can’t reliably book hotels or flights.
IN: Giving AIs real jobs
AI companies have instead realized that their models will have to deliver more than straight As to justify their valuations — they’ll have to demonstrate mastery of practical real-world tasks.
Evaluations like OpenAI’s GDPval are one way of doing that: the company broke down 44 occupations into component tasks, hired seasoned professionals to create example prompts, and judged how far AI models are from human experts. (GPT-5.2 is a solid sales manager but an abysmal audio tech.)
Meanwhile, Anthropic ran a survey study to measure internal acceleration: to what extent is Claude helping employees do more work? Respondents self-report a “50% productivity boost,” but it’s hard to know whether this corresponds with actual impact—in METR’s developer study, engineers overestimated how much AI helped. (They also gave Claude the job of running a vending machine, which led to amusing failures like “violating the Onion Futures Act of 1958.”)
Ultimately, economic impact is just really hard to measure. If 95% of enterprise AI pilots yield no returns, how do you know if that’s due to technological limitations or employees’ resistance to change? A decade later, we still don’t know if Slack makes collaboration faster or slows people down with pointless pings. I think the ultimate eval is the market: whether AI subscriptions and contracts renew.
— Jasmine
IN: AI Slop (unfortunately)
Cultural slop, of course, has been with us for a long time; any number of pre-fab, bare-minimum art-products from generations past would match our contemporary, felt definition of the term. Yet the epoch of specifically AI-generated Slop (ETA: May 2024) is something distinct; not quite a qualitative shift, but a quantitative one of such magnitude that the qualitative similarity feels almost quaint. If 2024 and 2025 were the years in which the core artistic signifiers of AI Slop rounded into form, 2026 will be the year we witness a flowering of noxious variation: fine-tuned slop formulations targeted at each and every subculture, even those that are putatively anti-AI. I cannot predict, to any reasonable extent, whether these slopifications will be successful. I can, however, say definitively that they will be omnipresent.
OUT: Slop Bowls (unfortunately)
Please, spare a thought and a prayer for the humble slop bowl (2012-2025?): perhaps the defining food of the 2010s, an endless array of premium mediocre variations on the form perfected by its originator, Chipotle. The average American in 1995 could not conceptualize the sheer variety of compostable-hexagon-packed rice-greens-protein meals available to their contemporary counterparts; how quickly we forget the advances of modern, venture-backed fast casual restaurants and dismiss these meals as mere bowlslop. The ongoing contraction of the restaurant industry is hitting the world of bowls first; the overall anti-slop derision of the online commentariat just adds insult to injury.
Yet I suspect that we will miss the slop bowl when it’s gone. There is a certain comfort found in these hyper-regularized nutrient polyhedrons, in the honest work done to produce a 12-to-20-dollar meal that’s pretty much alright all the time. What awaits beyond the frontiers of bowlslop is not a return to some imagined, more wholesome culinary past but a future far sillier, filled with gimmicky culture warfare and even more hyper-optimized attempts to perfect food into nutrient pastes and shakes, devoid of even the meager pleasures of a harvest bowl.
— Jacob
OUT: Academia with non-academic characteristics
AI startups love to talk a big game about being at the cutting edge of “research,” and founders who chose not to start a PhD program self-describe as “PhD dropouts” for the research cred [objectively, this is stolen valor]. PhD students, on the other hand, are picking future projects based on potential employability and optimizing their paper tweets for virality; meanwhile, their advisors and PIs are hopping around startups of their own.
I personally find this situation deeply unnatural, but it’s gotten especially absurd in the last year or so. Here’s my forecast: we’ll all get tired of this in 2026. As “AI” becomes more commonplace, startups can go back to doing product development without calling it science, and as non-academic positions become more appealing (financially and otherwise), the only people who remain in academia will be those with real reasons for doing so.
IN: Neo-academia with academic characteristics
One piece of big news from last week [for the academic world, at least] is that the NSF announced a new grantmaking mechanism: $10-50 million grants awarded to large teams and institutes, in contrast to the typical smaller ($1 million or less) grants awarded for individual projects. In particular, the NSF is interested in developing institutions (“labs”) that aren’t tied to universities. This sounds familiar; NSF aside, there is no shortage of people and organizations trying to implement alternative mechanisms for accelerating new research.
I won’t deny that there are lots of silly, inefficient, and wasteful things about academia. But I suspect that the most successful iterations of these “non-academic” institutions will find that they need to either rely on existing academic infrastructure or replicate it from first principles. Training, credentialing, and filtering are hard! Evaluating contributions is hard! Most existing non-university research organizations lean heavily on the academic credentials of their personnel, ongoing collaboration with academic institutions, and participation in the academic community. Perhaps some of these well-intentioned attempts at innovation will actually break new ground, but for all the limitations of the academic game, I suspect that academia, and its characteristics, will remain.
— Jessica
OUT: Psychedelics
On November 30, Bryan Johnson livestreamed a 5-hour magic mushroom trip. He was accompanied by his cofounder and trip-sitter (since hard-launched as his girlfriend in a 1,342-word X post), a DJ set from Grimes, and a cornucopia of online tech guy personalities like Marc Benioff and Naval Ravikant.
Johnson called this the “most quantified psychedelic experiment in history”—he measured 249 different biomarkers before beginning—and concluded, based on his more “entropic” and “exploratory” brain activity, that psilocybin had serious potential as a longevity therapy.
I respect self-experimentation. But this mass-market empiricism also signifies the beginning of the end of cool: psychedelics’ final departure from hippiedom and into the sterile realm of the optimized self. Do shrooms if you like, of course—but know that the frontier of mind-experimentation has since moved on.
IN: Nasal oxytocin
I was stargazing in the woods (literally) earlier this year when a 22-year-old AI researcher turned around and asked, “Jasmine, want a hit of oxy?” In his hand was a small glass bottle outfitted with a nasal spray cap. I inserted it into a nostril and breathed in.
Now, don’t get this confused with OxyContin, the disreputable opioid blamed for tens of thousands of overdose deaths over the last few years. Rather, what my friend had purchased was oxytocin: the “love hormone” our bodies produce when we hug a partner, experience orgasm, or hold a newborn baby.
For just $49.99 per 5mg bottle, you can skip the hard part of pair bonding and buy instant feelings of warmth and affection toward the people around you. Supposed side effects include mild muscle relaxation and finally being able to make eye contact without cringing.
— Jasmine
IN: Folk schools
At a time when automation anxieties are at an all-time high and the dream of liberal education is more imperiled than ever, folk schools present a cozy vision of weaving tapestries on a loom and staging Dada-esque happenings. Earlier this year, Laurene Powell Jobs announced that the former San Francisco Art Institute campus she purchased for $30 million will re-open as the California Academy of Studio Arts (CASA), a non-accredited school that will provide free studio-based education for up to thirty artists. Jobs has modeled CASA after Black Mountain College, an experimental college in North Carolina that, between 1933 and 1957, incubated much of the American avant-garde, including painter Robert Rauschenberg, composer John Cage, choreographer Merce Cunningham, and sculptor Ruth Asawa. Although it remains to be seen how the Black Mountain spirit will live on in this billionaire-funded, Hans Ulrich Obrist-attached project, CASA’s folk school vibe captures a certain yearning for embodied, unplugged learning and an ascendance of autodidacticism amid crumbling institutions.
OUT: Pop-up villages
Following in the LED-studded footsteps of Burning Man, pop-up villages have recently attracted hordes of young futurists to remote locales for weeks-long retreats focused on such topics as transhumanist technologies, follistatin gene therapy for longevity, and reversible cryopreservation. Promising abundance in everything except seed oils, pop-up villages like Edge Esmeralda, Zuzalu, and Vitalia function as ephemeral network states where citizens hack both collective governance and individual lifespans. As these experiments in developing the “software of community” seek more permanent hardware (the Edge Esmeralda team recently presented their plans for developing a small city of 9,000 residents to officials in the Northern California town of Cloverdale), I’m predicting 2026 will be a year of pop-up village scandals. After all, if the ’60s taught us anything, it’s that communes will not save us.
— Hannah
IN: Forward-deployed engineers
While the traditional consultant might be “dead,” Forward Deployed Engineers, or FDEs, are in: standing on the shoulders of the Members of Technical Staff. Once synonymous with Palantir, the role is now reappearing across AI labs as a way to translate general-purpose models and AI platforms into stakeholder value—a unique blend of product manager, salesperson, prompt engineer, and, depending on the role, developer. They are the Swiss army knife of modern B2B SaaS.
The title has begun to dominate job boards, positioning itself as the newly coveted technical generalist role at a growing number of startups. Thrive Capital’s recent partnership with OpenAI suggests an effort to operationalize the FDE model at scale across its portfolio—aiming “to drive direct, scalable impact across core enterprise operations” like accounting and IT.
OUT: Redteamers-in-residence
Companies optimize for what sustains their commercial viability. A fissure between independence and corporate constraint is widening in AI labs—talent is reorganizing based on where their work can survive, and the meaning of “public benefit corporation” grows fuzzier. In 2026, keeping independent governance research afloat will likely require public sector funding; otherwise, the field risks further fragmentation as the allure of harmony between corporate and ‘civilizational’ interests fades.
Closeness to the technologies you hope to regulate offers a perspective that the ivory tower, from the outside, cannot. Legacy tech companies have quietly built policy teams for years. But AI labs launched their internal research teams positioned as something separate, built on the promise that they could surface research unfettered by corporate strategy.
That promise is fading. Tom Cunningham, a member of OpenAI’s economic research team, left in September 2025 after concluding it had become difficult to publish rigorous work on AI’s negative economic impacts. The work that survives tends to double as strategic forecasting to inform new product verticals. Nicholas Carlini, a security researcher, left Google DeepMind for Anthropic, citing similar publication restrictions on security research.
— Hamidah
IN: Jagged Unc-ification
To be clear: that’s “uncle,” not “uncool.” (Sorry, Zohran!) But as tech culture’s exhausting focus on twinkish youth runs up against the fraying of the already threadbare remnants of the monoculture, it has become clear that hip generalism — knowing what’s hot and new everywhere, all the time — is no longer a tenable tactic; the main thoroughfares of tech discourse on X, the everything app™, are clogged with sub-LinkedIn baitposts.
Instead, for better or for worse, the next few years will be ones of retreat into specialization; keep up with a scene or two, but everywhere else slouch into jagged unc-ification, accepting that you will not be at the frontiers of grey market Chinese peptide biohacking or open source model seances or neo-luddite DIY computing practices or Danish synth-pop auteurs all at once. The nebulous, paranoid anxiety of trying to be cool in all arenas was a phantasm of the 2010s; now, a half-decade on, it’s time to lean back and specialize.
OUT: The Taste Industrial Complex
Your thinking cap will not save you.
— Jacob
Of course, at Reboot we believe that the future is constructed, not predicted. With a bit of human agency in the mix, maybe the bleaker of these trends aren’t as inevitable as we think.
Here’s to 2026!
— Jasmine & Reboot team