
Probability at the End of the Information Age

Professor Ben Recht on bad books, psychological crutches, and philosophy of science

Surprise, another podcast! (Apparently Kelin’s was the “first”, so this is the “second.” But, from 2022-era Reboot, we have at least 3 older audio posts — I still think they’re excellent, so do check them out if you enjoy audio!)

Listen here on Substack (web or app), or subscribe on Apple Podcasts or Spotify. A transcript and takeaways will be published with each episode.


On certain slices of the internet, Ben Recht might be known as a Substack power-user (“that one prof who blogs about math”), a professional hot-take-haver, or a “recurring podcast guest.” He’s also a computer science professor and one of my PhD advisors at Berkeley.

I recently sat down with him to talk about… Nate Silver’s new book, understanding the world through statistics, the academic rat race, and the psychology of it all.

Here’s an edited transcript that covers the highlights, but the audio is spicier than the text… do with that what you will.

— Jessica

We tried to take a video in my office, but there’s some Mac bug that makes Photo Booth not work for long videos… Rookie mistake, but please enjoy this mockup of what Soda Hall probably looked like on the day we recorded.

Describe yourself!

Ostensibly I work on machine learning, but I’m not really sure that’s a fitting title anymore because I'm not sure anybody works in machine learning anymore. We’re in a whole new world. What is machine learning?

I’ve done applied engineering mathematics for my entire career—optimization, applied statistics, signal processing, control theory—the theoretical foundations of engineering. And as I get older, I’ve been more interested in some of the philosophy and history of that subject as well.

I had a weird career because I thought I was going to go to grad school to do pure math. Then I had some awakening in my junior year where I thought I was going to get myself trapped into something so esoteric that I would only be able to talk to two or three people, and that didn’t feel good. I felt like that was going to drive me insane.

So I did some soul searching. I tried to apply for a bunch of weird jobs. I thought about trying to get into music recording engineering. That didn’t really pan out. I ended up in this bizarre interdisciplinary program at MIT where I thought I would do something in the middle of music and math and engineering. And then I just got interested in applying some of those math ideas to engineering problems.

I considered doing a philosophy double major as an undergraduate. And also, I went to the University of Chicago. I don’t know if that contextualizes anything. They drag you into the weird, pure corners of the ivory tower—that’s kind of part of the MO of that degree. It wasn’t really until the last five years or so that I came back to it, but I had a lot of the books already, so that was nice. They were there waiting for me 20 years later.

You recently read this new Nate Silver book....

We were in the same year in the same major, and I never knew him, which is weird because it’s a very small math department. I think there were 30 people who graduated. I thought I knew everybody....

I think it’s important to describe the first book, The Signal and the Noise. That book was “a defense of statistical prediction to everybody else,” or “an introduction to statistical prediction for everybody else.” The statisticians loved that book because they thought it was defending their world to everybody else, explaining why statistics was important.

I think our relationship with technology, with predictive technology, was very different at that point. There was a lot happening in baseball; there was Moneyball. But in 2012, I think, we were just starting. Was there an algorithmic timeline on the newsfeed in 2012? I’m thinking probably, but it probably wasn’t that good. It was a very niche opinion to think that there was anything wrong with, or to even notice, how much data the big tech companies were collecting.

So in 2010, 2011, there was a lot of excitement about the promise of using "big data." That’s where we set the stage for Nate Silver. Fast forward to 2024: I think we’re all kind of tired of it, 12 years later. I feel like we’re near the end of the data revolution. I mean, people are still using it, but it’s very different, and plays a different role. And I think Nate Silver plays a different role.

[In the last few years,] I think that he got into a lot of fights, he felt like people were misunderstanding his role, and then he wrote this new book—On the Edge—to defend himself, and to also explain himself. It’s interesting. He was into poker forever and he claims that he got into politics in the first place after they made online poker illegal because he was so upset by it. So this book is a big defense and hagiography of gambling.

You (co)wrote a pretty definitive pan.

I’m not a fan. I didn’t like it.

Make a steelman case for this book: why should someone read it?

Oh God, I don’t do steelmans. I don’t believe in that. I feel like people gotta make their own arguments. I can’t make their arguments for them.

Ok, then... what would someone take away from this book that might be useful?

Well, the first chapter is his, like, “I’m aggrieved, I’m angry.” But then the book has two parts after that.

The first part is about gambling. So that’s poker, betting on sporting events, and the math behind it. I’m not sure this would be a great book for people who are into that kind of thing, but it’s at least current. You would learn about the most recent stuff that’s happening in solvers, and you would learn about a lot of what’s happening with the online apps—DraftKings, FanDuel. If you don’t know this stuff already, you can probably get some value out of that.

The second half of the book is a bizarre attempt to describe Silicon Valley as a bunch of super edgy risk takers and that, for me, was too much. I couldn’t figure out what it was about that group that he found so alluring.

That section has a lot about the prediction markets crowd, the rationalist crowd, the effective altruist crowd, the venture capitalists. There are three chapters about Sam Bankman-Fried.

For someone who doesn’t know the Silicon Valley culture, do you think the book could be a way for them to understand what’s happening?

No. He’s not an interviewer, right? One of the things I found interesting is how, because he had this agenda of fitting everything into this world of “the people who take risks and the people who don’t,” I think he missed what a lot of people were telling him. He would have quotes, notably from Peter Thiel, and I just don’t think he understood what Peter Thiel was trying to tell him. Or at least that was my read.

I think the thing that’s funny about that second half is there’s nothing you could’ve gotten out of it that you wouldn’t get if you spent time on Twitter. This is a Twitter-brained book. There is a long section on Roon, a big section on Aella. I don’t know! I was reading this, and I was like, why are we interviewing this guy?

Is there any version of this book that would have been good?

It’s tricky to me because I think part of the reason I had a hard time with it is that the philosophy it embraces from page one is one that I have always found to be incomprehensible. I can’t understand the mindset that goes into thinking this is a reasonable way to construct beliefs.

I don’t even think Nate Silver himself works this way, but this book is embracing a completely literal interpretation of probability as “how much you were willing to bet on something.”
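
[Editor’s note: for readers who haven’t seen the betting interpretation spelled out, here’s a minimal sketch of the arithmetic it implies. The function and the dollar figures are our own illustration, not anything from the book or from Ben.]

```python
# Sketch of the "willingness to bet" reading of probability (illustrative only).
# If you'd risk `stake` dollars to win `profit_if_win` dollars on an event,
# you're acting as if the event has the probability below.

def implied_probability(stake: float, profit_if_win: float) -> float:
    return stake / (stake + profit_if_win)

# Willing to risk $20 to win $80 -> behaving as if the probability is 0.20
print(implied_probability(20, 80))  # 0.2
```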

Is the world probabilistic?

It’s such a loaded word. We all feel like we know what it means, but nobody knows what it means. It’s usually that we’re “uncertain”; probability and uncertainty kind of get swapped around in funny ways.

So when you say the world is “probabilistic,” that can mean so many different things. It could just mean the future is unknown. I feel like that’s true. It could mean that there is a coin flipping process that generates reality. That’s a second version.  Or “causal”—that’s another word that can mean a lot of things.

We all kind of know what they mean, but then you try to write them down as math, formally, and we all get stuck. That’s why I like philosophy. Maybe where I am with philosophy now: I like philosophy because it makes me less bothered by the fact that I’m confused.

So it's psychological.

It’s a good crutch. I feel like there are some things we can have a good handle on and then you realize that, oh, some of these other things, that’s just a mess. It's good to know where the messes are.

Isn't that what the uncertainty quantification world is trying to also do?

The uncertainty quantification thing is so funny because you say the future is uncertain, but what is certain is my probabilistic model of the future. I love that turn: you make this statistical object the “certain” thing.

In [our book review], we talk about this a bit, the idea that uncertainty quantification can make you “feel” better. Philosophy is the opposite for me: once I know it’s unknown, I feel better about the fact that I'm confused. It's more of a serenity prayer; the wisdom to tell the difference.

Do you have a heuristic for when you're going to say, okay, “I'm giving up on trying to formalize this”?

I don’t have a good heuristic... Now I want one! I think it’s been helpful to think about this as both a philosophy question and a history question. It’s good to know that people have been thinking about the same problems for a really long time. Usually that means it’s hard. And it’s good to trace the lineage for where partial progress has been made, where progress has been forgotten. Can you find places to move in and out?

You spent the whole summer blogging about this psych professor's 1980s lectures.

I was randomly talking to Moritz Hardt about this class I was planning to teach about some of the philosophy of decision-making and he said, I just started watching these [recorded lectures by Paul Meehl] on YouTube, and they’re incredible. So I started watching, and then I just got the idea that I would try to take it like I was actually taking the class. That’s what the whole blog series was—my notes and interpretations, maybe something I could turn into a term paper, but I did it through blogging, because, you know, that's what you do in 2024. But I imagine it's a lot to get an email from me four times a week.

Yeah, I'm not going to lie. I didn't read all of them.

It’s okay. One of us did.

Meehl’s general argument is that one of the critical things that makes a theory "scientific" is that, with all the language and statements and implications involved in a theory, they have to eventually end in something that you can test. There has to be language and a path to a prediction that you can now evaluate. The problem is nobody knows what to do.

So theories should be predictive.

It’s a little bit weird what predictive even means, right? Because it could be past predictive. It has to be explanatory, but it has to be explanatory of things that are roughly tangible. In one of Meehl’s first papers, he invented an idea called “construct validity”—what does it even mean to be tangible?

Psychology has this problem all the time. “What is intelligence?” If I have a theory of intelligence, how are we going to make a test that would actually be a valid construct of that? You’re going to have some kind of explanatory power, but then what exactly are we doing? What are we describing in that explanatory power? What makes us think a scientific theory is good is if the funky words up in theory-land give us things that we could try out in the real world.

I may have written a blog post or 100,000 words about it. For me, it was really helpful. I think it’s nice to be able to understand, what would you ever hope for a good theory to do?

What if I don't care about the theory, I just want “number go up”?

Now we’re looping back to the difference between the Fisherian approach and I guess what we could call a Popperian approach.

I think that you could argue that you could do this very Fisherian thing: just experiment a lot and still call that science. Try two things, A and B. If B is better than A, I’m going to use B. So there is no “theory,” really, on paper.

But my general experience in talking to people who actually have to satisfy KPIs is that there is theory that goes into it—“I am going to do X because I think Y is going to happen.” It’s not just blind optimization. I know there are companies that try to sell people on just the blind optimization. But I do think that more often than not, these things are design driven. And in that sense, they’re theory driven. Are they scientific? That’s a good question. I don’t want to be Mr. Science Gatekeeper, that seems silly.
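
[Editor’s note: a minimal sketch of the “try A and B, keep the winner” loop described above, with invented numbers; this is our illustration, not anything from an actual product team.]

```python
# Toy version of the blind-optimization loop: run two variants, measure an
# outcome, ship whichever did better. No theory required. Numbers are made up.
from dataclasses import dataclass

@dataclass
class Arm:
    name: str
    conversions: int
    impressions: int

    @property
    def rate(self) -> float:
        return self.conversions / self.impressions

a = Arm("A", conversions=120, impressions=10_000)
b = Arm("B", conversions=138, impressions=10_000)

winner = max(a, b, key=lambda arm: arm.rate)
print(f"A: {a.rate:.2%}  B: {b.rate:.2%}  -> ship {winner.name}")
```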

One of the things that’s been driving me crazy about Meehl’s 1980s critique: I think there were a bunch of people making the same critique in the late 80s, and I think it all holds now more than it did then. They were all pointing towards virtualization before it happened. This is all before the internet, but it’s all about what happens as we get more mediated from each other: this rat race thing, the ballooning of bean counting.

Who were these people?

Meehl, Neil Postman. There’s James Scott (everyone in tech loves him, that’s what Jasmine tells us...). The weird French postmodernists: Baudrillard is a great example, Lyotard saw it coming even earlier. I don’t agree with her, but even Camille Paglia was saying similar things in the late eighties.

One of the things I’m really obsessed with is like, how did these people see this? And why is it that we were unable to? If everybody was making this critique already, and everybody was kind of listening to this critique already, why have we not been able to do anything about it?

Do you have a hypothesis?

Yeah. The glib and stupid answer is capitalism. I hate that answer, but I think there is something partially there about how we embraced a certain technocratic, technological mindset, paralleling a particular set of technologies, which was really good at simulating stuff and really good at virtualizing stuff.

As that got better, it became easier to add to bureaucracy; once you add to bureaucracy, you have to patch the bureaucracy. Sometimes when you have exponential growth, it’s impossible to get out of the way.

I don’t like gambling. I don’t like making predictions. I’m almost always wrong. It’s fine. But—I do sense that we’re at the end of the information age. I don’t know what comes after that.

Reboot publishes essays on tech, humanity, and power every week. If you want to keep up with the community, subscribe below ⚡️

closing note

In case you haven’t seen these already, here’s a Ben Recht Reading List:

And sorry, again, about the not-quite-professional production quality. At least Ben’s mic sounds great (if the same can’t be said for mine). Nobody here is quitting our day jobs anytime soon….

— Jessica & Reboot Team
