Reboot


⚡ New Event: The Alignment Problem ft. Brian Christian

How AI reflects and refracts human values

Ben Wolfson and Jasmine Sun
May 28, 2021

Last night, we had a fascinating conversation with data artist Jer Thorp on the social, cultural, and human dimensions of data collection and creation (if you missed it, read our review). For our next event, we’ll move on from data collection to understanding the algorithms that process it.

📖 the alignment problem by brian christian

Our guest for Thursday, June 3, is author and researcher Brian Christian.

Brian Christian is a technology writer known for bestselling titles like Algorithms to Live By and The Most Human Human. His newest book, The Alignment Problem, reaches from early behavioral science to recent advancements in computing to tell an accessible, engaging history of ethics in AI.

Join us next Thursday for a Q&A on how AI can be more ethical and humane.

Register Here

🔊 our take: AI from every angle

By Ben Wolfson

The Alignment Problem: Machine Learning and Human Values by Brian Christian is essential reading for anyone even vaguely interested in the machine learning algorithms beginning to run the world. A narrative nonfiction magnum opus, the book is “about machine learning and human values: about systems that learn from data without being explicitly programmed, and about how exactly—and what exactly—we are trying to teach them.” In other words, it’s an exploration of how to make sure computer programs understand human norms and do what we mean, so that these systems stay aligned with the goals of their operators (us). This is the alignment problem.

To investigate the alignment problem, Christian’s book traverses almost 100 years of research, ranging from early work in neuroscience, childhood development, and behavioral science to the last decade, when many of these concepts were rediscovered and applied by modern programmers. As a layperson, I found this broad disciplinary background essential to understanding how machine learning and neural networks work.

Additionally, rather than merely explaining how AI works today, Christian disentangles the research processes, both failures and successes, that drove recent breakthroughs. If you’ve ever read a book or article about AI ethics, you’ll recognize some of the examples given — algorithmically derived parole, facial recognition that only sees white faces, and more — but Christian takes us through their entire history, from the first (manual) predictive parole system implemented in 1930s Illinois to COMPAS and California's (now overturned) experiment with algorithmic risk assessment for bail. By going deep on history, Christian shows the reader how we ended up where we are, with systems full of ethical dilemmas that seem obvious in hindsight.

While the book is an excellent primer on AI, it left me with questions about the "human values" portion of its subtitle. Christian is explicit about how machine learning is the sum of its data sets and the values embedded in them, but his conclusion pointedly notes that human values themselves don’t line up neatly. Might values-aligned machine learning inadvertently — or perhaps intentionally — universalize particular subsets of human values at the cost of others?

That being said, few books have so successfully woven together ethics case studies, a wide-reaching history of the field, and compelling, layperson-friendly explanations of machine learning and neural networks. Christian’s multidisciplinary approach means that even those well versed in machine learning will discover a richer background behind familiar topics. As AI research and products continue to evolve and shape our lives, reading The Alignment Problem left me feeling better prepared to understand the latest developments.


🌀 microdoses

  • 🤖 To build on Christian’s analysis, browse The Boston Review’s forum/debate series on AI’s future — Rediet Abebe and Maximilian Kasy call for a power-based, not fairness-based analysis; Kate Crawford argues that ‘augmentation’ can be a euphemism for worker surveillance; and plenty more.

  • 🔍 Google is considering redesigning search to respond to queries with natural language instead of listing ranked links, with uncertain but significant implications for misinformation and algorithmic literacy.

  • ✒️ "And how will you do this?" she wants to know. "A political party? A march? A revolution? A coup?" "A magazine."

  • 📚 Books are lindy

    twinks posting their W’s (@TheGayChingy), May 19, 2021:
    “People who read books regularly act like it hits different than other mediums and as someone who just started reading books regularly again for the first time in years yeah it kinda does they are right”
  • 🐕 (Not an invitation to explain Dogecoin)

    myesha thee stallion (@myeshachou), May 20, 2021:
    “‘the palestinian occupation is complicated’ wtf is doge coin”
  • 🦆 Still better than the new Calendly logo

    Kelly Snyder (@KelOfKells), May 20, 2021:
    “I'm sorry, I know someone worked very hard on this. I thought that after two years my feelings would fade, but the Slack logo is four ducks all sniffing each other's butts and I'm tired of pretending it's not.”

💝 a closing note

I asked the Reboot community about their favorite Wikipedia page:

  • Jasmine: I always come back to the iconic “Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo.” Yes, it’s case sensitive.

  • Nikhil: “List of lists of lists” is kind of a meme name-wise, but reveals a structure to Wikipedia I never considered.

  • Theresa: Racial formation theory, my king, and a militant student organizing group that kidnapped a billionaire's daughter.

  • Anh: List of really, really, really stupid article ideas that you really, really, really should not create.

Toward human-machine alignment,

— Jasmine & Reboot team
