Notes on “Among the A.I. Doomsayers” by Andrew Marantz

This came out today, and I’m in it a little bit because I attended a dinner party Katja threw for Marantz and arrived early. Mostly this is just disappointingly bad and it was a mistake to talk to him. He warned in advance “I’m going to have to talk about the people as well as the ideas” but it seems clear that he’s not very captivated by the ideas and a lot more interested in talking about the people and the fights we’re having; the overall tone of “what a bunch of weirdos” pervades the whole article. Maybe I’m just spoiled by talking to Tom Chivers.

Update: smart people disagree. Maybe I got out of the wrong side of bed this morning.

Nothing in the article really needs rebutting as such. I kept going “hm that’s not quite right” as I read it, so I thought I’d jot down some trivial bullet points.

  • Published in the print edition of the March 18, 2024, issue, with the headline “O.K., Doomer.” Great.
  • I first met Katja at the CFAR minicamp in 2012; I think I said this at the time.
  • We’re only called “A.I. safetyists, or decelerationists” or “A.I. doomers” by people who hate us. We also want AI to take humanity to the stars, we just need to not all die for that to happen. Thus “AI notkilleveryoneism”. (Update: pushback on this)
  • And no one calls “e/acc” people “boomers”. I posted to Twitter asking for a name for them, and the best candidate was “vroomers”.
  • “(Some experts believe that A.G.I. is impossible, or decades away; others expect it to arrive this year.)” Who expects it to arrive this year? For that matter, which experts say it’s impossible?
  • “And then, as if to justify the moment of levity” This is a very strange imagining of the conversation. The quote is about, and was quoted in the context of a discussion about, what kind of emotional state it makes sense to occupy when everything you hold dear seems to be at risk. It wouldn’t make any sense to quote it immediately following the Snoop Dogg quote.
  • “The existential threat posed by A.I. had always been among the rationalists’ central issues, but it emerged as the dominant topic around 2015, following a rapid series of advances in machine learning.” I don’t think rationalists became more concerned about AI x-risk in 2015; they were already super concerned. In 2014 Stephen Hawking, Max Tegmark, Frank Wilczek, and Stuart Russell published an article in the popular press about the risk, which was a big step forward in public recognition, and Bostrom’s book “Superintelligence” came out.
  • Seems odd to mention that Hinton and Bengio are on-side without mentioning that LeCun is a constant source of ill-thought-out mockery on this issue!
  • “Some people gave away their savings, assuming that, within a few years, money would be useless or everyone on Earth would be dead.” I advise against this and I don’t know anyone who did this.
  • “Like many rationalists, she sometimes seems to forget that the most well-reasoned argument does not always win in the marketplace of ideas.” This is just one of those false things people like to say, and Katja’s quote doesn’t illustrate it. She states that her targets would have reason to listen, not that they would listen to reason.
  • “Most doomers started out as left-libertarians” – this is a non-standard use of the phrase left-libertarian. I would say that “libertarian” would be more accurate, with the caveat that the Libertarian Party fell to a form of entryism and is now an insane right-wing party a million miles from my libertarian friends.
  • “A guest brought up Scott Alexander” I’m pretty sure I was that guest, and that this happened at the party I attended, not the one described here.
  • “The same people cycle between selling AGI utopia and doom” as always Gebru has no idea. The anti-AI-omnicide movement started by Yudkowsky and Bostrom has always argued that AI could lead to a utopian future if we can somehow avoid it killing everyone.
  • “Recently, though, the doomers have seemed to be losing ground.” It’s worth taking a moment to think how extraordinary it is that we have the ground we have – concerns that in 2013 I thought would forever be dismissed as sci-fi are spoken of as worthy of serious treatment at all three major AI companies, and in the Senate.
  • Wild to hear that Upton Sinclair quote used as a reason to take “don’t worry about AI killing us all” more seriously. UPDATE: two people pointed out that I read this exactly backwards, and Sinclair is being applied in the very straightforward way that makes sense.
  • “Scott Alexander wrote a few days after the incident.” The web has this fantastic technology called the hyperlink, and indeed Scott’s essay is a necessary rebuttal to what that paragraph says.
  • “coördinate their food shopping” ah, the New Yorker.

Published by Paul Crowley

I'm Paul Crowley aka "ciphergoth", a cryptographer and programmer living in the Santa Cruz mountains, California. See also my Twitter feed: https://twitter.com/ciphergoth