No architectural leap required

I recently listened to the Yudkowsky-Hanson debate that took place at Jane Street Capital in June 2011.  It’ll surprise no-one that I’m more convinced by Eliezer Yudkowsky’s arguments than by Robin Hanson’s, but the points below aren’t meant to recap or cover the entire debate.

At roughly 34 minutes in, Hanson leans hard on the idea that a machine intelligence able to rapidly outcompete the whole of humanity would have to possess some architectural insight missing from human intelligence that makes it vastly more effective. The “brain in a box in a basement” scenario assumes no such thing. It imagines a machine intelligence that starts out with no better tools for understanding the world than a human being starts out with, but which, simply because of the change in substrate from biology to silicon, can do what we do vastly faster; it is this advantage in speed and throughput that allows it to do better than us at building machine intelligences, and thus to dramatically outcompete us.  As Yudkowsky puts it, roughly 2,500 years have elapsed since Socrates; a machine that thinks a million times faster than we do does 2,500 years’ worth of thinking in under 24 hours.
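The arithmetic behind that last claim is easy to check; here is a back-of-the-envelope sketch (the million-fold speedup is the illustrative figure from the debate, not a prediction):

```python
# How long does 2,500 years of thinking take at a 1,000,000x speedup?
YEARS_OF_THOUGHT = 2_500
SPEEDUP = 1_000_000
HOURS_PER_YEAR = 365.25 * 24  # ~8,766 hours in an average year

wall_clock_hours = YEARS_OF_THOUGHT * HOURS_PER_YEAR / SPEEDUP
print(f"{wall_clock_hours:.1f} hours")  # ≈ 21.9 hours — comfortably under a day
```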

This scenario is made vivid in his essay That Alien Message, which I strongly recommend reading if you haven’t already.  In addition to the various skills we are born with to deal with things we encounter in the ancestral environment, like recognising faces, we have skills for thinking about seemingly arbitrary things utterly remote from that environment, like quantum electrodynamics, and for sharing our knowledge about those things through cultural exchange. The “brain in a box in a basement” scenario invites us to imagine a machine which can apply those skills at vastly superhuman speeds. Robin later speaks about how a great deal of what gives us power over the world isn’t some mysterious “intelligence” but simply hard-won knowledge; if our machine intelligence has these core skills, it will make up for any shortfall in knowledge relative to us the same way any autodidact does, but at its own extremely rapid pace.

None of this lands a fatal blow on Hanson’s argument; the argument can be maintained if we suppose that even these core skills are made up of a great many small, disparate pieces, each of which contributes only a small improvement to learning, inference and decision-making ability.  However, when he seems to imply that knowledge outside these core skills will also need to be painstakingly programmed into a machine intelligence, or that improving on our AI-building ability would require some architectural superiority over human intelligence beyond mere raw speed, I don’t think that can make sense.

Published by Paul Crowley

I'm Paul Crowley aka "ciphergoth", a cryptographer and programmer living in Mountain View, California. See also my Twitter feed, my webpages, my blogs on Dreamwidth and Livejournal, and my previous proper blog. Or mail me: paul at
