No architectural leap required

I recently listened to the Yudkowsky-Hanson debate that took place at Jane Street Capital in June 2011.  It'll surprise no-one that I find Eliezer Yudkowsky's arguments more convincing than Robin Hanson's, but the points below aren't meant to recap or cover the entire debate.

At roughly 34 minutes in, Hanson leans hard on the idea that a machine intelligence that could rapidly outcompete the whole of humanity would have to embody some architectural insight, missing from human intelligence, that makes it vastly more effective. The “brain in a box in a basement” scenario assumes no such thing. It imagines a machine intelligence that starts out with no better tools for understanding the world than a human being starts out with; simply because of the change in substrate from biology to silicon, it can do what we do vastly faster, and it is this advantage in speed and throughput that allows it to do better than us at building machine intelligences, and thus dramatically outcompete us.  As Yudkowsky puts it, roughly 2,500 years have elapsed since Socrates; a machine that thinks a million times faster than we do does 2,500 years’ worth of thinking in under 24 hours.
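The arithmetic behind that last claim is easy to check (my own back-of-the-envelope figures, not from the debate):

```python
# 2,500 years of human-speed thinking, at a million-fold speed-up:
years = 2500
speedup = 1_000_000
hours = years * 365.25 * 24 / speedup
print(f"{hours:.1f} hours")  # 21.9 hours, comfortably under 24
```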

This scenario is made vivid in his essay That Alien Message, which I strongly recommend reading if you haven’t already.  In addition to the various skills we are born with for dealing with things we encounter in the ancestral environment, like recognising faces, we have skills for thinking about seemingly arbitrary things utterly remote from that environment, like quantum electrodynamics, and for sharing our knowledge of those things through cultural exchange. The “brain in a box in a basement” scenario invites us to imagine a machine which can apply those skills at vastly superhuman speeds. Hanson later speaks about how a great deal of what gives us power over the world isn’t some mysterious “intelligence” but simply hard-won knowledge; if our machine intelligence has these core skills, it will make up for any shortfall in knowledge relative to us the same way any autodidact does, but at its own extremely rapid pace.

None of this lands a fatal blow on Hanson’s argument; it can be maintained if we suppose that even these core skills are made up of a great many small, disparate pieces, each of which makes only a small improvement to learning, inference and decision-making ability.  However, when he seems to imply that knowledge outside these core skills will also need to be painstakingly programmed into a machine intelligence, or that improving on our AI-building ability would require some architectural superiority over human intelligence beyond mere raw speed, I don’t think that holds up.

Lambda calculus and Graham’s number

Very big numbers like Graham’s Number are often expressed with lengthy, clumsy, semi-formal explanations. But it’s concise and convenient to express such numbers precisely in the pure lambda calculus, using Church numerals. Starting with Knuth’s up-arrow: if we define f_{n,a}(b) = a↑ⁿb, then f_{0,a}(b) = ab and f_{n+1,a}(b) = f_{n,a}^b(1), where the superscript denotes b-fold iteration. From this we get that the lambda expression upify = λf b. b f 1 yields f_{n+1,a} given f_{n,a}, and so uparrow = λn a b. n upify (times a) b. A similar process yields Graham’s Number, given that G = f^64(4) where f(n) = 3↑ⁿ3:

1 = λf x. f x
2 = λf x. f (f x)
3 = λf x. f (f (f x))
4 = 2 2
64 = 3 4

times = λa b f. a (b f)
upify = λf b. b f 1
uparrow = λn a b. n upify (times a) b
grahamf = λn. uparrow n 3 3
graham = 64 grahamf 4
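These definitions are easy to sanity-check mechanically. Here is a sketch in Python (my own, not from the original post) that builds Church numerals as ordinary functions, mirrors times, upify and uparrow, and cross-checks small cases against a direct integer implementation of the same recurrence. Evaluating graham itself this way is, of course, hopeless.

```python
# Church numerals as Python functions: church(n) behaves like λf x. f^n(x).
def church(n):
    def numeral(f):
        def apply_n(x):
            for _ in range(n):
                x = f(x)
            return x
        return apply_n
    return numeral

def unchurch(c):
    # Recover an ordinary integer by counting applications.
    return c(lambda k: k + 1)(0)

one = church(1)
times = lambda a: lambda b: lambda f: a(b(f))  # times = λa b f. a (b f)
upify = lambda f: lambda b: b(f)(one)          # upify = λf b. b f 1
uparrow = lambda n: lambda a: lambda b: n(upify)(times(a))(b)

# Direct integer version of the recurrence, for cross-checking:
# f_{0,a}(b) = a*b, and f_{n+1,a}(b) iterates f_{n,a} b times from 1.
def knuth(n, a, b):
    if n == 0:
        return a * b
    result = 1
    for _ in range(b):
        result = knuth(n - 1, a, result)
    return result

for n, a, b in [(0, 5, 3), (1, 2, 3), (2, 2, 3)]:
    assert unchurch(uparrow(church(n))(church(a))(church(b))) == knuth(n, a, b)

# grahamf = λn. uparrow n 3 3 and graham = 64 grahamf 4 translate the same
# way, but do NOT try to evaluate them.
print(unchurch(uparrow(church(2))(church(2))(church(3))))  # 2↑↑3 = 16
```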

Putting it all together:

graham = (λc2 c3. (λc4. c3 c4 (λn. n (λf b. b f (λx. x)) (λb f. c3 (b f)) c3) c4) (c2 c2)) (λf x. f (f x)) (λf x. f (f (f x)))

In John Tromp’s “binary lambda calculus”, this expression takes up 120 bits—exactly as many bits, as Doug Clow points out below, as the string “Graham’s Number” in Unicode. Don’t try evaluating this in your favourite lambda calculus interpreter unless you’re very, very patient.
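The 120-bit figure can be verified with a few lines of code. In Tromp’s encoding an abstraction costs 2 bits (00), an application costs 2 bits (01), and a variable with de Bruijn index i costs i + 1 bits (i ones and a zero). The sketch below (my own, assuming that encoding) rebuilds the expression above in de Bruijn form and counts bits:

```python
# Count the binary-lambda-calculus size of the graham expression.
def lam(body):
    return ('lam', body)

def app(f, *args):
    term = f
    for arg in args:
        term = ('app', term, arg)
    return term

def var(i):
    return ('var', i)

def blc_bits(term):
    tag = term[0]
    if tag == 'var':
        return term[1] + 1                            # 1^i 0
    if tag == 'lam':
        return 2 + blc_bits(term[1])                  # 00 <body>
    return 2 + blc_bits(term[1]) + blc_bits(term[2])  # 01 <fun> <arg>

two   = lam(lam(app(var(2), app(var(2), var(1)))))               # λf x. f (f x)
three = lam(lam(app(var(2), app(var(2), app(var(2), var(1))))))  # λf x. f (f (f x))
one   = lam(var(1))                                              # η-reduced Church 1

# grahamf = λn. n upify (times c3) c3, with de Bruijn indices relative
# to its position inside the full term below (var(5) reaches c3).
upify   = lam(lam(app(var(1), var(2), one)))          # λf b. b f 1
timesc3 = lam(lam(app(var(5), app(var(2), var(1)))))  # λb f. c3 (b f)
grahamf = lam(app(var(1), upify, timesc3, var(3)))

# graham = (λc2 c3. (λc4. c3 c4 grahamf c4) (c2 c2)) 2 3
inner  = lam(app(var(2), var(1), grahamf, var(1)))
graham = app(lam(lam(app(inner, app(var(2), var(2))))), two, three)

print(blc_bits(graham))  # 120
```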

Updated 2020-02-20:  I blogged about a beautiful way to see this as a picture.

Welcome to my new blog!

When I first went to set up a blog, I wanted something I had the option of hacking on myself, based on a decent programming language and a decent toolkit.  Being a Pythonista, I looked for something Django-based, and settled on Byteflow. I set it up and started blogging.

This turns out to have been very much the wrong choice. There’s been one commit against Byteflow since September 2011, I’ve had trouble every time I’ve needed to upgrade, and I was overrun with spam until the last upgrade, when comments broke. So this time I’m going the other way—choosing the awful, PHP-based WordPress, which is also by far the most popular blogging platform in the world, and hosting it on WordPress.com instead of trying to host it myself, which should mean it stays up to date with security and anti-spam measures.

Let’s hope this leads to better and more productive conversations. Thanks for joining me!