My questions for Leah Libresco

Leah Libresco made waves earlier this year when, after years of blogging for the Patheos atheism portal, she announced her conversion to Roman Catholicism.  Shortly after that, in late July, she attended the same CFAR one-week camp as me, and it was a privilege to spend time with her: she’s smart, energetic, thoughtful and very good fun.  She has also been incredibly helpful coordinating post-camp activities to keep us all in touch and help us help each other achieve our goals.  Oh, and I think the vowel is pronounced the same way as “see ya”; I was told “not like Princess Leia”, which immediately meant I could no longer remember how Princess Leia’s name was pronounced.

Near the end of minicamp, late one evening, I asked her if she had spent much time arguing about religion with fellow minicampers. I tried to persuade her that she should, but I never got to pursue my own line of argument in detail—the conversation moved on and it was all too interesting to drag it back!  So I’m setting it out again here in the hope that she’ll have time to respond.

I confess I don’t fully understand her justification for converting, and I know I’m not alone: “it seems that her justification is opaque and too complicated for one blog post,” wrote Vlad Chituc. In general, though, I find it’s a mistake to try to get into religious arguments “from the inside”—you end up playing a game of self-referential Twister whose rules you neither know nor care about. Instead I wanted to start from the outside, with a sequence of hypotheticals.  Leah believes that there is something about morality that implies a god.  I wanted to know if any of the following hypothetical worlds are compatible with the absence of a god:

  • A world in which there is no life
  • A world in which there are only simple single-celled animals
  • Multicellular animals
  • Mutualism, such as between flowers and bees
  • The kind of reciprocal altruism we observe in animal species
  • A species that is violent towards those who don’t sacrifice their own interests to further the interests of all
  • A species that is violent towards those who are not violent as above
  • A species that develops language, and uses it to talk about who will be punished and who will not
  • A species that uses the same kind of language as we do to talk about morality.

If I recall correctly, Leah accepted the compatibility of all of these worlds with godlessness except the last.

On the one hand, this is good—there is a clear path by which she believes the existence of a god is the historical cause of her belief in a god, as any good Bayesian requires.  I was worried that the argument would be entirely based on ideas in moral philosophy, not in things we could observe about the world—such an argument would hold in all my hypothetical worlds, not just the last one.

On the other hand, if this is the key to her argument, then it’s odd that so much of it is taken up with discussing moral philosophy, when what she should be entirely concerned with is evolutionary psychology, which would directly address the question of whether our current attitudes and language about morality can arise in a universe without a god.

Near the end of our discussion, she asked me: “do you think morality is more like a matter of taste, or more like math?” I didn’t get a chance to answer, which is one reason I wanted to write this blog post. As it happens, on metaethical matters I tend to agree with Joshua Greene. But what I really wanted to say was I’M ASKING THE QUESTIONS! Or, to put it in a less silly way: I’m happy to have a discussion about what I think about metaethics, but I don’t see how that relates to my efforts to understand, through hypothetical questions, what her position is.

I chose my questions precisely to step around the minefield of metaethics, because what I wanted to know was: is there something different about what we observe that acts as evidence for a god here?

No architectural leap required

I recently listened to the Yudkowsky-Hanson debate that took place at Jane Street Capital in June 2011.  It’ll surprise no-one that I’m more convinced by Eliezer Yudkowsky’s arguments than Robin Hanson’s, but the points below aren’t meant to recap or cover the entire debate.

At roughly 34 minutes in, Hanson leans hard on the idea that a machine intelligence that could rapidly outcompete the whole of humanity would have to embody some architectural insight, missing from human intelligence, that makes it vastly more effective. The “brain in a box in a basement” scenario assumes no such thing. It imagines the machine intelligence starting out with no better tools with which to understand the world than a human being starts out with, but that simply because of the change in substrate from biology to silicon, the machine intelligence can do what we do vastly faster, and it is this advantage in speed and throughput that allows it to do better than us at building machine intelligences, and thus dramatically outcompete us.  As Yudkowsky puts it, roughly 2,500 years have elapsed since Socrates; a machine that thinks a million times faster than we do gets through 2,500 years’ worth of thinking in under 24 hours.
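That arithmetic is easy to verify; here’s a quick back-of-the-envelope check in Python (my own illustration, not part of the debate):

years = 2_500
speedup = 1_000_000

# 2,500 years of subjective thinking time, compressed a million-fold:
hours = years * 365.25 * 24 / speedup
print(f"{hours:.1f} hours")  # ~21.9 hours, i.e. under a day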

This scenario is made vivid in his essay That Alien Message, which I strongly recommend reading if you haven’t already.  In addition to the various skills we are born with to deal with things we encounter in the ancestral environment, like recognising faces, we have skills for thinking about seemingly arbitrary things utterly remote from that environment, like quantum electrodynamics, and for sharing our knowledge about those things through cultural exchange. The “brain in a box in a basement” scenario invites us to imagine a machine which can apply those skills at vastly superhuman speeds. Robin later speaks about how a great deal of what gives us power over the world isn’t some mysterious “intelligence” but simply hard-won knowledge; if our machine intelligence has these core skills, it will make up for any shortfall in knowledge relative to us the same way any autodidact does, but at its own extremely rapid pace.

None of this lands a fatal blow on Hanson’s argument; it can be maintained if we suppose that even these core skills are made up of a great many small, disparate pieces, each of which makes only a small improvement to learning, inference and decision-making ability.  However, when he seems to imply that knowledge outside these core skills will also need to be painstakingly programmed into a machine intelligence, or that some architectural superiority over human intelligence beyond mere raw speed would be needed to improve on our AI-building ability, I don’t think that can make sense.

Lambda calculus and Graham’s number

Very big numbers like Graham’s Number are often expressed with lengthy, clumsy, semi-formal explanations. But it’s concise and convenient to express such numbers precisely in the pure lambda calculus, using Church numerals. Starting with Knuth’s up-arrow notation, if we define f_{n,a}(b) = a ↑ⁿ b, then f_{0,a}(b) = a·b and f_{n+1,a}(b) = f_{n,a}ᵇ(1), i.e. f_{n,a} applied b times to 1. From this we get that the lambda expression upify = λf b. b f 1 yields f_{n+1,a} given f_{n,a}, and so uparrow = λn a b. n upify (times a) b. A similar process yields Graham’s Number, given that G = f⁶⁴(4), i.e. f applied 64 times to 4, where f(n) = 3 ↑ⁿ 3:

1 = λf x. f x
2 = λf x. f (f x)
3 = λf x. f (f (f x))
4 = 2 2
64 = 3 4

times = λa b f. a (b f)
upify = λf b. b f 1
uparrow = λn a b. n upify (times a) b
grahamf = λn. uparrow n 3 3
graham = 64 grahamf 4
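
These definitions transcribe directly into Python lambdas, which makes a handy sanity check (a sketch of my own, tested only on tiny values, since Graham-scale inputs are hopeless):

# Church numerals and the up-arrow construction, as Python lambdas
one     = lambda f: lambda x: f(x)
two     = lambda f: lambda x: f(f(x))
three   = lambda f: lambda x: f(f(f(x)))

times   = lambda a: lambda b: lambda f: a(b(f))
upify   = lambda f: lambda b: b(f)(one)
uparrow = lambda n: lambda a: lambda b: n(upify)(times(a))(b)

to_int  = lambda n: n(lambda x: x + 1)(0)  # Church numeral -> int

assert to_int(uparrow(one)(two)(three)) == 8   # 2 ↑ 3 = 2**3
assert to_int(uparrow(two)(two)(three)) == 16  # 2 ↑↑ 3 = 2**(2**2)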

Putting it all together:

graham = (λc2 c3. (λc4. c3 c4 (λn. n (λf b. b f (λf x. f x)) (λb f. c3 (b f)) c3) c4) (c2 c2)) (λf x. f (f x)) (λf x. f (f (f x)))

In John Tromp’s “binary lambda calculus”, this expression takes up 120 bits—exactly as many bits, as Doug Clow points out below, as the string “Graham’s Number” in Unicode. Don’t try evaluating this in your favourite lambda calculus interpreter unless you’re very, very patient.
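If you’d like to experiment anyway, a plain-integer rendering of the same recursion (again my own sketch, not from the lambda term above) manages slightly larger values:

def uparrow(n, a, b):
    # a ↑ⁿ b, with a ↑⁰ b defined as a * b
    if n == 0:
        return a * b
    result = 1
    for _ in range(b):  # apply f_{n-1,a} b times to 1
        result = uparrow(n - 1, a, result)
    return result

assert uparrow(1, 3, 3) == 27             # 3 ** 3
assert uparrow(2, 3, 3) == 7625597484987  # 3 ↑↑ 3 = 3 ** 27

# Graham's Number: apply (lambda n: uparrow(n, 3, 3)) 64 times,
# starting from 4. Don't run that either.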

Updated 2020-02-20:  I blogged about a beautiful way to see this as a picture.

Welcome to my new blog!

When I first went to set up a blog, I wanted something I had the option of hacking on myself, based on a decent programming language and a decent toolkit.  Being a Pythonista, I looked for something Django-based, and settled on Byteflow. I set it up and started blogging.

This turns out to have been very much the wrong choice. There’s been one commit against Byteflow since September 2011, I’ve had trouble every time I’ve needed to upgrade, and I was overrun with spam until the last upgrade, when comments broke. So this time I’m going the other way—choosing the awful, PHP-based WordPress, which is also by far the most popular blogging platform in the world, and hosting it on WordPress.com instead of trying to host it myself, which should mean it stays up to date with security and anti-spam measures.

Let’s hope this leads to better and more productive conversations. Thanks for joining me!