Of AI and Turing Machines

As I alluded to in my last blog post, there are some interesting interactions between materialism, strong AI, and undecidable problems like the halting problem. This post is going to be a somewhat long, somewhat rambling musing session about those interactions. This is what you get when you give a computer science/philosophy person a blog. Well, that or lots of symbolic logic.

Since this is a personal blog and not a philosophy class, I'll start with a brief explanation of each of those three terms, since they'll be fairly central to the bits that follow. I've also included Wikipedia links for further reading.

Materialism is a theory of mind which says that whatever it is that makes humans intelligent compared to, say, rocks is purely material: subject to all the normal physical laws, not some kind of spirit, soul, or other supernatural entity, and ideally detectable and analysable with the appropriate tools. It's a fairly popular view among scientists, since the scientific world view resists admitting things that it can't even in principle explain; once you start doing that, the whole enterprise starts breaking down.

Strong AI is a theory that says that it is possible to make an artificial intelligence that can do anything mental that a human can do. This is more or less implied by materialism: if intelligence is purely material, it must be (at least theoretically) possible to put the right materials in the right places and get something intelligent. Strong AI proponents claim that the reasons present-day AIs aren't very human-like are that computers don't yet have the processing power of our brains[1], that we haven't figured out how to get computers to learn (since that's a large part of how we act intelligently), and that they need more research funding.

Undecidable problems come from computability theory: they are a class of problems that no Turing machine can answer in general. The interesting things are that there are a lot of them, and that anything a Turing machine can't do, no other computer can do either. (There are theoretical computers called hypercomputers that can surpass Turing machines, but they tend to do things like complete an infinite number of instructions in finite time, which we're pretty sure doesn't work in the physical world[2].)
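To make "Turing machine" a bit more concrete, here's a minimal single-tape simulator in Python. This is my own sketch for illustration, not anything from the computability literature; the example machine and its transition table are made up:

```python
def run_tm(transitions, tape_str, state="start", blank="_", max_steps=10_000):
    """Run a single-tape Turing machine.  `transitions` maps
    (state, read_symbol) -> (next_state, write_symbol, move),
    where move is "L" or "R".  Entering state "halt" stops the machine."""
    tape = dict(enumerate(tape_str))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        state, write, move = transitions[(state, tape.get(head, blank))]
        tape[head] = write
        head += 1 if move == "R" else -1
        if state == "halt":
            cells = range(min(tape), max(tape) + 1)
            return "".join(tape.get(i, blank) for i in cells).strip(blank)
    # In general, no procedure can always tell you whether more steps would help.
    raise RuntimeError("no halt within max_steps")

# A toy machine that flips every bit and halts at the first blank cell:
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}
print(run_tm(flip_bits, "1011"))  # -> 0100
```

Everything an ordinary computer does can, in principle, be boiled down to a (much bigger) table like `flip_bits`; that's the sense in which "anything a Turing machine can't do, no other computer can do either".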

So what do the three of those imply? Briefly: if strong AI is possible, there must be undecidable problems for humans. Not undecidable in the sense of "do I want the muffin or the bagel for breakfast?", but problems that a brain could not successfully solve, and in some cases could not even think about. Obviously we don't run into those very often, but it's something to consider: there may be some things that humans are simply incapable of thinking about, and never will be able to think about. Those undecidable problems must exist because strong AI implies that human brains are equivalent to Turing machines (as opposed to hypercomputers, which some people have proposed); a hypercomputing brain would require a hypercomputing AI to match it, and without one of the previously mentioned infinite-instructions-in-finite-time computers, there are always undecidable problems. For example, no Turing machine can determine, for every Turing machine and input, whether that machine will halt on that input. It doesn't matter how much power you throw at the problem; it cannot be done.
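One way to feel why raw power doesn't help with the halting problem is to see what the best honest effort looks like. In this sketch (my own illustration; the names and the generator-based modeling are made up), a "program" is a Python generator and each next() is one step of computation. A step-bounded checker can confirm halting, but it can never confirm non-halting:

```python
def halts_within(program, budget):
    """Return True if `program` halts within `budget` steps,
    or None when the budget runs out -- a 'no' can never be certain."""
    g = program()
    for _ in range(budget):
        try:
            next(g)
        except StopIteration:
            return True   # observed to halt
    return None           # unknown: might halt at step budget + 1, or never

def count_to_five():      # halts after five steps
    for _ in range(5):
        yield

def loop_forever():       # never halts
    while True:
        yield

print(halts_within(count_to_five, 100))    # -> True
print(halts_within(loop_forever, 10_000))  # -> None
```

Throwing a bigger budget at `loop_forever` just produces a bigger None: the general question "will it ever halt?" stays out of reach no matter how many steps you can afford.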

Now, there are some things that we find difficult, but not impossible, to think about. Think about the number 494,780,232,499,200. That's a very large number, and you probably have some trouble visualizing, or even conceptualizing, 494,780,232,499,200 of anything. 494,780,232,499,200 pennies? 494,780,232,499,200 grains of sand? 494,780,232,499,200 atoms? But you can think about it: it's 450 × 2^40, the number of bytes in 450 tebibytes of storage, a kind of quantity you've probably dealt with before. What we are talking about here is different: something that your brain is incapable of thinking about at all, thinking being our best analog to computation. An unfortunate part of mixing theoretical computer science and brains is that while the computer science assumes that everything can be reduced to math, brains really don't seem to operate that way (even though the theoretical computer scientists insist that they should be reducible), so we can't really say exactly what our brains are computing. That also means I can't give a concrete example of a problem that a human brain could not compute. It would be something that you might be able to figure out that you couldn't think about, and might not even be able to think about not being able to think about: a blot of nothingness in your mental awareness that you might detect by its absence, but whose contents you could never make out. Most of the standard examples are highly mathematical in nature, but they have an annoying tendency to translate into all sorts of other things.
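The arithmetic behind that big number is easy to check, assuming binary units (1 tebibyte = 2^40 bytes):

```python
# 494,780,232,499,200 is exactly 450 * 2**40 -- 450 TiB worth of bytes.
big = 450 * 2**40
print(big)         # -> 494780232499200
print(f"{big:,}")  # -> 494,780,232,499,200
```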

Along with that, it implies that since brains are computers, or at least equivalent to them, it should be possible, through malicious or accidental input, to crash or "hack" a brain, just as with computers. This has come up several times in fiction. Of course, evolution would ensure that brains that crashed on common input wouldn't last long, but in the absence of unit testing and formal input validation, it seems likely that some inputs will trigger adverse reactions. It could even be argued that drugs are ways of chemically hacking the brain, though that seems rather like hacking a computer you have physical access to. The number of people vulnerable to such an attack would probably be proportional to the "depth" at which the attack operated: some exploits might affect a single person and their close family, while something that exploited the brainstem might affect most or all of the people in the world. Just another thing to think about when you go around exposing your brain to all kinds of random input.

And now, perhaps the most interesting bit. Brains have what is called "First Person Conscious Experience": the feeling that you are you, and that you are looking out of your eyes, hearing things with your ears, and touching things with your hands. Basically, the feeling that you are conscious, that your body belongs to and is controlled by you; the entire idea of an "I". FPCE is interesting to philosophers, since there isn't anything about neurons that explains why humans (and some, but not all, animals) seem to have it. I say "seem" because it's more or less impossible to prove that anyone other than yourself has FPCE, and the tests we've devised for it are fairly crude. So, assuming that we achieve strong AI at some point (and materialism says that we can), computers would have FPCE as well. And once computers can have FPCE, what other things might have it? The only real criterion we've found thus far is a sufficiently complex brain, and computers (and neurons) suggest that no individual unit needs to have FPCE for the whole to have it. So what other things have around the right number of interacting components? A common example is ant colonies, which display surprisingly intelligent behavior even though individual ants aren't very smart on their own. Colonies in excess of 50 million individuals have been found; could those colonies, collectively, have FPCE? Another classic example is Los Angeles: the greater metropolitan area has well over ten million individually powerful, communicating components, and several times that many if you count all the other things living there too. Might LA be conscious? What would it mean if it were? How about the Earth as a whole, with hundreds of trillions of things living and interacting on it? And before you discount the bacteria, remember how your behavior changed the last time you got sick, and how that affected everything around you...

Well, that was a nice can of worms with no real answers in it. Of course, if there were answers, it wouldn't be philosophy. Hopefully I've given you something to think about and/or awake from nightmares about in a cold sweat. Thank you for reading.

[1] The average human brain has approximately 100 billion (10^11) neurons, each capable of around 200 switches per second, for a total of 20 trillion or so operations per second. However, neurons have much higher connection density and parallelism than computers, and take advantage of a lot of chemistry to get their work done. In 2005, the Blue Brain project used 23 teraflops to simulate 10,000 neurons. At that rate, you would need a computer 230,000 times more powerful than the #1 supercomputer in the world to simulate a human brain. It's also becoming increasingly unclear how much computation happens in the brain itself, as opposed to being delegated to nerve centers elsewhere in the body.
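The back-of-the-envelope numbers in this footnote can be checked mechanically. The ~1 petaflop figure for the era's top supercomputer is my assumption, since that's what the stated 230,000× ratio implies:

```python
neurons = 10**11       # ~100 billion neurons
switch_rate = 200      # ~200 switches per second each
print(neurons * switch_rate)  # -> 20000000000000 (20 trillion ops/second)

blue_brain_flops = 23 * 10**12           # 23 teraflops for 10,000 neurons (2005)
flops_per_brain = blue_brain_flops * (neurons // 10**4)
top_supercomputer = 10**15               # assumed: ~1 petaflop machine
print(flops_per_brain // top_supercomputer)  # -> 230000
```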

[2] The classic example of this is the Zeno machine, a computer which completes its first instruction in one time unit and each subsequent instruction in half the time of the previous one. Since the sum of (1/2)^x converges to 2, after two time units the Zeno machine will have completed an infinite number of instructions. Obviously, such computers are confined to the realm of theory.
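You can watch that series converge numerically (a quick sketch of my own):

```python
# Partial sums of 1 + 1/2 + 1/4 + ... : each extra term halves the gap to 2.
for n in (1, 2, 4, 8, 16):
    print(n, sum(0.5**k for k in range(n)))

total = sum(0.5**k for k in range(60))
print(total)  # within floating-point error of 2.0
```

The finite partial sums never actually reach 2, which is exactly why a real machine taking these ever-shrinking time slices still never finishes the infinite job; the Zeno machine cheats by treating the limit as reachable.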


Ever heard of epileptics?

Posted by AH on March 20, 2011 at 06:37 AM PDT #


Jacob Kessler

