Where is my intelligent toaster?

As someone who did some research in AI in college, one of the questions I was constantly asked was "Where is my intelligent toaster?" Of course, it wasn't always a toaster; sometimes it was a car, or a computer, or whatever. The question, at its most basic level, was why the things around us weren't wandering around like C-3PO or the robots in an Isaac Asimov novel, acting human-like or at least talking to us about things ("Your toast is ready, master").

The answer is that there are two things. The first is that "acting human-like" is hard. Really, really hard. Not only do human brains outclass current computers by orders of magnitude, but even simulating small parts of what they do (like speech) ends up being much harder than it appears at first. Speech involves not only making noise, but also controlling the intonation and emphasis enough to sound like something other than a cold, unfeeling machine. Then, since at that point you might as well just use recorded human voices (and we know what those tend to sound like), you need to alter those enough to avoid being repetitive. Then, you need to be aware enough of your environment to alter them appropriately (and know what "appropriately" is), so that you aren't just randomly wandering around the pitch scale. It keeps going from there. It doesn't help that humans are very good at distinguishing human things from non-human things [1], so even getting "pretty close" isn't good enough.

The second, and perhaps more useful, answer is that AI is far too subtle for that. Rather than showing up at your door in gold and chrome and announcing that it is C-3PO, human-cyborg relations, it has been slowly integrated into everything that can benefit from being able to adapt a bit better to changing circumstances. Your toaster monitors the heat levels near the bread and adjusts current flow to heat cooler areas evenly. Your car monitors hundreds of things in the engine and adjusts them constantly. Airplanes have enough intelligence in them that in many cases, the control sticks are actually just "suggestions" to the onboard computer, which is doing the actual flying. Traffic lights monitor the traffic approaching them and time themselves to avoid causing traffic delays, and in some cases communicate with adjacent lights to coordinate themselves. Your compiler figures out that there are some bits of your code that you are never going to need, and removes them entirely. GlassFish tries to figure out how many JRuby runtimes you actually need, and makes sure you have that many.
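To make the toaster example concrete, here is a minimal sketch of the kind of feedback loop such an appliance might run: read a temperature sensor, compare it to a target, and set the heating current in proportion to the error. Everything here (names, constants, the plant model) is hypothetical, made up for illustration, and not any real toaster's firmware.

```python
def heater_current(measured_temp, target_temp, gain=0.05):
    """Proportional control: drive the heater harder the farther
    we are below the target, and shut it off above the target."""
    return max(0.0, gain * (target_temp - measured_temp))

def simulate(target_temp=180.0, steps=50):
    temp = 20.0  # start at room temperature
    for _ in range(steps):
        current = heater_current(temp, target_temp)
        # Toy plant model: heating raises the temperature,
        # while ambient losses pull it back toward room temperature.
        temp += 2.0 * current - 0.05 * (temp - 20.0)
    return temp
```

Even this toy version shows the adaptation in question: the heater backs off as the bread warms up, settling near (though, as with any purely proportional controller, a bit short of) the target instead of blindly running at full power.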

"But wait!" I hear you say. "That's not actual AI. There isn't any intelligence there! It's just following instructions!" This is what those of us in AI like to call "moving the goalposts". Computers will, almost by definition, always "just be following instructions". Whether humans are or not is the subject of much philosophical debate. But regardless, there are many things we have now, as mentioned in the above paragraph, that are much better at observing their environments and reacting to them. And what is intelligence, if not being able to react well to an environment and learn from experience? Once, AI meant playing chess well, or handwriting recognition, or spam filtering. Now that those are all possible, they are relegated to "just following instructions", since we can see how the computer is doing them. AI is now playing Go, driving autonomous cars, and generalized learning, and it will move on again once those are possible.

Your intelligent toasters are here; you just haven't noticed them yet because they are so helpful, and their AI just makes them do what they do better and more automatically.


[1] This is part of where the idea of the uncanny valley comes from.

Jacob Kessler