July 25th, 2006
|mathemajician||10:18 am - Public eye, public mind|
I'm regularly seeing the idea of the technological singularity turning up in the mainstream media. Moreover, the tone of the articles is much more positive than a few years back: positive in the sense that "AI is now really going somewhere".
The latest I read was this short article on CNN.
As an AI researcher I'm certainly welcoming this change as it will lead to better opportunities in the coming years.
|Date:||July 25th, 2006 02:14 pm (UTC)|
And as an AI researcher I would think you also know that it's a lot of wishful thinking. The masses don't know the details and so they don't know the profound limitations of real AI as opposed to science fiction AI. You're not sitting in Switzerland building Lieutenant Commander Data; you're proving theorems about Turing machines and incompleteness, or doing other necessary but non-sexy work. Eliezer Yudkowsky is not going to save the world, as much as I might want to believe he will. But good press is good press, I guess, as long as it leads to some more funding for you :-)
(Incidentally, I don't mean to attack you or the field of AI, merely to express the disparity between the popular and academic notions of it. I dig your work when I see bits of it.)
Yes, more positive press about the field in general can only help when it comes to funding.
Do I think it's wishful thinking? Actually, I'm a big AI optimist. I expect superhuman machine intelligence in a couple of decades, if not sooner. ;-)
|Date:||July 25th, 2006 02:49 pm (UTC)|
I'm with you in spirit, but I suppose I'm too cynical to consciously believe it's coming anytime soon.
Keep up the good work, though. Success will only come to those who genuinely believe it's possible, so I'm glad you think how to do. I would love to be proven wrong about my AI pessimism!
|Date:||July 25th, 2006 02:51 pm (UTC)|
* That sentence was supposed to say "think how you do".
I think AI is missing a few key things at the moment:
1) Massive training data. Humans take years to train.
2) Massive computational resources. Human brains have massive computational power (in a highly parallel form, of course).
3) There are a few key parallel hierarchical learning algorithms that the brain uses that we don't yet understand.
I think all of these problems are solvable in the next 20 years. But it's quite different from how most AI is done now.
|Date:||July 25th, 2006 03:49 pm (UTC)|
Obviously, problem #2 there is the easiest to solve, because it's going to get solved by other people anyway. Everyone needs massive computation resources for everything. And parallel processing is the only significant way forward I can see unless something dramatic happens.
Problems #1 and #3 are very valid points I used to mull over quite a bit in my "I'm going to be an AI researcher when I grow up" phase that never quite panned out after undergrad. If you get #3 nailed down well enough, #1 might even mostly fall into place automatically as a result. But #3 is, needless to say, a hard problem, and therein lies the source of my AI pessimism.
My major was electrical/computer engineering, but I hung out with the CS people a lot and administered their Beowulf cluster and such. My senior project was an evolutionary-algorithm-based automatic circuit designer (not practical, but fun to do), and I used to do lots of neural classifier stuff if I had half an excuse. The engineering people always wondered why I was in their department, but I just couldn't stomach the idea of sitting through undergrad CS classes and having someone slowly and boringly re-explain Java inheritance rules to the drones.
Now I do weird gadgety things like wireless power transmission for kitchen countertops, which is less cool in my estimation than, say, what you do for a living. Maybe I'll go back to school at some point.
Ok, so we both agree that #2 is going to get done. Indeed, I think that most of the technology to do it is already known. What's needed is a concrete high-level neural network design for chip engineers to implement in hardware. There are already a number of groups building neural chips, just not on a large scale and with cutting-edge fab technology.
As for #1, I don't really see this as a big problem. After all, we provide children with enough input to learn from. Maybe we need to provide the AI with a basic robotic body and some other kinds of interface abilities. Looking at how much better the humanoid robots coming out of Japan get each year, I find it hard to believe that a sufficient body for an AI won't be possible ten years from now. Remember that people who are deaf, blind, and in a wheelchair can still develop considerable knowledge about the world and a high level of intelligence. So we don't need the perfect humanoid body for the AI to achieve human-level intelligence.
Ok, so that's #1 and #2 covered, if not now, then in 10 years' time. What about #3? This is the hard one, as you point out. What changed my mind about this was reading "On Intelligence" by Jeff Hawkins. He might not know exactly how the cortex works, indeed he clearly doesn't, but he does make a very important point: most of the intelligence in the brain comes from a very general learning algorithm in the neocortex. It learns to do vision, sound processing, planning, touch processing, muscle movement, and so on. It's basically a huge hierarchical prediction and modelling system that can learn, and does learn, to deal with just about any information you feed it. If you can extract this general-purpose learning algorithm out of the six-layered structure of the neocortex, you will have solved most of the mystery of how to do AI.
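To make the "hierarchical prediction" idea concrete, here is a deliberately tiny sketch in Python. This is not Hawkins' actual cortical algorithm (which the book does not specify in code); it's a toy illustration of the general shape: each layer learns transition statistics over its inputs and predicts what comes next, while a higher layer sees coarser chunks of the same stream and so learns patterns at a longer timescale. All class and function names here are my own invention for illustration.

```python
from collections import defaultdict

class TransitionPredictor:
    """Learns first-order transition counts over symbols and predicts
    the most frequently observed next symbol."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def learn(self, sequence):
        for prev, nxt in zip(sequence, sequence[1:]):
            self.counts[prev][nxt] += 1

    def predict(self, symbol):
        followers = self.counts.get(symbol)
        if not followers:
            return None  # never seen this symbol before
        return max(followers, key=followers.get)

class TwoLayerHierarchy:
    """Toy hierarchy: layer 1 predicts raw symbols; layer 2 sees
    non-overlapping pairs of symbols, i.e. the same stream at a
    coarser timescale, and predicts the next chunk."""
    def __init__(self):
        self.layer1 = TransitionPredictor()
        self.layer2 = TransitionPredictor()

    def learn(self, sequence):
        self.layer1.learn(sequence)
        chunks = [tuple(sequence[i:i + 2])
                  for i in range(0, len(sequence) - 1, 2)]
        self.layer2.learn(chunks)

# Train on a repeating pattern and query both levels of the hierarchy.
seq = list("abcd" * 50)
h = TwoLayerHierarchy()
h.learn(seq)
print(h.layer1.predict("a"))         # -> 'b'
print(h.layer2.predict(("a", "b")))  # -> ('c', 'd')
```

The point of the sketch is only the structure: the same simple learning rule is reused at every level, and higher levels automatically deal in more abstract, slower-changing patterns, which is the flavour of what Hawkins attributes to the uniform circuitry of the neocortex.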
The book doesn't seem all that heavy, and you might be forgiven for thinking that he doesn't really know what he's talking about. However, if you read what people are saying about this book, very big names in both neuroscience and AI think that Hawkins might really be onto something. I've read a bit of neuroscience myself since reading Hawkins' book, and from what I've read it seems that the key points of what Hawkins is suggesting could well be correct. If he's right, then there's a relatively simple and general learning and control algorithm in the brain that does most of the work; we just need to figure out what it is. It seems reasonable to me that this could well happen within the next 10 or 15 years. It would blow the doors off AI, as we would then understand how to get a computer to learn in the same way as a brain.