I just came across a relatively recent interview in Discover Magazine with Marvin Minsky, legendary MIT professor and A.I. pioneer. In it he does a bit of neuroscience-bashing, which I feel I lack the knowledge to comment on, but I'm absolutely with him on the claim that artificial intelligence is the way to understand the mind. Nobody can expect to understand the workings of the brain through handwritten formulas or by tracing individual neural interactions; it's simply too complex. We need simulations. I'm putting a small excerpt from the interview below, but you should really read the whole thing.
Neuroscientists’ quest to understand consciousness is a hot topic right now, yet you often pose things via psychology, which seems to be taken less seriously. Are you behind the curve?
I don’t see neuroscience as serious. What they have are nutty little theories, and they do elaborate experiments to confirm them and don’t know what to do if they don’t work. This book presents a very elaborate theory of consciousness. Consciousness is a word that confuses possibly 16 different processes. Most neurologists think everything is either conscious or not. But even Freud had several grades of consciousness. When you talk to neuroscientists, they seem so unsophisticated; they major in biology and know about potassium and calcium channels, but they don’t have sophisticated psychological ideas. Neuroscientists should be asking: What phenomenon should I try to explain? Can I make a theory of it? Then, can I design an experiment to see if one of those theories is better than the others? If you don’t have two theories, then you can’t do an experiment. And they usually don’t even have one.
So as you see it, artificial intelligence is the lens through which to look at the mind and unlock the secrets of how it works?
Yes, through the lens of building a simulation. If a theory is very simple, you can use mathematics to predict what it’ll do. If it’s very complicated, you have to do a simulation. It seems to me that for anything as complicated as the mind or brain, the only way to test a theory is to simulate it and see what it does. One problem is that often researchers won’t tell us what a simulation didn’t do. Right now the most popular approach in artificial intelligence is making probabilistic models. The researchers say, “Oh, we got our machine to recognize handwritten characters with a reliability of 79 percent.” They don’t tell us what didn’t work.
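Minsky's complaint about reporting a single accuracy number can be made concrete. Here's a minimal sketch, using entirely made-up toy data (not from any real character-recognition experiment), of how a per-error breakdown reveals "what didn't work" in a way that an aggregate figure like "79 percent" never can:

```python
from collections import Counter

# Hypothetical (true label, predicted label) pairs from an imagined
# character recognizer -- illustrative data only.
results = [
    ("a", "a"), ("a", "a"), ("a", "o"),
    ("o", "o"), ("o", "a"), ("o", "o"),
    ("l", "l"), ("l", "1"), ("l", "l"), ("l", "l"),
]

# The headline number: overall accuracy.
accuracy = sum(t == p for t, p in results) / len(results)

# The part that usually goes unreported: a tally of each kind of mistake.
errors = Counter((t, p) for t, p in results if t != p)

print(f"overall accuracy: {accuracy:.0%}")
for (true, pred), n in errors.most_common():
    print(f"  {true!r} misread as {pred!r}: {n} time(s)")
```

With real data this breakdown would show, for instance, whether the misses are spread thinly across all characters or concentrated in a few confusable pairs, which is exactly the kind of information a theory of recognition would need to be tested against.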