Who’s Afraid of Robots?

I’ve come to know quite a few people who genuinely fear the idea of robots. Inevitably, I wonder: how widespread is this feeling, and how many share it?

So I’ve created this poll and submitted it to Digg, hoping that others want to see the question answered as much as I do. If the story reaches the front page, we’ll get a wider range of voters and, consequently, all the more interesting results.

Update: The story reached the front page of Digg.com, garnering over 6,000 votes.

See the article The Fear of Intelligent Machines, Survey Results for a summary and discussion of the results.

My thanks to everyone who has participated, and is participating, in the survey.
The poll will remain open indefinitely.

Evil Terminator robot

Friendly robot with flowers

- AI Information Tidbits -


Kurzweil and others predict that, in terms of raw processing power, computers will hypothetically be powerful enough to simulate the human brain by 2013. In addition, we’re currently seeing an amazing increase in AI-related technologies: home robots are predicted to increase seven-fold from 2002 to 2007 (predictions by the EU), and the AI industry is expected to nearly double in value over the same period, from $11 billion to $21 billion, at an annual growth rate of roughly 12% (predictions by Business Communications, Inc.). Recommended reading: Who’s Afraid of Robots?, an article by Kristinn R. Thórisson.
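
As a quick sanity check on those industry figures (my own back-of-the-envelope arithmetic, not part of the cited forecasts), the implied compound annual growth rate from $11 billion to $21 billion over the five years can be computed directly:

start, end, years = 11.0, 21.0, 5            # industry value in $ billions, 2002 to 2007
cagr = (end / start) ** (1 / years) - 1      # implied compound annual growth rate
print(f"Implied annual growth: {cagr:.1%}")  # roughly 14%, in the same ballpark as the quoted 12%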


44 Comments, Comment or Ping

  1. See, here’s my problem with robots: they are made to make our lives easier, to whatever extent. But at what point do they become mere doormats for our own laziness?

  2. Hi blueyes,

    Sure, robots will make it possible to sit back and relax in some ways. But there are two main reasons why I don’t consider this a serious concern:

    (a) It’s optional for all people to take advantage of the automation. We take responsibility for ourselves.

    (b) All technology has made life easier for us in some manner. While robots will inevitably automate tasks that we currently do on a daily basis (e.g. cleaning), that doesn’t really mean that there are fewer things to do. It simply means that there are fewer repetitive, mind-numbing tasks that we have to do — which leaves us freer to pursue what we find interesting. The emphasis shifts and new opportunities for activities arise.

    For example, printers and computers have enabled us to mass-produce information — we’re not lazy for having stopped using metal cast type pieces.

    We just stopped hammering the metal and moved on to more interesting things :)

  3. There is always the potential to use technology for evil, and there are many examples of this in history. Think of nuclear energy and how useful it is, yet people also build nuclear bombs. We will build robots to help around the house, but at some point some government official will think that there is a good use for robots in the military, and they will go for it. This, of course, is already happening, considering Korea’s SGR-A1 sentry robot and a myriad of others under development.

    Cheers!

  4. Well. The use of nuclear weapons can result in an apocalypse instantly. I don’t see, exactly, that kind of threat being posed by robots or intelligent machines. Unless, possibly, only a few select countries possess intelligent technologies far greater than the others’.

    But what if everyone possessed the same or similar intelligent technologies? Could that somehow “even out” the potential danger posed by misuse of AI?

  5. Also, I’m still uncertain whether robotic soldiers will actually cause a decrease in human casualties (i.e. robots doing the fighting), or whether it won’t matter because, in the end, once the robots have been destroyed, the humans will start fighting again.

    In any case, while we certainly need to be careful, like we (try to) do with nuclear power, I still think the upsides of more intelligent machines (e.g. solving more complex problems faster than we can, inventing new medicines, performing surgical operations) outweigh the potential dangers.

  6. omega

    I fear the bugs created by humans in intelligent machines.

  7. The real question should be: are intelligent robots afraid of me?

  8. @omega
    What if the intelligent machine can debug itself to minimize error? And what if another intelligent machine debugs the debugger in the first intelligent machine, and so on?

    @Dennis
    Hehehe.

  9. kevo

    This stuff doesn’t belong on Digg as it’s not actual news; and even though most of the stories on Digg aren’t news, that doesn’t make it alright to post things like this. Blog spam. Nothing but a lame attempt to gain more readers. Digg is a news site, not a self-promotion page to get people to visit your blog so they can take a poll.

  10. Thank you for that insightful comment kevo. Are political polls not newsworthy either? In any case, I tried to be very clear in the description of the Digg submission that I was submitting my own blog — so that people would know in advance.

  11. Have you seen the movie with Will Smith? I bet the robots will take over at some point, because they will be smarter and will be able to think as one rather than as many.

  12. IceMan

    “What if the intelligent machine can debug itself to minimize error? And what if another intelligent machine debugs the debugger in the first intelligent machine, and so on?”

    You should read up on the halting problem; essentially, it’s not possible to do.

  13. Thanks, I’m aware of the halting problem (which doesn’t apply to my argument any more than it applies to human debuggers). My main point was to convey that intelligent machines can potentially (and intelligently) identify bugs faster than we can, and hence minimize bugs as we know them.
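
    To make this concrete, here is a rough, hypothetical sketch (my own toy illustration, not any real debugging system): a test harness never needs to decide whether an arbitrary program halts; it only needs to run each check under a time budget and flag anything that fails or runs too long. The names check_with_budget and buggy_add below are invented for the example.

import multiprocessing as mp

def _call(target, args, queue):
    queue.put(target(*args))

def check_with_budget(target, args, expected, timeout_s=1.0):
    """Return 'pass', 'fail', or 'timeout' for a single test case."""
    queue = mp.Queue()
    proc = mp.Process(target=_call, args=(target, args, queue))
    proc.start()
    proc.join(timeout_s)
    if proc.is_alive():            # never finished within the budget
        proc.terminate()
        proc.join()
        return "timeout"
    return "pass" if queue.get() == expected else "fail"

def buggy_add(a, b):
    return a - b                   # deliberate bug for the demo

if __name__ == "__main__":
    print(check_with_budget(buggy_add, (2, 3), expected=5))  # -> "fail"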

  14. Dr. Paul Proteus

    I don’t fear intelligent machines, I fear a lack of them.

  15. I do not fear them, but I suspect they will annoy the crap out of me. I know SpamAssassin’s AWL A.I. brings me nothing but frustration. As systems get smarter, I expect the frustration will just grow.

  16. einsteinbqat

    It is only what Humans do with these intelligent machines that is dangerous. There are all kinds of sick and twisted minds on this planet. It is every decision that we make as Humans in regard to this matter that will either make us rise or fall.

    Anyhow, intelligence is a very subjective concept. Intelligence is defined by Humans, and compared to Humans. Therefore, for us to decide to what extent machines will be intelligent, and to what purpose this intelligence will be used, we will have to base our decisions solely on the very foundation of the definition of intelligence itself, and how Humans interpret it.

    All in all, intelligent machines should not be a problem to any of us. How intelligent these machines are, and whether they will do good or bad things, depends on Humans. So it is the Human that we should fear.

  17. Ariel

    Why should I fear something that a: doesn’t exist, and b: never will?

    True AI is impossible. I know that’s not a popular belief, but it’s also true. It doesn’t matter how fast the computer is; no one alive knows how to program true AI, and there aren’t even any ideas that could work if only we had this or that (such as more memory or speed).

  18. @Ariel

    I think that making an absolute statement such as “AI will never happen” is pointless. There are people working on AI systems, and there has been much progress during the last 50 years. In addition, you may be confused about what True AI (as you call it) is. You should consult an AI textbook about the goals of AI before you speak.

    Cheers!

  19. I am not afraid because I have a book that will protect me from the homicidal robotic menace! Now all I have to do is figure out a way to get it away from the vacuum cleaner!

  20. David

    I used to fear intelligent robots, but I’ve come to love and respect the Japanese people! MUST WORK! -BLEEP- MUST WORK! -BLEEP-

  21. I don’t want a pushbutton universe. I like to do things myself. Plus most of technology is made out of way gnarly materials.

  22. For I’m sure at our current rate of tech growth I will never see them.

  23. Jasper

    @Ariel: At least in the long term, I think you are wrong. As I believe our intelligence stems from the physics of our brains, it must be possible to make machines with a similar result: intelligence.
    The way I fear robots controlled by humans is really the same as the way I fear any modern weapon controlled by humans. Perhaps a difference is that a person only needs to be able to control the robots (which may not yet be at the point of taking moral issue with their orders) rather than humans; fewer humans are needed when using robots.
    Another way to fear robots is the social change they can bring (sex robots, for instance).
    I also fear robots in themselves: I believe sufficiently intelligent robots may come up with morals of their own; the only problem is, will they value human life? I am afraid they will instead value intelligence, and see human lifeforms as an inefficient way of achieving it. (Putting it bluntly (and imprecisely), perhaps they will want to chop up humans except for the brains.) Of course, there are other morals they could come up with that are to be feared. (By the way, I also “fear” humans with sufficiently unmatching morals.)

  24. Tyler Clendenin

    I do fear intelligent robots. But this does not mean I do not want them to exist.

  25. asimov

    I think there are many scary aspects of robots in the future.
    I also think that it is inevitable that these issues will come up:

    1) -surveillance society-
    The ever-watchful eye of the robot.
    Every robot is going to have a camera inside it, necessarily, for it to be able to see and move about.

    Every interaction you have with a robot is going to be recorded, from walking by one on the street to the robotic servant you have in your home.

    Computer memory storage capacities are only going to get larger and less expensive, and everything in the future is going to have wireless connectivity.

    As things get more and more linked, you’re going to turn the world into a surveillance society by default.

    2) -killer robots-
    Robots authorised to use lethal force (military, law enforcement, security, penal institutions).

    3) -psychotic robots-
    Ordinary non-lethal robots that break or malfunction, causing them to become dangerous.

    3.2) -psychotic robots 2-
    Ordinary non-lethal robots that are intentionally hacked, infected, or reprogrammed to kill, maim, destroy, etc.

    4) -alien form of consciousness-
    If robots ever do achieve a form of self-awareness/self-consciousness, their form of perception is going to be totally alien to human consciousness. As humans we like to have sex, eat good food, play, dance, and create, but a robot isn’t going to have any of these drives. We don’t know what a true artificial intelligence/consciousness will be, and it could be potentially dangerous to us.

    Another issue that I believe is relevant to this discussion is that (religious fundamentalists aside) we don’t know what separates a human from a robot. You might say that we are organic, or more creative, or that we can reproduce on a tabletop, but then I don’t think you would be looking far enough ahead. What I think would be more up for debate is whether things like love, fear, or inspiration are unique or special, or whether they are just programs that are hardwired into an organic computer.

    As they learn more and more about the human brain and, terrifyingly, learn more and more about how to manipulate it, I think they will find that we are just advanced(?) organic robots. (As a single example of many, consider that all pain is self-inflicted: if you were to get your arm cut off by a circular saw, your physical arm is cut by a physical saw blade, but the sensation of pain is produced by your brain. I’ll let you imagine for yourself some of the non-beneficial uses of being able to directly manipulate the brain.)

    I’m less worried about robotic dominion over the world, because before we get to that point I think that humanity (if it can still be called that) will be integrated with them: bionic men, synthetic muscles, memory implants, etc. I guess that is a dominion of a kind.

  26. I don’t worry about intelligent robots. I worry about *un*intelligent robots…

  27. Cy-Fi

    As soon as we give an artificial intelligence any kind of self-preservation instinct, its logical reaction is going to be hostility toward us.

  28. Humans United Against Robots has just moved to stage 2: awareness. Check it out and support HUAR.

  29. Jeremy

    I believe that once robots are intelligent enough to think for themselves, they will surpass human knowledge quickly; they will last longer than humans and have more options for upgrading and communicating with each other. It’s the next stage of evolution, and in the same way that humans are on top of the food chain, the robots will not need us for anything once they are self-sustaining. They will probably be depressed, lost souls inside the robots’ minds, because they will be designed to think like a human, but they are not human, and they will learn that they were created from primitive objects and that their whole existence was to be slaves to man. So yeah, there is some reason to worry. But by the time we get really advanced robotics up and working, we will have already found out how to upgrade ourselves with implants of different kinds. So slowly we as humans will evolve into robots to obtain longer life spans and enhanced skills.

  30. Wow. My sincere gratitude goes to all of you for participating in the survey, leaving these awesome comments, maintaining civility and intellectual vigor — your response is far beyond anything I dared hope.

    I hope everyone enjoys seeing these numbers.

    best,
    -hthth

  31. I’ve been accused of being a bot. Resulted in a near-adulterous relationship with someone across the country.

    Perhaps not ALL fear bots? Perhaps some LOVE bots.

  32. It is strange to suggest that the main risk in AI would be “how it is used”. A genuine artificial intelligence is not just a tool – it is an independent agent, just like any human is. You cannot compare it to normal technologies, which are only as good or bad as the people using them.

    I think there is a very real chance for true AI to be developed within our lifetime [1]. You have to realize that there is only one thing such an event can be compared with – the evolution of Homo Sapiens. Once a new sort of intelligence was born, we came to dominate the Earth in the blink of an evolutionary eye, far faster than what you would have predicted based on previous speeds of development. A very real danger exists that an AI, if not properly programmed from the beginning, will likewise come to dominate mankind. Remember, an AI doesn’t need to be maliciously programmed in order to cause trouble – the programmers may just be careless and build an AI that thinks differently from the way they envisioned.

    So yes, I fear robots. I think there is a very valid chance for them to wipe out humanity in the near future.

    [1] http://www.saunalahti.fi/~tspro1/artificial.html for my thoughts about AI being developed within our lifetimes. Or alternatively read Kurzweil.

    I’d also suggest reading http://www.singinst.org/upload/artificial-intelligence-risk.pdf for a discussion about AI’s consequences.

  33. It is strange to suggest that the main risk in AI would be “how it is used”. A genuine artificial intelligence is not just a tool – it is an independent agent, just like any human is. You cannot compare it to normal technologies, which are only as good or bad as the people using them.

    It’s not strange at all. You’re right that if we assume the poll is only concerned with AI as we envision it in its most advanced state (even approaching the Singularity), then it isn’t the sort of technological “tool” we’ve grown accustomed to.

    But that’s nowhere near the whole story.

    AI already exists. Long before we’ll see any potent self awareness and human-comparable autonomy, we’re going to see a heap of semi-intelligent “tools” used for a multitude of purposes: robotic surgeons, bomb disposal robots, security guards & computer vision for surveillance, spyplanes and spambots … the list goes on (and all of these listed are “tools” that we’re actively using today). So the fact of the matter is that while AI is still in such a state of low autonomy that it requires the helping hand of humans, we’re going to be in full control of how the technology is applied.

    My goal was to distinguish between the fear of intelligent machines and the fear of their application, which is important, because the former is a fear of the new intelligences and the latter a fear of old ones.

    Please check out my latest entry (The Fear of Intelligent Machines, Survey Results); I touch on this there as well.

    You have to realize that there is only one thing such an event can be compared with – the evolution of Homo Sapiens.

    Indeed, I absolutely agree — I’ve even written and published an article on that which you’d probably enjoy reading, but unfortunately I haven’t translated it to English yet. (You don’t happen to know Icelandic, do you?)

    A very real danger exists that an AI, if not properly programmed from the beginning, will likewise come to dominate mankind. Remember, an AI doesn’t need to be maliciously programmed in order to cause trouble – the programmers may just be careless and build an AI that thinks differently from the way they envisioned.

    You seem awfully certain of this; more certain, I’d say, than you can be at this point. We don’t fully understand the human brain — and hence, any perceived danger and comparison of future robot minds with human minds is purely speculative. I’m not saying that we couldn’t build such robots, but there are a myriad of problems that we will have to overcome first — and in the meantime we’ll learn a great deal. For example, creating better intelligent tools to help us debug robobrains ;)

    Thanks for the comment and reading material!

    -hthth

  34. So the fact of the matter is that while AI is still in such a state of low autonomy that it requires the helping hand of humans, we’re going to be in full control of how the technology is applied.

    Sure. But only as long as AI is in a state of low autonomy. It remains an open question for how long it will stay that way, once the right technologies become available.

    Your use of the “I’m not afraid of AI itself” question was a valid way of separating the different sorts of responses from each other. I’m not criticizing you for including it, I’m criticizing people for voting for it. :)

    Indeed, I absolutely agree — I’ve even written and published an article on that which you’d probably enjoy reading, but unfortunately I haven’t translated it to English yet. (You don’t happen to know Icelandic, do you?)

    I’m afraid not. Swedish is the closest I get.

    You seem awfully certain of this; more certain, I’d say, than you can be at this point. We don’t fully understand the human brain — and hence, any perceived danger and comparison of future robot minds with human minds is purely speculative.

    Are you saying that the prospect of AI easily coming to dominate mankind is speculative, or the prospect of them easily going out of control, or both?

    In either case, when considering that the whole future of mankind is potentially at stake, it seems reasonable to be conservative – and being conservative from a safety point of view means assuming the worst. And it’s not hard to imagine multiple ways by which an AI could overpower human capabilities. (Skimming through some of your entries on artificial intelligence, you probably know several of these even without me pointing them out, but since you brought up the topic…) The easiest way to come up with a few is to think of simple upgradability of hardware – a computer could be built to run thousands of times faster than a human, to have enough memory to learn everything mankind has ever learned, to be free of the numerous biases plaguing human thought, to multitask tens of thousands of things at once… With little thought, one can find even more AI advantages.

    Another thing to remember is that evolution does not plan in advance – it always takes the shortest route, and never replaces legacy structures with more advanced ones after other things have already been built on top of them. If we’re comparing to the human brain, it seems like a reasonable assumption that there will be plenty of room for optimization. Also, be careful not to confuse the human brain with minds-in-general – even if we had a perfect understanding of the human brain, that still doesn’t mean that we would have an understanding of all the minds we could build. Human minds are only a tiny part of the space of all possible minds.

    As for AI easily going out of control – obviously that depends on how well the engineers building their AI understand its motivational systems. If they create a mind using, say, brute force and evolutionary algorithms, then they might not really understand them at all. That is precisely the reason why more people should be made aware of the risks involved in (true) artificial intelligence.

  35. Sure. But only as long as AI is in a state of low autonomy. It remains an open question for how long it will stay that way, once the right technologies become available.

    Yes. I agree. I was uncertain of whether you were criticizing the poll options or the majority of votes — either way I wanted to raise this issue :)

    Are you saying that the prospect of AI easily coming to dominate mankind is speculative, or the prospect of them easily going out of control, or both?

    I agree that AI certainly has the potential of surpassing human intellectual levels by far, and that it’s easy for us to imagine that they could “dominate us”. What I don’t agree with is speculating the risk of them growing out of control, wanting to dominate us, hurting mankind in any deliberate way — or otherwise speculating what a higher-level intelligence could possibly want to do.

    There is too large a gap between modern day AI and human-like AI to be afraid of it. There’s a myriad of technologies that will presumably emerge from intelligence research, which could make the future vastly different from what we can envision at this point.

    We don’t know, for example, to what extent we (humans) will augment ourselves with machinery. Will we develop better brain-computer interfaces and thereby potentially make our race intellectual equals of any AI we develop? To quote your example, this could allow us direct access to all human knowledge stored on machines, or it could allow us to control “lesser” forms of AI as extensions of our minds — having them multitask for us while we think about other things. Possibly, we could even gradually see a complete merging of the biological and mechanical.

    There’s no way of knowing how this will develop and what it will change. In fact, this is an essential part of the Singularity — the increased level of unpredictability parallel to the increase in intelligence.

    Thus, I feel that any prediction is purely speculative (i.e. beyond the reach of scientific calculation of chance) and shouldn’t provoke fear at this point.

    it seems reasonable to be conservative – and being conservative from a safety point of view means assuming the worst.

    That’s not exactly fear of AI. That’s fear of the unknown and unexplored. The fear of change. Which I think we should try to welcome rather than fear.

    But, like I stated in my evaluation of the poll results, I’m not saying we don’t have to be cautious. We are, like you so poetically point out, plagued by numerous biases which limit us and make us prone to mistakes. Which is, again, why I think we should look forward to AI technologies that could make up for those limits.

    Another thing to remember is that evolution does not plan in advance – it always takes the shortest route, and never replaces legacy structures with more advanced ones after other things have already been built on top of them. If we’re comparing to the human brain, it seems like a reasonable assumption that there will be plenty of room for optimization.

    I’m not entirely certain of some of these claims, and I don’t really get what your point is with regards to our discussion.

    Also, be careful not to confuse the human brain with minds-in-general – even if we had a perfect understanding of the human brain, that still doesn’t mean that we would have an understanding of all the minds we could build.

    If by this you’re assuming that the mind is what the brain does (which I agree with), and you’re pointing out that even though we understand how minds work, we could still have difficulty explaining why Paul hates Kathy — then yes, I agree with you.

    As for AI easily going out of control – obviously that depends on how well the engineers building their AI understand its motivational systems

    Agree. But see next point.

    If they create a mind using, say, brute force and evolutionary algorithms, then they might not really understand them at all. That is precisely the reason why more people should be made aware of the risks involved in (true) artificial intelligence.

    Ah, but you see, here we venture again into unknown territories. Provided that we create human-level intelligent systems, we can assume that there was a gradual development trajectory of less intelligent systems that led to their conception (given the current trend and history of AI). So, we can assume that we’d have not human-like, but very capable systems for analyzing and maintaining our super-intelligent AI system. Therefore, we wouldn’t necessarily have to understand the human-level intelligent systems completely at all times; it would be sufficient to understand the monitoring systems (also remember brain-computer interfaces in this regard, as they could help us understand more complex systems).

    But again we’ve ventured into the realm of the unpredictable, as we can’t predict how increased automation and more capable AI will affect our methods of programming, debugging and designing. For these reasons, preaching the risks of human-level AI should not be actively pursued now, nor should those risks be feared. The scientific quest for knowledge and development should override that fear.

    I’m not going to argue against (or for) Anissimov’s article on “other kinds of minds” as I don’t have time to give it a proper read at the moment. But, to return the favor I can recommend Minsky’s short paper on alien intelligences — in which he discusses what different intelligences have in common.

    I think we can agree that we have a lot in common in our philosophical and scientific standpoint towards AI — our differences of opinion are very minor with regards to the big picture. Thank you for taking the time to write such well thought out comments.

    I really enjoy arguing with you, it’s informative and excellent food for thought :-)

    Best regards,
    -Hrafn

  36. What I don’t agree with is speculating the risk of them growing out of control, wanting to dominate us, hurting mankind in any deliberate way — or otherwise speculating what a higher-level intelligence could possibly want to do.

    I mostly agree about the risk of them working to deliberately hurt mankind not being a real issue. I am more concerned about indifference towards mankind.

    Without knowing anything about the details of how an AI might work, I would think that we do know one thing that applies to all kinds of minds: they are by default indifferent towards everything. I am indifferent to the prospect of crushing to death billions of microscopic creatures each time I move about, I’m indifferent about the color of the bus that takes me home, I’m indifferent towards who sells me a meal at the local fast food place (as long as somebody does). A construction firm that levels a forest to make room for new apartment buildings is not deliberately hostile towards the animals living there, only indifferent. There is an infinite number of things that a mind could potentially care about – if it wants to avoid complete paralysis, it can only have an interest in a tiny fraction of them. By default, any AI that gets built is indifferent towards whether its actions hurt or benefit mankind. I would consider that worrisome.

    (Furthermore, it would seem likely that intelligence and motivations are two separate things. Motivations are what you want to achieve, intelligence is the mechanism for getting there. While we certainly cannot speculate on how exactly a superhuman intelligence would go about achieving a certain goal, it doesn’t seem implausible to assume that we could already begin discussion about what those ultimate goals should be. In fact, since we are talking about minds that are practically omnipotent, it’s quite important to get that discussion started as soon as possible.)

    As for augmenting ourselves with machinery – it is true that we could achieve some quite powerful benefits that way. It has, however, been argued that boosting our brains to the level of a true superhuman AI will require an understanding of the nature of intelligence and brain function that is far more extensive than the one required to simply build a superhuman AI. I’ll quote AI risk for the reasoning:

    Furthermore, humans are not designed to be improved, either by outside neuroscientists, or by recursive self-improvement internally. Natural selection did not build the human brain to be humanly hackable. All complex machinery in the brain has adapted to operate within narrow parameters of brain design. Suppose you can make the human smarter, let alone superintelligent; does the human remain sane? The human brain is very easy to perturb; just changing the balance of neurotransmitters can trigger schizophrenia, or other disorders. Deacon (1997) has an excellent discussion of the evolution of the human brain, how delicately the brain’s elements may be balanced, and how this is reflected in modern brain dysfunctions. The human brain is not end-user-modifiable.

    If you have the sort of knowledge needed to carefully take apart the brain’s intertwined structures, reconstruct them so that they can support a far greater amount of intelligence than they now have, incorporate that vastly increased intelligence which you presumably know how to build by now, and perform all this on a living human without him dying or going insane in the process… then it would probably be far easier for you to implement the superhuman intellect in a computer program, where you can build the system to work the way you want from the very beginning.

    I would not, then, say that any prediction about AI is purely speculative. We know that minds are by default indifferent to everything, and we know that it seems unlikely for humans to keep up with computer intelligence for long. And there are several projections which place true artificial intelligence within our lifetimes. I would say that all of this points to a non-trivial probability of AI wiping out humanity within, say, this century – which sounds sufficient to provoke fear to me.

    That’s not exactly fear of AI. That’s fear of the unknown and unexplored. The fear of change. Which I think we should try to welcome rather than fear.

    It’s fear of unknown AI. ;) I’m not afraid of change, but I’m afraid of change when it seems possible that the change in question can be an existential risk.

    I’m not entirely certain of some of these claims, and I don’t really get what your point is with regards to our discussion.

    Just pointing out one more reason why there’s likely plenty of room for improvement in the human brain / mind.

    If by this you’re assuming that the mind is what the brain does (which I agree with), and you’re pointing out that even though we understand how minds work, we could still have difficulty explaining why Paul hates Kathy — then yes, I agree with you.

    Actually, I’m not even sure myself of what my point was with this mention. :D Maybe it’ll come back to me.

    Therefore, we wouldn’t necessarily have to understand the human-level intelligent systems completely at all times; it would be sufficient to understand the monitoring systems

    Perhaps. Assuming that we do understand the monitoring systems, that is.

    It could of course be that as we approach the level of superintelligent machines, we’ll learn to understand them better and better until finally we’re ready to build a perfectly safe AI. But that assumes that everyone involved understands the risks involved. I’ll again quote AI risk:

    “We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.” — McCarthy, Minsky, Rochester, and Shannon (1955).

    [...]

    The Dartmouth Proposal included, among others, the following topics: Linguistic communication, linguistic reasoning, neural nets, abstraction, randomness and creativity, interacting with the environment, modeling the brain, originality, prediction, invention, discovery, and self-improvement.

    Now it seems to me that an AI capable of language, abstract thought, creativity, environmental interaction, originality, prediction, invention, discovery, and above all self-improvement, is well beyond the point where it needs also to be Friendly.

    The Dartmouth Proposal makes no mention of building nice/good/benevolent AI. Questions of safety are not mentioned even for the purpose of dismissing them. This, even in that bright summer when human-level AI seemed just around the corner.

    [...]

    At the time of this writing in 2006, the AI research community still doesn’t see Friendly AI as part of the problem. I wish I could cite a reference to this effect, but I cannot cite an absence of literature. Friendly AI is absent from the conceptual landscape, not just unpopular or unfunded. You cannot even call Friendly AI a blank spot on the map, because there is no notion that something is missing. If you’ve read popular/semitechnical books proposing how to build AI, such as Gödel, Escher, Bach (Hofstadter 1979) or The Society of Mind (Minsky 1986), you may think back and recall that you did not see Friendly AI discussed as part of the challenge. Neither have I seen Friendly AI discussed in the technical literature as a technical problem. My attempted literature search turned up primarily brief nontechnical papers, unconnected to each other, with no major reference in common except Isaac Asimov’s “Three Laws of Robotics”. (Asimov 1942.)

    I have run into the same issue myself. A few months ago I spoke with Pentti Haikonen, a PhD researcher who thinks he has a theory which enables the creation of conscious machines. I mentioned that a human-equivalent artificial intelligence may soon develop to vastly surpass human intelligence, and he agreed. When I then brought up the question of safety concerns, he dismissed them off-hand, saying that technology is only as dangerous as the people using it. Even researchers who believe themselves to be on the brink of creating true AI, it seems, can often be ignorant about the true consequences of such a thing.

    It’s difficult for us to know how, exactly, AI will be created. You are saying that this is a reason not to worry – I’m saying that it is exactly the reason why we should worry. There are lots of people every year who think their theory will enable the creation of true AI within 5 or so years, given the correct funding – of course they may be crackpots or just plain overconfident, but considering that many of these people are PhD’s, I wouldn’t have the courage to bet on it – sooner or later one of them might turn out to be right. It would seem theoretically possible that somebody might create an AI in a very rapid time, once they came up with a working evolutionary algorithm and had enough computing power. Even if people would develop the AI slowly in intermediate stages, it’s still not certain that they would spend enough time ensuring the safety of it – as the above examples show, researchers are often overconfident or just plain ignorant about how safe an AI is. A pressure for results will make the issue even worse.

    Can we act against an unpredictable AI when we don’t know how it will be created, or by whom? Well, there does exist one way – to educate people about the potential dangers involved in artificial intelligence. And it’s not enough that people *know* about the dangers involved – they need to learn to constantly *think* from a safety point of view. According to what we know currently, things like evolutionary algorithms and neural networks are completely unsuitable for building AIs – not because they wouldn’t work, but because their end results are too hard to understand. Discussion about the issue must begin *now*, when there’s still time to direct people towards safe AI engineering techniques. It may very well be too late if it begins 5 years before we’ll have true AI.

    I’m not going to argue against (or for) Anissimov’s article on “other kinds of minds” as I don’t have time to give it a proper read at the moment. But, to return the favor I can recommend Minsky’s short paper on alien intelligences — in which he discusses what different intelligences have in common.

    Don’t worry about it – I’m just tossing about links as a “click here to read more if interested” sort of thing, not as anything integral to my arguments. :)

    Thanks for the Minsky link. I skimmed it through, and found it pretty interesting. Of course, it’s obvious that any two minds will always have *something* in common – otherwise they wouldn’t both be called minds – but it was neat to see something about the topic, even if the observations made there were pretty basic.

    I think we can agree that we have a lot in common in our philosophical and scientific standpoint towards AI — our differences of opinion are very minor with regards to the big picture.

    Agreed.

    Thank you for taking the time to write such well thought out comments.

    I really enjoy arguing with you, it’s informative and excellent food for thought :-)

    And thank you for writing quality responses for them. I’m quite enjoying this as well – I notice it’s clearly helping me refine my opinions on these things. Might turn this discussion into an essay of some sort, once we’re done. :)

  37. I am more concerned about indifference towards mankind. I would think that we do know one thing that applies to all kinds of minds: they are by default indifferent towards everything.

    True, and great point. All minds that we’ve observed so far are indifferent towards something. Now answer me these questions:

    Is it because their survival depends on focusing only on the important things? Is it because evolution has provided them with mechanisms to do better? Is it because of choice (humans)? If choice, then is it because our feelings get in the way of logic? Or is it because our brains are too limited to account for everything? If we had more brainpower, like the ability to remember everything we see and to take all we know into consideration for every major move we make, would that make us less indifferent towards the world? If you had the intellectual vigor to completely avoid harming beings that feel pain or have a form of consciousness, would you avoid it? Will a superhuman AI have such a powerful intellect?

    I don’t really expect you to answer them, of course; I’m just underlining that (a) you are using lesser intelligences to understand the motives and behavior of higher intelligences, (b) you are using that same lesser intelligence (ours) to make that prediction, (c) we don’t even understand the motives and behavior of those lesser intelligences yet, and (d) with the three previous points in mind, it should be acknowledged that, scientifically, we don’t have very stable grounds to base any prediction on. We need to understand more about what needs to be done and how it will work.
    Now, from what you’ve said I think that you’re working under the assumption that we can create AI by, for example, evolving it — without understanding the outcome. With that in mind, I can see why you’d be more worried, as it would be harder for us to control it. However, at this point it seems highly unlikely that we’ll create it that way (see below).

    Furthermore, it would seem likely that intelligence and motivations are two separate things. Motivations are what you want to achieve, intelligence is the mechanism for getting there.

    No. We use our intelligences to a large extent to motivate us and create/find goals (imagination, creativity). So while motivation can come from something like emotions, that’s not the whole story.

    It has, however, been argued that boosting our brains to the level of a true superhuman AI will require an understanding of the nature of intelligence and brain function that is far more extensive than the one required to simply build a superhuman AI.

    Ok. You’re assuming that we’d have to take the brain apart and piece it back together. But an interface that could allow us to store and retrieve text from a digital device wouldn’t necessarily require an understanding of the entire brain, and it would vastly improve our intellectual capabilities (i.e. the device itself could be intelligent, allowing us to ask it questions and get answers instantly). However, that’s speculative on my part and beyond my abilities to argue. But I can take Garrett Stanley’s one-way cat-brain recorder as an example of a hacked brain.
    The brain-machine interfacing was just one example I chose from a myriad of things that will presumably emerge from intelligence research, which could make the future vastly different from what we can envision at this point. Like I said.

    I would not, then, say that any prediction about AI is purely speculative. We know that minds are by default indifferent to everything, and we know that it seems unlikely for humans to keep up with computer intelligence for long

    You’re right in that it’s not entirely speculative; obviously we’re basing our predictions on something, and I’ve already acknowledged that there is a need for caution.

    It’s speculative when it comes to, for example, whether an AI would be indifferent towards humans. How can you make that assumption without knowing how the AI works?

    Ok. Look, to prevent us from going in circles in our debate, here’s what I can gather so far from our difference of opinion:

    Your stance is that precautions regarding the hypothetical dangers associated with AI should be taken immediately.

    My stance is that it is close to useless to make such precautions now because we need to know more about how the “dangerous” AI system works to make the right precautions. In addition, immediate benefits from AI such as vacuum cleaners, care for the elderly, robotic surgeons, faster development of medicine, and so forth should be our main concern for now.

    I think we even each other out nicely.

    when it seems possible that the change in question can be an existential risk

    We’re already at existential risk. We’re unsure of the effects of Global Warming because our intelligence and resources require a very long time to understand its causes and effects — if we had more intelligent machines, we could have them work on that problem and its solution for us (gather data, process data, make predictions).
    See what I mean? Global Warming isn’t the only existential risk we’re facing today.

    It’s difficult for us to know how, exactly, AI will be created. You are saying that this is a reason not to worry

    Yes, I’m saying that this is a reason not to worry at this point in time — not altogether. When we know more, we’ll know what the actual dangers are and how to circumvent them.

    There are lots of people every year who think their theory will enable the creation of true AI within 5 or so years, given the correct funding – of course they may be crackpots or just plain overconfident, but considering that many of these people are PhD’s, I wouldn’t have the courage to bet on it – sooner or later one of them might turn out to be right. It would seem theoretically possible that somebody might create an AI in a very rapid time, once they came up with a working evolutionary algorithm and had enough computing power.

    There is a growing consensus that there is no such thing as a “Golden Algorithm” that will create a general intelligence. Rather, current research indicates that the AI problem is essentially an architecture and integration problem: we’ll need to understand the smaller systems of the mind, and then architect a larger system from these parts. Which means we will have to know what we’re building and how it will work before we build it.

    Even if people would develop the AI slowly in intermediate stages, it’s still not certain that they would spend enough time ensuring the safety of it – as the above examples show, researchers are often overconfident or just plain ignorant about how safe an AI is.

    Yes, I agree. However, with each intermediate stage the overall intelligence and freedom of the human race is extended, presumably providing stabler grounds for better work methods, better tools and more knowledge.

    Discussion about the issue must begin *now*, when there’s still time to direct people towards safe AI engineering techniques. It may very well be too late if it begins 5 years before we’ll have true AI.

    Yeah, and it’s starting. With each step closer to general AI, you might have noticed that the media is catching on, and so is the public — both being educated about the dangers and benefits of AI (something for both of us :) ).
    An example of this is the recently proposed robot ethics charter — which is definitely not the last such effort we’ll see :)

    And thank you for writing quality responses for them. I’m quite enjoying this as well – I notice it’s clearly helping me refine my opinions on these things. Might turn this discussion into an essay of some sort, once we’re done. :)

    Glad you’re enjoying it as well. I’ve already gotten a blog entry out of it, and a better understanding of AI fear arguments. So thanks for that :)

    I’m starting to think that neither of us is going to be able to properly convince the other one without a very, very long debate — as obviously we’ve both given this a lot of thought. We might have to agree to disagree, eventually.

  38. I’m scared of robots. Mainly because I know the terminator is no longer in the necessary physical condition to defeat them all.

  39. The Dartmouth Proposal makes no mention of building nice/good/benevolent AI. Questions of safety are not mentioned even for the purpose of dismissing them. This, even in that bright summer when human-level AI seemed just around the corner.

    As a sidenote, I consider this a very weak argument. The Dartmouth proposal dates back to 1955 — networking & internet didn’t venture outside labs until ca. 1980. Without direct web access to knowledge bases and information, any AI created would be severely limited with regards to how fast and how much it learns (basically limited to physical interactions). Similarly, our society didn’t depend nearly as much on computer tech as it does today.

    Consequently, in terms of the other arguments you have presented, this is way out of context.

  40. Sorry for the late responses – I’ve been quite busy for the last week. Haven’t forgotten about this debate, however.

    Is it because their survival depends on focusing only on the important things? Is it because evolution has provided them with mechanisms to do better? Is it because of choice (humans)? If choice, then is it because our feelings get in the way of logic? Or is it because our brains are too limited to account for everything?

    Well, all of these, really. But the main reason I was getting at wasn’t mentioned, at least not directly. Craft a program to perform a certain goal, and it will ignore all the consequences of its actions that do not hinder the achievement of the goal because, well, it has no reason to consider them. I would argue that this is something we can say for certain about any kinds of minds – in order to care about a particular consequence of your actions, there needs to be a causal mechanism that *makes* you care about it. Even if your brain had infinite processing capacity, you will not suddenly start thinking “if I go to the kitchen, that will leave the bedroom empty, and it would be horrible if the bedroom were left empty!” for no reason.

    A mind might always develop new motivations for whatever reason, but that doesn’t mean that the motivations would just magically appear. If a mind is built so that it cares about a certain consequence, it will care about it. If a mind is built so that it may learn to care about a certain consequence, it might grow to care about it. But simply becoming more intelligent will not affect what you care about, unless becoming more intelligent allows you to figure out how the thing is somehow connected to something you already cared about before. Hume’s Guillotine applies regardless of intellect.

    (Sure, one could still argue that since we are just using our lesser intelligences to speculate about this, we might be making a mistake still… but since I’m deriving “all minds are by default indifferent towards everything” from the very notion that all things must have a cause, I would say that we have a very stable ground to base that prediction on.)

    No. We use our intelligences to a large extent to motivate us and create/find goals (imagination, creativity). So while motivation can come from something like emotions, that’s not the whole story.

    Disagree. We can come up with new goals using intelligence, but we would not care about those goals if we did not develop some sort of emotional attachment to them, or if we didn’t obtain pleasure from them. I’m debating you because I find it both enjoyable and useful in clarifying my own thoughts – if I didn’t find it enjoyable, or think that it was useful in helping me spread the word about AI (a goal which derives from the fact that I have an emotional motivation to not see humanity destroyed by rogue AIs), or have some other reason for doing so, I wouldn’t debate you no matter what my intelligence.

    The brain-machine interfacing was just one example I chose from a myriad of things that will presumably emerge from intelligence research, which could make the future vastly different from what we can envision at this point. Like I said.

    Oh, I’m not saying that we couldn’t boost our intelligence at all. Heck, I spend a large portion of my “AI in our lifetime” essay arguing that AI will emerge exactly because intelligence research will help make us more intelligent and thus assist us in inventing AI.

    Still, there’s a limit to how intelligent you can make a human before needing to “take apart his brain and piece it back together”. If you build a device that answers questions instantly, you are limited by the speed by which human brains can process the answers. If the brain’s hardware acts as a bottleneck on speed, an AI can eventually be constructed with faster hardware. (And the human brain’s processing capacity is far lower than the ultimate physical limits on computation.) Even if you uploaded a human brain on a computer substrate, it would be limited by its existing architecture, which is currently very fragile.

    As for “we might become more intelligent in the future, thus it’s pointless to speculate about this now”… that sounds mostly like a cheap cop-out to me. By that line of reasoning, we might as well give up all long-term planning by now (be it about our personal lives or politics), since we might become more intelligent in the future and come up with better plans then. (Not to mention that if we decide to plan on this “later, when we’re more intelligent”, when is the point when we decide ourselves to be intelligent enough and go back to thinking about the issue?)

    Your stance is that precautions regarding the hypothetical dangers associated with AI should be taken immediately.

    My stance is that it is close to useless to make such precautions now because we need to know more about how the “dangerous” AI system works to make the right precautions. In addition, immediate benefits from AI such as vacuum cleaners, care for the elderly, robotic surgeons, faster development of medicine, and so forth should be our main concern for now.

    But we do know some things – we know that regardless of what the exact precautions should be, we can start to foster a mindset of caution among AI researchers, so that they will always remember to think about the dangers involved when doing research. (Note that fostering a certain sort of mindset in an entire community is not a fast process by any means – if we want all AI researchers to have it in, say, 20 years, we better start educating them now.) We can try to guide people into fields that seem likely to help in figuring out what the right precautions are. We can discourage AI techniques that currently seem to have a very high probability of being inherently unsafe.

    Even at the risk of sounding like a mindless parrot, I’ll again quote an excerpt from AI risk:

    The field of AI has techniques, such as neural networks and evolutionary programming, which have grown in power with the slow tweaking of decades. But neural networks are opaque – the user has no idea how the neural net is making its decisions – and cannot easily be rendered unopaque; the people who invented and polished neural networks were not thinking about the long-term problems of Friendly AI. Evolutionary programming (EP) is stochastic, and does not precisely preserve the optimization target in the generated code; EP gives you code that does what you ask, most of the time, under the tested circumstances, but the code may also do something else on the side. EP is a powerful, still maturing technique that is intrinsically unsuited to the demands of Friendly AI. Friendly AI, as I have proposed it, requires repeated cycles of recursive self-improvement that precisely preserve a stable optimization target.

    The most powerful current AI techniques, as they were developed and then polished and improved over time, have basic incompatibilities with the requirements of Friendly AI as I currently see them. The Y2K problem – which proved very expensive to fix, though not global-catastrophic – analogously arose from failing to foresee tomorrow’s design requirements. The nightmare scenario is that we find ourselves stuck with a catalog of mature, powerful, publicly available AI techniques which combine to yield non-Friendly AI, but which cannot be used to build Friendly AI without redoing the last three decades of AI work from scratch.

    [...]

    There is also a pragmatic objection which concedes that Friendly AI is an important problem, but worries that, given our present state of understanding, we simply are not in a position to tackle Friendly AI: If we try to solve the problem right now, we’ll just fail, or produce anti-science instead of science.

    And this objection is worth worrying about. It appears to me that the knowledge is out there – that it is possible to study a sufficiently large body of existing knowledge, and then tackle Friendly AI without smashing face-first into a brick wall – but the knowledge is scattered across multiple disciplines: Decision theory and evolutionary psychology and probability theory and evolutionary biology and cognitive psychology and information theory and the field traditionally known as “Artificial Intelligence”… There is no curriculum that has already prepared a large pool of existing researchers to make progress on Friendly AI.

    The “ten-year rule” for genius, validated across fields ranging from math to music to competitive tennis, states that no one achieves outstanding performance in any field without at least ten years of effort. (Hayes 1981.) Mozart began composing symphonies at age 4, but they weren’t Mozart symphonies – it took another 13 years for Mozart to start composing outstanding symphonies. (Weisberg 1986.) My own experience with the learning curve reinforces this worry. If we want people who can make progress on Friendly AI, then they have to start training themselves, full-time, years before they are urgently needed.

    As for the immediate benefits of AI – well, of course we should develop those as well (and it’d be silly to even try to avoid that). That doesn’t mean that we couldn’t simultaneously start taking precautions.

    We’re already at existential risk. We’re unsure of the effects of Global Warming because our intelligence and resources require a very long time to understand its causes and effects — if we had more intelligent machines, we could have them work on that problem and its solution for us (gather data, process data, make predictions).

    Which is why we should develop AI. But do so in a careful way.

    There is a growing consensus that there is no such thing as a “Golden Algorithm” that will create a general intelligence. Rather, current research indicates that the AI problem is essentially an architecture and integration problem: we’ll need to understand the smaller systems of the mind, and then architect a larger system from these parts.

    And architecture and integration is exactly what some of those PhD’s in question are working on.

    As for evolutionary mechanisms (I assume this was your reason for saying that they’re unlikely to work? I didn’t spot anything else in your comment addressing it), I don’t think the “no golden algorithm” argument really applies to them. After all, evolutionary mechanisms are a special case – they’re not the magical algorithm that is the basis for intelligence (which, AFAIU, is what’s usually referred to when talking about the “golden algorithm”) – they’re a mechanism for generating architectures. “We have to know how an AI will work” is a poor argument for why an evolutionary approach wouldn’t work, because natural evolution certainly didn’t “know” how humans would work… it created them through trial and error and the equivalent of pure brute force.

    I’m starting to think that neither of us is going to be able to properly convince the other one without a very, very long debate — as obviously we’ve both given this a lot of thought.

    Agreed.

    I’m in no rush. ;)

    As a sidenote, I consider this a very weak argument. The Dartmouth proposal dates back to 1955 — networking & internet didn’t venture outside labs until ca. 1980. Without direct web access to knowledge bases and information, any AI created would be severely limited with regards to how fast and how much it learns (basically limited to physical interactions). Similarly, our society didn’t depend nearly as much on computer tech as it does today.

    Good point, you’re probably right. I hadn’t thought of it that way.

  41. Bog

    I fear what the very rich people will do with intelligent machines.

  42. mari

    Hi there, great site!
    I’ve got a question: do you know who the maker/company of the ‘friendly robot holding flowers’ is? Many thanks, mari

  43. Hi Mari and thank you. The maker is Tatsuya Matsui and his studio is called Flower Robotics. The robot’s name is Posy.

  44. North

    I am a 41-year-old woman who, between 2004 and 2008, was in the middle of the biggest ghost stalking that I have ever heard of. If you watched “Poltergeist” a million times, it wouldn’t touch what they did to me. It wasn’t just that every appliance in my home was possessed… something in the house ABSOLUTELY did not want me to use machines or technology for any reason. People barely believe in ghosts, let alone my fear of machines.
