When Intelligent Systems Surprise Us

With the complexity of modern AI systems, they sometimes come up with solutions we don’t expect … when we least expect it. Here’s a great video example of this: an AIBO robot is presented with a problem that seems to have only one possible solution. Can you figure out more than one? The AIBO can.

Placed in a dark room with one lit button that turns on the lights, and one unlit button that opens the door out, the AIBO is given the goal of getting out of the room. It’s too dark for him to see the unlit Open button, so the idea is that he should learn to turn on the lights first. Watch the video to see what happens.

Did you figure it out before watching? The AIBO is part of a research program at the Rutgers Laboratory for Real-Life Reinforcement Learning, and the video was submitted to the AAAI-07 video competition earlier this year (and won for best short video).

These kinds of surprising solutions are often fun, and not that uncommon. I’ve experienced similar things myself, where creatures in a simulation I was developing found a way to get around a programming fault — effectively using their own intelligence to counter a mistake that human error had left in the fabric of their universe.
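The room scenario above can be sketched as a tiny tabular Q-learning problem. This is my own hypothetical simplification, not the Rutgers setup: all state and action names (`dark`, `lit`, `press_light`, `press_door`) and the probabilities are invented for illustration. The point it shows is that a learner weighs the reliable light-first route against the occasional lucky bump into the unlit door button.

```python
import random

# Toy MDP (hypothetical): in the dark, blindly hitting the door button
# rarely works; with the light on, pressing it almost always works.
STATES = ["dark", "lit", "out"]
ACTIONS = ["press_light", "press_door"]

def step(state, action, rng):
    """Return (next_state, reward). Reaching 'out' gives +1."""
    if action == "press_light" and state == "dark":
        return "lit", 0.0
    if action == "press_door":
        # Unlit button: found only 10% of the time in the dark.
        p_find = 0.9 if state == "lit" else 0.1
        if rng.random() < p_find:
            return "out", 1.0
    return state, 0.0

def q_learn(episodes=5000, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        state = "dark"
        for _ in range(20):                      # cap episode length
            if rng.random() < eps:               # epsilon-greedy exploration
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward = step(state, action, rng)
            best_next = 0.0 if nxt == "out" else max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
            if state == "out":
                break
    return q

q = q_learn()
```

With these made-up numbers, the learned values for the two routes out of the dark end up close together, which is exactly the situation where a bit of exploration noise can make the "wrong" (lights-off) solution show up, much like the AIBO backing into the switch.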


Comments

  1. Interesting stuff; but is it clear that Aibo backed into the green switch on purpose? Or was it just random and not so creative? Or … uh, oh … is that the same thing?

I’m assuming you have seen Zeno, but here’s the link anyway:


    hmmmmmmm…. The designer plans to be able to price the robot at a few hundred dollars and have it controllable by software running on a PC … the idea is to drop the price dramatically from robots like Aibo. Probably a good idea … but I still wonder if his target market — the approximately average consumer — is ready for a robot that resembles a child.


2. I agree with Dale here. Did the robot do this on purpose, or was it just a coincidence? Is this a repeated behavior? Could this be a learned behavior, with the robot having repeated the experiment before and been “taught” where the button was? Finally, what is the possibility of emergent behavior in these things? Are they capable of it? (Also, why did the robot back up from the light? Does it have sensors? If so, how sensitive are they to the door?)

  3. Hrafn

    I still wonder if his target market — the approximately average consumer — is ready for a robot that resembles a child.

I’ve seen Zeno, and personally don’t fancy its design much (kids, however, have a bizarre perception of ‘prettiness’). I think there’ll be demand for the robot provided they pull off a good feature set. Child-looking dolls with facial animatronics are relatively popular. I think we’ll have to wait for more info on tech specs, and specifically some videos of Zeno in action; the current information I’ve seen is somewhat limited and designed to raise commercial interest.

    @Dale and Gnorb on the AIBO’s solutions
    Excellent observations on all parts. They warrant a whole new entry. I’m backed up with work these days though, so it might take me a while to get to it.

  4. Hrafn

    Found a very brief videoclip of Zeno.

  5. The Zeno video … argh! I can’t think of anything to write!

    The eyes seem creepy to me, too big, too much like someone who hasn’t slept in weeks and had creepy looking eyes to begin with…. but you’re probably right about kids reacting to it more positively than I am.

    Suddenly I can’t stop thinking of an old Twilight Zone episode starring Telly Savalas and a doll that wouldn’t shut up…. yikes!

    Thanks for the vid… bye!
