Goodbye Steve. You’ll be missed.

I’ve been using Macintosh computers for as long as I can remember. I grew up using MacPaint, playing Toxic Ravine and Marathon, and arguing with my friends over loving Macs more than PCs.

That was a long time ago and the arguments are long behind us, yet today I’m still sitting in front of an Apple that someone’s taken a bite out of: writing on a MacBook Pro with my iPhone on my desk, watching Netflix on my Apple TV while my docs back up onto my Time Capsule.


Photo of Steve Jobs. Thanks for everything. (Image courtesy of Apple.com)

I’ve known of the father of Apple for equally long. And, frankly, I don’t remember feeling so sad over the loss of any person or celebrity I never knew personally.

His ingenuity and technology have enabled many of my creations and achievements. Thank you for everything, Steve.


Diminishing reality: removing objects via augmented reality

I knew it. AR tech will definitely allow us to change the colors of houses, add objects, and then remove any we don’t want to see. Personally, I’m looking forward to being able to remove pervasive billboard advertisements on the street.



Samsung’s Galaxy Tab dabbles with augmented reality

It seems Samsung is moving further into augmented reality, shown briefly in the video embedded below along with some other typically iPaddish features. It looks pretty good from where I’m standing; I’m looking forward to seeing the price when it hits the shelves.

The Galaxy Tab was unveiled this Thursday and the fellas over at PC World already have a nice comparison chart for iPad vs. GT. The tablet runs Google’s Android operating system, version 2.2.

Here’s hoping that the AR features will make Apple push extra hard to unveil the augmented reality software that I’m sure they’ve been working on for at least three years.

levelHead – Augmented reality puzzle game

Despite its age, this demo remains one of my favorite examples of augmented reality applications. It uses your computer’s camera (or an external one; no mobile versions that I know of) to view a cube with markers attached to each side. To the naked eye it’s a paper cube with cryptic symbols, but with the aid of a camera and a computer program, digital metamorphosis produces something entirely different.

If you still haven’t succumbed to skipping my ramblings, go watch the demo of levelHead by Julian Oliver! (Embedded video below.)

levelHead Video

levelHead v1.0, 3 cube speed-run (spoiler!) from Julian Oliver on Vimeo.

Main points of fascination

  • Physical object interaction: the virtual worlds are dependent on and intertwined with physical objects (the cubes) in the environment. (As opposed to displaying virtual objects that have no connection to reality, which in my opinion takes the “reality” out of “augmented reality.”)
  • The cube, simple as it is, gives the impression of a gateway into an entirely different world.
  • Simplicity: against the shaded and lit environment, the flat white character adds a mysterious touch to the experience.
  • The cube is the controller as well as viewer; an intuitive solution for containing the game experience entirely within a simple paper cube.
  • The game is easy to replicate if desired; all you need is the right program and a paper cube with printouts.
  • Considering the Future: Remember Myst? I can easily envision an entire game in that style: purely contained within a paper cube, or even interchangeable shapes such as orbs or other simple ones for different environments (perhaps even a few in-game tools).
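The core trick behind marker-based AR like this can be sketched in surprisingly few lines: once the four corners of a printed marker are found in the camera frame, a homography maps virtual content onto it. Below is a minimal pure-Python sketch of that projection step (the corner coordinates are made up for illustration, and the corner-detection step is omitted; this is not levelHead’s actual code):

```python
# Sketch of the projection step in marker-based AR: estimate the 3x3
# homography H that maps a known marker square onto its four corners as
# detected in the camera image, then use H to place virtual content.

def solve(A, b):
    """Gaussian elimination with partial pivoting, for small systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    """H mapping each src (x, y) to dst (u, v); bottom-right entry fixed to 1."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def project(H, pt):
    """Apply the homography to a point in marker coordinates."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# The marker is a unit square; pretend its corners were detected here:
marker = [(0, 0), (1, 0), (1, 1), (0, 1)]
detected = [(210, 120), (400, 140), (380, 330), (190, 300)]
H = homography(marker, detected)
# Pixel position at which to draw a virtual object sitting at the marker's centre:
print(project(H, (0.5, 0.5)))
```

Real applications hand this job to libraries such as OpenCV or ARToolKit, which also handle corner detection and the full 3D pose estimation that the cube’s tilt interaction requires.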

levelHead information excerpt

Using tilt motions, the player moves a character through rooms that appear inside one of several cubes on a table. Each room is logically connected by a series of doors, though some doors lead nowhere (they are traps).

The player has 2 minutes to find the exit of each cube, leading the character into the entrance of the next.

Work is also being done to use invisible markers such that the cube itself appears entirely white to the naked eye.

Visit the project page of Julian Oliver’s levelHead

Thanks for an inspirational game concept, Julian!

Latest version of LittleDog from CLMC and Boston Dynamics (video)

LittleDog from Boston Dynamics and CLMC at USC tackling an obstacle course.

When we last looked at BigDog from Boston Dynamics, most agreed that its movements were beginning to look eerily lifelike. The latest version of LittleDog, shown in the video below, is nothing short of breathtaking. Six teams were provided with the LittleDog chassis and funding; this version runs AI software created by the Computational Learning and Motor Control Lab at USC (specific project webpage). See the video after the jump.

Some of my augmented reality domains are up for grabs

I normally don’t sell anything in my posts here on Think Artificial. In fact, I’ve never done so nor allowed anyone else to. However, this is uniquely related to our topic of interest and may benefit some lucky readers.

I’m considering selling a few of my augmented reality domains. They were originally intended to be put to use, but other projects have gotten in the way, and it seems a shame to leave them parked. So, here they are.

The domain list

  • Augmented-Realities.net
  • AugmentedRealityMobile.net
  • AugmentedRealityMobile.org
  • AugmentedRealityOnline.com
  • AugmentingRealities.com
  • HandheldAugmentedReality.com
  • iPhoneAugmentedReality.net
  • MMORPGAugmentedReality.com


Make an offer

Contact me directly to make me an offer on any of these domains. I intend to keep prices fair. I want to see them go to deserving individuals and startups.

I reserve the right to accept or deny offers based on personal preference.

Correctly predicted: Major stores adopt augmented reality

Look into the all-seeing eye.

The latest results are in for my augmented reality prediction series. I’m sure you’ll like this one. It came true. Oh yes.

Prediction & results

Predicted on August 28th, 2009: “In January, 2010 the first major store announces mobile AR support; possibly an app that indicates product locations on shelves, or one that shows information about products. There are rumors of at least 3 other stores preparing a launch.”

  • Actual turn of events (written Feb 9th, 2010): Came true. To quote SFGate.com’s article on AR smartphone apps:

    “Companies including Best Buy Inc., Jack in the Box and Puma are already advertising on Loopt, serving up coupons or banners when people near their stores.”


Prediction was originally published August 28th, 2009 in the post More predictions and a page to list them

Have any material related to this prediction/event? Post it in a comment below!

Regarding predictions: Mobile phones becoming a powerful shopping tool

It looks like the predicted adoption of augmented reality by major stores could happen sooner than expected. The New York Times has a piece today titled Mobile Phones Become Essential Tool to Holiday Shopping. In it they discuss consumers’ use of phones to shop online and offline: scanning barcodes, comparing prices and, upon finding lower ones, buying online instead of in the store.

Aware of the power of mobile phones, some offline retailers are using the technology to fight back.

If someone standing in one store scans a product with ShopSavvy, for example, a retailer down the street could deliver the shopper a coupon for the same item. A major retailer is already doing that in a few test cities, including Seattle, said Alexander Muse, co-founder of Big in Japan, the start-up that created ShopSavvy.

Other applications, including Yowza, use the GPS location information in cellphones to send shoppers coupons for stores within walking distance of where they’re standing.

“This empowers consumers to make a smart decision,” Mr. Muse said. “Already, retailers are starting to figure out, ‘I need to be in this game.’”

It’s especially intriguing that GPS is increasingly part of the mainstream deal, as it’s a key part of AR. One down.
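The geo-fencing these coupon apps describe comes down to a great-circle distance check against each participating store. Here is a rough sketch of that idea (the store list, coordinates, and walking radius are hypothetical illustrations, not Yowza’s or ShopSavvy’s actual data):

```python
# Sketch of the proximity check behind GPS coupon delivery: compute the
# great-circle (haversine) distance from the shopper to each store and
# offer coupons for those within walking distance.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))  # 6371 km = mean Earth radius

WALKING_RADIUS_KM = 0.8  # roughly a ten-minute walk

stores = [  # hypothetical stores with coupons on offer
    ("Shoe Shop", 47.6101, -122.3420),
    ("Bookstore", 47.6205, -122.3493),
    ("Outlet",    47.6740, -122.1215),
]

def nearby_coupons(lat, lon):
    """Names of stores close enough to the shopper to trigger a coupon."""
    return [name for name, slat, slon in stores
            if haversine_km(lat, lon, slat, slon) <= WALKING_RADIUS_KM]

print(nearby_coupons(47.6097, -122.3422))  # a shopper in downtown Seattle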

Read the full article on the New York Times (you may need an account).

Google Goggles – Photo recognition in real-time for augmented reality info

Here’s something fresh from Google’s oven: the Google Goggles app for Android phones. Despite my letdown when I realized they weren’t real goggles, this is a mark of things getting interesting. Mobile AR apps are mutating and shifting into various forms, and the possibilities of the tech are certainly starting to form a big picture in the heads of developers. It’s here to stay, all right.

The image recognition tech sounds exciting: image search and recognition in real time! I wouldn’t be surprised to see Google and Apple go head to head in a bloodsport match as they race toward the AR advertising market (incidentally bringing with them a wave of exciting apps and even AR goggle interfaces. Real ones.).

But, it’s best to let the video do the talking (read: I’m lazy). Here’s Google Goggles.



Intelligent systems of 1993; Hrafn visits the MIT Media Lab

My Reykjavik University Aperio advisor surprised me yesterday when he mentioned how cool I was in that YouTube video. I had no idea what the hell he was going on about and made an expression like those in surprise photo shoots. As the expression wore off, he explained that my brother had uploaded a video of my visit to the MIT Media Lab in 1993. At the time my brother was working there on multi-modal AI systems, which I happily agreed to test; the result is in the video below =)

The Advanced Human Interface Group (AHIG), MIT Media Lab. The ICONC System, demonstrated by Hrafn Th. Thorisson, Summer 1993. The system enabled the use of co-occurring, natural speech and gesture to interactively describe the arrangement and movements of objects in a room. The computer would interpret the user’s actions and figure out which objects the user was talking about and how to arrange them based on spatial information in the user’s speech and gesture. [Excerpt from the YouTube description, continued below]





The main authors of this work were David Koons (spatial knowledge, multimodal integration) and Carlton Sparrell (gesture recognition), directed by Richard A. Bolt. This technology is described in part in the paper “Integrating simultaneous input from speech, gaze, and hand gestures” by D B Koons, C J Sparrell, K R Thorisson (1993).

UPDATE (Oct. 30th, 2009): The article stated, wrongly, that this took place in 1994. This has been corrected.

Think Artificial is a proud member of the 9rules blog community.