I knew it. AR tech will definitely allow us to change the colors of houses, add objects, and then remove any we don’t want to see. Personally, I’m looking forward to being able to remove the pervasive billboard advertisements on the street.
It seems Samsung is moving further into augmented reality, shown briefly in the video embedded below along with some other typical iPad-ish features. It looks pretty good from where I’m standing; I’m looking forward to seeing the price when it hits the shelves.
Despite its age, this demo remains one of my favorite examples of augmented reality applications. It uses your computer’s camera (or an external one; no mobile versions that I know of) to view a cube with markers attached to each side. To the naked eye it’s a paper cube with cryptic symbols, but with the aid of a camera and a computer program, digital metamorphosis produces something entirely different.
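For the curious: marker-based AR like this works by finding a known black-and-white fiducial pattern in each camera frame, decoding its ID, and then overlaying graphics relative to it. As a rough illustration of just the decoding step (the real demo uses a full tracking library; the grid size and ID scheme below are my own assumptions for the sake of the sketch), here’s how a marker’s bit pattern can be read out of a cropped, perspective-corrected image patch:

```python
def decode_marker(patch, grid=4):
    """Decode a square fiducial patch into an integer ID.

    `patch` is a list of rows of grayscale pixel values (0-255),
    assumed to be a cropped, perspective-corrected marker image.
    Each of the grid x grid cells is averaged and thresholded to
    one bit; bits are read row-major into an integer ID.
    """
    size = len(patch)
    cell = size // grid
    marker_id = 0
    for r in range(grid):
        for c in range(grid):
            pixels = [patch[y][x]
                      for y in range(r * cell, (r + 1) * cell)
                      for x in range(c * cell, (c + 1) * cell)]
            bit = 1 if sum(pixels) / len(pixels) > 127 else 0
            marker_id = (marker_id << 1) | bit
    return marker_id

# Build a synthetic 32x32 patch from a 4x4 checker pattern
# (bits 1010 0101 1010 0101 = 0xA5A5) and decode it back.
pattern = [[1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1]]
patch = [[pattern[y // 8][x // 8] * 255 for x in range(32)] for y in range(32)]
print(hex(decode_marker(patch)))  # → 0xa5a5
```

Real systems add error detection and rotation-invariant IDs on top of this, which is what lets the cube be tumbled freely without losing tracking.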
If you haven’t already succumbed to skipping my ramblings, go watch the demo of levelHead by Julian Oliver! (Embedded video below.)
Physical object interaction: the virtual worlds are dependent on and intertwined with physical objects (the cubes) in the environment. (As opposed to displaying virtual objects that have no connection to reality, which in my opinion takes the “reality” out of “augmented”.)
The cube, simple as it is, gives the impression of a gateway into an entirely different world.
Simplicity. With the environment shaded and lit, the flat white character is simple and adds a mysterious touch to the experience.
The cube is the controller as well as the viewer: an intuitive solution for containing the game experience entirely within a simple paper cube.
The game is easy to replicate if desired; all you need is the right program and a paper cube with printouts.
Considering the Future: Remember Myst? I can easily envision an entire game in that style: purely contained within a paper cube, or even in interchangeable shapes, such as orbs or other simple forms, for different environments (perhaps even a few in-game tools).
LevelHead information excerpt
Using tilt motions, the player moves a character through rooms that appear inside one of several cubes on a table. Each room is logically connected by a series of doors, though some doors lead nowhere (they are traps).
The player has 2 minutes to find the exit of each cube, leading the character into the entrance of the next.
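The rules described above (rooms linked by doors, trap doors that lead nowhere, a two-minute limit per cube) boil down to a tiny state machine. This is purely my reconstruction from the description; the room layout, names, and move timing are invented for illustration:

```python
# Rooms are nodes; doors map a direction to the next room.
# None marks a trap door (it leads nowhere); "exit" ends the cube.
ROOMS = {
    "start": {"north": "hall", "east": None},   # the east door is a trap
    "hall":  {"south": "start", "north": "exit"},
}

TIME_LIMIT = 120  # two minutes per cube, as in levelHead

def play(moves, time_limit=TIME_LIMIT, seconds_per_move=5):
    """Walk the character through ROOMS and return the outcome."""
    room, elapsed = "start", 0
    for direction in moves:
        elapsed += seconds_per_move
        if elapsed > time_limit:
            return "out of time"
        if direction not in ROOMS[room]:
            continue                 # no door that way; stay put
        nxt = ROOMS[room][direction]
        if nxt is None:
            return "trapped"         # a door that leads nowhere
        if nxt == "exit":
            return "escaped"         # on to the next cube's entrance
        room = nxt
    return "still inside"

print(play(["north", "north"]))  # → escaped
print(play(["east"]))            # → trapped
```

In the actual game the “moves” come from tilting the physical cube, of course; here they’re just a list.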
Work is also being done to use invisible markers such that the cube itself appears entirely white to the naked eye.
I normally don’t sell anything in my posts here on Think Artificial. In fact, I’ve never done so nor allowed anyone else to. However, this is uniquely related to our topic of interest and may benefit some lucky readers.
I’m considering selling a few of my augmented reality domains; they were originally intended to be put to use, but other projects have gotten in the way. It seems a shame to leave them parked. So, here they are.
The domain list
Make an offer
Contact me directly to make me an offer on any of these domains. I intend to keep prices fair. I want to see them go to deserving individuals and startups.
I reserve the right to accept or deny offers based on personal preference.
Predicted on August 28th, 2009: “In January, 2010 the first major store announces mobile AR support; possibly an app that indicates product locations in shelves, or one that shows information about products. There are rumors of at least 3 other stores preparing a launch.”
Here’s something fresh from Google’s oven: the Google Goggles app for Android phones. Despite my letdown when I realized they weren’t real goggles, this is a mark of things getting interesting. Mobile AR apps are mutating and shifting into various forms, and the possibilities of the tech are certainly starting to form a big picture in the heads of developers. It’s here to stay, all right.
The image recognition tech sounds exciting: image search and recognition in real time! I wouldn’t be surprised to see Google and Apple go head-to-head in a bloodsport match as they race towards the AR advertising market (incidentally bringing with them a wave of exciting apps, and even AR goggle interfaces; real ones).
But, it’s best to let the video do the talking (read: I’m lazy). Here’s Google Goggles.
My Reykjavik University Aperio advisor surprised me yesterday when he mentioned how cool I was in that YouTube video. I had no idea what the hell he was going on about and made an expression similar to those in surprise photo shoots. As the expression wore off, he explained that my brother had uploaded a video of my visit to the MIT Media Lab in 1993. At the time my brother was working on multi-modal AI systems there, which I happily agreed to test; the result is in the video below =)
The Advanced Human Interface Group (AHIG), MIT Media Lab. The ICONC System, demonstrated by Hrafn Th. Thorisson, Summer 1993. The system enabled the use of co-occurring, natural speech and gesture to interactively describe the arrangement and movements of objects in a room. The computer would interpret the user’s actions and figure out which objects the user was talking about and how to arrange them based on spatial information in the user’s speech and gesture. [Excerpt from the YouTube description, continued below]
The main authors of this work were David Koons (spatial knowledge, multimodal integration) and Carlton Sparrell (gesture recognition), directed by Richard A. Bolt. This technology is described in part in the paper “Integrating simultaneous input from speech, gaze, and hand gestures” by D B Koons, C J Sparrell, K R Thorisson (1993).
UPDATE (Oct. 30th, 2009): The article stated, wrongly, that this took place in 1994. This has been corrected.
The now famed Layar announced yesterday that they’re planning a major addition to their augmented reality platform: the ability to view 3D objects and animation, and to place 3D tags on buildings, etc. The addition is scheduled for release in November, allowing 500+ developers to play with it through an API.
Looks like Layar is going to keep their lead in the field; from their press release:
Layar 3D makes use of OpenGL, the accelerometer, the GPS and the compass of the phone. Developers can place 3D objects in their content layers based on coordinates. Objects can be optimized in size and orientation to create an immersive and realistic experience. The 3D capabilities support live downloading and rendering of 3D objects. Actions such as “open link” or “play music” can be assigned to 3D objects. [Press release]
I’m looking forward to early results from the minds of their developers. Embedded videos after the jump.
Today we’re launching a special page to store past and present predictions regarding future technology developments. At the moment all are in the area of augmented reality. Below is a list of new predictions; the complete list can be found on the new Predictions page. The page can also be accessed through its link in Think Artificial’s header-menu.
Prediction: Apple releases initial support for iPhone augmented reality apps before September 15th, 2009. Actual: announced 11 days after the prediction; Apple’s iPhone OS 3.1 supports augmented reality applications; expected release is in September (as predicted).
Only eleven days after the prediction, news began rushing in; among many others reporting, MacRumors said on July 24th, 2009:
The L.A. Times reports that Apple will begin allowing developers access to the tools they need to produce augmented reality applications starting with upcoming iPhone OS 3.1. [So far, AR applications] have used unpublished APIs which prevent them from being allowed on the App Store. Apple, however, told one developer that the tools necessary would become available with iPhone 3.1. [MacRumors]
In short, Apple is releasing its initial support for augmented reality applications. The Los Angeles Times broke the news that Apple told the developers of the Nearest Tube AR train finder (Acrossair) that augmented reality apps will be allowed in the iPhone App Store in September, as predicted… let’s see if it turns out to be September 15th.