Today, we announce the release of Interactive Spaces, a new API and runtime that allow developers to build interactive applications for physical spaces.

Imagine walking into a room that recognizes where you are and responds based on your position.

You can see an example above: cameras in the ceiling perform blob tracking, where the blobs are the people walking on the floor. The floor responds by making colored circles appear beneath the feet of anyone standing on it and having those circles follow each person around.

Interactive Spaces works by having “consumers” of events, like the floor, connect to “producers” of events, like those cameras in the ceiling. Any number of “producers” and “consumers” can be connected to each other, making it possible to create quite complex behavior in the physical space.
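The producer/consumer wiring can be sketched in plain Java. This is only an illustration of the event model described above; the `Producer`, `connect`, and `publish` names are hypothetical, not the actual Interactive Spaces API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Illustrative sketch of the producer/consumer event model, not the real API.
public class EventBusSketch {

    // A producer (e.g. the ceiling cameras) publishes events to any
    // number of connected consumers (e.g. the floor).
    static class Producer<T> {
        private final List<Consumer<T>> consumers = new ArrayList<>();

        void connect(Consumer<T> consumer) {
            consumers.add(consumer);
        }

        void publish(T event) {
            for (Consumer<T> c : consumers) {
                c.accept(event);
            }
        }
    }

    // A blob-position event like the one a camera tracker might emit.
    static class BlobEvent {
        final double x, y;
        BlobEvent(double x, double y) { this.x = x; this.y = y; }
    }

    public static void main(String[] args) {
        Producer<BlobEvent> cameras = new Producer<>();

        // The "floor" consumer draws a circle under each tracked blob.
        List<String> drawn = new ArrayList<>();
        cameras.connect(e -> drawn.add("circle at (" + e.x + ", " + e.y + ")"));

        cameras.publish(new BlobEvent(1.5, 2.0));
        System.out.println(drawn.get(0)); // circle at (1.5, 2.0)
    }
}
```

Because a producer holds a list of consumers and a consumer can be connected to many producers, events can fan out and fan in freely, which is what makes the complex many-to-many behavior possible.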

Interactive Spaces is written in Java, so it can run on any operating system that supports Java, including Linux and OS X, with Windows support coming soon.

Interactive Spaces provides a collection of libraries for implementing the activities that will run in your interactive space. Implementing an activity can require anything from a few lines in a simple configuration file to writing the proper interfaces entirely from scratch. The former gets you off the ground very quickly but limits what your activity can do, while the latter gives you the most power at the cost of more complexity. Interactive Spaces also provides a runtime environment for activities, allowing you to deploy, start, and stop the activities running on multiple computers from a central web application on your local network.
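As a rough illustration of the from-scratch end of that spectrum, the sketch below shows a minimal activity exposing the startup/shutdown hooks a central runtime could call when deploying, starting, and stopping it. The `Activity` interface and its method names are hypothetical, not the real Interactive Spaces API:

```java
// Hypothetical sketch of an activity lifecycle, not the actual API.
public class ActivityLifecycleSketch {

    // The hooks a runtime would invoke over an activity's lifetime.
    interface Activity {
        void onStartup();
        void onShutdown();
    }

    // A minimal "floor" activity that just tracks whether it is running.
    static class FloorActivity implements Activity {
        boolean running = false;

        @Override public void onStartup()  { running = true; }
        @Override public void onShutdown() { running = false; }
    }

    public static void main(String[] args) {
        FloorActivity floor = new FloorActivity();
        floor.onStartup();                 // central controller starts the activity
        System.out.println(floor.running); // true
        floor.onShutdown();                // and later stops it
        System.out.println(floor.running); // false
    }
}
```

A configuration-driven activity would sit at the other end of the spectrum: the runtime supplies a stock implementation of these hooks, and the configuration file only fills in parameters.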

Additional languages like JavaScript and Python are supported out of the box. Native applications can also be run, which means packages like openFrameworks, which uses C++, are supported as well. Plans are also underway for supporting the Processing language.

Sound like fun? Check it out on Google Code.

By Keith Hughes, The Experience Engineering Team