SEAR-RL

SEAR-RL stands for "See Hear – Real Life." It is a continuation of a project I started long ago.

AKA:  Probably my last hurrah in technology.

AKA:  Seeing with "Blobs of Sound," "Sight with Sound," "Video to Audio Translation"

 

Hard Hat with Kinect on it.

Background:

A little over a decade ago, when I was finishing up college, I got the question in my head of what an audio-only video game would be like.  I was taking a multimedia class at the time, so as my project for the class I toyed around with a couple of game mechanics to see if it could be done.

After college I somehow fell into working with a couple of accessibility-focused companies, Orcca Technology and All-In-Play Games.  The nice thing was that it kept accessibility, and the question of how to create an audio-only game, in my head.

Now you have to understand, audio-only games are not a new idea.   There is a whole subculture around them, which you can check out at audiogames.net.  But most of the games I tried never really seemed to solve the problem of translating graphics adequately for me.

So the question evolved into how one would go about translating graphics into sound so that someone who is visually impaired could play something like Doom or Unreal.

There have been attempts in the past, but the solutions always ran into what I call the 'tagging' issue.   Objects can be tagged in a game to tell the player what and where they are, but the tags do not always represent the volume or description of the object.  Also, an environment needs tagging before a visually impaired player can effectively play the game.  What is one supposed to do if one cannot always guarantee a tagging pass?

Eventually, I went to graduate school at the ETC at Carnegie Mellon in 2005, and I spent a semester in an independent study trying to prototype a simple idea of how one might get around that problem of translating a 3D environment into sound.

I would say the project was generally a success.   Yes, there were a couple of issues, but when it came to translating the visuals on screen into a useful soundscape, my advisor and I thought there was something there.

However, it was while working on the project that I realized that if one could feed 3D data into the system in real time, one could translate the real world into audio that people could use to navigate it.

I wanted to expand the idea further, but 3D scanners were pretty expensive and not very portable at the time. So I graduated and moved on to other things.

Then last year Microsoft released the Kinect, and more recently the official Windows SDK.  I realized that with a little work, the Kinect could be a portable 3D scanner.

So I dug out my old project and merged the two together; you can see the results in the video.

Now, there have been similar systems in the past, like the vOICe and more recently the NAVI system from the University of Konstanz.

Where the vOICe uses static 2D images, I am trying to use 3D real-world data.   NAVI uses 3D data, but relies on haptics and tags to communicate information about objects in the real world.  With SEAR I am trying to use only sound to give impressions of the volume of objects in the real world.
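I won't walk through SEAR-RL's actual pipeline here, but to give a flavor of what "giving impressions of volume with only sound" can mean, here is a minimal Python sketch. Everything in it is illustrative: the grid size, tone range, and pan/loudness mappings are made up for the example, and the depth frame is synthetic rather than coming from a real Kinect (only the 640x480 size and millimeter units match what the sensor produces).

```python
# Illustrative sketch only (not SEAR-RL's algorithm): summarize a depth frame
# into a coarse grid, then render each region as a tone whose loudness, pitch,
# and stereo pan follow its distance, height, and horizontal position.

import numpy as np

SAMPLE_RATE = 44100          # audio sample rate in Hz
FRAME_SECONDS = 0.5          # length of the sound-scape for one depth frame
GRID_ROWS, GRID_COLS = 4, 8  # how coarsely the depth image is summarized
MAX_RANGE_MM = 4000.0        # treat anything beyond ~4 m as silent

def depth_frame_to_soundscape(depth_mm: np.ndarray) -> np.ndarray:
    """Map an (H, W) depth image in millimeters to an (N, 2) stereo buffer."""
    h, w = depth_mm.shape
    n = int(SAMPLE_RATE * FRAME_SECONDS)
    t = np.arange(n) / SAMPLE_RATE
    out = np.zeros((n, 2))

    for r in range(GRID_ROWS):
        for c in range(GRID_COLS):
            # Average depth of this region of the image.
            block = depth_mm[r * h // GRID_ROWS:(r + 1) * h // GRID_ROWS,
                             c * w // GRID_COLS:(c + 1) * w // GRID_COLS]
            valid = block[block > 0]   # zero means "no reading" on the Kinect
            if valid.size == 0:
                continue
            dist = float(valid.mean())
            if dist >= MAX_RANGE_MM:
                continue

            # Closer objects are louder; higher rows get higher pitch;
            # horizontal position sets the left/right balance.
            loudness = 1.0 - dist / MAX_RANGE_MM
            pitch = 220.0 * 2 ** ((GRID_ROWS - 1 - r) / (GRID_ROWS - 1))  # 220-440 Hz
            pan = c / (GRID_COLS - 1)                                     # 0=left, 1=right

            tone = loudness * np.sin(2 * np.pi * pitch * t)
            out[:, 0] += (1.0 - pan) * tone
            out[:, 1] += pan * tone

    # Normalize so the mix never clips.
    peak = np.abs(out).max()
    return out / peak if peak > 0 else out

# Example with a synthetic frame: a "wall" one meter away on the left half of view.
frame = np.full((480, 640), 3500.0)
frame[:, :320] = 1000.0
stereo = depth_frame_to_soundscape(frame)
print(stereo.shape)  # (22050, 2), ready to hand to an audio output library
```

The point of the sketch is just the shape of the mapping: large, close objects end up occupying more of the grid and therefore more of the mix, which is the kind of "volume impression" I am after, rather than a spoken tag that says "wall, two meters, left."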

Anyway, this little personal project has been a long time in the making.  I figure I might as well share my results.

Hope you enjoy.

-Stan
