10,000 more baby steps to go.

OK, the 4th of July threw me off, but after two weeks there is a lot to update on.

First, I finally got a basic Structure app working on my iPhone. Photos below.

Camera image of Sofa on left, Structure Sensor Depth image on right.


Camera image of desk and computer on left, Structure Sensor Depth image on right.


Camera image of backpack on chair on left, Structure Sensor Depth image on right.


The biggest challenge was getting used to the YpCbCr color space. Also, there was a small bug in the Structure SDK that caused the synchronized frame calls not to work. However, considering my end goals, synchronized frames are not critical yet.
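
For reference, the conversion itself is not hard once you know the coefficients. Below is a minimal sketch of a full-range BT.601 YpCbCr-to-RGB conversion in Swift; the pixel structs are purely illustrative, not the actual types the camera hands you.

```swift
// Minimal sketch: full-range BT.601 YpCbCr -> RGB for a single pixel.
// The structs here are illustrative; real frames arrive as bi-planar
// pixel buffers with separate luma (Y) and chroma (CbCr) planes.
struct YpCbCrPixel { var y: Double; var cb: Double; var cr: Double }  // each 0...255
struct RGBPixel    { var r: Double; var g: Double; var b: Double }    // each 0...255

func rgbPixel(from p: YpCbCrPixel) -> RGBPixel {
    let cb = p.cb - 128.0
    let cr = p.cr - 128.0
    // Clamp the results back into the valid 8-bit range.
    func clamp(_ v: Double) -> Double { return min(max(v, 0.0), 255.0) }
    return RGBPixel(r: clamp(p.y + 1.402 * cr),
                    g: clamp(p.y - 0.344136 * cb - 0.714136 * cr),
                    b: clamp(p.y + 1.772 * cb))
}
```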

I was also able to get a very basic audio and MIDI application running on the iPhone. This was crazy because I had to pull information from multiple sources to figure out how to do it. The current Apple examples push AVFoundation, but I needed Core Audio.
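
For anyone trying to piece the same thing together, the rough shape of the Core Audio route is sketched below: an AUGraph that wires Apple’s sampler unit into RemoteIO and then sends it a MIDI note-on. This is a minimal sketch, not my actual SEAR-RL code; error checking and audio-session setup are omitted.

```swift
import AudioToolbox

// Minimal sketch: an AUGraph with Apple's Sampler unit feeding RemoteIO,
// then a single MIDI note-on. Error checking is omitted for brevity.
var graph: AUGraph?
NewAUGraph(&graph)

var samplerDesc = AudioComponentDescription(
    componentType: kAudioUnitType_MusicDevice,
    componentSubType: kAudioUnitSubType_Sampler,
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0, componentFlagsMask: 0)
var outputDesc = AudioComponentDescription(
    componentType: kAudioUnitType_Output,
    componentSubType: kAudioUnitSubType_RemoteIO,
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0, componentFlagsMask: 0)

var samplerNode = AUNode()
var outputNode = AUNode()
AUGraphAddNode(graph!, &samplerDesc, &samplerNode)
AUGraphAddNode(graph!, &outputDesc, &outputNode)
AUGraphOpen(graph!)
AUGraphConnectNodeInput(graph!, samplerNode, 0, outputNode, 0)

// Grab the sampler's AudioUnit so we can talk MIDI to it.
var samplerUnit: AudioUnit?
AUGraphNodeInfo(graph!, samplerNode, nil, &samplerUnit)

AUGraphInitialize(graph!)
AUGraphStart(graph!)

// Note-on for middle C (note 60), velocity 100, MIDI channel 0.
MusicDeviceMIDIEvent(samplerUnit!, 0x90, 60, 100, 0)
```

With no instrument loaded, the sampler falls back to a plain sine tone, which is enough to confirm that the audio and MIDI plumbing works.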

The point is, I am just glad I have been able to take baby steps in both areas: using the sensor and generating sound. Now there are only about 10,000 more baby steps to go.

Another 2 steps forward, 1 step back.

My current grumpiness with Xcode: you cannot refactor Swift code.

One would think that is a pretty basic feature for an IDE to have, but as of this writing it does not exist.

I discovered this when I tried to refactor some of my code for SEAR-RL last week. Xcode threw up an alert warning that it could only refactor C and Obj-C code. So I decided to press ahead and just manually refactor everything. I think that was a bad idea.

I started having a lot of issues in my UI, like the application just could not find certain assets [at least, that was the gist of the warnings Xcode kept giving me]. Unfortunately, since Apple and Xcode try to take care of a lot of stuff ‘behind the scenes’ for the developer, I could not figure out what I had disconnected [broken] with my refactoring in order to fix the problems. So I had to trash all my code and start over with a new project.

I should not be surprised; it seems every time I start a new coding endeavor, I have about 2, 3, or even 4 false starts before I actually start getting anywhere. However, that doesn’t stop it from being annoying.

The upside is that these false starts usually help me develop a better idea of how to structure the underlying project.

The last two weeks…

OK. I admit, I missed a week of updates. I am lame, but I think I made some positive progress last week.

Mainly, I have started to code up a little bit of the project. Right now that means getting the Structure Sensor and the on-board camera working for future debugging.

Fortunately, I found a demo on GitHub from Adrian Smith, in which he recreated the Structure View demo in Swift. Comparing that demo to Occipital’s own Viewer app provides a good roadmap for translating Objective-C code into Swift. Recreating the demo has also finally given me a start on understanding Apple’s iOS.
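
To give a sense of what the Swift side looks like, the sketch below is roughly the color-camera half of it: a plain AVFoundation capture session that delivers YpCbCr frames to a delegate. There is nothing Structure-specific here; the depth stream is set up separately through Occipital’s SDK.

```swift
import AVFoundation

// Minimal sketch of the color-camera half: a capture session that delivers
// YpCbCr frames to a delegate for later debugging overlays.
final class ColorCamera: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let queue = DispatchQueue(label: "color-camera-frames")

    func start() throws {
        session.sessionPreset = .vga640x480

        guard let device = AVCaptureDevice.default(for: .video) else { return }
        let input = try AVCaptureDeviceInput(device: device)
        session.addInput(input)

        let output = AVCaptureVideoDataOutput()
        // Ask for bi-planar full-range YpCbCr frames.
        output.videoSettings = [
            kCVPixelBufferPixelFormatTypeKey as String:
                kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
        ]
        output.setSampleBufferDelegate(self, queue: queue)
        session.addOutput(output)

        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Each frame arrives here as a CMSampleBuffer wrapping a YpCbCr pixel buffer.
    }
}
```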

The only real downside was that I wasted a little bit of time trying to integrate Google VR into the project. I thought implementing Google VR might help with creating the debug screens. In the end, even though I got Google VR to work, it was a waste of time; there was really nothing to gain from using it. However, I was smart enough to create a Git branch for experimenting with Google VR, so when I decided to throw everything away, my repository was left in a pretty clean state. This was good, because one has to use CocoaPods for Google VR, and CocoaPods does make a bit of a mess of your project structure until you understand how it works. Being able to get rid of that mess saved some anxiety.

Also, in the time since my last update, I found a pretty good open-source Kanban-based task-tracking application called Kanboard. In fact, it is nice to see a simple web app do better than most of the big names out there. It also ended up being easy to set up on my own website [this one].

Baby steps, but walking.

It looks like the start of each week is turning into documentation of the previous week.

Honestly, that is not really a terrible thing, because when I think about it, it is a pretty smart and smooth way to get back into the flow, as opposed to just jumping straight into code or design. At least having to write things up first allows one a moment of retrospection without having to get completely into hard coding.

Anyway, I was able to start a few UML diagrams for the project. The core use case diagram is complete, and I have started some of the activity diagrams. Most importantly, though, I was also able to build a SoundFont sample bank to use as a sound source for SEAR-RL.

Now, my long-term goal is to build an AudioUnit synthesizer for the project. However, after years of doing other projects, I know that can be a dangerous rabbit hole. Being a big fan of prototyping, I know it is just better to get something that works up and running quickly, so building a sample bank was the easiest way to go. Still, I promise the AU will happen.

Still, I have to say I think I was smart about how I made the sample bank as well. I wrote a Python 3 script that uses SoX to generate the waveforms and an SFZ file, and then imported that into Polyphone to build the actual sound bank.
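
For anyone unfamiliar with SFZ, it is just a plain-text mapping format, which is why it is so easy to generate from a script. The output looks something like the fragment below [the file names and key numbers are made-up examples, not the actual bank]:

```
// Illustrative SFZ fragment: each <region> maps one generated .wav file to a MIDI key.
<region> sample=sine_048.wav key=48
<region> sample=sine_060.wav key=60
<region> sample=sine_072.wav key=72
```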

Finding the right tools for the job.

So last week was the first ‘official’ week of working on my Masters project.

In my humble opinion, it was a tepid start. Part of the problem is that I want to manage the project with scheduling and design, like one properly would in the real world. However, the software tools available to aid with planning and software design are pretty diverse. Add to that the fact that I am really just a software development team of one, and keeping such documentation becomes somewhat odd.

First, let’s talk about software design tools. Sure, there are quite a few UML design tools out there, and some of them are open source or free. The problem is that they tend to be ‘drawing’ software: they try to make the act of diagramming easier, but they really do not do much else. The only application I found that let me do a bit of everything, where a person can create a brainstorm or mind map and then develop those elements into deeper UML diagrams and into planning, was Visual Paradigm. However, even its subscription model is sort of expensive, so I had to bite the bullet and get a short subscription.

Planning software was even crazier. I prefer Kanban over Scrum, and I like Evidence-Based Scheduling [EBS] for task time tracking. However, there is not a lot of software that supports both Kanban and EBS, so I settled on a little native OS X application called InShort, which focuses on PERT.

Neither is perfect, but they both do most of what I need and will work.

SEAR-RL Version 2 [or how to build your own Microsoft Hololens, VRVANA Totem, MagicLeap, and/or [most probably] Apple’s VR Headset* today.]

Considering how much technology has improved over the past couple of years, and the fact that I am coming to the end of my Masters this year, I have decided to try to build a better [more portable] version of SEAR again.

This is a brain-dead hack, but I am posting it here in case it isn’t obvious to some people. If you are itching to do some Augmented Reality prototyping and cannot wait for the Microsoft Hololens, VRVANA Totem, MagicLeap, [or Apple’s VR/AR Headset*] to be released, this may be a solution. Plus, as a bonus, it is very mobile.

You need 4 things.

Items for home-brew Augmented Reality


You need an:

  • iPhone 6
  • Google Cardboard v2
  • Occipital Structure Sensor
  • mounting bracket for Structure Sensor.


Now, you might be able to use an iPhone 5 but you will probably have to build your own bracket [more on that in a bit].

You can build your own Google CardBoard from a pizza box; you can find instructions for that online.

However, most of those plans are for CardBoard V1, which I do not recommend because of the magnet issue. [Seriously Google, what were you thinking?] But if you don’t mind paying about $20, you can buy a simple kit from all over the internet. Amazon has a ton.

The Structure Sensor is from Occipital. It is based on the same Kinect-like sensor technology that PrimeSense made for Microsoft’s original Kinect.

Finally, the mounting bracket. Occipital provides a CAD kit if you want to build your own bracket and have access to a 3D printer. They also held a contest in partnership with Shapeways [a 3D print-on-demand company] for the best user-created bracket. You can purchase some of the winning designs here.

Although I really liked Grand Prize winner Max Tönnemann’s design, I decided to go with Brian Smith’s design. The reason is that Brian’s design is just slightly shorter. Unlike Max’s design, in which the Structure Sensor is mounted on top of the bracket, Brian’s design mounts the sensor inside the bracket, which makes the overall height lower. This translates to a center of gravity closer to your face.

VR and AR headsets are already weird enough to wear, so you want to decrease the front-heaviness as much as possible. Still, I do plan on checking out Max’s design at some point in the future.

Now for the magic.

You want to cut holes in the Google CardBoard to match the holes in the bracket for the sensor and the phone’s camera, like so:


Holes cut into Google Cardboard.


Now, you will need to make the hole for the iPhone camera slightly bigger than the bracket’s hole. Otherwise, you are not going to get the full field of view for the iPhone camera.

Once you do that, put the sensor into the bracket.

Structure Sensor Mounted in a bracket.


After that, place the sensor/bracket assembly in the CardBoard, with the sensor going through the larger hole you just cut. Then close up the Google CardBoard.

The result should look like this:

Structure Sensor mounted in a modified Google CardBoard V2.


Yes, the front of the CardBoard will jut out, but this is good, as you need the extra room to attach the Structure Sensor cable.

At this point you are ready to go.

Just remember that the entire headset is now a bit heavier, so you may have to reinforce the Velcro straps if you use them to hold the CardBoard to your head. Otherwise, you are done.

Hope this helps people with AR Prototyping.

As for me, I have to rewrite some code… again.  *Sigh*

Peace

Stan


* It is total speculation whether Apple is making a VR/AR headset and what it will be like. However, if they are, I feel it is a safe bet it will be something like this, just with a wider field of view.

** Occipital, if your sales spike because of this, you owe me at least a second sensor [or a trip to Boulder; I miss Colorado].