You can’t delegate yourself.

This week, I tried to kick into high gear on building SEAR-RL, since it was the first full week of the semester. Strangely, the main focus of my work was refactoring a lot of the experimental and prototype code I wrote during the summer. Specifically, most of my code for using the Structure Sensor lived in the controller of the main MVC structure I had so far, so I decided to partition that code off into a proper model. The challenging part is that Occipital followed Apple’s style for accessing non-immediate systems: the classes that access the Structure Sensor work as singletons.

I have no problem with the use of singletons. I know there is some controversy among programmers over the use of singletons in object-oriented programming, but I approve of them. Still, the downside is that singletons cannot really be subclassed, so my idea of creating one master class to be the hub between the sensor and my code was not going to happen. To get around this, I created an intermediate class called STSensorManagement. I designed it as a singleton too, which on second thought I am not sure why I did. I think it was the naïve idea that to play nicely with STSensorController [the Occipital-supplied class for accessing the sensor] it needed to be the same. [I think once things stabilize in the future I will revisit that.]
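
To give a sense of the structure, here is a rough sketch of the idea [not the exact code in SEAR-RL, and the Occipital call names are from memory, so treat them as illustrative]:

    import Foundation

    // Rough sketch of the intermediate singleton. STSensorController is Occipital's
    // class; the call below is the one I recall using to kick off the connection.
    final class STSensorManagement {

        // The one shared instance, mirroring how STSensorController itself works.
        static let sharedManager = STSensorManagement()

        // Occipital hands out its controller as a singleton as well.
        private let sensorController = STSensorController.sharedController()

        // A private initializer keeps anyone else from creating a second instance.
        private init() { }

        // The rest of the app talks to this wrapper instead of the SDK directly.
        func connectSensor() {
            _ = sensorController.initializeSensorConnection()
        }
    }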

The trickiest thing about developing this helper class to manage STSensorController communication is that I had to create a few protocols [class interfaces in Swift] to facilitate communication between the controller and this new model. The reason I mention this is that delegation [or function pointers in C/C++] is a topic and pattern I have only recently come to feel comfortable with. Thinking back on my software engineering education and reviewing resources online, the concept of delegation [and the related concept of forwarding] never seems to be covered as thoroughly as it should be, in my opinion. There seems to be an assumption that understanding delegation and using function pointers just becomes implicit, especially after one learns about pointers and system-level programming. I find this odd considering that delegation is a programming pattern found everywhere, for all sorts of things. My difficulty was that it took a long time to understand that when you delegate a function you are just passing a signature, and that someone else will call the function. I could never let go of the idea that something could happen outside of a thread I was controlling.
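
For anyone in the same boat, the pattern itself is tiny once it clicks. Here is a stripped-down sketch [illustrative names, not the actual SEAR-RL protocols]: the model keeps a weak reference to “whoever conforms to the protocol,” and that someone ends up being the view controller.

    import Foundation

    // The contract: any class that adopts this promises to implement these functions.
    protocol SensorStatusDelegate: class {
        func sensorDidConnect()
        func sensorDidDisconnect()
    }

    final class SensorManager {
        // weak, so the model does not keep the view controller alive.
        weak var delegate: SensorStatusDelegate?

        func handleSensorEvent(connected: Bool) {
            // The manager decides when these get called, not me. All I ever
            // handed over was the signature.
            if connected {
                delegate?.sensorDidConnect()
            } else {
                delegate?.sensorDidDisconnect()
            }
        }
    }

    final class SensorViewController: SensorStatusDelegate {
        func sensorDidConnect()    { print("sensor connected") }
        func sensorDidDisconnect() { print("sensor disconnected") }
    }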

Anyway, when all was said and done it was still a successful refactoring, and organizing things now should make things better later on.

Impressionism and Software Development

A few thoughts this week. After getting to know Apple’s CoreAudio and AudioUnits for a bit, I decided to take a step back and have a second look at AVFoundation. AVFoundation is supposed to be easier to deal with, but from my initial learning it didn’t seem as powerful. I may have been mistaken about that. I have to say that giving AVFoundation and the AVAudio classes a second look was a smart move. It definitely seems more powerful than I originally thought, and easier to use. I have a feeling it is probably going to be the level at which I do the audio work for the project. It is not that I would not want to become a CoreAudio wizard, but seriously, working with CoreAudio is like living in the Upside Down.
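
To show what I mean about the difference in effort, here is a minimal sketch of the AVAudio route [with a placeholder file name, not code from the project]. Getting sound out of AVAudioEngine takes a handful of lines, where the CoreAudio equivalent is pages of C-style setup:

    import AVFoundation

    // Build a tiny engine: one player node wired into the main mixer.
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()

    engine.attach(player)
    engine.connect(player, to: engine.mainMixerNode, format: nil)

    // "beep.wav" is just a stand-in for whatever audio asset is in the bundle.
    if let url = Bundle.main.url(forResource: "beep", withExtension: "wav"),
       let file = try? AVAudioFile(forReading: url) {
        try? engine.start()
        player.scheduleFile(file, at: nil, completionHandler: nil)
        player.play()
    }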

I am also thinking I am going to start using the Xcode 8 beta, so when Apple releases the next version of everything in a month or so it isn’t a shock to relearn everything. Plus, after checking out the beta, I have to say the documentation for Apple’s APIs is presented much better. Sure, there is a lot they could still add, but it is an improvement over how it is presented in Xcode 7.

Finally, I have to say something about the fact that the iPhone 7 is not going to have a headphone jack. I am not exactly happy about this, but then again I am not surprised. This is pretty typical of the changes [curveballs] that have been thrown my way in trying to develop for Apple. The problem is that the Structure Sensor needs the Lightning port, and I use the headphone port on my iPhone 6 for the audio. I need both ports for this project to work.

Now, I have no pressing interest in getting this working on the iPhone 7, so I am not too bothered by it. My guess is that the iPhone 6 models will probably be available for a while yet, and they are just about perfect for this project. Still, I am hoping some third-party company comes out with a Lightning port splitter. I am sure there are a lot of people like me who still use the AUX audio port for listening to their iPhones in their cars while recharging their phones with the Lightning port at the same time.

That is all for now.

The dead of summer.

OK. I admit I have been sort of unfocused the past couple of weeks.

Basically, summer is my worst time of the year, especially for productivity. It is not like I want to be out in the sun; in fact, just the opposite. There is just something about hot weather and humidity that takes it out of me. In a way it is like Seasonal Affective Disorder [SAD], but reversed. People with SAD normally get depressed in the middle of winter with the shorter days; for me, it is the opposite. [Plus, I got hooked on Stranger Things.]

I have done work, but it is frustrating because it does not seem to amount to a lot of progress.

For example, realizing that I am probably going to have to build a proper AudioUnit to complete this project, I started to research their structure. Apple does provide a couple of sample code sets and helper libraries for building AudioUnits. Unfortunately, as I said in my earlier post, they are not well documented, especially since they are written in C++, which Apple seems to treat as reluctantly as possible.

Fortunately, I got the bright idea to use Doxygen to build the dependency trees for all the C++ code, and that helped a bit. Something about seeing how everything is laid out, and what needs what, in the form of a graph helps a lot. That is information I cannot pick up easily just by looking at the source code.

Also, I have spent a bit of time going back, reviewing the UML designs, and making some changes. Specifically, it just seems the best way to deal with audio and MIDI in an iOS app is to use singletons. I don’t have a problem with them, but I know some people do.

Anyway, I am hoping that with the semester starting up in a few weeks I can get back into the groove.

 

The simplest things are often the hardest.

So, I had a few more successes last week.

I got CoreAudio/AudioUnit Swift code running in an Xcode Playground. This was a big help in trying to understand how AudioUnits work, and how to work with C pointers in Swift.
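
Reconstructed from memory, the Playground experiment boiled down to something like this: describe the unit you want, let CoreAudio find it, and pass everything around with &, which Swift bridges to the C API’s UnsafeMutablePointer parameters.

    import AudioToolbox

    // Ask CoreAudio for the output (remote I/O) AudioUnit.
    var description = AudioComponentDescription(
        componentType: kAudioUnitType_Output,
        componentSubType: kAudioUnitSubType_RemoteIO,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0,
        componentFlagsMask: 0)

    var outputUnit: AudioUnit? = nil
    if let component = AudioComponentFindNext(nil, &description) {
        // &outputUnit becomes an UnsafeMutablePointer<AudioUnit?> under the hood.
        AudioComponentInstanceNew(component, &outputUnit)
    }

    if let unit = outputUnit {
        AudioUnitInitialize(unit)
        AudioOutputUnitStart(unit)   // silent until a render callback is attached
    }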

I also got a very simple CoreLocation application running, which retrieved the compass direction. [That is probably the easiest thing I have done so far.]
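
For comparison, this is roughly all the compass part takes [a sketch with a made-up class name; the delegate callback and heading property are the standard CoreLocation API]:

    import CoreLocation

    final class CompassReader: NSObject, CLLocationManagerDelegate {

        private let manager = CLLocationManager()

        override init() {
            super.init()
            manager.delegate = self
            if CLLocationManager.headingAvailable() {
                manager.startUpdatingHeading()
            }
        }

        // Called whenever the device's heading changes.
        func locationManager(_ manager: CLLocationManager, didUpdateHeading newHeading: CLHeading) {
            print("Magnetic heading: \(newHeading.magneticHeading)°")
        }
    }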

Now I have to vent just a little. In developing this app for iOS, there is one thing about building for an Apple platform that completely baffles me.

I really wish Apple’s documentation for developers were a lot better. I cannot think of a single thing I have tried to do on iOS or macOS so far for which I did not have to look up a tutorial or an example from a third-party source.

Compared to Microsoft’s MSDN, Apple’s developer documentation just seems pretty thin.

Sure, Apple supplies some guides and sample code. However, I tend to find Apple’s choice of sample code examples to be a little esoteric, especially when I just need a really brain-dead example to help me grasp the concept of how I should use a framework.

I don’t know, I just find it funny.

10,000 more baby steps to go.

OK, the 4th of July threw me off, but after two weeks there is a lot to update about.

First, I finally got a basic Structure app working on my iPhone. Photos below.

Camera image of sofa on left, Structure Sensor depth image on right.

Camera image of desk and computer on left, Structure Sensor depth image on right.

Camera image of backpack on chair on left, Structure Sensor depth image on right.

The biggest challenge was getting used to the YpCbCr color space. Also, there was a small bug in the Structure SDK that caused the synchronized frame calls not to work. However, considering my end goals, it was not critical to use synchronized frames yet.
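
What finally made the color space click for me is that the conversion back to RGB is just a small bit of arithmetic. Below is the common full-range BT.601 approximation [the camera’s video-range pixel format scales and offsets things slightly differently, so treat this as illustrative]:

    // Convert one YpCbCr pixel (each component 0...255) to RGB, full-range BT.601.
    func rgbFrom(y: Double, cb: Double, cr: Double) -> (r: Double, g: Double, b: Double) {
        let r = y + 1.402    * (cr - 128.0)
        let g = y - 0.344136 * (cb - 128.0) - 0.714136 * (cr - 128.0)
        let b = y + 1.772    * (cb - 128.0)
        return (r, g, b)   // clamp each channel to 0...255 before displaying
    }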

Also, I was able to get a very basic audio and MIDI application running on the iPhone. This was crazy because I had to pull information from multiple sources to figure out how to do it. The current Apple examples push AVFoundation, but I needed CoreAudio.
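
For the record, the CoreAudio version of “play a MIDI note” that I pieced together looked roughly like the sketch below: an AUGraph with Apple’s sampler feeding the remote I/O unit, then a raw note-on message. [This is a reconstruction of the approach, not the project’s actual code.]

    import AudioToolbox

    // Describe the two units we need: the built-in sampler and the hardware output.
    var samplerDescription = AudioComponentDescription(
        componentType: kAudioUnitType_MusicDevice,
        componentSubType: kAudioUnitSubType_Sampler,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0, componentFlagsMask: 0)
    var outputDescription = AudioComponentDescription(
        componentType: kAudioUnitType_Output,
        componentSubType: kAudioUnitSubType_RemoteIO,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0, componentFlagsMask: 0)

    var graph: AUGraph? = nil
    var samplerNode = AUNode()
    var outputNode = AUNode()

    NewAUGraph(&graph)
    if let graph = graph {
        AUGraphAddNode(graph, &samplerDescription, &samplerNode)
        AUGraphAddNode(graph, &outputDescription, &outputNode)
        AUGraphOpen(graph)
        AUGraphConnectNodeInput(graph, samplerNode, 0, outputNode, 0)

        var samplerUnit: AudioUnit? = nil
        AUGraphNodeInfo(graph, samplerNode, nil, &samplerUnit)

        AUGraphInitialize(graph)
        AUGraphStart(graph)

        // Raw MIDI note-on: status 0x90 (channel 1), middle C, velocity 100.
        if let samplerUnit = samplerUnit {
            MusicDeviceMIDIEvent(samplerUnit, 0x90, 60, 100, 0)
        }
    }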

The point is, I am just glad I have been able to accomplish baby steps in both areas: using the sensor and generating sound. Now there are only about 10,000 more baby steps to go.

Another 2 forward, 1 step back.

My current grumpiness with Xcode: you cannot refactor Swift code.

One would think that is a pretty basic feature an IDE should have, but as of this writing it does not exist.

I discovered it when I tried to refactor some of my code for SEAR-RL last week. Xcode threw up an alert warning that it could only do that for C and Obj-C code. So I decided to press ahead and just manually refactor everything. I think that was a bad idea.

I started having a lot of issues in my UI, like the application just could not find certain assets [at least, that is the gist of the warnings Xcode kept giving me]. Unfortunately, since Apple and Xcode try to take care of a lot of stuff ‘behind the scenes’ for the developer, I could not figure out what I had disconnected [broken] with my refactoring in order to fix the problems. So I had to trash all my code and start over with a new project.

I should not be surprised; it seems every time I start a new coding endeavor, I have about two, three, or even four false starts before I actually start getting anywhere. However, that doesn’t stop it from being annoying.

The upside is that these false starts usually help me develop a better idea of how to structure the underlying project.

The last two weeks…

OK, I admit, I missed a week of updates. I am lame, but I think I made some positive progress last week.

Mainly, I have started trying to code up a little bit of the project. Right now that means getting the Structure Sensor and the on-board camera working, for future debugging.

Fortunately, I found a demo on GitHub from Adrian Smith, where he recreated the Structure View demo in Swift. Comparing the demo to Occipital’s own Viewer app provides a good roadmap for how to translate Objective-C code into Swift. Also, recreating the demo has finally given me a start on understanding Apple’s iOS.

The only real downside was that I wasted a little bit of time trying to integrate Google VR into the project. I thought implementing Google VR might help with creating the debug screens. However, in the end, even though I got Google VR to work, it was a waste of time; there was really nothing to gain from using it. Fortunately, I was smart enough to create a Git branch for experimenting with Google VR, so when I decided to throw everything away my repository was left in a pretty clean state. This was good, because one has to use CocoaPods for Google VR, and CocoaPods does sort of make a mess of your project structure until you understand how it works. Being able to get rid of that mess helped save some anxiety.

Also, in the time since my last update, I found a pretty good open-source Kanban-based task-tracking application called Kanboard. In fact, it is nice to see a simple web app that works better than most of the big names out there. And it ended up being easy to set up on my own website [this one].

Baby steps, but walking.

It looks like the start of each week is turning into documentation time for the previous week.

Honestly, that is not really a terrible thing, because when I think about it, it is a pretty smart and smooth way to get back into the flow, as opposed to just jumping into code or design. At least having to write things up first allows a moment of retrospection without having to dive straight back into hard coding.

Anyway, I was able to start a few UML diagrams for the project. I have the core use case diagram complete, and I have started some of the activity diagrams. Most importantly, though, I was also able to build a SoundFont sample bank to use as a sound source for SEAR-RL.

Now, my long-term goal is to build a proper AudioUnit synthesizer for the project. However, after years of doing other projects, I know that can be a dangerous rabbit hole. Being a big fan of prototyping, I know it is just better to get something that works up and running quickly, so building a sample bank was the easiest way to go. Still, I promise the AU will happen.

Still, I have to say I think I was smart about how I made the sample bank. I wrote a Python 3 script that uses SoX to generate the waveforms and an SFZ file, and then imported that into Polyphone to build the actual sound bank.
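
On the iPhone side, the finished bank plugs straight into AVFoundation’s sampler. Here is a sketch of the loading step [“SEARBank.sf2” and the program number are placeholders for whatever the Python/SoX/Polyphone pipeline actually produced]:

    import AVFoundation
    import AudioToolbox   // for the default bank constants

    let engine = AVAudioEngine()
    let sampler = AVAudioUnitSampler()

    engine.attach(sampler)
    engine.connect(sampler, to: engine.mainMixerNode, format: nil)

    if let bankURL = Bundle.main.url(forResource: "SEARBank", withExtension: "sf2") {
        // Load the first melodic preset from the SoundFont.
        try? sampler.loadSoundBankInstrument(
            at: bankURL,
            program: 0,
            bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),
            bankLSB: UInt8(kAUSampler_DefaultBankLSB))
        try? engine.start()
        sampler.startNote(60, withVelocity: 100, onChannel: 0)   // middle C
    }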

Finding the right tools for the job.

So, last week was the first ‘official’ week of working on my Masters project.

In my humble opinion, it was a tepid start. Part of the problem is that I want to track the project with scheduling and design documents, the way one properly would in the real world. However, the software tools available to help with planning and software design are pretty diverse. Add to that the fact that I am really a software development team of one, and keeping that kind of documentation becomes somewhat odd.

First, let’s talk about software design tools. Sure, there are quite a few UML design tools out there, and some of them are open source or free. The problem is that they tend to be ‘drawing’ applications: they try to make the act of diagramming easier, but they really do not do much else. The only application I found that allowed me to do a bit of everything, where a person can create a brainstorm or mind map and develop its elements into deeper UML diagrams and into planning, was Visual Paradigm. However, even its subscription model is sort of expensive, so I had to bite the bullet and get a short subscription.

Planning software was even crazier. I prefer Kanban over Scrum, and I like Evidence Based Scheduling [EBS] for task time tracking. However, there is not a lot of software that supports both Kanban and EBS, so I settled on a little OS X native application called InShort, which focuses on using PERT.

Neither is perfect, but they both do most of what I need and will work.

SEAR-RL Version 2 [or how to build your own Microsoft Hololens, VRVANA Totem, MagicLeap, and/or [most probably] Apple’s VR Headset* today.]

Considering how much technology has improved in the past couple of years, and the fact that I am coming to the end of my Masters this year, I have decided to try to build a better [more portable] version of SEAR again.

This is a brain-dead hack, but I am posting it here in case it isn’t obvious to some people. If you are itching to do some augmented reality prototyping and cannot wait for the Microsoft Hololens, VRVANA Totem, MagicLeap, [or Apple’s VR/AR headset*] to be released, this may be a solution. Plus, as a bonus, it is very mobile.

You need 4 things.

Items for home-brew augmented reality.

You need:

  • iPhone 6
  • Google Cardboard v2
  • Occipital Structure Sensor
  • a mounting bracket for the Structure Sensor

 

Now, you might be able to use an iPhone 5, but you will probably have to build your own bracket [more on that in a bit].

You can build your own Google Cardboard from a pizza box; instructions for that are easy to find online.

However, most of those plans are for Cardboard v1, which I do not recommend because of the magnet issue. [Seriously, Google, what were you thinking?] But if you don’t mind paying about $20, you can buy a simple kit all over the internet. Amazon has a ton.

The Structure Sensor is from Occipital. It is the same Kinect-like sensor that PrimeSense made for Microsoft and the Kinect 1.

Finally, the mounting bracket. Occipital provides a CAD kit if you want to build your own bracket and have access to a 3D printer. They also held a contest in partnership with Shapeways [a 3D print-on-demand company] for the best user-created bracket. You can purchase some of the winning designs here.

Although I really liked Grand Prize winner Max Tönnemann’s design, I decided to go with Brian Smith’s design. The reason is that Brian’s design is just slightly shorter. Unlike Max’s design, in which the Structure Sensor is mounted on top of the bracket, Brian’s design mounts the sensor inside the bracket, which makes the overall height lower. This translates to a center of gravity closer to your face.

VR and AR headsets are already weird enough to wear, so you want to decrease the front-heaviness as much as possible. Still, I do plan on checking out Max’s design at some point in the future.

Now for the magic.

You want to cut holes in the Google Cardboard to match the holes in the bracket for the sensor and the phone’s camera, as shown:

Holes cut into the Google Cardboard to fit the Structure Sensor.

 

Now, you will need to make the hole for the iPhone camera slightly bigger than the bracket’s hole. Otherwise, you are not going to get the full field of view for the iPhone camera.

Once you do that, put the sensor into the bracket.

Structure Sensor mounted in the bracket.

After that, place the sensor/bracket assembly in the Cardboard, with the sensor going through the larger hole you just cut. Then close up the Google Cardboard.

The result should look like this:

Structure Sensor mounted in a modified Google Cardboard v2.

Yes, the front of the Cardboard will jut out, but this is good, as you need the extra room to attach the Structure Sensor cable.

At this point you are ready to go.

Just remember that the entire headset is now a bit heavier, so you may have to reinforce the velcro straps if you use them to hold the Cardboard to your head. Otherwise, you are done.

Hope this helps people with AR Prototyping.

As for me, I have to rewrite some code… again.  *Sigh*

Peace

Stan

 

* It is total speculation whether Apple is making a VR/AR headset and what it will be like. However, if they are, I feel it is a safe bet it will be something like this, just with a wider field of view.

** Occipital, if your sales spike because of this, you owe me at least a second sensor [or a trip to Boulder, I miss Colorado].