Behold life on iOS 10.

My apologies if this report is a little late. I have been under the weather the past 24 hours and only now have a clear head [so additional apologies if this report is rough]. In general, a lot of little things happened this week.

First, I think my idea to scan the depth data along a spiral path is going to work. Before I got too far into the idea, I spoke to my friend in Australia, Titus Tang, who has done a lot of similar visualization work. He was able to offer some good feedback on my algorithm idea and on how to manage retrieving points from the depth field for analysis. Titus's biggest suggestion was that I rescan the closest points first on each consecutive pass. Fortunately, I think that can be handled with a priority queue, but I am still trying to refine a couple of details in my head before I put anything to code.
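To make the idea concrete, here is a minimal sketch of the priority-queue part, assuming each scanned sample carries its depth [in meters] and its index into the depth buffer. DepthSample and MinHeap are names I made up for illustration; this is not code from the project or the Structure SDK.

```swift
/// A scanned point: where it sits in the depth frame and how far away it is.
struct DepthSample {
    let index: Int      // position in the depth buffer
    let depth: Float    // distance from the sensor, in meters
}

/// A small binary min-heap so the shallowest (closest) samples pop first,
/// which is exactly the "rescan the closest points first" ordering.
struct MinHeap {
    private var items: [DepthSample] = []

    var isEmpty: Bool { return items.isEmpty }

    mutating func push(_ sample: DepthSample) {
        items.append(sample)
        var child = items.count - 1
        while child > 0 {
            let parent = (child - 1) / 2
            if items[child].depth >= items[parent].depth { break }
            let tmp = items[child]; items[child] = items[parent]; items[parent] = tmp
            child = parent
        }
    }

    mutating func pop() -> DepthSample? {
        guard let closest = items.first else { return nil }
        items[0] = items[items.count - 1]
        items.removeLast()
        var parent = 0
        while true {
            let left = 2 * parent + 1, right = left + 1
            var smallest = parent
            if left < items.count && items[left].depth < items[smallest].depth { smallest = left }
            if right < items.count && items[right].depth < items[smallest].depth { smallest = right }
            if smallest == parent { break }
            let tmp = items[parent]; items[parent] = items[smallest]; items[smallest] = tmp
            parent = smallest
        }
        return closest
    }
}

// Each pass: push the samples from the previous pass, then pop to revisit
// the closest points before anything else.
var queue = MinHeap()
queue.push(DepthSample(index: 42, depth: 1.8))
queue.push(DepthSample(index: 7, depth: 0.6))
while let next = queue.pop() { print("revisit index \(next.index) at \(next.depth) m") }
```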

Still, I feel like I have enough of the scanning system that I can move on to building the audio tree and linking its nodes to each region. This is actually the scariest part for me. Dealing with audio programming is always a little nerve-racking for me: it is basically real-time programming. With graphics and most data programming, one is able to freeze the state of the machine to examine and debug it. One cannot really do that with audio. Audio only exists in the context of passing time. You either code everything right and hear something, or you do not. Plus, this will be the first time I use AVFoundation, so I am not sure what to expect, even though it does seem it will be easier than using straight Core Audio.
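For what it is worth, the rough shape I expect the AVFoundation side to take is an AVAudioEngine with one player node per region, all feeding into the main mixer. The region count and the wiring below are placeholder guesses at the design, not the final audio tree.

```swift
import AVFoundation

let engine = AVAudioEngine()
let regionCount = 8                                // placeholder number of regions
var regionPlayers: [AVAudioPlayerNode] = []

for _ in 0..<regionCount {
    let player = AVAudioPlayerNode()
    engine.attach(player)                          // Swift 3 spelling of attachNode(_:)
    // Route every region's player into the main mixer; nil uses a default format.
    engine.connect(player, to: engine.mainMixerNode, format: nil)
    regionPlayers.append(player)
}

do {
    try engine.start()                             // nodes only render while the engine runs
    regionPlayers.forEach { $0.play() }
} catch {
    print("Could not start audio engine: \(error)")
}
```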

Finally, I moved my development iPhone to iOS 10 this week. This transition actually was not all that painful because I made the smart move of shifting development to Swift 3.0 and Xcode 8.0 a couple of weeks ago. When I recompiled for the new OS, I did not have any problems, which was a relief. Still, trying to develop for and on 'bleeding edge' technology is challenging. The main reason is that the documentation for much of iOS and Swift's API does not seem to keep up. For example, in trying to build the scanning point set I mentioned above, I started using Grand Central Dispatch, iOS's concurrency technology. On the plus side, Apple greatly simplified the syntax for creating queues [high-level threads] in Swift 3.0. On the down side, there are just not many examples online of the correct usage of the new syntax, so I had to spend a bit of time cobbling together answers, and I am still not sure I am doing everything 100% correctly.
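For anyone else hunting for examples, this is the Swift 3 Grand Central Dispatch syntax I ended up cobbling together, as best I can tell; the queue label and the work inside the closures are just placeholders.

```swift
import Dispatch

// Swift 2.x: let queue = dispatch_queue_create("…", DISPATCH_QUEUE_SERIAL)
// Swift 3.0:
let scanQueue = DispatchQueue(label: "com.example.sear-rl.scan", qos: .userInitiated)

scanQueue.async {
    // ... scan the depth frame off the main thread ...
    let result = "scan finished"                   // stand-in for the real scan output

    DispatchQueue.main.async {
        // Hop back to the main queue before touching any UI.
        print(result)
    }
}
```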

In other, tangentially related tasks, I also started writing a first draft of the final report. This is going to be an interesting report to write, if only because I am afraid I might be tempted to write too much. I have been working on this idea for so long that I constantly find myself having to exercise restraint, resisting the urge to tell my life's story instead of just focusing on the project. I have also begun to apply for positions after school. It is always a little odd when I see positions describing projects similar to what I am working on now. I do apply to them, but I wonder if I will stand out among all the other applicants.

Starting to feel like things are happening.

To be honest, I was a little excited this week. After a bunch of preliminary work, and 2-3 years of CS graduate school, I finally got to the point where I am working on what I consider one of the core parts of SEAR-RL. Specifically, most of this week I focused on an idea I had for a more efficient way to analyze the 3D data from the Structure Sensor. If it works, the running time of the algorithm would be less than O(n) [where n is the number of points coming from the scanner]. That in itself should help, since I want to make this a real-time application.
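As a rough illustration of why the running time stops scaling with n, here is a toy version of the spiral sampling; the resolution, step count, and spacing are made-up values, and only the sampled indices ever get touched rather than every depth pixel.

```swift
import Foundation

let width = 320, height = 240                                // QVGA-sized depth frame
let depth = [Float](repeating: 1.5, count: width * height)   // fake, flat depth data

/// Walk a fixed number of steps along a spiral from the image center and
/// return the buffer indices of only those pixels.
func spiralSampleIndices(steps: Int, spacing: Double) -> [Int] {
    var indices: [Int] = []
    let cx = Double(width) / 2, cy = Double(height) / 2
    for i in 0..<steps {
        let t = Double(i) * 0.35                    // angle grows each step
        let r = spacing * t                         // radius grows with angle -> spiral
        let x = Int(cx + r * cos(t)), y = Int(cy + r * sin(t))
        guard x >= 0, x < width, y >= 0, y < height else { continue }
        indices.append(y * width + x)
    }
    return indices
}

let samples = spiralSampleIndices(steps: 500, spacing: 1.2)
// 500 samples versus 76,800 pixels: per-frame work now depends on the step
// count, not on n.
let closest = samples.map { depth[$0] }.min()
print("closest sampled depth: \(closest ?? 0)")
```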

The most troublesome parts so far actually have not had anything to do with the idea itself, but with how to debug it. Specifically, I have been trying to create a visual overlay on the data views I receive from the sensor, so I can see that the points I am scanning are the points I actually want to scan. To do this I have had to dive into iOS GUI code once again to figure out how to present the data. However, this task has not been as ominous as I feared, because Apple's introduction of Xcode Playgrounds has been a significant help. Playgrounds are Apple's REPL-style system for experimenting with code. They have helped because I have been able to quickly experiment with the iOS GUI and see the results, which let me focus on this specific part of my project without worrying about accidentally breaking another part. But even with Playgrounds, I have to say that getting answers from Apple's documentation is not the best experience. I keep having to search the web to cross-reference things that I feel Apple's documentation should answer on its own.
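As an example, the kind of quick Playground experiment I have been running looks something like the sketch below; the view size and the fake point pattern are stand-ins for the real depth overlay.

```swift
import UIKit
import PlaygroundSupport

/// A plain view that draws each sampled point as a small red dot.
class OverlayView: UIView {
    var points: [CGPoint] = []

    override func draw(_ rect: CGRect) {
        UIColor.red.setFill()
        for p in points {
            UIBezierPath(ovalIn: CGRect(x: p.x - 2, y: p.y - 2, width: 4, height: 4)).fill()
        }
    }
}

let overlay = OverlayView(frame: CGRect(x: 0, y: 0, width: 320, height: 240))
overlay.backgroundColor = .black
overlay.points = (0..<200).map { (i: Int) -> CGPoint in
    // Fake spiral-ish points, just to check that the drawing code works.
    let t = Double(i) * 0.35
    return CGPoint(x: 160 + 1.2 * t * cos(t), y: 120 + 1.2 * t * sin(t))
}

// Hand the view to the Playground's live view to see the result immediately.
PlaygroundPage.current.liveView = overlay
```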

But aside from that, yes, there are still a few little issues, but the fact that I will soon get to see whether this idea works has me excited. I just find it funny that everything from linear algebra to algorithms has been helpful in figuring out this one problem, so I guess coming back to graduate school was not a terrible idea.

Another new iPhone.

I think I finally finished the last of the preliminary development coding. The main issue standing in my way was figuring out how to debug the project while testing. Occipital's Structure Sensor occupies the Lightning port when attached to the iPhone. This is a problem for development because Xcode needs to be connected to the iDevice via the Lightning port in order to debug. Fortunately, Occipital realized this and added a class, STWirelessLog, which rebroadcasts debug messages over the wireless network; one just needs to run 'netcat' on a nearby machine to receive the messages. The problem for me is that STWirelessLog, as written, requires a hardcoded IP address for the receiver. Since I am moving around on campus, that IP address is always changing, so I added some helper functions that let me change the receiving netcat machine on the fly.
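A minimal sketch of what such a helper could look like [not my literal code]: the receiver's IP lives in UserDefaults so it can be swapped on the fly, and the actual STWirelessLog call is only paraphrased in a comment, since the exact signature should come from Occipital's SDK headers.

```swift
import Foundation

struct WirelessLogConfig {
    static let addressKey = "SEARRL.wirelessLogAddress"   // illustrative defaults key
    static let port: Int32 = 4999                         // whatever port netcat listens on

    /// Store a new receiver IP, e.g. after moving to a different campus network.
    static func setReceiver(address: String) {
        UserDefaults.standard.set(address, forKey: addressKey)
    }

    /// Start rebroadcasting debug messages to whatever IP is currently stored.
    static func startLogging() {
        guard let address = UserDefaults.standard.string(forKey: addressKey) else {
            print("No wireless log receiver configured yet")
            return
        }
        // Paraphrased Occipital call; check the Structure SDK for the real signature:
        // STWirelessLog.broadcastLogsToWirelessConsole(atAddress: address, usingPort: port, ...)
        print("Would broadcast logs to \(address):\(port)")
    }
}

// On the phone:  WirelessLogConfig.setReceiver(address: "10.0.1.23")
//                WirelessLogConfig.startLogging()
// On the laptop: nc -lk 4999
```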

Similarly, another problem related to the Lightning port has to do with Apple's announcement of its new iPhone this week. The biggest news about the new iPhone was probably the removal of the headphone jack. So now, if you want to listen to music with headphones on future iPhones, you either have to use a Lightning-based pair of wired headphones or wireless headphones. This could be problematic for future-proofing my project. Since I need one Lightning port for the Structure Sensor, I am immediately short a port for audio. Belkin has announced an adapter called the "Lightning Audio + Charge RockStar™", which initially seems like a Lightning port doubler. However, it remains to be seen whether it actually works that way, since it is advertised as a way to listen to music through one port while the device charges through the other. If it does not allow two Lightning devices to be used at the same time, then I am left with wireless headphones, and I am skeptical about that approach. Most wireless headphones use some form of audio compression to save bandwidth when passing music from the device to the headphones, and I have not been impressed with the quality of the Bluetooth headphones I have listened to in the past. Since audio is such a critical aspect of this project, and people need to discern even the slightest differences in frequency, I am worried that Bluetooth headphones would degrade too much audio information.

Fortunately, I am safe for the moment since my current development platform is the iPhone 6 and it still has a traditional headphone jack. However, I probably should keep these issues in mind if I plan to do anything with this project after this semester.

You can’t delegate yourself.

This week, I tried to kick into high gear with building SEAR-RL, since it was the first full week of the semester. Strangely, the main focus of my work was refactoring a lot of the experimental and prototype code I wrote during the summer. Specifically, a lot of my code for using the Structure Sensor lived in the controller of the main MVC of what I had so far, so I decided to partition that code off into a proper model. The challenging part is that Occipital followed Apple's style of accessing non-immediate systems: the classes that access the Structure Sensor work as singletons.

I have no problem with the use of singletons. I know there is some controversy among programmers over the use of singletons in object-oriented programming, but I approve of them. Still, the downside is that singletons cannot really be subclassed, so my idea of creating one master class to be the hub between the sensor and my code was not going to happen. To get around this, I created an intermediate class called STSensorManagement. I designed it as a singleton too, which, on second thought, I am not sure why I did. I think it was the naïve idea that to play friendly with STSensorController [the Occipital-supplied class to access the sensor] it needed to be the same. [I think once things stabilize in the future I will revisit that.]
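In spirit, the arrangement looks something like the sketch below: one shared hub that owns the Occipital side of things so the view controller never talks to STSensorController directly. The class and method names here are simplified stand-ins, and the actual Occipital calls are only indicated in comments.

```swift
import Foundation

final class SensorHub {                          // stand-in for STSensorManagement
    static let shared = SensorHub()              // the singleton instance
    private init() {}                            // nobody else can create one

    private(set) var isStreaming = false

    func startSensor() {
        // The real class grabs the SDK's own singleton here, roughly:
        //   let controller = STSensorController.sharedController()
        // then initializes the sensor and starts streaming, forwarding
        // frames onward through its protocols.
        isStreaming = true
    }

    func stopSensor() {
        // Mirror of startSensor(): stop streaming and release the sensor.
        isStreaming = false
    }
}

// The view controller only ever asks the hub:
SensorHub.shared.startSensor()
```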

The trickiest thing about developing this helper class to manage STSensorController communication is that I had to create a few protocols [class interfaces in Swift] to facilitate communication between the controller and this new model. The reason I mention this here is that dealing with delegation [or function pointers in C/C++] is a topic and pattern I have only recently come to feel comfortable with. Thinking back on my software engineering education and reviewing resources online, the concept of delegation [and, relatedly, forwarding] never seems to be covered as thoroughly as it should be, in my opinion. There seems to be an assumption that understanding delegation and using function pointers is just implicit, especially after one learns about pointers and systems-level programming. I find this odd considering that delegation is a programming pattern found everywhere, for all sorts of things. My difficulty was that it took me a long time to understand that when you delegate a function you are just passing a signature, and that someone else will call the function. I could never let go of the idea that something could happen outside of a thread I was controlling.
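To show what finally made it click for me, here is a small, self-contained version of the pattern, with made-up protocol and type names rather than the actual SEAR-RL ones: the model only keeps a reference to "something conforming to the protocol" and calls through it later, without ever knowing who that is.

```swift
import Foundation

protocol DepthFrameDelegate: AnyObject {          // spelled `: class` in Swift 3
    func didReceiveDepthFrame(closestDepth: Float)
}

final class SensorModel {
    weak var delegate: DepthFrameDelegate?        // weak to avoid a retain cycle

    func simulateFrame() {
        // When a frame arrives, the model just calls whatever was handed to it.
        delegate?.didReceiveDepthFrame(closestDepth: 0.8)
    }
}

final class ControllerStandIn: DepthFrameDelegate {
    func didReceiveDepthFrame(closestDepth: Float) {
        print("closest obstacle at \(closestDepth) m")
    }
}

let model = SensorModel()
let controller = ControllerStandIn()
model.delegate = controller                       // only a "signature" changes hands
model.simulateFrame()                             // prints: closest obstacle at 0.8 m
```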

Anyway, when all was said and done, it was still a successful refactoring; organizing things now should make things better later on.