October and the Technical Debt Monster.

So October arrives, along with the Technical Debt Monster.

I was ready to finally jump into the audio system this week. Instead, the Technical Debt Monster decided to turn up. Fortunately, it wasn’t as savage as ones I have seen on projects in the past. This one ended up being an annoying little yapping dog as opposed to a rampaging elephant. Specifically, it showed up as I was trying to transfer my work on the scanning system from the Xcode playgrounds, where I had been developing it, into the actual project. In doing so I learned several things.

First, in terms of execution speed, Xcode playgrounds and Swift’s REPL are nowhere near as fast as running Swift code on the iPhone or even the iOS simulator. I was surprised by this after many years of writing Python code in Python’s REPL environments. I think the correct observation is that I took the speed of Python’s REPLs for granted and just assumed all REPLs were similar. This actually ended up being a good thing, because it helped me realize that some multi-threading I had added to my code to speed things along in the Playground was not needed. However, just because my code worked in the Playground does not mean it was ready for the actual project.

Second, I am seriously beginning to think the second law of thermodynamics does not apply to my coding. In all the software development I have done, I have noticed I have a habit of over-designing code before I write anything. In general, this isn’t a bad thing; I like being organized and not wasting effort. However, my final code solutions always seem to be significantly simpler than what I expect them to be. Moving the scanning system over was no different. I realized I had too much redundant code, and seeing the code run in the final implementation helped me notice bugs that were not apparent in the Playground. For example, I had not properly followed the Model-View-Controller paradigm in the Playground because I never created a proper controller class; I had just used the main loop as the controller. As a result, my functions were referencing parts of the model and views that were valid in the Playground, but not in a true application.
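To be concrete about the shape of the fix [the names below are generic illustrations, not my actual classes]: the controller owns the model and the view, and the functions that used to float around the Playground’s main loop become methods on the controller.

```swift
import UIKit

// Generic MVC shape (illustrative names, not the project's real classes):
// the controller owns both the model and the view, instead of a Playground
// main loop reaching into each of them directly.
struct ScanModel {
    var pointCount = 0
}

final class ScanView: UIView {
    func show(pointCount: Int) {
        // draw / update labels for the current scan state
    }
}

final class ScanViewController: UIViewController {
    private var model = ScanModel()
    private let scanView = ScanView()

    override func viewDidLoad() {
        super.viewDidLoad()
        view.addSubview(scanView)
    }

    // What used to be free functions in the Playground's main loop
    // become methods that go through the controller's own references.
    func pointsScanned(_ count: Int) {
        model.pointCount += count
        scanView.show(pointCount: model.pointCount)
    }
}
```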

In the end, once I fixed all the bugs, I realized my code was much simpler than what I had envisioned. I know this because, wanting to document everything in the project properly, I decided to go back and update my original UML documentation for the designs. After making the changes I noticed that the UML designs were much clearer.


The third thing I noticed this week was not so much a lesson as a realization. First, my idea to use a priority queue to keep track of the scanning points worked. Actually, I ended up creating a priority queue of priority queues. It seems a little weird, and it was a little confusing to design, but it does seem to be doing the job well. Second, in order to debug the scanning points, I had to write a simple scaling linear transformation to convert from the model data to the view. I know neither of these things seems particularly unique, but what got me is seeing something we learned as theory in school become relevant and practical. I don’t know why, but seeing the linear transformation work in particular just seemed like magic. I think it is because, along with over-designing my code, I have an annoying need to test every line of code I write, starting with something small and working bigger [a good habit, but tedious at times]. With the set of points I didn’t do that; I just ran it and it worked. If there is a lesson, I think it is just the universe telling me to have a little more faith in my code and in all the theory I have learned in computer science.
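For anyone curious, the scaling transformation really is as simple as it sounds. Here is a rough sketch of the idea [hypothetical names and sizes, not the project’s actual code]: a per-axis scale plus an offset maps a depth-map coordinate into view space.

```swift
import CoreGraphics

// Maps points from the depth map's coordinate space into a view's
// coordinate space with a per-axis scale plus an offset.
// (Hypothetical helper; the project's real code differs in the details.)
struct ModelToViewTransform {
    let scaleX: CGFloat
    let scaleY: CGFloat
    let offset: CGPoint

    init(modelSize: CGSize, viewBounds: CGRect) {
        scaleX = viewBounds.width / modelSize.width
        scaleY = viewBounds.height / modelSize.height
        offset = viewBounds.origin
    }

    func viewPoint(for modelPoint: CGPoint) -> CGPoint {
        return CGPoint(x: modelPoint.x * scaleX + offset.x,
                       y: modelPoint.y * scaleY + offset.y)
    }
}

// Example: a 640x480 depth frame drawn into a 320x240 overlay.
let transform = ModelToViewTransform(
    modelSize: CGSize(width: 640, height: 480),
    viewBounds: CGRect(x: 0, y: 0, width: 320, height: 240))
let p = transform.viewPoint(for: CGPoint(x: 320, y: 240))   // (160.0, 120.0)
```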


Finally, I think I am a little behind where I was hoping to be in the project. Looking back, I have to admit my scheduling was a little too ambitious. I think part of the reason is that when I was thinking through the schedule I broke one of my own rules. Years ago I did an internship at KET, and my manager there taught me a really elegant rule for planning and scheduling that really does seem to work out more often than not. His rule was: when calculating the schedule and supplies for a project, take whatever you calculate as fairly optimal and add a third to everything: time, resources, whatever. About 95% of the time, your totals will work out to exactly what is needed. In my anecdotal observations of projects, his idea seems to be right on the money. This semester I forgot to do that, for a variety of reasons, most of them rooted in ambition. However, when I rethink everything with the 1/3 rule in mind, my scheduling does seem about right. That makes me think a friend’s observation, that I might be putting too much pressure on myself to get through it, is correct. Fortunately, with October and Autumn [my favorite month and season] arriving, I feel like I will probably make up any time I have lost. I always seem to work a lot better once the oppressive summer has finally broken.

Behold life on iOS 10.

My apologies if this report is a little late. I have been under the weather the past 24 hours and only now have a clear head [so additional apologies if this report is rough]. In general, a lot of little things happened this week.

First, I think my idea to scan the depth data along a spiral path is going to work. Before I got too far with the idea, I spoke to my friend in Australia, Titus Tang, who has done a lot of similar visualization work. He was able to offer some good feedback on my algorithm idea and on how to manage retrieving points from the depth field for analysis. The biggest suggestion Titus made was that I rescan the closest points first on each consecutive pass. Fortunately, I think that can be handled using a priority queue, but I am still trying to refine a couple of details in my head before I put anything to code.
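To sketch what I have in mind [hypothetical types; Swift has no built-in priority queue, and a proper heap would be the real choice, but a sorted array keeps the idea readable]:

```swift
/// A tiny "closest first" priority queue for scan points.
/// Sketch only; a binary heap would be the real implementation.
struct ScanPoint {
    let x: Int
    let y: Int
    let depth: Float   // distance from the sensor, in metres
}

struct ClosestFirstQueue {
    // Kept sorted by descending depth, so the nearest point sits at the
    // end of the array and popClosest() is a cheap removeLast().
    private var points: [ScanPoint] = []

    mutating func push(_ p: ScanPoint) {
        let index = points.index(where: { $0.depth < p.depth }) ?? points.count
        points.insert(p, at: index)
    }

    mutating func popClosest() -> ScanPoint? {
        return points.isEmpty ? nil : points.removeLast()
    }
}

// Each pass: pop and rescan the nearest points first, then push the pass's
// new samples so the next pass again starts with whatever is closest.
```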

Still, I feel like I have enough of the scanning system that I can move on to building the audio tree and linking it to each region. This is actually the scariest part for me. Dealing with audio programming is always a little nerve-racking for me. It is basically real-time programming. With graphics and most data programming, one is able to freeze the state of the machine to examine and debug it. One cannot really do that with audio. Audio only exists in the context of passing time. You either code everything right and hear something, or you do not. Plus, this will be the first time I use AVFoundation, so I am not sure what to expect, even though it does seem it will be easier than using straight CoreAudio.

Finally, I moved my development iPhone to iOS 10 this week. This transition actually was not all that painful, because I made the smart move of shifting development to Swift 3.0 and Xcode 8.0 a couple of weeks ago. When I recompiled for the new OS, I did not have any problems, which was a relief. Still, trying to develop for and on ‘bleeding edge’ technology is challenging. The main reason is that the documentation for much of iOS and Swift’s API does not seem to keep up. For example, in trying to build the scanning point set I mentioned above, I started using Grand Central Dispatch, iOS’s concurrency technology. On the plus side, Apple greatly simplified the syntax for creating queues [high-level threads] in Swift 3.0. On the downside, there are just not many examples online of the correct usage of the new syntax. So I had to spend a bit of time trying to cobble together answers, and I am still not sure I am doing everything 100% correctly.
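For the record, this is the kind of Swift 3 syntax I mean [the queue label and the work shown are made up for illustration]; the old dispatch_queue_create/dispatch_async C calls become object-style calls:

```swift
import Dispatch

// Swift 3 GCD: a serial background queue for scan processing.
// (The label and the work here are illustrative, not my actual code.)
let scanQueue = DispatchQueue(label: "com.example.sear-rl.scanning",
                              qos: .userInitiated)

scanQueue.async {
    // ... process a batch of scan points off the main thread ...

    // Hop back to the main queue for any UI updates.
    DispatchQueue.main.async {
        // update the debug overlay here
    }
}
```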

In other tangentially related tasks, I also started writing a first draft of the final report. This is going to be an interesting report to write, because I am afraid I might be tempted to write too much. I have been working on this idea for so long that I constantly find myself having to exercise restraint, resisting the urge to write my life’s story and just focusing on the project. I have also begun to apply for positions after school. It is always a little odd when I see positions describing projects similar to what I am working on now. I do apply to them, but I wonder if I will stand out among all the other applicants.

Starting to feel like things are happening.

To be honest, I was a little excited this week. After a bunch of preliminary work, and 2-3 years of CS graduate school, I finally got to the point where I am working on what I consider one of the core parts of SEAR-RL. Specifically, most of this week I focused on an idea I had for a more efficient way of analyzing the 3D data from the Structure Sensor. If it works, the running time of the algorithm would be less than O(n) [where n is the number of points coming from the scanner]. That in itself should help, since I want to make this a real-time application.

The most troublesome part so far actually hasn’t had anything to do with the idea, but with how to debug it. Specifically, I have been trying to create a visual overlay on the data views I receive from the sensor, so I can see that the points I am scanning are the correct points I want to scan. To do this I have had to dive into iOS GUI code once again to figure out how to present the data. However, this task has not been as ominous as I feared, because Apple’s introduction of Xcode Playgrounds has been a significant help. Playgrounds are Apple’s REPL system for experimenting with code. They have helped because I have been able to quickly experiment with the iOS GUI and see the results. That let me focus on this specific part of my project and not worry about accidentally breaking another part. But even with Playgrounds, I have to say getting answers from Apple’s documentation is not the best experience. I feel like I am having to search the interweb to cross-reference things that Apple’s documentation should cover on its own.
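The Playground trick amounts to handing a view to the page’s live view. Here is a minimal sketch using the Xcode 8 PlaygroundSupport module [the overlay drawing is a stand-in, not my actual scanning overlay]:

```swift
import UIKit
import PlaygroundSupport

// Stand-in overlay view: draws a dot for each "scan point" it is given.
// (Purely illustrative; the real overlay renders points from the depth frame.)
class ScanOverlayView: UIView {
    var points: [CGPoint] = [] { didSet { setNeedsDisplay() } }

    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        ctx.setFillColor(UIColor.red.cgColor)
        for p in points {
            ctx.fillEllipse(in: CGRect(x: p.x - 2, y: p.y - 2, width: 4, height: 4))
        }
    }
}

let overlay = ScanOverlayView(frame: CGRect(x: 0, y: 0, width: 320, height: 240))
overlay.backgroundColor = .black
overlay.points = [CGPoint(x: 40, y: 40), CGPoint(x: 160, y: 120), CGPoint(x: 280, y: 200)]

// Render the view live in the Playground's assistant editor.
PlaygroundPage.current.liveView = overlay
```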

But aside from that, yes, there are still a few little issues, but the fact that I will finally get to see if this idea works soon has got me excited. I just find it funny that everything from Linear Algebra to Algorithms has been helpful in figuring this one problem out, so I guess coming back to graduate school was not a terrible idea.

Another new iPhone.

I think I finally finished the last of the preliminary development coding. The main issue standing in my way was figuring out a way to debug the project while testing. Occipital’s Structure Sensor occupies the Lightning port when attached to the iPhone. This is a problem for development because Xcode needs to be connected to the iDevice via the Lightning port in order to debug. Fortunately, Occipital realized this and added a class, STWirelessLog, which rebroadcasts debug messages over the wireless network. One just needs to run ‘netcat’ on a nearby machine to receive the messages. The problem for me is that the way STWirelessLog is written requires a hardcoded IP address for the receiving machine. Since I am moving around on campus, that address is always changing. So I added some helper functions to be able to change the receiving netcat machine on the fly.
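Roughly, the helper boils down to storing the receiver’s address somewhere editable instead of in the source [a sketch with made-up names; the STWirelessLog call is shown only as a comment because the exact Swift signature may differ from what I remember]:

```swift
import Foundation

// Hypothetical helper: keep the netcat receiver's address in UserDefaults
// so it can be changed without recompiling.
struct WirelessDebugConfig {
    static let addressKey = "debugReceiverAddress"   // made-up defaults key
    static let port: Int32 = 4999                    // whatever free port the netcat listener uses

    static func currentReceiverAddress() -> String {
        return UserDefaults.standard.string(forKey: addressKey) ?? "10.0.0.2"
    }

    static func setReceiverAddress(_ address: String) {
        UserDefaults.standard.set(address, forKey: addressKey)
    }

    static func startWirelessLogging() {
        let address = currentReceiverAddress()
        // Something along these lines, per the Structure SDK headers
        // (exact Swift signature may differ):
        // try? STWirelessLog.broadcastLogsToWirelessConsole(atAddress: address,
        //                                                   usingPort: port)
        print("Broadcasting debug logs to \(address):\(port)")
    }
}

// On the receiving laptop:  nc -lk 4999
```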

Similarly, another problem related to the Lightning port had to do with Apple’s announcement of their new iPhone this week. The biggest news about the new iPhone was probably the fact that they are removing the headphone port. So now, if you want to listen to music with headphones on future iPhones, you either have to use a Lightning-based pair of wired headphones or wireless headphones. This could be problematic for future-proofing my project. Since I need one Lightning port for the Structure Sensor, I am immediately short a port for the audio. Belkin has announced they are releasing an adapter called “Lightning Audio + Charge RockStar™”, which initially seems like a Lightning port doubler. However, it remains to be seen if it actually works that way, since it is advertised as a way to listen to music through one port while the device charges through the other. If it does not allow using two Lightning devices at the same time, then I am left with wireless headphones. I am skeptical about that approach. The reason is that most wireless headphones use some form of audio compression to pass music from the device to the headphones to save bandwidth, and I have not been impressed with the quality of the Bluetooth headphones I have listened to in the past. Since audio is such a critical aspect of this project, and users need to discern even the slightest differences in frequency, I am worried that using Bluetooth headphones will degrade too much audio information.

Fortunately, I am safe for the moment since my current development platform is the iPhone 6 and it still has a traditional headphone jack. However, I probably should keep these issues in mind if I plan to do anything with this project after this semester.

You can’t delegate yourself.

This week, I tried to kick into high gear with building SEAR-RL, since it was the first full week of the semester. Strangely, the main focus of my work was refactoring a lot of the experimental and prototype code I wrote during the summer. Specifically, a lot of my code for using the Structure Sensor lived in the controller of the main MVC structure I had so far. So I decided to partition that code off into a proper model. The challenging part is that Occipital followed Apple’s style of accessing non-immediate systems, and the class that accesses the Structure Sensor works as a singleton.

I have no problem with the use of singletons. I know there is some controversy among programmers over the use of singletons in object-oriented programming, but I approve of them. Still, the downside is that singletons cannot really be subclassed. So my idea to create one master class to be a hub between the sensor and my code was not going to happen. To get around this, I created an intermediate class called STSensorManagement. I designed it as a singleton too, which, on second thought, I am not sure why I did. I think it was the naïve idea that, to play friendly with STSensorController [the Occipital-supplied class for accessing the sensor], it needed to be the same. [I think once things stabilize in the future I will revisit that.]
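The shape of the wrapper looks roughly like this [illustrative stubs, not the project’s actual code; the SDK call is left as a comment]:

```swift
// A rough sketch of the wrapper idea. The wrapper is itself a singleton and
// holds the only reference to the SDK's shared sensor controller, so the
// rest of the app never touches the SDK class directly.
final class STSensorManagement {
    static let shared = STSensorManagement()

    // Something like the following, per the Structure SDK:
    // private let sensorController = STSensorController.sharedController()

    private init() { }   // nobody else can create an instance

    func startSensor() {
        // configure the connection and start streaming frames here
    }

    func stopSensor() {
        // tear the connection down here
    }
}

// Usage from a view controller:
// STSensorManagement.shared.startSensor()
```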

The trickiest thing about developing this assistant class to manage STSensorController communication is that I had to create a few protocols [class interfaces in Swift] to facilitate communication between the controller and this new model. The reason I mention this here is that delegation [or function pointers in C/C++] is a topic and pattern I have only recently come to feel comfortable with. Thinking back on my software engineering education and reviewing resources online, the concept of delegation [and the related idea of forwarding] never seems to be covered as thoroughly as it should be, in my opinion. There seems to be a sense that understanding delegation and using function pointers is just implicit, especially after one learns about pointers and system-level programming. I find this odd considering that delegation is a programming pattern found everywhere, for all sorts of things. My difficulty is that it took me a long time to understand that when you delegate a function you are just passing a signature, and that someone else will call the function. I could never let go of the idea that something could happen outside of a thread I was controlling.
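To make the pattern concrete, here is the shape of the protocol dance in Swift [the types and method names are invented for illustration, not my real protocols]:

```swift
// The model declares *what* it will announce, not *who* will listen.
// (Hypothetical names; the project's real protocols differ.)
protocol DepthFrameListener: class {
    func didReceiveDepthFrame(width: Int, height: Int, depths: [Float])
}

final class SensorModel {
    // weak, so the model never keeps its listener (usually a controller) alive
    weak var listener: DepthFrameListener?

    func simulateFrameArrival() {
        // In the real app this fires when the sensor delivers a frame;
        // the model just calls through the protocol, never knowing who listens.
        listener?.didReceiveDepthFrame(width: 2, height: 2,
                                       depths: [0.8, 1.2, 2.5, 3.1])
    }
}

final class ScanController: DepthFrameListener {
    let model = SensorModel()

    init() { model.listener = self }

    func didReceiveDepthFrame(width: Int, height: Int, depths: [Float]) {
        print("Got a \(width)x\(height) frame, nearest point at \(depths.min() ?? 0) m")
    }
}
```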

Anyway, when all was said and done it was still a successful refactoring, so organizing things now should make things better later on.

Impressionism and Software Development

A few thoughts this week. After getting to know Apple’s CoreAudio and AudioUnits for a bit, I decided to take a step back and have a second look at AVFoundation. AVFoundation is supposed to be easier to deal with, but from my initial learning of it, it didn’t seem as powerful. I may have been mistaken about that. I have to say that giving AVFoundation and the AVAudio classes a second look was a smart move. It definitely seems to be more powerful than I originally thought, and easier to use. I have a feeling it is probably going to be the level at which I do the audio work for the project. It is not that I would not want to become a CoreAudio wizard, but seriously, working with CoreAudio is like living in the Upside Down.
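To give a sense of why the AVAudio classes won me over, here is roughly how little it takes to get a working graph going with AVAudioEngine [a generic sketch, not my project code; the file name is made up]:

```swift
import AVFoundation

// Minimal AVAudioEngine graph: one player node feeding the output mixer.
// (Generic sketch; error handling and the asset name are simplified.)
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()

engine.attach(player)
engine.connect(player, to: engine.mainMixerNode, format: nil)

do {
    try engine.start()

    // Play a short audio file, assuming "beep.caf" ships in the app bundle.
    if let url = Bundle.main.url(forResource: "beep", withExtension: "caf") {
        let file = try AVAudioFile(forReading: url)
        player.scheduleFile(file, at: nil, completionHandler: nil)
        player.play()
    }
} catch {
    print("Audio engine failed to start: \(error)")
}
```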

I am also thinking I am going to start using the Xcode 8 beta, so that when Apple releases the next version of everything in a month or so it won’t be a shock to relearn everything. Plus, after checking out the beta, I have to say the documentation for Apple’s API is presented much better. Sure, there is a lot they could still add, but it is an improvement over how it is presented in Xcode 7.

Finally, I have to say something about the fact that the iPhone 7 is not going to have a headphone jack. I am not exactly happy about this, but then again I am not surprised. This is pretty typical of the changes [curveballs] that have been thrown my way in trying to develop for Apple. The problem is that the Structure Sensor needs the Lightning port, and I use the headphone port on my iPhone 6 for the audio. I need both ports for this project to work.

Now, I have no pressing interest in getting this working on the iPhone 7, so I am not too bothered by it. My guess is that the iPhone 6 models will probably be available for a while yet, and they are just about perfect for this project. Still, I am hoping some third-party company comes out with a Lightning port splitter. I am sure there are a lot of people like me who still use the AUX audio port to listen to their iPhones in their cars while recharging through the Lightning port at the same time.

That is all for now.

The dead of summer.

OK. I admit I have been sort of unfocused the past couple of weeks.

Basically, summer is my worst time of the year, especially for productivity. It is not that I want to be out in the sun; in fact, just the opposite. There is just something about hot weather and humidity that takes it out of me. In a way it is like Seasonal Affective Disorder [SAD], but reversed. Normally, people with SAD get depressed in the middle of winter with the shorter days; for me, it is the opposite. [Plus, I got hooked on Stranger Things.]

I have done work, but it is frustrating because it does not seem to amount to a lot of progress.

For example, realizing that I am probably going to have to build a proper AudioUnit to complete this project, I started to research their structure. Apple does provide a couple of sample code sets and helper libraries for building AudioUnits. Unfortunately, as I said in an earlier post, they are not well documented, especially since they are written in C++, which Apple seems to treat as reluctantly as possible.

Fortunately, I got the bright idea to use Doxygen to build the dependency trees for all the C++ code, and that helped a bit. Something about seeing how everything is laid out, and what needs what, in the form of a graph helps a lot. That is information I cannot easily pick up just by looking at the source code.

Also, I have spent a bit of time going back, reviewing the UML designs, and making some changes. Specifically, it just seems the best way to deal with audio and MIDI in an iOS app is to use singletons. I don’t have a problem with them, but I know some people do.

Anyway, I am hoping that with the semester starting up in a few weeks I can get back into the groove.


The simplest things are often the hardest.

So I had a few more successes last week.

I got CoreAudio/AudioUnit Swift code running in an Xcode Playground. This was a big help in trying to understand how AudioUnits work and how to work with C pointers in Swift.
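This is roughly the kind of thing I was playing with: a bare-bones RemoteIO output unit, which forces you to deal with C structs and pointers from Swift [a generic sketch, not my project code]:

```swift
import AudioToolbox

// Describe the built-in output unit (RemoteIO on iOS).
var description = AudioComponentDescription(
    componentType: kAudioUnitType_Output,
    componentSubType: kAudioUnitSubType_RemoteIO,
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0,
    componentFlagsMask: 0)

// The C API hands results back through pointers; Swift bridges &value for us.
if let component = AudioComponentFindNext(nil, &description) {
    var audioUnit: AudioUnit? = nil
    let status = AudioComponentInstanceNew(component, &audioUnit)

    if status == noErr, let unit = audioUnit {
        AudioUnitInitialize(unit)
        AudioOutputUnitStart(unit)   // silence for now, but the unit is running
        // ... a render callback would go here to actually generate samples ...
        AudioOutputUnitStop(unit)
        AudioComponentInstanceDispose(unit)
    }
}
```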

I also got a very simple CoreLocation application running, which retrieved the compass direction. [That is probably the easiest thing I have done so far.]
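For reference, the compass part really is only a few lines [a minimal sketch; a real app should also check that heading data is available]:

```swift
import CoreLocation

// Minimal compass reader: prints the magnetic heading as it updates.
// (Sketch only; check CLLocationManager.headingAvailable() in a real app.)
final class CompassReader: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    override init() {
        super.init()
        manager.delegate = self
        manager.startUpdatingHeading()
    }

    func locationManager(_ manager: CLLocationManager,
                         didUpdateHeading newHeading: CLHeading) {
        // 0 = magnetic north, 90 = east, and so on.
        print("Facing \(newHeading.magneticHeading)°")
    }
}

let compass = CompassReader()
```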

Now I have to vent just a little. In developing this app, I am completely baffled by one thing about trying to develop for an Apple platform.

I really wish Apple’s documentation for development was a lot better. I cannot think of a single thing I have tried to do on iOS or macOS for which I did not have to look up a tutorial or an example from a third-party source.

Compared to Microsoft’s MSDN, Apple’s developer documentation just seems pretty thin.

Sure, Apple supplies some guides and sample code. However, I tend to find Apple’s choice of sample code examples a little esoteric, especially when I just need a really brain-dead example to help me grasp how I should use a framework.

I don’t know, I just find it funny.

10,000 more baby steps to go.

OK, the 4th of July threw me off, but after two weeks there is a lot to update.

First, I finally got a basic Structure app working on my iPhone. Photos below.

Camera image of Sofa on left, Structure Sensor Depth image on right.

Camera image of desk and computer on left, Structure Sensor Depth image on right.

Camera image of backpack on chair on left, Structure Sensor Depth image on right.

The biggest challenge was getting used to the YpCbCr color space. Also, there was a small bug in the Structure SDK that caused the synchronized-frame calls not to work. However, considering my end goals, it is not critical to use synchronized frames yet.
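For anyone who hits the same color-space wall: converting a pixel back to RGB is just a small linear transform. This sketch uses full-range BT.601 coefficients; the right constants depend on which YpCbCr variant the frames actually use.

```swift
// Full-range BT.601 YpCbCr -> RGB, one pixel at a time.
// (Illustrative only; the correct coefficients depend on the exact pixel
// format the camera frames use, e.g. video-range vs. full-range.)
func rgbFromYpCbCr(y: UInt8, cb: UInt8, cr: UInt8) -> (r: UInt8, g: UInt8, b: UInt8) {
    // Clamp a float back into the displayable 0...255 range.
    func clamp(_ v: Float) -> UInt8 { return UInt8(max(0, min(255, v))) }

    let yf = Float(y)
    let cbf = Float(cb) - 128.0
    let crf = Float(cr) - 128.0

    let r = yf + 1.402 * crf
    let g = yf - 0.344 * cbf - 0.714 * crf
    let b = yf + 1.772 * cbf

    return (clamp(r), clamp(g), clamp(b))
}

// Sanity check: a pixel with no chroma offset comes out grey.
let grey = rgbFromYpCbCr(y: 128, cb: 128, cr: 128)   // (128, 128, 128)
```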

Also, I was able to get a very basic audio and MIDI application running on the iPhone. This was crazy, because I had to pull information from multiple sources to figure out how to do it. The current Apple examples push AVFoundation, but I needed CoreAudio.

The point is, I am just glad I have been able to take baby steps in both areas: using the sensor and generating sound. Now there are only about 10,000 more baby steps to go.

Another 2 steps forward, 1 step back.

My current grumpiness with Xcode: you cannot refactor Swift code.

One would think that is a pretty basic feature an IDE should have, but as of this writing it does not exist.

I discovered it when I tried to refactor some of my code for SEAR-RL last week. Xcode threw up an alert warning that it could only refactor C and Obj-C code. So I decided to press ahead and just manually refactor everything. I think that was a bad idea.

I started having a lot of issues in my UI, like the application just not being able to find certain assets [at least that is the gist of the warnings Xcode kept giving me]. Unfortunately, since Apple and Xcode try to take care of a lot of stuff ‘behind the scenes’ for the developer, I could not figure out what I had disconnected [broken] with my refactoring in order to fix the problems. So I had to trash all my code and start over with a new project.

I should not be surprised; it seems every time I start a new coding endeavor I have about 2, 3, or even 4 false starts before I actually get anywhere. However, that doesn’t stop it from being annoying.

The upside is that these false starts usually help me develop a better idea of how to structure the underlying project.