So, a big new SEAR-RL update. After a long time, I have it running with ARKit. Video of the highlights below.

But for those of you who are TL/DW, here are the highlights:

  • Running on iPad Pro 2020 with the LiDAR sensor.
  • Entirely ARKit based.
  • Core SEAR-RL uses a brand-new AudioUnit-based synthesis engine of my own design, thanks to AudioKit!
  • Multithreaded.
  • Making use of ARKit’s Scene Reconstruction to identify specific types of objects by sound.
  • Simple text recognition (still a work in progress, but it is there).
  • Just making good use of general Covid-19 downtime.

Mapping SwiftUI’s MVVM to MVC for UIKit Storyboard people.

Apple’s new SwiftUI has been a bit of a learning curve for me.

Although the advantages SwiftUI promises to bring are obvious, the truth is I have programmed with the MVC mindset for so long that I was not grasping the fundamentals of MVVM, which is a must to make the most of SwiftUI.

If you are having the same problem, I thought I would share my observations in case they help. The solution is to think of it as mapping the parts of MVC to their corresponding parts of MVVM. This is tricky, because most tutorials I have come across tend to gloss over it.

Here is the mapping:

Model -> Model.

Your Models in MVC can remain your Models in MVVM.

View -> View(s)

The trick here is your views stay basically the same, but there are more of them.

Whether you used Storyboards or created views programmatically, you are basically trading those in for SwiftUI-based view structs. Because Apple has always pushed Storyboards in the past, the idea that a view can exist as pure code may even seem alien to a few. Largely, we never saw Storyboard views as pure Swift (or old Objective-C) code. Even in their ‘native’ form, storyboard files are just large XML files (more on that in a bit) which Swift (and Objective-C) code would interact with via IBOutlet and IBAction bindings. Apple’s reasoning has always been that Interface Builder took care of a lot of boilerplate code (true) that would be tedious if you had to build it entirely in native code (Swift/Objective-C). That was kind of them, but it led to a problem: large XML Storyboard files equal large View Controller files.

We all know the joke that MVC stands for ‘Massive View Controller’, but I always thought that was a little pejorative. Just because something is large does not mean it is unwieldy. Most gripes about large controller files come about because they are poorly organized, and developers are too lazy to take the extra step to manage them, but I digress. In many ways, Controllers in MVC with Storyboards only became massive because of the views they were dealing with.

First, most ‘controllers’ are ‘View Controllers’, and View Controllers tend to be associated with the largest encompassing element of a UI, the page. You can get away with A LOT using system defaults and customization just inside Interface Builder. In fact, Interface Builder can be a little discouraging for those who do like to build views programmatically under the Storyboard paradigm. It is easier to add a customization function to the View Controller for a contained view in your UI than it is to create a brand new Nib with associated code files and integrate that. That is why controllers get the bad ‘massive’ rap. But fortunately, SwiftUI brings an alternative.

Controller -> (many) View Models.

This is the biggest change with SwiftUI. Take your controllers and blow them up into many, many smaller controllers. These will become your View Models.

Take your View Controller, which probably has your Text Field delegates, your Table delegates, your Table data source, your Picker delegates, your Picker data source, navigation delegates, etc., etc., and break them up into their component parts. Sure, you probably want to keep things like a Table delegate and Table data source together, but break them up. These parts will now become your View Models, and they do the exact same job as the controllers: they act as intermediaries between the views and the models.

So, in truth, nothing has changed. However, instead of having one central (View) Controller, we now have many smaller View Models (controllers), one for each view in our UI. The C of MVC has become the VM of MVVM.
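As a sketch of what that split looks like (the names here are made up for illustration, not from any real project): the slice of an old View Controller that served a table of contacts becomes one small view model, plus the SwiftUI view it serves.

```swift
import SwiftUI
import Combine

// Hypothetical: the table-data-source slice of an old View Controller,
// now living on its own as a small view model.
final class ContactListViewModel: ObservableObject {
    @Published var contacts: [String] = ["Ada", "Grace"]   // stand-in model data

    // Mediates between view and model, exactly like a controller did.
    func add(_ name: String) {
        contacts.append(name)
    }
}

// The SwiftUI view that replaces the old table view + IBOutlets.
struct ContactList: View {
    @ObservedObject var viewModel: ContactListViewModel

    var body: some View {
        List(viewModel.contacts, id: \.self) { Text($0) }
    }
}
```

Each view gets its own small view model like this, instead of everything funneling through one massive controller.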

Something lost, something gained — Variables

In general, this is all great; smaller parts are easier to test. But in losing our Controller in favor of smaller View Models, we do lose one handy thing: (variable) state. Fortunately, Apple supplies us with a replacement to make up for that: variable bindings.

The nice thing about the old (view) controllers was that if you had to keep track of some data, you could just make a variable and store it. Heck, IBOutlets were basically just variables. But with view models, passing every change to a variable back and forth to the views could get super tedious really fast. So, use bindings to get around that.

The most basic binding is probably @State, and it is found in Views. When you stick this attribute in front of a variable, you are basically telling the computer, “Hey dude, this variable is going to be changing, so pay attention when it does and do not get too attached to one state.”

The second most needed binding is probably @Binding, also found in Views. When you stick this attribute in front of a variable, you are basically telling the computer, “Hey dude, whatever that @State variable is doing, make this variable the same.”

Using @State and @Binding is how different views can stay in sync with one another.
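A minimal sketch of that sync, with hypothetical names: the parent view owns the @State, and the child mirrors it through @Binding.

```swift
import SwiftUI

// The parent owns the source of truth as @State.
struct VolumeControl: View {
    @State private var volume: Double = 0.5

    var body: some View {
        VStack {
            Text("Volume: \(Int(volume * 100))%")
            // Passing $volume hands the child a binding to the same value.
            VolumeSlider(volume: $volume)
        }
    }
}

// The child declares @Binding, so any change it makes flows back
// to the parent's @State, and both views stay in sync.
struct VolumeSlider: View {
    @Binding var volume: Double

    var body: some View {
        Slider(value: $volume, in: 0...1)
    }
}
```

Dragging the slider in the child updates the parent's label automatically; no delegate or callback plumbing needed.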

@ObservedObject and @Published are responsible for binding data between views and view models. With @ObservedObject, a view can watch a class that conforms to the ObservableObject protocol and update itself when something happens. Remember: only watch a view model, never a model directly.

@Published marks a variable that a View Model puts out for the world and that can change. Think of it as an access modifier like public and private, except in this case it is like public+. When subscribed to a @Published variable, the listener (@ObservedObject) will get notified of any changes made to it. This allows the view model to manage the data without doing anything that might affect (crash) the view. The best example of this is a view model receiving data from a URL call. Once the view model gets the data, it formats it into whatever the view needs and lets the view know it is ready.

There are several ways to ‘publish’ data via Apple’s new Combine framework and ObservableObject. So, you don’t always have to use @Published, but if it is something in your view model that needs to change or be updated in your UI, it has to be watched via @ObservedObject.
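A hedged sketch of that URL-call pattern, with made-up names: the view model fetches and formats the data, publishes it, and the observing view updates itself.

```swift
import SwiftUI
import Combine

// Hypothetical view model: fetches raw data and publishes a view-ready form.
final class GreetingViewModel: ObservableObject {
    @Published var message = "Loading…"

    func load() {
        // Stand-in for a URLSession data-task callback.
        let rawServerValue = "hello, world"
        // Format for the view, then publish on the main thread so the
        // UI update never happens off-thread (and never crashes the view).
        DispatchQueue.main.async {
            self.message = rawServerValue.capitalized
        }
    }
}

// The view only watches the view model, never the raw model data.
struct GreetingView: View {
    @ObservedObject var viewModel: GreetingViewModel

    var body: some View {
        Text(viewModel.message)
            .onAppear { self.viewModel.load() }
    }
}
```

When `message` changes, the @Published property wrapper notifies the @ObservedObject, and SwiftUI re-renders the Text for you.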

@EnvironmentObject is the attribute I can see getting abused the most, which is why I put it last. In many ways, @EnvironmentObject does everything @State and @Binding do, all in one. It seems simpler to use, so why not just use it? The trick is that @EnvironmentObject(s) should only be used for things that are global to your UI views, at the application level. Maybe something like your UI’s theme color. This is why it is perfect for use in the PreviewProvider when previewing SwiftUI in Xcode, because it can act as a catch-all grab bag for all of our data. But we should try to avoid that, because it breaks the principle of low coupling / high cohesion in good software development.
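A minimal sketch of the appropriate use, with a hypothetical Theme class: inject the object once at the top of the hierarchy, read it anywhere below.

```swift
import SwiftUI
import Combine

// Hypothetical app-wide theme: the kind of truly global state
// @EnvironmentObject is meant for.
final class Theme: ObservableObject {
    @Published var accent: Color = .blue
}

// Injected once at the root...
struct RootView: View {
    var body: some View {
        ContentView()
            .environmentObject(Theme())
    }
}

// ...and picked up by any descendant view, with no passing-through.
struct ContentView: View {
    @EnvironmentObject var theme: Theme

    var body: some View {
        Text("Hello").foregroundColor(theme.accent)
    }
}
```

The convenience is exactly the danger: because any view can reach in, stuffing per-screen data in here quietly couples everything to everything.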

In any case, sometimes a picture is worth a thousand words. If you are used to seeing the old MVC style diagrammed like this:

Old MVC Pattern

You can now imagine the new MVVM as this:

New SwiftUI MVVM Design Pattern (sort of)

I will admit I am by no means an expert on SwiftUI, yet. However, once I made these observations about the differences between MVC and MVVM, my comprehension got a little better. So, I thought it would be good to share in case it helps others. For the experts who know more than me: if I got something wrong, feel free to drop a line and I will update everything. In the meantime, I hope this can be of some help to those who are still confused about MVVM.

SynthOne now with Accessibility!

I helped the AudioKit guys make their SynthOne project accessible so it plays well with Apple’s VoiceOver.

The version with the new accessibility features went live today.

I am really proud of this one because music software is not exactly known for being super accessible, and I finally got to work on a proper piece of audio software.

Please check it out; it is free!

How to Set Up Voice-Over on iPad for AudioKit SynthOne Testing

Staque’s ‘Non-impaired guide to setting up an iPad for Accessibility Testing’

Now, for people who do not have to use VoiceOver regularly, getting it going on an iPad can be intimidating.

However, there is a really simple thing a non-impaired person can do that lets you experiment with and test VoiceOver without it becoming overwhelming.

Go to the Accessibility Menu in settings:

And you come to the Accessibility Settings Page:

Now you could just turn on VoiceOver here and call it a day, but there is something better you can do!

Scroll down the page and you will see two additional setting options.

First, the Home Button:

Then at the very bottom the Accessibility Shortcut button:

Tap the Accessibility Shortcut and you will see the Accessibility Shortcut menu:


Here you want to make sure that VoiceOver is selected.

What this does is allow you to turn VoiceOver on and off by triple-clicking the Home Button.

Once you have that set you can then begin exploring VoiceOver.

Now, with VoiceOver on, your device will respond to touches differently.

You can find a complete list here.

However, here is a quick starter.

1-Finger Tap to select, or swipe to move to the next accessible control.

1-Finger Double Tap to interact with a control.

If you want a challenge:

3 Finger Triple Tap – will turn the Screen Curtain on or off.

But be careful:

3-Finger Double Tap – will turn speech on or off. So if everything goes silent, and you know VoiceOver is on and the volume is up, try this.

There are a lot of other little tips [like the VoiceOver Rotor], but this will get you started.

You will also probably want to increase the speech rate and download another voice after using VoiceOver for a bit. This can be set in the Accessibility Menu. [I personally cannot stand the Samantha voice.]


Finally exploring ARKit.

So I have been messing around with ARKit, to see if I can convert SEAR-RL to it.

In the same way that the Structure Sensor and direct sunlight do not get along, ARKit really does not like plain, white/beige, untextured surfaces.


CS-660 Interactive Machine Learning Final Paper.

Finally finished my semester project to ID stairs using Convolutional Neural Nets (CNN) for SEAR-RL.

Read it here if you want.

The Effect of Data Content and Human Oracles on Convolutional Neural Networks and Incremental Learning

If you just want the tl;dr, here it is (nothing groundbreaking):

  1. If you want to ID 3D objects with a CNN, you are better off using 3D data (point clouds) than 2D image data (even for 2D CNNs like I used).
  2. Using human reinforcement in the incremental training of neural nets does not really improve training. It might help if you are adding new classes to ID along with the data, but that is future work to explore.

You can check out the code for the project here:

Although you need to get the data I collected for training from here:

(the data is too big to store on GitHub)

To run everything you need:

  • Anaconda 5.0 / Python 3.6
  • TensorFlow 1.1.0
  • Keras 2.0.8

And if you want to check out the data collection application for iOS (or if you just need a starter Occipital Structure app written in Swift 4.0), you can get that here:

Teaching the Machine.

Progress update.

Well, I have not stopped working on SEAR-RL. I have just taken a small break to focus on a different aspect of it.

While I look for a full-time job, I have continued to take classes at UK; this semester it was an Interactive Machine Learning graduate-level class. As usual, my education has been a trial by fire, but for the most part I am enjoying it and learning a lot.

For my semester project, I decided to work on something to extend SEAR-RL. I am curious whether I can build a neural-net model using the depth data from the Occipital Structure to identify important pedestrian obstacles. Specifically, I want SEAR-RL to ID stairs, for walking up and down, and ledges someone could fall off. I have completed the data collection portion of the project and collected about 5 gigs of data. Now comes building the deep neural net.

Fortunately, while taking the UK class, I have been supplementing my education with Andrew Ng’s Deep Learning Course, as well as a great book “Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems.”

It is scary, because like most things I end up wanting to do in life, there is no formal process to learn what I want to learn and do. For example, I was unfortunately just off-cycle in the curriculum to take the Machine Learning classes during my proper graduate studies. So now that I am taking a graduate-level class, I am having to learn everything at high speed, by fire, just to complete the course. [I have tried to complain to the management of the universe that I get tired of this recurring theme in my life, but oh well. Maybe someday I can do something I want without feeling under the gun for once.]

In general, I guess I just wish I knew what to do with SEAR-RL. I am not sure it is enough to turn into a business. Not to mention, business acumen is not my thing. But you would think that in a world where tech companies keep trying to push Augmented Reality tech, there might be a place for it.

I just do not know.

Anyway, that is the update.

The Great Refactoring

After a couple of slow months, I finally finished the great refactoring of SEAR-RL.

Hopefully, I can now clean up and add new features with a lot less pain.

So yes, I am still working on it.

It is now April

So, a couple of small updates.

Even though I made a fair sample bank for all the sounds in SEAR, the preferred goal has always been to have an actual software synth producing the noises. In this case, that would be an iOS AudioUnit.

However, trying to learn how to program AudioUnits is one of the more annoying things I have attempted in iOS. There are a few good resources, but those sources are usually either dated or piecemeal.

Still, I have created a new GitHub repo as a way to document everything I learn in trying to build one. You can find it here.

Also, I have run into a really nasty segfault bug in SEAR. I think I might have a solution, but it is one of those bugs that makes you step away from a project to collect yourself and build up the energy to deal with it.

All of this is because Jaromczyk asked me to demo SEAR this summer at an Engineering Summer Camp UK is hosting. So if I do not have a job by then, I am using it as an ‘artificial’ deadline to get a few more things done.


Post-Masters Work

Ok, I know it has been a while since a blog update.

Yes, I passed my Masters and graduated but that has not stopped me from working on SEAR-RL.

With UK’s E-Day coming up this weekend, and Prof. Jaromczyk wanting me to demonstrate it at the event, I had a deadline to improve it. As my old teacher Jesse Schell once said, “Deadlines are magic.”

So, I did some major clean-up of the code base and UI. The most important change, however, is that my old system for locating the closest objects in the user’s view is gone. I replaced it with a particle filter that does the same thing [because all the cool kids are using Machine Learning in their projects one way or another]. What is nice is that the particle filter works much better than my old system. Sure, it is a little wacky at moments [like all machine learning algorithms can be], but overall it seems to be a win.

The hardest part was just teaching myself everything I needed to implement one. This GitHub project and this YouTube video probably helped me understand how particle filters work the best. Which is good, because most of the literature is just a little math-y, and I am not great when it comes to learning from books. I am definitely a show-me-once [maybe twice] type of learner.
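For the curious, the core loop of a particle filter is actually small. Here is a minimal, hypothetical 1-D sketch in Swift (nothing like SEAR-RL’s actual implementation) showing the predict / weight / resample cycle:

```swift
import Foundation

// Toy 1-D particle filter: each particle is a guess at a target's position.
struct ParticleFilter {
    var particles: [Double]

    // Predict: jitter every particle with random motion noise.
    mutating func predict(noise: Double) {
        particles = particles.map { $0 + Double.random(in: -noise...noise) }
    }

    // Update + resample: weight particles by closeness to the measurement,
    // then draw a new population proportional to those weights.
    mutating func update(measurement: Double) {
        let weights = particles.map { 1.0 / (abs($0 - measurement) + 1e-6) }
        let total = weights.reduce(0, +)
        var survivors: [Double] = []
        for _ in particles {
            var pick = Double.random(in: 0..<total)
            for (p, w) in zip(particles, weights) {
                pick -= w
                if pick <= 0 { survivors.append(p); break }
            }
        }
        particles = survivors
    }

    // The estimate is just the mean of the surviving particles.
    var estimate: Double {
        particles.reduce(0, +) / Double(particles.count)
    }
}

// Usage: track a stationary target at 3.0 from a rough initial spread.
var pf = ParticleFilter(particles: (0..<200).map { _ in Double.random(in: 0...10) })
for _ in 0..<20 {
    pf.predict(noise: 0.2)
    pf.update(measurement: 3.0)
}
// pf.estimate should now land near 3.0
```

Real versions add a proper motion model and measurement likelihood, but the loop is the same, which is why a good video explains it better than the math-heavy papers do.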

Now just a couple of random thoughts.

I am constantly surprised at how bad a lot of scientific/math/engineering writing is. I honestly think it is one of the things that turns people off from science. Just because one is writing about complex things does not mean one needs to write complicatedly.

The rule I was always taught is to write toward an audience at a 5th-grade level, and I think there is something to be said for that. If I had never found that video and project, I am not sure I would ever have figured out how to build a particle filter. I think the scientific community really needs to take a look at what is considered good writing for the public at large.

As much as SEAR-RL is a passion project for me, I think one of the reasons my previous few blog posts seemed sort of light is that working on anything for a long time can wear someone down.

Seriously, I have been working on this project for at least a decade, and even though I still have a lot of ideas for improvement, I am still not sure about my personal future or what to do with the project. So much of that can begin to hang on oneself like an albatross.

I really did not have anything to follow up that notion, but I think it does provide insight into how someone like George Lucas could give up Star Wars and the rest of Lucasfilm to Disney. Even the best of things can wear on the person creating them.

Still, thanks to Jamie Martini for the suggestion to use a particle filter.

Finally, I think I have found the perfect way to describe my project to people: Augmented Reality for the Blind and Visually Impaired. I will try that for a while and see how it works.