SynthOne now with Accessibility!

I helped the AudioKit team make their SynthOne project accessible so it plays well with Apple’s VoiceOver.

The version with the new accessibility features went live today.

I am really proud of this one, because music software is not exactly known for being super accessible, and I finally got to work on a proper piece of audio software.
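
For the curious, most of this kind of work is the unglamorous part of accessibility: giving each custom control a label, a spoken value, and the right trait so VoiceOver has something meaningful to say. Here is a minimal sketch of the idea in Swift. The knob class and parameter values are made up for illustration; this is not SynthOne’s actual code.

    import UIKit

    // A hypothetical synth knob, just to show the VoiceOver hooks.
    // SynthOne's real controls are more involved than this.
    class CutoffKnob: UIControl {

        var frequency: Double = 440.0 {
            didSet {
                // Keep the spoken value in sync with the knob position.
                accessibilityValue = "\(Int(frequency)) hertz"
            }
        }

        override init(frame: CGRect) {
            super.init(frame: frame)
            // Expose this custom view to VoiceOver as a single adjustable element.
            isAccessibilityElement = true
            accessibilityLabel = "Filter cutoff"
            accessibilityTraits = .adjustable
            accessibilityValue = "\(Int(frequency)) hertz"
        }

        required init?(coder: NSCoder) {
            fatalError("init(coder:) has not been implemented")
        }

        // With the .adjustable trait, VoiceOver's swipe-up and swipe-down
        // gestures land here instead of moving to another control.
        override func accessibilityIncrement() { frequency += 10 }
        override func accessibilityDecrement() { frequency -= 10 }
    }

With something like that in place, a VoiceOver user can flick to the knob, hear “Filter cutoff, 440 hertz, adjustable,” and swipe up or down to change it.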

Please check it out; it is free!

https://itunes.apple.com/us/app/audiokit-synth-one-synthesizer/id1371050497?ls=1&mt=8

How to Set Up VoiceOver on iPad for AudioKit SynthOne Testing

Staque’s ‘Non-impaired guide to setting up an iPad for Accessibility Testing’

Now, for people who do not have to use VoiceOver regularly, getting it going on an iPad can be intimidating.

However, there is a really simple thing a non-impaired person can do that lets you experiment with and test VoiceOver without it becoming overwhelming.

Go to the Accessibility Menu in Settings:

And you come to the Accessibility Settings Page:

Now you could just turn on VoiceOver here and call it a day, but there is something better you can do!

Scroll down the page and you will see two additional options.

First, the Home Button:

Then, at the very bottom, the Accessibility Shortcut button:

Tap the Accessibility Shortcut and you will see the Accessibility Shortcut menu:


Here you want to make sure that VoiceOver is selected.

What this does is let you turn VoiceOver on and off by triple-tapping the Home Button.

Once you have that set, you can begin exploring VoiceOver.

Now, with VoiceOver on, your device will respond to touches differently.

You can find a complete list here.

However, here is a quick starter.

1-Finger Tap to select, or 1-Finger Swipe to move to the next accessible control.

1-Finger Double Tap to interact with a control.

If you want a challenge:

3-Finger Triple Tap will turn the Screen Curtain on or off.

But be careful:

3-Finger Double Tap will turn speech on or off. So if everything goes silent, and you know VoiceOver is on and the volume is up, try this.

There are a lot of other little tips [like the VoiceOver Rotor], but this will get you started.

You will also probably want to increase the speech rate and download another voice after using VoiceOver for a bit. Both can be set in the Accessibility Menu. [I personally cannot stand the Samantha voice.]
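
One extra tip if you are testing an app you are building yourself: you can also check from code whether VoiceOver is running, which is handy for logging during test sessions. A small sketch using UIKit’s UIAccessibility API:

    import UIKit

    // Check whether VoiceOver is currently on.
    if UIAccessibility.isVoiceOverRunning {
        print("VoiceOver is running")
    }

    // Get told when it is toggled, e.g. via the triple-tap shortcut above.
    // Keep the returned token alive for as long as you want the callback.
    let token = NotificationCenter.default.addObserver(
        forName: UIAccessibility.voiceOverStatusDidChangeNotification,
        object: nil,
        queue: .main
    ) { _ in
        print("VoiceOver is now \(UIAccessibility.isVoiceOverRunning ? "on" : "off")")
    }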


Finally exploring ARKit.

So I have been messing around with ARKit to see if I can convert SEAR-RL to it.

In the same way that the Structure Sensor and direct sunlight do not get along, ARKit really does not like plain, white/beige, untextured surfaces.
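
You can actually watch this happen in code: when ARKit cannot find enough visual features to track against, it reports a limited tracking state. A minimal sketch of listening for that (the class here is just for illustration):

    import ARKit

    class TrackingWatcher: NSObject, ARSessionDelegate {

        let session = ARSession()

        func start() {
            session.delegate = self
            session.run(ARWorldTrackingConfiguration())
        }

        // ARKit calls this whenever its confidence in world tracking changes.
        func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
            switch camera.trackingState {
            case .limited(.insufficientFeatures):
                // The plain, untextured-wall case.
                print("Tracking limited: not enough visual features")
            case .limited(let reason):
                print("Tracking limited: \(reason)")
            case .notAvailable:
                print("Tracking not available")
            case .normal:
                print("Tracking normal")
            }
        }
    }

Point the camera at a blank beige wall and the insufficient-features case will most likely fire almost immediately.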


CS-660 Interactive Machine Learning Final Paper.

Finally finished my semester project for SEAR-RL: using Convolutional Neural Nets (CNNs) to ID stairs.

Read it here if you want.

The Effect of Data Content and Human Oracles on Convolutional Neural Networks and Incremental Learning

If you just want the tl;dr, here it is (nothing groundbreaking):

  1. If you want to ID 3D objects with a CNN, you are better off using 3D data (point clouds) than 2D image data (even for 2D CNNs like I used).
  2. Using human reinforcement in the incremental training of neural nets does not really improve training. It might help if you are adding new classes to ID along with the data, but that would be future work to explore.

You can check out the code for the project here:

https://github.com/ForeverTangent/CS-660-Semester-Project-V2

Although you need to get the data I collected for training from here:

https://drive.google.com/open?id=1bwJsnJfwcYEMXWGummS_NNne8PGf9P3r

(the data is too big to store on GitHub)

To run everything you need:

  • Anaconda 5.0 / Python 3.6
  • TensorFlow 1.1.0
  • Keras 2.0.8

And if you want to check out the data collection application for iOS (or just need a starter Occipital Structure app written in Swift 4.0), you can get that here:

https://github.com/ForeverTangent/SEAR-DC

Teaching the Machine.

Progress update.

Well, I have not stopped working on SEAR-RL. I have just taken a small break to focus on a different aspect of it.

While I look for a full-time job, I have continued to take classes at UK; this semester it was a graduate-level Interactive Machine Learning class. As usual, my education has been a trial by fire, but for the most part I am enjoying it and learning a lot.

For my semester project, I decided to work on something to extend SEAR-RL. I am curious whether I can build a neural-net model using the depth data from the Occipital Structure to identify important pedestrian obstacles. Specifically, I want SEAR-RL to ID stairs, for walking up and down, and ledges someone could fall off. I have completed the data collection portion of the project and collected about 5 gigs of data. Now comes building the deep neural net.

Fortunately, while taking the UK class, I have been supplementing my education with Andrew Ng’s Deep Learning Course, as well as a great book “Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems.”

It is scary because, like most things I end up wanting to do in life, there is no formal process to learn what I want to learn and do. For example, I was unfortunately just off-cycle in the curriculum to take the Machine Learning classes during my proper graduate studies. So now that I am taking a graduate-level class, I am having to learn everything at high speed and by fire just to complete the course. [I have tried to complain to the management of the universe that I am tired of this recurring theme in my life, but oh well. Maybe someday I can do something I want without feeling under the gun for once.]

In general, I guess I just wish I knew what to do with SEAR-RL. I am not sure it is enough to turn into a business. Not to mention, business acumen is not my thing. But you would think that in a world where tech companies keep trying to push Augmented Reality tech, there might be a place for it.

I just do not know.

Anyway, that is the update.

The Great Refactoring

After a couple of slow months, I finally finished the great refactoring of SEAR-RL.

Hopefully, I can now clean up and add new features with a lot less pain.

So yes, I am still working on it.

It is now April

So, a couple of small updates.

Even though I made a fair sample bank for all the sounds in SEAR, the preferred goal has always been to have an actual software synth producing the noises. In this case, that would be an iOS AudioUnit.

However, trying to learn how to program AudioUnits is one of the more annoying things I have tried to learn in iOS. There are few good resources, and the ones that exist are usually either dated or piecemeal.

Still, I have created a new GitHub repo as a way to document everything I learn in trying to build one. You can find it here: https://github.com/ForeverTangent/AUv3Breakdown
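
To give a flavor of the moving parts, this is roughly how a host loads an AUv3 instrument once it knows the unit’s component description. A hedged sketch: the subtype and manufacturer four-char codes below are placeholders, not values from AUv3Breakdown.

    import AVFoundation
    import AudioToolbox

    // Describe the Audio Unit we want: a music device (software instrument).
    let description = AudioComponentDescription(
        componentType: kAudioUnitType_MusicDevice,
        componentSubType: 0x73796E31,       // 'syn1' (placeholder)
        componentManufacturer: 0x44656D6F,  // 'Demo' (placeholder)
        componentFlags: 0,
        componentFlagsMask: 0
    )

    AVAudioUnit.instantiate(with: description, options: []) { avAudioUnit, error in
        guard let avAudioUnit = avAudioUnit else {
            print("Could not load Audio Unit: \(String(describing: error))")
            return
        }
        // The wrapped AUAudioUnit is where the AUv3 world lives:
        // the parameter tree, presets, the render block, and so on.
        let audioUnit = avAudioUnit.auAudioUnit
        print("Loaded \(audioUnit.audioUnitName ?? "unnamed audio unit")")
        // From here you would attach avAudioUnit to an AVAudioEngine
        // and connect it to the output to actually hear it.
    }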

Also, I have run into a really nasty SegFault bug in SEAR. I think I might have a solution, but it is one of those bugs that makes you step away from a project to collect yourself and build up the energy to deal with it.

All of this is because Jaromczyk asked me to demo SEAR this summer at an Engineering Summer Camp UK is hosting. So if I don’t have a job by then, I am using it as an ‘artificial’ deadline to get a few more things done.


Post-Masters Work

Ok, I know it has been a while since a blog update.

Yes, I passed my Masters and graduated, but that has not stopped me from working on SEAR-RL.

With UK’s E-Day coming up this weekend, and Prof. Jaromczyk wanting me to demonstrate SEAR-RL at the event, I had a deadline to improve it. As my old teacher Jesse Schell once said, “Deadlines are magic.”

So, I did some major clean-up of the code base and UI. The most important change, however, is that my old system for locating the closest objects in the user’s view is gone. I replaced it with a particle filter that does the same thing [because all the cool kids are using machine learning in their projects in one way or another]. What is nice is that the particle filter does work much better than my old system. Sure, it is a little wacky at moments [like all machine learning algorithms can be], but overall it seems to be a win.

The hardest part was just teaching myself everything I needed to implement one. This GitHub project and this YouTube video probably helped me understand how particle filters work the best. Which is good, because most of the literature is just a little math-y, and I am not great when it comes to learning from books. I am definitely a show-me-once [maybe twice] type of learner.
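
Since “particle filter” sounds fancier than it is, here is a minimal, generic sketch of the predict/weight/resample loop in Swift, using a toy 1-D state [say, the distance to the nearest obstacle]. This is an illustration of the technique, not the actual SEAR-RL code:

    import Foundation

    // A toy 1-D particle filter: estimate a scalar state from noisy
    // measurements. Each particle is one guess at the state; its weight
    // says how well that guess explains the latest measurement.
    struct ParticleFilter {

        var particles: [Double]
        var weights: [Double]

        init(count: Int, range: ClosedRange<Double>) {
            particles = (0..<count).map { _ in Double.random(in: range) }
            weights = Array(repeating: 1.0 / Double(count), count: count)
        }

        // 1. Predict: jitter every particle with process noise.
        mutating func predict(noise: Double) {
            particles = particles.map { $0 + Double.random(in: -noise...noise) }
        }

        // 2. Weight: score each particle with a Gaussian likelihood.
        mutating func update(measurement: Double, sigma: Double) {
            weights = particles.map { p in
                let d = p - measurement
                return exp(-d * d / (2 * sigma * sigma))
            }
            let total = weights.reduce(0, +)
            if total > 0 { weights = weights.map { $0 / total } }
        }

        // 3. Resample: draw a new generation, favoring heavy particles.
        mutating func resample() {
            var next: [Double] = []
            for _ in particles {
                var r = Double.random(in: 0..<1)
                var i = 0
                while i < particles.count - 1 && r > weights[i] {
                    r -= weights[i]
                    i += 1
                }
                next.append(particles[i])
            }
            particles = next
            weights = Array(repeating: 1.0 / Double(particles.count),
                            count: particles.count)
        }

        // The estimate is the weighted mean of the particles.
        var estimate: Double {
            return zip(particles, weights).map { $0 * $1 }.reduce(0, +)
        }
    }

    // Track a fixed true state of 1.2 through noisy measurements.
    var filter = ParticleFilter(count: 500, range: 0...5)
    for _ in 0..<10 {
        filter.predict(noise: 0.05)
        filter.update(measurement: 1.2 + Double.random(in: -0.1...0.1), sigma: 0.1)
        filter.resample()
    }
    print(filter.estimate)   // converges near 1.2

The real thing tracks where the closest object in the user’s view is rather than a single number, but the loop has the same shape.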

Now just a couple of random thoughts.

I am constantly surprised by how bad a lot of scientific/math/engineering writing is. I honestly think it is one of the things that turns people off from science. Just because one is writing about complex things does not mean one needs to write in a complicated way.

The rule I was always taught is to write toward an audience at a 5th-grade level, and I think there is something to be said for that. If I had never found that video and project, I am not sure I would have ever figured out how to build a particle filter. I think the scientific community really needs to take a look at what is considered good writing for the public at large.

As much as SEAR-RL is a passion project for me, I think one of the reasons my previous few blog posts seemed sort of light is that working on anything for a long time can wear someone down.

Seriously, I have been working on this project for at least a decade, and even though I still have a lot of ideas for improvement, I am still not sure about my personal future or what to do with the project. All of that can begin to hang on a person like an albatross.

I do not really have anything to follow that notion up with, but I think it does provide some insight into how someone like George Lucas could give up Star Wars and the rest of Lucasfilm to Disney. Even the best of things can wear on the person creating it.

Still, thanks to Jamie Martini for the suggestion to use a particle filter.

Finally, I think I have come up with the perfect way to describe my project to people: Augmented Reality for the Blind and Visually Impaired. I will try that for a while and see how it works.

It worked!

OK, I know this is all long overdue for an update.

Wow, the last blog post was over a month ago. I guess I should fill people in, so here is the short version.

I got my project running at the end of October. Then I had to schedule some user testing to see if all my work was good. Despite the promise of pizza, my first round of user testing was sort of a bust: only 8 people showed up. So I improvised a second round of user testing over Thanksgiving. Since family was coming to our house for Excess Carbs Day, I built a maze in the ‘playroom’ of the house and had everyone walk through it.

Here is a video of Sky doing the test.

In general, user testing was a success, in that I just wanted to see whether people could easily pick up and use the system. Almost everyone was able to use the system’s sound to get through the simple maze. The most interesting part [and this is just anecdotal evidence] is that despite scientific research stating that men do better on spatial tests, the women tended to get through the maze faster than the men.

Then two weeks after that, I had my Masters Defense.


I guess you can figure it out. I passed and I am now a Master of Computer Science.

I probably should write something more introspective about the semester, but honestly, I have really been up in my head space since my Masters Defense, trying to figure a bunch of stuff out. So maybe I will add a few thoughts on how everything went once I eventually figure that out.

Along with making it through my Masters, I am just glad to know that I got a more practical prototype to work, and that the idea itself is viable. Now I just need to figure out what happens next.


I got the kit.

Just a short update.

Got the Occipital Structure VR Kit today. In short, I have a proper headset to hold the Structure Sensor for testing on Saturday.

I am not sure if I need the Wide Angle Lens, but I think Bridge needs it.

It is nice to have a proper headset, but I cannot wear my glasses with it, and there is no trigger button like on a normal Google Cardboard. At least, not unless you stick your finger through the nose space.


Still, I am glad to have a second sensor for Saturday. I am not sure how long the charges last, so it will be nice to have a backup.