Research and Development

1 Step Forward, 2 Steps Backwards | R&D

Overview and History:

The script was initially designed to incorporate technology, and more specifically, to weave it into the story as a developmental tool. The goal was (and continues to be) to integrate and intersect technology with live theatre and film. Since we are on the forefront of this field, the majority of the work is done in-house by us and our software developer. I have personally been working on the third (and hopefully final!) iteration of the Multimedia Augmented Reality system, or as we call it, mARs. This post delves deeper into the programs we developed and the hardware we used over this nearly two-year-long project.

The Original Idea: Live Head Tracking

Arduino and Unity

The first idea (and the easiest) was to use an Arduino, Bluetooth, and a host computer running a program written in Unity/C#. Why Unity? Our lead software developer was very familiar with the engine and its VR applications. Since this project merges VR with multimedia, it turned out to work incredibly well. However, we ran into issues with Bluetooth (more specifically, the virtualization of COM ports) and with the portability of the system.

TinyDuino + USB Shield + 9-Axis IMU

We initially used a TinyDuino with the Bluetooth module as well as the 9-axis IMU. The sketch we wrote took data straight from the gyroscope and sent it over the serial port. From there, Unity read the serial port as fast as we sent the data and moved a virtual camera around a 360-degree video. Think of VR without a headset. Of course, we were overjoyed when this finally worked. So what went wrong? Unfortunately, many things were not perfect. Due to the nature of Bluetooth, we had recurring issues with the location of COM ports and with packaging the entire system.
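For anyone curious what the Unity half looked like, here is a minimal sketch of the technique: a script on the camera opens the (virtual) COM port, reads one gyro sample per line, and applies it as a rotation inside the 360-video sphere. The class name, port name, and the "yaw,pitch,roll" line format are illustrative assumptions, not our production code:

    using System.IO.Ports;   // needs the .NET 4.x API compatibility level in Unity
    using UnityEngine;

    // Attach to the camera that sits inside the 360-video sphere.
    public class GyroCameraDriver : MonoBehaviour
    {
        // The virtual COM port exposed by the Bluetooth link (the part
        // that kept moving around on us).
        public string portName = "COM5";
        public int baudRate = 115200;

        private SerialPort _port;

        void Start()
        {
            _port = new SerialPort(portName, baudRate);
            _port.ReadTimeout = 50;   // don't stall the render loop waiting on data
            _port.Open();
        }

        void Update()
        {
            try
            {
                // The Arduino prints one "yaw,pitch,roll" line per sample.
                string[] parts = _port.ReadLine().Split(',');
                float yaw   = float.Parse(parts[0]);
                float pitch = float.Parse(parts[1]);
                float roll  = float.Parse(parts[2]);

                // Rotate the camera, not the video sphere.
                transform.rotation = Quaternion.Euler(pitch, yaw, roll);
            }
            catch (System.Exception)
            {
                // Timeout or partial line this frame; keep the last orientation.
            }
        }

        void OnDestroy()
        {
            if (_port != null && _port.IsOpen) _port.Close();
        }
    }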

What Features Worked:

  • Arduino + Gyroscope printing to the Serial Port
  • Unity as our video interpreter and exporter

What Features Didn’t Work:

  • Bluetooth
  • Portability (requires multiple external batteries/host computer)
  • Cost (approx. $100-125)

I decided it would be interesting to see if we could reduce the amount of computing power required to run our software. We were locked into Unity, so we needed to re-optimize the code to run on lower-powered, more portable devices like a Raspberry Pi. The next goal was to make our code run on a Raspberry Pi and to bridge the Arduino to the RPi over USB.

A LattePanda, a Raspberry Pi, and Windows IoT Walk into a Bar

Part 1: Raspberry Pi with Windows IoT

Our Raspberry Pi 3 Model B+ wired with an analog video transmitter.

We looked at the build options in Unity and determined that we could compile the program for ARM processors, specifically for Windows IoT. We chose a smaller, more portable SoC because I wanted the entire device to run on the actor, to show how easily the technology could be integrated into any multimedia show. That requirement meant we needed wireless video. Our first step was to use the Raspberry Pi's analog video out together with a recreational 5.8 GHz video transmitter. This turned out to be the easiest part of the entire build. Once it was working, we attempted to compile the program for the Raspberry Pi.

Test of the wireless 5.8 GHz video with a professional-grade 5.8 GHz ground station receiver.
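For anyone trying to reproduce the composite-video trick, the relevant switches live in the Pi's /boot/config.txt. The values below are a sketch of the kind of settings involved rather than our exact file (sdtv_mode and hdmi_ignore_hotplug are documented Raspberry Pi options):

    # /boot/config.txt on the Raspberry Pi 3
    hdmi_ignore_hotplug=1   # ignore HDMI so the Pi falls back to the analog/composite output
    sdtv_mode=0             # composite standard: 0 = NTSC, 2 = PAL
    sdtv_aspect=3           # 16:9 aspect ratio

The transmitter then takes the composite signal straight from the Pi's 3.5 mm A/V jack.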

Unfortunately, Windows IoT (at least for us) was terrible to work with. We were unable to run our program on the platform, most likely due to its performance demands, as we had changed very little in the code. Soon after, we abandoned the idea of using an RPi, since we could not target ARM processors natively. At that point, we believed we needed a device that sat between a Raspberry Pi and a laptop. In other words, the device needed to be x86-compatible, run Windows 10, and be very portable.

Part 2: LattePanda, because who doesn’t need more coffee?

By now, Zeezee and I were both attending our respective universities, far enough from each other that the work needed to be split into two teams. I formed my team with an experienced Arduino programmer, and we delved into the depths of the LattePanda. This time, however, Zeezee and I had a much bigger deadline: the AISNE MS Diversity Conference. We planned to announce and demonstrate our technology at the conference, so we geared up to work hard.

3D-printed case designed in Fusion 360 for the LattePanda.

I chose the LattePanda because it seemed almost too good to be true for what we needed: approximately the same size as the RPi, more powerful, runs Windows 10, plenty of GPIO with direct access to the IO on the Intel processor, and a built-in Arduino Leonardo. The list goes on, and the more we read, the better it seemed. We purchased the LattePanda as well as a multitude of accessories. I redesigned the 3D-printed case based on the Raspberry Pi design and printed it at my university.

Part 3: Why are we moving on from this application?

Unfortunately, the good news and the potential of the LattePanda died off quickly as the days went by. My team and I learned that programming for the LattePanda was much harder than expected, especially since the documentation was sparse and the user base was smaller than we had thought.

static Arduino arduino = new Arduino(); // We thought this snippet was hilarious. 
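For context, that line comes from the LattePanda's C# library, where a single Arduino object is the handle to the entire onboard Leonardo. Here is a minimal sketch in the style of the vendor's blink example (the namespace and constants are written from memory, so treat them as assumptions):

    using System.Threading;
    using LattePanda.Firmata;  // vendor wrapper around the onboard Leonardo

    class Blink
    {
        // One object stands in for the whole co-processor, hence the joke above.
        static Arduino arduino = new Arduino();

        static void Main()
        {
            arduino.pinMode(13, Arduino.OUTPUT);        // onboard LED pin
            while (true)
            {
                arduino.digitalWrite(13, Arduino.HIGH);
                Thread.Sleep(500);
                arduino.digitalWrite(13, Arduino.LOW);
                Thread.Sleep(500);
            }
        }
    }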

We wanted the LattePanda to be great. I know that if we had kept working with the platform, it might have worked out, but given the cost of the device and our talent for breaking things, we could not continue developing on it. After the conference, we took a break from the project and reconvened over our winter break.

Google Cloud and More Raspberry Pis

I had a crazy idea that stemmed from an even crazier one. I had been working on a bot and found that Python 3 interacted with the web very, very well. I also knew that Python was one of the native programming languages of the Raspberry Pi. Something clicked, and I figured that maybe we could return to the platform but use it in a different way. This idea would end up merging all of the previous iterations of the project. Here is what we needed to get done:

  • Real-time gyro data
  • High-quality video output (up to 4K, with potential for higher)
  • At least 24 fps
  • Low cost and ease of replication

The more I thought about this, the more reasonable it seemed. Many products use ‘the cloud’ to process real-time data (e.g., natural language processing), so would this be possible? My bot was already running on a virtual machine on Google Compute Engine, which gave me the idea to look into IoT running on the Google Cloud platform.

Cloud Framework Programming

I tossed the idea around with our lead programmer, Harry. While he figured it was a backwards route (which I completely understood and already knew), he was intrigued by the idea of offloading the video compute to ‘the cloud’. We discussed the potential issues and kept running into the same one: encoding and decoding video nearly instantly. From our experience, we determined that the latency between the gyroscope and the projection needed to be no greater than 500 ms, or half a second. This will most likely be the hardest part of the programming, but we will find out once it is complete (or nearly complete).
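To make the 500 ms budget measurable, the plan is to timestamp every gyro sample at the device before it leaves for the cloud. The device-side uplink will actually be written in Python on the Pi; the C# sketch below only illustrates the idea, and every name in it (the address, port, and IMU helpers) is hypothetical:

    using System;
    using System.Net.Sockets;
    using System.Text;
    using System.Threading;

    // Hypothetical device-side uplink: stamp each gyro sample with a send
    // time so the cloud end can verify the gyro-to-projection budget.
    class GyroUplink
    {
        static void Main()
        {
            // Placeholder address for the Compute Engine VM.
            using (var udp = new UdpClient("203.0.113.10", 5005))
            {
                while (true)
                {
                    float yaw = ReadYaw(), pitch = ReadPitch(), roll = ReadRoll();
                    long sentMs = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds();

                    // One small datagram per sample keeps uplink latency low;
                    // the receiver compares sentMs against its own clock.
                    byte[] packet = Encoding.ASCII.GetBytes($"{sentMs},{yaw},{pitch},{roll}");
                    udp.Send(packet, packet.Length);

                    Thread.Sleep(10);   // ~100 samples per second
                }
            }
        }

        // Stand-ins for however the IMU actually gets read on the device.
        static float ReadYaw() => 0f;
        static float ReadPitch() => 0f;
        static float ReadRoll() => 0f;
    }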

What are Our Next Steps?

I believe we can get to the H.264 encoder part of the flow chart. However, I do not know if this iteration will ever be complete. Harry and I both agree that whether or not it works, it will be a learning experience for both of us. We do have a plan B if it does not work by our expected deadline (August 2019).

We will keep working on this, and I will personally begin developing the gyroscope uplink to Google Cloud. We will continue to update the GitHub repo as well as the blog with any and all data we have!

Wish us luck!

Ben