
  • Java
    October 12, 2018

Matrix Bullet Time Demo Take Two

Javed Mohammed
Community Manager

By Christopher Bensen and Noel Portugal, Cloud Experience Developers

If you are attending Oracle Code One or Open World 2018, you will be happy to hear that the Matrix Bullet Time Demo will be there. You can experience it at the Developer Exchange, inside the GroundBreakers Hub, in Moscone West.

Last year we went into the challenges of building the Matrix Bullet Time Demo (https://developer.oracle.com/java/bullet-time). A lot of problems were encountered after that article was published, so this year we pulled the demo out of storage, dusted it off, and began refurbishing it so it could make a comeback. The first challenge was trying to remember how it all worked.

Let’s back up a bit and describe what we’ve built here so you don’t have to read the previous article. The idea is to take a simultaneous photo from cameras placed around a subject, then stitch those photos together to form a movie. The intended final effect is for it to appear as though the camera is moving around a subject frozen in time. To do this we used 60 individual Raspberry Pi 3 single-board computers with Raspberry Pi cameras.

Besides all of the technical challenges, there are some logistical challenges. When set up, the demo is huge! It forms a ten-foot-diameter circle and needs even more space for the mounting system. Not only is it huge, it’s delicate. Wires big and small go everywhere. Fifteen Raspberry Pi 3s are mounted to each of the four lighting track gantries, and they are precarious at best. And to top it off, we have to transport this demo to wherever we set it up and back again. An absolutely massive crate was built that requires an entire truck. Because of these logistical challenges, the demo was only used at Open World and the keynote at JavaOne.

Last year at Open World, the demo was not working for the full length of the show. One of the biggest reasons is that aligning 60 cameras to a single point is difficult at best, and impossible with a precariously delicate mounting system. So Richard Bair wrote software image stabilization, sitting on the floor under the demo.

If you read the previous article about Bullet Time, you’d know a lighting track system was used to provide power. One of the benefits of a lighting track system is that it handles power distribution. You provide 120 volt AC input power to the track, and it carries that power through copper wires built into the track. At any point where you want a light, you use a mount designed for the track system, which transfers the power through the mount to the light. In our case, a 48 volt DC power supply sends 20 amps through the wires designed for 120 volts AC, and each camera has a small voltage regulator to step that down to the 5 volts DC required by a Raspberry Pi. The brilliance of this system is that power is easy to distribute, while the shutter release and photo transfer can be handled over WiFi. Unfortunately, WiFi is unreliable at a conference; there are far too many devices jamming up the spectrum. That meant running individual Ethernet cables to each camera, which is exactly what we were trying to avoid by using the lighting track system. So we ended up with an Ethernet harness strapped to the track.
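
As a back-of-the-envelope check on that power budget (assuming each Raspberry Pi 3 plus camera draws at most the commonly recommended 5 volts at 2.5 amps, about 12.5 watts): the track carries roughly 48 V × 20 A = 960 watts, while 60 cameras need on the order of 60 × 12.5 W ≈ 750 watts, which leaves some headroom for regulator losses.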

Once we opened up the crate and set up Bullet Time, only one camera was not functioning. On the software side there are four parts:

  1. A tablet that the user interacts with, providing a name and an optional mobile number, with a button to start the countdown to take the photo.
  2. The Java server receives the countdown request and sends out a UDP packet telling the Raspberry Pi cameras to take a photo (see the sketch just after this list). The server also receives the photos and stitches them together to make the video.
  3. Python code running on each Raspberry Pi listens for a UDP packet telling it to take a photo and where to send it.
  4. The cloud software uploads the video to a YouTube channel, and a text message with the link is sent to the user.
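
To make that trigger concrete, here is a minimal sketch of what the UDP multicast send might look like on the Java side. The multicast group, port, and payload format here are made up for illustration; they are not the demo’s actual values.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class ShutterTrigger {

    // Hypothetical multicast group and port; the real demo's values may differ.
    private static final String MULTICAST_GROUP = "239.1.1.1";
    private static final int PORT = 5005;

    public static void main(String[] args) throws Exception {
        // The payload tells each Raspberry Pi to fire its shutter and where
        // to send the resulting photo (an assumed upload URL on the server).
        byte[] payload = "SHOOT http://192.168.1.10:8080/upload"
                .getBytes(StandardCharsets.UTF_8);

        try (DatagramSocket socket = new DatagramSocket()) {
            DatagramPacket packet = new DatagramPacket(
                    payload, payload.length,
                    InetAddress.getByName(MULTICAST_GROUP), PORT);
            // A single send reaches all 60 cameras listening on the group,
            // so every shutter fires at (nearly) the same instant.
            socket.send(packet);
        }
    }
}
```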

The overall system works like this:

  1. The user would input their name on the Oracle JavaScript Extension Toolkit (Oracle JET) web UI we built for this demo, which is running on a Microsoft Surface tablet.
  2. The user would then click a button on the Oracle JET web UI to start a 10-second countdown.
  3. The web UI would invoke a REST API on the Java server to start the countdown.
  4. After a 10-second delay, the Java server would send a multicast message to all the Raspberry Pi units at the same moment instructing them to take a picture.
  5. Each camera would take a picture and send the picture data back up to the server.
  6. The server would make any adjustments necessary to the picture (see below) and then, using FFMPEG, turn those 60 images into an MP4 movie (a rough sketch of this step follows the list).
  7. The server would respond to the Oracle JET web UI's REST request with a link to the completed movie.
  8. The Oracle JET web UI would display the movie.
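
Step 6 is essentially a frame-sequence encode. The demo’s exact FFMPEG invocation isn’t documented here, but a rough sketch of shelling out to it from the Java server, assuming the 60 adjusted frames are written to disk as numbered PNGs, could look like this:

```java
import java.io.File;
import java.io.IOException;

public class MovieStitcher {

    /**
     * Encodes frame-000.png ... frame-059.png in frameDir into an MP4.
     * Flags and file names are illustrative, not the demo's actual setup.
     */
    public static File stitch(File frameDir, File output)
            throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(
                "ffmpeg",
                "-y",                          // overwrite any previous output
                "-framerate", "30",            // 60 frames -> roughly a 2-second orbit
                "-i", "frame-%03d.png",        // numbered frames from the cameras
                "-c:v", "libx264",
                "-pix_fmt", "yuv420p",         // broad player compatibility
                output.getAbsolutePath());
        pb.directory(frameDir);
        pb.inheritIO();
        Process process = pb.start();
        if (process.waitFor() != 0) {
            throw new IOException("ffmpeg failed to encode " + output);
        }
        return output;
    }
}
```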

In general, this system worked really well. The primary challenge that we encountered was getting all 60 cameras to focus on exactly the same point in space. If the cameras were not precisely focused on the same point, then it would seem like the "virtual" camera (the resulting movie) would jump all over the place. One camera might be pointed a little higher, the next a little lower, the next a little left, and the next rotated a little. This would create a disturbing "bouncy" effect in the movie.

We took two approaches to solve this. First, each Raspberry Pi camera was mounted with a series of adjustable parts, so that we could manually visit each Raspberry Pi and adjust the yaw, pitch, and roll of its camera. We placed a tripod with a pyramid target mounted to it in the center of the camera helix as a focal point, and, using a hand-held HDMI monitor, we visited each camera and manually adjusted it as best we could to line them all up on the pyramid target. Even so, this was only a rough adjustment, and the resulting videos were still very bouncy.

The next approach was a software-based approach to adjusting the translation (pitch and yaw) and rotation (roll) of the camera images. We created a JavaFX app to help configure each camera with settings for how much translation and rotation was necessary to line it up perfectly on the same exact target point. Within the app, we would take a picture from the camera and then click the target location, and the software would know how much to shift along the x and y axes for that point to end up in the dead center of the image. Likewise, we would rotate the image to line it up relative to a "horizon" line that was superimposed on the image. We had to visit each of the 60 cameras to perform both the physical and virtual configuration.

At runtime, the server would query the cameras to get their adjustments. Then, when images were received from the cameras (see step 6 above), we used the Java 2D API to transform those images according to the translation and rotation values previously configured. We also had to crop the images, so we adjusted each Raspberry Pi camera to take the highest resolution image possible and then cropped it to 1920x1080 for the resulting hi-def movie.
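
Here is a rough sketch of that Java 2D step. The method and parameter names are ours, not the demo’s actual code, but the idea is the same: rotate each frame about its center by the configured roll, nudge it by the configured x/y offsets, and crop to 1920x1080.

```java
import java.awt.Graphics2D;
import java.awt.geom.AffineTransform;
import java.awt.image.BufferedImage;

public class FrameStabilizer {

    /**
     * Applies a per-camera correction: rotate by rollDegrees about the image
     * center, shift by (dx, dy) pixels, then crop a centered 1920x1080 frame.
     * The values come from the per-camera calibration described above.
     */
    public static BufferedImage stabilize(BufferedImage src,
                                          double dx, double dy,
                                          double rollDegrees) {
        int outW = 1920, outH = 1080;
        BufferedImage out = new BufferedImage(outW, outH,
                BufferedImage.TYPE_INT_RGB);

        AffineTransform t = new AffineTransform();
        // Center the (larger) source frame on the 1920x1080 canvas, i.e. crop.
        t.translate((outW - src.getWidth()) / 2.0, (outH - src.getHeight()) / 2.0);
        // Nudge the image so the calibration target lands at dead center.
        t.translate(dx, dy);
        // Undo the camera's roll by rotating about the source image's center.
        t.rotate(Math.toRadians(-rollDegrees),
                src.getWidth() / 2.0, src.getHeight() / 2.0);

        Graphics2D g = out.createGraphics();
        g.drawImage(src, t, null);
        g.dispose();
        return out;
    }
}
```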

If we were to build Bullet Time version 2.0, we’d make a few changes, such as powering the Raspberry Pis using PoE, replacing the lighting track with a stronger, less flexible rolled-aluminum square tube in eight sections rather than four, and upgrading the camera module with a better lens. But overall this is a fun project with a great user experience.

 
