ACMI Flipbook

ACMI approached us to develop a new permanent interactive for their museum. As one of the first things visitors see when they enter, it needed to be fun and engaging. We were tasked with creating an interactive that lets visitors produce a digital and physical flipbook capturing their image in motion in a unique and interesting way. Visitors stand in front of a large touch screen and camera and generate a motion graphic of themselves with a striking visual effect applied. They can then review the video and either re-capture it or print the output as a physical “flipbook”, flipping through the pages to “play back” the animation.

The project required developing real-time photo and video manipulation that worked well both on screen and in print, in a flipbook style.

The first stage of the project involved prototyping visual effects that looked interesting and sat close to the design concepts while maintaining real-time performance. This produced roughly 20 to 30 effects, which we then tested under various lighting conditions and against a range of skin tones. While the prototypes produced some interesting results, none were usable in their current form: they either strayed too far from the design concepts, had performance issues, or did not hold up across different skin tones. We anticipated needing additional computational resources to reduce visual noise in the background and isolate the user in clear, recognisable frames.

To achieve this we chose the Azure Kinect as the capture device because it contains RGB, depth and IR cameras, and we initially assumed we would need the depth camera to reliably isolate users with darker skin tones. However, after testing against our prototype effects we found that the RGB camera alone produced the best results and removed the need for the additional computation. We ultimately kept the Azure Kinect, even though a lower-spec camera could have achieved similar results, because it leaves room to update the work over time to take advantage of the depth camera if issues arise on site that are outside our control. To get the Azure Kinect working within Electron we extended the Azure Kinect Node C++ extension.

From there we narrowed down to a single effect: the RGB colour channels were split into separate data streams, offset by a number of frames and then recombined, leaving streams of colour trailing behind a user whenever there was a lot of movement. The visuals of this effect worked well, but it was extremely taxing and ran far below the required frame rate.

The first attempt at resolving this was to move the computation into a higher-performing language by adding a C++ extension and writing the logic to split and recombine the RGB colour channels there. This improved performance, but it was still far too slow. The next attempt was to upload each frame to the GPU and run the calculations in the GLSL shading language. Because GPU shading languages run thousands of operations in parallel, the RGB effect computation ran remarkably fast; although transferring data from the CPU to the GPU remained a bottleneck, we could now generate and save frames in real time.

There was also a requirement to play back the captured frames later in the experience. This was challenging because our first attempt, saving a video stream while the effect was applied, dropped frames. To overcome this, instead of replaying a video we stored the untransformed frame data in RAM and dynamically generated each frame from that data on playback.
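To make the channel-delay idea concrete, below is a minimal C++ sketch of the CPU-side version described above: red is taken from the live frame while green and blue are sampled from progressively older frames held in a small history buffer. The class name, BGRA pixel layout and offsets are illustrative assumptions, not the production code.

```cpp
#include <cstdint>
#include <deque>
#include <vector>

using Frame = std::vector<uint8_t>;  // BGRA, width * height * 4 bytes

class ChannelDelayEffect {
 public:
  ChannelDelayEffect(size_t pixelCount, size_t frameOffset)
      : pixelCount_(pixelCount), frameOffset_(frameOffset) {}

  // Push the latest camera frame and return the composited frame:
  // red from the current frame, green delayed by `frameOffset_`,
  // blue delayed by twice that, so fast movement leaves colour trails.
  Frame process(const Frame& current) {
    history_.push_back(current);
    // Keep just enough history for the largest delay.
    while (history_.size() > frameOffset_ * 2 + 1) history_.pop_front();

    const Frame& delayedG = frameAt(frameOffset_);
    const Frame& delayedB = frameAt(frameOffset_ * 2);

    Frame out(current.size());
    for (size_t i = 0; i < pixelCount_; ++i) {
      const size_t p = i * 4;        // BGRA layout
      out[p + 0] = delayedB[p + 0];  // blue from the oldest frame
      out[p + 1] = delayedG[p + 1];  // green from the mid-delay frame
      out[p + 2] = current[p + 2];   // red from the live frame
      out[p + 3] = 255;              // opaque alpha
    }
    return out;
  }

 private:
  // Frame `delay` steps behind the newest one, clamped while the buffer fills.
  const Frame& frameAt(size_t delay) const {
    size_t idx = history_.size() > delay ? history_.size() - 1 - delay : 0;
    return history_[idx];
  }

  size_t pixelCount_;
  size_t frameOffset_;
  std::deque<Frame> history_;
};
```

The final GLSL version performs the same per-pixel recombination, but with the current and delayed frames bound as textures so every pixel is processed in parallel on the GPU.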

Role and responsibilities

  • App Development
  • Stop motion rotoscoping
  • Installation

Credits

  • Jamie Foulston – Digital Designer
  • Pete Shand – Lead Developer
  • Liv Howard – Producer