Funan Interactive Games is a project that challenged us in many ways. It pushed us to introduce new products and helped several of Lightact's features mature. In this article, we'll explain the process that led to the successful completion of the project in May 2019.
But first, let’s go through what this project is all about. To do that, you can either head to the project page or just watch the project video below.
Our Client, Hexogon Solution, tasked us with providing the tracking, the content and the media server for an 11 by 12 meter interactive projection area on the floor of Funan, an iconic Singapore shopping mall that was rebuilt from the ground up.
When you need to track people on a large floor surface, you usually use infrared scanners. In our case, this wasn't possible because we couldn't install anything on the floor. The only option was to mount some kind of tracking camera above it.
So our first task was to choose the right camera technology.
Tracking
It was clear from the beginning that, since we were projecting dynamic content onto the very area we needed to track, we couldn't use any kind of standard video camera: the projected light would disturb the tracking. So the next technology we investigated was infrared depth cameras, of which Microsoft Kinect and Intel RealSense are the most prominent examples.
The ceiling above the projection area is quite high. One section of it is 9 m above the floor, but it is offset considerably from the center. This means a camera mounted there would look at the tracking area at an oblique angle, which would reduce the precision of the tracking.
There’s another section of the ceiling that’s positioned more centrally above the projected area, but it is 13 m high. As the range of Kinect and RealSense is much shorter than that, we had to look for another solution.
Next, we turned to thermal cameras. Their price, however, was prohibitively high, and besides that, we couldn't find one with a wide enough field of view. With an installation height of around 13 m and a projection area of 11 by 12 m, you need a fairly wide field of view to cover the whole area.
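To get a feel for the numbers, here is a quick back-of-the-envelope check of the field of view a single camera would need if it were mounted 13 m directly above the center of the area. This is just an illustration, not a figure from the project documentation:

```cpp
// Back-of-the-envelope check: field of view needed to cover an 11 x 12 m area
// from a camera mounted 13 m directly above its center. Illustrative only.
#include <cmath>
#include <cstdio>

int main()
{
    const double pi        = 3.14159265358979323846;
    const double height    = 13.0;        // installation height (m)
    const double halfShort = 11.0 / 2.0;  // half of the 11 m side
    const double halfLong  = 12.0 / 2.0;  // half of the 12 m side

    const double fovShort = 2.0 * std::atan(halfShort / height) * 180.0 / pi;
    const double fovLong  = 2.0 * std::atan(halfLong  / height) * 180.0 / pi;

    std::printf("Required FOV: %.0f x %.0f degrees\n", fovShort, fovLong);
    // Prints roughly: Required FOV: 46 x 50 degrees
    return 0;
}
```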
So that didn’t work either.
Stereo Vision
Finally, we settled on stereo vision: two cameras mounted a known distance apart capture the scene from slightly different viewpoints, and the disparity between the two images lets you compute a depth map. An additional benefit of this technology is that by changing the distance between the two cameras, you can adjust the depth precision and the range: the farther apart they are, the longer the range. In this project, for example, where the installation height is 13 m above the floor, we placed them 1 m apart. This gave us a depth precision of around 6 cm at distances of around 12 meters. That's pretty good.
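For the curious, the relationship between baseline and depth precision can be sketched with the standard stereo triangulation formula. The focal length and disparity accuracy below are illustrative assumptions, not the actual specs of our cameras:

```cpp
// Back-of-the-envelope stereo depth precision: with depth Z = f * B / d
// (f = focal length in pixels, B = baseline, d = disparity in pixels),
// a small disparity error dd translates into a depth error of roughly
// Z^2 * dd / (f * B). Focal length and disparity accuracy are assumptions.
#include <cstdio>

int main()
{
    const double baseline     = 1.0;     // camera separation (m)
    const double depth        = 12.0;    // distance to the floor (m)
    const double focalPx      = 1200.0;  // assumed focal length (pixels)
    const double disparityErr = 0.5;     // assumed matching accuracy (pixels)

    const double depthErr = depth * depth * disparityErr / (focalPx * baseline);
    std::printf("Expected depth error: ~%.0f cm\n", depthErr * 100.0);
    // With these assumed numbers: ~6 cm at 12 m, in line with what we measured.
    return 0;
}
```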
Of course, we had to test the whole thing, so we rented a lift and, one frigid morning, went 13 m up in the air.
Everything went as expected and we were able to get depth maps good enough for tracking.
In the video above, you can see the raw StereoView footage, the white-on-black blob image you get after initial processing in Lightact, the minimum enclosing circles found for these blobs, and the Unreal Engine output.
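For illustration, here is roughly what that blobs-to-circles step looks like when sketched with OpenCV. This is not Lightact's actual code; the function and parameter names are our own:

```cpp
// Rough sketch of the blob-to-circle step, using OpenCV. Not Lightact's
// internal implementation, just the general idea: turn the depth map into
// white-on-black blobs, then fit a minimum enclosing circle to each blob.
#include <opencv2/opencv.hpp>
#include <vector>

struct TrackedCircle { float x, y, r; };

std::vector<TrackedCircle> circlesFromDepth(const cv::Mat& depthMap,
                                            const cv::Mat& floorDepth,
                                            double minHeight)
{
    // Pixels whose depth differs from the empty-floor depth by more than
    // minHeight are treated as part of a person and become white blobs.
    cv::Mat diff, blobs;
    cv::absdiff(depthMap, floorDepth, diff);
    cv::threshold(diff, blobs, minHeight, 255, cv::THRESH_BINARY);
    blobs.convertTo(blobs, CV_8U);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(blobs, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<TrackedCircle> circles;
    for (const auto& contour : contours)
    {
        cv::Point2f center;
        float radius = 0.f;
        cv::minEnclosingCircle(contour, center, radius);
        if (radius > 5.f) // ignore tiny noise blobs
            circles.push_back({ center.x, center.y, radius });
    }
    return circles;
}
```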
Creating the Content in Unreal Engine
The project brief said that the content should switch between branded interactive visual effects (motion graphics) and a multi-player game that visitors could play by walking on the projection area. Before choosing Unreal Engine, we looked at several other options, such as coding everything ourselves with the Cinder framework or using a real-time VFX engine such as Notch. However, the speed of prototyping we were able to achieve with Unreal's Blueprint system, the ability to implement actual game logic (think score-keeping, game-level management and so on) and the huge pool of Unreal Engine experts we could work with made us choose Unreal Engine very early in the process.
So how does it all work?
We were able to quickly integrate Unreal Engine with Lightact so that we could pass textures and variables from one application to the other.
Unreal Engine receives the location and radius (x, y, r) of every tracked circle, uses this information for the gameplay logic and outputs the final render to Lightact via Spout. Lightact then warps and blends this texture and sends it out to the 4 projectors.
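To make the data flow a bit more concrete, here is a plain C++ approximation of what the gameplay side does with those (x, y, r) values. In the real project this logic lives in Unreal Engine Blueprints, and all names below are hypothetical:

```cpp
// Plain C++ approximation of the gameplay side: each frame, the tracking sends
// the circles (x, y, r) in projection-area coordinates, and the game checks
// whether a visitor is standing on an interactive target. Names are hypothetical.
#include <vector>

struct TrackedCircle { float x, y, r; };   // position and radius in meters
struct Target        { float x, y, r; bool collected = false; };

// Returns how many previously uncollected targets were stepped on this frame.
int updateTargets(std::vector<Target>& targets,
                  const std::vector<TrackedCircle>& visitors)
{
    int newlyCollected = 0;
    for (auto& target : targets)
    {
        if (target.collected) continue;
        for (const auto& visitor : visitors)
        {
            const float dx = visitor.x - target.x;
            const float dy = visitor.y - target.y;
            const float minDist = visitor.r + target.r;
            if (dx * dx + dy * dy <= minDist * minDist)  // circles overlap
            {
                target.collected = true;
                ++newlyCollected;
                break;
            }
        }
    }
    return newlyCollected;
}
```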
We also upgraded Lightact Manager so that it monitors the Unreal Engine game and makes sure it is always running as it should.
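We can't reproduce Lightact Manager's internals here, but the general watchdog idea looks roughly like the Windows-only sketch below, with a hypothetical executable path supplied by the caller:

```cpp
// Minimal watchdog sketch (Windows): launch the game executable and relaunch
// it whenever it exits. This is NOT Lightact Manager's actual implementation,
// just an illustration of the general idea.
#include <windows.h>
#include <string>

void runWatchdog(const std::wstring& exePath)
{
    for (;;)
    {
        STARTUPINFOW si = { sizeof(si) };
        PROCESS_INFORMATION pi = {};
        std::wstring cmdLine = L"\"" + exePath + L"\"";

        if (CreateProcessW(nullptr, &cmdLine[0], nullptr, nullptr, FALSE,
                           0, nullptr, nullptr, &si, &pi))
        {
            // Block until the game process exits (crash or otherwise)...
            WaitForSingleObject(pi.hProcess, INFINITE);
            CloseHandle(pi.hThread);
            CloseHandle(pi.hProcess);
        }
        // ...then wait a moment and start it again.
        Sleep(3000);
    }
}
```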
The end-client can control the installation with Lightact WebUIs.
How did we create the content?
The Client forwarded us their branding guidelines, based on which we created all of the content from scratch in Unreal Engine.
In the final project, everything is rendered in real time and there is no pre-rendered content at all. Altogether, there are 6 interactive visual effects and 4 interactive games, which adds up to 10 separate Unreal levels.
Conclusion
There are hundreds of things we learned when combining technologies that, to our knowledge at least, hadn't been used together before: what to do when tracking results disappear for a split second due to various hardware factors, how to integrate all the components into a reliable and user-friendly system, and so on.
All in all, the Client, our partners and we ourselves are very pleased with the result.
On to the next one!