Object tracking in interactive projections has a relatively long history – as much as that can be said for the AV industry, anyway. In the past few years, we've seen a big increase in the use of tracking technologies that rely on placing beacons on a performer on stage. Usually, these beacons are tracked with cameras that, with the help of dedicated software, pass the beacon's location to a media server.

But what if we want to track objects that don't have beacons on them?

In this case, the approach has usually been to use one of the following technologies:

  • Standard RGB cameras: here the most common approach is to detect silhouettes. This works best when the person is not actually lit by the projection.
  • Depth cameras such as Microsoft Kinect® or Intel RealSense®: they work in the infrared spectrum, so they are not affected by the projection. A big showstopper, however, is their limited range, which in the case of RealSense is around 10 meters at best. You also cannot modify the field of view, as the lenses are not interchangeable.
  • Lidars: lidars are all the rage now, but mostly because they are used in the automotive industry for self-driving cars. Consequently, most of the products on the market have a very narrow vertical field of view (a few degrees), a bulky frame and quite a high price tag.
  • Laser scanners: they work by creating an invisible infrared curtain (for example, just above the ground); if any object breaks it, they detect that object's range. They are quite useful for large multi-touch surfaces and floor projections, but only if it is possible to install the sensor at ground level.

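For the RGB-camera approach above, one common way to extract a silhouette is simple background subtraction: capture a reference frame of the empty scene, then flag every pixel that deviates from it by more than some threshold. A minimal sketch, assuming grayscale frames as NumPy arrays (the function name and threshold value are our own illustration, not any specific product's API):

```python
import numpy as np

def detect_silhouette(frame, background, threshold=30):
    """Return a boolean mask of pixels that differ from the background.

    frame, background: HxW uint8 grayscale images of the same size.
    threshold: minimum per-pixel intensity difference to count as foreground.
    """
    # Widen to int16 first so the subtraction of uint8 values cannot wrap around.
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold
```

In practice the background drifts with ambient light, so production systems update the reference frame continuously; this sketch only shows the core idea, and it is exactly why the method struggles when the person is lit by the projection itself.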
However, what if you want to create a large-scale interactive projection on the floor (say 10 by 10 meters) and don't have the ability to install a laser scanner? Or what if you want to detect performers on a stage and don't have the budget for beacon tracking?

At Lightact, we had been looking for a reliable and versatile solution for quite some time, and after months of research we developed, in collaboration with our partners, a stereo vision module. Stereo vision works on a frame-by-frame basis, so it is completely independent of the projection itself. And because it uses standard cameras, you have a wide choice of lenses and therefore plenty of field-of-view options.

Stereo vision mimics how humans perceive depth. It works by placing two cameras facing the same way but spaced a bit apart. An algorithm then compares the two images to create a depth map of the scene. Furthermore, by modifying the distance between the cameras (the baseline), you can adjust the detection range.
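The geometry behind this is straightforward: for a calibrated, rectified camera pair, the depth Z of a point follows from its disparity d (the horizontal pixel shift of that point between the two images), the focal length f and the baseline B, via Z = f·B / d. A minimal sketch of the relation (the numbers in the example are illustrative, not specs of any particular module):

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Depth of a point from its stereo disparity: Z = f * B / d.

    disparity_px:    horizontal pixel shift between left and right image.
    focal_length_px: camera focal length expressed in pixels.
    baseline_m:      distance between the two camera centers, in meters.
    """
    if disparity_px <= 0:
        # Zero disparity means the point is effectively at infinity.
        return float('inf')
    return focal_length_px * baseline_m / disparity_px

# Example: f = 1000 px, B = 0.2 m, disparity = 20 px  ->  Z = 10.0 m
z = depth_from_disparity(20.0, 1000.0, 0.2)
```

The formula also explains why widening the baseline extends the range: for the same smallest measurable disparity, a larger B yields a larger maximum Z (at the cost of a larger minimum working distance).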

We are very excited about this technology and hope that it will be the tool that will spark many more interactive installations.

We are going to release it at Prolight & Sound 2019 (Hall 4.0, booth B29).
