Background subtraction (also known as foreground detection) is a computer vision technique that separates foreground objects from the background. There are various approaches to this problem; Lightact uses one called MOG2 (if you want to delve deeper, check out OpenCV’s BackgroundSubtractorMOG2 class).
Background subtraction is performed with the MOG2 Background Subtraction node. The node needs a cvMat input, an OpenCV matrix type designed for computer vision work.
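Under the hood, the node is built on OpenCV’s BackgroundSubtractorMOG2 class. As a rough illustration of what that class does (a minimal OpenCV/Python sketch, not Lightact’s actual code; the image file name is a placeholder):

```python
import cv2

# Create the MOG2 background subtractor (the class the node is based on).
subtractor = cv2.createBackgroundSubtractorMOG2()

# 'frame' plays the role of the cvMat input; apply() returns a single-channel
# foreground mask in which moving pixels are white.
frame = cv2.imread("frame.png")   # placeholder input image
fg_mask = subtractor.apply(frame)
```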
In this guide, we’ll be using this layer layout. It contains:
- a Video reader node, which reads from a video file (see below),
- a Texture resize node,
- a Texture gaussian blur node, which reduces the noise,
- a Texture to cvMat node, which converts a texture to a cvMat,
- our MOG2 Background Subtraction node,
- a cvMat to texture node, which converts the cvMat back to a texture, and
- a Render to canvas node.
You can download the sample video file we are using in this guide here. It shows a FLIR recording, taken from above, of two people walking. To make programming easier, please make sure the Ignore Pause boolean input on the Video reader node is checked.
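For reference, the whole layer layout roughly corresponds to the following OpenCV/Python loop (a sketch under assumed parameters; the file name, resize dimensions and blur kernel are placeholders, and this is not how Lightact implements the nodes internally):

```python
import cv2

cap = cv2.VideoCapture("flir_people.mp4")      # placeholder for the sample video
subtractor = cv2.createBackgroundSubtractorMOG2()

while True:
    ok, frame = cap.read()                     # Video reader node
    if not ok:
        break                                  # end of file

    frame = cv2.resize(frame, (640, 360))      # Texture resize node
    frame = cv2.GaussianBlur(frame, (5, 5), 0) # Texture gaussian blur node (noise reduction)

    fg_mask = subtractor.apply(frame)          # MOG2 Background Subtraction node

    cv2.imshow("Foreground mask", fg_mask)     # stand-in for Render to canvas
    if cv2.waitKey(30) & 0xFF == 27:           # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```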
Node inputs
The MOG2 Background Subtraction node has just 4 inputs:
- Source is the cvMat input on which you want to perform the MOG2 Background Subtraction analysis.
- Learn rate is the learning rate that the algorithm uses. Any negative value means an automatic learning rate, which is what you’ll want to use in most cases.
- If the Blur input is true and Blur & Threshold is false, the output is only blurred, not thresholded. If both are false, neither is applied.
- If Blur & Threshold is true, the result is first blurred and then thresholded (see the sketch after this list for how these options map to OpenCV calls).
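As an illustration of how these inputs roughly map onto OpenCV calls (again a sketch, not Lightact’s implementation; the kernel size and threshold value are assumptions):

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2()
frame = cv2.imread("frame.png")             # placeholder input image

# Learn rate: a negative value lets MOG2 choose the learning rate automatically.
fg_mask = subtractor.apply(frame, learningRate=-1)

# Blur = true, Blur & Threshold = false: the mask is only blurred.
blurred = cv2.GaussianBlur(fg_mask, (5, 5), 0)

# Blur & Threshold = true: blur first, then threshold to a clean binary mask.
_, binary = cv2.threshold(blurred, 127, 255, cv2.THRESH_BINARY)
```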
Usage
The output of MOG2 Background Subtraction can be fed directly into a Texture particle emitter in Lightact’s 3D scene, or passed on to a Find circles or Find contours node for further detection.
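To get a feel for what such detection nodes do, here is a rough OpenCV equivalent of finding contours on the foreground mask (an illustrative sketch only; the mask file name is a placeholder):

```python
import cv2

# Placeholder: a binary foreground mask such as the one produced above.
fg_mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)

# Find the outlines of the white (foreground) blobs, e.g. the two walking people.
contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Draw a bounding box around each detected blob for visualisation.
output = cv2.cvtColor(fg_mask, cv2.COLOR_GRAY2BGR)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    cv2.rectangle(output, (x, y), (x + w, y + h), (0, 255, 0), 2)
```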