
Optical flow is a computer vision algorithm that, in a nutshell, analyzes the movement of pixels from frame to frame. This makes it particularly useful in scenes with background movement that rules out other methods (such as those relying on a static background). It is one of the quickest computer vision algorithms to set up, although it is computationally quite intensive. This guide explains how to use it in Lightact; if you want to know more about the algorithm itself, head to its Wikipedia page. There are also plenty of examples and videos online if you want to dig deeper.

[Image: Optical flow node]

The starting point is to insert an Optical flow node (under the hood it uses the calcOpticalFlowFarneback function from the OpenCV library). The node needs a cvMat input, which is an OpenCV variable type designed for computer vision.
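If you are curious what the node computes, here is a minimal Python sketch of the same OpenCV call (in the Python bindings a cvMat is a NumPy array; the file names and the window size of 15 are illustrative placeholders, not Lightact defaults):

```python
import cv2

# cvMat inputs: two consecutive frames loaded as single-channel grayscale.
prev_gray = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
next_gray = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# The result is a two-channel float32 cvMat holding a (dx, dy) vector per pixel.
flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                    0.5, 3, 15, 3, 5, 1.1, 0)
print(flow.shape)  # (height, width, 2)
```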

[Image: Optical flow layer layout]

In this guide, we’ll be using this layer layout. It contains:

  • a Video reader node, which reads from a video file (see below),
  • a Texture to cvMat node, which converts a texture to a cvMat,
  • our Optical flow node,
  • a cvMat to texture node, which converts the optical flow cvMat back to a texture, and
  • a Render to canvas node.

Note: You can download the sample video file we are using in this guide here. It shows an overhead FLIR recording of two people walking. To make programming easier, make sure the Ignore Pause boolean input on the Video reader node is checked.
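For reference, here is a rough Python analogue of that layer layout, assuming the sample video is saved locally as flir_sample.mp4 (the file name and parameter values are placeholders):

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("flir_sample.mp4")            # the Video reader node
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # Texture to cvMat

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # The Optical flow node
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.1, 0)
    # cvMat to texture: visualize the flow magnitude as a grayscale image
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    vis = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    cv2.imshow("optical flow", vis)                  # Render to canvas
    if cv2.waitKey(1) & 0xFF == 27:                  # Esc to quit
        break
    prev_gray = gray

cap.release()
cv2.destroyAllWindows()
```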

Node inputs

The Optical flow node has several inputs. We won’t explain what each of them does; instead, we’ll focus on the ones you’ll want to modify most often (if you want the background on these parameters, the best resource is the official OpenCV documentation). A sketch showing how they map onto the underlying OpenCV call follows the list.

  • Input: connect the cvMat you want to run the optical flow analysis on.
  • Pyramid scale: in most cases, you should leave it at 0.5.
  • Levels: the number of pyramid levels; in most cases, you’ll want to leave it at 3.
  • Window size: the averaging window size. The larger the window, the more small areas of movement get ignored and only large ones considered.
  • Iterations: in most cases, you’ll want to leave it at 3.
  • Poly neighbours: in most cases, you’ll want to leave it at 5, or set it to 7 for blurrier results.
  • Poly sigma: if Poly neighbours is 5, set it to 1.1; if it is 7, set it to 1.5.
  • Min val: sets the minimum value below which the output will be zero (black). If Dynamic magnitude (see below) is set to true, Min val makes sure you don’t get spurious results when there is no movement.
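Here is how these inputs line up with calcOpticalFlowFarneback’s arguments in the OpenCV Python bindings (the dummy frames and the window size of 15 are just there so the call runs):

```python
import cv2
import numpy as np

# Two dummy grayscale frames just so the call runs; in practice they come
# from consecutive video frames.
prev_gray = np.zeros((240, 320), np.uint8)
gray = np.zeros((240, 320), np.uint8)

flow = cv2.calcOpticalFlowFarneback(
    prev_gray, gray, None,
    pyr_scale=0.5,   # Pyramid scale
    levels=3,        # Levels
    winsize=15,      # Window size (larger windows ignore smaller movements)
    iterations=3,    # Iterations
    poly_n=5,        # Poly neighbours (5, or 7 for blurrier results)
    poly_sigma=1.1,  # Poly sigma (1.1 when Poly neighbours is 5, 1.5 when it is 7)
    flags=0)

# Min val has no counterpart in this call; judging by the description above,
# Lightact applies it afterwards as a threshold on the flow magnitude.
```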

Node parameters

[Image: Optical flow properties]

Dynamic magnitude: if set to True, the algorithm automatically adjusts the sensitivity (magnitude) so that there are some results at all times. This can be a problem if you only want to track movements larger than a certain value; in that case, set the Min val input so that movements smaller than this value get ignored.
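To illustrate the difference, here is a sketch of how such a threshold and a dynamic (auto-scaled) magnitude could be applied to the flow result. The node’s internals aren’t documented, so treat this as an approximation rather than Lightact’s actual code:

```python
import cv2
import numpy as np

def flow_to_brightness(flow, dynamic_magnitude=True, min_val=0.5):
    """A sketch of the magnitude handling described above (an assumption)."""
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    mag[mag < min_val] = 0.0              # Min val: drop movements below the threshold
    if dynamic_magnitude:
        peak = mag.max()                  # auto-adjust sensitivity so the
        if peak > 0:                      # strongest movement maps to full white
            mag = mag / peak
    else:
        mag = np.clip(mag, 0.0, 1.0)      # fixed sensitivity
    return (mag * 255).astype(np.uint8)
```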

Tracking input determines the framerate (FPS) at which the algorithm runs. The default is 30, but in a lot of cases you can lower it, as this saves processing power.
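In code terms, lowering the tracking rate just means analyzing fewer frames per second. A sketch of one way to do this with a video file (the file name is a placeholder):

```python
import cv2

TRACKING_FPS = 15                          # half the default of 30
cap = cv2.VideoCapture("flir_sample.mp4")  # placeholder file name
video_fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
step = max(1, round(video_fps / TRACKING_FPS))

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame_idx += 1
    if frame_idx % step:
        continue                           # analyze only every step-th frame
    # ... convert to grayscale and run calcOpticalFlowFarneback here ...
```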

Colors

Besides the magnitude of the movement, optical flow outputs its direction as well.

[Image: HSV color wheel]

The color coding is based on the standard HSV wheel shown above: red means right, cyan means left, greenish means down, and so on.
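This is the standard way dense optical flow is rendered in OpenCV’s own tutorials: the flow angle drives the hue and the magnitude drives the brightness. Whether Lightact does exactly this internally is an assumption, but the result matches the wheel above:

```python
import cv2
import numpy as np

def flow_to_color(flow):
    """Hue encodes direction, brightness encodes magnitude."""
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hsv = np.zeros((*flow.shape[:2], 3), np.uint8)
    hsv[..., 0] = ang * 180 / np.pi / 2   # OpenCV hue range is 0-180
    hsv[..., 1] = 255                     # full saturation
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
```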

For example, you can connect this texture directly to a Texture particle emitter in a 3D scene, and the particle directions will match these colors. Check the video tutorial below for instructions.

[Video tutorial: YouTube video zald_tCU830]
