Background Modelling using Octree Color Quantization

5 Dec 2014 · Aditya A. V. Sastry

By assuming that the most frequently occurring colors in a video, or in a region of a video, belong to the background, I propose a new algorithm for detecting foreground objects in a video. Detecting foreground objects is complicated by swaying trees, background objects being moved around, and lighting changes in the video. To deal with such complexities, many existing solutions rely heavily on expensive floating point operations. In this paper I used a data structure called an octree, which is implemented using only binary operations. Traditionally, octrees have been used for color quantization, but here I also used one as a data structure to store the most frequently occurring colors in a video. For each of the first few video frames, I constructed an octree from all the colors of that frame. Next, I pruned each tree by removing nodes below a certain height and gave every leaf node a color that depends on the topological path from the root to that node. Hence any two leaf nodes in two different octrees with the same root-to-leaf path represent the same color. I then merged these individual trees into a single tree, retaining only those nodes whose root-to-node paths are most common across all the trees. The colors represented by the leaf nodes of the resulting tree are the most frequently occurring colors in the starting frames of the video. Any color in an incoming frame that is not close to a color represented by a leaf node of the merged tree can therefore be regarded as belonging to a foreground object. As an octree is constructed using only binary operations, it is very fast compared to other leading algorithms.
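
Because each level of a color octree branches on one bit of R, G and B, pruning every tree at a fixed height makes a leaf equivalent to a fixed-length bit prefix of the three channels, and merging the pruned trees amounts to counting how often each root-to-leaf path occurs across the training frames. The sketch below illustrates that reading of the method in Python; the depth of 4, the `keep_fraction` coverage threshold, and the approximation of "close to a background color" as "shares the same pruned octree path" are my assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from collections import Counter


def octree_codes(frame, depth):
    """Map each pixel of an H x W x 3 uint8 frame to an integer encoding its
    root-to-leaf octree path truncated at `depth` levels.

    At level i the octree branches on bit (7 - i) of R, G and B, so the code
    is just the top `depth` bits of the three channels interleaved."""
    r = frame[..., 0].astype(np.uint32)
    g = frame[..., 1].astype(np.uint32)
    b = frame[..., 2].astype(np.uint32)
    code = np.zeros(frame.shape[:2], dtype=np.uint32)
    for i in range(depth):
        bit = 7 - i
        child = (((r >> bit) & 1) << 2) | (((g >> bit) & 1) << 1) | ((b >> bit) & 1)
        code = (code << 3) | child
    return code


class OctreeBackgroundModel:
    """Sketch of a background model built from pruned, merged color octrees.

    Instead of materializing one tree per frame and merging them, the pruned
    root-to-leaf paths are tallied in a single counter; the most frequent
    paths stand in for the leaves of the merged tree."""

    def __init__(self, depth=4, keep_fraction=0.95):
        self.depth = depth                  # assumed pruning height
        self.keep_fraction = keep_fraction  # assumed coverage threshold
        self.counts = Counter()
        self.background_codes = set()

    def add_training_frame(self, frame):
        # Accumulate path frequencies over one of the starting frames.
        codes = octree_codes(frame, self.depth)
        self.counts.update(codes.ravel().tolist())

    def finalize(self):
        # Keep the most common paths until they cover keep_fraction of the
        # training pixels; these represent the background colors.
        total = sum(self.counts.values())
        covered = 0
        for code, count in self.counts.most_common():
            self.background_codes.add(code)
            covered += count
            if covered >= self.keep_fraction * total:
                break

    def foreground_mask(self, frame):
        # A pixel is foreground if its pruned octree path is not one of the
        # frequent background paths.
        codes = octree_codes(frame, self.depth)
        return ~np.isin(codes, list(self.background_codes))
```

In this reading, a model would be trained by calling `add_training_frame` on the first few frames and then `finalize`, after which `foreground_mask` flags pixels whose pruned octree path never became a frequent background path; only shifts, masks and hash lookups are involved, which matches the paper's emphasis on binary operations.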
