|Stochastic Systems Group|
Video Analytics over Camera Networks
Networks of video cameras, deployed over the last decade or so, today permit pervasive, wide-area visual surveillance. Visible-light cameras provide excellent temporal and spatial resolution, long range, wide field-of-view, and low latency. The main difficulty is that video data by its nature contains a high degree of clutter, and it is hard to separate the truly relevant information from that clutter, particularly in urban environments. Conventionally, this problem has been handled by an object-based approach, in which an object is first tagged, identified, classified, and tracked before behavior modeling and abnormality detection. This paradigm scales neither to complex urban environments nor to networks of video cameras with limited communication capability.
We explore a location-based approach to behavior modeling and abnormality detection. We proceed directly with event characterization and behavior modeling at the pixel level, based on motion labels obtained from background subtraction. Our method requires little processing power and memory, is robust to motion-segmentation errors, and is general enough to monitor humans, cars, or any other moving objects in uncluttered as well as highly cluttered scenes. We establish "essential" invariance of our event characterization to zoom and look-angle, which enables autonomously establishing correspondence across different networked cameras.
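The pipeline above can be sketched in a few lines: a running-average background model yields per-pixel motion labels, and accumulating those labels per location gives a simple location-based activity model. This is a minimal illustration only, not the group's actual implementation; the function names and the parameters `alpha` and `thresh` are assumptions chosen for the example.

```python
import numpy as np

def motion_labels(frames, alpha=0.05, thresh=25.0):
    """Per-pixel motion labels via running-average background subtraction.

    frames: (T, H, W) grayscale sequence.
    Returns a boolean (T, H, W) array: True where a pixel deviates from
    the slowly adapting background by more than `thresh`.
    """
    bg = frames[0].astype(float)
    labels = np.zeros(frames.shape, dtype=bool)
    for t, f in enumerate(frames):
        f = f.astype(float)
        labels[t] = np.abs(f - bg) > thresh  # pixel is "moving" if it deviates from background
        bg = (1 - alpha) * bg + alpha * f    # slowly fold the current frame into the background
    return labels

def activity_histogram(labels, period):
    """Location-based behavior model: fraction of time each pixel is
    active in each phase of a repeating period (e.g. time of day).
    Unusually high activity for a phase could then flag an abnormality.
    """
    hist = np.zeros((period,) + labels.shape[1:])
    counts = np.zeros(period)
    for t in range(labels.shape[0]):
        hist[t % period] += labels[t]
        counts[t % period] += 1
    return hist / counts[:, None, None]
```

Because the model lives at each pixel rather than on tracked objects, its cost is constant per frame and it degrades gracefully when segmentation is noisy, which mirrors the scalability argument made above.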