World Coordinate Virtual Traffic Cameras: Edge-Based Transformation and Merging of Multiple Surveillance Video Sources, 2020 7th International Conference on Soft Computing & Machine Intelligence (ISCMI)

DOI: 10.1109/ISCMI51676.2020.9311597

Authors:

Raimonds Rava, Maksims Ivanovs, Ansis Skadiņš, Krišjānis Nesenbergs


Video cameras are an important high-fidelity source of surveillance information. They are especially useful in traffic monitoring scenarios in smart cities for reducing congestion, regulating traffic and enforcing regulations. Unfortunately, increasing the number of cameras at a specific location makes it harder to keep quality attention on the scene, due to the high amount of unstructured data mixed with privacy-breaching, non-essential data such as pedestrians' faces. In this paper, we propose a method for real-time merging of multiple surveillance video sources that extracts the required information into a single, world coordinate-based, virtual traffic camera view. Such a composite view allows for future improvements in the quality of critical details using techniques such as super-resolution, while at the same time removing unnecessary private information. The implementation consists of perspective transformation, image merging, and object detection/tracking with the YOLOv4 machine learning model to extract only the significant objects; it is validated on real-world traffic data using an NVIDIA Jetson TX2 as an edge device.
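The perspective transformation step described above is commonly realized as a planar homography that maps image pixels of the ground plane into shared world coordinates, so that detections from several cameras can be merged in one view. The sketch below is an illustration of that idea, not the paper's implementation: it estimates a homography from four hypothetical image-to-world point correspondences (e.g. surveyed road markings) with a direct linear transform, then projects pixel coordinates into world metres. All point values and function names here are assumptions for the example.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 homography H from four point correspondences
    (src: image pixels, dst: world-plane coordinates) via the DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H's entries.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(A, dtype=float)
    # The solution is the null vector of A, i.e. the last right-singular vector.
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2,2] == 1

def to_world(H, pts):
    """Map Nx2 pixel coordinates into world-plane coordinates via H."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]  # perspective divide

# Hypothetical calibration: road-region corners in one camera image (pixels)
# and their known positions on the road plane (metres).
src = np.array([[100, 400], [540, 400], [620, 80], [60, 80]], dtype=float)
dst = np.array([[0, 0], [7, 0], [7, 30], [0, 30]], dtype=float)
H = estimate_homography(src, dst)
```

Repeating this per camera places every camera's detections in the same world frame, after which merging the views reduces to drawing them on one common ground-plane canvas.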