Vision sensors are becoming more important in Intelligent Transportation
Systems (ITS) for traffic monitoring, management, and optimization as the
number of network cameras continues to rise. However, tracking and matching
objects across multiple non-overlapping cameras poses significant challenges
in city-scale urban traffic scenarios. These challenges include
handling diverse vehicle attributes, occlusions, illumination variations,
shadows, and varying video resolutions. To address these issues, we propose an
efficient and cost-effective deep learning-based framework for Multi-Object
Multi-Camera Tracking (MO-MCT). The proposed framework utilizes Mask R-CNN for
object detection and employs Non-Maximum Suppression (NMS) to select target
objects from overlapping detections. Transfer learning is employed for
re-identification, enabling the association and generation of vehicle tracklets
across multiple cameras. Moreover, we leverage appropriate loss functions and
distance measures to handle occlusion, illumination variation, and shadow challenges. The
final solution identification module performs feature extraction using
ResNet-152 coupled with Deep SORT-based vehicle tracking. The proposed
framework is evaluated on the 5th AI City Challenge dataset (Track 3),
comprising 46 camera feeds. Among these 46 camera streams, 40 are used for
model training and validation, while the remaining 6 are utilized for model
testing. The proposed framework achieves competitive performance with an IDF1
score of 0.8289, with precision and recall scores of 0.9026 and 0.8527,
respectively, demonstrating its effectiveness in robust and accurate vehicle
tracking.
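The NMS step used to select target objects from overlapping detections can be sketched as follows. This is a minimal illustrative version of greedy IoU-based suppression, not the framework's actual implementation; the box format and threshold are assumptions.

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2); returns intersection-over-union.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    # Greedy NMS: repeatedly keep the highest-scoring remaining box and
    # discard any box whose overlap with it exceeds the threshold.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```

For example, two heavily overlapping detections of the same vehicle collapse to the higher-scoring one, while a distant detection is kept.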