Domain Adaptation from Daytime to Nighttime: A Situation-sensitive Vehicle Detection and Traffic Flow Parameter Estimation Framework
Document Type
Article
Publication Date
3-2021
Publication Title
Transportation Research Part C: Emerging Technologies
Abstract
Vehicle detection in traffic surveillance images is an important approach to obtaining vehicle data and rich traffic flow parameters. Recently, deep learning-based methods have been widely used in vehicle detection with high accuracy and efficiency. However, deep learning-based methods require a large number of manually labeled ground truths (the bounding box of each vehicle in each image) to train the Convolutional Neural Networks (CNNs). For modern urban surveillance cameras, many manually labeled ground truths are already available for daytime images to train CNNs, while few or far fewer manually labeled ground truths are available for nighttime images. In this paper, we focus on making maximum use of labeled daytime images (Source Domain) to help vehicle detection in unlabeled nighttime images (Target Domain). For this purpose, we propose a new situation-sensitive method based on Faster R-CNN with Domain Adaptation (DA) to improve vehicle detection at nighttime. Furthermore, a situation-sensitive traffic flow parameter estimation method is developed based on traffic flow theory. We collected a new dataset of 2,200 traffic images (1,200 for daytime and 1,000 for nighttime) containing 57,059 vehicles to evaluate the proposed method for vehicle detection. Another new dataset with three 1,800-frame daytime videos and one 1,800-frame nighttime video, containing about 260 K vehicles, was collected to evaluate and show the estimated traffic flow parameters in different situations. The experimental results show the accuracy and effectiveness of the proposed method.
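The abstract states that traffic flow parameters are estimated from the detection results based on traffic flow theory. The sketch below is not the paper's situation-sensitive method; it is a minimal illustration, under assumed inputs (per-frame vehicle counts, a virtual detection line, an assumed camera coverage length and frame rate), of how macroscopic parameters could be derived from detections using only the standard fundamental relation of traffic flow theory, q = k · v.

```python
# Minimal sketch (illustrative only, not the paper's method) of estimating
# macroscopic traffic flow parameters from per-frame vehicle detections.
# It relies on the standard fundamental relation: flow q = density k * speed v.
# The parameter names (frame_counts, crossings, road_length_km, fps) are
# illustrative assumptions, not quantities defined in the paper.

def estimate_traffic_parameters(frame_counts, crossings, road_length_km, fps):
    """Estimate density (veh/km), flow (veh/h), and space-mean speed (km/h).

    frame_counts   -- number of detected vehicles in each video frame
    crossings      -- total vehicles crossing a virtual detection line
                      during the video segment
    road_length_km -- roadway length covered by the camera view, in km
    fps            -- video frame rate, frames per second
    """
    n_frames = len(frame_counts)
    duration_h = n_frames / fps / 3600.0                 # segment length in hours

    # Density k: average number of vehicles present per km of observed roadway.
    density = (sum(frame_counts) / n_frames) / road_length_km

    # Flow q: vehicles passing the detection line per hour.
    flow = crossings / duration_h

    # Space-mean speed v from the fundamental relation q = k * v.
    speed = flow / density if density > 0 else 0.0
    return density, flow, speed


if __name__ == "__main__":
    # Toy example: a 1,800-frame video at 30 fps (60 s), about 12 vehicles
    # visible per frame over 0.2 km of road, 50 vehicles crossing the line.
    counts = [12] * 1800
    k, q, v = estimate_traffic_parameters(counts, crossings=50,
                                          road_length_km=0.2, fps=30)
    print(f"density = {k:.1f} veh/km, flow = {q:.0f} veh/h, speed = {v:.1f} km/h")
```

With the toy numbers above, the sketch yields a density of 60 veh/km, a flow of 3,000 veh/h, and a space-mean speed of 50 km/h, illustrating how per-frame detections translate into the three linked parameters.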
Repository Citation
Li, Jinlong; Xu, Zhigang; Fu, Lan; Zhou, Xuesong; and Yu, Hongkai, "Domain Adaptation from Daytime to Nighttime: A Situation-sensitive Vehicle Detection and Traffic Flow Parameter Estimation Framework" (2021). Electrical and Computer Engineering Faculty Publications. 490.
https://engagedscholarship.csuohio.edu/enece_facpub/490
DOI
10.1016/j.trc.2020.102946
Volume
124
Comments
Zhigang Xu is supported by the National Key Research and Development Program of China (No. 2019YFB1600100), the National Natural Science Foundation of China (No. 61973045), the Shaanxi Province Key Development Project (No. S2018-YF-ZDGY-0300), the Fundamental Research Funds for the Central Universities (No. 300102248403), the Joint Laboratory of Internet of Vehicles sponsored by the Ministry of Education and China Mobile (No. 213024170015), and the Application of Basic Research Project for National Ministry of Transport (No. 2015319812060). Hongkai Yu is supported by an NVIDIA GPU Grant and an Amazon Web Services (AWS) Cloud Credits for Research Award.