The aim of this project is to create a device that captures dashcam footage and alerts the driver when there is a risk of collision. The Vehicle Collision Detection System must be able to:
• Capture real-time vehicle footage.
• Recognize vehicles and other obstacles.
• Alert the driver if an obstacle comes too close.
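The three requirements above form a simple capture → detect → alert loop. The sketch below illustrates that loop with stubs; the frame format, the `detect_vehicles` function, and the distance threshold are illustrative assumptions, not the project's actual implementation (which uses live video and a YOLO detector).

```python
# Minimal sketch of the capture -> detect -> alert loop.
# The detector and frame source are stubs standing in for the
# dashcam feed and the YOLO forward pass.

def detect_vehicles(frame):
    """Stub detector: returns a list of (label, distance_m) pairs.
    In the real system this would be a YOLO inference call."""
    return frame  # each stub "frame" is already a list of detections

def run_pipeline(frames, alert_distance_m=10.0):
    """Scan every frame and collect (frame_index, label) alerts
    for obstacles closer than the alert distance."""
    alerts = []
    for i, frame in enumerate(frames):
        for label, distance in detect_vehicles(frame):
            if distance <= alert_distance_m:  # obstacle too close
                alerts.append((i, label))
    return alerts

# Two synthetic frames: a far truck, then a car within alert range
frames = [[("truck", 35.0)], [("car", 8.0)]]
print(run_pipeline(frames))
```

Only the second frame triggers an alert, since the car is inside the assumed 10 m threshold.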
Three models are compared in this project: the pre-trained (in-built) YOLO weights, manually trained YOLO weights generated from around 600 vehicle images, and the Tiny YOLO model. These models differ in both speed and accuracy.
YOLO (You Only Look Once) is an object detection algorithm that targets real-time applications. It is very fast and can process video at high frame rates. YOLO has five major versions (as of March 2021).
YOLO works by first dividing the input image into a grid, with each grid cell responsible for localising the objects whose centers fall within it.
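The grid-cell assignment described above can be sketched as a small function (a minimal illustration; the grid size `S=7` follows the original YOLO paper and is an assumption here, not a parameter taken from this project):

```python
def grid_cell(cx, cy, img_w, img_h, S=7):
    """Map a box center (cx, cy) in pixels to its (row, col) grid cell.

    YOLO divides the image into an S x S grid; the cell containing the
    object's center is responsible for predicting that object.
    """
    col = min(int(cx / img_w * S), S - 1)  # clamp centers on the edge
    row = min(int(cy / img_h * S), S - 1)
    return row, col

# A box centered at (320, 240) in a 448x448 image
print(grid_cell(320, 240, 448, 448))
```

The cell that "owns" the center then predicts the bounding box and class scores for that object.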
For this project, three YOLO models are compared: the scaled in-built (pre-trained) YOLO model, the manually trained YOLO model built from 600 vehicle images, and the Tiny YOLO model. On a test set of 1000 vehicle photos (some images contain more than one vehicle), the following observations were made.
With the scaled YOLO model, vehicles are detected in each frame. If an object within the region of interest (the area in front of the vehicle) approaches at a high relative speed, an alert is sounded. The speed and accuracy of the system are sufficient to help prevent accidents.

Accuracy: the pre-trained model is considerably more accurate, but it also produces more false positives than our model. On the test set of 1000 vehicle pictures, the pre-trained model detected 1256 vehicles; the count exceeds 1000 because many pictures contain multiple vehicles. Our manually trained model detected 1004 vehicles on the same set, and the Tiny YOLO model detected only 401 of the 1256 vehicles.
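The alert condition above (an object in the front-of-vehicle region of interest whose apparent size grows quickly between frames) can be sketched as two small checks. The ROI boundaries, the box format `(x, y, w, h)`, and the growth threshold are all assumptions for illustration, not the project's actual parameters:

```python
def in_roi(box, frame_w, frame_h):
    """Check whether a detection's center lies in the assumed
    front-of-vehicle ROI: the middle third of the frame horizontally
    and the lower half vertically. box is (x, y, w, h) in pixels."""
    cx = box[0] + box[2] / 2
    cy = box[1] + box[3] / 2
    return frame_w / 3 <= cx <= 2 * frame_w / 3 and cy >= frame_h / 2

def should_alert(prev_h, curr_h, growth_threshold=1.15):
    """Alert when a tracked box's height grows fast between frames,
    i.e. the object is approaching at a high relative speed."""
    return prev_h > 0 and curr_h / prev_h >= growth_threshold

# A vehicle box in the ROI whose height jumps 20% in one frame
box = (600, 400, 120, 96)      # in an assumed 1280x720 frame
print(in_roi(box, 1280, 720))  # center (660, 448) is inside the ROI
print(should_alert(80, 96))    # 96/80 = 1.20 exceeds the threshold
```

In practice the growth check would be applied per tracked object across consecutive frames, with the threshold tuned against real footage.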
The lower accuracy of the manually trained models is likely due to the low clarity of the video; the pre-trained model, by contrast, handles the given video quality well.
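Taking the pre-trained model's 1256 detections as the reference count for the 1000-image test set, the relative detection rates of the three models can be computed directly (a rough comparison, since false positives are not separated out in these counts):

```python
# Detection counts reported above on the 1000-image test set
reference = 1256  # vehicles found by the pre-trained model
detections = {
    "pre-trained YOLO": 1256,
    "manual (600 images)": 1004,
    "Tiny YOLO": 401,
}

for name, count in detections.items():
    rate = count / reference
    print(f"{name}: {count} detections ({rate:.1%} of reference)")
```

This puts the manually trained model at roughly 80% of the reference count and Tiny YOLO at roughly 32%, consistent with the speed-versus-accuracy trade-off noted earlier.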