Traffic rule violation detection using Faster R-CNN neural networks

Ajinkya Jawale


Pune, Maharashtra


The software is able to detect and store traffic violations related to vehicles crossing solid lines and to traffic light violations.

Project status: Under Development

Mobile, RealSense™, Internet of Things, Artificial Intelligence

Groups
Student Developers for AI

Intel Technologies
Intel Python, Other


Overview / Usage

This software was created for demonstration purposes, and in some cases it is not able to detect the corresponding violations. With further development, it will be able to automatically detect most types of traffic violations, not only the ones related to solid line crossings and red lights. Below, I describe the technical details of the implementation and provide some illustrations.

Methodology / Approach

To analyze a video and detect the violations present in it, the following approach is taken. First, models of the road and of vehicle movements are obtained by processing the video stream. Second, these models are combined and analyzed to detect traffic violations.

During development of this project, the goal was to get an MVP as quickly as possible. So, in some cases, a simpler approach was chosen over a better one in order to save time. The project reuses available algorithms and open-source codebases as much as possible, so as to write as little code and spend as little time as possible. The project was developed in Python 3, with extensive use of SciPy [https://www.scipy.org/] and OpenCV [https://opencv.org/]. The UI is based on GTK [https://www.gtk.org/].

Road analysis

To analyze the structure of the road, a manual approach is taken. In this project, straight road marking lines are analyzed. Solid straight lines play a key role in structuring the road; examples are stop lines, edge lines, and double lines.

First, the background image of the video is extracted by applying a background/foreground extraction algorithm [https://sagi-z.github.io/BackgroundSubtractorCNT/]. Background extraction eliminates moving objects from the video and yields a clear image of the road. Second, the background image is analyzed to detect the straight lines of the road.
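As an illustration of this step, here is a minimal sketch of background extraction, assuming the CNT background subtractor bundled with opencv-contrib-python (cv2.bgsegm.createBackgroundSubtractorCNT); the project links to the standalone BackgroundSubtractorCNT library, which exposes a similar interface.

```python
import cv2

def extract_background(video_path, max_frames=500):
    """Feed frames to the subtractor and return its background estimate."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.bgsegm.createBackgroundSubtractorCNT()  # assumes opencv-contrib-python
    background = None
    for _ in range(max_frames):
        ok, frame = cap.read()
        if not ok:
            break
        subtractor.apply(frame)                       # update the background model
        background = subtractor.getBackgroundImage()  # road image without moving vehicles
    cap.release()
    return background
```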

One of the most popular ways to find straight lines in an image is to preprocess the image with an edge detector and then apply the Hough transform [https://en.wikipedia.org/wiki/Hough_transform, https://docs.opencv.org/3.0-beta/modules/imgproc/doc/feature_detection.html?highlight=houghline#cv2.HoughLinesP] to identify the lines. Implementations of these algorithms are available in the OpenCV library. Although this algorithm is quite general, it did not give the required results. The problem was that, in addition to the straight lines visible in the image, other sections of the image containing many edges were also identified as lines. When the line thresholds are increased, the road lines often are not identified either. With further preprocessing of the image, e.g. removal of small non-straight edges, a better result could be achieved, but a different approach to detecting line segments was chosen instead.
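For reference, a minimal sketch of the Canny plus probabilistic Hough transform pipeline described above; the OpenCV calls are the ones referenced, but the parameter values here are placeholders, not the ones evaluated in the project.

```python
import cv2
import numpy as np

def detect_lines_hough(background_bgr):
    """Return line segments (x1, y1, x2, y2) found on the background image."""
    gray = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150, apertureSize=3)   # edge map
    # Probabilistic Hough transform: returns segments rather than infinite lines.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=80, minLineLength=40, maxLineGap=10)
    return [] if lines is None else [tuple(l[0]) for l in lines]
```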

After the line segments are detected, a graph is composed out of them in order to identify whether several segments lie on the same line. Nodes are linked to each other when the corresponding segments are almost collinear and close enough. Next, the connected components of the graph are identified by applying an appropriate algorithm [D. J. Pearce, "An Improved Algorithm for Finding the Strongly Connected Components of a Directed Graph", Technical Report, 2005], an implementation of which is available in SciPy [https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csgraph.connected_components.html]. Finally, each connected component is identified as a straight line.
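A minimal sketch of this grouping step, built around the SciPy function referenced above; the predicates almost_collinear and close_enough are assumed helpers standing in for the project's actual collinearity and distance checks.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import connected_components

def group_segments(segments, almost_collinear, close_enough):
    """segments: list of (x1, y1, x2, y2); the two predicates are assumed helpers."""
    n = len(segments)
    adjacency = lil_matrix((n, n), dtype=np.int8)
    for i in range(n):
        for j in range(i + 1, n):
            if almost_collinear(segments[i], segments[j]) and close_enough(segments[i], segments[j]):
                adjacency[i, j] = 1
    # Each connected component of the graph corresponds to one straight road line.
    n_lines, labels = connected_components(adjacency, directed=False)
    return [[segments[k] for k in range(n) if labels[k] == comp] for comp in range(n_lines)]
```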

Vehicle tracking

The next step is to get a model of the positions and movements of each vehicle. To detect the vehicles, a YOLO neural network processes the frames of the video stream [https://pjreddie.com/darknet/yolo/].
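Purely as an illustration, here is a sketch of running a Darknet YOLO model through OpenCV's DNN module; the cfg/weights paths are placeholders, and the project's own detection code may load the network differently.

```python
import cv2
import numpy as np

# Placeholder paths to a Darknet YOLO config and its pretrained weights.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
output_layers = net.getUnconnectedOutLayersNames()

def detect_vehicles(frame, conf_threshold=0.5):
    """Return detections as [x1, y1, x2, y2, score] rows for one frame."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    boxes = []
    for output in net.forward(output_layers):
        for det in output:                    # det = [cx, cy, bw, bh, objectness, class scores...]
            score = float(det[5:].max())
            if score > conf_threshold:
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append([cx - bw / 2, cy - bh / 2, cx + bw / 2, cy + bh / 2, score])
    return np.array(boxes)
```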

To identify the vehicles and to smooth out fluctuations of the predicted positions across a sequence of detections, the SORT algorithm is applied to the detections [https://arxiv.org/pdf/1602.00763.pdf, https://github.com/abewley/sort]. This algorithm uses a rudimentary combination of techniques, such as the Kalman filter and the Hungarian algorithm, for its tracking components, but at the same time it achieves an accuracy comparable to state-of-the-art online trackers. After that, the movement path of each vehicle is composed from the tracking results and the path is smoothed.
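A sketch of how the SORT tracker from the linked repository is typically driven, assuming its Sort class and update() interface with detections given as [x1, y1, x2, y2, score] rows:

```python
import numpy as np
from sort import Sort   # tracker from https://github.com/abewley/sort

tracker = Sort()         # Kalman filter + Hungarian data association under the hood

def track_frame(detections):
    """detections: np.array of [x1, y1, x2, y2, score] rows for one frame."""
    if len(detections) == 0:
        detections = np.empty((0, 5))
    # Returns [x1, y1, x2, y2, track_id] rows; IDs stay stable across frames,
    # which lets the per-vehicle path be accumulated and smoothed afterwards.
    return tracker.update(detections)
```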

Violation Logic

Up to this point, I have described how the path of a moving vehicle and the road marking line models are obtained. After the lines are detected, they are shown in the application window and the user can assign each one a type (Front, Parallel) or leave it without a type. Front lines are intended to be placed across the carriageway, and parallel ones along the carriageway. Crossing a line is considered a violation. For a Front line, a traffic light color can also be selected to decide whether crossing that line at a given moment is a violation or not.
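A minimal sketch of the crossing check behind this logic, assuming a vehicle path given as a sequence of points and a marked line given as a segment; the light_is_red flag stands in for the selected traffic light state, and all names here are illustrative rather than the project's actual API.

```python
def _ccw(a, b, c):
    # Signed area: positive when the triangle a-b-c turns counter-clockwise.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    # Proper intersection test for two segments via orientation signs.
    return (_ccw(p1, p2, q1) * _ccw(p1, p2, q2) < 0 and
            _ccw(q1, q2, p1) * _ccw(q1, q2, p2) < 0)

def is_violation(path, line, line_type, light_is_red=False):
    """path: list of (x, y) vehicle positions; line: ((x1, y1), (x2, y2))."""
    crossed = any(segments_intersect(path[i], path[i + 1], line[0], line[1])
                  for i in range(len(path) - 1))
    if line_type == "Parallel":
        return crossed                     # crossing a solid parallel line is always a violation
    if line_type == "Front":
        return crossed and light_is_red    # stop-line crossing only counts on red
    return False
```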

Illustrations

The following videos illustrate the software in action. The videos are created from traffic cameras' live streams found on YouTube. Because there are usually only a few traffic violations during a short period of time, the lines are usually set to red (Red light) in order to create artificial violations. It can be noticed that not all of the appropriate violations are detected. In some cases this is because YOLO does not detect the vehicles, which can be considerably improved by additional training of the neural network on images taken from traffic cameras. In other cases, the path of the violating vehicle does not actually cross the line, which can be improved by fine-tuning the violation logic. For example, considering the 2D projection of the 3D scene, a simple possible improvement for Front line crossing is to extend the lines upward in the camera's view. I decided to stop line analysis at this stage because in production I plan to use a different approach to obtain the model of the road, which will give a more precise and detailed representation.

Technologies Used

YOLO

Darknet

OpenCV

Deep Learning

Intel AI Open vision

Repository

https://github.com/ajinkyajawale14/Giscle_Mask_RCNN
