Real-time Roadlane Detection System for Autonomous Vehicles

Ishna K A

Thrissur, Kerala


The driver support system is one of the most important features of modern vehicles, ensuring driver safety and decreasing vehicle accidents on roads. Among its tasks, road lane detection, or road boundary detection, is one of the most complex and challenging.

Project status: Published/In Market

Artificial Intelligence

Groups
Student Developers for AI

Intel Technologies
Intel Python

Code Samples [1]

Overview / Usage

The driver support system is one of the most important features of modern vehicles, ensuring driver safety and decreasing vehicle accidents on roads. Among its tasks, road lane detection, or road boundary detection, is one of the most complex and challenging. It includes localizing the road and determining the relative position between the vehicle and the road. The system acquires the front view using a camera mounted on the vehicle and detects the lanes by applying a few processing steps. The lanes are extracted using a histogram and sliding-window based method. The proposed lane detection system can be applied to painted roads as well as to curved and straight roads in different weather conditions. The proposed system requires information such as the lane width and the offset between the centers of the lanes. In addition, camera calibration and coordinate transformation are also required. The system was investigated under various situations of changing illumination and shadow effects on various road types without speed limits, and it demonstrated robust performance in detecting the road lanes under different conditions.

Methodology / Approach

The overall design of the Road-lane Detection software is robust and compact. The system receives frames captured by a camera of 8 MP or more mounted at the front of the vehicle. Each frame captured in real time undergoes image pre-processing to make it less noisy and more compact for the lane detection algorithms. A histogram and sliding-window based lane-finding algorithm then takes these pre-processed frames and produces a result that draws the detected lane lines onto the original frames.

User Interface

I designed a simple UI for the Road-lane detection project. It takes a random frame from the input video and shows the intermediate pre-processing results for that frame, as well as the final result, both on the frame and across the whole video.

The user interacts with the system through this UI: they can provide a video of the road as input and view the result as a video.

System Design

At a lower level, the Road-lane detection software is divided into six modules: the Camera Calibration and Image Undistortion module, the Color and Gradient Threshold module, the Perspective Transformation module, the Lane Detection and Fit module, the Lane Curvature and Vehicle Positioning module, and the Video Output module.

Each of these modules has a distinct function. All the functions they use come from different Python libraries, such as OpenCV, Matplotlib, and NumPy.

Because of the physical properties of a camera lens, the captured two-dimensional image isn't perfect. Image distortions change the apparent size and shape of an object; more importantly, they can make objects appear closer or farther away than they actually are. Fortunately, we can measure these distortions and correct them. We can extract all the distortion information we need from pictures of objects for which we know where certain points should lie theoretically. Chessboards on a flat surface are commonly used, because chessboards have regular high-contrast patterns and it's easy to imagine what an undistorted chessboard looks like.
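A minimal sketch of this calibration step using OpenCV's standard chessboard routines; the directory layout, file extension, and 9x6 pattern size are illustrative assumptions, and OpenCV is imported inside the functions so the sketch itself loads without it:

```python
import glob
import numpy as np

def calibrate_from_chessboards(image_dir, pattern=(9, 6)):
    """Estimate the camera matrix and distortion coefficients from
    chessboard photos (directory and pattern size are assumptions)."""
    import cv2  # assumes opencv-python is installed
    # Ideal 3-D corner locations on a flat z=0 chessboard plane.
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)
    objpoints, imgpoints, shape = [], [], None
    for path in glob.glob(image_dir + "/*.jpg"):
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, pattern, None)
        if found:
            objpoints.append(objp)
            imgpoints.append(corners)
            shape = gray.shape[::-1]
    # Recover the camera matrix (mtx) and distortion coefficients (dist).
    _, mtx, dist, _, _ = cv2.calibrateCamera(
        objpoints, imgpoints, shape, None, None)
    return mtx, dist

def undistort_frame(frame, mtx, dist):
    """Correct lens distortion in a single frame."""
    import cv2
    return cv2.undistort(frame, mtx, dist, None, mtx)
```

Calibration is done once offline; the resulting `mtx` and `dist` are then reused to undistort every incoming frame.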

Gradient and Color Thresholding

To estimate the curvature of a road, we don't need the information from every pixel of a road image. The lane lines in the video stream are yellow and white, so we can mask everything out except the yellows and whites of the image. As an added precaution, because we might see some yellow and white lines that are not lanes, or lanes that are not distinctly yellow or white, we also use what is called a Sobel operator. A Sobel operator essentially measures the rate of change in value (and its direction) between two locations of the image; this is technically called the derivative, or gradient. More on this later.

The hue value is a color's perceived number representation based on combinations of red, green, and blue. The saturation value measures how colorful or how dull a color is. Lightness is how close to white the color is. The yellow lanes are nicely singled out by a combination of lightness and saturation above certain values. The white lanes are singled out by a really high lightness value, regardless of saturation and hue.
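A small NumPy sketch of this color thresholding, operating on an image already converted to HLS (e.g. via cv2.cvtColor with COLOR_RGB2HLS); the exact threshold values here are assumptions, not the project's tuned numbers:

```python
import numpy as np

def color_mask(hls, s_thresh=(90, 255), l_thresh=(200, 255)):
    """Binary mask from an HLS image. High saturation with moderate
    lightness picks out yellow paint; very high lightness picks out
    white paint regardless of hue or saturation."""
    l, s = hls[..., 1], hls[..., 2]
    yellow = (s >= s_thresh[0]) & (s <= s_thresh[1]) & (l >= 100)
    white = (l >= l_thresh[0]) & (l <= l_thresh[1])
    return (yellow | white).astype(np.uint8)
```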

We apply the Sobel operator to the lightness channel of the image. By thresholding a combination of the horizontal component of the gradient, the magnitude of the gradient, and the direction of the gradient, we weed out locations whose change in lightness is not large enough. Thresholding the magnitude of the gradient as well as its x component does a good job of that. For the direction, we keep values a little above 0 radians (about 0.7) and a little below π/2 radians (about 1.4): zero corresponds to horizontal lines and π/2 to vertical lines, and our lanes (in vehicle view) fall in between them.
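The combined magnitude-and-direction thresholding might look like the following NumPy sketch, with np.gradient standing in for cv2.Sobel and the threshold values chosen for illustration:

```python
import numpy as np

def gradient_mask(gray, mag_thresh=(30, 255), dir_thresh=(0.7, 1.4)):
    """Approximate Sobel-style thresholding with np.gradient. Keeps
    pixels whose gradient magnitude is large enough and whose gradient
    direction falls between roughly 0.7 and 1.4 radians, where slanted
    lane lines in vehicle view tend to lie."""
    gy, gx = np.gradient(gray.astype(float))
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    direction = np.arctan2(np.abs(gy), np.abs(gx))
    mask = ((magnitude >= mag_thresh[0]) & (magnitude <= mag_thresh[1])
            & (direction >= dir_thresh[0]) & (direction <= dir_thresh[1]))
    return mask.astype(np.uint8)
```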

Perspective Transformation

After correcting the image, we'd have an undistorted image of a road from the perspective of the vehicle. We can transform this into an image of the road from the perspective of a bird in the sky. To warp the image from vehicle view to sky view, we just need location coordinates. Specifically, all we need is one image for which we have some locations in the input perspective (vehicle view) and the corresponding locations in the desired perspective (sky view). I call these location coordinates the source points and destination points. An easy image on which we can eyeball correctness is one with straight parallel lanes. What's also great about this is that we then have the information we need to warp from sky view back to vehicle view as well.
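A self-contained sketch of what the perspective transform computes (cv2.getPerspectiveTransform and cv2.warpPerspective would normally do this work); four source/destination point pairs fully determine the 3x3 matrix:

```python
import numpy as np

def homography(src, dst):
    """Solve for the 3x3 perspective matrix mapping src -> dst, as
    cv2.getPerspectiveTransform does. src/dst are four (x, y) pairs,
    e.g. a straight-lane trapezoid mapped to a rectangle."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(M, x, y):
    """Apply the perspective matrix to a single point."""
    u, v, w = M @ np.array([x, y, 1.0])
    return u / w, v / w
```

Because the matrix is invertible, `np.linalg.inv(M)` gives the warp from sky view back to vehicle view, exactly the round trip described above.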

Curve Fitting

We can fit a curve for each lane line with a second-degree polynomial function:

x = Ay² + By + C

We have to find the coefficients [A, B, C] for each lane. We can use the built-in polyfit() function: all we have to do is feed it points, and it outputs the coefficients of a polynomial, of a specified degree, that best fits the points fed.

To decide which pixels are part of a lane, let's implement a basic algorithm. If we take a histogram of all the columns of the lower half of the image, we get a graph with two prominent peaks. These peaks are good indicators of the x position of the base of each lane, so we can use them as a starting point.
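The histogram starting-point step can be sketched as:

```python
import numpy as np

def lane_bases(binary):
    """Column-sum histogram of the lower half of a binary warped image;
    the peak left and right of center gives each lane's base x position."""
    hist = binary[binary.shape[0] // 2:, :].sum(axis=0)
    mid = hist.shape[0] // 2
    return int(np.argmax(hist[:mid])), int(mid + np.argmax(hist[mid:]))
```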

We then apply a sliding-window technique: one window on top of another, following the lanes up the frame. The pixels inside a window are marked as pixels of interest and added to the list of points for that lane. We average the x values of these pixels to get a good base point for the next window above, and repeat this until we reach the top of the frame. This way we accumulate all the pixels of interest, which we feed to our polyfit() function to obtain the coefficients of the second-degree polynomial. Once we know the equation of each curve, we can compute the radius of curvature of each lane line.
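The fitting and curvature steps can be sketched with np.polyfit and the standard radius-of-curvature formula (in pixel units; any meters-per-pixel scaling the project applies is omitted here):

```python
import numpy as np

def fit_lane(ys, xs, degree=2):
    """Fit x = A*y^2 + B*y + C to the accumulated lane pixels.
    np.polyfit is the polyfit() routine referred to above."""
    return np.polyfit(ys, xs, degree)

def radius_of_curvature(coeffs, y):
    """Radius of curvature of x(y) = A*y^2 + B*y + C at a given y:
    R = (1 + (2*A*y + B)^2)^(3/2) / |2*A|."""
    A, B, _ = coeffs
    return (1 + (2 * A * y + B) ** 2) ** 1.5 / abs(2 * A)
```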

Lane Projection to Vehicle's View

Because we now have the parameters of the lane line curves, we can generate the points of each curve and use the fillPoly() function to draw the lane region onto an image. We can then project our measurement onto the road in the original image.
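A NumPy-only sketch of rasterizing the lane region between the two fitted curves; cv2.fillPoly would normally draw this polygon, after which the inverse perspective matrix warps it back onto the vehicle-view frame:

```python
import numpy as np

def draw_lane_region(shape, left_fit, right_fit):
    """Mark every pixel between the left and right fitted curves.
    left_fit/right_fit are [A, B, C] coefficients of x(y)."""
    h, w = shape
    overlay = np.zeros((h, w), np.uint8)
    for y in range(h):
        xl = int(np.polyval(left_fit, y))   # left lane x at this row
        xr = int(np.polyval(right_fit, y))  # right lane x at this row
        overlay[y, max(xl, 0):min(xr, w)] = 1
    return overlay
```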

Technologies Used

Language : Python

Libraries : OpenCV, NumPy, Matplotlib

Repository

https://github.com/Ishna-Anhsi/Laneline-detection
