Design and Implementation of Advanced Driver Assistance System (ADAS): A Circle of Safety



Junaid Bashir (14-EE-70)
Junaid Hassan (14-EE-110)


Dr. Inamul Hasan Sheikh
Assistant Professor


JULY 2018

Design and Implementation of Advanced Driver Assistance System (ADAS): A Circle of Safety

Junaid Bashir 14-EE-70
Iraj Aqeel 14-EE-94
Junaid Hassan 14-EE-110

Project Supervisor: Dr. Inam-ul-Hasan Shaikh
Assistant Professor, EED, UET Taxila

Newly trained drivers attract a lot of attention in terms of their involvement in road accidents during the early period of their training, owing to errors of attention, speed selection, visual search, hazard identification and control in emergency situations. As a result, safety has become a necessary asset for the automotive industry, and the advanced driver assistance system (ADAS) has shown potential to reduce road accidents. An ADAS uses cameras to perceive the environment surrounding the driver and then alerts the driver by displaying warnings.
In our project we use machine vision technology. Machine vision is the ability of a computer to see; it typically uses one or more video cameras, analog-to-digital conversion and digital signal processing. The resulting data is then sent to a computer or robot controller.
Our project has three main objectives which are:
Car detection warning & warning on safe distance from other cars
Lane departure warning
Traffic signs warning
For our first objective, “Car detection warning & warning on safe distance from other cars”, we have developed a program that detects the cars in front of and behind the vehicle and measures the distance to them; if this distance is less than 5 meters, a warning is displayed to alert the driver and avoid a collision.
The second objective, “Lane departure warning”, helps the driver keep the car in its lane. A program is developed that detects the lane markers on the road and displays a warning if the driver leaves the lane.
The third objective, “Traffic signs warning”, detects traffic signs on the sides of the road, e.g. “turn ahead” or “speed limit”, and after detecting a sign it displays a warning to alert the driver to follow it.

For all three objectives we developed programs in MATLAB using neural networks. Neural networks are trained networks that are well suited to real-time interfacing and reduce the execution time of the program, which is why we chose them. After developing the programs in software, we demonstrated them on our electric toy car.
I certify that research work titled “Design and Implementation of Advanced Driver Assistance System (ADAS): A circle of Safety” is our own work. The work has not been presented elsewhere for assessment. Where material has been used from other sources it has been properly acknowledged / referred.

Junaid Bashir

Iraj Aqeel

Junaid Hassan

We would like to thank our supervisor, Dr. Inamul Hasan Sheikh, for his guidance, cooperation and support in bringing this project to completion, and all our teachers for guiding us.

Lastly, we would like to thank our families for their support and help during the difficult period of the last few months.

Table of Contents
Abstract 1
1.1 Introduction and Background: 8
1.2 Problem Statement 9
1.3 Objective of the project 10
1.4 Advantages 10
1.5 Scope of Project 11
1.6 Literature review: 12
Chapter 2 14
2.1 Hardware Description 14
2.2 Webcam A4tech pk-930H 14
2.2.1 Features 14
2.3 Keton Technologies Stereo USB Cams 14
2.3.1 Features 14
2.4 Electric Car 17
2.5 MATLAB 17
2.5.1 Necessity of MATLAB: 18
3.1 Programming in MATLAB Software: 20
3.1.1 Download MATLAB Software 20
3.1.2 The Initial Setup 21
3.1.3 Configuring Webcams with MATLAB: 22
3.2 Calibrating Cameras: 24
Lane Departure Warning System 26
4.1 Steps involved in Lane Departure Warning System 26
4.2 Hough Transform 27
5.1 Faster Region-based Convolution Neural Network (Faster RCNN) for Car Detection 29
5.2 Components of RCNN 30
5.2.1 Convolutional layer 30
5.2.2 Pooling 30
5.2.3 Fully connected layer 31
5.2.4 Weights 31
5.3 Training and Testing of data 31
Chapter 6
6.1 Overview 31
6.2 Computer stereo vision 31
6.3 Stereo Vision for depth estimation 31
6.4 Distance measurement using Stereo Vision 31
Chapter 7
Chapter 8
8.1 Conclusion 31
8.2 Future Recommendation 31

List of Figures

U.S. ADAS market

Webcam A4tech pk-930H

Keton Technologies Stereo Usb Cam

Electric Car


Block Diagram
3.1 MATLAB 24
3.2 new script in MATLAB 25
3.3 Select ADD-ONs in MATLAB 26
3.4 Add-ons explorer 27
3.5 Types of radial distortion 28
3.6 Tangential Distortion 29
3.7 camera calibrator 29
3.8 calibration results 30
4.1 Lane departure warning 31
4.2 Hough Transform 33
5.1 Car detection 34
5.2 Convolution layers architecture
6.1 Traffic sign detection


1.1 Introduction and Background:

Road accidents are increasing day by day, especially in developing countries. Developing countries bear the greatest burden of road-accident deaths: approximately 85 percent of annual deaths and 90 percent of the disability-adjusted life years lost due to road accidents.

To overcome this problem, a new technology, the Advanced Driver Assistance System (ADAS), has emerged to improve the safety of cars. Advanced driver assistance systems automate and enhance vehicle systems to provide safer and better driving, reducing road accidents and human error. ADAS is the focus of increasing research and development in the automotive industry, and many ADAS products have been developed based on a variety of different technologies. One of these is the stereo camera system, which uses two cameras to detect objects and measure their distance. Such systems help avoid collisions and accidents by alerting the driver and, in some cases, taking over control of the vehicle. Adaptive features include traffic sign detection, automatic braking to avoid collisions, lane detection to keep the driver in the correct lane, and measuring the distance to the next car so the driver can maintain a safe gap.

1.2 Problem Statement

Analysis conducted in this field has concluded that the majority of crashes involving young people stem from errors of attention, speed selection, visual search, etc. Many studies suggest that these crashes are characterized by alcohol, fatigue, extreme driving speeds and distraction. The inability of individuals to handle vehicles in demanding situations results in road accidents, a collapse of strategic-level control, and undermined road safety. The human cost of road accidents is increasing. A study by the World Health Organization shows that worldwide an estimated 1.2 million people are killed in road accidents every year and almost 50 million are injured, making road safety one of the largest public health issues. In 2002, 49,718 people were killed in Europe due to road accidents and almost 1,700,000 were injured; between 1992 and 2002, approximately 619,885 people died in car accidents.
Due to the rapid increase in risk among inexperienced drivers, safety has become an important asset for automotive industries, and numerous surveys and tests have been conducted in search of a solution to this problem.
The advanced driver assistance system (ADAS) is the solution to this problem. These systems are designed to improve road safety. ADAS assists drivers by giving warning messages to decrease risk and relieves the driver of some manual control of the vehicle. ADAS can replace some human driver actions and decisions with machine tasks, reducing the human errors that cause road accidents while achieving steadier, more balanced vehicle control with related capacity and environmental benefits.

1.3 Objective of the project

Our objective is:
“To design an advanced driver assistance system equipped with stereo cameras featuring lane detection, traffic sign detection, car detection and distance measurement from the next car, and to implement it on a small toy car.”

Our Advanced Driver Assistance System has the following features:
Lane detection (to keep the vehicle in the correct lane and alert the driver with a warning if the lane changes).
Traffic sign detection (to detect traffic signs on the road and notify the driver to follow the traffic rules).
Distance measurement from the next car (to help the driver maintain a safe distance and avoid collisions).
Car detection (to detect nearby cars and warn the driver to take action).
1.4 Advantages

Improves road safety.
Reduces the severity of accidents, and hence vehicle damage and repair costs.
Reduces the risk of injury.
Some insurers offer discounts for vehicles fitted with ADAS.
Fewer claims can help improve your insurance premium.
Better efficiency and accuracy in vehicle performance.
Reduction of labor and loss of valuable life in industry.

1.5 Scope of Project

Technology has advanced so much in the last decade or two that it has made life more efficient and comfortable. Advanced driver assistance systems (ADAS) are one such technology, increasing drivers' awareness of how tasks in their cars can be automated.
Such automated systems are strongly promoted by major companies in the automotive and information technology (IT) industries, and represent a major step toward semi- or wholly automated cars.
ADAS market is projected to reach USD 42.40 billion by 2021.
North America is estimated to be the largest market for ADAS during the forecast period, credited to the US being the second-largest vehicle-manufacturing country. Moreover, the technology adoption rate in the region is relatively high, and North America has vehicle safety regulations that are expected to boost the market for ADAS technologies.
The rising demand for ADAS is expected to shape the industry over the next seven years. Demand for ADAS features such as automatic braking and adaptive cruise control is expected to rise exponentially due to increasing government regulations aimed at reducing road accidents.
The ADAS industry is highly competitive and concentrated with top five companies accounting for 43% market share in 2016.

Fig 1.1: U.S. ADAS market

1.6 Literature review:

ADAS has been under development for nearly 60 years. In 1961 the theoretical workings of adaptive cruise control and lane detection were described, and development of these systems continued from the 1960s into the 1990s, especially on “collision avoidance systems”. Automotive suppliers such as Delphi Automotive are recorded as having been developing adaptive cruise control systems since the late 1980s, producing a product with functionality described as “Throttle controlled with limited braking. No stopped object identification. No warning to driver”.
In 1996, ADAS with stereo cameras was developed. Stereo cameras offer benefits such as a wider field of view, higher resolution and reliability, and reduced size, cost and power consumption. Stereo camera resolutions range from 320 pixels up to 1080 pixels, with frame rates from 0.5 fps to 84 fps, and capabilities such as vehicle detection, lane departure warning, traffic sign detection and forward collision warning.
In 1997, SAE published performance guidelines for forward collision warning systems. These guidelines cover response time, measurement range, field of view and the types of warnings the systems provide to the driver, and they were useful for evaluating such systems.
In the late 1990s, ADAS moved from research toward commercialization, and standards and guidelines began to be developed to guide the development of these systems.
In the early 2000s automakers described how adaptive cruise control could be extended into more sophisticated safety systems for autonomous driving. Since then ADAS has gained recognition, and industry has focused on curve assist, lane keep assist, traffic sign detection, automatic braking and blind spot detection as the main building blocks of the autonomous car.
During 2000-2010 the ADAS market share widened as costs decreased, enabling automakers to install ADAS on more cars. This growing market was also assisted in the US by two new regulations.
All cars sold in the US must be equipped with cameras by 2018 and with automatic braking systems by 2022; similar rules affected the European Union from 2015. Recent studies suggest that growing awareness is leading consumers to choose vehicles equipped with ADAS. The ADAS market is expected to grow from 16% to 29% by 2020.

Chapter 2

2.1 Hardware Description:
2.2 Webcam A4tech pk-930H

The A4tech PK-930 uses a sensitive sensor which ensures sharp video capture in low-light conditions, and it provides up to 30 frames per second.
2.2.1 Features:
It captures realistic pictures and videos, and it can be connected without installing drivers, so it is simple to attach the camera via USB and start working.
It has an anti-refractive coating which prevents glare and enhances brightness and image saturation.
It has a built-in microphone which provides clear, high-quality sound.
Interface: USB 2.0
It can capture still images up to 16 megapixels (4608x3456).
It has a 1080p Full HD sensor.
It is compatible with Windows Vista, Windows XP, Windows 7 and later versions.

Fig 2.1: Webcam A4tech pk-930H

2.3 Keton Technologies Stereo USB Cam:

A stereo camera is a camera with two or more lenses and a separate image sensor for each lens. This enables the camera to capture three-dimensional images by simulating human binocular vision. Stereo cameras are used for stereo vision and for taking 3D pictures for range imaging. The distance between the lenses of a stereo camera is roughly equal to the distance between one's eyes; this is called the intra-ocular distance and is approximately 6.35 cm. In the 1950s stereo cameras gained popularity with the Stereo Realist and other similar cameras that employed 135 film for making stereo slides.
In cars, stereo cameras are mostly used for detecting lane width and measuring the distance to the next car or to objects on the road.
Not all multi-lens cameras are used for stereoscopic pictures: a twin-lens reflex camera, for example, uses one lens to focus the image on a viewing screen and the other to capture the image on film.
Stereo cameras are preferred because they can see more data and thus produce less occlusion. With a single camera, occlusion is caused by surface geometry blocking the projected data, so the camera's view yields no data in those regions; stereo cameras, by contrast, effectively use multiple views and triangulation between the left and right cameras to capture data. The calibration approach also differs. A single camera is calibrated so that a triangulation relationship is established between the camera and the projector, and if the projected pattern moves due to temperature changes (e.g. when the light source heats up), the 3D data becomes incorrect. Stereo cameras avoid this because they are calibrated between the camera pair, not with respect to the projector, so temperature changes do not distort the data.
2.3.1 Features:

It has a 30 mm baseline.
Ultra-compact design.
It has global shutter sensors.
Lenses have a 165° wide angle.
It is factory calibrated.
It has a standard C API with bindings.
Windows and Linux control.
Low-level access and control.

Fig 2.2: Keton Technologies Stereo USB Cam

2.4 Electric Car:

The electric car is a toy car that can be controlled by remote as well as manually. The webcams and stereo cameras are fitted on both the front and rear of the car in order to detect cars and measure distance from them. The motors of the car can be controlled using a motor driver IC module together with MATLAB, effectively turning it into a self-driving car. The car is used as the prototype platform for our project.

Fig 2.3: Electric Car


2.5 MATLAB:

MATLAB stands for Matrix Laboratory. The software is built around vectors and matrices and allows working easily with whole matrices instead of one number at a time. It is sometimes called a fancy calculator because it is a basic tool for performing calculations on vectors and matrices. MATLAB provides an environment for high-performance visualization and numeric computation, integrating matrix computation, numerical analysis, signal processing and graphics so that problems and solutions are represented mathematically without much programming. It is also a great tool for solving differential and algebraic equations and for numerical integration.
MATLAB also provides powerful 2D and 3D graphic tools. It is a programming language used for writing mathematical programs.
MATLAB can be used as a video converter and to edit pictures for image analysis, but it is not well suited to long video files: conversion takes time and available memory is limited.
MATLAB also provides toolboxes which extend its functionality, including:
Neural network Toolbox
Signal processing Toolbox
Statistics Toolbox
Financial Toolbox
Database Toolbox
Wavelet Toolbox
Bioinformatics Toolbox

2.5.1 Necessity of MATLAB:

As noted above, MATLAB is a versatile computing platform which can be used for many purposes. MATLAB provides support for various hardware, and machine vision and image processing are much easier in MATLAB than in other environments such as Python or Visual Studio. Furthermore, we can obtain much greater accuracy in MATLAB.

2.6 Methodology of Research:
In order to meet our objectives and aim, we devised the following methodology for research.
Collecting data relating to machine vision
Creating a problem statement
Literature review
Defining goals
Defining aims
Devising expected results
Fig 2.4: Block diagram


The system is divided into several steps that work collectively to build an Advanced Driver Assistance System (ADAS) controlled by a human operator from outside, using interfacing of hardware and software.
The first part is the two webcams and two stereo cameras used for image processing to achieve our goals.
The second part is image processing in MATLAB.
The third step is releasing output video and alarms to maintain driver’s attention on the road.

3.1 Programming in MATLAB Software:

MATLAB is a commercial prototyping platform based on flexible, easy-to-use hardware support and software. It is intended for engineers, students, hobbyists, and anyone interested in creating interactive programs.
3.1.1 Download MATLAB Software:

You’ll need to download the MATLAB software package for your operating system from the MATLAB download page.
When you have downloaded and opened the application you should see something like this:

Fig 3.1: MATLAB

3.1.2 The Initial Setup:

Start MATLAB using the matlab.exe file after installation. A window opens as shown in the figure above. Write a command at the command line and press Enter to execute it.
For example, when you write the command
>> webcamlist
'Dell integrated cam'
it shows the webcams connected to your computer at that time.
To save code, open the “New Script” option available in the top-left corner of the MATLAB window.
It will open a window as shown in the figure below

Fig 3.2: new script in MATLAB
Here a new script is opened, where we can write and store the commands of our code. To save the code simply use Ctrl+S; this saves an M-file of the code in the given MATLAB directory.
An M-file, or script file, is a simple text file where you can place MATLAB commands. When the file is run, MATLAB reads the commands and executes them exactly as if you had typed each command sequentially at the MATLAB prompt.
That is all one needs to get started with MATLAB.

3.1.3 Configuring Webcams with MATLAB:

The first step in this project is to configure the webcams with MATLAB. Connect the camera to your PC with a USB cable, then go to the “Add-Ons” option in the upper part of the MATLAB window.
Fig 3.3: Select ADD-ONs in MATLAB
and select the option “Get Hardware Support Packages”.
A window will pop up as shown below,

Fig 3.4: Add-ons explorer
In the search box type “USB Webcams”; it will offer a support package for USB webcams. Install it on your computer.
Now at the MATLAB command line write
>> webcamlist          % shows the cameras connected to your PC
>> camera = webcam(1)  % initializes the first camera shown in the list
>> preview(camera)     % the camera starts giving video output
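Once the preview works, a single frame can also be grabbed for processing. A minimal sketch (the variable names are our own):

```matlab
cam = webcam(1);        % connect to the first camera in the list
img = snapshot(cam);    % acquire one RGB frame as an H-by-W-by-3 array
imshow(img);            % display the captured frame
clear cam;              % release the camera when finished
```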

3.2 Calibrating Cameras:

Now the first thing one needs to do is check the cameras for calibration errors. Camera calibration error can be described as the discrepancy between the scene as it exists in the real world and the image perceived by the camera.
Calibration errors arise due to distortion of the lens in a camera.
Lens distortion can be of two types

Radial distortion
Tangential distortion
Radial distortion is most visible when taking pictures of vertical structures with straight lines, which then appear curved. This kind of distortion appears most visibly when the widest angle (shortest focal length) is selected, with either a fixed or a zoom lens.

Fig 3.5: Types of radial distortion
When the lens is not parallel to the imaging plane (the CCD, etc.), tangential distortion is produced, as shown in the figure below:

Fig 3.6: Tangential distortion
Camera calibration can be defined as “the process of estimating parameters of the camera using images of a special calibration pattern”. The parameters include the camera intrinsics, distortion coefficients, and camera extrinsics.
To calibrate your camera, one can either use the MATLAB Camera Calibrator app available in the Apps section of MATLAB, or write code for the purpose.
The most commonly used calibration pattern is the checkerboard. Take images of a checkerboard from different viewing angles. To check your camera for calibration errors, go to the Camera Calibrator app and click “Add Images”; a window will pop up to browse the directory of stored images and select them, as shown in the figures below.

Figure 3.7: camera calibrator
Add at least 10 to 20 images to get the best results, then click “Calibrate” to get results.

Figure 3.8: calibration results
The second method is programming in MATLAB, which can also be done easily. To read images use the “imread” command.
For example;
>> I = imread('image1.jpg');  % reads the image1.jpg file in the given directory
To add multiple images, use a for loop over the given number of images and the “fullfile” command to give the directory of the images.
To detect the checkerboard pattern use “detectCheckerboardPoints(files)”.
To check for calibration errors, call “estimateCameraParameters”.
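Putting these commands together, a minimal calibration script might look like the sketch below; the folder name, image count and square size are illustrative and must be adjusted to your own checkerboard images.

```matlab
numImages = 15;
files = cell(1, numImages);
for i = 1:numImages
    files{i} = fullfile('calib_images', sprintf('image%d.jpg', i));
end

% Detect the checkerboard corners in every image.
[imagePoints, boardSize] = detectCheckerboardPoints(files);

% World coordinates of the corners; squareSize is the printed square edge in mm.
squareSize = 25;
worldPoints = generateCheckerboardPoints(boardSize, squareSize);

% Estimate intrinsics, extrinsics and distortion coefficients,
% then visualise the reprojection (calibration) errors.
I = imread(files{1});
params = estimateCameraParameters(imagePoints, worldPoints, ...
    'ImageSize', [size(I,1) size(I,2)]);
showReprojectionErrors(params);
```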
The first step in the project completes here. That is all one needs to get introduced to the image processing in MATLAB.
Lane Departure Warning System

A lane departure warning system (LDWS) is one of the emerging systems for reducing traffic accidents. It is designed to warn the driver when the vehicle begins to move out of its lane boundaries on the road. Different techniques for implementing LDWS have been surveyed, and implementation on different platforms has been studied. The system detects road lane markers in a video stream and highlights the lane in which the vehicle is driven; this information can be used to detect an unintended departure from the lane and issue a warning.

Fig 4.1: Lane departure warning
4.1 Steps involved in Lane Departure Warning System:

Detect lane markers in the current video frame.
Match the current lane markers with those detected in the previous video frame.
Find the left and right lane markers.
Issue a warning message if the vehicle moves across either of the lane markers.
A loop is applied over all the frames of the video. For each frame, the lane markers, which are lines, are detected by the Hough transform. Each frame's result is compared with the result of the previous frame; this comparison gives the position of the vehicle in the lane, i.e. whether it is in lane or departing to the right or left, and a warning is issued if a lane departure occurs on either side. After the vehicle departs to the left or right, a third lane is added so that the new lane becomes the next reference lane; if a departure then occurs from this lane, the warning system acts again.
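The frame loop described above can be sketched as follows. `detectLaneMarkers` and `laneCenter` are hypothetical helpers standing in for the Hough-based detection of Section 4.2, and the 50-pixel threshold is illustrative.

```matlab
v = VideoReader('road.mp4');     % input video (file name is an example)
prevCenter = [];
while hasFrame(v)
    frame = readFrame(v);
    [leftLine, rightLine] = detectLaneMarkers(frame);  % hypothetical helper
    center = laneCenter(leftLine, rightLine);          % lane centre in pixels
    % A large shift of the lane centre between consecutive frames
    % indicates that the vehicle is drifting across a marker.
    if ~isempty(prevCenter) && abs(center - prevCenter) > 50
        disp('Warning: lane departure detected');
    end
    prevCenter = center;
end
```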

4.2 Hough Transform

The Hough transform is a feature extraction technique used in image analysis, computer vision, and digital image processing. The purpose of the technique is to find imperfect instances of objects within a certain class of shapes by a voting procedure. This voting procedure is carried out in a parameter space, from which object candidates are obtained as local maxima in a so-called accumulator space that is explicitly constructed by the algorithm for computing the Hough transform.
In automated analysis of digital images, a sub problem often arises of detecting simple shapes, such as straight lines, circles or ellipses. In many cases an edge detector can be used as a pre-processing stage to obtain image points or image pixels that are on the desired curve in the image space. Due to imperfections in either the image data or the edge detector, however, there may be missing points or pixels on the desired curves as well as spatial deviations between the ideal line/circle/ellipse and the noisy edge points as they are obtained from the edge detector. For these reasons, it is often non-trivial to group the extracted edge features to an appropriate set of lines, circles or ellipses. The purpose of the Hough transform is to address this problem by making it possible to perform groupings of edge points into object candidates by performing an explicit voting procedure over a set of parameterized image objects.
The simplest case of the Hough transform is detecting straight lines. In general, the straight line y = mx + b can be represented as a point in parameter space. However, vertical lines pose a problem: they would give rise to unbounded values of the slope parameter m. Thus, for computational reasons, Duda and Hart proposed the use of the Hesse normal form
r = x cos θ + y sin θ
where r is the distance from the origin to the closest point on the straight line, and θ (theta) is the angle between the x-axis and the line connecting the origin with that closest point.
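In MATLAB, the accumulator voting over (r, θ) described above is available through built-in functions; a minimal sketch for a single frame (`frame` is assumed to be one video frame):

```matlab
gray  = rgb2gray(frame);              % convert the frame to grayscale
edges = edge(gray, 'canny');          % edge detection as pre-processing
[H, theta, rho] = hough(edges);       % accumulator over (rho, theta) space
peaks = houghpeaks(H, 2);             % local maxima: the two strongest lines
lines = houghlines(edges, theta, rho, peaks);  % back to image-space segments
```

The two strongest peaks are taken here on the assumption that the left and right lane markers dominate the edge map; in practice the edge image is usually restricted to a region of interest below the horizon first.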


Figure 4.2: Hough transform


In this system, cars are detected using stereo cameras, which provide real-time input and are also used for measuring the distance to the next car. We performed car detection in MATLAB using convolutional neural networks (CNNs).
Fig 5.1: Car detection
A convolutional neural network (CNN or ConvNet) is one of the most popular algorithms for deep learning, a type of machine learning in which a model learns to perform classification tasks directly from images, video, text, or sound.
CNNs are particularly useful for finding patterns in images to recognize objects, faces, and scenes. They learn directly from image data, using patterns to classify images and eliminating the need for manual feature extraction.
5.1 Faster Region-based Convolution Neural Network (Faster RCNN) for Car Detection:

Recent advances in object detection are driven by the success of region proposal methods and region-based convolutional neural networks (R-CNNs). Although region-based CNNs were computationally expensive as originally developed, their cost has been drastically reduced thanks to sharing convolutions across proposals.
Object detection is a computer technology related to computer vision and image processing that deals with detecting instances of semantic objects of a certain class (such as humans, buildings, or cars) in digital images and videos. Well-researched domains of object detection include face detection and pedestrian detection. Object detection has applications in many areas of computer vision, including image retrieval and video surveillance.
5.2 Components of RCNN
5.2.1 Convolutional layer

Convolutional layers apply a convolution operation to the input, passing the result to the next layer. The convolution emulates the response of an individual neuron to visual stimuli, and each convolutional neuron processes data only for its receptive field. Although fully connected feedforward neural networks can be used to learn features as well as classify data, it is not practical to apply this architecture to images: a very high number of neurons would be necessary, even in a shallow (the opposite of deep) architecture, due to the very large input sizes associated with images, where each pixel is a relevant variable. For instance, a fully connected layer for a (small) image of size 100 x 100 has 10,000 weights for each neuron in the second layer. The convolution operation brings a solution to this problem, as it reduces the number of free parameters and allows the network to be deeper with fewer parameters. For instance, regardless of image size, tiling regions of size 5 x 5, each with the same shared weights, requires only 25 learnable parameters. In this way, it also mitigates the vanishing or exploding gradients problem that arises when training traditional multi-layer neural networks with many layers by backpropagation.
5.2.2 Pooling
Convolutional networks may include local or global pooling layers, which combine the outputs of neuron clusters at one layer into a single neuron in the next layer. For example, max pooling uses the maximum value from each cluster of neurons at the prior layer, while average pooling uses the average value.
5.2.3 Fully connected layer
Fully connected layers connect every neuron in one layer to every neuron in another layer. This is in principle the same as the traditional multi-layer perceptron (MLP) neural network.
5.2.4 Weights
CNNs share weights in convolutional layers, which means that the same filter (weight bank) is used for each receptive field in the layer; this reduces the memory footprint and improves performance.

Fig 5.2: Convolution layers architecture

5.3 Training and Testing of data

For many applications, little training data is available, yet convolutional neural networks usually require a large amount of training data to avoid overfitting. A common technique is to train the network on a larger dataset from a related domain; once the network parameters have converged, an additional training step fine-tunes the weights using the in-domain data. This allows convolutional networks to be applied successfully to problems with small training sets. Training requires a lot of time, but once the network is trained, testing new data takes very little time.
To train the network with vehicle data, we supplied 267 pictures of vehicles labelled as cars; after a few minutes the data was trained. We then supplied random pictures of cars and other objects to test the program, and once we saw that it worked correctly on these random pictures, we applied it to real-time Faster R-CNN detection.
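In recent MATLAB releases this training step can be sketched as below. `carData` stands for a table with an `imageFilename` column and a `car` column of bounding boxes (e.g. exported from the Image Labeler app for our 267 pictures); the backbone network and training options are illustrative, not the exact settings we used.

```matlab
options = trainingOptions('sgdm', ...
    'InitialLearnRate', 1e-3, ...
    'MaxEpochs', 10);

% Train a Faster R-CNN detector on the labelled car images.
detector = trainFasterRCNNObjectDetector(carData, 'resnet50', options);

% Test the trained detector on a new picture.
I = imread('test_car.jpg');                     % example test image
[bboxes, scores] = detect(detector, I);
I = insertObjectAnnotation(I, 'rectangle', bboxes, scores);
imshow(I);
```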

Chapter 6
6.1 Overview

Stereovision is a form of vision comprising a pair of images of the same set of objects. The best example of stereovision is our own vision, i.e. the vision of humans and animals. Stereo vision can be used for depth estimation of an image, which helps in measuring the distance of a detected object from the viewpoint.
6.2 Computer Stereo vision

Computer stereo vision is the extraction of 3-D information from digital images, such as those obtained by a stereo camera. By comparing information about a scene from two vantage points, 3-D information can be extracted by examining the relative positions of objects in the two views.
6.3 Stereo vision for depth estimation

Stereo vision is the process of extracting 3-D information from multiple 2-D views of a scene. It is used in applications such as advanced driver assistance systems (ADAS) and robot navigation, where it estimates the actual distance, or range, of objects of interest from the camera. The 3-D information can be obtained from a pair of images, also known as a stereo pair, by estimating the relative depth of points in the scene. These estimates are represented in a stereo disparity map, which is constructed by matching corresponding points in the stereo pair.
As mentioned above, in order to find the depth of the scene we need to rectify the image pair. Image rectification is the process of transforming the two images so that a point in one image and its corresponding point in the other image lie in the same row.
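Once the pair is rectified, depth follows directly from the disparity (the column shift of a point between the two images): Z = f B / d, where f is the focal length in pixels and B is the camera baseline. A numerical sketch, using hypothetical calibration values rather than those of any real rig:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point on rectified images: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical calibration: f = 800 px, baseline B = 0.12 m.
print(depth_from_disparity(800, 0.12, 16))  # 6.0 m away
print(depth_from_disparity(800, 0.12, 32))  # 3.0 m: nearer objects shift more
```

The inverse relationship is worth noting: doubling the disparity halves the estimated depth, so nearby vehicles produce large, easily measured shifts.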
6.4 Distance measurement using stereo vision

It is a very simple process in MATLAB. Follow these steps to measure distance using stereo vision:
Initialize the stereo camera.
Take an image pair using the stereo camera and store it.
Apply object detection techniques to it.
Rectify the image pair.
Undistort the image pair.
Use MATLAB's built-in command "triangulate" on the image pair to find the distance to the detected objects.
Display the output.
Compare the results with the actual distance to the detected object.
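The triangulation step in the procedure above can be sketched outside MATLAB as well. The following Python/NumPy sketch implements standard linear (DLT) triangulation of one matched point pair; the focal length, principal point, and baseline are made-up values for a hypothetical rectified rig, not our actual calibration:

```python
import numpy as np

def triangulate(p_left, p_right, P_left, P_right):
    """Linear (DLT) triangulation of one matched point pair.
    p_*: pixel coordinates (x, y); P_*: 3 x 4 camera projection matrices."""
    A = np.vstack([
        p_left[0] * P_left[2] - P_left[0],
        p_left[1] * P_left[2] - P_left[1],
        p_right[0] * P_right[2] - P_right[0],
        p_right[1] * P_right[2] - P_right[1],
    ])
    _, _, Vt = np.linalg.svd(A)       # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]               # 3-D point in the left camera frame

# Hypothetical rectified rig: f = 800 px, principal point (320, 240),
# baseline 0.12 m (right camera shifted along x).
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], float)
P_L = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_R = K @ np.hstack([np.eye(3), np.array([[-0.12], [0], [0]])])

# A point 6 m ahead projects with a 16 px disparity (800 * 0.12 / 6).
X = triangulate((320, 240), (304, 240), P_L, P_R)
print(np.linalg.norm(X))   # distance to the object, about 6.0 m
```

MATLAB's triangulate performs this computation internally from the stereo calibration parameters; the sketch only illustrates the geometry behind the command.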

Traffic sign detection is a system that alerts the driver to follow traffic signs. The vehicle detects traffic signs and then displays a warning. The system uses cameras for real-time interfacing and detects the traffic signs in MATLAB.

Fig 6.1: Traffic sign detection

Traffic sign detection has been an active area of research in the computer vision community for many years. Traffic sign detection can generally be treated as a pattern classification problem. As in visual object classification, feature representation has been at the center of computer vision research. Owing to the powerful learning capabilities of convolutional neural networks, CNNs are the preferred approach for traffic sign recognition.

Traffic signs can be detected using convolutional neural networks as well as object detection techniques. The convolutional neural network method is described in chapter 5. Here, an object detection technique is used for traffic sign detection and recognition.
The first step in traffic sign detection is identifying traffic signs in a video frame using image processing. The second step is recognizing the detected signs.
This system has one disadvantage: traffic signs are only valid for limited distances, and there are no end signs that the system could recognize; e.g. a reduced speed limit is posted before a crossroads. In most cases the system cannot recognize the change, so it will show the wrong sign.


This thesis introduces an advanced driver assistance system which has been successfully developed and validated. Data was collected and analyzed to characterize the effect of the active system. We used MATLAB to build a fully automated analysis tool. The main feature of the software is that it uses real-time interfacing to detect lanes, cars, and traffic signs, and after detection it displays a warning. Secondly, stereo cameras are used to find the distance to the next car; if the distance is very small, the system displays a warning to alert the driver.

The main result of the thesis is therefore that we have developed an efficient methodology and efficient tools for an advanced driver assistance system. This methodology provides quantitative measures of ADAS performance with greater confidence and accuracy than existing methods.
This project fits well within the field of safety systems. Its results are promising for the field of ADAS, which will grow enormously in the foreseeable future.

AUTOMATIC DRIVING: This prototype can be used to develop self-driving cars in the future.
SURVEYING: A GPS system can be incorporated, allowing use in surveying operations.
NEW EXPLORATIONS: Its object detection features can be used in new explorations.