
Researchers Using AI to Reduce Accidents in Self-Driving Cars Receive Award

Youshan Zhang, assistant professor of artificial intelligence and computer science, and Lakshmikar Polamreddy, a master's candidate in artificial intelligence, developed a convolutional neural network model for self-driving cars. Photo by Denton Field

By Dave DeFusco

Katz School researchers received the Emerging Research Award at the Future Technologies Conference for their work on a machine learning algorithm that could reduce the number of traffic accidents involving self-driving cars.

The conference is the world's leading forum for reporting research breakthroughs in AI, computer vision, data science, computing and related fields, and attracts top research think tanks, industry technology developers and academic researchers.

In their paper, Youshan Zhang, assistant professor of artificial intelligence and computer science, and Lakshmikar Polamreddy, a master's candidate in artificial intelligence, describe their convolutional neural network (CNN) model for self-driving cars, which aimed to address the limitations of previous work in this area.

The LaksNet model uses images and steering angles collected from the Udacity simulator, an open-source simulator for training and testing self-driving deep-learning algorithms. The simulator includes a virtual representation of a car and its surroundings, allowing users to implement and test algorithms for tasks like perception, decision-making and control, and providing a safe environment for learning and experimenting with self-driving technologies.
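For readers unfamiliar with this kind of data, the sketch below shows how image-and-steering-angle pairs might be read from a simulator driving log. It is a minimal illustration only; the file name, the absence of a header row and the column positions are assumptions, not details from the paper.

```python
# Minimal sketch of loading (image, steering angle) pairs from a driving log.
# Assumes a headerless CSV whose first column is the center-camera image path
# and whose fourth column is the steering angle; these are assumptions.
import csv
from pathlib import Path

def load_driving_log(log_path):
    """Return a list of (image_path, steering_angle) pairs."""
    samples = []
    with open(log_path, newline="") as f:
        for row in csv.reader(f):
            image_path, steering = Path(row[0]), float(row[3])
            samples.append((image_path, steering))
    return samples

samples = load_driving_log("data/driving_log.csv")
print(f"Loaded {len(samples)} image/steering pairs")
```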

"Our approach involved building and training end-to-end machine-learning models using extensive sets of data, typically in the form of images collected from cameras," said Zhang. "These models were trained to drive vehicles in a way that minimized accidents."

CNNs, a type of artificial neural network designed specifically for image recognition and processing, were inspired by the visual processing of the human brain and are characterized by their ability to automatically learn spatial hierarchies of features from images. They have become the backbone of many computer-vision applications, including autonomous vehicles, facial recognition, image classification and medical image analysis.
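The sketch below illustrates the general idea: a small stack of convolutional layers that maps a camera frame to a single steering-angle value. It is not the authors' LaksNet architecture; the layer sizes, input resolution and the PyTorch framework are assumptions chosen only to make the pattern concrete.

```python
# Illustrative CNN for steering-angle regression (not the authors' LaksNet).
import torch
import torch.nn as nn

class SteeringCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
        )
        self.regressor = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),   # collapse spatial dimensions
            nn.Flatten(),
            nn.Linear(48, 1),          # single output: the steering angle
        )

    def forward(self, x):
        return self.regressor(self.features(x))

model = SteeringCNN()
dummy = torch.randn(1, 3, 160, 320)    # one 160x320 RGB camera frame
print(model(dummy).shape)              # torch.Size([1, 1])
```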

Zhang and Polamreddy's CNN model, called LaksNet, used a simulated environment provided by Udacity's self-driving car nanodegree program to generate training data and assess model performance. The approach involved training the CNN model on 130,000 images and their associated steering angles generated in the Udacity simulator. After training for the required number of epochs (an epoch is one complete pass through a training dataset), the model was used to predict steering-angle values, which were then passed into the simulator.
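The training-and-prediction flow described here can be sketched as follows. This is a generic illustration, not the authors' code: the mean-squared-error loss on steering angles, the optimizer, the learning rate and the batch size are all placeholder assumptions.

```python
# Generic sketch of fitting a model to (image, steering angle) pairs and then
# predicting an angle per frame; hyperparameters below are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train(model, dataset, epochs=10, lr=1e-3):
    loader = DataLoader(dataset, batch_size=64, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for epoch in range(epochs):               # one epoch = one full pass over the data
        for images, angles in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images).squeeze(1), angles)
            loss.backward()
            optimizer.step()
    return model

@torch.no_grad()
def predict_steering(model, frame):
    """Predict a steering angle for a single camera frame (sent on to the simulator)."""
    return model(frame.unsqueeze(0)).item()
```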

"Too few epochs may result in underfitting, where the model hasn't learned the underlying patterns in the data," said Polamreddy. "Too many epochs, on the other hand, can lead to overfitting, where the model becomes too specific to the training data and performs poorly on new, unseen data."
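One common way to balance these two failure modes, sketched below, is early stopping: training ends when the loss on held-out validation data stops improving. This is a standard technique offered for context, not a method attributed to the paper, and the patience value is illustrative.

```python
# Generic early-stopping loop: halt training once validation loss stalls.
def train_with_early_stopping(model, train_one_epoch, validation_loss, max_epochs=100, patience=5):
    best_loss, epochs_without_improvement = float("inf"), 0
    for epoch in range(max_epochs):
        train_one_epoch(model)                # one pass over the training data
        val_loss = validation_loss(model)     # loss on held-out data
        if val_loss < best_loss:
            best_loss, epochs_without_improvement = val_loss, 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break                         # stop before the model overfits
    return model
```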

In developing LaksNet, Zhang and Polamreddy first evaluated a model developed by NVIDIA that predicts steering angles directly from the raw pixels of a camera feed. The researchers then explored whether pre-trained ImageNet models, which are often used for computer vision tasks such as object recognition and detection in self-driving cars, could outperform the NVIDIA model.
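Reusing a pre-trained ImageNet model for steering prediction typically means swapping its classification head for a single-value regression output, as in the sketch below. The article does not name the specific pre-trained models tested, so the choice of ResNet-18 and the torchvision API are assumptions for illustration.

```python
# Sketch of adapting a pre-trained ImageNet backbone to steering regression.
# ResNet-18 is an illustrative choice, not one named in the article.
import torch.nn as nn
from torchvision import models

def build_pretrained_regressor():
    backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    # Replace the 1000-class ImageNet head with a single steering-angle output.
    backbone.fc = nn.Linear(backbone.fc.in_features, 1)
    return backbone
```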

During the testing stage, they monitored the steering angles in the terminal and observed the car's movements in the simulator window in real time. The simulator offered two different tracks for training and testing, and images were captured by three cameras positioned at different angles, recording the car's view of the track, the car itself and the environment outside the track.

Since pre-trained models did not meet expectations set by the NVIDIA model, the researchers embarked on building their own CNN models tailored to the task. One of the custom CNN models outperformed the pre-trained models and the NVIDIA model by allowing the car to drive on the track autonomously for 150 seconds.

"We developed a novel model with two main objectives: achieving state-of-the-art performance and using fewer parameters for training," said Zhang. "Our model is more efficient and effective for reducing accidents in autonomous driving."
