
Soon, new system to give driverless cars human-like reasoning

The autonomous control system learns the steering patterns of human drivers as they navigate roads in a small area, using only data from video camera feeds.

PTI | Updated on: 10 Jun 2019, 07:28:44 AM
Human drivers match what they see around them to what they see on their GPS devices to determine their current location and destination. (File Photo)

Boston:

With the aim of incorporating human-like reasoning into autonomous vehicles, researchers at MIT have developed a system that uses simple maps and visual data to enable driverless cars to navigate routes in new, complex environments. Like a human driver, the system can detect mismatches between its map and the features of the road it observes, and work out whether its position estimate, its sensors, or the map itself is at fault, so that it can correct the car's course. The autonomous control system "learns" the steering patterns of human drivers as they navigate roads in a small area, using only data from video camera feeds and a simple global positioning system (GPS)-like map, researchers said.
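The article does not describe the mismatch check in detail. Purely as an illustration, one way to frame the idea is to compare the steering suggested by the cameras alone with the steering implied by the map at the car's estimated position, and to flag a disagreement when the two diverge. The Python sketch below is hypothetical: the function names, the Gaussian form of the steering estimates, and the threshold are assumptions, not details taken from the MIT system.

```python
import numpy as np

def gaussian_kl(mu_p, var_p, mu_q, var_q):
    """KL divergence between two 1-D Gaussian steering estimates."""
    return 0.5 * (np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

def check_map_agreement(camera_steer, map_steer, threshold=0.5):
    """Flag a mismatch when the camera-only steering estimate and the
    map-implied steering estimate disagree beyond a chosen threshold.

    camera_steer, map_steer: (mean, variance) tuples in radians.
    The threshold value is illustrative only.
    """
    divergence = gaussian_kl(*camera_steer, *map_steer)
    return divergence > threshold, divergence

# Example: the cameras suggest a gentle right turn, but the map at the
# current pose estimate suggests going straight -- a likely sign that the
# position, the sensors, or the map itself is wrong.
mismatch, d = check_map_agreement((0.15, 0.01), (0.0, 0.01))
print(mismatch, round(d, 3))
```

A large divergence could then trigger the kind of correction the researchers describe, such as re-estimating the car's position on the map.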

"Our objective is to achieve autonomous navigation that is robust for driving in new environments," said Daniela Rus from Massachusetts Institute of Technology (MIT) in the US. Driverless cars, unlike human drivers, struggle with this basic reasoning and lack the ability to navigate on unfamiliar roads using observation and simple tools.

Human drivers simply match what they see around them with what appears on their GPS devices to determine where they are and where they are going. Autonomous cars, by contrast, must first map and analyse every road in a new area, which is very time consuming.

These systems also rely on complex maps -- usually generated by 3D scans -- which are computationally intensive to generate and process on the fly. "With our system, you don't need to train on every road beforehand. You can download a new map for the car to navigate through roads it has never seen before," said Alexander Amini from MIT.

To train the system, a human operator controlled a driverless Toyota Prius -- equipped with several cameras and a basic GPS navigation system -- while it collected data from local suburban streets, including various road structures and obstacles, the researchers said. When deployed autonomously, the system successfully navigated the car along a preplanned path in a different, forested area designated for autonomous vehicle tests.

The system uses a machine learning model called a convolutional neural network (CNN), commonly used for image recognition. During training, the system watches a human driver and learns how to steer, according to a paper presented at the International Conference on Robotics and Automation in Montreal, Canada.

The CNN correlates steering wheel rotations with the road curvatures it observes through its cameras and an input map. Eventually, it learns the most likely steering command for various driving situations, such as straight roads, four-way or T-shaped intersections, forks, and rotaries, researchers said. "In the real world, sensors do fail. We want to make sure that the system is robust to different failures of different sensors by building a system that can accept these noisy inputs and still navigate and localize itself correctly on the road," Amini said.
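For readers who want a concrete picture, the following is a minimal sketch of the kind of model the paper describes: a convolutional network that takes a camera frame and a coarse map crop and regresses a steering command, trained to imitate recorded human steering. The architecture, layer sizes, and input resolutions here are assumptions for illustration (written in PyTorch), not the authors' actual network.

```python
import torch
import torch.nn as nn

class SteeringCNN(nn.Module):
    """Illustrative CNN that fuses a camera image with a coarse map crop
    and regresses a steering-wheel angle. Layer sizes are arbitrary."""
    def __init__(self):
        super().__init__()
        # Convolutional encoder over the front-camera frame (3 x 96 x 96).
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # Smaller encoder over a single-channel top-down map crop (1 x 64 x 64).
        self.map_encoder = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # Fused features -> steering angle (radians).
        self.head = nn.Sequential(
            nn.LazyLinear(128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, image, map_crop):
        features = torch.cat(
            [self.image_encoder(image), self.map_encoder(map_crop)], dim=1
        )
        return self.head(features)

# Training imitates the human driver: minimise the error between the
# network's steering output and the recorded steering-wheel angle.
model = SteeringCNN()
image = torch.randn(4, 3, 96, 96)       # batch of camera frames
map_crop = torch.randn(4, 1, 64, 64)    # matching map crops
recorded_angle = torch.randn(4, 1)      # human steering labels
loss = nn.functional.mse_loss(model(image, map_crop), recorded_angle)
loss.backward()
```

Fitting such a model to pairs of observations and recorded human steering angles is what the article refers to as the system "watching and learning" from a human driver.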


First Published : 10 Jun 2019, 07:28:44 AM
