
Who’s the Better Decision-Maker: Self-Driving Car or Human?

Will self-driving cars be able to react better than a person can when something unforeseen happens on the road? That’s just one of many questions that auto manufacturers and the electronics industry will need to address in the coming years.

Sensors are the essential technology that makes it possible for vehicles to act independently. Automakers are now integrating a variety of key sensor types into their systems: LiDAR for generating 3D maps of the environment, sonar for short-range sensing, cameras for short-/mid-range sensing, and radar for mid-/long-range sensing. For many advanced driver assistance systems (ADAS) functions, decisions are made by fusing or aggregating data from multiple sensors. For instance, an obstacle or pedestrian detection function will typically fuse data from cameras as well as radar sensors.
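To make that fusion step concrete, here is a minimal sketch of combining a camera's range estimate with a radar's for the same detected pedestrian, using an inverse-variance weighted average. The noise figures and the fuse() helper are illustrative assumptions, not a description of any production ADAS pipeline:

```python
# Minimal sketch of camera/radar fusion for pedestrian detection.
# Illustrative only: the sensor variances and the fuse() helper are
# assumptions, not any automaker's or Cadence's actual pipeline.

def fuse(estimate_a, var_a, estimate_b, var_b):
    """Inverse-variance weighted average of two noisy measurements."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * estimate_a + w_b * estimate_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Camera: good at classification, noisy at range (variance in m^2).
camera_range, camera_var = 24.0, 4.0
# Radar: precise at range, poor at classification.
radar_range, radar_var = 22.5, 0.25

range_m, var_m = fuse(camera_range, camera_var, radar_range, radar_var)
print(f"fused range: {range_m:.2f} m (variance {var_m:.3f})")
# The fused estimate lands near the radar value because radar's range
# variance is much smaller -- while the camera still supplies the
# "this is a pedestrian" label that radar alone cannot.
```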

But, of course, sensors are only part of the equation. Just as important are the sophisticated algorithms that bring intelligence to the aggregated data and the DSPs that do all of the processing.

At Cadence, there’s a team of engineers in the IP Group that spends its time defining and developing such algorithms and DSPs for ADAS and communications applications. Recently, I had the opportunity to chat with two of the team members: Pierre-Xavier Thomas, design engineering group director, whose team develops software product collateral for Cadence Tensilica DSPs, such as DSP libraries, application use cases, and software signal processing example kernels; and Pushkar Patwardhan, design engineering architect.

Aggregating Data: In the Cloud or in the Car?

Now, while advances in algorithms and DSP and sensor technology have been impressive, the act of aggregating and then extracting useful insights from collected data remains a work in progress. According to Patwardhan, who leads development in radar algorithms, automotive electronics engineers are trying various approaches. “One of the main challenges for ADAS functions is to decide how to distribute the processing and data aggregation between the vehicle and the cloud,” he said. “In one school of thought, more data aggregation and processing are done in the vehicle, with less data communications overhead. Another approach is a more cloud-centric mechanism, with the vehicle requiring more communication with the cloud to obtain information about the environment and less processing done within the vehicle itself. It’s not clear yet which approach is a winner.”
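A back-of-envelope comparison shows why that communications overhead matters. Every number in this sketch is an illustrative assumption (stream rates, object counts), not a measured figure from any real vehicle:

```python
# Back-of-envelope comparison of uplink traffic for the two approaches
# Patwardhan describes. All numbers below are illustrative assumptions.

# Cloud-centric: stream compressed sensor data to the cloud.
camera_stream_mbps = 4.0    # one compressed HD camera feed (assumed)
lidar_stream_mbps = 35.0    # LiDAR point-cloud stream (assumed)
cloud_centric_mbps = 2 * camera_stream_mbps + lidar_stream_mbps

# Vehicle-centric: fuse on board, uplink only a tracked-object list.
objects_per_frame = 40      # tracked objects around the car (assumed)
bytes_per_object = 64       # position, velocity, class, covariance
frames_per_second = 20
vehicle_centric_mbps = (objects_per_frame * bytes_per_object *
                        frames_per_second * 8) / 1e6

print(f"cloud-centric uplink:   {cloud_centric_mbps:.1f} Mbit/s")
print(f"vehicle-centric uplink: {vehicle_centric_mbps:.2f} Mbit/s")
# Orders of magnitude apart -- which is why the in-vehicle approach
# trades more on-board DSP work for far less communications overhead.
```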

There’s also discussion in the industry about who owns the cloud-based elements of this system. Notes Thomas, “A car is a moving IoT application, and if you have a lot of these cars with multiple types of sensors, the information from each car could be aggregated and shared to help all drivers and other end users. This would place different constraints on the full system and also raise questions around who owns, controls, and can benefit from the information. Who will differentiate their products with the aggregated information from the cars moving around and sensing and monitoring their close environment? The car manufacturers, the system company offering the sensors…?”

Indeed, we could even see a mix of the cloud vs. in-vehicle approaches that Patwardhan mentioned. One viable approach, he noted, would be a vehicle-to-infrastructure communication model providing broad access to the information needed. These are among the questions that will need to be ironed out as the automotive ecosystem evolves.

So, Self-Driving Car or Human Driver?

So, getting back to that question of who makes better decisions: self-driving cars or humans? As Thomas notes, with the integration of ICs into cars, manufacturers are building in more capability to sense the environment and detect objects than a person has. The challenge, he said, is to ensure that the machines deliver better decisions from all the available information than a human can.

Certain convolutional neural network algorithms, which are used in pattern and image recognition, have demonstrated detection accuracy higher than that of humans. And, as Patwardhan pointed out, our senses don’t provide the precise estimates of distance and speed that radar sensors can offer. What might happen in the face of something unexpected, though, remains unknown. “We know humans, based on their experiences, can handle certain situations. It remains to be seen how automobiles can handle these situations,” he said.
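Patwardhan’s point about radar precision follows directly from the physics: range falls out of the echo’s round-trip time of flight, and radial speed out of the Doppler shift. The sketch below uses idealized pulse-radar relationships (production automotive radar is typically FMCW, but the underlying math is the same):

```python
# Why radar gives precise distance and speed: range from round-trip
# time of flight, radial velocity from the Doppler shift.
# Idealized relationships; real automotive radar is FMCW, but the
# same physics applies.

C = 3.0e8           # speed of light, m/s
F_CARRIER = 77e9    # 77 GHz automotive radar band

def range_from_tof(round_trip_s: float) -> float:
    """Target range: the echo travels out and back, hence the /2."""
    return C * round_trip_s / 2.0

def speed_from_doppler(doppler_hz: float) -> float:
    """Radial velocity from Doppler shift: v = f_d * c / (2 * f_c)."""
    return doppler_hz * C / (2.0 * F_CARRIER)

# Echo arrives 200 ns after transmission, shifted by 5.13 kHz:
print(f"range: {range_from_tof(200e-9):.1f} m")        # ~30 m
print(f"speed: {speed_from_doppler(5130.0):.2f} m/s")  # ~10 m/s closing
```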

We may see self-driving cars rolled out via a zoned approach where they’re allowed only in areas that are well mapped out to support them, said Patwardhan. Once these vehicles are out of the designated zones, he said, they may be required to turn off certain features and operate more like regular cars.
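One way to picture that zoned rollout is a simple geofence gate on the autonomy features. The polygon, coordinates, and mode names below are hypothetical, purely to illustrate the mechanism:

```python
# Hypothetical geofence gate for the zoned rollout Patwardhan
# describes: full autonomy only inside well-mapped zones, reduced
# features outside. Zone coordinates and mode names are made up.

def point_in_polygon(lat: float, lon: float, polygon: list) -> bool:
    """Ray-casting test: is (lat, lon) inside a polygon of (lat, lon) vertices?"""
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        lat_i, lon_i = polygon[i]
        lat_j, lon_j = polygon[j]
        if ((lat_i > lat) != (lat_j > lat)) and \
           (lon < (lon_j - lon_i) * (lat - lat_i) / (lat_j - lat_i) + lon_i):
            inside = not inside
        j = i
    return inside

# Illustrative zone: a small rectangle of well-mapped streets.
MAPPED_ZONE = [(37.0, -122.0), (37.0, -121.9), (37.1, -121.9), (37.1, -122.0)]

def allowed_driving_mode(lat: float, lon: float) -> str:
    return "full_autonomy" if point_in_polygon(lat, lon, MAPPED_ZONE) \
           else "driver_assist_only"

print(allowed_driving_mode(37.05, -121.95))  # inside  -> full_autonomy
print(allowed_driving_mode(37.20, -121.95))  # outside -> driver_assist_only
```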
