NTSB officials inspecting the vehicle that killed Elaine Herzberg. Credit: NTSB

The fatal crash that killed pedestrian Elaine Herzberg in Tempe, Arizona, in March occurred because of a software bug in Uber's self-driving car technology, The Information's Amir Efrati reported on Monday. According to two anonymous sources who talked to Efrati, Uber's sensors did, in fact, detect Herzberg as she crossed the street with her bicycle. Unfortunately, the software classified her as a "false positive" and decided it didn't need to stop for her.

Distinguishing between real objects and illusory ones is one of the most basic challenges of developing self-driving car software. Software needs to detect objects like cars, pedestrians, and large rocks in its path and stop or swerve to avoid them. However, there may be other objects—like a plastic bag in the road or a trash can on the sidewalk—that a car can safely ignore. Sensor anomalies may also cause software to detect apparent objects where no objects actually exist.
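
To make that filtering step concrete, here is a minimal sketch of how a perception system might sort detections before handing them to the planner. This is purely hypothetical and not Uber's actual code; the `DetectedObject` type, the class lists, and the confidence threshold are all invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical illustration -- not any real vendor's perception code.
# Object classes a planner would typically need to avoid vs. ones it can ignore.
MUST_AVOID = {"car", "pedestrian", "cyclist", "large_debris"}
CAN_IGNORE = {"plastic_bag", "trash_can_off_road", "sensor_noise"}

@dataclass
class DetectedObject:
    label: str          # classifier's best guess at what the object is
    confidence: float   # how sure the classifier is, from 0.0 to 1.0
    in_path: bool       # whether the object lies in the vehicle's planned path

def should_brake_for(obj: DetectedObject, threshold: float) -> bool:
    """Decide whether a single detection warrants braking.

    Detections scoring below `threshold` are treated as false positives and
    ignored -- a call that is harmless for a windblown bag but catastrophic
    if the "false positive" is actually a person.
    """
    if not obj.in_path:
        return False
    if obj.label in CAN_IGNORE:
        return False
    return obj.confidence >= threshold
```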

Software designers face a basic tradeoff here. If the software is programmed to be too cautious, the ride will be slow and jerky, as the car constantly slows down for objects that pose no threat to the car or aren't there at all. Tuning the software in the opposite direction will produce a smooth ride most of the time—but at the risk that the software will occasionally ignore a real object. According to Efrati, that's what happened in Tempe in March—and unfortunately the "real object" was a human being.
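
The tradeoff can be shown with a toy simulation: raising the confidence threshold cuts down on needless stops for phantom objects, but it also raises the odds of ignoring something real. The noise model and numbers below are made up for illustration only and are not drawn from any real system.

```python
import random

# Toy simulation of the tuning tradeoff -- invented numbers, not real data.
random.seed(0)

def simulate(threshold: float, trials: int = 10_000) -> tuple[float, float]:
    """Return (needless_stop_rate, missed_object_rate) for a given threshold."""
    needless_stops = 0   # braked for something harmless or not really there
    missed_objects = 0   # failed to brake for a real obstacle
    for _ in range(trials):
        is_real = random.random() < 0.05  # assume 5% of detections are real obstacles
        # Hypothetical confidence scores: real objects usually score high,
        # phantom detections usually score low, with some overlap.
        confidence = random.gauss(0.8, 0.1) if is_real else random.gauss(0.4, 0.15)
        brakes = confidence >= threshold
        if brakes and not is_real:
            needless_stops += 1
        if not brakes and is_real:
            missed_objects += 1
    return needless_stops / trials, missed_objects / trials

for t in (0.3, 0.5, 0.7):
    fp, fn = simulate(t)
    print(f"threshold={t:.1f}  needless stops={fp:.3f}  missed real objects={fn:.3f}")
```

Running it shows the pattern the paragraph above describes: a low threshold produces a car that brakes constantly but almost never misses a real obstacle, while a high threshold gives a smooth ride at the cost of occasionally ignoring something that is actually there.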

"There's a reason Uber would tune its system to be less cautious about objects around the car," Efrati wrote. "It is trying to develop a self-driving car that is comfortable to ride in."

"Uber had been racing to meet an end-of-year internal goal of allowing customers in the Phoenix area ride in Ubers autonomous Volvo vehicles with no safety driver sitting behind the wheel," Efrati added.

The more cautiously a car's software is programmed, the more often it will slam on its brakes unnecessarily. That will produce a safer ride but also one that's not as comfortable for passengers.

This provides some useful context for Efrati's March report that cars from Cruise, GM's self-driving car subsidiary, "frequently swerve and hesitate." He wrote that Cruise cars "sometimes slow down or stop if they see a bush on the side of a street or a lane-dividing pole, mistaking it for an object in their path."

You could read that as a sign that Cruise's software isn't very good. But you could also view it as a sign that Cruise's engineers are being appropriately cautious. It's obviously much better for software to produce a jerky, erratic ride than to provide a smooth ride that occasionally runs over a pedestrian. And such caution is especially warranted when you're testing in a busy urban environment like San Francisco.

Of course, the long-term goal is for self-driving cars to become so good at recognizing objects that false positives and false negatives both become rare. But Herzberg's death provides a tragic reminder that companies shouldn't get too far ahead of themselves. Getting fully self-driving cars on the road is a worthwhile goal. But making sure that's done safely is more important.

Uber declined to comment to The Information, citing confidentiality requirements related to an ongoing investigation by the National Transportation Safety Board. We've asked Uber for comment and will update if the company responds.
