Self-driving cars will most likely be our main form of transportation in the future. As frightening as that may seem given past reports of cars crashing while driving on autopilot, the technology isn’t all bad. In the case of Tesla’s autopilot, it can actually be life-saving — just ask Joshua Neally.
Late last month, Neally was driving home from his office in Springfield, Missouri. While on the highway, he felt a shooting pain through his lungs and into his chest. Rather than pull over and wait for an ambulance, he asked his Tesla Model X to find the nearest hospital and engaged its autopilot. “[I didn’t want] to cross over the interstate and slam into somebody or slam into one of the big rock walls,” Neally told local news outlet KY3. He reasoned that autopilot would be a safer driver than he was at that moment; as Slate reported, “he might have lost control of the car and in effect become a deadly projectile when those first convulsions struck.”
Neally’s prudence was rewarded. The Tesla drove him 20 miles to the nearest hospital, where he pulled up to the emergency room and was treated for a pulmonary embolism, a potentially fatal obstruction of an artery in the lungs. He was released that night and is now being treated for the condition. “I’m very thankful I had it for this experience,” Neally told KY3. “It’s the coolest technology I’ve ever seen, I would say, let alone owned.”
Neally’s story is encouraging, not just because autopilot may have saved his life but because it redeems the feature in light of the fatal crash the system caused earlier this year. Autopilot is a major selling point for Tesla’s vehicles: it allows a car to steer, accelerate, and even brake on its own for long stretches of freeway driving. As Neally described to KY3, the system “allows him to take his hands off the wheel for up to four minutes at a time. Then the car lets him know he needs to take the wheel, at least briefly, or else it looks for a spot to pull over and stop on the side of the road.”
That same feature was implicated in a fatal Florida crash this past May. That crash involved a Tesla Model S running an earlier version of the autopilot system, which failed to detect a semi truck crossing the car’s path because its camera could not distinguish the white side of the trailer against the brightly lit sky. The mistake cost driver Joshua Brown his life, though there are reports that Brown may have been watching a Harry Potter movie at the time of the crash.
Another crash in Montana attributed to Tesla’s autopilot drew attention from the National Highway Traffic Safety Administration. As Slate reports, “the National Transportation Safety Board was [also] examining whether autonomous driving technology was a hazard to safety.” Even the Securities and Exchange Commission investigated Musk’s timing in releasing data from the Florida crash. Those federal investigations, along with backlash from rival manufacturers, have stoked public fears about Tesla’s technology.
That said, Tesla believes that disabling the autopilot feature because of the Florida crash would be shortsighted. “In fact,” Slate reports, “the company argues that the critics have it backward: Given that its internal testing data suggest the feature drives more safely than humans do, Tesla maintains that it would be irresponsible and dangerous not to offer autopilot to its customers.” Tesla says it has data to back this up, though it has yet to release it. According to Elon Musk, autopilot could have saved some 500,000 lives had it been available on every car.
Whether that’s true remains to be seen, but the technology’s potential is promising, particularly since the purpose of developing technology is to make our lives better. As Brad Templeton, Track Chair for Computing at Singularity University, told us, roughly 33,000 Americans are killed in auto crashes every year, “more killed in car accidents in the United States than in its entire history of war going back to the Revolutionary War.” Cars with autopilot features like Tesla’s could genuinely save lives; automakers just need the flexibility to develop the technology. The next iterations will be rough, but the kinks should work themselves out. As Neally told KY3: “It’s not going to be perfect, there’s no technology that’s perfect, but I think the measure is that it’s better and safer.”