Sorting fact from fancy and fear isn’t always easy. In just the past three days we’ve heard all three about self-driving cars. First, Duke University roboticist Missy Cummings testified before Congress that auto companies were “rushing to market” before self-driving cars are ready, and “someone is going to die.” “Many of the sensors on self-driving cars are not reliable in good weather, in urban canyons, or places where the map databases are out of date,” she explained in arguing for federal standards for self-driving technology.
No one argues that the technology is ready today and no one argues that it will reduce fatalities to zero. Cummings may have been trying to say that a car with no features other than adaptive cruise control and lane centering will encourage drivers to fall asleep in the back seat, but it isn’t clear how federal regulation would prevent that since those technologies are already available on many cars.
Ironically, just a few days before, Ford explained how its self-driving cars would overcome all of the problems cited by Cummings. As the Antiplanner described a few months ago, Ford and other companies are relying heavily on precise maps that can be automatically updated every time an appropriately equipped car drives down a particular route (which can then update the maps for other cars). If an occupant wants to take an unmapped route, the self-driving car would refuse to go there without a human driver. This would solve all of Cummings’ issues without government intervention.
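The map-gating rule described above can be sketched in a few lines. This is a minimal illustration of the idea, not Ford's actual system; every name and threshold below is an invented assumption.

```python
from dataclasses import dataclass
import time

# Sketch: the car drives itself only on routes whose segments exist in
# the HD map and were updated recently enough by other equipped cars.
MAX_MAP_AGE_SECONDS = 7 * 24 * 3600  # assumed freshness threshold: one week

@dataclass
class MapSegment:
    segment_id: str
    last_updated: float  # Unix timestamp of the last crowd-sourced update

def can_self_drive(route_segment_ids, hd_map, now=None):
    """True only if every segment on the route is mapped and fresh."""
    now = time.time() if now is None else now
    for seg_id in route_segment_ids:
        seg = hd_map.get(seg_id)
        if seg is None:
            return False  # unmapped road: refuse to go without a human driver
        if now - seg.last_updated > MAX_MAP_AGE_SECONDS:
            return False  # stale map data: refuse to go without a human driver
    return True
```

Under this rule, an occupant asking for an unmapped route simply gets a refusal rather than a guess, which is the behavior Ford describes.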
Researchers for Baidu (a Chinese competitor of Google) argued in Wired this week that, even with the mapping technology used by Ford and other manufacturers, we’ll still need to change infrastructure to make self-driving cars work. The article specifically mentions road workers directing traffic in construction zones; sensors blinded by direct sunlight; and “complex situations” such as children running in traffic. But other self-driving car designers have been aware of these types of problems from the beginning, and new or improved infrastructure is neither the only solution nor the best one.
We might want to ask road workers to use consistent signage, for example by placing stop or directional signs at a fixed height above the pavement. That won’t require new infrastructure. When blinded by the sun, self-driving cars would do the same as human drivers: slow down. But there’s no reason to think we can’t build sensors that can see the road in any conditions just as well as humans can. As for complex situations, a self-driving car would treat anything it sees that isn’t fixed and on its map as a potential obstruction and be prepared to avoid it.
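The two behaviors above — slowing down when sensors are degraded and treating anything unmapped as a potential obstruction — can be sketched roughly. All names and the linear slow-down rule are illustrative assumptions, not any manufacturer’s real logic.

```python
def classify_detection(detection_id, mapped_fixed_objects):
    """Anything sensed that isn't a known fixed map feature is treated
    as a potential obstruction the car must be prepared to avoid."""
    if detection_id in mapped_fixed_objects:
        return "mapped_fixed"
    return "potential_obstruction"

def target_speed(base_speed_mps, sensor_confidence):
    """When sensors are degraded (e.g. blinded by the sun), slow down,
    just as a human driver would. Linear scaling is an assumption;
    confidence is clamped to [0, 1]."""
    return base_speed_mps * max(0.0, min(1.0, sensor_confidence))
```

A child, a mannequin, and a tumbleweed all land in the same "potential obstruction" bucket, which is exactly why the car doesn't need to fully understand a "complex situation" to avoid it.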
Meanwhile, the Guardian again raises the lame fear that self-driving cars will be susceptible to hacking. No, most self-driving cars being tested today are independent of the technology that would allow cars to be hacked. The only electronic signals those cars receive are from GPS satellites, and I don’t think anyone has figured out a way to use GPS satellites to hack one particular receiver.
The technology that will allow cars to be hacked is the vehicle-to-infrastructure (V2I) system that the Obama administration wants to mandate on all new cars, not just self-driving cars. Engineers for Google, Ford, Volkswagen, and others say their self-driving cars can do without it, but that’s the system that people should fear. If all cars use the same government-mandated V2I system, then they will be far more susceptible to hacking than if different manufacturers offer different systems and people have a choice of whether to use one or not.
The good news comes from a Wall Street Journal report that people can buy a self-driving car for $20,000. This is the kind of thing that Cummings was talking about because the car isn’t really self-driving, but it does have adaptive cruise control and lane centering, theoretically allowing drivers to take their hands off the wheel for short periods of time. The car in question is a Honda Civic which, when equipped with an optional Honda sensing package (which includes things like adaptive cruise control, lane centering, collision warning and braking), has a list price of $20,440 plus shipping.
I’m not sure why this is news now as the car was available last October. I guess something isn’t true until it is reported in a “newspaper of record” like WSJ. But the Journal‘s valid point is that auto manufacturers are making technologies like this available in low-end cars for minimal additional cost only a few years after they were available in high-end cars. The Honda Sensing package adds just $1,000 to the base price of the Civic, Accord, CR-V, and Pilot (but not the Fit or Odyssey). Coincidentally, the Antiplanner predicted a couple months ago that adding self-driving capabilities to new and many recent cars would cost about $1,000.
It’s all about fear. People fear change. It’s because of fear of change that they resort to slippery slope and other specious arguments. It’s out of fear that they make outrageous predictions.
It’s because of fear of change that people claimed cars and horses couldn’t share the road. It’s because of fear of change that people claimed humans would never fly, and when humans did fly, it was because of fear of change that they claimed that commercial flight was impractical or would be too dangerous.
It’s not just “someone” who is going to die; it’s everyone, and thank god. Otherwise nothing would ever change.
I am somewhat of an expert on this subject. Not only did I build three self-driving cars for DARPA, but I once wrote a report for the U.S. Air Force on GPS vulnerability of unmanned aircraft.
Futzing with the GPS receiver on a car would be quite easy (unless, perhaps, it had military-grade protection). The attackers would use their own transmitter to override the signal from the GPS satellites. The real GPS signals are inherently weak—they’re broadcast thousands of miles away, from satellites powered by solar panels—so they’re easily overpowered. If the goal is to only hack one particular receiver, that just means the attackers have to point their antenna at the right car—perhaps from another car driving alongside.
The good news is that, even without any hackers in the picture, self-driving cars just cannot afford to be blindly dependent on GPS, because of obstructions, signal echoes, and so on. Any successful self-driving car will use GPS only to get rough location and then look for visible road features to steer by (more or less the same way that human drivers use GPS). If GPS says turn left when the road turns right, a successful self-driving car will follow the road. If the inconsistency in the GPS signal persists, the car will probably enter an “I’m confused, human driver should take over” mode.
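That fallback behavior — steer by the visible road, use GPS only for rough position, and hand off to the human if the two persistently disagree — might look something like this sketch. The thresholds and class names are assumptions for illustration.

```python
DISAGREEMENT_LIMIT = 5  # assumed: consecutive ticks before requesting handoff

class GpsVisionArbiter:
    """Toy arbiter: always steer by visible road features; a persistent
    GPS/road disagreement triggers an 'I'm confused, human driver should
    take over' request (returned as None)."""

    def __init__(self):
        self.disagreement_ticks = 0

    def step(self, gps_heading_deg, road_heading_deg, tolerance_deg=20.0):
        """Return the heading to follow, or None to request human takeover."""
        if abs(gps_heading_deg - road_heading_deg) > tolerance_deg:
            self.disagreement_ticks += 1
        else:
            self.disagreement_ticks = 0  # agreement resets the counter
        if self.disagreement_ticks >= DISAGREEMENT_LIMIT:
            return None  # persistent inconsistency: hand off to the human
        return road_heading_deg  # follow the road, not the GPS
```

Note that even during a disagreement the car follows the road, never the GPS — which is why a spoofed GPS signal alone can annoy such a car but not steer it off a cliff.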
“No, most self-driving cars being tested today are independent of the technology that would allow cars to be hacked.” ~antiplanner
Source? We have non-driverless cars today that are getting hacked. Resistance to hacking doesn’t come from a lack of connectivity but from good design that keeps a breach in one area from spreading to another.
What evidence do we have that the design of these systems makes them unhackable? After all, the recent Jeep hack involved getting into the vehicle’s wifi/cell service and then reaching the main control system, which Chrysler had previously claimed was separate and safe. Well, it wasn’t separate; it was connected to a controller that was connected to the outside system. Chrysler was wrong. (https://blog.kaspersky.com/blackhat-jeep-cherokee-hack-explained/9493/)
What evidence do we have that these prototypes aren’t hackable? Whether or not one vehicle is saying “hey, I’m here” to the cars around it doesn’t make the vehicles unhackable; removing that channel just eliminates one of many possible pathways to gain access.
In fact, the point Cummings gets at is better addressed by Bruce Schneier. He long ago pointed out that computer security isn’t about keeping intruders out but about ensuring that when they get in, the damage is kept to a minimum. Any car with a computer will get hacked. The question is whether the systems are designed in a way that ensures the brakes can’t be cut, etc.
The problem is, every system on a driverless car – even those not talking to other vehicles – requires some or all of the three main components of security: confidentiality, integrity, and availability. Securing the brake system doesn’t protect riders in the car if hackers are simply able to feed the car fake data (integrity) that causes it to go off a cliff.
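As a toy illustration of the integrity point, a planner might at least gate incoming position data with a physical plausibility check, rejecting updates that imply impossible motion. The threshold and function names here are invented, not any vendor’s API.

```python
MAX_PLAUSIBLE_SPEED_MPS = 70.0  # assumed upper bound, roughly 250 km/h

def accept_position_update(prev_pos, new_pos, dt_seconds):
    """Reject position updates implying physically impossible motion.
    Positions are (x, y) tuples in meters; dt_seconds is the elapsed time.
    This is an integrity check on the data itself, independent of
    whatever access control protects the brake or steering systems."""
    if dt_seconds <= 0:
        return False  # non-positive time step: data is malformed
    dx = new_pos[0] - prev_pos[0]
    dy = new_pos[1] - prev_pos[1]
    speed = (dx * dx + dy * dy) ** 0.5 / dt_seconds
    return speed <= MAX_PLAUSIBLE_SPEED_MPS
```

A check like this doesn’t stop a determined attacker, but it is the kind of damage-limiting design Schneier describes: compromised data degrades the car into a confused, slow state rather than sending it off a cliff.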
The same goes for messing with the availability of the data in ways that leave cars unable to drive (availability). If you think traffic is bad in LA today, just imagine some extreme political organization causing all the cars on the roads to barely move. Or worse: just as hackers hold data for ransom today, they could force a just-in-time (JIT) manufacturer to cough up money by shutting down the semis, or the roads, that carry the parts its production line needs to receive today.
Hacking isn’t just about getting into a system and controlling it. Hacking is simply about finding a way to make the system do something it really shouldn’t do. A hack from the old days was a whistle that produced the correct tone to make the phone systems of the era give out a free long-distance call.
With driverless cars, hacking isn’t just about gaining control of the vehicle. It’s about causing the vehicle to slow to a crawl and weave, thinking it’s avoiding children when really the street is just littered with mannequins. Even if you can’t gain control of the braking system, you can pull off the same hack if you can compromise the data and cause the car to stop because it thinks it’s not on a road.
Etc., etc., etc.