Self-Driving Cars: End of the Human Driving Era


It’s 2030, and fewer than 1% of the 190 million licensed drivers in the U.S. can still legally drive. The reason: all 51 states (Puerto Rico was granted statehood in 2022) have prohibited human drivers from taking direct control of passenger vehicles on public roads unless they are specially trained and have a compelling reason. Far-fetched?

The Case for Autonomous Cars

There are pressing reasons to get artificially intelligent, self-driving cars to market as quickly as can safely be done. While the idea of fleets of Google carbots automatically delivering pizzas is mesmerizing, the real justification for accelerating this technology’s availability is the potential for a dramatic reduction in injury and fatality rates.

In the United States, the National Highway Traffic Safety Administration (NHTSA) recognizes the impact that both alcohol use and distracted driving have on public safety; each is a leading contributor to crash deaths and injuries every year.

However, U.S. statistics compare quite well with those of most of the world. The World Health Organization (WHO) lists traffic fatalities as the #10 cause of death worldwide, accounting for 1.2 million deaths in 2008.

With approximately 3 trillion miles logged each year in the United States, there is an average of 1 death per 88 million road miles. Although much more data is needed to accurately gauge the safety prospects of self-driving vehicles, the track record is very promising. Of the 250,000 miles logged by Google’s autonomous cars so far, two accidents have been recorded and both were the fault of human operators. Two things are certain: self-driving vehicle software won’t drink while driving, and it actually can keep its attention on the road while posting Facebook updates.
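As a back-of-the-envelope sketch, the arithmetic behind these figures can be checked directly (a quick Python illustration using only the numbers cited above; the comparison with Google’s fleet is illustrative, not statistical):

```python
# Back-of-the-envelope check of the fatality-rate figures cited above.
annual_miles_us = 3e12   # ~3 trillion vehicle miles driven per year in the U.S.
miles_per_death = 88e6   # ~1 death per 88 million road miles

implied_deaths_per_year = annual_miles_us / miles_per_death
print(f"Implied annual road fatalities: {implied_deaths_per_year:,.0f}")  # ~34,091

# Google's autonomous fleet has logged 250,000 miles with no at-fault accidents.
# That is a small fraction of the ~88 million miles over which a single
# fatality would be expected, which is why much more data is still needed.
google_miles = 250_000
print(f"Fraction of one expected-fatality interval: {google_miles / miles_per_death:.3%}")
```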

Industry analysts believe this technology won’t be ready for public consumption for another 10 years, but signs point to more rapid adoption. The attention and resources that car manufacturers are investing indicate a desire to bring this capability to market quickly. Google project manager Anthony Levandowski says their self-driving cars are already completing test courses faster than humans, and he thinks a better-than-human safety record will be demonstrated well before the decade is out.

Industry Embraces Autonomous Technology

In 2008, General Motors stated that it would begin testing driverless cars by 2015, and that they could be on the road by 2018. More recently, Ford Motor Company executive chairman William Ford, Jr. talked about a connection revolution at the ITS World Congress 2011 in Orlando. Similarly, GM’s Vice President for Global Research and Development, Alan Taub, promoted technologies such as vehicle-to-vehicle communications, but worried about the significant challenge of still having humans at the wheel. In 2011, Taub stated:

The technologies we’re developing will provide an added convenience by partially or even completely taking over the driving duties. The primary goal, though, is safety. Future generation safety systems will eliminate the crash altogether by interceding on behalf of drivers before they’re even aware of a hazardous situation.

The impact of intelligent vehicle systems can already be seen in the declining U.S. fatality rate: down about 34%, from 1.73 deaths per 100 million vehicle miles in 1995 to 1.14 in 2009.
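As a quick sanity check on that figure (using only the two rates quoted above):

```python
# Decline in the U.S. fatality rate, in deaths per 100 million vehicle miles.
rate_1995 = 1.73
rate_2009 = 1.14

decline = (rate_1995 - rate_2009) / rate_1995
print(f"Decline from 1995 to 2009: {decline:.1%}")  # ~34.1%
```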

Evolving Legal Status

Once legal hurdles, liability issues, and public-perception challenges are overcome, and autonomous vehicles begin driving with injury and fatality rates lower than those of human drivers, the liability burden will shift. Unless a compelling reason can be found, humans must yield to self-driving cars with a better, proven track record. While Taub says that driving is “fun” for humans, fun is no longer an option when lives are at stake and better alternatives exist.

The shift to self-driving cars is not something that will occur overnight. Although the technology has received a lot of press in the past few years, we have actually been moving toward greater vehicle autonomy for decades, with capabilities such as adaptive cruise control, electronic stability control, collision warning systems, and lane departure detection, to name a few.

Self-driving technology will initially require that a competent driver be able to take control at a moment’s notice, and many governments are racing to change their laws accordingly. These changes seem premature, since the legal and liability issues presented by this level of augmentation should be no different from those of today’s intelligent driver-assistance technologies: the human driver is still ultimately responsible for the vehicle’s operation at all times.

However, the question of liability will become murky once self-driving vehicles can assume full control from start to finish and no longer require that a human driver be capable of taking over. A second round of legal changes will be required, and questions of liability must be decided. Although driving will be much safer overall, in any particular accident the assumption of fault will fall on the autonomous technology until proven otherwise. Manufacturers and developers of such technologies cannot afford millions of dollars in settlements for each incident. We must eventually decide to offer them legal protection, since the benefits to the public as a whole are clear.

 

6 thoughts on “Self-Driving Cars: End of the Human Driving Era”

  1. WITH THE SELF-DRIVING CAR, LIKE THE CARS OF TODAY, IF YOU ARE THE OWNER YOU ARE THE PERSON WHO IS AT FAULT FOR WHATEVER HAPPENS WITH THE CAR. IF YOU DO NOT MAINTAIN THE CAR AND SOMETHING GOES WRONG WITH IT, YOU CANNOT PUT THE BLAME ON SOMEONE ELSE; YOU ARE AT FAULT FOR NOT MAINTAINING THE CAR. SO WHAT IS ALL THE BS ABOUT WHO IS AT FAULT IF THE CAR GETS IN AN ACCIDENT? YOU ARE THE OWNER, YOU ARE AT FAULT, JUST LIKE YOU ARE NOW.

  2. If my brakes fail because of a manufacturer’s design flaw and my car collides with another vehicle, there is a compelling argument to hold the manufacturer liable, not me, provided I could not reasonably have avoided the accident.
    The question of fault is not black and white; different juries could decide it with many different outcomes. Imagine we are in “stage 1”, where I am required to be able to take control of the vehicle at any time. I’m ultimately responsible if the vehicle decides to drift into the oncoming lane and I don’t take control to correct it. I would probably be held at fault by a jury.
    However, in “stage 2”, where there is no requirement for a competent driver to be able to take control, there is no longer a driver in control of the vehicle at all. For example, imagine an elderly patient being driven alone to a medical appointment; it’s similar to riding an autonomous tram at an airport today. The manufacturer assumes much of the responsibility for the operation of the vehicle. The problem is that, even if fully autonomous driving makes the public safer as a whole, the liability costs to manufacturers from even a few individual accidents will make them reluctant to offer the technology generally without assurances of liability immunity.

  3. How can you possibly be held liable for the behavior of a vehicle you have no conceivable means of controlling? I think that is a little bit ridiculous, though you do have a valid point about how our current laws are structured. For instance, financial responsibility (registration) for a vehicle matters when deciding who is to blame in an accident.
    So the question is: would the manufacturer have to assume financial responsibility for all of its products? If so, I don’t see many motor vehicle manufacturers jumping on this one right away.
    Also, what would this do to insurance? Who would insure the vehicle? What type of policy would it need? These are all valid concerns beyond the simple technological question of whether or not we CAN do this.
    Now… my concern lies in “modification” of how one’s vehicle operates, or how another individual’s vehicle operates. It’s just software, written by fallible human coders, and the potential for harm is devastating if a vulnerability in such a system could be exploited…
    P.S.: Caps lock is cruise control for cool 🙂

  4. I do not think a self-driving car is a good idea. What if it malfunctions? What will the driver do then, with no control over the vehicle?

  5. Which is more likely: the software malfunctioning, or human error?
    If someone is talking on their cellphone/adjusting the radio/eating fast food/gazing out the window/drunk/angry/being incompetent while driving, I’d trust proven software on a well-maintained machine any day of the week.

  6. I love what you did with the article. The one thing that’s missing is that people will no longer need to own a vehicle. You will be able to use an app on your phone (or an implant with a sort of augmented reality) to let a company know you need a vehicle; it will drive itself to you and then drive you to your specified location. It’ll be cheaper to rent a one-way trip somewhere, or even have everything delivered to you.
    All delivery jobs will then be done without a driver. They will find a way to have vehicles fix and build roads themselves.
    This is inevitably the first step toward automation on a grand scale. Once people feel safe with cars driving them, they will feel safer and safer with machines doing most if not all of the labor-intensive jobs. So eventually all repetitive, simplistic jobs will be taken care of by robots; not robots in the standard sense, either, but machines made to do specific tasks, as in factories.
    Money will drive corporations to do this, as humans are much less efficient.
