Self-driving cars are the most challenging automation project ever undertaken. Driving is one of the most complicated activities humans routinely do, requiring a great deal of processing and executive decision making. Besides following driving regulations, we make eye contact with others to confirm who has the right of way, react to weather conditions, and otherwise make judgment calls that are difficult to encode as hard-and-fast rules. On top of that, there are many unpredictable external factors that must be accounted for.
Self-driving systems must drive reliably and protect everyone in the vehicle across a wide variety of situations and conditions. Currently, the technology is more capable in some situations than in others. Through sensors and detailed mapping software, self-driving systems build representations of their environments and update them continuously in real time. They classify the objects they detect and predict their likely behaviour before selecting appropriate responses. In both speed and accuracy, these systems already surpass human drivers in many situations: lasers can see in the dark, and reaction times can be nearly instantaneous.
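The classify-predict-respond cycle described above can be sketched in miniature. This is a hypothetical illustration only; the class names, thresholds, and constant-speed prediction are simplifying assumptions, not any real vehicle's software.

```python
from dataclasses import dataclass

# Hypothetical sketch of the sense -> classify -> predict -> respond cycle.
# All names and numbers here are illustrative assumptions.

@dataclass
class DetectedObject:
    kind: str          # classification, e.g. "pedestrian" or "vehicle"
    distance_m: float  # distance ahead of the car, in metres
    speed_mps: float   # closing speed toward the car, m/s (positive = approaching)

def predict_time_to_contact(obj: DetectedObject) -> float:
    """Naively predict behaviour by assuming constant speed."""
    if obj.speed_mps <= 0:
        return float("inf")  # stationary or moving away: no predicted contact
    return obj.distance_m / obj.speed_mps

def select_response(objects: list[DetectedObject],
                    brake_threshold_s: float = 2.0) -> str:
    """Choose the most cautious response the predicted scene requires."""
    for obj in objects:
        if predict_time_to_contact(obj) < brake_threshold_s:
            return "brake"
    return "continue"

scene = [
    DetectedObject("vehicle", distance_m=50.0, speed_mps=5.0),    # 10 s away
    DetectedObject("pedestrian", distance_m=6.0, speed_mps=4.0),  # 1.5 s away
]
print(select_response(scene))  # "brake": the pedestrian is under the 2 s threshold
```

Even this toy loop shows why the real problem is hard: the response is only as good as the classification and the behavioural prediction feeding it, and real road users rarely move at constant speed.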
However, some conditions still constrain them. Cameras struggle with strong, low-angle sunlight that makes traffic lights difficult to read, and lasers can be confused by fog and snowfall. The ideal way to train a self-driving car would be to show it billions of hours of footage of real driving and use that footage to teach the computer good driving behaviour in simulations. Machine learning systems perform well when they have abundant data and poorly when data is scarce. But collecting data for self-driving cars is expensive. And because unusual, unfamiliar, and unstructured situations, such as accidents, road work, or a fast-approaching emergency response vehicle, are rare, the car may stall: it has encountered a situation so infrequently in its training data that it cannot respond safely and accurately.
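The long-tail problem can be made concrete with a small simulation. The scenario labels and frequencies below are invented for illustration; the point is only that a rare scenario yields almost no training examples even in a large sample.

```python
import random

random.seed(0)

# Assumed, illustrative scenario frequencies: an "emergency_vehicle" event
# occurring in 0.01% of frames is vanishingly rare in the collected data.
frequencies = {"normal": 0.9989, "roadwork": 0.001, "emergency_vehicle": 0.0001}

# Draw 100,000 simulated driving frames according to those frequencies.
sample = random.choices(list(frequencies), weights=list(frequencies.values()),
                        k=100_000)
counts = {k: sample.count(k) for k in frequencies}
print(counts)
```

Out of 100,000 frames, the rare scenario appears only a handful of times, which is far too few for a learning system to generalise from, exactly the stall-inducing gap described above.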
Well-structured processes and environments are much easier to automate than poorly structured ones. Automated self-driving systems function best in an unambiguous environment, which many driving environments are not. The engineers of self-driving systems simply cannot foresee every possible combination of conditions that will occur on the road. Over time, learning will take place and the number of situations that systems cannot recognise will decrease. But novel combinations of conditions will never be eliminated, and sometimes these may produce disastrous consequences.
The conundrum might seem easily solved by allowing self-driving vehicles to hand control back to a human driver in emergency situations. The issue, however, is that this makes self-driving systems only semi-self-driving at best. Moreover, humans zone out when their full attention is not needed. A recent Tesla software update even allowed a motorist to play video games while driving, worsening this problem.
Two fatal accidents involving Tesla vehicles operating on their Autopilot systems demonstrate how this space between semi-self-driving and intermittent human control may be the most dangerous place of all. In the 2016 Florida crash, the driver of the Tesla had his hands on the steering wheel for only 25 seconds of the 37 minutes in which he operated the vehicle in automated control mode. In the 2018 California crash, the driver's hands were not detected on the steering wheel in the six seconds preceding the collision.
Self-driving cars must also navigate a shared environment: pedestrians who may cross the road without looking, cyclists, animals, roadworks, and of course whatever the weather brings. This means we need to think not just about the onboard technology but also about the environment in which it is deployed. The transition will take decades, with autonomous vehicles having to share the roads with human drivers (and pedestrians) throughout.
In the long run, driverless cars will help us reduce accidents, save time spent commuting, and make more people mobile. The onboard technology is developing rapidly, but we are entering a transition stage in which we need to think carefully about how self-driving vehicles will interact with human drivers and the wider driving environment. The key question is: at what point will self-driving systems evolve to the extent that they are fully ready for our ever more complex road systems? This author contends that there is still some way to go before we are fully ready for self-driving cars.