I came across this article in Insurance Journal today (of course!).
Can a 40-ton self-driving truck be safe? Developers say yes, others are skeptical.
The optimist in me says yes, self-driving trucks can be safe.
The risk analyst in me says I have questions about how to make them safe.
The insurance nerd in me says maybe. They may well be made safe, but we're not sure how best to provide protection.
Before you start your apocalyptic playlist, understand that two very different facts are true.
First, there are vehicles with varying degrees of autonomy all around you as you drive, especially as you take the on-ramp to the nearest freeway.
Second, we are still many years away from the autonomous vehicle revolution if we ever fully achieve it.
That being said, we can safely say that a properly equipped, operated, and monitored autonomous truck can be as safe as any other vehicle on the road. I didn’t say it was risk-free or completely safe. Given today’s road conditions, traffic, and driver preferences, an autonomous truck can be just as safe as any other truck on today’s roads.
When you read autonomous vehicle, you probably think of George Jetson putting his feet up on the dashboard of his flying car (oh, could someone please get me a flying car?) and letting the car fly him to work or home. The truth is far less exciting than that. Just about every vehicle on the road has some level of autonomy. From cruise control to lane-keeping assist, these systems take over part of the driving task from the human driver.
What are the risks?
The first risk that comes to mind is the possibility that drivers could be injured by the autonomous vehicle. This may seem almost counterintuitive, given the potential for increased safety once a fleet of autonomous vehicles is on the road. While that may ultimately be true, there is still the time between where we are today and any future in which most, if not all, vehicles are autonomous.
Today, the autonomous vehicle must contend with a flood of other vehicles with less autonomy, including fully human-controlled vehicles. Human drivers act and react by anticipating the possible reactions of the drivers around them, yet each individual driver is difficult to predict. That creates a safety problem.
The questions that need to be answered are how the autonomous vehicle will let the human drivers around it know it is not being driven by a person and how it will respond to the human drivers around it. Since the actions of individuals are difficult to predict, the autonomous vehicle must be able to find the best possible reaction to other road users.
Another risk we should consider is the risk to other people’s property. As the article tells us, we’re not looking at something the size of a Roomba cruising around, bumping into things and correcting course. We’re talking 40-ton trucks and cargo moving at speeds of 60-80 miles per hour. That’s a lot of energy, especially when it comes into contact with someone else’s property.
This is partly an extension of the risk of injury for a person, as the main risk we are thinking of here is the risk of property damage on the road. These are mostly the other vehicles on the road and everything in them.
We must also consider that autonomous vehicles may drive not only on highways, but also on country roads and city streets, and may even make deliveries to your favorite restaurant in your city. That brings us to the possibility of autonomous trucks damaging other property.
Another risk to consider is the loss, damage or theft of the truck and its contents. While the critics concerned about the safety of autonomous vehicles don’t talk about it, it’s a real risk that those who might deploy autonomous technology need to consider.
It’s the other side of the coin of what happens when one of these vehicles crashes. We’ve already looked at the risks associated with injuring other drivers or damaging someone else’s property, but what about the property being moved and the truck itself? There is clearly a chance that the vehicle and its cargo will be damaged if an accident occurs, whether involving another vehicle or a single vehicle accident.
There is another risk that needs to be addressed, one that could be a cause of the risks we have already looked at: the risk that the owner of the autonomous vehicle will lose control of it, or lose sight of it. We are considering autonomous vehicles, which would seem to preclude remote control, but even an autonomous vehicle has a planned route and should be trackable. So we have to consider the risk of the vehicle doing something unexpected.
This risk arises from the possibility that the programming will fail in some way, causing the vehicle to deviate from its expected course. Any number of programming problems can cause these types of errors, from a coding error in the software, to a data-entry error by the route programmer, to a power failure that resets everything.
There is also the risk of the autonomous vehicle being hacked or hijacked. We’re not exactly thinking of a Mission: Impossible-style heist, where the hijacker jumps from another vehicle or drops from a helicopter overhead while the truck barrels down a crowded urban freeway at 75 mph.
As an autonomous vehicle, it needs to send and receive multiple signals, including GPS, so it knows where it is and can report that location to someone else, along with what is best described as telemetry: speed, fuel level, oil pressure, and other data someone needs to see. Whenever signals are sent and received by the vehicle, it is possible for those signals to be hacked or hijacked, allowing someone else to take control of the vehicle. At best, that means the vehicle is sent somewhere other than its intended destination and the cargo is stolen. At worst, the vehicle becomes a weapon in someone else’s hands.
How can the risks be contained?
I mentioned earlier that an autonomous truck can be safe if properly equipped, monitored, and operated. That holds even in the current operating environment.
Before I go any further: I don’t necessarily recommend a Level 5 fully autonomous vehicle, where there is no option for a human driver. It’s not time for that. To be honest, I doubt there’s much appetite for it in much of this country. Some people (me) just love to drive.
The truck clearly needs to be equipped with all the necessary cameras, sensors, computing power, software, radar, and mechanical systems that enable it to drive and know what’s going on around it. Why not equip it with placards as well? “Self-driving vehicle. Follow (pass) with caution.” “There is no driver in the cab. Keep that in mind when you come by.” “My computer drives better than your driver.”
As we are concerned about the truck’s cybersecurity, it should be equipped with proper hardware and software designed to allow only the necessary signals and block any unauthorized signals from accessing it or its data.
It must also be equipped with the ability to report information about itself, since it requires continuous monitoring.
Proper monitoring of an autonomous truck requires it to be equipped with an array of cameras, sensors, transmitters, and receivers so that it can be tracked. This may become less necessary as the technology advances, but while it is in its infancy and very few such trucks are on the road, the truck needs to be continuously monitored. The data collected must include location, location history, speed, planned route, fuel level, planned refueling stops, oil pressure, engine temperature, charging system status, engine speed, tire pressure, tire wear estimates, and probably other factors as well. All because someone needs to know whether the truck can reach its destination.
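To make that monitoring list concrete, the reported data might be modeled as a simple record with a basic "can it get there?" check. The field names and thresholds below are illustrative assumptions, not a real telematics schema:

```python
from dataclasses import dataclass

@dataclass
class TruckTelemetry:
    # Illustrative subset of the monitored data points.
    location: tuple[float, float]   # (latitude, longitude)
    speed_mph: float
    fuel_gallons: float
    miles_per_gallon: float
    miles_to_destination: float
    oil_pressure_psi: float

    def can_reach_destination(self) -> bool:
        """Rough check: is there enough fuel on board to finish the planned route?"""
        return self.fuel_gallons * self.miles_per_gallon >= self.miles_to_destination

    def needs_attention(self) -> bool:
        """Flag readings outside an assumed normal operating band (20 psi is a placeholder)."""
        return self.oil_pressure_psi < 20 or not self.can_reach_destination()
```

The point of the redundant human-plus-automated monitoring described here is that a flag like `needs_attention()` is raised automatically, and a person then decides whether to schedule a refueling stop or pull the truck off the road.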
As this level of autonomy is currently the exception rather than the rule, there needs to be both automated monitoring and human monitoring. Redundant monitoring capabilities reduce the risk of some critical information falling through the cracks. Does it eliminate that risk? Certainly not, but it mitigates it and that’s what risk management is all about anyway.
Properly operating an autonomous truck hauling cargo from one place to another is a bit more complicated than simply programming the thing to go where it’s supposed to go and leaving it at that. In this case, I propose that the autonomous vehicle have a human driver available for certain functions. In short, the vehicle should drive itself in a more or less autopilot mode on freeways, certain US routes, and other limited-access divided highways.
Whenever the truck needs to exit a freeway, drive on smaller roads, or perform other potentially complicated maneuvers that increase the risk of an accident, a human driver should take over. In short, the truck operates much like a commercial airliner: the human pilot flies during takeoff, landing, and as otherwise needed, while the majority of the journey is flown on autopilot.
Over time, this human driver could move out of the vehicle to a remote location, immersed in a virtual-reality driver’s seat that emulates the driving experience without the driver having to be physically present, reducing the number of drivers required over time.
Given that even an autonomous truck needs to fill up, it would be handy to have a human driver who can pull the truck to the pump and actually pump the fuel. If there is no human driver to do it, you need to hire attendants to pump the fuel and charge the trucking company for it. It’s also good to have a human driver on board if the truck has a mechanical problem, be it a blown tire, an engine failure, or something else.
Fully autonomous vehicles are not in our immediate future, but the technology is advancing, and more sophisticated semi-autonomous vehicles are coming soon. That future requires careful consideration of the risks involved and how best to mitigate them.