At first glance, you may think self-driving cars and industrial automation do not have much in common. However, the massive investment in self-driving cars offers many lessons for industry in general. Key among these are the safety and testing of AI algorithms, and cybersecurity.
Another interesting subject is how the ethical model in the algorithm decides on the course of action to follow. When Mercedes announced their self-driving cars were to prioritise occupants over pedestrians, the German Ministry of Transportation challenged their proposal.
Car makers have become experts in mechanical safety and in addressing risks to both occupants and other road users. The Euro NCAP standard shows a car’s crash protection rating (stars) in the event of an accident. But what is the system for testing its AI performance, and whether its decisions are ethical? Furthermore, who is responsible for updates and bug fixes?
Safety of AI algorithms
The use of AI and algorithms in industrial automation is also increasing, although these applications are arguably less mission-critical than in cars. Initially, cobots and predictive maintenance applications are likely to benefit.
Nevertheless, how safe are the AI algorithms and who is responsible for testing them? How are their perception algorithms tested to ensure the decisions made are correctly prioritised? In the case of cars, the present method seems to be to drive them tens of millions of miles. Another, potentially more effective, way may be machines checking machines using machine learning.
Machine learning is growing ever more sophisticated, thanks to algorithms that pit two artificial intelligences against each other. These algorithms, known as generative adversarial networks (GANs), train a generator and a discriminator in competition, with each improving as it tries to outwit the other.
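As a rough illustration of the adversarial idea, the sketch below trains a tiny GAN on a toy one-dimensional distribution using PyTorch. The network sizes, data and training schedule are illustrative assumptions only, nowhere near the scale a real perception or verification system would need.

```python
# Minimal toy GAN sketch: two networks trained in competition.
# The generator tries to produce samples that look like the "real" data,
# while the discriminator tries to tell real samples from generated ones.
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Real" data for this toy example: samples from a 1-D Gaussian.
def real_batch(n):
    return torch.randn(n, 1) * 1.5 + 4.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # Train the discriminator: real samples labelled 1, generated samples labelled 0.
    real = real_batch(64)
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train the generator: try to make the discriminator label its output as real.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, generated samples should cluster around the real data's mean (about 4.0).
print("mean of generated samples:", generator(torch.randn(1000, 8)).mean().item())
```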
Cybersecurity for self-driving cars
Autonomous cars process huge amounts of data from both internal and external sources. To enable them to function safely and efficiently, they are also cloud connected. This will inevitably lead to hackers turning their attention to exploiting these vulnerabilities for profit. The same is true of the Industrial Internet of Things (IIoT) and the Internet of Things (IoT).
Machine learning also plays an important role in analysing this data and the vehicle’s behaviour. However, this analysis is likely to be cloud based, as it may be too computationally intensive for in-car computing.
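As a hedged illustration of what such cloud-side analysis might involve, the sketch below flags anomalous vehicle telemetry with scikit-learn’s IsolationForest. The telemetry fields, values and thresholds are hypothetical, invented for the example rather than taken from any manufacturer’s system.

```python
# Sketch of anomaly detection on (synthetic) vehicle telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" telemetry: [speed_kmh, brake_pressure, steering_angle_deg]
normal = np.column_stack([
    rng.normal(60, 10, 5000),
    rng.normal(0.2, 0.05, 5000),
    rng.normal(0, 5, 5000),
])

# Fit an isolation forest on the normal behaviour.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Two new readings: one plausible, one wildly out of range (e.g. a spoofed sensor).
new_readings = np.array([
    [62.0, 0.22, -3.0],
    [250.0, 1.5, 90.0],
])
print(model.predict(new_readings))  # 1 = looks normal, -1 = flagged as anomalous
```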
The UK’s Department for Transport, the National Cyber Security Centre and the BSI have developed a new cyber security standard for self-driving cars.
Ethical AI algorithms
In the event of an accident, who is your car programmed to protect, its occupants or the other road users? Will your European car’s AI use different criteria from one from China or the USA, and who legislates?
A recent paper from CEPS-Europe, “Ethics, algorithms and self-driving cars”, considers the relevance of the “trolley problem”. A thought experiment in ethics, it considers a runaway trolley moving toward five incapacitated people lying on the tracks. You are standing next to a lever that controls a switch. Pulling the lever directs the trolley onto a side track, saving the five people on the main track. However, there is a single person lying on the side track. You have two options:
– Do nothing and allow the trolley to kill the five people on the main track.
– Pull the lever, diverting the trolley onto the side track where it will kill one person, but what if the one person is your child?
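To make the difficulty concrete, below is a deliberately crude, hypothetical sketch of what encoding such a choice in software could look like. The Outcome type, the scoring function and the weights are invented for illustration; the point is that choosing those weights is an ethical and policy decision, not merely an engineering one.

```python
# Hypothetical sketch: an "ethical model" reduced to a weighted harm score.
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    occupants_harmed: int
    others_harmed: int

def choose(outcomes, occupant_weight=1.0, other_weight=1.0):
    """Return the outcome with the lowest weighted harm score.

    The weights are where the ethics live: setting occupant_weight lower
    than other_weight is exactly the kind of prioritisation that Mercedes'
    announcement and the German ministry's objection were about.
    """
    def score(o):
        return occupant_weight * o.occupants_harmed + other_weight * o.others_harmed
    return min(outcomes, key=score)

options = [
    Outcome("do nothing", occupants_harmed=0, others_harmed=5),
    Outcome("divert onto the side track", occupants_harmed=1, others_harmed=0),
]
print(choose(options).description)
```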
Taking a crime scene approach, the paper considers how the car ended up there, what it knew, and how it (the AI) decided on that course of action. It also looks at who is going to be liable for the damage.
The author believes that it is not the trolley problem itself, but the important and complex policy issues behind it, that require consideration. These include the need to preserve human control over machines, data governance and algorithmic accountability. The paper concludes that current legal systems are insufficiently equipped to cope with most of these issues, and that mapping the outstanding ethical and policy dilemmas is a useful starting point for a thorough overhaul of public policies.