posteriori / Shutterstock

Robotic vehicles have been used in hazardous environments for decades, from decommissioning the Fukushima nuclear power plant to inspecting underwater energy infrastructure in the North Sea. More recently, autonomous vehicles from boats to grocery delivery trolleys have made a gentle transition from research centres to the real world with very few hiccups.

Yet the promised arrival of self-driving cars has not progressed beyond the testing stage. And in one test drive of an Uber self-driving car in 2018, a pedestrian was killed by the vehicle. Although such accidents happen every day when humans are behind the wheel, the public holds driverless cars to far higher safety standards, interpreting one-off accidents as proof that these vehicles are too unsafe to unleash on public roads.

A small wagon-like robot with a flag on a city street.
If only it were all as easy as autonomous grocery delivery robots.
Jonathan Weiss / Shutterstock

Programming the perfect self-driving car that will always make the safest decision is a huge technical task. Unlike other autonomous vehicles, which are generally deployed in tightly controlled environments, self-driving cars must operate on an endlessly unpredictable road network, rapidly processing many complex variables to stay safe.

Inspired by the Highway Code, we are drawing up rules to help self-driving cars make the safest decisions in every conceivable scenario. Verifying that these rules work is the final hurdle we must overcome to get trustworthy self-driving cars safely onto our roads.

Asimov’s first law

Science fiction author Isaac Asimov penned the “three laws of robotics” in 1942. The first and most important law reads: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” When self-driving cars injure humans, they clearly violate this first law.




Read more: Are self-driving cars safe? An expert on how we drive in the future


We at the National Robotarium are leading research to guarantee that self-driving vehicles always make decisions that comply with this law. Such a guarantee would solve the very serious safety concerns that are preventing self-driving cars from taking off worldwide.

A red alert box highlights a woman pushing a pram in a vehicle’s detection zone.
Self-driving cars need to detect, process and make decisions about hazards and risks almost instantly.
Jiraroj Praditcharoenkul / Alamy

Artificial intelligence software is actually quite good at learning about scenarios it has never faced. Using “neural networks”, which take their inspiration from the layout of the human brain, such software can spot patterns in data, such as the movements of cars and pedestrians, and then recall those patterns in novel scenarios.
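As a rough illustration of that idea, here is a toy sketch in Python (our example, not a production perception system): a single artificial neuron, the simplest possible “neural network”, learns to separate pedestrian-like from car-like movement and then recalls that pattern on a trajectory it has never seen. All feature values and labels are invented for the illustration.

```python
import numpy as np

# A toy "neural network": a single neuron trained by gradient descent to
# separate pedestrian-like from car-like movement patterns.

rng = np.random.default_rng(0)

# Features per observed trajectory: [average speed (m/s), heading change (deg/s)]
X = np.array([[1.4, 2.0], [1.2, 5.0], [1.5, 3.0],      # pedestrians
              [13.0, 0.5], [15.0, 1.0], [12.0, 0.2]])  # cars
y = np.array([0, 0, 0, 1, 1, 1])                       # 0 = pedestrian, 1 = car

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

w, b = rng.normal(size=2), 0.0

# Plain gradient descent on the logistic loss.
for _ in range(2000):
    p = sigmoid(X @ w + b)          # current predictions
    grad = p - y                    # gradient of the loss w.r.t. the logits
    w -= 0.01 * X.T @ grad / len(y)
    b -= 0.01 * grad.mean()

# A novel trajectory the network never saw during training:
print(sigmoid(np.array([14.0, 0.8]) @ w + b))  # close to 1: car-like
```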

But we still need to prove that any safety rules taught to self-driving cars will work in these new scenarios. To do this, we can turn to formal verification: the method computer scientists use to prove that a rule works in all circumstances.

In mathematics, for example, rules can prove that x + y is equal to y + x without testing every possible value of x and y. Formal verification does something similar: it allows us to prove how AI software will react to different scenarios without our having to exhaustively test every scenario that could occur on public roads.
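To make the analogy concrete, here is a minimal sketch in the Lean theorem prover (our illustration, not part of the verification tooling described in this article). It proves that x + y equals y + x for every natural number at once, with no testing of individual values:

```lean
-- A one-line machine-checked proof that addition commutes for ALL
-- natural numbers, reusing Lean's library lemma Nat.add_comm.
theorem add_comm_example (x y : Nat) : x + y = y + x :=
  Nat.add_comm x y
```

Once this statement is checked, it holds universally; formal verification of a neural network aims for the same kind of universal claim about the network’s outputs.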

One of the most significant recent successes in the field is the verification of an AI system that uses neural networks to avoid collisions between autonomous aircraft. Researchers have successfully formally verified that the system will always respond correctly, regardless of the horizontal and vertical manoeuvres of the aircraft involved.
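The aircraft system was verified with specialised tools; as a hedged sketch of the underlying idea, the Python below uses interval bound propagation, one basic technique in neural network verification, to prove that a toy network’s output stays below a limit for every input in a region, without enumerating inputs. The network, its weights and the threshold are all invented for the illustration.

```python
import numpy as np

# Interval bound propagation: prove that a tiny ReLU network's output
# stays below a safety threshold for EVERY input in a box, without
# enumerating inputs. All weights and numbers here are invented.

def affine_bounds(lo, hi, W, b):
    """Propagate an input box [lo, hi] through x -> W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def relu_bounds(lo, hi):
    # ReLU is monotone, so it maps interval bounds to interval bounds.
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Toy network: 2 inputs -> 3 hidden units (ReLU) -> 1 output score.
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)

# The property: for ALL inputs in [-1, 1] x [-1, 1], output <= threshold.
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
lo, hi = relu_bounds(*affine_bounds(lo, hi, W1, b1))
lo, hi = affine_bounds(lo, hi, W2, b2)

threshold = 10.0
if hi[0] <= threshold:
    print(f"verified: output <= {threshold} for every input in the box")
else:
    print("bounds too loose to verify; a real tool would refine them")
```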

Coding the Highway Code

Human drivers follow the Highway Code to keep all road users safe, relying on the human brain to learn these rules and apply them sensibly in innumerable real-world scenarios. We can teach self-driving cars the Highway Code too. That requires us to unpick each rule in the code, teach vehicles’ neural networks to understand how to obey each rule, and then verify that they can be relied upon to safely obey these rules in all circumstances.
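As an example of what unpicking a rule might look like, here is a hypothetical Python sketch (our illustration, not the AISEC encoding) that turns one Highway Code idea, keeping a safe stopping distance, into a machine-checkable property of the controller’s chosen action. The distance model and all numbers are simplified assumptions.

```python
from dataclasses import dataclass

# One hypothetical Highway Code rule, "keep a safe stopping distance",
# written as a machine-checkable property of the controller's decision.

@dataclass
class State:
    speed_mps: float   # own speed, metres per second
    gap_m: float       # distance to the vehicle ahead, metres

def safe_stopping_distance(speed_mps: float) -> float:
    # Simplified thinking-plus-braking model; the numbers are assumptions.
    reaction_time_s = 1.0
    deceleration = 6.0          # m/s^2, firm braking
    return speed_mps * reaction_time_s + speed_mps**2 / (2 * deceleration)

def rule_safe_gap(state: State, chosen_accel: float) -> bool:
    """The property to verify: if the gap is already below the safe
    stopping distance, the controller must not accelerate."""
    if state.gap_m < safe_stopping_distance(state.speed_mps):
        return chosen_accel <= 0.0
    return True

# Example check on a single state (a verifier would cover them all):
print(rule_safe_gap(State(speed_mps=20.0, gap_m=15.0), chosen_accel=1.0))  # False
```

A verification step would then aim to prove that the trained network satisfies a property like `rule_safe_gap` for every reachable state, rather than merely testing a sample of states.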

However, the task of verifying that these rules will always be safely followed is complicated when we examine the consequences of the phrase “must never” in the Highway Code. To make a self-driving car as reactive as a human driver in any given scenario, we must program these policies to account for nuance, weighted risk and the occasional scenario where different rules come into direct conflict, requiring the car to override one or more of them.
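One common way to handle such conflicts, sketched hypothetically below in Python (our illustration, not the policy of any real vehicle), is to weight each rule by the severity of violating it, so that when every available action breaks some rule, the car picks the least serious violation:

```python
# Weighted rules: when every available action violates some rule, pick
# the action whose violations are least severe. Rules, weights and
# actions here are all invented for the illustration.

RULES = {
    "never_cross_solid_line": 100.0,     # severity of violating the rule
    "never_hit_pedestrian": 10_000.0,
}

# Which rules each candidate action would violate in this scenario.
VIOLATIONS = {
    "brake_in_lane": ["never_hit_pedestrian"],      # cannot stop in time
    "swerve_across_line": ["never_cross_solid_line"],
}

def violation_cost(action: str) -> float:
    return sum(RULES[r] for r in VIOLATIONS.get(action, []))

def choose(actions: list[str]) -> str:
    # The least-bad action: minimal total weighted rule violation.
    return min(actions, key=violation_cost)

print(choose(["brake_in_lane", "swerve_across_line"]))
# -> swerve_across_line: crossing the line is the lesser violation
```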


Robot ethicist Patrick Lin introduces the complexity of automated decision-making in self-driving cars.

Such a task cannot be left to programmers alone: it will require input from lawyers, safety experts, systems engineers and policymakers. Within our newly formed AISEC project, a team of researchers is designing a tool to facilitate the kind of interdisciplinary collaboration needed to create ethical and legal standards for self-driving cars.

Teaching self-driving cars to be perfect will be a dynamic process: it depends on how legal, cultural and technological experts define perfection over time. The AISEC tool is being built with this in mind, offering a “mission control panel” to monitor, supplement and adapt the most successful rules governing self-driving cars, which will then be made available to the industry.

We hope to deliver the first experimental prototype of the AISEC tool by 2024. But we still need to create adaptive verification methods to address remaining safety and security concerns, and these will likely take years to build and embed into self-driving cars.

Accidents involving self-driving cars always create headlines. A self-driving car that recognises a pedestrian and stops before hitting them 99% of the time is a cause for celebration in a research lab, but a killing machine in the real world. By creating rigorous, verifiable safety rules for self-driving cars, we are attempting to make that remaining 1% of accidents a thing of the past.


The Conversation

Ekaterina Komendantskaya receives funding from EPSRC, NCSC and DSTL.

Luca Arnaboldi and Matthew Daggitt do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

This article was originally published on The Conversation.

Ekaterina Komendantskaya

guest author

Ekaterina Komendantskaya is a professor at the School of Mathematical and Computer Sciences, Heriot-Watt University

Luca Arnaboldi

guest author

Luca Arnaboldi is a researcher at the School of Informatics, University of Edinburgh

Matthew Daggitt

guest author

Matthew Daggitt is a researcher at the School of Mathematical and Computer Sciences, Heriot-Watt University
