On the road to regulating self-driving cars
As cars become increasingly able to drive themselves, to what extent should they be able to explain their actions? A UN body is putting that question and others to the public to find answers.
What happens the day an unoccupied self-driving car hits a child crossing the road in a remote area where there are no eyewitnesses? Does the car detect that a collision occurred? Does it stop and call emergency services? Is the car able to explain what happened, or rather can the artificial intelligence (AI) driving system recall what decisions it made that led the car to crash into a pedestrian?
These are questions the public is being asked in an online survey called the Molly Problem.
“The Molly Problem was a thought experiment that we came up with [as] a way of focusing people’s minds on what information is important, or in the context of AI, what level of explainability do you need in the AI system, to justify what happened, and record what happened,” says Bryn Balcombe, chair of the focus group on AI for autonomous and assisted driving at the Geneva-based International Telecommunication Union (ITU).
Unsurprisingly, the preliminary results from the survey show that people expect an empty self-driving car not to commit a hit-and-run. They have clear expectations that the AI driving system should record enough data to be able to explain what happened. In fact, most think this data should be collected for near collisions too.
The focus group Balcombe leads includes close to 350 international participants representing the automotive industry, telecoms, universities, and regulators. They are drafting a proposal for an international technical standard – an ITU recommendation – for the monitoring of the behaviour of self-driving vehicles on the road. The Molly Problem is part of this effort.
The work of the ITU group addresses current gaps in standards and regulations for self-driving vehicles. “The reality is that there are no standards for collision detection with pedestrians,” says Balcombe, who is also the chief strategy officer of Roborace, a championship for self-driving racing cars. “There’s no specific regulation that addresses recording near-miss event data, and identifying near-miss events, even.”
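As an illustration of the gap Balcombe describes, a near-miss detector could be sketched in code. This is purely a hypothetical example, not part of any ITU draft or existing standard: it assumes time-to-collision (TTC) — distance to an object divided by closing speed — as a proxy for a near miss, with an assumed 1.5-second threshold.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    timestamp: float     # seconds since the start of the drive
    object_type: str     # e.g. "pedestrian"
    ttc_seconds: float   # estimated time to collision at observation
    severity: str        # "collision" or "near_miss"

@dataclass
class EventRecorder:
    # Threshold is an assumption for illustration; no standard defines one.
    near_miss_ttc: float = 1.5
    events: list = field(default_factory=list)

    def observe(self, timestamp, object_type, distance_m, closing_speed_ms):
        """Record a collision (zero distance) or a near miss (low TTC)."""
        if closing_speed_ms <= 0:
            return  # object is moving away: nothing to record
        ttc = distance_m / closing_speed_ms
        if distance_m <= 0:
            self.events.append(Event(timestamp, object_type, 0.0, "collision"))
        elif ttc < self.near_miss_ttc:
            self.events.append(Event(timestamp, object_type, ttc, "near_miss"))

recorder = EventRecorder()
# Pedestrian 6 m away, closing at 5 m/s -> TTC 1.2 s: logged as a near miss.
recorder.observe(12.0, "pedestrian", distance_m=6.0, closing_speed_ms=5.0)
# Cyclist 30 m away, closing at 5 m/s -> TTC 6 s: not logged.
recorder.observe(40.0, "cyclist", distance_m=30.0, closing_speed_ms=5.0)
```

Even this toy version makes the regulatory question concrete: whatever threshold and fields a standard eventually mandates determine exactly what an AI driving system can later "explain" about an incident.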
In Switzerland, the governing Federal Council conducted consultations last year that should inform an amendment of the road traffic law. The revised text would allow the Federal Council to adapt the law for self-driving cars through ordinances that do not require the approval of parliament. This would give it flexibility to adapt as self-driving cars become more prevalent.
The group’s draft standard is aimed at supporting international regulations and technical requirements that bodies of the UN Economic Commission for Europe (UNECE) are responsible for updating for self-driving vehicles.
“How can you take principles, which have been agreed by governments all around the world, and translate those into this AI, digital world?” asks Balcombe.
The group will submit its proposal to the ITU early next year, at which point the ITU will decide whether to turn it into an ITU recommendation, which can then be referenced by regulators.
AI for Good
The group’s efforts to ensure that self-driving vehicles improve road safety were born from discussions at the ITU’s AI for Good Global Summit in 2019. More than a million people a year die in road traffic accidents. Nine out of ten road fatalities occur in low- and middle-income countries.
“AI for Good was built on the premise that we don’t have very long to reach the 2030 Sustainable Development Goals (SDGs), and AI holds great promise to achieve some of those goals,” says Fred Werner, head of strategic engagement at the ITU and a catalyst for the launch of AI for Good.
The ITU hopes to leverage its unique membership model within the UN system, which includes 193 member states and over 900 private companies, universities, and other organisations, to bring different stakeholders together to discuss the opportunities and challenges of AI.
“AI experts themselves would say that AI is too important to leave to the experts. So, the goal of the summit is really to bring as many different voices to the table as possible,” Werner says.
Outcomes of AI for Good over the years have included several pre-standardisation focus groups like the one Balcombe chairs on self-driving cars. These groups are looking, for example, at the use of AI for health, energy efficiency, and natural disaster management.
Governance
Angela Müller, senior policy and advocacy manager at AlgorithmWatch Switzerland, a non-profit that monitors AI systems and their impact on society, is sceptical of the message AI for Good may convey.
She says parts of the narrative tend to sway the debate away from the need for governance frameworks. “When you choose to use this narrative, that AI will eventually save the world, then the solution would just be to invest in AI research.”
Transparency on how AI driving systems work is key to having an evidence-based public debate on governance. This is why Müller welcomes research that is open to the public and focuses on explaining how AI systems make decisions and what their impact on humans is.
“It’s very important that these discussions are happening right now, and that this kind of research takes place, because we need it as a basis for our governance debate.”
The European Commission (EC) is currently working on the world’s first legislation to regulate AI. The proposal currently under discussion addresses the risks of this technology and defines clear obligations regarding its specific uses. For now, Switzerland does not have anything comparable to the proposed European AI legislation.
Müller says making sure standardisation processes are inclusive is important, not least as they may serve as blueprints for future regulations. Separate regulatory efforts at the national and international levels, such as the EC’s draft legislation, are needed, she says, to ensure providers are held accountable for the behaviour of the AI driving systems they produce.
AI for road safety
Earlier this month, the ITU, the UN special envoy for road safety, and the UN envoy on technology launched a new initiative on AI for Road Safety. It aims to encourage public and private efforts to use AI technologies that increase safety for all road users, with solutions applicable in low- and middle-income countries.
AI could help make sense of the data that vehicles, other road users, and infrastructure will increasingly be able to collect. Better statistics about crashes, for instance, could help improve road infrastructure and accelerate emergency responses.
“Cars share data with each other, cars share data with pedestrians, infrastructure shares data with both, and we should be using that data with AI to enhance safety on the roads,” Balcombe explains. The focus group is part of the broader AI for Road Safety initiative.
The cost of fully self-driving vehicles will remain too high for large-scale adoption in lower income countries by 2030, when UN targets to halve road traffic fatalities are to be met. But a lot of the data that can be collected about self-driving cars, Balcombe says, can also be gathered from less expensive vehicles equipped with assisted driving systems.
“That’s why we’re really interested to expand the role of the focus group, to start looking at how we can take these technologies but deploy the most important and most valuable ones in those countries to make a difference by 2030.”