Has Switzerland missed the train on AI regulation?
The European Union has drafted the world's first law regulating artificial intelligence. Where does that leave non-member Switzerland, which was not at the table?
The European Union (EU) has reached an agreement on the final draft of the AI Act, considered the world's first legislation aimed at limiting the growing power of artificial intelligence systems and the companies that develop them. The law "will deliver on the European promise, ensuring that rights and freedoms are at the heart of the development of this revolutionary technology," member of the European Parliament and AI Act negotiator Brando Benifei said in a statement after the proposed legislation was agreed.
It aims to ban AI systems that pose an "unacceptable" risk to citizens' rights and democracy, such as those that use sensitive personal data for psychological manipulation, social scoring, and racial, sexual, and religious profiling.
Lawmakers also zeroed in on generative AI software, such as ChatGPT, and on tools used to create manipulated images, requiring transparency around training data and intellectual property.
And any AI application placed on the market that is considered "high-risk" must meet strict requirements; companies that fail to comply risk fines of up to 7% of global turnover.
If passed by a vote in the European Parliament and the EU Council this spring, the law will take effect in the European Union. But what will it mean for non-EU member state Switzerland, which is very active in AI research and home to international organisations such as the United Nations' European headquarters?
After long and exhausting negotiations, EU member states agreed on the most hotly debated points.
The use of facial recognition software in public spaces by police and governments will not be banned altogether, as the European Parliament demanded in its first draft of the Act. However, it will be limited to exceptional cases for reasons of national security and law enforcement.
Companies that use generative AI software such as ChatGPT, or image manipulation tools that produce so-called "deepfakes", must label the resulting content as artificially generated and be transparent about the data used to train the systems, while respecting copyright and intellectual property. Countries such as Italy, France and Germany were among the most vocal opponents of these measures, fearing they might inhibit innovation by their AI companies.
The so-called "high-risk systems" cover a long list of applications, including biometric identification, access to the labour market and universities, and the use of public and private services. However, many experts say the definition of what constitutes "high risk" remains vague.
The text is not yet final; the technical details of the agreement will be worked out in the coming weeks.
Switzerland joins the global race to regulate AI
Since ChatGPT, the chatbot developed by the US company OpenAI, burst onto the market in November 2022, several countries have moved to regulate AI or limit its risks in some way. The EU, which has been working on this since 2021, has been under pressure to finalise and pass its law as soon as possible.
In October 2023, China launched its Global AI Governance Initiative, open to all countries that are part of the Beijing government's New Silk Road initiative. In the same month, US President Joe Biden's administration issued an executive order on AI regulation. Shortly afterwards, 29 countries gathered at Bletchley Park in the United Kingdom and signed a declaration calling for the safe and responsible development of AI. Switzerland was among them.
Historically, the Alpine country has favoured a relaxed, light-touch approach to AI regulation.
“For Switzerland, no regulation is better than bad regulation,” Livia Walpen, policy advisor for international relations at the Federal Office of Communications (OFCOM), said in September during a panel discussion at the Swiss-based Institute for AI Research Idiap.
But Walpen emphasised that the pressure for regulation was also strong in Switzerland, especially after the arrival of ChatGPT a year ago. And indeed, Switzerland has recently changed its approach. At the end of November, the Swiss government joined the growing list of countries interested in regulating AI, stating that it wants to explore regulatory approaches that are in line with European law and the Council of Europe's AI Convention, to which Switzerland is contributing. A decision on the way forward will come by the end of 2024.
Switzerland ‘missed the train’ on AI regulation
By that time, the EU will likely have passed its law, which will probably come into force around the end of 2025. Does that mean it’s too late for Switzerland to find its own way to regulate AI?
"In effect, yes," says Boris Inderbitzin, a Zurich-based technology policy specialist and lawyer. In his view, the influence of the European law leaves Switzerland little choice but to adopt it passively.
"Switzerland has missed the train on relevant AI regulation," says Inderbitzin, who sees the Alpine country's absence from European negotiating tables as its Achilles' heel, even if the country has been active in other international fora, such as the Council of Europe.
As a strong democracy and a centre of innovation and research in new technologies, Switzerland could have made an important contribution to the European AI law. "Now it can only go along with the change, without having had anything to say about how the EU shapes it," he says.
Swiss businesses face compliance mandate
To continue doing business in the European market, Swiss companies will have to comply with the new law. Many have already started preparing, mindful of their experience with the EU's General Data Protection Regulation (GDPR), introduced in 2018. "So many companies didn't take it seriously and are still struggling with it," says Kevin Schawinski, co-founder of a Zurich-based start-up that helps companies develop products compliant with AI regulations. "Companies have realised that the longer they wait, the more difficult and expensive it will be to comply with European law."
According to a study by consultancy firm Intellera, companies will face annual costs of between €230,000 and €4 million (CHF218,000-CHF3.7 million) to ensure the fairness and reliability of their high-risk AI systems. This will also require them to hire specialised personnel. In Switzerland, these costs will fall on about 30% of companies, says Schawinski. But they will weigh most heavily on start-ups and small and medium-sized businesses, which make up a significant part of the Swiss economy.
“The EU has set the bar too high,” argues Philippe Gillieron, a law professor at the University of Lausanne and an intellectual property and technology lawyer.
Switzerland, centre of global AI governance?
For his part, Schawinski thinks the EU draft law can provide a commercial advantage for European companies that offer AI systems perceived as trustworthy by the public, since the law is the first of its kind aiming to guarantee the development of secure AI.
Should Switzerland therefore copy-paste the European law? It could do better, argues Inderbitzin, given its expertise, its pragmatic approach to AI applications and its position as a country that promotes multilateralism and human rights.
“But to unleash our potential and have a global impact we should reinforce our relationship with the EU,” he says.
Meanwhile, the country's flagship technical universities aim to position Switzerland as a leading AI centre focused on transparency and reliability. Many AI researchers see a way forward for the Alpine country if it establishes itself as a neutral location for the governance of new technologies, seeking global solutions.
Ethics and digital economy experts Niniane Paeffgen and Salome Eggler see potential in this approach, as they outlined in a recent report on Switzerland's AI governance.
"The country can leverage its strengths more strategically to create global framework conditions for AI," they wrote.
Edited by Veronica DeVore