Should we put the brakes on AI? And how?
As algorithms become more powerful, world powers are debating how to limit their risks. Switzerland, recently named the most innovative country in the world once again, is also at the table.
Since ChatGPT launched to the public and became part of everyone’s vocabulary, AI has infiltrated many aspects of our daily lives: we have started using it to converse, write e-mails and answer our questions. It has also changed the landscape for many businesses, including media organisations like ours, promising greater efficiency but also drawing attention to the risks of AI, such as the spread of false or inaccurate data and news.
In March, Elon Musk and other leading figures in the field of AI published an open letter calling for a halt to the training of systems more powerful than GPT-4. But so far, no one has put a stop to the development of these impressive, widely used algorithms.
>> One of the researchers who signed the letter told us why it is urgent to stop the uncontrolled development of AI:
AIs are out of (democratic) control
When it comes to AI regulation, things are moving: in June the European Parliament approved a draft law on artificial intelligence, the first law in the world that aims to regulate the development and application of AI. The final text is expected to be voted on later this year.
This law could have a big impact globally, prompting other countries to follow suit.
__________________________________________________________
Event tip: AI+X Summit
If you’re near Zurich later this week and are interested in the latest in AI research, you can attend the ETH AI+X Summit, a gathering of top researchers in the field held at the Federal Technology Institute ETH Zurich, with numerous accessible panel discussions and exchanges on all things AI. You’ll find all the information here.
________________________________________________________
As the world’s leading country in innovation and technological progress, Switzerland is contributing to the development of international regulation in the Council of Europe. Within its national borders, the debate on how to curb AI is lively.
Recently, I took part in an informal conversation on the subject with experts from the research, government, industrial and legal sectors, organised by the Idiap artificial intelligence research institute, based in southern Switzerland. I was struck by the fact that most participants agreed on the need for regulation but were sceptical about a European law.
Where does Switzerland stand on regulating AI?
What is lacking, they say, is an understanding of what AI really is and what it can do. Unlike other, more “tangible” innovations (such as those in the pharmaceutical sector), the wide range of technologies that rely on AI makes it hard to define clear regulatory boundaries. For instance, under the draft European law, a large language model such as ChatGPT is not considered a high-risk technology. It will therefore be difficult to find a perfect solution that ensures responsible use without inhibiting innovation.
We’ll keep reporting on developments surrounding AI in Switzerland; in the meantime, you can have your say on the issue and explore some of our top stories on the topic below:
ChatGPT: intelligent, stupid or downright dangerous?
‘In 2023 governments woke up to the realities of AI’
What to expect from the ITU ‘AI for Good’ summit
The ethics of artificial intelligence
Participate in our debate!
You can find an overview of ongoing debates with our journalists here. Please join us!
If you want to start a conversation about a topic raised in this article or want to report factual errors, email us at english@swissinfo.ch.