Director of Law and Digital Studies at Télécom Paris (IP Paris)
Key takeaways
AI is not outside the law. Whether it's the GDPR for personal data, or sector-specific regulations in the health, finance, or automotive sectors, existing regulations already apply.
In Machine Learning (ML), models are built from data rather than explicitly programmed, and they operate in a probabilistic manner. Their results are accurate most of the time, but a risk of error is an unavoidable characteristic of this type of model.
A challenge for the future will be to surround these very powerful probabilistic systems with safeguards, for tasks such as image recognition.
Upcoming EU AI regulations in the form of the “AI Act” will require compliance testing and “CE” marking for any high-risk AI systems put on the market in Europe.
Ambassador for the Digital Sector and founding member of the Cap Digital competitiveness cluster
Key takeaways
AI technology shows enormous promise, but there are a number of pitfalls associated with its use, including deepfakes, human rights violations and the manipulation of public opinion.
This constantly evolving multi-purpose tool is prompting intense global reflection over a framework for shared governance.
Increasingly, new AI technologies threaten users’ privacy and intellectual property, and require shared governance.
Europe fears that by regulating AI at a “national” level, it will be weakened and overtaken by other powers.
In 2025, France will host a major international summit on AI, which should help move these issues forward.
Although the technology is evolving rapidly, it is possible to regulate AI in the long term on the basis of fundamental and robust principles.
Ph.D. in Applied Mathematics and Head of Division in the French Army
Christophe Gaie
Head of the Engineering and Digital Innovation Division at the Prime Minister's Office
Key takeaways
Current artificial intelligence (AI) excels at specific tasks but remains different from artificial general intelligence (AGI), which aims for intelligence comparable to that of humans.
Current AI models, while sophisticated, are not autonomous and have significant limitations that differentiate them from AGI.
Fears about AGI are growing; some experts are concerned that it could supplant humanity, while others consider this prospect to still be a long way off.
Rational regulation of AGI requires an informed analysis of the issues at stake and a balance between preventing risks and promoting benefits.
Proposals for effective regulation of AGI include national licences, rigorous safety tests and enhanced international cooperation.
Contributors
Sophy Caulier
Independent journalist
Sophy Caulier has a degree in Literature (University Paris Diderot) and in Computer Science (University Sorbonne Paris Nord). She began her career as an editorial journalist at Industrie & Technologies and then at 01 Informatique. She is now a freelance journalist for daily newspapers (Les Echos, La Tribune), as well as specialised and general-interest magazines and websites. She writes about digital technology, economics, management, industry and space. Today, she writes mainly for Le Monde and The Good Life.