
How can artificial intelligence be regulated?

5 episodes
  • 1
    AI Act: how Europe wants to regulate machines
  • 2
    Is it possible to regulate AI?
  • 3
    Are we moving towards global regulation of AI?
  • 4
    Artificial general intelligence: how will it be regulated?
  • 5
    AI Act: what are the implications for sensitive sectors in Europe?
Episode 1/5
With Sophy Caulier, Independent journalist
On December 1st, 2021
3 min reading time
Winston Maxwell
Director of Law and Digital Studies at Télécom Paris (IP Paris)

Key takeaways

  • AI is not outside the law. Whether it is the GDPR for personal data or sector-specific regulations in the health, finance, or automotive sectors, existing rules already apply.
  • In machine learning (ML), algorithms are created from data rather than explicitly programmed, and they operate in a probabilistic manner. Their results are accurate most of the time, but a risk of error is an unavoidable characteristic of this type of model.
  • A challenge for the future will be to surround these very powerful probabilistic systems with safeguards for tasks like image recognition.
  • Upcoming EU AI regulations in the form of the “AI Act” will require compliance testing and ‘CE’ marking for any high-risk AI systems put on the market in Europe.
Episode 2/5
On September 20th, 2023
4 min reading time
Félicien Vallet
Head of the AI department at the Commission Nationale de l'Informatique et des Libertés (CNIL) (French Data Protection Authority)

Key takeaways

  • The increasing use of AI in many areas raises the question of how it should be managed.
  • At present, there is no regulation specific to AI in force in Europe, even though the field is constantly evolving.
  • The European Parliament has voted in favour of the AI Act, a regulatory text on artificial intelligence.
  • The rapid development and use of AI is a cause for concern, raising major issues of security, transparency and automation.
  • To address these issues, the French Data Protection Authority (CNIL) has set up a multidisciplinary department dedicated to AI.
Episode 3/5
On May 14th, 2024
5 min reading time
Henri Verdier
Ambassador for the Digital Sector and founding member of the Cap Digital competitiveness cluster

Key takeaways

  • AI technology shows enormous promise, but there are a number of pitfalls associated with its use, including deep fakes, human rights violations and the manipulation of public opinion.
  • This constantly evolving multi-purpose tool is prompting intense global reflection over a framework for shared governance.
  • Increasingly, new AI technologies threaten users’ privacy and intellectual property, and require shared governance.
  • Europe fears that by regulating AI at a “national” level, it will be weakened and overtaken by other powers.
  • In 2025, France will be hosting a major international summit on AI, which will enable these issues to move forward.
  • Although the technology is evolving rapidly, it is possible to regulate AI in the long term on the basis of fundamental and robust principles.
Episode 4/5
On October 2nd, 2024
5 min reading time
Jean Langlois-Berthelot
Doctor of Applied Mathematics and Head of Division in the French Army
Christophe Gaie
Head of the Engineering and Digital Innovation Division at the Prime Minister's Office

Key takeaways

  • Current artificial intelligence (AI) excels at specific tasks but remains different from artificial general intelligence (AGI), which aims for intelligence comparable to that of humans.
  • Current AI models, while sophisticated, are not autonomous and have significant limitations that differentiate them from AGI.
  • Fears about AGI are growing; some experts are concerned that it could supplant humanity, while others consider this prospect to still be a long way off.
  • Rational regulation of AGI requires an informed analysis of the issues at stake and a balance between preventing risks and promoting benefits.
  • Proposals for effective regulation of AGI include national licences, rigorous safety tests and enhanced international cooperation.
Episode 5/5
On October 14th, 2025
6 min reading time
Jean de Bodinat
Founder and General Manager at Rakam AI and Teacher at Ecole Polytechnique (IP Paris)
Solène Gérardin
Lawyer, AI Act and GDPR Specialist

Key takeaways

  • The EU’s Artificial Intelligence Act introduces a comprehensive European legal framework for regulating AI use cases, emphasising risk-based governance across sectors.
  • Businesses in Europe must adhere to new compliance requirements, especially for high-risk AI systems that affect health, safety, and fundamental rights.
  • Early integration of AI Act compliance can turn complex legislation into clear strategic advantages: enhanced trust, improved fairness, and stronger competitive positioning.
  • Leading sectors where the regulation applies include, but are not limited to, education, recruitment, healthcare and financial services, where transparency and bias reduction are central concerns.
  • Proactive legal and technical engagement with AI regulation can be a game changer, giving companies a chance to shape the future AI ecosystem responsibly.

Contributors

Sophy Caulier

Independent journalist

Sophy Caulier has a degree in Literature (University Paris Diderot) and in Computer science (University Sorbonne Paris Nord). She began her career as an editorial journalist at 'Industrie & Technologies' and then at 01 Informatique. She is now a freelance journalist for daily newspapers (Les Echos, La Tribune), specialised and non-specialised magazines and websites. She writes about digital technology, economics, management, industry and space. Today, she writes mainly for Le Monde and The Good Life.