Paris AI Action Summit 2025: A Turning Point in AI Governance Amidst Global Divides (15.02.2025)

Participants from over 100 countries gathered in Paris on February 10 and 11, 2025, for the AI Action Summit. The event brought together global leaders, tech executives, and policymakers with the aim of establishing a consensus on the future of AI, and it emerged as a pivotal moment in the ongoing debate over artificial intelligence (AI) governance. The summit focused on fostering responsible AI development and on ensuring uniform AI governance through the creation of globally accepted regulations and standards.

However, the event also highlighted stark differences in approaches to AI regulation, particularly between the United States and its allies. China, France, Germany and India were among 61 signatories who agreed it is a priority that “AI is open, inclusive, transparent, ethical, safe, secure and trustworthy, taking into account international frameworks for all” and committed to “making AI sustainable for people and the planet”. The United States and the UK, by contrast, made headlines by refusing to sign this declaration on “inclusive and sustainable AI”.

This is a significant issue because most of the major AI companies are based in the US, while the UK, often ranked third globally in AI, also holds substantial influence in the field. That the first- and third-ranking nations are distancing themselves from international agreements signals a disturbing rupture, a pattern also visible in their domestic AI policies.


What did the summit declaration say?

The countries that attended were asked to sign the Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet, a nonbinding declaration.

The declaration outlined six main priorities:

  • Promoting AI accessibility to reduce digital divides
  • Ensuring AI is open, inclusive, transparent, ethical, safe, secure and trustworthy, taking into account international frameworks for all
  • Making AI innovation thrive by enabling conditions for its development and avoiding market concentration, driving industrial recovery and development
  • Encouraging AI deployment that positively shapes the future of work and labour markets and delivers opportunity for sustainable growth
  • Making AI sustainable for people and the planet
  • Reinforcing international cooperation to promote coordination in international governance


India’s Stance on AI Regulation at the AI Action Summit

Declaring that the world is at the “dawn of the AI (Artificial Intelligence) age”, India’s Prime Minister Narendra Modi called for collective efforts to establish a global framework for AI that upholds shared values, addresses risks, builds trust and ensures access for all, especially the Global South. Co-chairing the AI Action Summit with French President Emmanuel Macron in Paris, PM Modi underlined the need for open-source systems that enhance trust and transparency, and for building data sets “free from biases”. He also offered to host the next summit in India.

At a briefing later, S. Krishnan, Secretary, Ministry of Electronics and Information Technology (MeitY), confirmed that India would host the next summit: “The time is right for India to host, as the Prime Minister offered, and the offer was accepted that the next AI Summit would be hosted in India later this year.” Krishnan noted that, through PM Modi, India had highlighted the need for collective global efforts to establish governance and standards that uphold shared values, address risks and build trust. Saying that governance is not just about managing risks and rivalries but also about promoting innovation and deploying it for the global good, Modi said: “We must think deeply and discuss openly about innovation and governance.

We must build quality data sets, free from biases. We must democratise technology and create people-centric applications. We must address concerns related to cyber security, disinformation, and deep fakes. And, we must also ensure that technology is rooted in local ecosystems for it to be effective and useful,” he said. Welcoming the decision to set up the ‘AI Foundation’ and the ‘Council for Sustainable AI’, he congratulated France for these initiatives and assured India’s full support.


China’s Stance on AI Regulation at the Paris Summit

AI has become an important driving force for the new round of scientific and technological revolution and industrial transformation, said Zhang Guoqing, China’s Vice Premier and President Xi Jinping’s special representative at the summit. China has always participated in global cooperation and governance on AI with a highly responsible attitude, he underlined. Zhang said China has consistently advocated enhancing the representation and voice of developing countries in global AI governance: ensuring equal rights, equal opportunities and equal rules for the AI development and governance of all countries; carrying out international cooperation and assistance for developing countries; and steadily bridging the intelligence gap and the governance-capacity gap.

In October 2023, President Xi Jinping introduced the Global AI Governance Initiative, which proposed China’s solution and contributed China’s wisdom to AI development and governance, Zhang noted. Facing the opportunities and challenges brought by the development of AI, Zhang called on the international community to jointly advocate the principle of developing AI for good, deepen innovative cooperation, strengthen inclusiveness and shared benefits, and improve global governance.

Amid China’s affirmations of responsible AI development on the global stage, a generative model from the Chinese company DeepSeek tells a different story. DeepSeek, an artificial intelligence company founded in July 2023, recently introduced its R1 model, a large language model (LLM) that has garnered significant attention in the AI community. The R1 model is reported to offer performance comparable to other leading LLMs, such as OpenAI’s GPT series, at a notably lower training cost of approximately $6 million, compared with the roughly $100 million reportedly invested in GPT-4. The introduction of DeepSeek’s R1 has also raised geopolitical concerns, leading several countries to ban its use on government devices. Australia, South Korea, Taiwan, and the United States have cited national security and data privacy issues as the primary reasons for these bans. The apprehension stems from DeepSeek’s compliance with Chinese government censorship policies and its data collection practices, which some fear could lead to the dissemination of biased information and unauthorized data access.


U.S. Stance on AI Regulation at the Paris AI Action Summit

United States Vice President JD Vance made headlines by refusing to sign the declaration on inclusive and sustainable AI, which was endorsed by over 60 countries, including Canada, the European Commission, India, and China. In his first major international appearance, Vance argued that “excessive regulation” could stifle innovation and hinder the growth of the AI industry. He emphasized the Trump administration’s commitment to fostering a pro-growth environment for AI, free from heavy-handed regulation.

Withdrawal of US AI Executive Order

In January 2025, US President Donald Trump revoked a 2023 AI executive order signed by Joe Biden that sought to reduce the risks artificial intelligence poses to consumers, workers and national security. The withdrawal corroborates the position the US took at the Paris summit. Biden’s order required developers of AI systems that pose risks to US national security, the economy, public health or safety to share the results of safety tests with the US government, in line with the Defense Production Act, before releasing them to the public. The order also directed agencies to set standards for that testing and to address related chemical, biological, radiological, nuclear, and cybersecurity risks.


UK’s Stance on the Paris AI Action Summit

A UK government spokesperson said the statement had not gone far enough in addressing global governance of AI and the technology’s impact on national security. “We agreed with much of the leaders’ declaration and continue to work closely with our international partners. This is reflected in our signing of agreements on sustainability and cybersecurity today at the Paris AI Action summit,” the spokesperson said. “However, we felt the declaration didn’t provide enough practical clarity on global governance, nor sufficiently address harder questions around national security and the challenge AI poses to it.”

Asked if Britain had declined to sign because it wanted to follow the US lead, Keir Starmer’s spokesperson said they were “not aware of the US reasons or position” on the declaration. A government source rejected the suggestion that Britain was trying to curry favour with the US. But a Labour MP said: “I think we have little strategic room but to be downstream of the US.” They added that US AI firms could stop engaging with the UK government’s AI Safety Institute, a world-leading research body, if Britain was perceived to be taking an overly restrictive approach to the development of the technology.

Campaign groups criticised the UK’s decision and said it risked damaging its reputation in this area. Andrew Dudfield, the head of AI at Full Fact, said the UK risked “undercutting its hard-won credibility as a world leader for safe, ethical and trustworthy AI innovation” and that there needed to be “bolder government action to protect people from corrosive AI-generated misinformation”.


Confusing Behavior of the EU

The European Union’s approach to AI governance has been somewhat contradictory. While the EU signed the pact at the Paris Summit, promoting human-centric AI, it simultaneously withdrew the proposed AI Liability Directive at home. The European Commission pulled the directive, a proposal aimed at holding AI developers accountable for damages caused by their systems, after facing criticism of excessive regulation and content moderation of AI at the AI Action Summit in Paris. This move has raised questions about whether the EU AI Act alone is adequate to regulate AI in the region. Without the directive, which would have established clear rules on who is liable in cases of AI-related harm, consumers may face challenges in seeking compensation for harm caused by AI systems. The absence of clear liability rules could disproportionately benefit large tech companies.

How the EU AI Act May Fall Short

  • Lack of Specific Liability Provisions: The EU AI Act focuses on regulating AI systems based on risk but does not address liability directly.
  • Enforcement Challenges: Without specific liability rules, enforcing the Act may be difficult, especially in cross-border cases.


Global Divides and Future Implications

The Paris AI Summit showed that while much of the world recognizes the need for responsible AI governance, achieving a globally agreed-upon framework remains a significant challenge. The hope for the future lies in finding common ground, where innovation can thrive alongside necessary regulation. As AI continues to evolve, we must ensure it is developed in ways that benefit all of humanity.