U.S. Rejects International AI Oversight at the U.N., Raising Concerns About Global Governance (28.09.25)

The U.S. has rejected calls at the United Nations for binding international oversight of artificial intelligence, favoring national regulation and voluntary cooperation instead. The move highlights growing fractures in global AI governance as the EU, China, and India push forward with divergent regulatory frameworks. Without a unified approach, the risks of fragmented oversight and unchecked cross-border harms loom large.

At the 80th United Nations General Assembly, artificial intelligence (AI) governance took center stage. Member states debated whether international oversight should be established to manage the growing risks of AI systems. While many nations called for stronger multilateral mechanisms, the United States firmly opposed proposals for binding international authority over AI, signaling a preference for domestic control and voluntary global cooperation.

This decision is significant. It comes at a time when the European Union is moving forward with its comprehensive AI Act, China is strengthening algorithmic regulation, and India is preparing its own AI governance framework. Against this backdrop, the U.S. rejection of supranational oversight reveals the fractures in the international community’s ability to create shared guardrails for a technology that transcends borders.


What Was Proposed at the U.N.?

During the Assembly, several delegations and advocacy groups pushed for the creation of new structures: an international forum to coordinate AI governance, an independent panel of experts to monitor high-risk systems, and even the possibility of binding “red lines” prohibiting extreme uses of AI. Proponents argued that AI’s cross-border impact, from disinformation to autonomous weapons, requires coordinated global safeguards.

The U.S., however, resisted these ideas. Its representatives emphasized that oversight should remain in national hands, warning that a centralized international regulator could stifle innovation and hinder competitiveness. Instead, the U.S. endorsed ongoing dialogue, voluntary cooperation, and non-binding guidelines, approaches it regards as more flexible and better able to keep pace with rapid technological change.

Notably, this position marks a step back from March 2024, when the U.S. co-sponsored a General Assembly resolution affirming principles for safe, trustworthy AI. That resolution, however, was non-binding and carried little enforcement power, highlighting the difference between symbolic support and regulatory commitment.


The Legal and Policy Dimensions

The U.S. rejection reflects long-standing tensions in global governance.

  • National sovereignty remains a central concern. AI is not just a technological matter; it intersects with defense, infrastructure, and economic competitiveness. Relinquishing oversight to an international authority would challenge a state’s control over sensitive domains.
  • Binding treaties versus soft law. Formal treaties can take years to negotiate and ratify, and often struggle to adapt to fast-moving technologies. The U.S. appears to prefer “soft law” — principles, guidelines, and voluntary commitments — that provide flexibility while avoiding legal entanglements.
  • Regulatory fragmentation. Without international alignment, governance risks splintering across jurisdictions. The EU’s AI Act, China’s rules on algorithms, and India’s emerging framework all take different approaches. Divergent regimes may create barriers to trade, compliance conflicts, and opportunities for regulatory arbitrage.
  • Accountability in high-risk domains. The absence of binding global norms leaves unresolved how to manage AI in sensitive areas such as autonomous weapons, healthcare, and election interference. Domestic rules may be insufficient when harms spill across borders.


Why the U.S. Position Matters

As one of the leading developers of frontier AI systems, the U.S. carries disproportionate influence. Its refusal to endorse binding oversight may discourage other states from supporting ambitious multilateral governance.

  • Risk of a race to the bottom. If major AI powers resist regulation, smaller states may feel pressure to relax their own standards to remain competitive.
  • Weak global accountability. Without common enforcement, harmful applications of AI, from deepfake disinformation to military misuse, may go unchecked.
  • Innovation versus safety tension. The U.S. stance underscores a persistent debate: whether rigid rules slow innovation or whether stronger governance is necessary to prevent serious societal harms.


Middle Paths Under Consideration

While a single global AI regulator appears politically unattainable, several alternative models are being discussed:

  1. Modular treaties. Narrow agreements could address specific risks, such as AI in military systems or biosecurity, while leaving broader domains under national regulation.
  2. International evaluation bodies. Independent panels of experts could review advanced AI risks across jurisdictions, offering recommendations without imposing binding authority.
  3. Certification tied to trade. Countries could make participation in AI trade conditional on compliance with minimum governance standards, similar to global aviation and financial norms.
  4. Multi-stakeholder forums. Bringing together governments, companies, academics, and civil society could gradually establish norms that evolve into widely accepted standards.

These models seek to balance national autonomy with the need for coordinated safety mechanisms.


A Parallel Push: The Call for AI “Red Lines”

The debate at the U.N. coincides with growing pressure from the scientific and policy community. More than 200 experts, including Nobel laureates and AI pioneers, have endorsed a “Global Call for AI Red Lines,” urging states to agree on prohibitions against the most dangerous applications by 2026. Suggested red lines include banning AI systems that impersonate humans or that can self-replicate.

But without U.S. participation, such red lines would lack enforceability. If the leading AI powers do not commit, even well-drafted rules risk being symbolic rather than effective.


The Road Ahead

The U.S. rejection illustrates the central dilemma of AI governance: technology is global, but regulation remains rooted in national sovereignty. While international consensus appears elusive, the risks of fragmented oversight are mounting.

In the coming years, the world will likely see a patchwork of governance frameworks. The EU’s AI Act will impose strict compliance requirements, China will continue its state-driven model, and India and other countries will explore hybrid approaches, while the U.S. remains committed to voluntary cooperation, industry self-regulation, and market incentives.

Whether these divergent paths can be bridged into a coherent international order remains uncertain. What is clear is that AI’s potential harms, from undermining democratic processes to creating security vulnerabilities, do not respect borders. Without a shared governance framework, the world risks being unprepared for challenges that demand collective solutions.


Conclusion

The U.S. decision at the U.N. General Assembly marks a turning point in global AI governance. While national sovereignty and innovation incentives drive its rejection of binding oversight, the absence of a unified international framework raises serious questions about accountability, safety, and the ability to prevent cross-border harms.

As AI technologies advance at unprecedented speed, the choice facing the international community is stark: continue along fragmented national paths, or find creative ways to build common guardrails that protect global interests without undermining sovereignty. The path chosen will shape not only the future of AI but also the balance between innovation, safety, and international cooperation in the decades ahead.