INDIA INTRODUCES THE AI ETHICS AND ACCOUNTABILITY BILL: A TURNING POINT IN HOW THE COUNTRY GOVERNS AI (18.12.25)

India has introduced its first legislative framework to govern artificial intelligence. The AI Ethics and Accountability Bill, 2025 proposes statutory oversight, penalties of up to ₹5 crore, and criminal liability for AI misuse, marking a decisive shift from voluntary AI principles to enforceable accountability.

For years, India’s artificial intelligence story has been one of ambition without accountability. Policy papers promised “responsible AI”, advisory frameworks spoke of “trust and inclusion”, and ministers repeatedly emphasised innovation-first governance. But as AI systems quietly entered policing, surveillance, hiring, credit scoring, content moderation, and political communication, one question remained unanswered: who is accountable when AI causes harm?

This week, Parliament finally addressed that silence.

The Artificial Intelligence (Ethics and Accountability) Bill, 2025 was officially introduced in the Lok Sabha by BJP Member of Parliament Bharti Pardhi on 17 December 2025, marking India’s first serious legislative attempt to regulate AI misuse through enforceable legal obligations. The Bill proposes penalties of up to ₹5 crore, criminal liability in severe cases, and institutional oversight over how AI systems are deployed across the country.

For a country that has so far resisted binding AI regulation, this is not a routine legislative development. It is a signal.

Why This Bill Matters More Than It Appears

India did not lack awareness of AI risks. It lacked willingness to legislate.

While the European Union passed its AI Act and other jurisdictions moved toward enforceable standards, India chose a softer path: ethical principles, voluntary frameworks, and sectoral advisories. That approach worked when AI was experimental. It stopped working once AI began making decisions about people.

Deepfakes distorted public discourse. Automated systems reproduced bias in hiring and lending. AI-powered surveillance expanded without clear legal guardrails. Yet accountability remained diffuse, spread thinly across the IT Act, data protection law, and general criminal provisions.

The introduction of this Bill reflects a shift in mindset: AI is no longer treated as neutral technology, but as a system capable of legal and constitutional harm.


What Does the Bill Propose?

The Bill was introduced as a Private Member’s Bill in the Lok Sabha. While private member bills rarely become law in their original form, they often play a critical role in shaping national debate and future legislation.

Substantively, the Bill proposes three clear interventions.

  1. A Binding Ethics and Accountability Framework

The Bill seeks to create a statutory framework governing the design, deployment, and use of AI systems, replacing the current reliance on voluntary ethical guidelines. AI developers and deployers would be legally obligated to ensure ethical compliance, transparency, and harm mitigation.

  2. Institutional Oversight Through an Ethics Committee

A central AI Ethics Committee would be established to:

  • frame ethical standards,
  • review high-risk AI systems,
  • investigate complaints of misuse or bias, and
  • recommend enforcement action.

This is a notable move away from India’s usual preference for decentralised or advisory oversight.

  3. Restrictions on Sensitive and High-Risk Uses

The Bill places special emphasis on AI systems used in:

  • surveillance,
  • law enforcement, and
  • decision-making that affects rights, liberties, or access to services.

Such systems would require heightened scrutiny and safeguards, acknowledging the disproportionate harm they can cause when deployed without checks.


The Real Message Lies in the Penalties

What makes this Bill disruptive is not its language on ethics; it is the penalty architecture.

The Bill proposes:

  • fines of up to ₹5 crore for misuse of AI systems,
  • suspension or prohibition of AI deployment, and
  • criminal liability for serious or repeated violations.

This is a sharp departure from India’s historically permissive stance on emerging technologies. By attaching financial and criminal consequences to AI misuse, the Bill sends a clear message: innovation does not excuse harm.

For companies, developers, and even state actors, this introduces something India’s AI ecosystem has lacked: deterrence.


Conclusion

The introduction of the Artificial Intelligence (Ethics and Accountability) Bill, 2025 is not important because it is perfect. It is important because it is deliberate.

For the first time, India’s Parliament has acknowledged in legislative form that artificial intelligence is not a neutral tool, not an abstract innovation, and not something that can be governed indefinitely through intent statements and voluntary codes. It is a system of power, capable of shaping rights, behaviour, opportunity, and harm at scale. And systems of power demand accountability.

By proposing penalties, criminal liability, and ethical oversight, this Bill draws a clear line: AI-led growth cannot come at the cost of unchecked harm. It tells developers, deployers, and the State itself that innovation will no longer operate in a regulatory vacuum. Someone, somewhere, will have to answer for the consequences.

Whether this Bill passes in its present form, evolves into a government-backed framework, or catalyses a more comprehensive AI law, its significance remains unchanged. It marks the moment India stopped treating AI governance as a future problem and started treating it as a present responsibility.