Sanjeev Sanyal, a member of the Economic Advisory Council to the Prime Minister of India, in an interview with ETCIO Deeptalks, expressed concerns about the current AI regulation frameworks adopted by the European Union (EU) and China. In the interview, Mr. Sanyal also suggested a different approach for India to regulate AI, one based on Complex Adaptive System (CAS) theory. His arguments centre on the effectiveness, flexibility, and adaptability of these frameworks in the rapidly evolving landscape of artificial intelligence (AI).
WHY DOES SANJEEV SANYAL CRITICISE THE APPROACH OF THE EU AND CHINA?
Overly Rigid and Bureaucratic Approach by the EU
Sanjeev Sanyal believes the European AI Act’s approach is overly rigid and bureaucratic, making it unsuitable for regulating a rapidly evolving technology like AI. His main criticisms include:
- Static Risk Categorization– The European AI Act categorizes AI systems into predefined risk levels (unacceptable, high, limited, minimal) determined by bureaucrats, which Sanjeev Sanyal finds impractical and counterproductive. He argues that AI technologies are dynamic and constantly evolving, requiring a flexible regulatory approach rather than a fixed categorization that can quickly become outdated. The Act’s rigid framework may over-regulate or under-regulate AI applications, creating barriers that stifle innovation and slow technological growth, ultimately limiting the development of new AI solutions in Europe.
- Compliance Over Innovation– Sanjeev Sanyal critiques the European AI Act for its heavy emphasis on compliance, which he believes may create significant barriers to AI development. The detailed obligations for high-risk AI systems can impose substantial costs, particularly on smaller companies, potentially discouraging them from pursuing innovation. The stringent requirements and regulatory complexity might slow down AI development in Europe, as companies could struggle to navigate these rules, ultimately reducing their capacity to innovate and compete.
- Lack of Adaptability– Sanyal criticizes the Act for its insufficient mechanisms to adapt to unforeseen AI risks. The Act operates on the assumption that AI risks can be anticipated and addressed in advance, which is unrealistic given the inherently unpredictable nature of AI technology. This static approach results in regulatory gaps that fail to account for emerging risks. Additionally, the Act does not emphasize ongoing monitoring or the need to adapt to new AI developments, which further limits its effectiveness in managing the evolving challenges associated with AI.
- Limited Impact of Transparency Measures– Despite the Act’s requirements for transparency and user awareness, Sanjeev Sanyal argues that these measures fall short: merely informing users that they are interacting with AI does not adequately address the potential risks involved. More comprehensive and practical risk-management strategies, beyond basic transparency, are needed to effectively mitigate the challenges posed by AI technologies.
- Narrow Focus on Social Scoring– The Act’s prohibition of AI for social scoring is a notable measure, yet it lacks a broader regulatory framework. Sanjeev Sanyal argues that, while banning social scoring represents progress, it does not sufficiently address the full spectrum of AI risks; a more flexible and comprehensive regulatory model is required to tackle a wider array of AI-related risks.
State-Controlled Model in China
Sanjeev Sanyal criticizes China’s AI regulatory approach, which is heavily state-controlled. The Chinese government aims to exercise complete control over AI and data, which Sanyal argues is prone to significant failures. He references the initial outbreak of COVID-19 in Wuhan as an example of how state-controlled systems can fail when problems are concealed rather than addressed transparently. The Chinese model, according to Sanjeev Sanyal, may allow dangerous elements to “slip through” due to the tendency to suppress or hide problems rather than tackling them openly. This lack of transparency and openness could undermine trust and safety in AI applications.
Suggestion for India
Complex Adaptive System (CAS) Theory
In contrast to the EU and Chinese approaches, Sanjeev Sanyal advocates a regulatory framework for India based on Complex Adaptive System (CAS) theory. This theory emphasizes flexibility and adaptability in complex environments, and his proposed framework is centred on the key principles listed below:
- Safeguards and Boundaries– Establishing mechanisms to prevent harmful AI behaviour, such as “manual overrides” and “authorization choke points,” to maintain human supervision over critical systems.
- Transparency, Accountability, and Explainability– Ensuring openness in core algorithms, continuous monitoring of AI systems, and incident reporting protocols to document AI failures.
- Clear Lines of Accountability– Making developers or managers of AI systems accountable for their creations, ensuring they have “skin in the game.”
- Specialized AI Regulator– Proposing the establishment of a specialized AI regulator in India with a broad mandate to oversee AI-related matters.
- National Algorithm Registry and Repository– Advocating a national algorithm registry and repository to foster AI innovation in India.
Conclusion
Sanjeev Sanyal’s criticism reflects a broader debate over the best approach to regulating AI, contrasting the EU’s bureaucratic, risk-based framework, China’s state-controlled approach, and India’s proposed adaptive, principle-based model. Each framework has different implications for innovation, transparency, accountability, and managing the risks associated with AI development and deployment. Sanjeev Sanyal’s preference for a flexible, adaptive regulatory model underlines the need for a dynamic approach to AI regulation that balances innovation with safety and public trust.
References
https://cio.economictimes.indiatimes.com/news/artificial-intelligence/eus-ai-regulation-system-is-bound-to-fail-sanjeev-sanyal/113214058
https://www.livemint.com/industry/pmeac-member-sanjeev-sanyal-proposes-a-cas-based-framework-to-regulate-ai-11712055604080.html