EU AI Act TAKES EFFECT – COMPANIES UNPREPARED, FACING COMPLIANCE RISKS (02.02.25)

Authored by Ms. Vanshika Jain

As of today, the first five articles of the European Union’s AI Act have come into effect, marking a significant milestone in the regulation of artificial intelligence within the EU. With these provisions now enforceable, organizations that fail to comply risk facing substantial penalties. However, many companies remain alarmingly unprepared, particularly concerning AI literacy and the prohibition of certain AI practices.

Key Provisions Now in Effect

The following articles of the EU AI Act apply starting today:

  1. Subject Matter – Establishes the purpose and overarching objectives of the AI Act.
  2. Scope – Defines the range of AI systems and actors subject to the Act’s regulations.
  3. Definitions – Provides key definitions to clarify legal interpretations.
  4. AI Literacy – Mandates AI literacy measures for AI system providers and deployers.
  5. Prohibited AI Practices – Lists AI systems and applications that are banned within the EU.

While all five articles are critical in shaping the AI regulatory landscape, Articles 4 and 5 warrant special attention due to their direct compliance implications.

AI Literacy: A Major Compliance Gap

Under Article 4, providers and deployers of AI systems must ensure a sufficient level of AI literacy among their staff and other persons operating or using AI systems on their behalf. The article requires that these measures account for employees' technical knowledge, experience, education, and training, as well as the context in which AI is used and the persons or groups it affects.

However, despite this clear mandate, many organizations have not taken action. From discussions with legal and privacy professionals, a common concern emerges: companies covered by the EU AI Act have yet to initiate AI literacy efforts, leaving a critical compliance gap.

This oversight is more than a failure to meet Article 4 requirements. The broader issue is that a lack of AI literacy amplifies compliance risks across the organization. Employees who do not understand how AI functions, or how it can affect fundamental rights, become a significant liability. Without adequate training, staff members may inadvertently misuse AI systems, violate data protection laws, or fail to recognize high-risk AI applications. In the long run, neglecting AI literacy does more than increase exposure to fines under Article 4; it can lead to cascading regulatory violations.

Additionally, a well-implemented AI literacy program can shape a company’s culture toward responsible AI usage. Even employees outside of AI development teams must understand AI-related topics, particularly as AI’s influence extends across business functions, from HR to marketing and customer service.

Prohibited AI Practices: Hidden Risks in Business Partnerships

Article 5 is another high-stakes provision, as it governs prohibited AI practices. These include:

  • AI systems that deploy subliminal or purposefully manipulative techniques that materially distort behavior in ways that cause significant harm.
  • AI systems that exploit vulnerabilities based on age, disability, or a person's social or economic situation.
  • Social scoring systems that lead to unjustified or disproportionate detrimental treatment.
  • Biometric categorization systems that infer sensitive attributes, such as political opinions or sexual orientation.
  • AI systems that predict the risk of criminal behavior based solely on profiling or personality traits.

Organizations must take a proactive approach to ensuring they do not develop, deploy, or indirectly support prohibited AI applications. Many companies mistakenly believe that if they are not directly involved in creating or using such AI systems, they are in the clear. However, the reality is more complex.

Third-Party Risk: A Critical Concern

Even if a company does not itself develop or deploy prohibited AI, its suppliers, partners, or contractors might. If an organization has business relationships with entities engaged in prohibited AI practices, its own compliance risk increases. This means companies must scrutinize their AI supply chain, review contracts, and conduct due diligence on vendors.

Regulators are likely to hold companies accountable for how AI is used within their networks, not just within their direct operations. Businesses must take immediate steps to assess whether their partners comply with the EU AI Act, as failure to do so could result in penalties, reputational damage, and legal liability.
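
To make the due-diligence step concrete, the sketch below shows one way a compliance team might record vendor attestations and flag relationships for follow-up. It is a minimal illustration only: the category labels, the VendorAttestation fields, and the flag_for_review helper are hypothetical assumptions, not terms defined by the Act.

```python
from dataclasses import dataclass, field

# Hypothetical screening labels paraphrasing Article 5 categories;
# these are not official identifiers from the Act.
PROHIBITED_CATEGORIES = {
    "subliminal_manipulation",
    "exploitation_of_vulnerabilities",
    "social_scoring",
    "sensitive_biometric_categorization",
    "predictive_policing",
}

@dataclass
class VendorAttestation:
    """A vendor's self-reported answers from an AI due-diligence questionnaire."""
    vendor: str
    uses_ai: bool
    declared_ai_uses: set = field(default_factory=set)
    attested_no_prohibited_practices: bool = False

def flag_for_review(attestations):
    """Flag vendors whose declared AI uses overlap a prohibited category,
    or who use AI but have not yet attested to avoiding prohibited practices."""
    return [
        a.vendor
        for a in attestations
        if a.uses_ai
        and (a.declared_ai_uses & PROHIBITED_CATEGORIES
             or not a.attested_no_prohibited_practices)
    ]

vendors = [
    VendorAttestation("Acme Analytics", uses_ai=True,
                      declared_ai_uses={"social_scoring"}),
    VendorAttestation("Helpdesk Co", uses_ai=True,
                      attested_no_prohibited_practices=True),
]
print(flag_for_review(vendors))  # ['Acme Analytics']
```

A screen like this supports, but does not replace, contract review and legal judgment; its value lies in making vendor AI usage visible in one place.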

What Companies Should Do Next

With Articles 1-5 now in effect, organizations must take swift action to close compliance gaps. Here are some immediate steps:

  1. Develop and Implement AI Literacy Programs – Companies should provide structured AI literacy training to employees, tailored to different roles and levels of expertise (a role-based sketch follows this list). This will help staff understand AI risks, ethical concerns, and compliance requirements.
  2. Conduct AI Compliance Audits – Businesses should review their AI systems, ensuring none fall under the prohibited practices listed in Article 5.
  3. Assess Third-Party AI Risks – Companies must evaluate their AI-related business relationships, ensuring suppliers and partners adhere to the EU AI Act.
  4. Establish AI Governance Frameworks – Organizations should implement internal policies that align with the Act’s requirements, including mechanisms for oversight, risk management, and ethical AI use.
  5. Engage Legal and Compliance Experts – Seeking legal counsel and compliance specialists can help businesses interpret the Act’s requirements and implement best practices.
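
For the first step, the snippet below sketches one way to tailor literacy training by role, following Article 4's instruction to account for staff members' technical knowledge, experience, and the context in which AI is used. The role names and modules are illustrative assumptions; the Act does not prescribe specific curricula.

```python
# Hypothetical baseline and role-specific AI literacy modules.
# Article 4 leaves curriculum design to each organization; these names are illustrative.
BASELINE_MODULES = [
    "what_ai_is_and_is_not",
    "limits_and_failure_modes",
    "prohibited_practices_overview",
    "escalation_process",
]

ROLE_MODULES = {
    "developer": ["risk_classification", "data_governance", "technical_documentation"],
    "hr": ["automated_decision_risks", "candidate_and_employee_rights"],
    "marketing": ["generative_ai_disclosure", "manipulation_prohibitions"],
    "customer_service": ["chatbot_transparency", "handling_ai_errors"],
}

def training_plan(role: str) -> list:
    """Every employee receives the baseline; role-specific modules are appended."""
    return BASELINE_MODULES + ROLE_MODULES.get(role, [])

print(training_plan("hr"))
# ['what_ai_is_and_is_not', 'limits_and_failure_modes',
#  'prohibited_practices_overview', 'escalation_process',
#  'automated_decision_risks', 'candidate_and_employee_rights']
```

Starting from a structure like this also makes it easier to document training coverage if regulators later ask how literacy obligations were met.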

Final Thoughts

The EU AI Act marks a transformative shift in AI governance, prioritizing ethical considerations and consumer protection. While the enforcement of Articles 1-5 is just the beginning, organizations must recognize that compliance is not a one-time task but an ongoing process. Ignoring AI literacy and the risks of prohibited AI practices will not only expose companies to fines but also erode trust in their AI-driven operations.

As regulators prepare to enforce stricter AI regulations, the message is clear: companies must act now, or they may soon face the legal and financial consequences of non-compliance.