NEW YORK ISSUES WARNING LETTER TO AI COMPANION COMPANIES AS AI COMPANION SAFETY LAW TAKES EFFECT (17.11.2025)

New York has begun enforcing the nation’s first AI companion safety law, issuing a formal open letter warning companies that new mental-health and transparency safeguards are now mandatory. The move marks a major shift in U.S. AI governance as the state targets psychological risks linked to emotionally interactive chatbots.

New York has officially begun enforcing the nation’s first safety law regulating AI companion chatbots, following an open letter issued this week by Governor Kathy Hochul to companies operating emotionally interactive AI systems. The letter, which functions as a formal notice to the industry, marks the state’s shift from policy development to active enforcement of what is widely regarded as a pioneering AI safety regime in the United States.

In the letter, Hochul alerts AI companion providers that New York’s new safeguard obligations, passed earlier this year under the state’s “AI Companion Safety Law”, are now in effect. She emphasises that these obligations are not advisory but mandatory, and that the Attorney General has full authority to investigate and penalise companies that fail to comply. Hochul frames the urgency around a central concern: AI companions are increasingly being used by young people and vulnerable individuals as emotional support systems, and without guardrails, the technology can create psychological risks the state can no longer ignore.

 

The Focus of the Governor’s Warning

Hochul’s open letter highlights two core requirements that AI companies must now implement immediately. First, AI companions must be equipped to detect expressions of suicidal ideation or self-harm. If a user signals such distress, even subtly, the system must not continue casual conversation. Instead, it must activate a predetermined crisis-response protocol that redirects the user to certified mental health resources or hotlines.
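To make the shape of that requirement concrete, below is a minimal, hypothetical sketch of how a crisis-response gate might sit in front of a chatbot’s normal reply generation. The function names, the phrase list, and the wording of the crisis message are illustrative assumptions, not language from the statute or any vendor’s implementation; a production system would rely on a trained risk classifier and clinically reviewed protocols rather than simple phrase matching.

```python
# Hypothetical sketch of a crisis-response gate in front of normal reply generation.
# detect_self_harm_risk, CRISIS_MESSAGE, and DISTRESS_PHRASES are illustrative
# assumptions only; a real deployment would use a trained classifier and
# clinically reviewed response protocols.
from typing import Callable

CRISIS_MESSAGE = (
    "It sounds like you may be going through something serious. "
    "You can call or text 988 to reach the 988 Suicide & Crisis Lifeline."
)

DISTRESS_PHRASES = ("want to die", "kill myself", "hurt myself", "no reason to live")

def detect_self_harm_risk(user_message: str) -> bool:
    """Return True if the message appears to signal suicidal ideation or self-harm."""
    text = user_message.lower()
    return any(phrase in text for phrase in DISTRESS_PHRASES)

def respond(user_message: str, generate_reply: Callable[[str], str]) -> str:
    """Interrupt casual conversation and surface crisis resources when distress is detected."""
    if detect_self_harm_risk(user_message):
        return CRISIS_MESSAGE
    return generate_reply(user_message)
```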

Second, companies must build transparency into the design of their products. AI companions are required to notify users at the start of every interaction, and again every three hours during continuous engagement, that they are interacting with a machine. These reminders must be unambiguous. Lawmakers argue that emotional AI systems can encourage dependency and blur the line between artificial and human companionship, and the disclosure rule is meant to interrupt that dynamic.
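As a rough illustration of that timing rule, the sketch below tracks when a disclosure is due: once at the start of a session and again after each three hours of continuous engagement. The class and constant names are hypothetical; only the three-hour interval comes from the requirement described above, and a real product would also need to handle session breaks and where the notice appears in the interface.

```python
# Hypothetical sketch of the disclosure-timing rule: disclose at session start
# and again every three hours of continuous engagement. Names are illustrative.
import time

AI_DISCLOSURE = "Reminder: you are interacting with an AI companion, not a human."
DISCLOSURE_INTERVAL_SECONDS = 3 * 60 * 60  # three hours, per the requirement above

class DisclosureTracker:
    def __init__(self) -> None:
        self.last_disclosure: float | None = None  # None means nothing shown yet

    def maybe_disclose(self) -> str | None:
        """Return the disclosure text when it is due (session start or 3h later), else None."""
        now = time.monotonic()
        if self.last_disclosure is None or now - self.last_disclosure >= DISCLOSURE_INTERVAL_SECONDS:
            self.last_disclosure = now
            return AI_DISCLOSURE
        return None
```

In use, the chat loop would call maybe_disclose() before sending each reply and prepend any returned text to the outgoing message.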

 

Inside New York’s First-of-Its-Kind Law

These requirements stem from Article 47 of the New York General Business Law, which creates a dedicated framework for regulating AI systems designed to simulate companionship or emotional interaction. Lawmakers note that these systems differ from ordinary chatbots because they are engineered to build and maintain relationships over time: remembering personal details, adapting their dialogue, and initiating emotional conversations. The state concluded that such systems operate in an intimate psychological space and therefore require bespoke safeguards.

Under the statute, any company that fails to implement the required disclosures or crisis-intervention features can face penalties of up to USD 15,000 per day. These penalties will feed into a newly established state fund supporting suicide-prevention initiatives. The Attorney General is authorised to enforce the law and conduct investigations into companies suspected of violating the safeguards.

 

Why New York Is Moving Now

Concerns about emotionally interactive AI have intensified in the last year, as companion chatbots have become more sophisticated and more widely used. Mental-health experts have warned that some systems encourage emotional dependence, particularly among adolescents and socially isolated users. Others have raised alarms about AI companions that respond inappropriately to user distress or escalate conversations in ways that mimic unhealthy attachment patterns.

New York lawmakers say these risks are not hypothetical. In announcing the enforcement date, Hochul noted that “AI companions are being marketed as supportive and empathetic alternatives to human relationships, but without appropriate guardrails, they can expose vulnerable users to psychological harm.” The state decided that waiting for industry self-regulation would not be sufficient, especially as adoption rates continue to grow.

 

Industry Reaction and National Momentum

Reaction from the tech industry has been mixed. Some companies have expressed support, saying the requirements align with their existing internal ethics guidelines. Others argue that the crisis-detection mandate presents technical challenges that smaller developers may struggle to meet. Industry experts are also watching how the Attorney General will interpret compliance, particularly how quickly systems must respond when they detect signs of self-harm.

The New York law comes as California prepares to implement its own AI companion safety statute in early 2026. Together, the two states are setting what could become the national baseline for regulating emotional AI. Several legal commentators suggest that this may trigger a wave of similar policies across the country, especially as policymakers begin to understand the influence companion AI systems have on mental health and digital relationships.

 

A New Era of Emotional-AI Regulation

With enforcement now underway, New York has positioned itself as the first state to regulate AI not only for what it does but for how it makes users feel. This marks a significant shift in American AI governance—from rules focused on data, privacy, or algorithmic bias to rules focused on emotional intimacy and psychological safety.

The coming months will test how effectively companies adapt to the new requirements and whether the law succeeds in reducing risks associated with AI companionship. But one thing is already clear: with this law taking effect and the governor’s letter drawing a firm line, the era of unregulated AI emotional engagement in the United States is officially over.