Recent events in Alaska have demonstrated that the intersection of AI and policymaking can sometimes lead to unintended and problematic consequences. This blog examines a case in which AI-generated data led to a significant policy mishap, highlighting the inherent risks of relying on generative AI models without proper verification, especially in research and policymaking.
The Incident in Alaska
In a notable turn of events, Alaska’s education officials found themselves in the spotlight for all the wrong reasons when it was revealed that AI-generated citations were used to justify a proposed policy restricting cell phone use in schools. As reported by The Alaska Beacon, the Department of Education and Early Development (DEED) presented a policy draft that contained references to academic studies that simply did not exist.
The root of the problem lay in the use of generative AI by Alaska’s Education Commissioner, Deena Bishop, to draft the policy regulating cell phone use. The AI-generated document included what appeared to be scholarly references. However, these citations were neither verified nor accurate, and the use of AI in drafting the document was not disclosed. As a result, AI-generated content reached the Alaska State Board of Education and Early Development before it could be thoroughly reviewed, potentially influencing board discussions.
The Role of AI Hallucinations
Commissioner Bishop later stated that AI was used only to “create citations” for an initial draft and claimed to have corrected the errors by sending updated citations to board members before the meeting. Despite these claims, AI “hallucinations”—a phenomenon where AI generates plausible-sounding but false information—remained in the final document that was voted on by the board.
The final resolution, published on DEED’s website, directed the department to establish a model policy for cell phone restrictions in schools. Shockingly, the document included six citations, four of which seemed to be from respected scientific journals but were entirely fabricated, with URLs leading to unrelated content. This incident highlights the risks of using AI-generated data without human verification, particularly in the context of policymaking.
Broader Implications and Similar Incidents
The Alaska case is not isolated. AI hallucinations are becoming more common across various professional sectors. For instance:
- Google Bard Error: In February 2023, Google’s Bard chatbot incorrectly claimed that the James Webb Space Telescope captured the first images of a planet outside our solar system. The error contributed to a roughly 7.7% drop in Alphabet’s stock price, wiping out around $100 billion in market value.
- Microsoft’s Bing Chat: During its launch week, Microsoft’s Bing AI misstated financial figures when summarizing earnings reports for Gap and Lululemon, underscoring how unreliable AI can be at delivering accurate information.
- Legal Faux Pas: An attorney faced consequences after using ChatGPT to draft a court motion that cited nonexistent cases. The judge found that the cited cases were fictitious, and the lawyer was fined $5,000. This incident underscores the need for caution when integrating AI into critical applications like law.
Generative AI algorithms are designed to produce content based on statistical patterns rather than factual accuracy, so when left unchecked they can easily generate misleading citations. The growing prevalence of such incidents highlights the importance of human oversight and verification in the use of AI technologies, as illustrated in the sketch below.
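To make that verification step concrete, here is a minimal sketch of one way a reviewer might screen AI-supplied citations before a draft is published. It assumes the citations include DOIs and uses Crossref’s public REST API to check whether a record exists; the sample DOI is a hypothetical placeholder, not one from the Alaska document, and a “found” result only confirms the reference exists, not that it supports the claim it is attached to.

```python
# Sketch: screen AI-generated citations by checking whether each DOI
# resolves to a real record in Crossref's public REST API.
# A match does not validate the claim; a human still has to read the source.
import requests


def doi_exists(doi: str, timeout: float = 10.0) -> bool:
    """Return True if Crossref has a record for the given DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=timeout)
    return resp.status_code == 200


if __name__ == "__main__":
    # Hypothetical placeholder DOIs pulled from an AI-drafted document.
    candidate_dois = [
        "10.1000/example-doi",
    ]
    for doi in candidate_dois:
        status = "found" if doi_exists(doi) else "NOT FOUND - verify manually"
        print(f"{doi}: {status}")
```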
Why Generative AI Models Should Not Be Used for Research
Generative AI models are remarkable tools for creating content, generating ideas, and assisting in various creative processes. However, their use in research, particularly in generating citations and references, poses significant risks. Here are some critical reasons why generative AI models should not be relied upon for research purposes:
- Lack of Accuracy: Generative AI models often produce information that sounds plausible but may not be accurate. These inaccuracies can lead to false conclusions and undermine the integrity of research.
- Fabricated Data: AI models can generate data and references that do not exist. This phenomenon, known as AI hallucination, can result in the inclusion of fictitious sources in research documents, which can be highly misleading.
- Erosion of Trust: Research relies heavily on credibility and trust. The use of AI-generated data without proper verification can erode the trustworthiness of the research and the researchers involved.
- Ethical Concerns: Relying on AI for generating research data raises ethical issues, especially if the AI produces biased or fabricated information. Researchers have a responsibility to ensure the accuracy and integrity of their work.
- Legal Implications: The use of inaccurate or fabricated data in research can have legal repercussions, especially if the findings are used to influence policy or legal decisions.
Conclusion
The combination of artificial intelligence and policymaking has the potential to drive significant advancements. However, as the incident in Alaska demonstrates, it also carries risks that must be carefully managed. Generative AI models, while powerful, are not infallible and can produce misleading or fabricated information if not properly checked. Ensuring that human experts thoroughly verify AI-generated data is essential to maintaining the integrity and trustworthiness of research and policy decisions.
Sources:
- https://alaskabeacon.com/2024/10/28/alaska-education-department-published-false-ai-generated-academic-citations-in-cell-policy-document/
- https://medium.com/@seekmeai/ai-hallucinations-in-policy-how-alaskas-misstep-highlights-risks-of-unverified-ai-data-in-502e83e88248
- https://www.fisherphillips.com/en/news-insights/education-officials-learn-dangers-of-ai.html
- https://www.artificialintelligence-news.com/news/ai-hallucinations-gone-wrong-as-alaska-uses-fake-stats-in-policy/