In an era where interactions with generative artificial intelligence (AI) are becoming increasingly difficult to distinguish from genuine human communication, states across the U.S. are enacting legislation requiring transparency when businesses deploy generative AI systems to converse with consumers. These disclosure requirements aim to ensure consumers know when they're interacting with machines rather than humans, particularly as AI becomes more sophisticated at mimicking human conversation. This article examines the current landscape of passed and pending state AI disclosure laws, highlighting common themes and implications for businesses.
Enacted State Legislation
The Utah Artificial Intelligence Policy Act: Utah became the first state to enact comprehensive AI-focused consumer protection legislation when Governor Spencer Cox signed the Utah Artificial Intelligence Policy Act (UAIPA) on March 13, 2024, which took effect May 1, 2024.
The UAIPA applies to any business or individual using generative AI to interact with Utah consumers and defines generative AI as "an artificial system that (a) is trained on data; (b) interacts with a person using text, audio or visual communication; and (c) generates non-scripted outputs similar to outputs created by a human, with limited or no human oversight."
The law imposes two key disclosure requirements (a simplified sketch follows the list):
1. Businesses in "regulated occupations" (those requiring licensure or certification) must "prominently" disclose that a consumer is interacting with generative AI at the beginning of any communication.
2. All other businesses subject to Utah consumer protection laws must "clearly and conspicuously" disclose the use of generative AI if directly asked by the consumer.
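As a rough illustration of how a business might operationalize these two requirements, the following Python sketch shows one way to prepend a prominent notice for regulated occupations and surface a disclosure when a consumer asks directly. The names (ChatSession, AI_DISCLOSURE, the keyword list) are assumptions for this example, and the keyword check is a stand-in for real intent detection; this is a simplified sketch, not a compliance template.

```python
# Hypothetical sketch only: class, function, and variable names are assumptions,
# not terms drawn from the UAIPA or any real chatbot SDK.
from dataclasses import dataclass
from typing import Optional

AI_DISCLOSURE = "You are interacting with a generative AI assistant, not a human."

# Naive keyword matching standing in for real intent detection of
# "am I talking to a bot?"-style questions.
IDENTITY_QUESTIONS = ("are you a bot", "are you an ai", "are you human", "real person")

@dataclass
class ChatSession:
    regulated_occupation: bool   # e.g., a licensed or certified profession
    disclosed: bool = False

    def start(self) -> Optional[str]:
        # Prominent disclosure at the beginning of the interaction
        # for businesses in regulated occupations.
        if self.regulated_occupation:
            self.disclosed = True
            return AI_DISCLOSURE
        return None

    def respond(self, user_message: str, model_reply: str) -> str:
        # Clear and conspicuous disclosure if the consumer asks directly.
        asked = any(q in user_message.lower() for q in IDENTITY_QUESTIONS)
        if asked and not self.disclosed:
            self.disclosed = True
            return f"{AI_DISCLOSURE}\n\n{model_reply}"
        return model_reply

# Example: a non-regulated business discloses only when asked.
session = ChatSession(regulated_occupation=False)
print(session.start())                                  # None -- no up-front banner
print(session.respond("Are you a real person?", "Happy to help with your order."))
```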
The California Bot Disclosure Law: Although the UAIPA was technically the first to impose disclosure requirements on private companies using generative AI when interacting with consumers, some credit should be given to California's 2019 Bot Disclosure Law. A groundbreaking pre-AI effort to address automated deception, the Bot Disclosure Law requires companies to disclose when Internet chatbots are used to "knowingly deceive" a person in order to incentivize a commercial transaction or to influence a vote in an election. While the law likely applies to AI chatbots, it is limited to bots deployed on the Internet and does not govern chatbots used in other media.
The California AI Transparency Act: Passed on September 19, 2024, and effective January 1, 2026, the California AI Transparency Act (SB 942) applies to "Covered Providers" creating generative AI systems with over 1 million monthly visitors or users in California. The law requires covered providers to offer consumers a free, publicly accessible AI content detection tool, along with options to include both latent (hidden) and manifest (visible) disclosures in AI-generated content. Violations of SB 942 carry a civil penalty of $5,000 per violation.
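By way of illustration only, the sketch below shows one plausible way to pair a visible label with an embedded, machine-readable provenance record for AI-generated content. The attach_disclosures function and its fields are assumptions made for this example, not the disclosure mechanism SB 942 prescribes.

```python
# Illustrative only: a toy pairing of a visible ("manifest") label with a
# machine-readable ("latent") provenance record for AI-generated text. The
# field names and format are assumptions, not the format SB 942 mandates.
import json
from datetime import datetime, timezone

def attach_disclosures(generated_text: str, provider: str, system_name: str) -> dict:
    manifest = "[AI-generated content]"                 # visible label shown to users
    latent = {                                          # embedded, machine-readable record
        "provider": provider,
        "system": system_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "ai_generated": True,
    }
    return {
        "display_text": f"{manifest} {generated_text}",
        "provenance": json.dumps(latent),               # e.g., carried in metadata or headers
    }

result = attach_disclosures("Here is the summary you requested...", "ExampleCo", "example-gen-1")
print(result["display_text"])
```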
The Colorado AI Act: Colorado's AI consumer protection bill (SB 24-205), known as the Colorado AI Act (CAIA), was signed by Governor Jared Polis on May 17, 2024, and is scheduled to take effect February 1, 2026. While primarily focused on preventing algorithmic discrimination in "consequential decisions" in areas such as healthcare and employment, the law also includes disclosure requirements, including notifying consumers when they are interacting with an AI system.
Pending Legislation
Several states have introduced bills in 2025 that would require notification when consumers interact with AI systems:
Alabama (House Bill 516): Would make it a deceptive practice to engage in commercial transactions through chatbots or AI agents that could mislead consumers into believing they're communicating with humans without providing clear notification.
Hawaii (House Bill 639): Would classify as unfair or deceptive the use of AI chatbots capable of mimicking human behavior without first disclosing this to consumers in a clear and conspicuous manner. Notably, the bill includes exemptions for small businesses that unknowingly utilize AI chatbots.
Illinois (House Bill 3021): Would declare it unlawful to engage in commercial transactions where consumers communicate with AI systems that could be mistaken for humans without clear notification, regardless of whether consumers are actually misled.
Maine (House Paper 1154): Would categorize as an unfair trade practice the use of AI chatbots in commercial transactions that could mislead consumers without proper notification.
Massachusetts (Senate Bill 243): Would designate as unfair and deceptive any commercial transaction where consumers interact with AI that might mislead them into believing they're engaging with humans, unless consumers receive clear notification.
AI Disclosure Laws at the Federal Level
The AI Disclosure Act of 2023 (H.R. 3831) was introduced on June 5, 2023. The bill would require any output generated by artificial intelligence to include a disclaimer stating: "this output has been generated by artificial intelligence." It would grant enforcement authority to the Federal Trade Commission (FTC), treating violations as unfair or deceptive practices. However, the legislation never made it out of committee, and legislative trackers now list H.R. 3831 as dead.
Opposition to State Regulation: A recent development threatens all existing and pending state legislation governing AI. On May 13, 2025, House Republicans introduced a budget reconciliation bill that would prohibit states from enforcing "any law or regulation" concerning automated computing technologies for ten years following the bill's enactment. If passed, this would effectively end existing state-level AI laws and prevent new ones from taking effect.
Critics, including advocacy groups like Americans for Responsible Innovation, warn this could lead to "catastrophic consequences" for the public, while supporters (including major tech companies) argue for federal oversight rather than a fragmented system of state regulations.
AI Disclosure Laws - Implications for Businesses
As states continue to lead the way in regulating AI transparency, businesses must navigate an increasingly complex regulatory landscape. The trend toward requiring disclosure when using generative AI for consumer interactions shows no signs of slowing, despite opposition from some quarters. Companies that proactively implement transparent AI disclosure practices will be better positioned to comply with both existing and future regulations while maintaining consumer trust in an increasingly AI-driven economy.
To meet the challenge presented by the growing patchwork of state AI disclosure laws, businesses should:
1. Develop AI governance programs that include disclosure mechanisms adaptable to various state requirements (a simplified sketch follows this list).
2. Consider notifying consumers when they're interacting with AI, regardless of whether disclosure is legally required.
3. Monitor legislative developments as more states introduce and pass AI disclosure requirements.
4. Prepare for potential federal legislation that could eventually preempt state laws and create uniform national standards.
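As one way to approach the first two recommendations, the sketch below models per-state disclosure behavior as configuration that defaults to the most protective posture. The DisclosureRule structure and the state entries are illustrative assumptions, not legal guidance.

```python
# A simplified, state-adaptable disclosure policy table. The rules and state
# entries are assumptions for illustration, not statutory requirements or legal advice.
from dataclasses import dataclass

@dataclass(frozen=True)
class DisclosureRule:
    disclose_at_start: bool      # proactive notice when the conversation begins
    disclose_on_request: bool    # notice when the consumer asks directly

# Default to the most protective posture; override per jurisdiction as laws change.
DEFAULT_RULE = DisclosureRule(disclose_at_start=True, disclose_on_request=True)

STATE_RULES = {
    "UT": DisclosureRule(disclose_at_start=False, disclose_on_request=True),  # non-regulated occupations
    "CA": DEFAULT_RULE,
}

def rule_for(state_code: str) -> DisclosureRule:
    # Fall back to the most protective default when no state-specific rule exists.
    return STATE_RULES.get(state_code.upper(), DEFAULT_RULE)

print(rule_for("ut"))   # discloses only on request
print(rule_for("tx"))   # falls back to the protective default
```

Centralizing the rules in a single table makes it straightforward to tighten the default or add entries as new state laws pass, which is the practical payoff of treating disclosure as configuration rather than hard-coded behavior.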
