On June 22, 2025, Texas Governor Greg Abbott signed into law the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), enacted as House Bill 149, marking a pivotal moment in state-level regulation of artificial intelligence (AI). Taking effect on January 1, 2026, the legislation positions Texas as the fourth state, after California, Colorado, and Utah, to enact comprehensive AI governance rules. As organizations prepare for implementation, TRAIGA compliance has become a critical focus for businesses developing or deploying AI systems in Texas.
TRAIGA represents a significant evolution in regulatory approaches to AI, adopting a more targeted, innovation-friendly framework than earlier drafts, which more closely resembled the European Union's AI Act.
Passed in the wake of Texas SB140, which amended the Texas Telemarketing Act, TRAIGA emerges at a critical juncture when AI technologies are increasingly integrated into telecommunications and marketing operations. With 78% of U.S. businesses now utilizing AI in some capacity, TRAIGA's regulatory framework has far-reaching implications for companies operating in Texas or serving Texas residents.
Broad Jurisdictional Reach
TRAIGA applies to any person or entity that "promotes, advertises, or conducts business" in Texas, offers products or services to Texas residents, or "develops or deploys" an AI system in the state. This expansive jurisdictional approach ensures that the Act covers not only Texas-based companies but also out-of-state and international organizations whose AI systems are accessible to Texas users.
The legislation defines an "artificial intelligence system" as "any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments."
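The statutory definition has several distinct elements, and each must be present for a system to fall within scope. As an illustrative (and hypothetical) scoping aid, not legal advice, the elements can be expressed as a checklist; the `SystemProfile` fields and helper below are assumptions introduced for this sketch, not terms from the Act:

```python
# Illustrative scoping checklist: each field mirrors one element of the
# statutory definition of an "artificial intelligence system".
from dataclasses import dataclass

@dataclass
class SystemProfile:
    machine_based: bool           # a machine-based system?
    infers_from_inputs: bool      # infers from inputs how to generate outputs?
    generates_outputs: bool       # content, decisions, predictions, recommendations?
    influences_environment: bool  # outputs can influence physical/virtual environments?

def may_be_ai_system(p: SystemProfile) -> bool:
    """True only if every element of the definition appears satisfied."""
    return all([p.machine_based, p.infers_from_inputs,
                p.generates_outputs, p.influences_environment])

# A recommendation engine satisfies every element; a static, rules-only
# billing script that performs no inference does not.
print(may_be_ai_system(SystemProfile(True, True, True, True)))   # recommender
print(may_be_ai_system(SystemProfile(True, False, True, True)))  # rules script
```

A checklist like this can seed an AI inventory review, though borderline systems still warrant counsel's judgment.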
Prohibited Uses and Intent Standard
TRAIGA establishes a categorical prohibition framework that applies to both private and governmental entities. The Act prohibits the development or deployment of AI systems for several specific purposes:
Behavioral Manipulation: AI systems cannot be developed or deployed to intentionally encourage physical self-harm, harm to others, or criminal activity. This provision directly impacts marketing companies that might use AI for persuasive advertising, requiring careful consideration of how AI-driven marketing campaigns are designed and implemented.
Constitutional Infringement: The Act prohibits AI systems developed with the sole intent of infringing, restricting, or impairing federal Constitutional rights. For telecommunications companies, this has implications for AI systems used in content moderation, customer service, or network management.
Unlawful Discrimination: AI systems cannot be developed or deployed with the intent to unlawfully discriminate against protected classes under federal or state law.
Harmful Content Creation: The Act prohibits AI systems designed to produce child pornography, unlawful deepfakes, or engage in explicit conversations while impersonating minors.
A critical aspect of TRAIGA is its intent-based standard for liability, which provides important protections for developers whose systems might be misused by third parties while holding bad actors accountable. This approach offers significant legal clarity for telecommunications and marketing companies, as it focuses on the intended purpose of AI systems rather than unintended consequences.
TRAIGA Compliance for Telecommunications and Marketing
AI-Driven Network Operations: Telecommunications companies increasingly rely on AI systems for network optimization, predictive maintenance, traffic management, and quality assurance. TRAIGA's prohibited uses framework directly impacts these applications, particularly in areas where AI systems might affect customer service delivery or network access.
The Act's prohibition on discriminatory AI use has significant implications for companies that use AI in customer service, billing, network prioritization, or service provisioning. Companies must ensure that their AI systems do not intentionally discriminate based on protected characteristics when making decisions about service delivery, pricing, or network access.
Customer Service and Interaction Systems: Companies of all kinds frequently deploy AI-powered chatbots, virtual assistants, and automated customer service systems. Under TRAIGA, these systems must comply with disclosure requirements when interacting with consumers, particularly in healthcare contexts where telecommunications infrastructure supports telemedicine or health monitoring services.
The Act's broad definition of AI systems encompasses many telecommunications applications, from automated billing systems to network optimization algorithms. Companies must evaluate their entire AI portfolio to identify systems that fall under TRAIGA's regulatory scope and ensure compliance with prohibited uses and disclosure requirements.
AI-Driven Advertising and Consumer Engagement: Marketing companies increasingly utilize AI for targeted advertising, customer segmentation, personalization, and campaign optimization. TRAIGA's prohibition on behavioral manipulation has direct implications for AI-driven marketing systems that might influence consumer behavior.
The Act's intent-based standard requires marketing companies to carefully document the intended purposes of their AI systems and ensure that these systems are not designed to encourage harmful behaviors or criminal activity. This creates new compliance obligations for companies developing AI-powered marketing tools.
Disclosure Requirements and Transparency: While TRAIGA's disclosure requirements primarily apply to governmental entities, companies working with government clients must understand these obligations. The Act requires clear and conspicuous disclosure when consumers interact with AI systems, using plain language and avoiding dark patterns.
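One way to operationalize "clear and conspicuous" disclosure is to surface a plain-language notice before the first automated reply in a session. The sketch below is a hypothetical illustration; the wording, function names, and session-state shape are assumptions, not language mandated by the Act:

```python
# Hypothetical control: prepend a plain-language AI disclosure to the first
# message of an automated chat session, so the consumer is told up front.
AI_DISCLOSURE = "You are chatting with an automated AI assistant, not a person."

def respond(session_state: dict, bot_reply: str) -> str:
    """Return the reply, prefixing the disclosure once per session."""
    if not session_state.get("disclosed"):
        session_state["disclosed"] = True
        return f"{AI_DISCLOSURE}\n\n{bot_reply}"
    return bot_reply

session = {}
print(respond(session, "How can I help?"))    # first reply carries the notice
print(respond(session, "Your bill is $42."))  # later replies do not repeat it
```

Keeping the notice in plain language, rather than buried in terms of service, also aligns with the Act's instruction to avoid dark patterns.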
Marketing companies that provide AI services to healthcare organizations must be particularly aware of disclosure requirements, as TRAIGA mandates specific disclosures when AI systems are used in healthcare contexts.
Consumer Protection and Unfair Practices: The Act's broad prohibition framework extends to marketing practices that might manipulate consumer behavior or infringe constitutional rights. Marketing companies must evaluate their AI-driven campaigns to ensure compliance with these prohibitions.
TRAIGA's emphasis on preventing algorithmic discrimination has implications for AI-powered marketing systems that might inadvertently create discriminatory outcomes in advertising delivery or customer targeting. Companies must implement safeguards to prevent such discrimination while maintaining the intent-based liability standard.
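One safeguard companies sometimes borrow for this purpose is the "four-fifths rule" from U.S. employment-selection guidance: flag any group whose selection rate falls below 80% of the highest group's rate. Applying it to ad-delivery rates, as sketched below, is an assumption of this example, a coarse screen rather than anything TRAIGA itself prescribes:

```python
# Illustrative disparity screen: the four-fifths (80%) rule applied to
# per-group ad-delivery rates. Group names and counts are hypothetical.
def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group -> (selected, total); returns rate per group."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def passes_four_fifths(outcomes: dict) -> bool:
    """True if every group's rate is at least 80% of the highest rate."""
    rates = selection_rates(outcomes)
    highest = max(rates.values())
    return all(r / highest >= 0.8 for r in rates.values())

delivery = {"group_a": (450, 1000), "group_b": (300, 1000)}
print(passes_four_fifths(delivery))  # 0.30/0.45 ≈ 0.67 < 0.8 → False
```

A failed screen does not establish unlawful intent under TRAIGA's standard, but documenting such checks supports the proactive-compliance posture the Act's safe harbors reward.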
Data Collection and Processing: Marketing companies often collect and process vast amounts of consumer data for AI training and deployment. TRAIGA's amendments to biometric data collection requirements create new compliance obligations for companies that use AI systems to process images, voice data, or other biometric identifiers.
To achieve TRAIGA compliance, marketing companies must navigate new challenges related to biometric data collection. The Act’s provisions mandate informed consent when collecting biometric data from publicly available sources, which poses significant hurdles for companies that scrape social media or other public platforms to train their AI systems. These requirements demand stricter data handling protocols and greater transparency in how training data is sourced and used.
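A concrete control for this requirement is an ingestion gate that excludes biometric records from training pipelines unless informed consent is on file. The record fields and consent registry below are illustrative assumptions about one possible implementation, not a prescribed mechanism:

```python
# Hypothetical ingestion gate: keep non-biometric records as-is, and keep
# biometric records only when the subject's consent has been recorded.
def filter_training_records(records: list, consent_registry: set) -> list:
    """Drop biometric records lacking recorded informed consent."""
    kept = []
    for r in records:
        if not r.get("biometric"):
            kept.append(r)                       # non-biometric: no gate
        elif r["subject_id"] in consent_registry:
            kept.append(r)                       # biometric with consent
    return kept

records = [
    {"subject_id": "u1", "biometric": True},   # face image, consent on file
    {"subject_id": "u2", "biometric": True},   # scraped voice clip, no consent
    {"subject_id": "u3", "biometric": False},  # non-biometric text
]
print(len(filter_training_records(records, consent_registry={"u1"})))  # 2
```

Logging which records were excluded, and why, also produces the provenance trail that sourcing-transparency expectations imply.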
Enforcement and Safe Harbor
TRAIGA vests exclusive enforcement authority with the Texas Attorney General, creating a centralized enforcement mechanism that avoids the complexity of multiple enforcement bodies. The Attorney General has the authority to investigate complaints, issue civil investigative demands, and bring enforcement actions.
The Act requires the Attorney General to establish an online portal for consumer complaints, creating a mechanism for public reporting of potential violations. This system enables proactive enforcement while providing companies with visibility into consumer concerns.
Civil Penalties and Violation Categories: TRAIGA establishes a graduated penalty system based on the severity and nature of violations:
- Curable Violations: For violations that can be remedied, penalties range from $10,000 to $12,000 per violation. Companies have a 60-day cure period to address violations before penalties are imposed.
- Uncurable Violations: For prohibited activities that cannot be cured, penalties range from $80,000 to $200,000 per violation. These typically involve intentional harmful uses of AI systems.
- Continuing Violations: For ongoing violations, daily penalties range from $2,000 to $40,000 per day.
The Act's per-violation penalty structure has significant implications for telecommunications and marketing companies that deploy AI systems at scale. If an AI tool wrongfully denies service to thousands of customers, the company deploying it could face separate penalties for each erroneous denial.
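The scale risk is easy to quantify. The dollar bands below come from the Act's curable ($10,000 to $12,000) and uncurable ($80,000 to $200,000) categories; the violation counts are hypothetical:

```python
# Back-of-the-envelope exposure under TRAIGA's per-violation penalty bands.
CURABLE = (10_000, 12_000)      # per curable violation
UNCURABLE = (80_000, 200_000)   # per uncurable violation

def exposure(violations: int, band: tuple) -> tuple:
    """Total (low, high) exposure for a violation count in a penalty band."""
    low, high = band
    return violations * low, violations * high

# Hypothetical: an AI tool wrongfully denies service to 5,000 customers,
# each treated as a curable violation left unremedied after the cure period.
low, high = exposure(5_000, CURABLE)
print(f"${low:,} to ${high:,}")  # $50,000,000 to $60,000,000
```

Even at the low end of the curable band, per-violation aggregation turns a single flawed model into eight-figure exposure, which is why the 60-day cure period matters so much in practice.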
To balance this exposure, TRAIGA provides several safe harbor protections for companies that demonstrate proactive compliance efforts:
1. Testing and Discovery: Companies that discover violations through feedback, testing (including adversarial or red-team testing), or compliance with state agency guidelines receive protection.
2. Framework Compliance: Substantial compliance with recognized frameworks like NIST's AI Risk Management Framework provides legal protection.
3. Third-Party Misuse: Developers are protected from liability when end users operate AI systems in prohibited manners.
Federal Preemption Considerations
The future of TRAIGA remains uncertain due to potential federal preemption. Congress is currently considering a 10-year moratorium on state AI regulations, which could limit or nullify TRAIGA's effect. This federal preemption discussion reflects broader tensions between state-level innovation and federal regulatory coordination.
Despite this uncertainty, companies operating in Texas must prepare for TRAIGA's implementation, as the federal moratorium remains pending and its scope and application are unclear.
A Watershed Moment in AI Governance
The Texas Responsible Artificial Intelligence Governance Act represents a watershed moment in AI governance, establishing a comprehensive regulatory framework that balances innovation promotion with consumer protection. For telecommunications and marketing companies, TRAIGA creates both opportunities and obligations that require careful strategic planning and implementation.
The Act's intent-based liability standard, prohibited uses framework, and enforcement mechanisms create a regulatory environment that supports legitimate business applications while preventing harmful uses of AI technology.
Telecommunications and marketing companies must navigate TRAIGA's implications for AI-driven network operations, customer service systems, and security applications. Marketing companies face new requirements around disclosure, discrimination prevention, and consumer protection in AI-driven advertising and engagement systems.
As TRAIGA takes effect on January 1, 2026, companies operating in Texas or serving Texas residents must implement comprehensive compliance programs that address the Act's requirements while positioning themselves for continued innovation in the AI-driven marketplace. The legislation's emphasis on responsible AI development and deployment creates a framework that will likely influence national AI governance approaches and industry best practices for years to come.
