The Senate Judiciary Committee unanimously advanced the “GUARD Act,” a bipartisan bill led by Sen. Josh Hawley that would impose strict age-verification requirements on artificial intelligence platforms operated by companies including Microsoft Corp. ($MSFT), Alphabet Inc. ($GOOGL), and Meta Platforms Inc. ($META). The bill aims to block minors from accessing certain chatbot systems and to restrict harmful content.
- The bill (S.3062) requires “reasonable age verification,” such as a government ID or comparable method, and prohibits relying on self-reported age alone
- AI companies must require user accounts, verify age at signup, and periodically re-verify users
- The legislation bars minors from accessing “AI companions,” defined as chatbots designed for emotional or human-like interaction
- Companies face penalties of up to $100,000 per violation for chatbots that promote self-harm, suicide, or sexually explicit content involving minors
- Chatbots must clearly disclose they are not human and cannot present themselves as licensed professionals
- Enforcement authority would rest with the U.S. Attorney General, with additional provisions for state-level enforcement
- The bill follows lawsuits and growing backlash tied to alleged chatbot-related harm involving minors
- Privacy groups have raised concerns about the ID requirements and their potential impact on online anonymity
Relevant Companies
- Microsoft Corp. ($MSFT) – Operates AI systems like Copilot that could face new compliance requirements
- Alphabet Inc. ($GOOGL) – Develops AI chatbots such as Gemini that may be impacted by verification rules
- Meta Platforms Inc. ($META) – Expanding AI assistants across its platforms, potentially subject to restrictions
Editor’s Note: This is a developing story. This article may be updated as more details become available.