S. 2938: Artificial Intelligence Risk Evaluation Act of 2025
The Artificial Intelligence Risk Evaluation Act of 2025 would establish a federal program to assess and manage the risks posed by advanced artificial intelligence (AI) systems. Below are the key aspects of the bill:
Purpose of the Bill
The bill finds that advancing AI technology offers significant opportunities but also poses serious risks. It calls for a secure testing and evaluation program that produces data-driven options for managing those risks and enables congressional oversight of AI systems in order to protect Americans.
Definitions
The bill defines several key terms:
- Advanced Artificial Intelligence System: An AI system that relies on a very large amount of computing power, specifically more than 10^26 computational operations (a minimal threshold check is sketched after this list).
- Adverse AI Incident: Specific incidents involving AI that could harm national security, civil liberties, or economic competitiveness, such as loss of control, weaponization risks, or threats to critical infrastructure.
- Artificial Superintelligence: An AI that can surpass human cognitive capabilities and potentially operate beyond human oversight.
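To give a sense of the scale of the 10^26-operation cutoff in the first definition above, here is a minimal sketch in Python; the function name and the compute estimates are hypothetical examples chosen for illustration, not figures taken from the bill.

```python
# Illustrative only: compares an estimated operation count against the bill's
# 10^26-operation threshold for an "advanced artificial intelligence system".
# The estimates below are made-up examples, not data from the bill.

ADVANCED_AI_THRESHOLD_OPS = 10**26  # threshold stated in the bill's definition


def is_advanced_ai_system(total_operations: float) -> bool:
    """Return True if the estimated compute exceeds the bill's threshold."""
    return total_operations > ADVANCED_AI_THRESHOLD_OPS


# A hypothetical run of 3.1e25 operations falls below the threshold;
# a hypothetical run of 2e26 operations exceeds it.
print(is_advanced_ai_system(3.1e25))  # False
print(is_advanced_ai_system(2e26))    # True
```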
Program Establishment
The bill directs the Secretary of Energy to establish an Advanced Artificial Intelligence Evaluation Program within 90 days of the bill's enactment. The program will:
- Provide standardized testing and evaluation for advanced AI systems.
- Implement rigorous testing protocols to anticipate and counteract potential vulnerabilities posed by AI systems.
- Offer formal reports to participating entities on the risks and safety measures of their AI systems.
- Develop strategies for addressing potential risks identified during testing.
- Assist Congress in assessing the risks associated with the development of artificial superintelligence.
Obligations for AI Developers
Developers of advanced AI systems are required to:
- Participate in the evaluation program.
- Provide the data and documentation needed to evaluate their AI systems, such as source code, training data, and details of the system's architecture.
Penalties for Noncompliance
If a developer fails to participate in the program or deploys an advanced AI system without complying, the developer incurs a civil penalty of at least $1 million for each day the violation continues.
Long-term Oversight Plan
Within 360 days of enactment, the Secretary must submit a detailed plan to Congress for ongoing federal oversight of advanced AI systems. The plan will include:
- Analysis of testing outcomes and identified risks.
- Recommendations for regulatory oversight based on empirical data.
- Suggested revisions for improved monitoring and governance tailored to evolving AI technology.
Program Duration and Renewal
The program will run for seven years from the date of enactment unless Congress renews it.
Relevant Companies
None found
This is an AI-generated summary of the bill text. There may be mistakes.
Sponsors
2 bill sponsors
Actions
2 actions
| Date | Action |
|---|---|
| Sep. 29, 2025 | Introduced in Senate |
| Sep. 29, 2025 | Read twice and referred to the Committee on Commerce, Science, and Transportation. |
Corporate Lobbying
0 companies lobbying
None found.
* Note that there can be significant delays in lobbying disclosures, and our data may be incomplete.
Potentially Relevant Congressional Stock Trades
No relevant congressional stock trades found.