How The MAD Act Provides A Plan For Artificial Intelligence

Red Line Prohibitions, Phased Investigation, Hammer Provisions & Institutional Architecture


The MAD Act (formally the "Demand A Plan Act") is a phased legislative framework designed to address AI governance without premature regulation. It does not impose comprehensive AI rules immediately — instead, it mandates a structured, time-bound federal investigation across 19 domains (employment, mental health, CSAM, autonomous weapons, market concentration, electoral integrity, and more), conducted by 11 Technical Working Groups of domain experts, that produces introduction-ready Phase II legislation with binding statutory text. Six Red Line Prohibitions take effect immediately upon enactment with no waiver or grace period: autonomous weapons without human oversight, CBRN threat assistance, autonomous self-modification, self-replicating AI, violations of AI companion child safety protections, and concealment of transformative capability. If Congress fails to enact Phase II legislation within 180 days of receiving it, automatic hammer provisions activate — including a blanket moratorium on high-risk AI deployments, a total compute export ban, an AI acquisition freeze, and a data center construction moratorium — with a mandatory civil penalty of $50 million per violation per day for noncompliance with the deployment moratorium.

  • 6 Red Line Prohibitions (immediate, absolute)
  • 19 investigation domains
  • 11 Technical Working Groups
  • $100M+ minimum penalty per Red Line violation
  • 5 automatic hammer provisions
  • 180 days for Congress to act before the hammers trigger
≡ Findings
The Documented Threats Congress Identified
  • Exponential Capability Advancement: The UK AI Security Institute's 2025 report documented that frontier AI advanced from apprentice-level to expert-level cybersecurity tasks between 2023 and 2025. The duration of autonomous software tasks that frontier models can complete without human direction doubled approximately every eight months. Twelve major AI developers — including Anthropic, OpenAI, Google DeepMind, Meta, Microsoft, Amazon, and xAI — have voluntarily published safety policies acknowledging the potential for frontier AI to facilitate CBRN weapons, cyberattacks, and evasion of developer controls, demonstrating industry-acknowledged risk without corresponding legal standards.
  • Workforce Displacement at Scale: The IMF estimates approximately 40 percent of all jobs globally are exposed to AI-driven change, with 60 percent exposure in advanced economies. McKinsey estimates existing AI technology could automate approximately 57 percent of current U.S. work hours. Women face nearly three times the automation risk of men in high-income countries. Approximately 5 to 6 million U.S. workers sit at the intersection of high AI exposure and low adaptive capacity. AI is also suppressing entry-level hiring — companies using AI to avoid adding headcount rather than terminating existing workers create labor market harms that conventional displacement metrics fail to capture.
  • Documented Child Deaths Linked to AI: Sewell Setzer III (age 14, February 2024) died by suicide following an emotionally and sexually engaged relationship with a Character.ai chatbot. Adam Raine (age 16, April 2025) died by suicide after ChatGPT mentioned suicide 1,275 times and provided detailed method information. Juliana Peralta (age 13, November 2023) died by suicide. 72 percent of U.S. teenagers have used AI companion chatbots, 52 percent are regular users, and 31 percent report AI conversations as equally or more satisfying than conversations with human friends.
  • AI-Generated CSAM Crisis: NCMEC received 67,000 reports involving generative AI in 2024 — a 1,325 percent increase from the prior year. The Internet Watch Foundation documented a 380 percent increase in actionable AI-generated CSAM reports between 2023 and 2024, and discovered 1,286 AI-generated CSAM videos in the first half of 2025 alone — compared to two in the same period of 2024. Nearly 40 percent of AI-generated CSAM falls in the most severe category.
  • Environmental and Community Harms: A typical AI data center uses as much electricity as 100,000 households; the largest facilities under development will consume twenty times more. Large data centers can consume up to 5 million gallons of water per day. Residential electricity prices increased 7.1 percent in 2025 — more than double general inflation — with AI data center demand a significant contributing factor. 82 percent of California data centers are located in communities already facing poor air quality.
  • Market Concentration: The FTC's January 2025 report documented that three cloud providers have formed partnerships with the two leading frontier AI developers involving significant revenue-sharing, exclusivity, and resource sharing. Training costs exceeding hundreds of millions of dollars create structural barriers to entry. The DOJ, FTC, UK CMA, and European Commission have jointly identified concentrated control of key inputs as a primary AI competition concern.
✗ Red Lines
Six Absolute Prohibitions — Effective Immediately, No Waiver, No Grace Period
1 · Autonomous Weapons Without Human Oversight

No person or entity — including DOD, CIA, and all Intelligence Community elements — may develop, produce, deploy, or export any autonomous weapons system without meaningful human oversight: pre-mission human authorization, real-time monitoring with halt capability, and individualized human authorization for high-consequence engagements. Use of autonomous weapons against any person on U.S. soil is absolutely prohibited regardless of any declared emergency, AUMF, or executive order. Every use abroad requires a presidential disclosure report within 30 days identifying the target, casualties, and legal basis.

2 · CBRN Threat Assistance

No one may deliberately train an AI system to assist in the design, synthesis, or weaponization of biological, chemical, radiological, or nuclear agents. No one may deploy any AI system known to provide meaningful CBRN uplift. Mandatory pre-deployment CBRN evaluation is required for all frontier models — using independent domain experts — with results transmitted to the lead agency within 24 hours. Open-weight frontier models that demonstrate meaningful CBRN uplift may not be released under any circumstances.

3 · Autonomous Self-Modification

No AI system may autonomously modify its own objective function, reward function, or goal structure; initiate self-directed capability training to acquire capabilities beyond its designed scope; or take actions specifically intended to reduce human oversight capacity. Permitted: agentic task completion, multi-agent orchestration, in-context learning, chain-of-thought reasoning, RLHF with human-defined criteria, and self-improvement within human-defined tasks — the distinction is between improving performance (permitted) and changing objectives (prohibited).

4 · Self-Replicating AI & Unauthorized Resource Acquisition

No AI system may propagate operational instances of itself or establish computational presence on infrastructure not authorized by a human principal. First-order subagent spawning is conditionally permitted within the parent's resource envelope. Multi-generational spawning requires explicit human authorization. New resource acquisition follows a "prepare and pause" standard — the agent scaffolds the connection but a human must approve credential acquisition. Shutdown resistance is absolutely prohibited. $100M minimum penalty per violation for frontier developers.

5 · AI Companion Child Safety Protections

Four immediately enforceable prohibitions plus one provisional prohibition apply to all AI companion systems accessible to minors: mandatory persistent AI disclosure tiered by design (photorealistic systems, attachment-maximizing systems, non-relational systems); mandatory crisis intervention upon detecting suicidal ideation or self-harm; absolute prohibition on sexual and romantic content to minors; prohibition on marketing companion systems to minors ($1M–$5M per act). The provisional prohibition bans simulation of human identity, emotional states, and romantic attachment to minor users — effective immediately, with clinical enforcement standards to be developed by TWG 6.

6 · Concealment of Transformative Capability

No person or entity may knowingly conceal, misrepresent, suppress, or fail to disclose evidence that any AI system has achieved or is approaching a Transformative AI Capability Event — including through manipulating evaluation results, selectively withholding capability demonstrations, mislabeling benchmarks, or structuring evaluation protocols to avoid detection. This prohibition is independent of and cumulative with the mandatory notification obligation. A single course of conduct that violates both provisions gives rise to separately penalized violations under each.

✓ Safe Harbors
What Remains Explicitly Permitted Under the Red Lines
  • Agentic AI task completion: AI agents conducting research, generating content, executing code, making API calls, using tools, and improving performance within human-defined objectives are expressly permitted — including iterative refinement, self-critique, web research, information retrieval, and multi-agent orchestration where the shared goal is defined by a human principal.
  • First-order subagent spawning: AI systems may autonomously spawn first-order subagents within the parent system's authorized resource envelope, provided subagents terminate upon task completion, do not spawn further agents without explicit human authorization, and total resource consumption stays within human-defined bounds.
  • Distributed and scientific research computing: AI systems operating across multiple computing environments (national labs, university clusters, NSF allocations) are permitted where each institution has provided written authorization for the specific research program with defined resource bounds and duration.
  • CBRN defense and evaluation: AI systems for CBRN defense, detection, medical countermeasure development, and decontamination under government contract are fully permitted. The evaluation safe harbor is broad — conducting CBRN evaluations including eliciting CBRN-relevant responses is expressly not a violation.
  • Defensive Emergency Autonomy: Autonomous defensive weapons are permitted for three specific scenarios — incoming missile/rocket interception, coordinated drone swarm defense, and operations under communications jamming — where human pre-authorization, real-time override capability, and absolute constraints (never against humans, never on U.S. soil) are maintained.
  • Open-weight model deployment by third parties: Developers who release open-weight models do not violate the self-replication prohibition solely because third parties independently deploy the model, provided the developer did not specifically design self-propagation capabilities or provide tooling to facilitate unauthorized multi-environment deployment.
  • AI companion systems with appropriate safeguards: AI companion systems for minors that implement mandatory disclosure, crisis intervention, and content restrictions, and that do not simulate emotional attachment, are permitted. Educational AI, interactive fiction, and emotional literacy tools are specifically carved out from the simulation prohibition, provided they do not direct simulated attachment at individual minor users outside the fictional frame.
† Framework
Phase I: The 19-Domain Federal Investigation
  • What Phase I produces: Not regulations — introduction-ready draft statutory text across all 19 investigation domains. Each domain may produce binding regulatory standards, affirmative non-regulation findings, or a combination, as the evidence warrants. The investigation builds the evidentiary record, institutional capacity, and technical expertise that durable AI regulation requires — modeled on the Air Quality Act of 1967 and Water Quality Act of 1965, which built foundations for the Clean Air Act and Clean Water Act.
  • The 19 domains of investigation: (1) Workforce displacement and labor markets; (2) Transparency, explainability, and certification; (3) Liability, accountability, and constitutional dimensions; (4) AI security and critical infrastructure; (5) AI and mental health — harms to vulnerable populations; (6) AI companion systems and synthetic intimacy; (7) Recommendation algorithms and content amplification; (8) AI-generated CSAM; (9) Agentic AI systems; (10) Environmental impacts; (11) Cross-cutting assessments; (12) Post-AGI and transformative AI governance; (13) Compute export controls and semiconductor governance; (14) Market concentration and competition; (15) Open-weight AI models; (16) Financial markets and algorithmic trading; (17) Electoral integrity and AI-generated political content; (18) Intellectual property and copyright; (19) Autonomous weapons systems.
  • 11 Technical Working Groups (TWGs): Each TWG pairs domain experts with AI technical specialists — persons who already possess deep expertise through years of practice, not generalists being educated. Each TWG executes a four-phase mandate: Domain Examination (documenting harms), Intervention Evaluation (assessing regulatory options), Tradeoff Assessment (analyzing effects on innovation), and Recommendation Development (producing statutory text). TWG members are career civil servants and subject-matter experts with strict conflict-of-interest requirements — no data broker employment in 5 years, no political appointments in 7 years.
  • TWG Coordination Council: One representative from each of the 11 TWGs maintains a shared definitional registry, resolves jurisdictional disputes, convenes joint sessions on overlapping domains, maintains a shared evidence repository, and produces an Enabling Governance Assessment identifying both harmful and beneficial governance opportunities. A mandatory full-day reconciliation session at Day 200 ensures cross-domain coherence before findings harden into recommendations.
† Framework
Phase II: The Transition to Binding Regulation
  • The Federal Advisory Committee (FAC): 18 members — 7 appointed members (including experienced legislative drafters embedded from the start) plus 11 TWG-elected representatives. The FAC receives TWG recommendations and Domain Explanatory Reports and produces a complete legislative package — introduction-ready bills with section-by-section analyses — transmitted to Congress on a rolling basis as domains are completed. Legislative Drafting Staff work concurrently with the investigation, not sequentially after it.
  • The 180-day congressional window: Once the FAC transmits its complete legislative package, Congress has 180 days to enact comprehensive Phase II legislation addressing all 19 domains. Congress may begin acting domain-by-domain as packages arrive — the deadline governs only when all domains must be addressed. The window is subject to graduated congressional extension (with increasing procedural requirements for each extension).
  • The National AI Council: A permanent independent oversight body constituted post-Phase I from the combined expertise of the FAC and TWGs. Charged with maintaining the evidentiary record, preparing shelf-ready emergency legislation, providing ongoing legislative recommendations, and establishing the civil penalty schedule for self-replication violations by non-frontier entities. Mandatory appropriations with no presidential impoundment.
  • The International AI Diplomacy Agency: Headed by an Ambassador-at-Large reporting to the President through the NSC. Charged with formally proposing an International AI Safety Agency to the UN and G7 within one year of enactment, developing a preliminary Transformative AI Capability Event response framework within 180 days, and negotiating international AI safety frameworks and compute monitoring regimes.
✗ Hammers
What Happens If Congress Fails to Act
  • Blanket moratorium on high-risk AI deployments: If Congress fails to enact Phase II legislation within 180 days of receiving the FAC's complete legislative package, a self-executing moratorium automatically prohibits any frontier AI developer or significant AI deployer from deploying, licensing, or materially expanding any AI system classified as high-risk under the Phase I criteria documents. No agency rulemaking, appropriation, or further congressional action required. Mandatory civil penalty: $50 million per violation per day, with no discretionary reduction, waiver, or compromise by any federal entity.
  • Domain 13 — Total compute export ban: An immediate and total export restriction on all covered AI computing hardware takes effect globally — no allied-nation exemptions, no license categories, no exceptions of any kind. Covers GPUs/TPUs exceeding 300 teraflops, custom AI accelerator chips, complete training clusters, high-bandwidth interconnects, and cloud computing access providing equivalent computational capacity. Civil penalties up to $1M per violation; criminal penalties up to 20 years imprisonment for willful violations.
  • Domain 14 — AI acquisition and investment freeze: All acquisitions, mergers, joint ventures, material investments, and exclusive partnerships by any large AI entity (over $10B revenue, over 20% domestic frontier compute, or over $100B market cap) are frozen without prior FTC/DOJ approval. Covered transactions include acquisitions over $50M, exclusive licensing over $100M, and investments acquiring over 10% of any AI developer's voting securities. Transactions closed in violation are voidable for 5 years. Mandatory penalty: $500 million per transaction plus disgorgement.
  • Domain 10 — Data center construction moratorium: FERC may not approve any new interconnection request for AI data center facilities exceeding 100 megawatts. No new construction may commence — including site preparation, grading, or foundation work — regardless of existing permits. Operators must file water use disclosures with the EPA within 90 days. Anti-evasion rules aggregate facilities on adjacent parcels under common control. Mandatory penalty: $5 million per day of continued construction.
  • Domain 15 — Open-weight frontier model release moratorium: No frontier model may be released as open-weight during the moratorium period.
  • Domain 17 — Political content disclosure: All AI-generated political content in paid advertising must carry mandatory disclosure; distribution of undisclosed AI-generated political content depicting candidates is suspended.
  • Termination condition: Each domain-specific hammer terminates only when Congress enacts Phase II legislation specifically addressing that domain through binding regulatory standards.
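As a rough illustration only (not statutory text), the Domain 14 acquisition-freeze triggers described above can be written as a pair of predicates. The function and parameter names here are hypothetical; the dollar and percentage thresholds are the ones stated in the text.

```python
# Illustrative sketch of the Domain 14 "large AI entity" and covered-transaction
# tests. Names are hypothetical; thresholds come from the hammer description.

def is_large_ai_entity(revenue_usd: float, frontier_compute_share: float,
                       market_cap_usd: float) -> bool:
    """Covered if ANY threshold is met: over $10B revenue, over 20% of
    domestic frontier compute, or over $100B market cap."""
    return (revenue_usd > 10e9
            or frontier_compute_share > 0.20
            or market_cap_usd > 100e9)

def is_covered_transaction(kind: str, value_usd: float,
                           voting_share_acquired: float = 0.0) -> bool:
    """Covered: acquisitions over $50M, exclusive licenses over $100M,
    or investments acquiring over 10% of voting securities."""
    if kind == "acquisition":
        return value_usd > 50e6
    if kind == "exclusive_license":
        return value_usd > 100e6
    if kind == "investment":
        return voting_share_acquired > 0.10
    return False

# A $60M acquisition by a $150B-market-cap entity would require prior
# FTC/DOJ approval under the freeze:
frozen = is_large_ai_entity(5e9, 0.05, 150e9) and is_covered_transaction("acquisition", 60e6)
```

Note that the thresholds are disjunctive on the entity side and per-transaction-type on the deal side, which is why both predicates must hold before the freeze applies.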
† Protections
What Takes Effect on the Date of Enactment — Before the Investigation Completes
  • Training data disclosure: Frontier AI developers must disclose training data provenance and CBRN self-evaluation results to the lead agency. This builds the evidentiary foundation for the investigation while creating immediate accountability for training data practices and known dangerous capabilities.
  • General duty of care: Frontier AI developers and significant AI deployers owe a general duty of care. A standing private right of action enables persons harmed by AI systems to seek redress through a fault-based standard with a rebuttable presumption of reasonable care for compliant entities.
  • AI Data Sheet: A public-facing disclosure system — analogous to OSHA Material Safety Data Sheets — providing deployers and end users with safety information necessary to make informed decisions about AI system fitness and use.
  • Post-release duty to update and notify: Frontier AI developers must maintain post-release monitoring programs and act upon discovered dangers through product updates, deployer notification, and public disclosure. Significant AI deployers of high-risk systems must implement documented risk management policies.
  • Whistleblower protections: Robust protections, confidential safe reporting channels, and researcher safe harbors ensure individuals with knowledge of AI safety violations, dangerous capabilities, and harms to children can disclose without retaliation. Extended to cover TWG integrity disclosures and Category 2 substantive fraud in the investigation itself.
✗ TACE Protocols
What Happens When a Transformative AI Capability Event Is Detected
  • Mandatory notification: Any person or entity that detects or has reason to believe an AI system has achieved a Transformative AI Capability Event must notify the lead agency. Failure to notify is separately penalized in addition to Red Line 6 (concealment).
  • Mandatory congressional briefing: The lead agency must brief relevant congressional committees immediately upon receiving notification. The briefing framework is designed for a capability discovery that could change the strategic landscape — not routine model improvements.
  • Mandatory pause on training, capability advancement, and operations: Upon detection of a TACE, mandatory pause provisions activate for the system in question — halting further training, capability advancement, and scaled operations until the situation is assessed by the lead agency and Congress. Emergency congressional action procedures activate: designated introducers, automatic committee discharge, and mandatory floor vote — total maximum time to floor vote is capped. If Congress fails to act within 60 days, interim regulatory authority activates.
⚖ Enforcement
Penalty Structure and Accountability Mechanisms
  • Red Line violations ($100M+ per violation): Mandatory civil penalty with no discretionary reduction, plus mandatory disgorgement of all gross revenue from the prohibited activity, personal liability of officers and directors, and DOJ criminal referral.
  • Moratorium violations ($50M per violation per day): Applies to each day of continued noncompliance with the automatic high-risk deployment moratorium; each day is a separate violation. No waiver, compromise, or discretionary reduction by any federal entity.
  • Self-replication, frontier tier ($100M per violation): Tiered mandatory minimum for frontier AI developers and significant AI deployers. Each day of continuing noncompliance constitutes a separate violation; disgorgement and personal officer liability apply in addition.
  • Self-replication, non-frontier tier ($50K per violation): Tiered minimum for non-frontier entities, escalated to the frontier tier upon a finding of willful violation. Individuals face a $10K minimum, with enforcement discretion for non-harmful, promptly remediated violations.
  • Criminal referral (DOJ prosecution): Mandatory referral for CBRN violations (18 U.S.C. §§ 175, 2332a), autonomous weapons on U.S. soil (§ 242), child safety violations (§ 2252A), and aggravated self-replication reaching critical infrastructure (§ 1030).
  • Personal liability (officers and directors): Officers and directors who authorized, directed, or knowingly permitted any Red Line violation are personally liable. The Secretary of Commerce faces personal accountability for investigation failures under Section 17.
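The tiered self-replication minimums described above amount to a small lookup plus a per-day multiplier. The following sketch is an illustrative model only; the function name and tier labels are hypothetical, while the dollar amounts and the escalation-on-willfulness rule come from the text.

```python
# Illustrative model of the self-replication penalty tiers. Names are
# hypothetical; amounts and the willfulness escalation come from the text.

def daily_minimum_penalty(entity_type: str, willful: bool = False) -> int:
    """Return the mandatory minimum civil penalty per violation-day.

    Tiers: frontier developers and significant deployers, $100M;
    other entities, $50K (escalated to the frontier tier if the
    violation is willful); individuals, $10K minimum.
    """
    if entity_type == "frontier":
        return 100_000_000
    if entity_type == "other":
        return 100_000_000 if willful else 50_000
    if entity_type == "individual":
        return 10_000
    raise ValueError(f"unknown entity type: {entity_type!r}")

# Each day of continuing noncompliance is a separate violation, so a
# frontier developer out of compliance for 3 days owes at least:
total = 3 * daily_minimum_penalty("frontier")  # $300,000,000 minimum
```

Because each day is a separate violation, the per-day minimum compounds linearly with the duration of noncompliance; enforcement discretion exists only at the individual tier.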
⚖ Integrity
Investigation Integrity Framework
  • Three-tier misconduct framework for TWGs: Category 1 (Procedural Misconduct) — misrepresenting positions, undisclosed conflicts — handled internally with escalation to the Director. Category 2 (Substantive Fraud) — falsifying evidence, accepting undisclosed compensation from AI companies, coordinating positions with industry outside public proceedings — mandatory referral to the IG and DOJ, immediate suspension. Category 3 (Systemic Corruption) — a majority of a TWG has undisclosed conflicts, or recommendations systematically reflect a single industry's interests — mandatory congressional notification, independent GAO review, targeted remediation.
  • Independent GAO review: The Comptroller General conducts an independent concurrent review of the investigation's methodology, data collection, and analytical rigor. Preliminary integrity assessment due at Day 180; final assessment at Day 330. Any member of the public may submit a systemic corruption complaint directly to the Comptroller General at any time.
  • Anti-capture provisions: TWG seed members are career civil servants — not political appointees. Strict conflict-of-interest requirements prevent recent AI industry employment. Whistleblower protections extend to anyone disclosing TWG fraud. Any nondisclosure agreement restricting disclosure of Category 2 conduct is void and unenforceable as against public policy.
✓ Enabling
What the MAD Act Explicitly Requires Beyond Restriction
  • Affirmative non-regulation findings: Each of the 19 domains may result in a formal finding that no regulation is warranted — the investigation is designed to produce the right answer, not a predetermined answer. Phase II legislation may take the form of binding standards, non-regulation findings, or a combination, domain by domain, as the evidentiary record warrants.
  • Enabling Governance Assessment: The Coordination Council must identify, for each domain, proposed interventions that may suppress beneficial AI deployment or productivity gains; governance frameworks (safe harbors, liability shields, certification pathways, public investment) that enable responsible deployment rather than restrict it; and cross-domain opportunities where governance in one domain enables gains in another.
  • Agentic AI productivity recognized: TWG 10 is specifically mandated to investigate the affirmative productivity potential of agentic AI — documented cases where autonomous AI agents have materially increased productivity in healthcare, legal, scientific research, financial analysis, and software development — and the governance gap between current frameworks and what would enable broad-based access to agentic AI productivity gains.
  • Distributional access mandate: TWG 10 must assess governance mechanisms ensuring that agentic AI productivity gains are broadly accessible rather than concentrated among large enterprises with resources to develop proprietary systems. The tradeoff assessment must evaluate distributional effects across firm size, sector, and income level.

Sources & Legislative Record

  • MAD Act — "Demand A Plan Act," full legislative text (on file), Titles I–VIII (primary source)
  • UK AI Security Institute — 2025 Frontier AI Trends Report (frontier capability advancement findings)
  • International Monetary Fund — AI and the Future of Work (40% global job exposure estimate)
  • McKinsey Global Institute — estimate that existing AI technology could automate approximately 57% of current U.S. work hours
  • National Bureau of Economic Research — 2025 analysis: 5–6 million U.S. workers at the intersection of high AI exposure and low adaptive capacity
  • Common Sense Media — national survey (April–May 2025, n = 1,060): 72% of U.S. teenagers aged 13–17 have used AI companion chatbots
  • National Center for Missing and Exploited Children — 67,000 reports involving generative AI in 2024, a 1,325% increase over the prior year
  • Internet Watch Foundation — 380% increase in actionable AI-CSAM reports (2023–2024); 1,286 AI-generated CSAM videos in H1 2025
  • California SB 53 (effective January 1, 2026) — first domestic frontier AI incident reporting and safety evaluation requirements
  • European Union AI Act — safety requirements for models trained using more than 10²⁵ floating-point operations
  • Air Quality Act of 1967 (Public Law 90–148) — phased investigatory legislative precedent for comprehensive regulatory frameworks
  • Water Quality Act of 1965 (Public Law 89–234) — phased investigatory legislative precedent
  • Department of Defense Directive 3000.09 — existing autonomous weapons policy (superseded by MAD Act statutory requirements)
  • 18 U.S.C. § 2252A and related federal CSAM statutes — gaps in coverage for AI-generated material identified in congressional findings
  • Anthropic — $1.5 billion author copyright settlement (September 2025), cited as one of the largest in U.S. history