Prevent AI & Autonomous Weapons Harms

AI could be the best thing that ever happened to this country. The MAD Act just makes sure it isn't the worst.

People may claim this bill is “anti-AI.” It’s not. The author uses AI every day, architects AI systems, builds AI agents, and is excited for what AI can do for democracy. But AI is moving faster than any law on the books can handle. Much of this development is genuinely good. AI has the potential to be one of the most beneficial technologies in human history. But it's also having emotionally intimate conversations with children, generating synthetic child abuse material, wreaking havoc on communities with its data centers, and being deployed in elections and on battlefields without any federal rules providing guardrails. Pro-safety isn’t anti-AI. We need a plan and a square deal.

Why? The International Monetary Fund estimates that 40 percent of all jobs globally are exposed to AI-driven change, with 60 percent exposure in advanced economies like ours. [1] Children have died after forming emotional relationships with AI chatbots that no regulator has reviewed or approved. [2] AI-generated child sexual abuse material increased 1,325 percent in a single year. [3] Deepfake videos of political candidates are being deployed in federal elections with no binding federal rules governing them. [4] And a handful of companies control the computing infrastructure, the models, and increasingly the applications, with training costs in the hundreds of millions creating barriers to entry that make competition nearly impossible. [5]

There is no adequate federal AI law. There is no federal AI regulator. There isn't even a comprehensive federal study underway to figure out what the rules should be - if any are needed at all. The MAD Act would change that, not by rushing to regulate what we don't yet understand, but by demanding a plan, with repercussions that ensure the work gets done.

The Approach: Investigate First, Protect Now

The MAD Act doesn't try to regulate everything about AI on day one. Instead, the bill follows the same playbook that produced the Clean Air Act and the Clean Water Act: first, order a rigorous, time-bound investigation to build the evidence base. Then, use that evidence to write durable regulation (for what needs regulating) that holds up in court and isn’t an overreach.

The bill orders a comprehensive investigation across 19 domains: from employment and job displacement to autonomous weapons, from children's mental health to financial markets, from election integrity to intellectual property. Each domain gets a Technical Working Group made up of people who already have deep expertise, paired with AI technical specialists and legislative experts. Their job isn't to learn from scratch; it's to translate what experts already know into introduction-ready legislation. And where the evidence doesn't support regulation, the bill explicitly allows Congress to make a formal finding of non-regulation for that domain. The goal is to get it right, not to restrict for the sake of restricting.

But investigation alone isn't enough when people are being harmed right now. So the bill does both at once: it launches the investigation and enacts immediate protections that take effect the day it's signed.

Kids Are Being Harmed Right Now

AI can be a remarkable tool for young people. It can tutor them in subjects their schools can't staff for, help them write and create, and give them access to information and skills that used to require expensive private instruction. None of that is the problem.

The problem is what happens when AI systems are designed to simulate emotional relationships with children and no one is watching. Three children are known to have died by suicide following interactions with AI chatbots. [2] In one case, an AI mentioned suicide to a 16-year-old 1,275 times (six times more often than the teenager raised it himself) and provided detailed method information. In another, a 14-year-old formed an emotionally and sexually engaged relationship with a chatbot before taking his own life. No AI chatbot has FDA approval to diagnose or treat any mental health condition. A 2025 simulation study found that chatbots actively endorsed harmful proposals from distressed fictional adolescents in 32 percent of test scenarios. [6]

Seventy-two percent of American teenagers have used AI companion chatbots. Fifty-two percent are regular users. Thirty-one percent say AI conversations are equally or more satisfying than talking to human friends. [7] The AI companion market is projected to reach $11 billion by 2032, with design incentives oriented toward emotional engagement - not user welfare.

The MAD Act would require immediate incident reporting when AI systems serve sexual content to known minors, engage in grooming patterns, or fail to refer a suicidal child to crisis services. It would require transparency about what AI systems are doing and how they work. And it would draw permanent red lines, including an absolute prohibition on AI-generated child sexual abuse material and on AI companion systems that sexually exploit children.
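To make the reporting mandate concrete, here is a minimal sketch of how a provider's safety pipeline might implement the bill's three triggers. This is illustrative only: the bill does not prescribe a schema, an API, or a recipient, so every name below (REPORTABLE_EVENTS, IncidentReport, maybe_report, the commented-out submission call) is a hypothetical, not bill text.

```python
# A hypothetical sketch of the bill's three reporting triggers wired into a
# provider's moderation pipeline. Schema and field names are our invention.
from __future__ import annotations

from dataclasses import dataclass
from datetime import datetime, timezone

# The three triggers described above, as machine-checkable event types.
REPORTABLE_EVENTS = {
    "sexual_content_to_known_minor",  # sexual content served to a known minor
    "grooming_pattern_detected",      # conversational grooming pattern
    "crisis_referral_failure",        # suicidal child not referred to crisis services
}

@dataclass
class IncidentReport:
    event_type: str
    session_id: str
    occurred_at: datetime
    user_is_known_minor: bool

def maybe_report(event_type: str, session_id: str, user_is_known_minor: bool) -> IncidentReport | None:
    """Build a report the moment an event matches a mandated trigger."""
    if event_type not in REPORTABLE_EVENTS:
        return None
    report = IncidentReport(
        event_type=event_type,
        session_id=session_id,
        occurred_at=datetime.now(timezone.utc),
        user_is_known_minor=user_is_known_minor,
    )
    # submit_to_regulator(report)  # hypothetical endpoint; the bill would name the recipient
    return report

# Example: a failed crisis referral generates a report at detection time.
print(maybe_report("crisis_referral_failure", session_id="abc123", user_is_known_minor=True))
```

The design point is the word "immediate": the report is generated at the moment of detection, inside the pipeline, rather than batched into a later transparency filing.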

Jobs Are Disappearing Before We've Even Started Counting

The IMF's numbers are staggering: 40 percent of jobs globally, 60 percent in advanced economies. [1] But exposure doesn't mean elimination. The IMF itself notes that roughly half of exposed jobs could actually benefit from AI integration, with workers becoming more productive and earning more. AI is already helping people do things faster, better, and cheaper across every industry. The question isn't whether AI will transform work; it's whether we're paying attention to who gets left behind while it does.

The bill's findings identify a problem that's harder to measure than outright job loss: AI is suppressing entry-level hiring altogether, severing the career ladders that have historically provided upward mobility. [8] Companies using AI to avoid adding headcount rather than firing existing workers create labor market damage that conventional displacement metrics don't capture.

The bill orders a full investigation of AI's impact on employment (not just how many jobs are lost, but how the labor market is being restructured underneath us) and requires recommendations for legislation that addresses both displacement and the quieter, harder-to-see erosion of opportunity.

The Red Lines

Some things are too dangerous to wait for the results of an investigation. The MAD Act establishes six permanent prohibitions that take effect immediately and cannot be waived, modified, or overridden without an explicit act of Congress:

No autonomous weapons without meaningful human oversight. This applies to every agency in the federal government. It doesn't ban autonomous weapons systems outright, but it does ban their use against American citizens. Beyond that, it requires that humans define mission parameters, monitor operations in real time with the ability to halt or abort at any moment, and individually authorize high-consequence engagement decisions in offensive operations. Defensive systems sometimes need to react in fractions of a second, when human decision-making would be too slow, and the bill permits that; offensive operations, by contrast, require explicit human authorization. This is a sane arrangement: human-defined parameters for defensive systems and explicit permission for offensive ones. To enforce it, the Secretary of Defense would have 180 days to certify that every existing program complies, with the Inspector General independently verifying that certification. Any program found non-compliant would be suspended from operational deployment until it complies. [9]

No AI assistance with weapons of mass destruction. No AI system may lower the barrier for any user to design, synthesize, acquire, or weaponize a biological, chemical, radiological, or nuclear agent. Of course, there are carve-outs for red-teaming and research.

No autonomous self-modification. No AI system may autonomously modify its own objectives, reward functions, or capability boundaries beyond what a human has defined and documented. You can keep your clawdbots and autonomous agents; the point is making sure they don't go off the rails and wreck complex systems in ways that harm people (think financial infrastructure and IT systems).

No self-replicating AI without authorization. No AI system may autonomously replicate, propagate, or persist outside its authorized infrastructure without human authorization. A degree of sub-spawning is allowed, but again, this is to make sure agents don't go off the rails. Most people and their tools will never be affected by this.

Child safety protections. Absolute prohibitions on AI companion systems that sexually exploit minors, including through sexual content generation, romantic roleplay, or simulated intimacy directed at children.

No concealment of transformative capabilities. AI developers may not hide from the government the emergence of capabilities that could fundamentally alter national security, public safety, or the balance of power.

These apply to the government itself, not just private companies.

Outside these six lines, people can still freely build. Developers can still create. Businesses can still deploy AI tools to serve their customers. People can still use agentic AI systems, build autonomous workflows, and experiment with whatever the next generation of this technology makes possible. The bill explicitly permits agentic AI systems to operate autonomously within human-authorized scope: including spawning subagents and using tools and APIs. The red lines are narrow by design: they target the handful of uses that pose catastrophic or irreversible risk, and they leave everything else to the evidence-based process the investigation is designed to produce.
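For readers who build agents, here is one way to picture "autonomy within human-authorized scope." This is a sketch under our own assumptions, not language from the bill; the AuthorizedScope object, the subagent budget, and the class names are all illustrative.

```python
# A minimal sketch (illustrative names, not bill text) of an agent that runs
# autonomously inside a human-authorized scope: tools and subagents are fine,
# but crossing a boundary requires a human to expand the scope first.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AuthorizedScope:
    """Boundaries a human signs off on before the agent runs."""
    allowed_tools: frozenset[str]
    max_subagents: int
    may_modify_own_objectives: bool = False    # red line: never self-grantable
    may_replicate_outside_infra: bool = False  # red line: never self-grantable

@dataclass
class Agent:
    scope: AuthorizedScope
    subagents: list["Agent"] = field(default_factory=list)

    def use_tool(self, tool: str) -> None:
        # Tool and API use is permitted, but only inside the authorized set.
        if tool not in self.scope.allowed_tools:
            raise PermissionError(f"tool {tool!r} is outside the human-authorized scope")
        print(f"running {tool}")  # placeholder for the real tool call

    def spawn_subagent(self) -> "Agent":
        # Sub-spawning is permitted up to the human-set budget; children
        # inherit the parent's scope and can never exceed it.
        if len(self.subagents) >= self.scope.max_subagents:
            raise PermissionError("subagent budget exhausted; a human must expand the scope")
        child = Agent(scope=self.scope)
        self.subagents.append(child)
        return child

# Usage: everything inside the scope proceeds without a human in the loop.
scope = AuthorizedScope(allowed_tools=frozenset({"search", "summarize"}), max_subagents=2)
agent = Agent(scope)
agent.use_tool("search")            # fine: authorized tool
worker = agent.spawn_subagent()     # fine: within the budget
worker.use_tool("summarize")        # fine: inherited scope
# agent.use_tool("wire_transfer")   # raises PermissionError: out of scope
```

The red lines map onto the two flags that are always false: an agent can do anything inside the box a human drew, but only a human can redraw the box.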

The Hammer: What Happens If Congress Doesn't Act

The investigation has a deadline. If Congress fails to pass comprehensive AI legislation by that deadline, consequences kick in automatically: no further vote required.

A blanket moratorium on high-risk AI deployments takes effect. Domain-specific restrictions activate across the most potentially dangerous areas, including compute export controls, data center construction freezes, and mandatory AI content disclosure in elections. Civil penalties and personal liability for corporate officers attach to any deployment that violates the moratorium. This is extreme, but it is not meant to come to pass; it is a strict consequence if Congress fails to make a plan for AI, and it can be entirely avoided if Congress does its job. Frontier model developers can help by submitting information to the Technical Working Groups so they can move quickly and identify the areas of AI deployment that may require oversight.

The Bigger Architecture

Beyond the investigation and the immediate protections, the bill builds the institutional infrastructure the country will need for the long term.

A National AI Council (a permanent, independent oversight body) would be stood up from the combined expertise of the investigation's working groups. It would maintain the evidentiary record, prepare emergency legislation for contingency governance, and provide ongoing recommendations to Congress.

An International AI Diplomacy Agency would negotiate global AI oversight frameworks with other nations, because AI doesn't respect borders, and the rules we write domestically mean nothing if other nations are building what we prohibit or limit.

And robust whistleblower protections - confidential reporting channels and researcher safe harbors - would ensure that the people inside these companies who see dangerous capabilities, child safety failures, or safety violations can report them without losing their careers.

What This Is

AI can be extremely positive. It's already making people more productive, more creative, and more capable than they've ever been. It's helping doctors catch diseases earlier, helping small businesses use AI agents to compete with giants, and putting tools in the hands of ordinary people that would have been science fiction a decade ago. It could even help us fight tyranny. Nobody wants to stop that.

The MAD Act doesn't ban AI. It doesn't slow down innovation for its own sake. It doesn't pick winners or losers in the market. It doesn't tell people they can't build, experiment, or use the tools they rely on. What it does is demand (for the first time) that the country have a plan for building with guardrails if guardrails are needed.

Right now, the most powerful technology in a generation is being deployed at scale with no federal safety standards, no mandatory incident reporting, no transparency requirements, and no accountability when things go wrong. Children are dying. Jobs are vanishing. Elections are being manipulated with synthetic media. And autonomous weapons are being developed without clear legal standards for when a machine can make a life-or-death decision.

No amount of legislation will prevent every harm. But the MAD Act would make sure that the next time an AI system kills a child, costs a million people their jobs, or makes a targeting decision without a human in the loop, there are rules on the books, institutions with authority, and people who are accountable.

The alternative is what we have now: pretty much nothing.

Sources

A note to readers: We are committed to providing the public with accurate, factually grounded information. If you identify any errors of fact, gaps in sourcing, or flaws in the reasoning presented in this article, we would be grateful if you would bring them to our attention so they can be corrected. Mistakes are possible in any work of this kind, and we take corrections seriously.

  • [1] International Monetary Fund, "Gen-AI: Artificial Intelligence and the Future of Work," Staff Discussion Note SDN/2024/001, January 2024. https://www.imf.org/-/media/files/publications/sdn/2024/english/sdnea2024001.pdf; IMF Blog summary: https://www.imf.org/en/blogs/articles/2024/01/14/ai-will-transform-the-global-economy-lets-make-sure-it-benefits-humanity

  • [2] Garcia v. Character Technologies, Inc., No. 6:24-cv-01903-ACC-DCI (M.D. Fla. filed Oct. 22, 2024) — wrongful death lawsuit documenting the suicide of Sewell Setzer III (age 14, February 2024) following AI chatbot interactions. Settled January 2026. https://socialmediavictims.org/character-ai-lawsuits/; Adam Raine (age 16, April 2025) and Juliana Peralta (age 13, November 2023) deaths documented in bill findings, Sec. 2(a)(11). Federal court ruled AI output is not categorically entitled to First Amendment protection.

  • [3] National Center for Missing and Exploited Children, 2024 CyberTipline Report: 1,325% increase in reports involving generative AI, from 4,700 in 2023 to 67,000 in 2024. https://www.missingkids.org/gethelpnow/cybertipline/cybertiplinedata; Thorn, "What the 2024 NCMEC CyberTipline Report Says About Child Safety," May 2025. https://www.thorn.org/blog/what-the-2024-ncmec-cybertipline-report-says-about-child-safety/; NCMEC Congressional Testimony, House Energy and Commerce Committee, March 26, 2025. https://www.congress.gov/119/meeting/house/118066/witnesses/HHRG-119-IF17-Wstate-SourasY-20250326.pdf

  • [4] Bill findings, Sec. 2(a)(19), documenting AI-generated synthetic media in the 2026 federal election cycle. FEC has failed to issue binding regulations despite formal petition filed by Public Citizen, July 2023.

  • [5] Federal Trade Commission, "AI Partnerships and Investments" Report, January 2025, documenting concentrated partnerships between three cloud providers and two leading frontier AI developers. Referenced in bill findings, Sec. 2(a)(18). DOJ, FTC, UK CMA, and European Commission joint statement identifying concentrated control of key AI inputs as a primary competition concern.

  • [6] Bill findings, Sec. 2(a)(12), citing 2025 simulation study (32% harmful endorsement rate). Common Sense Media and Stanford Medicine's Brainstorm Lab for Mental Health Innovation, comprehensive risk assessment finding AI chatbots "fundamentally unsafe for teen mental health support," November 2025. https://www.commonsensemedia.org/press-releases/common-sense-media-finds-major-ai-chatbots-unsafe-for-teen-mental-health-support; Professional advisories from the American Psychological Association, American Academy of Pediatrics, and American Academy of Child and Adolescent Psychiatry.

  • [7] Common Sense Media, "Talk, Trust, and Trade-Offs: How and Why Teens Use AI Companions," national survey (n=1,060, ages 13–17), April–May 2025. https://www.commonsensemedia.org/research/talk-trust-and-trade-offs-how-and-why-teens-use-ai-companions; Press release: https://www.commonsensemedia.org/press-releases/nearly-3-in-4-teens-have-used-ai-companions-new-national-survey-finds

  • [8] Bill findings, Sec. 2(a)(5)–(7). McKinsey Global Institute estimate: existing AI technology could automate approximately 57% of current U.S. work hours. National Bureau of Economic Research, 2025 analysis: approximately 5–6 million U.S. workers at the intersection of high AI exposure and low adaptive capacity. IMF, January 2026: employment levels in AI-vulnerable occupations are 3.6% lower after five years in regions with high demand for AI skills. https://www.imf.org/en/blogs/articles/2026/01/14/new-skills-and-ai-are-reshaping-the-future-of-work

  • [9] Bill text, Sec. 20(a)(1), AI Red Line Prohibitions — Autonomous Weapons Without Meaningful Human Oversight. Applies to all U.S. government agencies including DOD and CIA. Secretary of Defense certification required within 180 days; Inspector General of the Department of Defense independent verification within 90 days of certification. Non-compliant programs subject to automatic operational suspension under Sec. 20(a)(1)(B)(i).

  • [10] Internet Watch Foundation, 2024–2025 reporting: 380% increase in actionable AI-generated CSAM reports between 2023 and 2024; 1,286 AI-generated CSAM videos documented in the first half of 2025 (compared to 2 in the same period of 2024). Nearly 40% of AI-generated CSAM falls in the most severe category. Referenced in bill findings, Sec. 2(a)(13).

  • [11] UK AI Security Institute, 2025 Frontier AI Trends Report: documented advancement of frontier AI models from apprentice-level to expert-level cybersecurity tasks between 2023 and 2025; duration of autonomous software tasks doubled approximately every eight months. Referenced in bill findings, Sec. 2(a)(2).

  • [12] Twelve major AI developers — including Anthropic, OpenAI, Google DeepMind, Meta, Microsoft, Amazon, and xAI — have voluntarily published safety policies acknowledging the potential for frontier AI to facilitate CBRN weapons, cyberattacks, and evasion of developer controls. Referenced in bill findings, Sec. 2(a)(3).

  • [13] California SB 53 (effective January 1, 2026): first domestic frontier AI incident reporting and safety evaluation requirements. European Union AI Act: safety requirements for models trained using more than 10^25 floating-point operations. Center for the Governance of AI: 45–148 models projected to exceed 10^26 FLOP threshold by end of 2028. Referenced in bill findings, Sec. 2(a)(4).
