Beating the bots on social media
Simple but Not Easy
“Nobody has fought harder for full release of the Epstein files and prosecutions of those who abused children more than I did,” Elon Musk posted on Sunday. Quite a claim from Musk, given the other posts from accounts he controls.
Over the course of 11 days in January, Musk’s chatbot Grok generated 23,000 sexualized images of children. Every 41 seconds, someone auto-produced child pornography on Grok. Musk continues to allow Grok to remove clothing from images of nonconsenting people – including minors.
On TikTok, both the Chinese and Russians deploy deepfakes for propaganda. The Iranians and Qataris have been taking careful notes. So, too, has Trump.
In Minnesota, the administration used an AI deepfake to smear an anti-ICE protester. Nekima Levy Armstrong, a civil rights attorney and mother of four, was transported to jail in three layers of shackles. She held her head high and walked with dignity. The administration posted a deepfake photo of her with a doctored, hysterical expression, as well as darkened skin. The White House social media account bragged about arresting a “far-left agitator”. The falsified photo was viewed six million times within hours. “The memes will continue,” an administration spokeswoman said in response. (Lies, she means.)
Photo comparison: NBC. The deepfake on the left was posted by @WhiteHouse; the original on the right was posted by @Sec_Noem.
These lies must not continue. Congress can stop them. Social media has been terrible for our society. Social media swarming with AI bots will be exponentially worse.
After the Sandy Hook school shooting in 2012, Alex Jones of Infowars defamed the grieving families with conspiracy theories to drive clicks. He was sued; the social media corporations that platformed him were held harmless. The 2026 version of Infowars will be using deepfakes to substantiate and spread conspiracy theories. And still, the AI firms and the social media corporations that produce and platform that defamation will be held harmless.
This is because they have no liability.
Liability is the link that connects norms, laws, and technology. Social norms about how to treat one another become codified as torts. Generations of juries, judges, and attorneys have invested the common law with accumulated wisdom about right and wrong – about what counts as a tort. Companies are accountable to that sense of right and wrong; in other words, they can be sued for torts.
Except Big Tech. Congress cut the legal link between norms and technology in 1996 with passage of Section 230, which immunizes platforms against liability. It should be no surprise that norms and technology have since divorced in the digital realm.
My two bipartisan social media bills restore that link for deepfakes and AI chatbots.
The Deepfake Liability Act would impose a duty of care on social media platforms, requiring that AI-generated or nonconsensual pornography, as well as cyberstalking content, be removed within 24 hours. The platforms could be sued for failing in this duty of care. The threat of nuclear verdicts would bend platforms away from bots with a pattern of generating deepfake pornography and towards ‘reality defender’ tools that disclaim AI-generated content.
The Deepfake Liability Act also makes clear that the AI bots themselves are not exempt from liability. AI companies that give medical or legal counsel can be sued for malpractice, like hospitals or law firms. This would promote a ‘race to the top’ between AI companies to address hallucinations, improve disclaimers, and meet consumer-grade standards.
My second bipartisan bill, the Parents over Platforms Act, creates a technical and legal framework to hold apps accountable for providing the age-appropriate experience they market. Parents throughout my district, from all walks of life, agree that children under 16 should not be discoverable or contactable by strangers or bots online. Chatrooms and direct messaging pose the biggest risks to kids’ online safety, making up nearly 90 percent of online sexual advances toward kids. The response from Meta to date? A ‘17-strikes-and-you’re-out’ policy before suspending accounts reported for sex trafficking.
Under Parents over Platforms, Meta, TikTok, and the rest would be under both legal and competitive pressure to match the Pinterest standard: no accounts under 13, no discoverability by strangers or bots under 16.
These two laws are a start. More are necessary, at both the federal and state level. Norms against toxic deepfakes are cohering fast, across party lines. The technology to cure these ills already exists. Laws are what’s missing. Republican leadership in Congress won’t even consider them. Democrats should lead the charge.

You are right, Congressman, but the immediate need is to use the end of the current short-term continuing resolution for funding of DHS as a hammer to force real change. As a beginning:
1. Democracy is under ever more fierce attack, as Trump and his accomplices see the threat of the November elections creeping closer. So, a first priority must be to eliminate—so far as possible—the department’s ability to threaten the voting. To that end, an absolute requirement for an agreement must be a provision specifying that ICE, CBP and other DHS enforcement agencies may not be within, say, 500 yards of a polling place while it is open, except under authority of a warrant issued upon probable cause.
2. It should be made clear as a matter of law that ICE and CBP cannot enter a private space (home or office) on an administrative warrant, but only on one issued by a court.
3. There should be strict requirements for the operation of places where those being held for possible deportation are kept, including heat, light, adequate food and access to counsel.
4. The right of members of Congress or their staffs to visit detention centers without advance notice should be in a separate statute, and not tied to a particular appropriation. (The maladministration has claimed that it could impose conditions, because the provision allowing such inspections was tied to an earlier appropriation that was overtaken by money in the One Big Bad Bill Act.)
5. The One Big Bad Bill Act included an endowment for ICE through fiscal 2029. That should be clawed back. Bernie Sanders proposed that, and it almost passed the Senate. If Republicans thought the claw-back necessary to get any money for DHS, they might agree.
While the bills you describe are worthy, reining in the DHS Gestapo is absolutely vital. Will you commit to doing all you can to that end?
Liability for online content, with a narrow focus as you have discussed, seems like real "work" Congress can accomplish. This brief post is an excellent overview of a topic I know you have spent an incredible amount of time and personal capital to bring to its current form. Keep pushing. We need real progress, not just outrage. Keep sharing these "big ideas," Jake.