As the capabilities of free face swap AI technology have outpaced existing legal frameworks, governments worldwide have been racing to catch up. In 2025, significant strides were made, particularly in the United States, to address the scourge of non-consensual deepfake pornography. A landmark piece of legislation, the "Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act," or TAKE IT DOWN Act, became federal law in the United States in May 2025. This bipartisan legislation is designed to combat the viral spread of non-consensual sexual imagery, explicitly including AI-generated intimate content, commonly known as deepfake pornography.

Key provisions of the TAKE IT DOWN Act include:

* Criminalization: It makes the knowing publication of sexually explicit images, whether real, altered, or entirely AI-generated, without the depicted person's consent a federal felony. Threatening to post such images to extort, coerce, intimidate, or cause mental harm is also a felony.
* Platform responsibility: Critically, the law compels online platforms (public websites, online services, and applications that primarily provide a forum for user-generated content) to establish user-friendly takedown processes. When a victim submits a valid request, the platform must remove the content within 48 hours; failure to comply can trigger enforcement actions by the Federal Trade Commission.
* Penalties: Offenders can face fines and imprisonment: up to two years in prison for content depicting adults, and up to three years for content involving minors.
* Victim remedies: The law gives victims mechanisms to force platforms to remove the content and to pursue legal recourse against perpetrators.

While lauded as a critical milestone, the act has drawn scrutiny from some legal scholars, who note potential gaps, such as how it addresses consent obtained through subtle manipulation rather than overt threats, and who raise concerns about the protection of victims' data during the takedown process.

Beyond federal legislation, many U.S. states have enacted or expanded their own laws targeting deepfakes and non-consensual intimate imagery:

* New York expanded its revenge porn law to cover non-consensual distribution of sexually explicit images, including those created or altered by digitization, and requires proof of intent to harm.
* Virginia broadened its revenge porn law to encompass nude or partially nude images "created by any means whatsoever" and distributed without authorization, particularly with malicious intent to coerce, harass, or intimidate.
* Washington created a new crime of "disclosing fabricated intimate images" created or altered using AI, which requires knowing disclosure, without consent, with intent to cause harm.
* Texas imposes penalties for unlawfully creating or distributing deepfake videos depicting sexual conduct or intimate parts.
* California has enacted targeted statutes, including Assembly Bill 730, which outlaws deceptive deepfakes in political campaigns, and Assembly Bill 602, which holds perpetrators accountable for non-consensual deepfake pornography.
* Utah prohibits the unlawful distribution of "counterfeit intimate images," defined as any visual depiction created, edited, manipulated, or altered to depict the likeness of an identifiable individual.

These state laws vary in their specific definitions, intent requirements, and penalties, but collectively they demonstrate a strong legal pushback against the malicious use of AI face swap technology.
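For platforms, the federal act's 48-hour removal window is not just a legal abstraction; it is an operational deadline that trust-and-safety tooling has to track. The Python sketch below shows one way such a deadline check might look. It is a minimal illustration only: the `TakedownRequest` class, its fields, and the example values are hypothetical assumptions, not drawn from the statute or from any real platform's systems.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical sketch: the TAKE IT DOWN Act requires removal within
# 48 hours of a valid victim request. All names here are illustrative.
REMOVAL_WINDOW = timedelta(hours=48)

@dataclass
class TakedownRequest:
    request_id: str
    content_url: str
    received_at: datetime                 # when the valid request was submitted
    removed_at: datetime | None = None    # when the content actually came down

    @property
    def deadline(self) -> datetime:
        """Latest time the platform can remove the content and stay compliant."""
        return self.received_at + REMOVAL_WINDOW

    def is_overdue(self, now: datetime | None = None) -> bool:
        """True if the content is still up past the 48-hour window."""
        now = now or datetime.now(timezone.utc)
        return self.removed_at is None and now > self.deadline

# Usage: flag an open request that has blown the statutory window.
req = TakedownRequest(
    request_id="TR-1042",
    content_url="https://example.com/post/123",
    received_at=datetime(2025, 6, 1, 9, 0, tzinfo=timezone.utc),
)
if req.is_overdue(datetime(2025, 6, 3, 10, 0, tzinfo=timezone.utc)):
    print(f"{req.request_id} is past its {REMOVAL_WINDOW} removal deadline")
```

In practice a compliance queue like this would also need to log each step for the FTC enforcement scenario the law contemplates, but that bookkeeping is beyond a simple deadline check.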
The legal landscape is also evolving globally. The UK's Online Safety Act 2023 criminalizes the sharing of fake sexually explicit images, including deepfakes, where the act causes distress and the sender intended to cause that distress or was reckless as to whether it would be caused. Australia is developing more concrete legislation focused on online safety and harm minimization, although existing laws on defamation, harassment, and data misuse already cover some deepfake cases. Despite these legislative efforts, legal challenges persist. Identifying the perpetrator can be difficult, and proving intent, especially where the image is entirely fabricated rather than an altered real image, can be complex. Moreover, reconciling freedom of expression with the need for harm prevention remains a contentious debate.