Legal frameworks worldwide are struggling to keep pace with the rapid evolution of deepfake technology. As of 2025, significant strides have been made, particularly in the United States, to address non-consensual intimate deepfakes. In a landmark move, on May 19, 2025, the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act, widely known as the TAKE IT DOWN Act, was signed into U.S. federal law. This bipartisan legislation criminalizes the intentional online publication of non-consensual intimate visual depictions of an identifiable person, whether authentic or AI-generated. Critically, it also obligates websites and online platforms to remove such content within 48 hours of receiving notice from a victim; failure to comply can trigger enforcement actions by the Federal Trade Commission. This marks a pivotal shift, giving victims a federal avenue for recourse and mandating platform responsibility.

Beyond federal efforts, numerous U.S. states have enacted laws targeting specific deepfake misuse. California, for instance, prohibits deepfakes that interfere with elections or that constitute non-consensual pornography. Texas has criminalized the creation and distribution of deepfake videos intended to harm others, including sexually explicit depictions produced by impersonating a person without their permission. Other states, including Florida, New York, Illinois, and Virginia, have adopted deepfake statutes of their own. While this patchwork of state laws offers some protection, their varying definitions and scopes underscore the ongoing need for more uniform, comprehensive legal standards.

Internationally, the European Union has been proactive in regulating AI and digital media. The Artificial Intelligence Act (AI Act) and the Digital Services Act (DSA) include transparency provisions that cover deepfakes, requiring disclosure when content is AI-generated, and efforts are underway to add more specific rules on AI-driven media manipulation. China has likewise acted: its Personal Information Protection Law (PIPL) requires explicit consent before an individual's image or voice is used in synthetic media, and its deep synthesis regulations mandate that deepfake content be conspicuously labeled (one minimal technical approach to such labeling is sketched at the end of this section).

Despite these legislative advances, challenges remain. Traditional legal theories, such as defamation, copyright infringement, or general privacy law, often prove insufficient against the distinctive harms of deepfakes, owing to perpetrator anonymity, global reach, and the difficulty of proving intent to harm. Cross-border enforcement further complicates the landscape. The legal system is in a constant race against the evolving capabilities of AI, demanding continuous adaptation and collaboration.
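To make the labeling obligation concrete, here is a minimal sketch of how a platform might embed and later check a machine-readable "AI-generated" marker, assuming PNG images, the Pillow library, and a hypothetical metadata key named `ai_generated`. None of these choices is prescribed by the laws discussed above; the statutes mandate labeling but leave the mechanism open, and production systems generally rely on provenance standards such as C2PA rather than ad-hoc metadata keys.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_ai_generated(src_path: str, dst_path: str) -> None:
    """Embed a simple AI-generated marker as a PNG text chunk."""
    with Image.open(src_path) as img:
        meta = PngInfo()
        # Preserve any textual metadata the file already carries.
        for key, value in getattr(img, "text", {}).items():
            meta.add_text(key, value)
        # Hypothetical key; no statute mandates this particular format.
        meta.add_text("ai_generated", "true")
        img.save(dst_path, pnginfo=meta)

def is_labeled(path: str) -> bool:
    """Return True if the AI-generated marker is present."""
    with Image.open(path) as img:
        return getattr(img, "text", {}).get("ai_generated") == "true"

if __name__ == "__main__":
    label_ai_generated("synthetic.png", "synthetic_labeled.png")
    print(is_labeled("synthetic_labeled.png"))  # True
```

A plain text chunk like this survives a save/load round-trip but is easily stripped by re-encoding, which is one reason cryptographically signed provenance manifests are preferred in practice.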