The rapid evolution of AI-generated explicit content, particularly sexually explicit deepfakes, has outpaced legal frameworks globally, creating a complex and often fragmented regulatory landscape. Governments and international bodies are scrambling to catch up, attempting to balance innovation with the urgent need to protect individuals from harm. As of 2025, regulatory approaches to deepfake technology vary significantly worldwide.

* European Union (EU): The EU has adopted comprehensive AI regulation, notably the EU AI Act, whose obligations phase in over time: its bans on prohibited practices applied from February 2025, while most other provisions apply 24 months after entry into force. Under the Act and the Digital Services Act (DSA), providers and moderators of deepfake content face obligations. Generative AI systems, including those that create images and videos, must meet minimum transparency standards, such as disclosing that content was AI-generated and publishing summaries of copyrighted training data. The Act also requires high-impact general-purpose AI models to undergo thorough evaluations, and it obliges those deploying deepfake systems to clearly label the resulting content as artificially generated or manipulated.
* United States (US): The US approach is more fragmented, lacking comprehensive federal legislation specifically addressing deepfakes or AI generally. However, several states have enacted their own laws focusing on specific applications such as explicit content or election security. For instance, California's AB 602 (enacted in 2019) gives individuals a private right of action against those who create or disclose sexually explicit deepfakes of them without consent.
As of May 19, 2025, the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act (TAKE IT DOWN Act) became the first federal statute to criminalize the distribution of nonconsensual intimate images, including AI-generated deepfakes, with penalties of up to two years' imprisonment; it also requires covered platforms to remove such images within 48 hours of a valid request. Prior to this, all 50 states and D.C. had laws targeting nonconsensual intimate imagery, some of which had been updated to cover deepfakes.

* United Kingdom (UK): The UK has amended its criminal law to combat the misuse of deepfake technology, particularly concerning non-consensual intimate images. The Online Safety Act 2023 criminalizes the sharing of non-consensual intimate images, including digitally altered ones. New offences of "creating or requesting the creation of a purported intimate image of an adult" were also proposed and were being expedited into legislation in early 2025.
* China: China has likewise adopted overarching AI regulations. Its deep synthesis and generative AI provisions cover various forms of AI-generated content and mandate that all such content be labeled with a watermark or textual indication; producing deepfakes of a person without their consent is prohibited.
* United Arab Emirates (UAE): The UAE's cybercrime legislation punishes the modification or dissemination of personal information with intent to defame or insult, which can be applied to deepfakes. Its Personal Data Protection Law also prohibits processing personal data without explicit consent.

A critical and urgent area of legal focus is AI-generated Child Sexual Abuse Material (CSAM). International and national law enforcement agencies and watchdogs, including the FBI and the Internet Watch Foundation (IWF), have issued strong warnings that CSAM created with generative AI is illegal, regardless of whether it depicts real children.
Laws such as the UK's Protection of Children Act 1978 (as amended in 1994) and the Coroners and Justice Act 2009, which criminalize the taking, distribution, and possession of "indecent photographs or pseudo-photographs of a child," are being applied to AI-generated material. Some jurisdictions are expanding their laws to explicitly cover AI-generated visual depictions of children. AI-generated CSAM has risen sharply, with the IWF reporting a 380% increase between 2023 and 2024.

Despite these legislative efforts, several challenges remain. The global nature of the internet makes enforcement difficult, and laws struggle to keep pace with rapid technological advancement. There is also ongoing debate about how to regulate harmful deepfakes without infringing on free speech or stifling innovation, especially in cases of satire or political speech. Many legal scholars suggest that existing frameworks for privacy, defamation, and intellectual property can provide a basis for addressing harms caused by evolving deepfake technology, even as new, specific laws are enacted. For platforms, implementing robust content moderation and identity verification systems, along with clear terms of service prohibiting non-consensual content, is a crucial step.
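The platform-side obligations discussed in this section, labeling AI-generated content and honoring removal deadlines for reported nonconsensual imagery, can be sketched in code. The sketch below is purely illustrative: the names (`disclosure_record`, `TakedownRequest`) and the 48-hour constant are assumptions modeled on the transparency and takedown requirements described above, not a real compliance API.

```python
# Illustrative sketch only: hypothetical platform-side helpers for
# AI-content disclosure labels and takedown deadlines. Not legal advice
# and not a real compliance library.
import hashlib
import json
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Assumed removal window, modeled on the TAKE IT DOWN Act's requirement
# that covered platforms remove reported imagery within 48 hours.
TAKEDOWN_WINDOW = timedelta(hours=48)


def disclosure_record(content: bytes, generator: str) -> str:
    """Build a machine-readable 'AI-generated' label for a piece of content,
    in the spirit of the EU AI Act and Chinese labeling mandates."""
    return json.dumps({
        "sha256": hashlib.sha256(content).hexdigest(),  # ties label to content
        "ai_generated": True,
        "generator": generator,
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    })


@dataclass
class TakedownRequest:
    """A reported item of nonconsensual imagery awaiting removal."""
    content_id: str
    received_at: datetime
    valid: bool = True  # e.g. identity of requester verified

    def removal_deadline(self) -> datetime:
        return self.received_at + TAKEDOWN_WINDOW

    def is_overdue(self, now: datetime) -> bool:
        return self.valid and now > self.removal_deadline()


# Usage: a request received June 1 must be actioned by June 3.
req = TakedownRequest("img-123", datetime(2025, 6, 1, 12, 0, tzinfo=timezone.utc))
print(req.removal_deadline())  # 2025-06-03 12:00:00+00:00
```

Keeping the disclosure record as a hash-anchored JSON sidecar (rather than modifying the media itself) is one simple design choice; real deployments would more likely use embedded provenance metadata or watermarking.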