California Governor Takes Action to Protect Children from AI Deepfake Nudes

SACRAMENTO, Calif. — California Gov. Gavin Newsom signed a pair of proposals Sunday aiming to help shield minors from the increasingly prevalent misuse of artificial intelligence tools to generate harmful sexual imagery of children.

The measures are part of California’s concerted efforts to ramp up regulations around the marquee industry that is increasingly affecting the daily lives of Americans but has had little to no oversight in the United States.

Earlier this month, Newsom also signed off on some of the toughest laws in the country to tackle election deepfakes, though those laws are being challenged in court. California is widely seen as a potential leader in regulating the AI industry in the U.S.

The new laws, which received overwhelming bipartisan support, close a legal loophole around AI-generated imagery of child sexual abuse and make it clear child pornography is illegal even if it’s AI-generated.

Current law does not allow district attorneys to go after people who possess or distribute AI-generated child sexual abuse images if prosecutors cannot prove the materials depict a real person, supporters said. Under the new laws, such an offense would qualify as a felony.

“Child sexual abuse material must be illegal to create, possess, and distribute in California, whether the images are AI generated or of actual children,” Democratic Assemblymember Marc Berman, who authored one of the bills, said in a statement. “AI that is used to create these awful images is trained from thousands of images of real children being abused, revictimizing those children all over again.”

Newsom earlier this month also signed two other bills to strengthen laws on revenge porn, with the goal of protecting more women, teenage girls and others from sexual exploitation and harassment enabled by AI tools. It is now illegal under state law for an adult to create or share AI-generated sexually explicit deepfakes of a person without that person’s consent. Social media platforms are also required to allow users to report such materials for removal.

But some of the laws don’t go far enough, said Los Angeles County District Attorney George Gascón, whose office sponsored some of the proposals. Gascón said new penalties for sharing AI-generated revenge porn should have included those under 18, too. The measure was narrowed by state lawmakers last month to only apply to adults.

“There have to be consequences; you don’t get a free pass because you’re under 18,” Gascón said in a recent interview.

The laws come after San Francisco brought a first-in-the-nation lawsuit against more than a dozen websites offering AI tools that promise to “undress any photo” uploaded to the site within seconds.

The problem of deepfakes isn’t new, but experts say it’s getting worse as the technology to produce them becomes more accessible and easier to use. For the past two years, researchers have been sounding the alarm about the explosion of AI-generated child sexual abuse material depicting real victims or virtual characters.

In March, a school district in Beverly Hills expelled five middle school students for creating and sharing fake nudes of their classmates.

The issue has prompted swift bipartisan action in nearly 30 states to help address the proliferation of AI-generated sexually abusive materials. Some of those states’ laws protect all victims, while others outlaw only materials depicting minors.

Newsom has touted California as an early adopter as well as regulator of AI technology, saying the state could soon deploy generative AI tools to address highway congestion and provide tax guidance, even as his administration considers new rules against AI discrimination in hiring practices.
