WASHINGTON (AP) — A child psychiatrist altered a first-day-of-school photo he saw on Facebook to show a group of girls nude. A U.S. Army soldier has been accused of creating images depicting the children of his acquaintances being sexually abused. A software engineer was charged with generating highly realistic sexually explicit images of children.
Law enforcement agencies across the U.S. are cracking down on an alarming spread of child sexual abuse imagery created with artificial intelligence technology, from manipulated photos of real children to graphic depictions of computer-generated children. Justice Department officials say they are aggressively pursuing offenders who misuse AI tools, while states are racing to ensure that people who create “deepfakes” and other harmful images of children can be prosecuted under their laws.
“We must communicate early and often that this is a crime and that, if the evidence supports it, it will be investigated and prosecuted,” Stephen Grocki, head of the Justice Department’s child exploitation and obscenity division, said in an interview with The Associated Press. “If you’re sitting there thinking otherwise, you’re fundamentally wrong, and it’s only a matter of time before someone holds you accountable.”
The Justice Department says existing federal law clearly applies to such content and recently brought what is believed to be the first federal case involving purely AI-generated imagery, meaning the children depicted are virtual rather than real. In a separate case, federal authorities in August arrested a U.S. soldier stationed in Alaska accused of making sexually explicit images of an acquaintance’s children by running their photos through an AI chatbot.
Trying to catch up with technology
The charges come as child advocates are working urgently to curb the misuse of the technology, concerned that a flood of disturbing images could make it harder to rescue real victims. Law enforcement officials worry that investigators will waste time and resources trying to identify and track down exploited children who don’t actually exist.
Meanwhile, lawmakers are passing a flurry of bills to ensure that local prosecutors can bring charges under state law over AI-generated “deepfakes” and other sexually explicit images of children. Governors in more than a dozen states have signed laws this year cracking down on digitally created or altered images of child sexual abuse, according to research by the National Center for Missing and Exploited Children.
“Frankly, as a law enforcement agency, we are catching up to technology that is advancing much faster than we are,” said Ventura County, California, District Attorney Eric Nasarenko.
Nasarenko pushed for the bill signed last month by Gov. Gavin Newsom, which makes clear that AI-generated child sexual abuse material is illegal under California law. Nasarenko said his office could not prosecute eight cases involving AI-generated content between last December and mid-September because California law had required prosecutors to prove that the images depicted real children.
AI-generated child sexual abuse images could be used to groom children, law enforcement officials say. And even if they are not physically abused, children can be seriously affected if their images are altered to appear sexually explicit.
“I felt like a part of me was taken away, even though I wasn’t physically assaulted,” said Kaylin Heyman, a 17-year-old who starred on the Disney Channel show “Just Roll With It” and helped push for the California bill after becoming the victim of “deepfake” images.
Heyman testified last year at the federal trial of a man who digitally superimposed her face and those of other child actors onto bodies performing sex acts. He was sentenced in May to more than 14 years in prison.
Open-source AI models that users can download onto their computers are known to be favored by offenders, who can further train and modify the tools to churn out explicit depictions of children, experts say. Officials say abusers exchange tips in dark web communities on how to manipulate AI tools to create such content.
A report last year by the Stanford Internet Observatory found that a research dataset used as a source by major AI image generators such as Stable Diffusion contained links to sexually explicit images of children, one reason some tools have been able to produce harmful imagery so easily. The dataset was taken down, and the researchers later said they removed more than 2,000 web links to suspected child sexual abuse imagery from it.
Top tech companies, including Google, OpenAI and Stability AI, have agreed to work with Thorn, an organization that fights child sexual abuse, to combat the spread of child sexual abuse images.
But experts say more should have been done from the beginning to prevent abuse before the technology became widely available. And the steps companies are taking now to make it harder to exploit future versions of AI tools “will do little to prevent” offenders from running older versions of models on their computers “undetected,” Justice Department prosecutors wrote in a recent court filing.
“Time wasn’t spent on making the product more secure rather than more efficient, and as we’ve seen, that’s very difficult to do after the fact,” said David Thiel, chief engineer at the Stanford Internet Observatory.
AI images become even more realistic
Last year, the National Center for Missing and Exploited Children’s CyberTipline received approximately 4,700 reports of content involving AI technology, just a fraction of the more than 36 million total reports of suspected child sexual exploitation. By October of this year, the group was fielding about 450 reports each month related to AI-generated content, said Jota Souras, the group’s chief legal officer.
But experts say the images are so realistic that it’s often difficult to tell whether they were generated by AI or not.
“Law enforcement officials could spend hours just determining whether an image actually depicts a real minor or was generated by AI,” said Rikole Kelly, a Ventura County deputy district attorney who helped write the California bill. “In the past there may have been some clear indicators, but with advances in AI technology, that is no longer the case.”
Justice Department officials say they already have tools under federal law to go after the perpetrators of these images.
The U.S. Supreme Court in 2002 struck down a federal ban on virtual child sexual abuse material. But a federal law signed the following year bans the production of visual depictions, including drawings, of children engaged in sexually explicit conduct that are deemed “obscene.” The Justice Department says that law, which has been used in the past to prosecute cartoon images of child sexual abuse, specifically notes there is no requirement “that the minor depicted actually exist.”
In May, the Justice Department charged a Wisconsin software engineer accused of using the AI tool Stable Diffusion to create graphic images of children engaged in sexually explicit acts. He was caught after sending some of the images to a 15-year-old boy through a direct message on Instagram, authorities said. The man’s attorney, who is asking for the charges to be dismissed on First Amendment grounds, declined further comment on the allegations in an email to The Associated Press.
A Stability AI spokesperson said the man is accused of using an early version of the tool released by another company, Runway ML. Since taking over exclusive development of the model, Stability AI says it has “invested in proactive features to prevent the misuse of AI for the creation of harmful content.” A Runway ML spokesperson did not immediately respond to a request for comment from The Associated Press.
The Justice Department has also filed charges under federal child pornography laws in cases involving “deepfakes,” photos of real children digitally altered to make them sexually explicit. In one case, a North Carolina child psychiatrist who used an AI application to digitally “undress” girls posing on the first day of school in a decades-old photo shared on Facebook was convicted on federal charges last year.
“These laws exist. They will be used. We have the will. We have the resources,” Grocki said. “Just because there isn’t an actual child involved doesn’t mean it’s a low priority that we ignore.”