
Microsoft, Copyright Office to Lawmakers: Make Deepfakes Illegal

The first part of a US government report came out days after X owner Elon Musk shared a Kamala Harris deepfake video.

Ian Sherr
Gael Cooper
3 min read

Criminals have already used AI to lure victims, steal identities and create viral deepfake porn.

Tatiana Lavrova/Getty Images

Artificial intelligence seems to be everywhere these days, doing good by helping doctors detect cancer and doing bad by helping fraudsters bilk unsuspecting victims. On Wednesday, one day after Microsoft said the US needs new laws to hold people who abuse AI accountable, the US Copyright Office released the first part of its report on the legal and policy issues related to copyright and artificial intelligence, especially regarding deepfakes.

The government report recommends that Congress enact a new federal law protecting people from the knowing distribution of unauthorized digital replicas, and offers recommendations on how such a law should be crafted.

"We believe there is an urgent need for effective nationwide protection against the harms that can be caused to reputations and livelihoods," said Shira Perlmutter, register of copyrights and director of the US Copyright Office. "We look forward to working with Congress as they consider our recommendations and evaluate future developments."


The government's report will be issued in several parts, with forthcoming parts addressing copyright issues involving AI-generated material, the legal implications of training AI models on copyrighted works, licensing considerations and the allocation of any potential liability.

Microsoft's plea for regulation

In a blog post Tuesday, Microsoft said US lawmakers need to pass a "comprehensive deepfake fraud statute" targeting criminals who use AI technologies to steal from or manipulate everyday Americans.

"AI-generated deepfakes are realistic, easy for nearly anyone to make, and increasingly being used for fraud, abuse, and manipulation -- especially to target kids and seniors," Microsoft President Brad Smith wrote. "The greatest risk is not that the world will do too much to solve these problems. It's that the world will do too little."

Microsoft's plea for regulation comes as AI tools spread across the tech industry, giving criminals ever-easier access to technology that helps them gain victims' confidence. Many of these schemes abuse legitimate tools designed to help people write messages, do research for projects and create websites and images. In the hands of fraudsters, those same tools can create fake forms and believable websites that fool and steal from users.

"The private sector has a responsibility to innovate and implement safeguards that prevent the misuse of AI," Smith wrote. But he said governments need to establish policies that "promote responsible AI development and usage."

Already behind

Though AI chatbot tools from Microsoft, Google, Meta and OpenAI have been made broadly available for free only over the past couple of years, the data about how criminals are abusing them is already staggering. 

Earlier this year, AI-generated pornography of global music star Taylor Swift spread "like wildfire" online, gaining more than 45 million views on X, according to a February report from the National Sexual Violence Resource Center.

"While deepfake software wasn't designed with the explicit intent of creating sexual imagery and video, it has become its most common use today," the organization wrote. Despite widespread acknowledgment of the problem, the group noted, "there is little legal recourse for victims of deepfake pornography."


Meanwhile, a report this summer from the Identity Theft Resource Center found that fraudsters are increasingly using AI to help create fake job listings as a new way to steal people's identities. 

"The rapid improvement in the look, feel and messaging of identity scams is almost certainly the result of the introduction of AI-driven tools," the ITRC wrote in its June trend report.

That's all on top of the rapid spread of AI-manipulated online posts attempting to tear away at our shared understanding of reality. One recent example appeared shortly after the attempted assassination of former President Donald Trump earlier in July. Manipulated photos spread online that appeared to depict Secret Service agents smiling as they rushed Trump to safety. The original photograph shows the agents with neutral expressions.

Even in the past week, X owner Elon Musk shared a video that used a cloned voice of Vice President and Democratic presidential candidate Kamala Harris to denigrate President Joe Biden and refer to Harris as a "diversity hire." X's rules prohibit users from sharing manipulated content, including "media likely to result in widespread confusion on public issues, impact public safety, or cause serious harm." Musk has defended his post as parody.

For his part, Microsoft's Smith said that while many experts have focused on deepfakes used in election interference, "the broad role they play in these other types of crime and abuse needs equal attention."