
New Trump Era May Give AI Companies Free Rein

Get up to speed on the rapidly evolving world of AI with our roundup of the week's developments.

Connie Guglielmo, Former Editor in Chief
Digital illustration of a circuit board resembling a street grid at night, seen from above. (Getty Images)

With former President Donald Trump's reelection, Silicon Valley is now speculating on how the new administration will lean into business-friendly policies toward AI (and cryptocurrency companies). After all, a loosening of regulation was among the top reasons some Silicon Valley venture capitalists, investors and executives, who've poured billions into generative AI, decided to endorse the Republican candidate and a party they previously opposed, as was widely reported in July. 


Axios says American voters have decided that "AI will grow up in a permissive, anything-goes household, rather than under the guidance of stricter parents. Trump's reputation may be that of a strongman, and his MAGA brand of conservatism embraces tough talk, but last night's Republican victory makes it a lot more likely that AI will run wild as it develops."

Trump's allies have drawn up plans to "Make America First in AI," in part by rolling back the "burdensome regulations" and safeguards put in place by President Joe Biden, and endorsed by Vice President Kamala Harris, as part of Biden's 2023 AI executive order. And of course, fewer restrictions on AI development may benefit Trump backers, including one of his biggest donors, Elon Musk, and Musk's xAI company.

As for Trump, he's "repeatedly said he plans to dismantle Biden's AI policy framework on 'day one' and has aligned himself with kingmakers who've sharply criticized all but the lightest-touch regulations," notes TechCrunch. "The AI EO accomplished much. In the last year, the Commerce Department established the U.S. AI Safety Institute (AISI), a body to study risks in AI systems, inclusive of systems with defense applications," TechCrunch reports. "Critics allied with Trump argue that the EO's reporting requirements are onerous and effectively force companies to disclose their trade secrets."

Bloomberg News also reports that Trump will "void" Biden's AI executive order, while Fast Company predicts the returning president "will likely push the government away from AI industry oversight. ... Vice President-elect JD Vance, who worked as a VC before going into politics and may lead AI policy in the new Trump administration, has argued that AI regulation would only lead to AI market dominance by a handful of large players, shutting out startups."

Meanwhile, The New York Times writes that while Trump's first term was marked by "chaos and uncertainty" for the tech industry, most executives will "kiss the ring," as Amazon founder Jeff Bezos has already done, and "quietly tolerate — if not enthusiastically support — his second-term agenda" for fear of retaliation by a president and concern they may lose out on government contracts.

I don't know what the new Trump administration may do, but I will remind everyone that even AI boosters, including OpenAI CEO Sam Altman, aren't sure how to balance the boom and doom scenarios presented by powerful, fast-evolving generative AI.

Here are the other doings in AI worth your attention.

What's in your AI? Ziff Davis has some answers for publishers

If the new administration does in fact decide to take a laissez-faire attitude toward AI regulation, we may never know what data and information is being fed as training data into the large language models that power today's most powerful chatbots, including OpenAI's ChatGPT, Google's Gemini, Anthropic's Claude and Meta's Llama. Though California passed a state law that will require AI companies to reveal what's in their training data, Biden's AI executive order didn't have licensing rules requiring that. 

Why do we want to know? Well, because if these LLMs are going to be the repository of all human information and increasingly become the places where we go to find answers to our questions (as they turn into AI-powered search engines and not just autocomplete on steroids), then we should know what data is powering those answers so we can have confidence that the information isn't flat-out wrong or biased.


Also, copyright holders want to know because they say AI companies have been scraping their content off the internet without permission or compensation, which they say means LLM training data is largely made up of pirated content. That's why some publishers are suing AI companies, including OpenAI and Perplexity. The AI companies counter that content is fair game because it's "publicly available" — prompting lawyers to note that copyrighted content is still protected even if it's "publicly available."  

What do we know about what's in all that training data? According to a new study by Ziff Davis reported on by Axios, "leading AI companies such as OpenAI, Google and Meta rely more on content from premium publishers to train their large language models (LLMs) than they publicly admit." 

OpenAI, Google, Meta and Anthropic didn't immediately respond to CNET's requests for comment on the research, which you can find here. (Full disclosure: CNET is owned by Ziff Davis.)

If you're interested in why training data and copyright issues are things you should pay attention to — say, your kids want to be writers or artists or filmmakers or researchers — TechCrunch also has an analysis that's worth a read. It's about an argument put forth by Microsoft and venture firm A16z that copyright law is standing in the way of AI startups. Note: Those for-profit startups — as in they want to make money — don't want to pay for copyrighted content, which they think should be "free and accessible."

Meta allows its AI engine to be used for US military purposes

Besides researchers and entrepreneurs, Meta said last week that its open-source Llama AI engines are also being made "available to U.S. government agencies, including those that are working on defense and national security applications, and private sector partners supporting their work."

"Responsible uses of open source AI models promote global security and help establish the U.S. in the global race for AI leadership," the company said in a blog post. 

The New York Times notes that this decision is an exception to Meta's "acceptable use policy." That policy forbids the use of Llama AI for "military, warfare, nuclear industries" and other purposes, including criminal activity as well as harassment, abuse or bullying of people or groups of people. 

Meta said its partners are already using Llama to "support specific national security team missions, such as planning operations and identifying adversaries' vulnerabilities." It also said they're using Llama to help aircraft technicians more "quickly and accurately diagnose problems" by synthesizing maintenance documents. 

Also worth knowing...

In what can only be described as a cost-cutting move gone seriously awry, a state-funded radio station in Poland fired its on-air journalists, replaced them with three AI-generated "presenters" and then had to back away from the whole endeavor after the AI touted an impressive interview coup: a conversation with Polish poet Wislawa Szymborska, winner of the 1996 Nobel Prize in Literature. The hitch for OFF Radio Krakow, as The New York Times reported: Szymborska died in 2012. Check out the details in the Times.

Among its 2025 predictions for technology, market research firm IDC predicts a change in the way AI apps work. "The rise of copilots from the GenAI boom of 2022 is quickly giving way to AI agents — fully automated software components that are empowered to use knowledge and skills to assess situations and take action" with limited or no human intervention.

If you're interested in tinkering with Apple Intelligence on your iPhone 15 Pro or iPhone 16, CNET recommends you test out three features: summaries, Siri, and the Clean Up feature in the Photos app.

Elon Musk believes that humanoid robots, like the Optimus robots being created by Tesla, will "outnumber humans in less than 20 years." Musk, speaking at the Future Investment Initiative, told the audience that the cost of a "robot that can do anything" would be between $20,000 and $25,000, Quartz reported. Based on Musk's math, that would mean there would be 10 billion humanoid robots on the planet. Musk said that the AI-powered Optimus (also known as Tesla Bot) would go into limited production in 2025 (I don't see any mention of whether it will adhere to Isaac Asimov's three laws of robotics).
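For readers who want to sanity-check Musk's numbers, here's a back-of-envelope sketch. The figures for robot count and unit price come from the Quartz report above; the 8 billion human population is our own rough assumption, used only to test the "outnumber humans" claim.

```python
# Back-of-envelope check of Musk's robot projections (Quartz figures),
# assuming a human population of roughly 8 billion.
humans = 8_000_000_000
robots = 10_000_000_000                          # Musk's implied robot count
unit_cost_low, unit_cost_high = 20_000, 25_000   # stated price range per robot

# "Outnumber humans": 10 billion robots would indeed exceed 8 billion people.
assert robots > humans

# Implied total spend at the stated price range.
market_low = robots * unit_cost_low
market_high = robots * unit_cost_high
print(f"Implied market size: ${market_low / 1e12:.0f} to ${market_high / 1e12:.0f} trillion")
```

At those prices, 10 billion robots implies $200 trillion to $250 trillion in total spending — a useful reminder of just how aggressive the projection is.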