Actor and director Ben Affleck weighed in on the debate about AI's role in Hollywood, saying that "movies will be one of the last things, if everything gets replaced, to be replaced by AI."
"AI can write you excellent imitative verse that sounds Elizabethan. It cannot write you Shakespeare," Affleck told attendees at CNBC's Delivering Alpha 2024 investor summit. "The function of having two actors, or three or four actors, in a room, and the taste to discern and construct that, is something that currently entirely eludes AI's capability, and I think will for a meaningful period of time."
The director of films such as Argo and The Town is no AI doomer. He believes generative AI tools will take on many of the repetitive, labor-intensive and costly tasks that make filmmaking so expensive, freeing humans to spend time on more interesting and creative work.
"What AI is going to do is it's going to disintermediate the more laborious, less creative and more costly aspects of filmmaking," Affleck said in comments that have gone viral. "That will allow costs to be brought down. That will lower the barrier for entry. That will allow more voices to be heard. That will make it easier for the people that want to make Good Will Hunting to go out and make it."
Affleck added that he wouldn't want to be in the visual effects business, because it's in trouble: what costs a lot of money now is going to "cost a lot less ... and maybe it shouldn't take a thousand people to render something." (Check out three new AI-generated ads from Coca-Cola and decide for yourself if the AI work is appealing or, as some people commented, soulless.)
Having an AI assist may lead to the creation of more shows, Affleck said, which should be good — economically — for whoever is producing and is part of those shows, and good for audiences who want more content. AI could also be used as a tool to allow licensing of that content to someone who wants to take a few seconds out of an Avengers movie, he said, and make TikTok videos.
But the Academy Award winner also insisted that AI isn't going to "replace human beings making films" and used an analogy involving furniture makers, in which creatives, like him, are the Gustav Stickleys of this new digital world.
"AI is a craftsman at best — craftsmen can learn to make Stickley furniture by sitting down next to somebody and seeing what their technique is and imitating it. ... That's how large language models basically work," he said. "But they're just cross-pollinating things that exist. Nothing new is created. ... Craftsman is knowing how to work. Art is knowing when to stop. And I think knowing when to stop is going to be a very difficult thing for AI to learn, because it's (about) taste."
Affleck's comments run counter to some of the concerns of Hollywood notables, including actors Robert Downey Jr. and Nicolas Cage, who say it all sounds reasonable until studios buy rights to actors' voices and likenesses — or worse, imitate them without permission or compensation. Forbes contributor Pia Lauritzen has an interesting take on Affleck's craftsman analogy, noting that, "According to The CFO Survey from June 2024, 61% of large US firms plan to use AI within the next year to automate tasks previously done by employees. But which tasks should and should not be taken over by AI?"
Good point. As I always say, technology is a means to an end, not an end in itself. It's we humans who decide the endgame and how, when, where and if the technology is deployed.
Here are the other doings in AI worth your attention.
Oops, OpenAI erases some evidence in plagiarism suit, NYT says
The New York Times, which sued Microsoft and OpenAI last year for allegedly stealing its copyrighted stories to train ChatGPT, said in a court filing last week that engineers at OpenAI had erased some of the potential evidence the Times had located in the ChatGPT training datasets it's been allowed to search.
In a Nov. 20 letter, posted here, the law firm representing the NYT explained what happened and noted that searching the datasets has cost the newspaper a significant amount of time, with its team spending 150 person-hours since Nov. 1 on the task. The lawyers said in the letter that they "have no reason to believe" the erasure of data located by the Times was "intentional." Still, Law.com noted that "while OpenAI was able to recover the data, the filing said that it 'does not include the original folder structure and original file names, is unreliable and cannot be used to determine where the News Plaintiffs' copied articles were used to build Defendants' models.'" That means a lot of potential evidence was lost and that the Times will have to redo the work it put in to make the data usable.
OpenAI spokesperson Jason Deutrom told Wired and Engadget that the company disagreed with the NYT's claims and said the company would file its response soon.
Why is this worth noting? As Wired reported, "OpenAI was required by the court to show the Times its training data, which is a big deal — OpenAI has never publicly revealed exactly what information was used to build its AI models." While the assumption is that OpenAI and other makers of large language models have scraped the internet, there's a difference between thinking that and knowing that. And the way to know is to see what's in the training data used to build popular chatbots.
Elon Musk sues OpenAI again, saying he was 'conned'
In other legal news involving OpenAI, Elon Musk, a co-founder of the company who reportedly wanted to take it over himself and turn it into a for-profit business, revived his lawsuit against the startup (which he'd dropped in June after OpenAI released some of his emails).
The new legal filing adds as defendants big OpenAI investor Microsoft and LinkedIn co-founder Reid Hoffman, one of Musk's former colleagues from the PayPal mafia. Musk, one of the richest men in the world, said in the lawsuit that he was "conned."
As the BBC reports, "The lawsuit accuses OpenAI of having transformed from a tax-exempt charity to a $157 billion for-profit, market-paralysing gorgon. It also claims Microsoft and OpenAI used a monopoly to eliminate competitors in the AI sector — including Mr Musk's own company, xAI." The BBC said Microsoft declined to comment and that an OpenAI spokesperson called the refreshed complaint "baseless."
If you like Silicon Valley intrigue, you can read Musk's 107-page amended lawsuit, courtesy of Reuters, here. If that's too much for you, check out The Washington Post's take, which notes that "more than 60 pages of emails ... reveal the deep roots of recent tensions inside the company. They show how Musk and other cofounders squabbled from its earliest days over who would control the venture."
The emails, the Post adds, "also underscore how a handful of rich men with competing financial interests and ideological goals have controlled a project ostensibly created to work on behalf of all humanity." To reference a line from one of my favorite movies, My Cousin Vinny, the work-on-behalf-of-all-humanity plan seems pretty much moot given how much money is at stake.
Also worth knowing...
No, an "AI Jesus" isn't hearing people's confessions. So says the headline on a fact-check by the Catholic News Agency following stories that an AI-powered video chatbot of "Jesus" had started hearing people's confessions at a chapel in Switzerland. The Jesus avatar, according to the CNA, was actually trained with content from the New Testament and designed to answer questions about the Bible. The CNA said it's part of an art exhibit that "encourages thinking about the limits of technology in the context of religion." A four-minute video of AI Jesus in action is here.
Two AI bots, tapped to serve as news broadcasters at a small Hawaiian newspaper called The Garden Island and help expand the audience, were fired after two months, Wired reported. "James, a middle-aged Asian man, and Rose, a younger redhead, were never able to figure out how to present the news in a manner that wasn't deeply off-putting for viewers," Wired said. The bots were designed by an Israeli company that turns "articles into videos where AI hosts discuss the news with one another," and The Garden Island program "was the first of its kind in the United States," Wired said.
On the subject of training data, in an X post last month, Elon Musk encouraged people to share X-rays, PET scans, MRIs, CT scans and other medical images with his Grok AI chatbot for analysis, a move that "alarmed some medical privacy experts," according to The New York Times. Even if you think Musk is the most well-intentioned person, sharing that info on a public platform means it isn't covered by HIPAA, the federal law that restricts how health providers and insurers can share your medical information without your permission. As Fast Company reported, "Experts widely agree against sharing sensitive data with publicly available AI systems." And, notably, even Grok, in its privacy policy, tells users not to "share any personal information (including any sensitive information)."