LeBron James targets AI firm over bizarre pregnancy deepfake videos
NBA star LeBron James has become one of the first major celebrities to push back against the unauthorized use of his likeness in AI-generated content. James' legal team recently issued a cease-and-desist letter to FlickUp, the company behind the AI image-generation tool Interlink AI.
According to a report from 404 Media, FlickUp disclosed the legal action to members of its Discord community in late June. The Interlink AI tool, hosted on the server, allowed users to create AI-generated videos of high-profile NBA players, including James, Stephen Curry, Nikola Jokić, and others. While many of the videos were harmless, some crossed the line into disturbing territory, such as a prominent image of the Los Angeles Laker cradling his pregnant belly.
One of the most widely seen videos created with Interlink AI depicted an AI-generated Sean "Diddy" Combs sexually assaulting Curry in a prison setting, while James appears standing passively in the background. That video alone reportedly amassed over 6.2 million views on Instagram.
404 Media confirmed with FlickUp founder Jason Stacks that James' legal team was behind the cease-and-desist letter. Within half an hour of receiving it, Stacks said he decided to "remove all realistic people from Interlink AI's software." Stacks also posted a video addressing the situation, captioned simply: "I'm so f**ked."
LeBron James is among a growing list of celebrities whose likenesses have been used without consent in disturbing AI-generated content. Pop star Taylor Swift has been repeatedly targeted with deepfake pornography, while Scarlett Johansson and Steve Harvey have both publicly condemned the misuse of their images and voiced support for legislation to curb it. However, James stands out as one of the first to take formal legal action against a company enabling this type of content through its AI tools.
Several bills are currently making their way through Congress to address the rise of nonconsensual AI-generated content. The recently passed Take It Down Act criminalizes the publication of, or threat to publish, intimate imagery without consent, including deepfakes and AI-generated pornography. Two additional proposals, the NO FAKES Act of 2025 and the Content Origin Protection and Integrity from Edited and Deepfaked Media Act of 2025, have also been introduced.
The NO FAKES Act focuses on preventing unauthorized AI replication of a person's voice, while the latter seeks to safeguard original works and enforce transparency around AI-generated media.