OpenAI pauses MLK deepfakes on Sora after ‘disrespectful’ videos

OpenAI said on Thursday night that it has “paused” deepfakes of Martin Luther King Jr. on its social app Sora after users created “disrespectful” AI-generated videos of the late civil rights leader. It said representatives or estates of other historical figures will now be able to opt out of their likeness being used on the platform.
The company said it acted following complaints from King’s estate and his daughter, Bernice King, who asked people on social media to stop sending her AI videos of her father. King is one of many deceased celebrities and historical figures whose likeness has appeared on Sora, often in crude, offensive, and unpleasant ways.
At the request of King, Inc., OpenAI has paused generations depicting Dr. King while it strengthens guardrails for historical figures. The company said that while there are strong free speech interests in depicting historical figures, it believes public figures and their families should ultimately have control over how their likeness is used. Authorized representatives or estate owners can request that their likeness not be used in Sora cameos.
OpenAI’s shifting stance on historical figures echoes its initial opt-out approach to copyright when Sora first launched. That strategy proved controversial, and the company made an embarrassing U-turn to an “opt-in” policy for rightsholders after the platform was inundated with depictions of characters like Pikachu, Rick and Morty, and SpongeBob SquarePants.
Unlike copyright, there’s no federal framework protecting a person’s likeness, but a variety of state laws let people sue over unauthorized use of a living person’s image, and in some states a deceased person’s as well. California, where OpenAI is based, for example, has specifically said that postmortem rights apply to AI replicas of deceased performers. For living people, Sora has been opt-in from the start: users appear in videos only if they create an AI cameo of themselves.