Scarlett Johansson’s attorneys announced Wednesday that the actress plans to take legal action against an AI company that used her name and likeness in an ad without permission—adding the A-lister to a growing chorus of stars and politicians frustrated by the proliferation of AI-crafted imposters. But as public figures begin attempting to stamp out deceptive online impersonators via the court system, they may face increasing challenges stemming from the borderless nature of the internet.
In Johansson’s case, according to a report published today in Variety, the actress plans to pursue legal action against Lisa AI, an image-generating app that posted an ad on Twitter last month featuring a Johansson deepfake vigorously endorsing the product. The company that owns Lisa AI, however—Convert Yazılım Limited Şirket, according to the app’s terms of service—is a Turkish firm headquartered in Istanbul. And while Hollywood lawyers are certainly no strangers to international disputes, the added variable of AI could complicate matters.
Though politicians in the United States appear increasingly intent on creating a federal legal framework to regulate AI-generated deepfakes—and courts in countries including India have already weighed in on the matter, siding against deepfake creators—not every government has been similarly aggressive in its efforts to control the novel technology. In Japan, for example, regulators announced earlier this year that using copyrighted works to train an AI system does not violate copyright law.
While the matter of AI and copyright is separate from that of AI and deepfakes, the stance does offer a window into Japan’s current appetite for regulating AI: according to a recent report prepared by the Law Library of Congress, no laws related to AI have been proposed or adopted in Japan. Nor do such laws exist, or appear likely in the near future, in Turkey.
In Johansson’s case, the offending app, Lisa AI, appears to have voluntarily pulled the contested Twitter ad—likely a result of her attorneys’ outreach. But what happens if a company operating out of a country without AI regulation refuses to comply with a similar ultimatum in the future? In most countries, including the United States, public figures are generally afforded fewer protections regarding their publicity rights.
In this legal gray zone, such disputes may start centering on the online platforms where these deepfakes are posted—in this case, Twitter. But since Elon Musk’s takeover of the platform last year, Twitter has loosened many of its policies regarding the dissemination of false information.
Whereas a slew of U.S. senators are seeking to make illegal any AI-generated depiction of a person created without that person’s permission—regardless of further context—Twitter’s policy on the matter currently looks much more relaxed.
According to the company’s standing policy on misleading media, which was updated in April, a post involving audio, video, or images shown to be fake is only eligible for removal if it is “likely to result in widespread confusion on public issues, impact public safety, or cause serious harm.”
Scarlett Johansson may be one of the most famous people alive—but a fake ad purporting to show her endorsing a yearbook photo app probably wouldn’t constitute a national emergency.