Actress Scarlett Johansson is taking a strong stance on artificial intelligence, specifically advocating for urgent regulation following instances where her voice and likeness were allegedly misused without her consent. Her personal experiences have ignited a broader conversation about the need for clear legal boundaries in the rapidly evolving world of AI technology.
Johansson has voiced her concerns publicly, emphasizing that without an “agreed-upon set of boundaries,” AI risks becoming detrimental. She believes more prominent figures should speak out on the issue and has questioned why that isn’t already happening. For Johansson, the incidents highlight a critical vulnerability: if even a widely recognized public figure can be subjected to such misuse, protections for everyone else are clearly lacking.
Personal Encounters With AI Misuse
The acclaimed actress has cited specific incidents driving her advocacy. One notable case involves OpenAI’s ChatGPT voice feature, “Sky.” Johansson says OpenAI CEO Sam Altman had previously asked her to voice the company’s chatbot, an offer she declined. When the “Sky” voice was released, she, along with many friends and news outlets, noted its striking similarity to her own voice, echoing her performance as the AI assistant Samantha in the film Her.
Johansson described feeling “shocked, angered and in disbelief” at the resemblance, and was further troubled by Altman’s subsequent one-word tweet: “her.” Many, including Johansson, interpreted the tweet as an intentional reference to the film role, fueling the perception that the voice was deliberately crafted to mimic hers. Her legal team sent letters to OpenAI seeking clarification, prompting the company to pause the use of the “Sky” voice. OpenAI has stated that the voice was not hers and had been cast before the company reached out to her, and it apologized for the miscommunication. However, legal experts suggest OpenAI could still face issues under “right of publicity” laws if the voice is deemed sufficiently similar to hers, regardless of intent, citing precedents where celebrities have successfully sued over voice impersonations.
This wasn’t the first time Johansson took action. In 2023, she pursued legal remedies against an AI image-generating app, Lisa AI: 90s Yearbook & Avatar, which allegedly used her likeness and voice in an online advertisement without permission. These incidents underscore her belief that current protections are insufficient against the capabilities of generative AI.
The Broader Fight for AI Regulation
Johansson’s vocal stance comes amidst growing calls from the entertainment industry and lawmakers for robust AI regulation. Unions like SAG-AFTRA are actively pushing for protections against the unauthorized use of performers’ digital replicas, considering it a top legislative priority. They advocate for a “mosaic of protections” at federal, state, and contractual levels.
Legislation is beginning to emerge to address these concerns:
- State Laws: Tennessee’s ELVIS Act, signed into law in March 2024, specifically protects voice as a personal right in the AI era. New York and California are also advancing bills mandating consent for digital replica licensing and addressing post-mortem rights.
- Federal Proposals: Bills like the NO FAKES Act aim to create a federal intellectual property right for voice and likeness, allowing individuals to issue takedown notices for unauthorized replicas. The COPIED Act focuses on transparency and authentication for AI-generated content, while other proposals target nonconsensual deepfakes and require labeling of AI-generated material.
These legislative efforts, supported by groups like SAG-AFTRA, highlight how the legal landscape is struggling to keep pace with AI technology. Existing “right of publicity” laws vary by state and often don’t explicitly cover AI-generated replicas, though courts are starting to interpret them in this context. A recent Beijing Internet Court ruling in an AI voice infringement case, in which an actress prevailed against a company that used her licensed recordings to train an AI voice, suggests a global trend toward protecting voice as a distinct aspect of identity.
Johansson, though acknowledging her reluctant role in this public fight, feels compelled to use her platform. Having built a decades-long career, she feels equipped to stand up for herself and contribute to this crucial dialogue without fearing invalidation. Her experiences serve as a potent example of the urgent need for clear industry standards, technological guardrails, and enforceable legislation to protect individuals’ identities and livelihoods from the unchecked proliferation of AI capable of replicating voice and likeness at scale. The battle to establish these boundaries is seen as critical not just for celebrities, but for everyone navigating a future increasingly shaped by AI.
References
- https://www.foxnews.com/entertainment/scarlett-johansson-takes-aim-companies-using-her-likeness-voice-ai
- https://deadline.com/2025/02/scarlett-johansson-is-right-about-unauthorized-ai-1236293573/
- https://www.theverge.com/2024/5/22/24162429/scarlett-johansson-openai-legal-right-to-publicity-likeness-midler-lawyers
- https://www.sagaftra.org/ongoing-fight-ai-protections-makes-waves-capitol-hill-and-beyond
- https://www.sunsteinlaw.com/publications/whats-in-a-voice-deepfakes