OpenAI has pulled one of its ChatGPT voices amid questions about how it was created, after actress Scarlett Johansson said it bore a striking resemblance to her own.
Scarlett Johansson revealed that OpenAI had initially approached her to provide the voice for ChatGPT; when she declined, the company went on to develop a voice that closely resembled hers. In a statement, Johansson expressed surprise and concern, saying she had sought legal advice and sent two formal inquiries to OpenAI asking it to explain how the ChatGPT voice, known as Sky, was developed.
Johansson, known for voicing an AI assistant in the 2013 film “Her,” said OpenAI CEO Sam Altman had contacted her in September with a proposal to lend her voice to ChatGPT, aiming to bridge the worlds of creativity and technology. She turned down the offer.
“When I heard the released demo, I was shocked, angered, and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference,” Johansson further stated.
OpenAI CEO Sam Altman, who has openly expressed his admiration for the 2013 Spike Jonze film, invited the comparison himself by posting the word “Her” on X following the announcement of the new ChatGPT version. OpenAI executives later denied any connection between Johansson and the new voice assistant, and the company then abruptly pulled the voice.
OpenAI’s Response
In a blog post, OpenAI disclosed that the AI voice in question, named “Sky,” was crafted from the voice of another actress, whose identity remains undisclosed by the company to safeguard her privacy.
The latest iteration, dubbed GPT-4o, evolves the chatbot into a voice assistant capable of analyzing facial expressions, discerning emotions, and even performing on request.
The new voice assistant is set to be released to the public in the coming weeks. In a recent live demonstration, it adopted a familiar, flirtatious manner with certain OpenAI staff members, prompting speculation about whether the playful demeanor was a deliberate strategy to keep users engaged with the platform.
This development comes just half a year after actors agreed to end the strike that paralyzed the entertainment industry, in which they demanded better pay and safeguards around the use of AI. Johansson took part in last year’s industrial action, which partly centered on concerns that studios would use AI to replicate actors’ faces and voices. The agreement reached with the studios included assurances that this would not happen without actors’ explicit consent. The episode sets a worrying precedent for copyright and consent, particularly when it involves a leading company in the field.
OpenAI has been embroiled in various legal disputes over its use of copyrighted content available online. In December, the New York Times sued the company, alleging that “millions” of its articles had been used to train the ChatGPT AI model. In September, authors George R.R. Martin and John Grisham filed suit over claims that their copyrights were infringed in training the system.
AI Anxiety
In the context of celebrities, AI anxiety refers to concerns about artificial intelligence being used to replicate or imitate famous individuals, particularly their voices, faces, and mannerisms. It stems from several factors:
- Identity Exploitation – Celebrities may feel their identities are being exploited when AI systems use their likeness or voice without their explicit consent. This raises questions about privacy, consent, and the ethical implications of using someone’s identity in AI applications.
- Loss of Control – Celebrities may worry about losing control over their own image and voice, especially if AI technologies can convincingly mimic them. They may fear misrepresentation or manipulation, leading to reputational damage or loss of autonomy.
- Copyright Infringement – Unauthorized use of a celebrity’s likeness or voice in AI applications can raise legal concerns related to copyright infringement. Celebrities may pursue legal action to protect their intellectual property rights and seek compensation for unauthorized use.
- Impact on Reputation – AI-generated content that inaccurately represents or misinterprets a celebrity’s image or voice can have a detrimental impact on their reputation. Celebrities may be concerned about the spread of misinformation or harmful content generated by AI.
- Trust in AI – Instances of AI systems imitating celebrities without consent can erode public trust in AI technology. People may become skeptical or wary of AI applications if they perceive them as deceptive or unethical in their treatment of celebrity identities.
Overall, AI anxiety with regard to celebrities highlights broader ethical and legal questions about the use of AI in entertainment, media, and other industries where celebrity personas are often exploited or commodified.