OpenAI’s Blunder is a Loss for the ML Community
The timeline and why OpenAI’s actions are a big deal
There’s been a lot of drama recently about OpenAI’s Sky voice sounding like Scarlett Johansson. For those unacquainted with what’s been going on, here’s the timeline:
OpenAI announced GPT-4o at their most recent event, demoing a very human-sounding AI assistant built on the model.
Around the time of the event, Sam Altman, the CEO of OpenAI, tweeted out “Her”. Her is a 2013 movie in which a human falls in love with an AI voiced by Scarlett Johansson.
Disputes began about how much the “Sky” voice in the ChatGPT app sounded like Scarlett Johansson in Her.
Sky was removed from the ChatGPT app, and ScarJo released a statement regarding the Sky voice. In it, she said OpenAI had reached out to her to see if she was interested in voicing ChatGPT, and she declined.
OpenAI then released a statement explaining that Sky wasn’t ScarJo’s voice but another voice actor’s, and claiming it was never intended to resemble ScarJo’s voice in any way.
Many sources claim ScarJo is pursuing legal action against OpenAI for using her likeness without her consent.
On X, the overwhelming opinion I see is in defense of OpenAI: “OpenAI didn’t do anything illegal. ScarJo doesn’t have the right to another person’s voice just because it sounds like hers.” And they’re right—OpenAI didn’t do anything illegal, but there’s more to this situation than just legality. This is something those within the ML bubble (myself included) often forget. For machine learning to see mainstream adoption, it needs to be trusted by everyone.