Sexy voices, departing employees, and NDA rumors have challenged the AI company.

Since the launch of its latest AI language model, GPT-4o, OpenAI has spent the past week on the defensive amid a string of bad news, rumors, and ridicule circulating in traditional and social media. The negative attention is potentially a sign that OpenAI has entered a new level of public visibility and is now receiving pushback on its AI approach from beyond tech pundits and government regulators.

OpenAI’s rough week started last Monday when the company previewed a flirty AI assistant with a voice seemingly inspired by Scarlett Johansson from the 2013 film Her. OpenAI CEO Sam Altman alluded to the film himself on X just before the event, and we had previously made that comparison with an earlier voice interface for ChatGPT that launched in September 2023.

While that September update included a voice called “Sky” that some said sounded like Johansson, it was GPT-4o’s new, seemingly lifelike conversational interface, complete with laughter and emotionally charged tonal shifts, that led to a widely circulated Daily Show segment ridiculing the demo for its perceived flirtatiousness. Then a Saturday Night Live joke reinforced the implied connection to Johansson’s voice.

After hearing from Johansson’s lawyers, OpenAI announced on Sunday that it was pausing use of the “Sky” voice in ChatGPT. The company addressed Sky specifically in a tweet and defended the voice against the Johansson comparison in a blog post: “We believe that AI voices should not deliberately mimic a celebrity’s distinctive voice—Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice,” the company wrote.

On Monday evening, NPR reporter Bobby Allyn was the first to publish a statement from Johansson saying that Altman had approached her to voice the AI assistant last September, but she declined. She says that Altman then attempted to contact her again before the GPT-4o demo last week, but they did not connect, and OpenAI went ahead with the apparent soundalike anyway. She was then “shocked, angered, and in disbelief” and hired lawyers to send letters to Altman and OpenAI asking them to detail how they created the Sky voice.

“In a time when we are all grappling with deepfakes and the protection of our own likenesses, our own work, our own identities, I believe these are questions that deserve absolute clarity,” Johansson said in her statement. “I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected.”

The repercussions of these alleged actions on OpenAI’s part are still unknown but are likely to ripple outward for some time.

Superalignment team implodes

The AI research company’s PR woes continued on Tuesday with the high-profile resignations of two key safety researchers: Ilya Sutskever and Jan Leike, who led the “Superalignment” team focused on ensuring that hypothetical, currently nonexistent advanced AI systems do not pose risks to humanity. Following his departure, Leike took to social media to accuse OpenAI of prioritizing “shiny products” over crucial safety research.

In a joint statement posted on X, Altman and OpenAI President Greg Brockman addressed Leike’s criticisms, emphasizing their gratitude for his contributions and outlining the company’s strategy for “responsible” AI development. In a separate, earlier post, Altman acknowledged that “we have a lot more to do” regarding OpenAI’s alignment research and safety culture.

Meanwhile, critics like Meta’s Yann LeCun maintained the drama was much ado about nothing. Responding to a tweet where Leike wrote, “we urgently need to figure out how to steer and control AI systems much smarter than us,” LeCun replied, “It seems to me that before ‘urgently figuring out how to control AI systems much smarter than us’ we need to have the beginning of a hint of a design for a system smarter than a house cat.”

LeCun continued: “It’s as if someone had said in 1925 ‘we urgently need to figure out how to control aircrafts [sic] that can transport hundreds of passengers at near the speed of the sound over the oceans.’ It would have been difficult to make long-haul passenger jets safe before the turbojet was invented and before any aircraft had crossed the Atlantic non-stop. Yet, we can now fly halfway around the world on twin-engine jets in complete safety. It didn’t require some sort of magical recipe for safety. It took decades of careful engineering and iterative refinements.”

OpenAI’s NDA kerfuffle

Around the time Sutskever announced his resignation from OpenAI, a Vox article by Kelsey Piper claimed that OpenAI had imposed strict non-disclosure agreements on departing employees, allegedly threatening to revoke their vested equity if they criticized the company.

In the article, Piper wrote, “I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer … If a departing employee declines to sign the document, or if they violate it, they can lose all vested equity they earned during their time at the company.”

In a tweeted response, Altman wrote, “we have never clawed back anyone’s vested equity, nor will we do that if people do not sign a separation agreement (or don’t agree to a non-disparagement agreement). vested equity is vested equity, full stop.” He acknowledged that a provision about potential equity cancellation appeared in the company’s “previous” exit docs, but said that “it should never have been something we had in any documents or communication.” He added, “this is on me and one of the few times i’ve been genuinely embarrassed running openai.”

The CEO stated that OpenAI was already in the process of revising its offboarding paperwork to address these concerns, but critics of the company did not seem satisfied by his response. In an update to the Vox article, Piper called for more transparency and wrote, “All of this is highly ironic for a company that initially advertised itself as OpenAI—that is, as committed in its mission statements to building powerful systems in a transparent and accountable manner.”

The road ahead

These criticisms are far from the first that OpenAI has seen—we’ve levied many ourselves, as is appropriate for any company whose goal is to create superintelligent AI models that may end up replacing many humans at their jobs. But the frequency of the incidents over the past week may mark a transition from OpenAI’s role as an underdog “AI startup,” as it is often described in the press (despite huge deals with Microsoft and potentially Apple), to that of a corporate behemoth with a perpetual target on its back.

With the stakes high for AI to perform both as a financial success (raising Big Tech stock prices) and as a productivity tool for users, more eyes than ever will be focused on OpenAI’s successes and slip-ups alike.

This article was updated to include Scarlett Johansson’s statement.
