
Artificial Intelligence

OpenAI asked to use her voice, Scarlett Johansson says


OpenAI is pulling one of the voices ChatGPT uses. The company announced on Monday that it is pausing the voice, called Sky, after users noted that it sounded like Scarlett Johansson. In response, Johansson said in a statement that she had hired legal counsel to look into the Sky voice and find out exactly how it was made. OpenAI used Sky just last week to show off its new GPT-4o model.

In a blog post, the company said that AI voices should not deliberately imitate famous people, and that Sky's voice is not an imitation of Scarlett Johansson's; it belongs to a different professional actress using her own natural speaking voice. "To protect their privacy, we can't say who does our voice work," the post added.

Last week, a video of the demo went viral on social media as people noted how much the voice sounded like Scarlett Johansson's. Jokes circulated about how flirtatious the voice was, with some comparing it to a male fantasy.

Many observed that the flirty voice sounds a lot like the seductive virtual assistant Scarlett Johansson played in the 2013 movie "Her," in which Joaquin Phoenix plays a man who falls in love with that assistant.

The company hasn’t said anything about the similarity between Sky’s voice and Johansson’s, but OpenAI CEO Sam Altman tweeted the word “Her” after the event.

OpenAI's demo last week was meant to showcase the chatbot's improved conversational abilities, but it went viral because the sultry voice laughed at almost everything an OpenAI employee said. At one point it told the employee, "Wow, that's quite the outfit you're wearing." At another, after being complimented, the chatbot replied, "Stop it, you're making me blush."

In the blog post, OpenAI says it wants its chatbots' voices to sound "approachable" and "inspire trust," and to be "warm, engaging, confidence-building, and charismatic."

In the future, OpenAI says it will “add more voices to ChatGPT to better match the wide range of users’ interests and preferences.”

Johansson's full statement follows:

Last September, Sam Altman offered to hire me to voice the current ChatGPT 4.0 system. He told me he felt that, by my voicing the system, I could bridge the gap between tech companies and creatives and help people feel comfortable with the seismic shift between humans and AI. He said he felt my voice would be comforting to people.

After much consideration, and for personal reasons, I declined the offer. Nine months later, friends, family, and strangers alike told me that the newest system, named "Sky," sounded a lot like me.

When I heard the demo that was released, I was shocked, angered, and in disbelief that Mr. Altman would pursue a voice that sounded so much like mine that even my closest friends and news outlets could not tell the difference. By tweeting the single word "her," Mr. Altman even insinuated that the resemblance was intentional, a reference to the movie in which I voiced Samantha, a chatbot who becomes close with a human.

Two days before the ChatGPT 4.0 demo was released, Mr. Altman contacted my agent and asked me to reconsider. Before we could connect, the system was out in the world.

As a result of their actions, I was forced to hire legal counsel, who sent two letters to Mr. Altman and OpenAI setting out what they had done and asking them to detail the exact process by which they created the "Sky" voice. Consequently, OpenAI reluctantly agreed to take down the "Sky" voice.

At a time when so many of us are grappling with deepfakes and the protection of our own likenesses, work, and identities, I believe these are questions that deserve complete answers. I look forward to resolution in the form of transparency and the passage of appropriate legislation to help protect individual rights.

OpenAI shared this statement from Altman: "Sky's voice is not Scarlett Johansson's, and it was never intended to resemble hers. We cast the voice actor behind Sky's voice before any outreach to Ms. Johansson. Out of respect for Ms. Johansson, we have paused using Sky's voice in our products. We are sorry to Ms. Johansson that we didn't communicate better."


Artificial Intelligence

A new study suggests ChatGPT may have passed the Turing test


In 1637, the French philosopher René Descartes posed an interesting question: can a machine think? In 1950, the English mathematician and computer scientist Alan Turing answered this then 300-year-old question with, in effect, "who cares?" A better question, he argued, was what would become known as the "Turing test": given a person, a machine, and a human interrogator, could the machine ever fool the interrogator into believing it was the person?

That was how Turing reframed the question 74 years ago. Now, researchers at the University of California, San Diego believe they have an answer. In a new study, participants spent five minutes conversing with either one of several AI systems or another human, and the results suggest the answer may be "yes."

"Participants in our experiment were no better than chance at identifying GPT-4 after a five-minute conversation, suggesting that current AI systems are capable of deceiving people into believing that they are human," the researchers write in the preprint paper, which has not yet undergone peer review. "These results likely set a lower bound on the potential for deception in more naturalistic settings, where, unlike the experimental setting, people may not be alert to the possibility of deception or focused exclusively on detecting it."

Headline-grabbing as it is, though, this is not a universally agreed-upon milestone. The researchers note that while Turing originally conceived the imitation game as a measure of intelligence, "many objections have been raised to this idea." Humans, for example, are notorious for anthropomorphizing almost anything: we want to connect with things, whether they are people, dogs, or a Roomba with googly eyes stuck on top.

It is also worth noting that GPT-4, along with GPT-3.5, which was also tested, convinced participants it was human only about half the time, barely better than random chance. So what does this result actually mean?

As it turns out, the team had built ELIZA into the experiment as a baseline. Created at MIT in the mid-1960s, ELIZA was one of the first programs of its kind, impressive for its era, but with little in common with modern large language model (LLM)-based systems.

"ELIZA was limited to canned responses, which greatly limited what it could do. It might fool someone for five minutes, but the limitations would soon become clear," Nell Watson, an AI researcher at the Institute of Electrical and Electronics Engineers (IEEE), told Live Science. "Language models are endlessly flexible: they can synthesize answers on a broad range of topics, speak in particular languages or sociolects, and portray themselves with character-driven personality and values. It's an enormous step forward from something hand-programmed by a human being, no matter how clever and careful that person was."

That predictability is exactly what made ELIZA perfect for the experiment: it served as a control. How do you account for test subjects who get lazy and simply pick "human" or "machine" at random? If ELIZA scores about the same as chance, the test probably isn't being taken seriously, because ELIZA isn't that good. And how do you gauge how much of the effect is just people's tendency to attribute human traits to anything they interact with? Roughly speaking, it's however much ELIZA managed to convince them.

In fact, ELIZA scored just 22 percent, convincing only about 1 in 5 participants that it was human. Since test subjects could reliably tell some computers from people, just not ChatGPT, the researchers write, it becomes more plausible that ChatGPT really has passed the Turing test.
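To make that baseline logic concrete, here is a minimal sketch in Python of the kind of sanity check involved. The rates (roughly 50 percent for GPT-4, 22 percent for ELIZA) come from the article, but the number of sessions is a hypothetical placeholder, since the study's actual sample size isn't given here:

```python
# Minimal sketch of the control logic described above, comparing each
# system's "judged human" rate against the 50% random-guessing baseline.
# NOTE: n_sessions is a hypothetical placeholder, not the study's real count.
from scipy.stats import binomtest

n_sessions = 200                          # hypothetical number of judged conversations
rates = {"GPT-4": 0.50, "ELIZA": 0.22}    # approximate rates from the article

for system, rate in rates.items():
    judged_human = round(n_sessions * rate)
    # H0: interrogators are guessing at random (p = 0.5).
    result = binomtest(judged_human, n_sessions, p=0.5)
    print(f"{system}: {judged_human}/{n_sessions} judged human, "
          f"p-value vs. chance = {result.pvalue:.4f}")
```

Under pure random guessing, both systems would hover near 50 percent; ELIZA's rate sits far below that, which is what lets the researchers treat the interrogators' judgments as meaningful rather than noise.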

So, does this mean we are entering a new era of human-like AI? Are computers now smarter than people? Maybe, but we probably shouldn't rush to that conclusion.

The researchers say, “In the end, it seems unlikely that the Turing test provides either necessary or sufficient evidence for intelligence. At best, it provides probabilistic support.” The people who took part weren’t even looking for what you might call “intelligence”; the paper says they “were more focused on linguistic style and socio-emotional factors than more traditional notions of intelligence such as knowledge and reasoning.” This “could reflect interrogators’ latent assumption that social intelligence has become the human trait that is most difficult for machines to copy.”

Which raises an unsettling question: is the real problem not the rise of the machines, but the fall of the humans?

“Real humans were actually more successful, convincing interrogators that they were human two-thirds of the time,” the paper’s co-author, Cameron Jones, told Tech Xplore. “Our results suggest that in the real world, people might not be able to reliably tell if they’re talking to a human or an AI system.”

"In the real world, people might be less aware that they could be talking to an AI system, so the rate of deception might be even higher," he warned. "This raises questions about what AI systems will be used for in the future, whether they are automating bots, filling customer service roles, or spreading misinformation and fraud."

A draft of the study is available on arXiv; it has not yet been peer-reviewed.


Artificial Intelligence

The Threads API for developers is now live


Meta finally released its long-awaited Threads API today, so developers can start building apps that use it. Third-party developers will be able to create new experiences around Threads.

Mark Zuckerberg also posted about the launch of the API, saying, “The Threads API is now widely available and will be coming to more of you soon.”

Threads engineer Jesse Chen wrote in a blog post that developers can now use the new API to publish posts, fetch their own content, and build reply-management tools. In other words, developers can let users hide or unhide replies, or respond to specific ones.

The API will also include analytics, letting developers see metrics such as views, likes, replies, reposts, and quotes at both the media and account level, the company said.
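As a rough illustration of the publishing flow Chen describes, here is a minimal sketch in Python. It assumes the two-step, Graph-style flow Meta documents for the Threads API (create a media container, then publish it); the user ID and access token are placeholders, and the exact endpoints and parameters should be checked against the official documentation:

```python
# Minimal sketch: publishing a text post via the Threads API.
# Assumes the container-then-publish flow from Meta's docs;
# USER_ID and ACCESS_TOKEN are placeholders you must supply.
import requests

BASE = "https://graph.threads.net/v1.0"
USER_ID = "<your-threads-user-id>"
ACCESS_TOKEN = "<your-access-token>"

# Step 1: create a media container for a text-only post.
container = requests.post(
    f"{BASE}/{USER_ID}/threads",
    params={
        "media_type": "TEXT",
        "text": "Hello from the Threads API!",
        "access_token": ACCESS_TOKEN,
    },
).json()

# Step 2: publish the container created in step 1.
published = requests.post(
    f"{BASE}/{USER_ID}/threads_publish",
    params={
        "creation_id": container["id"],
        "access_token": ACCESS_TOKEN,
    },
).json()

print("Published post ID:", published["id"])
```

The same access token would then be used against the API's insights endpoints to pull the view, like, and reply metrics mentioned above.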

Adam Mosseri, the head of Instagram, first discussed the company's work on a Threads API in October 2023. The API initially launched in a closed beta with partners such as Techmeme, Sprinklr, Sprout Social, Social News Desk, Hootsuite, and a handful of other developers. At the time, Chen said Meta planned to open the API to more developers in June, and the company has kept that promise.

Along with the launch of the new API, the company also put out an open-source reference app on GitHub so developers can play with it.

2023 was a difficult year for third-party developers building tools for social networks, as platforms like Twitter (now X) and Reddit restricted or shut down API access to varying degrees. Decentralized social networks such as Mastodon and Bluesky, by contrast, have taken a more developer-friendly approach. With more than 150 million users, Meta's Threads is the most popular of the new wave of social networks, and now that it works with the fediverse and offers an API, third-party developers can build some compelling social media experiences around it.


Artificial Intelligence

Apple officially confirms plans to work with Google's Gemini platform in the future


After the WWDC 2024 keynote, which unveiled Apple Intelligence and announced a partnership with OpenAI to bring ChatGPT to Siri, Senior Vice President Craig Federighi confirmed that Apple intends to integrate additional third-party models. The first example the executive offered was Google, one of the companies Apple had reportedly been considering for a potential partnership.

"In the future, we look forward to integrating with other models, such as Google Gemini," Federighi said during a post-keynote discussion. He quickly added that the company has no announcements to make right now, but that this is the general direction it is heading.

OpenAI's ChatGPT is set to become the first external model integrated, arriving later this year. Apple says users will be able to access the system without creating an account or paying for a premium service. As for how that platform fits into the updated iOS 18 version of Siri, Federighi confirmed that the voice assistant will ask users for permission before sending a query out to ChatGPT rather than handling it with Apple's own internal models.

"You can do it right through Siri, without having to go get another tool," the Apple executive said. "Siri, importantly, will ask you before anything goes out to ChatGPT. Then you can have your conversation with ChatGPT. And if there is any relevant data in your request that you want to share with ChatGPT, we'll ask, 'Would you like to send this photo?' When it comes to privacy, you're always in control and have complete transparency."
