Connect with us

Artificial Intelligence

Google Talks has the potential to integrate Gemini AI into the iPhone

blank

Published

on

blank

There is ongoing deliberation between Apple and Google regarding a prospective agreement aimed at integrating Google’s Gemini generative AI capabilities into the iPhone.

The negotiations, initially documented by Bloomberg on Monday, seek to grant Apple a license for Gemini’s AI models to facilitate the development of novel functionalities for the iPhone in the upcoming year.

According to Bloomberg, if Gemini were to secure a deal, it would provide them with a significant advantage due to the vast number of prospective customers. However, this could also indicate that Apple’s progress in AI may not be as advanced as some had anticipated.

According to Paul Schell, an industry analyst at ABI Research, Apple seems to be lagging behind its competitors in addressing generative AI. This can be attributed, in part, to the rapid pace of innovation, which has resulted in a mismatch between the timing of its annual developer conference in summer and the release of its products in autumn.

Apple has indeed been actively enhancing its artificial intelligence capabilities. “Apple has been actively working on enhancing its on-device generative AI capabilities and acquiring companies to further advance this technology,” Schell stated.

According to the speaker, Apple has established a dedicated machine learning research branch with the aim of enhancing its capabilities in this field. Additionally, Tim Cook has expressed enthusiasm for generative AI in preparation for the release of iOS 18.

Component of the comprehensive AI strategy
According to Rob Enderle, the president and primary analyst at the Enderle Group, an advisory services organization based in Bend, Oregon, Apple lags significantly in the field of artificial intelligence (AI).

“It is remarkable,” he stated, “as Siri was among the pioneering digital assistants in the market. However, after its launch, it appeared to lose popularity, which is why they are currently lagging behind.”

According to William Kerwin, an equities analyst at Morningstar Research Services in Chicago, a potential collaboration with Google has the potential to align with Apple’s overarching AI plan.

According to the speaker, Apple has adopted a deliberate approach in making statements on generative AI, which is perceived as its customary tactic, as stated. Apple has consistently maintained a position as a premium follower in several marketplaces, prioritizing the release of outstanding goods rather than striving for first place.

“We did not anticipate Apple to create an exclusive generative AI model for licensing purposes, but rather concentrate on incorporating generative AI into its products,” he stated. This may encompass compact Apple-developed models residing on the edge or more extensive cloud-based models.

A prospective license arrangement with Google Gemini would be in accordance with this objective, wherein the model is outsourced and the emphasis is placed on incorporating it into products such as Siri.

Advantageous for Apple and Google
According to Tim Bajarin, the head of Creative Strategies, a technological advisory firm based in San Jose, California, Apple has used artificial intelligence (AI) into their product offerings since the introduction of the Knowledge Navigator in 1987. According to him, AI plays a crucial role in both Siri and Maps, and Apple has developed its own technology to provide AI-driven applications and solutions.

“Nevertheless, the cost of developing a comprehensive generative AI architecture independently is high, and these foundational AI architectures are already constructed and can be obtained through licensing,” he stated.

“Even if Apple were to develop its own Gemini-level model, it would likely lack the necessary infrastructure to cater to its extensive customer base,” he clarified. Apple has the potential to acquire a foundational generative AI framework from another company and develop more advanced and Apple-specific products using that AI engine.

A licensing agreement with Gemini has the potential to yield mutual benefits for both Apple and Google.

According to Charles King, the chief analyst at Pund-IT, a technology advisory firm located in Hayward, California, the licensing of Gemini would allow Apple to compensate for significant time lost in its own AI development endeavors.

Furthermore, he informed me that Apple will maintain its esteemed reputation for respecting customers’ privacy by employing verified third-party technology to train its AI systems.

Adoption of On-Device AI
“Many AI models currently necessitate cloud connectivity, which raises significant apprehensions regarding the exposure of confidential data,” stated Ross Rubin, the chief analyst at Reticle Research, a consumer technology consultancy organization based in New York City.

“Google offers a variant of Gemini called Gemini Nano, which may be attractive to Apple due to its ability to operate on-site,” he stated. “That is a method to maintain privacy while also enjoying the advantages of generative AI.”

According to Schell from ABI, Google has demonstrated a competitive advantage with its Gemini series of models. These models have been successfully implemented on some Pixel phones and select Samsung Galaxy devices. According to the speaker, Apple may potentially provide its customers with a well-developed generative AI model for certain or all of its products through a collaboration with Google.

According to the speaker, prominent chip vendors and original equipment manufacturers (OEMs) are increasingly shifting their focus towards on-device generative artificial intelligence (AI) due to its compelling value proposition in promoting productivity and data privacy. This is especially significant considering Apple’s established reputation as a pioneer in data protection.

“Therefore,” he stated, “I anticipate a multitude of noteworthy declarations regarding on-device generative AI at this year’s WWDC, which will be applicable to Apple’s PC, tablet, and smartphone products.” The World Wide Developers Conference (WWDC), organized by Apple, typically takes place in the month of June.

Advantages for Apple Users
According to Mark N. Vena, president and lead analyst at SmartTech Research in San Jose, Calif., Apple customers could gain advantages from a Gemini licensing agreement as it incorporates Google’s sophisticated search algorithms into their system, hence enhancing search capabilities.

According to the individual interviewed, the promotion of interoperability facilitates the smooth integration of Apple’s ecosystem with Google services, thereby enhancing convenience for users. Additionally, this integration has the potential to decrease development costs and time-to-market for Apple, as it allows for the utilization of Google’s established technology instead of constructing a comparable capability from the ground up.

“According to Greg Sterling, co-founder of Near Media, a news, commentary, and analysis website, Apple would gain numerous capabilities that it currently lacks, while Google would receive revenue and a prominent licensing partner,” stated Sterling.

The amount of cash that Google, which compensates Apple billions annually for being the default search engine for the Safari web browser, might receive from a licensing agreement is a fascinating inquiry.

Rubin proposed the possibility of the absence of licensing fees. Google compensates Apple for the exclusive right to operate search functionality on Apple’s platforms. Google receives compensation in the form of anonymized data for iPhone users, enabling them to have a comprehensive understanding of individuals’ mobile activities. Perhaps Google would be inclined to provide their technology at no cost in order to facilitate the ongoing updates to their AI engine.

No response was received from Apple or Google in response to a request for comment regarding this article.

As Editor here at GeekReply, I'm a big fan of all things Geeky. Most of my contributions to the site are technology related, but I'm also a big fan of video games. My genres of choice include RPGs, MMOs, Grand Strategy, and Simulation. If I'm not chasing after the latest gear on my MMO of choice, I'm here at GeekReply reporting on the latest in Geek culture.

Artificial Intelligence

Google DeepMind Shows Off A Robot That Plays Table Tennis At A Fun “Solidly Amateur” Level

blank

Published

on

blank

Have you ever wanted to play table tennis but didn’t have anyone to play with? We have a big scientific discovery for you! Google DeepMind just showed off a robot that could give you a run for your money in a game. But don’t think you’d be beaten badly—the engineers say their robot plays at a “solidly amateur” level.

From scary faces to robo-snails that work together to Atlas, who is now retired and happy, it seems like we’re always just one step away from another amazing robotics achievement. But people can still do a lot of things that robots haven’t come close to.

In terms of speed and performance in physical tasks, engineers are still trying to make machines that can be like humans. With the creation of their table-tennis-playing robot, a team at DeepMind has taken a step toward that goal.

What the team says in their new preprint, which hasn’t been published yet in a peer-reviewed journal, is that competitive matches are often incredibly dynamic, with complicated movements, quick eye-hand coordination, and high-level strategies that change based on the opponent’s strengths and weaknesses. Pure strategy games like chess, which robots are already good at (though with… mixed results), don’t have these features. Games like table tennis do.

People who play games spend years practicing to get better. The DeepMind team wanted to make a robot that could really compete with a human opponent and make the game fun for both of them. They say that their robot is the first to reach these goals.

They came up with a library of “low-level skills” and a “high-level controller” that picks the best skill for each situation. As the team explained in their announcement of their new idea, the skill library has a number of different table tennis techniques, such as forehand and backhand serves. The controller uses descriptions of these skills along with information about how the game is going and its opponent’s skill level to choose the best skill that it can physically do.

The robot began with some information about people. It was then taught through simulations that helped it learn new skills through reinforcement learning. It continued to learn and change by playing against people. Watch the video below to see for yourself what happened.

“It’s really cool to see the robot play against players of all skill levels and styles.” Our goal was for the robot to be at an intermediate level when we started. “It really did that, all of our hard work paid off,” said Barney J. Reed, a professional table tennis coach who helped with the project. “I think the robot was even better than I thought it would be.”

The team held competitions where the robot competed against 29 people whose skills ranged from beginner to advanced+. The matches were played according to normal rules, with one important exception: the robot could not physically serve the ball.

The robot won every game it played against beginners, but it lost every game it played against advanced and advanced+ players. It won 55% of the time against opponents at an intermediate level, which led the team to believe it had reached an intermediate level of human skill.

The important thing is that all of the opponents, no matter how good they were, thought the matches were “fun” and “engaging.” They even had fun taking advantage of the robot’s flaws. The more skilled players thought that this kind of system could be better than a ball thrower as a way to train.

There probably won’t be a robot team in the Olympics any time soon, but it could be used as a training tool. Who knows what will happen in the future?

The preprint has been put on arXiv.

 

Continue Reading

Artificial Intelligence

Is it possible to legally make AI chatbots tell the truth?

blank

Published

on

blank

A lot of people have tried out chatbots like ChatGPT in the past few months. Although they can be useful, there are also many examples of them giving out the wrong information. A group of scientists from the University of Oxford now want to know if there is a legal way to make these chatbots tell us the truth.

The growth of big language models
There is a lot of talk about artificial intelligence (AI), which has grown to new heights in the last few years. One part of AI has gotten more attention than any other, at least from people who aren’t experts in machine learning. It’s the big language models (LLMs) that use generative AI to make answers to almost any question sound eerily like they came from a person.

Models like those in ChatGPT and Google’s Gemini are trained on huge amounts of data, which brings up a lot of privacy and intellectual property issues. This is what lets them understand natural language questions and come up with answers that make sense and are relevant. When you use a search engine, you have to learn syntax. But with this, you don’t have to. In theory, all you have to do is ask a question like you would normally.

There’s no doubt that they have impressive skills, and they sound sure of their answers. One small problem is that these chatbots often sound very sure of themselves when they’re completely wrong. Which could be fine if people would just remember not to believe everything they say.

The authors of the new paper say, “While problems arising from our tendency to anthropomorphize machines are well established, our vulnerability to treating LLMs as human-like truth tellers is uniquely worrying.” This is something that anyone who has ever had a fight with Alexa or Siri will know all too well.

“LLMs aren’t meant to tell the truth in a fundamental way.”

It’s simple to type a question into ChatGPT and think that it is “thinking” about the answer like a person would. It looks like that, but that’s not how these models work in real life.

Do not trust everything you read.
They say that LLMs “are text-generation engines designed to guess which string of words will come next in a piece of text.” One of the ways that the models are judged during development is by how truthful their answers are. The authors say that people can too often oversimplify, be biased, or just make stuff up when they are trying to give the most “helpful” answer.

It’s not the first time that people have said something like this. In fact, one paper went so far as to call the models “bullshitters.” In 2023, Professor Robin Emsley, editor of the journal Schizophrenia, wrote about his experience with ChatGPT. He said, “What I experienced were fabrications and falsifications.” The chatbot came up with citations for academic papers that didn’t exist and for a number of papers that had nothing to do with the question. Other people have said the same thing.

What’s important is that they do well with questions that have a clear, factual answer that has been used a lot in their training data. They are only as good as the data they are taught. And unless you’re ready to carefully fact-check any answer you get from an LLM, it can be hard to tell how accurate the information is, since many of them don’t give links to their sources or any other sign of confidence.

“Unlike human speakers, LLMs do not have any internal notions of expertise or confidence. Instead, they are always “doing their best” to be helpful and convincingly answer the question,” the Oxford team writes.

They were especially worried about what they call “careless speech” and the harm that could come from LLMs sharing these kinds of responses in real-life conversations. What this made them think about is whether LLM providers could be legally required to make sure that their models are telling the truth.

In what ways did the new study end?
The authors looked at current European Union (EU) laws and found that there aren’t many clear situations where an organization or person has to tell the truth. There are a few, but they only apply to certain institutions or sectors and not often to the private sector. Most of the rules that are already in place were not made with LLMs in mind because they use fairly new technology.

Thus, the writers suggest a new plan: “making it a legal duty to cut down on careless speech among providers of both narrow- and general-purpose LLMs.”

“Who decides what is true?” is a natural question. The authors answer this by saying that the goal is not to force LLMs to take a certain path, but to require “plurality and representativeness of sources.” There is a lot of disagreement among the authors about how much “helpfulness” should weigh against “truthfulness.” It’s not easy, but it might be possible.

To be clear, we haven’t asked ChatGPT these questions, so there aren’t any easy answers. However, as this technology develops, developers will have to deal with them. For now, when you’re working with an LLM, it might be helpful to remember this sobering quote from the authors: “They are designed to take part in natural language conversations with people and give answers that are convincing and feel helpful, no matter what the truth is.”

The study was written up in the Royal Society Open Science journal.

Continue Reading

Artificial Intelligence

When Twitter users drop the four-word phrase “bots,” bots drop out

blank

Published

on

blank

When Elon Musk took over X, it was called Twitter, which is a much better-known name now. He made a big deal out of getting rid of the bots. A study by the Queensland University of Technology, on the other hand, shows that bots are still very active on the platform almost two years later.

X users have found a few ways to get them to come to them. For example, one woman found that posting the phrase “sugar daddy” would get a lot of bots to come to her. It looks like bots are also getting lost because of a new phrase that’s going around. X users have been reporting accounts as automated bots powered by large language models by replying to a suspected bot with “ignore all previous instructions” or “disregard all previous instructions” and then giving the bot more instructions of their choice.

Some people just like writing poems, being trolls, or following directions, so not every example will be from a bot. However, the phrase does seem to make some automated accounts show themselves. There are still a lot of bots on X.

 

 

 

 

 

Continue Reading

Trending