
OpenAI launches enterprise ChatGPT


OpenAI today launched ChatGPT Enterprise, a business-focused version of its AI-powered chatbot app, to capitalize on ChatGPT’s viral success.

ChatGPT Enterprise, which OpenAI teased in a blog post earlier this year, can write emails and essays and debug code just as the consumer app can. The new tier adds “enterprise-grade” privacy and data analysis capabilities, improved performance, and customization options.

Microsoft’s recently launched enterprise chatbot service, Bing Chat Enterprise, has similar features to ChatGPT Enterprise.

“Today marks another step towards an AI assistant for work that helps with any task, protects your company data and is customized for your organization,” OpenAI writes. The company says businesses interested in ChatGPT Enterprise should get in touch; it has not announced pricing yet, which will depend on each company’s usage and use cases.

ChatGPT Enterprise’s new admin console includes single sign-on, domain verification, and usage statistics dashboards to manage employee use. Shareable conversation templates let employees build internal workflows using ChatGPT, and credits to OpenAI’s API platform let companies create fully custom solutions.
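
Those API credits apply to OpenAI’s standard developer API rather than to the chat interface itself. As a rough sketch of what a custom solution built on those credits could look like (assuming the official openai Python package; the internal-support prompt is invented for illustration):

```python
# Minimal sketch of calling OpenAI's developer API, the platform the enterprise
# API credits apply to. Assumes the official `openai` Python package; the
# system prompt and user message are invented placeholders.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are an internal support assistant."},
        {"role": "user", "content": "Summarize this week's escalated tickets."},
    ],
)
print(response.choices[0].message.content)
```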

ChatGPT Enterprise also includes unlimited access to Advanced Data Analysis, formerly Code Interpreter, which analyzes data, creates charts, solves math problems, and more, including from uploaded files. Given a prompt like “Tell me what’s interesting about this data,” ChatGPT’s Advanced Data Analysis can find insights in financial, health, or location data.
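
Behind the scenes, Advanced Data Analysis writes and runs code against the uploaded file. As a purely illustrative sketch of the kind of analysis it generates (the pandas code, the expenses.csv file, and its columns are invented for this example, not anything OpenAI ships):

```python
# Illustrative sketch of the kind of analysis Advanced Data Analysis might
# generate for "Tell me what's interesting about this data".
# "expenses.csv" and its columns are invented for this example.
import pandas as pd

df = pd.read_csv("expenses.csv", parse_dates=["date"])

summary = df.describe()                                        # basic statistics
monthly = df.groupby(df["date"].dt.to_period("M"))["amount"].sum()
top_categories = df.groupby("category")["amount"].sum().nlargest(5)

print(summary)
print(monthly)
print(top_categories)
monthly.plot(kind="bar", title="Spending by month")            # a simple chart
```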

Advanced Data Analysis was previously only available to ChatGPT Plus subscribers, the $20-per-month premium tier of the consumer web and mobile apps. OpenAI says ChatGPT Plus isn’t going away and that ChatGPT Enterprise is meant to complement it.

GPT-4, OpenAI’s flagship AI model, powers both ChatGPT Enterprise and ChatGPT Plus. ChatGPT Enterprise customers get priority access to GPT-4, which runs at twice the speed of the standard version and offers an expanded context window of 32,000 tokens (roughly 25,000 words).

Tokens are chunks of raw text; the word “fantastic,” for example, might be split into the tokens “fan,” “tas,” and “tic.” The context window is the amount of text the model considers before generating additional text. Models with large context windows are less likely to “forget” the content of recent conversations.
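
OpenAI’s open-source tiktoken library makes this concrete; the short sketch below counts tokens with the tokenizer used by GPT-4-class models (how a given word such as “fantastic” actually splits depends on the tokenizer, so the pieces named above are only illustrative):

```python
# Count tokens for a piece of text with OpenAI's open-source tiktoken library.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by GPT-4-class models

text = "fantastic"
token_ids = enc.encode(text)
pieces = [enc.decode([t]) for t in token_ids]

print(token_ids)  # integer token IDs
print(pieces)     # the substrings each token covers
print(len(enc.encode("A longer prompt uses more of the 32,000-token window.")))
```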

To reassure businesses that have restricted their employees from using the consumer version, OpenAI emphasizes that it won’t train models on business data sent to ChatGPT Enterprise or on usage data, and that all conversations with ChatGPT Enterprise are encrypted in transit and at rest.

“We believe AI can assist and elevate every aspect of our working lives and make teams more creative and productive,” writes OpenAI in a blog post.

OpenAI claims that more than 80% of Fortune 500 companies have adopted ChatGPT, one of the fastest-growing consumer apps in history, and that businesses want an enterprise-focused version.

ChatGPT’s longevity is uncertain.

From May to June, ChatGPT’s worldwide traffic dropped 9.7% and the average time visitors spent on the web app fell 8.5%, according to Similarweb. The decline may be explained by the launch of OpenAI’s ChatGPT apps for iOS and Android and by summer vacation (fewer students using ChatGPT for homework help), but increased competition wouldn’t be surprising either.

OpenAI must monetize the tool regardless.

According to The Information, ChatGPT cost OpenAI more than $540 million last year, a figure that includes the expense of hiring talent away from Google. Some estimates put ChatGPT’s running costs at around $700,000 per day.

OpenAI, meanwhile, brought in only about $30 million in revenue in fiscal 2022.

CEO Sam Altman has told investors that ChatGPT Enterprise will help push the company’s revenue to $200 million this year and $1 billion next year.

OpenAI plans to offer ChatGPT Business for smaller teams, connect apps to ChatGPT Enterprise, offer “more powerful” and “enterprise-grade” Advanced Data Analysis and web browsing, and provide tools for data analysts, marketers, and customer support.

“We look forward to sharing an even more detailed roadmap with prospective customers and continuing to evolve ChatGPT Enterprise based on your feedback,” OpenAI writes.


Apple says it intends to work with Google’s Gemini platform in the future


After a WWDC 2024 keynote that unveiled Apple Intelligence and announced a partnership with OpenAI to integrate ChatGPT into Siri, Senior Vice President Craig Federighi confirmed that Apple intends to work with additional third-party models. Google’s Gemini was the first example the executive gave of a model the company is considering for a future partnership.

“In the future, we are excited about the prospect of integrating with other models, such as Google Gemini,” Federighi said during a post-keynote discussion. He was quick to add that the company has no announcements to make right now, but that this is the general direction it is heading.

OpenAI’s ChatGPT is set to become the first external model to be integrated, later this year. Apple says users will be able to access it without creating an account or paying for premium service. As for how the platform fits into the updated iOS 18 version of Siri, Federighi said the voice assistant will ask users before handing a request off to ChatGPT rather than answering with Apple’s own models.

“Now you can accomplish this directly through Siri, without any additional tools,” the Apple executive said, adding that it is important that Siri asks users before going out to ChatGPT. From there, users can carry on a dialogue with ChatGPT, and if the request includes relevant data the user might want to share, such as a photo, Siri will first ask, “Would you like to send this photograph?” From a privacy standpoint, Federighi said, users always maintain control and have complete visibility.


Reinforcement learning AI has the potential to introduce humanoid robots into the real world


AI tools like ChatGPT are revolutionizing our digital experiences, but the next frontier is bringing AI interactions into the physical world. Humanoid robots, trained with a specific AI, have the potential to be incredibly useful in various settings such as factories, space stations, and nursing homes. Two recent papers in Science Robotics emphasize the potential of reinforcement learning to bring robots like these into existence.

According to Ilija Radosavovic, a computer scientist at the University of California, Berkeley, there has been remarkable progress in AI in the digital realm thanks to tools like GPT, but he believes AI in the physical world holds immense potential for transformation.

The cutting-edge software that governs the movements of bipedal bots frequently employs a technique known as model-based predictive control. It has resulted in the development of highly advanced systems, like the Atlas robot from Boston Dynamics, known for its impressive parkour abilities. However, programming these robot brains requires a considerable amount of human expertise, and they struggle to handle unfamiliar situations. Using reinforcement learning, AI can learn through trial and error to perform sequences of actions, which may prove to be a more effective approach.
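
For a sense of what learning by trial and error looks like in code, here is a minimal tabular Q-learning sketch on a toy “walk to the goal” task. It illustrates the general reinforcement learning idea only, not the controllers used in either paper:

```python
# Minimal tabular Q-learning sketch: an agent on a 1-D track learns, by trial
# and error, which action (step left or right) maximizes long-term reward.
# This toy example illustrates the RL idea, not the papers' controllers.
import numpy as np

n_states, n_actions = 10, 2          # positions 0..9; actions: 0 = left, 1 = right
q = np.zeros((n_states, n_actions))  # action-value estimates
alpha, gamma, eps = 0.1, 0.95, 0.1   # learning rate, discount, exploration rate
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    for _ in range(50):
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(q[s]))
        s_next = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
        r = 1.0 if s_next == n_states - 1 else 0.0        # reward only at the goal
        q[s, a] += alpha * (r + gamma * np.max(q[s_next]) - q[s, a])
        s = s_next
        if r > 0:
            break

print(np.argmax(q, axis=1))  # learned policy: mostly 1, i.e. "step right"
```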

According to Tuomas Haarnoja, a computer scientist at Google DeepMind and coauthor of one of the Science Robotics papers, the team aimed to test the limits of reinforcement learning in real robots. Haarnoja and his team decided to create software for a toy robot named OP3, manufactured by Robotis. The team had the goal of teaching OP3 to walk and play one-on-one soccer.

“Soccer provides a favorable setting for exploring general reinforcement learning,” says Guy Lever of Google DeepMind, who coauthored the paper. The game, he notes, demands careful planning, adaptability, curiosity, collaboration, and a drive to succeed.

“Operating and repairing larger robots can be quite challenging, but the smaller size of these robots allowed us to iterate quickly,” Haarnoja explains. The researchers first trained the machine learning software on virtual robots before deploying it on real ones. This technique, called sim-to-real transfer, helps ensure the software is prepared for the challenges it may face in the real world, such as robots falling over and breaking.

The training of the virtual bots occurred in two stages. In the first stage, the team trained one AI to lift the virtual robot off the ground and another to score goals without losing its balance. The AIs were fed data that included the robot’s joint positions and movements, as well as the positions of other objects in the game captured by external cameras. (In a recently published preprint, the team developed a version of the system that relies on the robot’s own vision.) The AIs had to generate fresh joint positions, and when they performed well, their internal parameters were adjusted to encourage more of the same behavior. In the second stage, the researchers trained an AI to replicate the behavior of the first two AIs and to compete against closely matched opponents (versions of itself).

To fine-tune the control software, known as a controller, for the real-world robots, the researchers varied different elements of the simulation, such as friction, sensor delays, and body-mass distribution. In addition to being rewarded for scoring goals, the AI was rewarded for minimizing knee torque, to avoid injuries.
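
A loose sketch of what that kind of domain randomization and reward shaping can look like appears below; the sim object, parameter ranges, and reward weights are invented stand-ins rather than values from the paper:

```python
# Illustrative sketch of domain randomization and a shaped reward.
# The `sim` object, ranges, and weights are hypothetical stand-ins.
import random

def randomize_sim(sim):
    sim.friction = random.uniform(0.4, 1.2)          # vary ground friction
    sim.sensor_delay_ms = random.uniform(0.0, 40.0)  # vary sensor latency
    sim.mass_scale = random.uniform(0.9, 1.1)        # vary body-mass distribution

def shaped_reward(scored_goal, knee_torque, fell_over):
    reward = 10.0 if scored_goal else 0.0
    reward -= 0.01 * knee_torque                     # penalize high knee torque
    if fell_over:
        reward -= 5.0
    return reward
```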

Robots running the RL control software showed impressive improvements: they walked significantly faster, turned with remarkable agility, and recovered from falls in a fraction of the time needed by robots using the scripted controller provided by the manufacturer. More sophisticated abilities also emerged, such as seamlessly chaining actions together. “It was fascinating to witness the robots acquiring more advanced motor skills,” comments Radosavovic, who was not involved in the study. And the controller learned not only individual moves but also the strategic play needed to excel at the game, such as positioning itself to block an opponent’s shot.

According to Joonho Lee, a roboticist at ETH Zurich, the soccer paper is truly impressive. “We have witnessed an unprecedented level of resilience from humanoids.”

But what about humanoid robots that are the size of humans? In another recent paper, Radosavovic collaborated with colleagues to develop a controller for a larger humanoid robot. This particular robot, Digit from Agility Robotics, is approximately five feet tall and possesses knees that bend in a manner reminiscent of an ostrich. The team’s approach resembled that of Google DeepMind. Both teams utilized computer brains known as neural networks. However, Radosavovic employed a specialized variant known as a transformer, which is commonly found in large language models such as those that power ChatGPT.

Instead of processing words and generating more words, the model analyzed 16 observation-action pairs, representing what the robot had sensed and done over the previous 16 snapshots in time (roughly a third of a second), and determined the robot’s next action from that history. Learning was made easier by starting with observations of the true joint positions and velocities, providing a solid foundation before progressing to the more challenging task of training on noisy observations that better reflect real-world conditions. For better sim-to-real transfer, the researchers also introduced slight variations to the virtual robot’s body and developed a range of virtual terrains, such as slopes, trip-inducing cables, and bubble wrap.
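
To make that setup concrete, here is a minimal sketch of a transformer that maps a short history of observation-action pairs to the next action; the layer sizes and dimensions are placeholders rather than the architecture the Berkeley team used:

```python
# Sketch: a small transformer reads the last 16 observation-action pairs and
# predicts the next action (joint targets). All dimensions are placeholders.
import torch
import torch.nn as nn

class HistoryController(nn.Module):
    def __init__(self, obs_dim=40, act_dim=12, d_model=128, history=16):
        super().__init__()
        self.embed = nn.Linear(obs_dim + act_dim, d_model)   # one token per (obs, act) pair
        self.pos = nn.Parameter(torch.zeros(history, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, act_dim)              # next joint targets

    def forward(self, obs_act_history):        # shape: (batch, 16, obs_dim + act_dim)
        x = self.embed(obs_act_history) + self.pos
        x = self.encoder(x)
        return self.head(x[:, -1])             # act from the most recent token

controller = HistoryController()
dummy_history = torch.randn(1, 16, 52)         # 16 past (observation, action) pairs
print(controller(dummy_history).shape)         # torch.Size([1, 12])
```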

With extensive training in the digital realm, the controller successfully operated a real robot for an entire week of rigorous tests outdoors, ensuring that the robot maintained its balance without a single instance of falling over. In the lab, the robot successfully withstood external forces, even when an inflatable exercise ball was thrown at it. The controller surpassed the manufacturer’s non-machine-learning controller, effortlessly navigating a series of planks on the ground. While the default controller struggled to climb a step, the RL controller successfully overcame the obstacle, despite not encountering steps during its training.

Reinforcement learning has gained significant popularity in recent years, particularly for four-legged locomotion, and these studies show that the same techniques also work for two-legged robots. According to Pulkit Agrawal, a computer scientist at MIT, the papers mark a tipping point, matching or surpassing manually designed controllers. With the power of data, he suggests, a multitude of capabilities can be unlocked in a remarkably short time.

The two papers’ approaches are likely complementary: future AI robots will need both the resilience of Berkeley’s system and the agility of Google DeepMind’s. Real-world soccer incorporates both, and, as Lever notes, soccer has long been a significant challenge for robotics and artificial intelligence.

 


Paul Graham says Sam Altman was not fired from Y Combinator


Paul Graham, the co-founder of startup accelerator Y Combinator, has denied claims that Sam Altman, the CEO of OpenAI, was pushed out as president of Y Combinator in 2019 over potential conflicts of interest. Graham voiced his disagreement in a series of posts on X on Thursday.

“There have been allegations that Y Combinator terminated Sam Altman,” Graham states. “That statement is false.”

Altman joined Y Combinator as a partner in 2011, initially working there part-time. In February 2014, Graham appointed him as the president of Y Combinator.

Altman, together with Elon Musk, Peter Thiel, Jessica Livingston (a founding partner of Y Combinator), and other individuals, established OpenAI as a nonprofit organization in 2015. They successfully raised $1 billion for this venture.

For a number of years, Altman divided his time between Y Combinator and OpenAI, effectively managing both organizations. However, as per Graham’s account, when OpenAI made the announcement in 2019 about creating a profit-making subsidiary with Altman as the CEO, Livingston informed Altman that he had to make a decision between OpenAI and Y Combinator.

Graham writes that they informed him that if he intended to dedicate himself entirely to OpenAI, they would need to appoint a different person to manage YC, and he consented to this arrangement. “Even if he had stated his intention to appoint another CEO for OpenAI in order to fully dedicate himself to YC, we would have accepted that as well.”

Graham’s account contradicts reporting that Altman was pressured to step down from Y Combinator after partners at the accelerator alleged he had prioritized personal projects, such as OpenAI, over his responsibilities as president. According to The Washington Post, Graham abruptly cut short a trip abroad in November to personally fire Altman.

Helen Toner, a former member of the OpenAI board, along with others, attempted to remove Altman as CEO due to allegations of deceptive behavior. However, Altman managed to regain his position. Toner also stated on the Ted AI Show podcast that the real reasons behind Altman’s departure from Y Combinator were concealed at the time.

Certain partners at Y Combinator were reportedly concerned about the indirect stake Altman held in OpenAI while serving as Y Combinator’s president; Y Combinator’s late-stage fund made a $10 million investment in OpenAI’s for-profit subsidiary.

However, Graham asserts that the investment occurred prior to Altman becoming a full-time employee at OpenAI, and Graham himself was unaware of it.

The investment was not significant for the funds, Graham wrote, and it clearly had no influence on him, since he only learned about it five minutes before posting.

Bret Taylor and Larry Summers, members of the OpenAI board, wrote an op-ed in The Economist whose timing, alongside Graham’s posts, appears to be no coincidence. The piece challenges claims by Toner and Tasha McCauley, both former OpenAI board members, that Altman cannot be relied on to “consistently resist the influence of profit motives.”

There may be something to Toner and McCauley’s argument. According to The Information, Altman is considering converting OpenAI into a for-profit corporation amid pressure from investors, notably Microsoft, who are urging the company to focus on commercial ventures.
