Artificial Intelligence

Will artificial intelligence really pose a threat to humanity some day?

The threat of rogue artificial intelligence could be real

I’m a real sucker for movies based on the idea of advanced artificial intelligence and its potential applications in the future. Unfortunately, most blockbuster sci-fi movies, including classics like Terminator or The Matrix, depict a very bleak future in which humanity is either enslaved or slowly driven extinct by our own creations. But what are the chances of this actually happening? It’s something I often wonder about, and I’m certain many others can say the same. Naysayers may dismiss these concerns as irrational or ridiculous, but quite the opposite is true. Stories about creators eventually being destroyed by their own creations can be found throughout human history. The ancient Greeks believed the Titans were overthrown by Zeus and their other children, so why would it be strange to think that mankind could suffer the same fate at the hands of machines powered by artificial intelligence?

Ironically, the Greeks also ended up destroying – or rather forgetting – the very gods they created. But let’s get back to the present. Ordinary people with little knowledge of artificial intelligence are not the only ones worried that things may one day get out of hand. Renowned scientists and engineers working in the field also agree that AI-driven machines could pose a problem if left unchecked. Among them is none other than Professor Stephen Hawking, who had quite a few things to say on the matter back in December. Professor Hawking believes things could quickly get out of hand if we’re not cautious, although he agrees that artificial intelligence is unlikely to pose a threat to us in the near future. Seeing as AI is currently nowhere near capable of creating a Terminator, I think we can breathe easy for at least a few more years.

But what about the future? Well, contrary to what Hollywood teaches us, humans are actually responsible and do take precautions once in a blue moon. An organization called the Future of Life Institute recently published an open letter that aims to educate people about the benefits of artificial intelligence as well as its potential dangers. Many well-known personalities have already signed it, including Stephen Hawking and Elon Musk. The CEO of Tesla and SpaceX went a step further, donating $10 million to the institute earlier today. The money will go towards research and development on ways of making artificial intelligence safer, just in case. Assuming you don’t have such ridiculously large sums of money lying around to donate, you can still help out by signing the open letter, which can be found right here.

As for the question found in the title of this article: many people do believe that artificial intelligence could pose a real threat to us in the future, and luckily, some of them are even working on solutions to prevent that from happening. But while we’re unlikely to see man-hunting robots any time soon, there’s no denying that AI technology is becoming a very real part of our lives. From Microsoft’s Project Adam and Google’s self-driving cars to the inevitable sentient Japanese robot bound to be announced any day now, more and more companies are finding applications for artificial intelligence, and they likely haven’t even scratched the surface yet.

Artificial Intelligence

Apple says it intends to collaborate with Google’s Gemini platform in the future

After delivering the WWDC 2024 keynote, which unveiled Apple Intelligence and announced a collaboration with OpenAI to integrate ChatGPT into Siri, Senior Vice President Craig Federighi confirmed that Apple intends to work with more third-party models. The first example the executive offered was Google, one of the companies Apple has been considering for a potential partnership.

“In the future, we are excited about the prospect of integrating with other models, such as Google Gemini,” Federighi said during a post-keynote discussion. He was quick to add that the company currently has no announcements to make, but that this is the overall direction it is heading in.

OpenAI’s ChatGPT is set to become the first external model to be integrated later this year. Apple says users will be able to access it without creating an account or paying for premium services. As for how the platform fits into the updated version of Siri in iOS 18, Federighi confirmed that the voice assistant will ask users for permission before handing a request off to ChatGPT rather than handling it with Apple’s own internal models.

“Now you can accomplish this task directly through Siri, without the need for any additional tools,” the Apple executive said. “Siri will ask you before going to ChatGPT. Then you can engage in a dialogue with ChatGPT, and if there is any pertinent data in your request that you wish to provide to ChatGPT, we will ask, ‘Would you like to send this photograph?’ From a privacy standpoint, you always maintain control and have complete visibility.”

Artificial Intelligence

Reinforcement learning could bring humanoid robots into the real world

AI tools like ChatGPT are revolutionizing our digital experiences, but the next frontier is bringing AI interactions into the physical world. Humanoid robots, trained with the right AI, could be incredibly useful in settings such as factories, space stations, and nursing homes. Two recent papers in Science Robotics emphasize the potential of reinforcement learning to bring robots like these into existence.

According to Ilija Radosavovic, a computer scientist at the University of California, Berkeley, there has been remarkable advancement in AI in the digital realm thanks to tools like GPT, but he believes AI in the physical world holds immense potential for transformation.

The cutting-edge software that governs the movements of bipedal robots frequently employs a technique known as model-based predictive control. It has produced highly advanced systems, like the Atlas robot from Boston Dynamics, known for its impressive parkour abilities. However, programming these robot brains requires a considerable amount of human expertise, and they struggle to handle unfamiliar situations. Reinforcement learning, in which an AI learns sequences of actions through trial and error, may prove to be a more effective approach.
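The trial-and-error idea above can be sketched in a few lines. The toy task, reward values, and tabular Q-learning method below are illustrative assumptions of mine, not the method used in the papers: an agent on a short one-dimensional track learns, purely from reward, that stepping right reaches the goal.

```python
import random

# Toy "walk to the goal" task: the agent starts at position 0 and must
# reach position 4. Actions: 0 = step left, 1 = step right. This is an
# illustrative tabular Q-learning loop, far simpler than the deep RL
# used for real robots, but the trial-and-error principle is the same.

N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # value of each action in each state
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Trial and error: mostly exploit what worked, sometimes explore.
            if rng.random() < EPSILON:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda x: q[s][x])
            s2, r, done = step(s, a)
            # Reinforce actions that led toward reward.
            q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# Greedy policy after training: step right from every non-goal state.
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(N_STATES - 1)]
print(policy)
```

No reward is ever hand-scripted for individual joint motions; the agent discovers the successful sequence itself, which is exactly what makes the approach attractive for behaviors too complex to program by hand.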

According to Tuomas Haarnoja, a computer scientist at Google DeepMind and coauthor of one of the Science Robotics papers, the team aimed to test the limits of reinforcement learning in real robots. Haarnoja and his team decided to create software for a toy robot named OP3, manufactured by Robotis. The team had the goal of teaching OP3 to walk and play one-on-one soccer.

“Soccer provides a favorable setting for exploring general reinforcement learning,” states Guy Lever of Google DeepMind, who coauthored the paper. It demands careful planning, adaptability, curiosity, collaboration, and a drive to succeed.

“Operating and repairing larger robots can be quite challenging, but the smaller size of these robots allowed us to iterate quickly,” Haarnoja explains. The researchers first trained the machine learning software on virtual robots before deploying it on real ones. This technique, called sim-to-real transfer, helps ensure that the software is prepared for the challenges it may face in the real world, such as the possibility of robots falling over and breaking.

The virtual bots were trained in two stages. In the first, the team trained one AI to lift the virtual robot off the ground and another to score goals without losing its balance. The AIs were fed data including the robot’s joint positions and movements, as well as the positions of other objects in the game captured by external cameras. (In a recently published preprint, the team developed a version of the system that relies on the robot’s own vision.) The AIs had to output fresh joint positions, and when these worked well, their internal parameters were updated to encourage more of the same behavior. In the second stage, the researchers trained an AI to imitate the first two AIs and to improve by playing against opponents of similar skill: versions of itself.

To fine-tune the control software, known as a controller, for the real-world robots, the researchers varied different elements of the simulation, such as friction, sensor delays, and body-mass distribution. The AI was also rewarded not only for scoring goals but for minimizing knee torque, to prevent injuries.
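Varying the simulation in this way is commonly called domain randomization. The sketch below shows the general pattern; the parameter names and ranges are hypothetical stand-ins, not values from the DeepMind paper:

```python
import random

# Illustrative domain-randomization sketch for sim-to-real transfer.
# Each training episode runs in a simulator whose physics are slightly
# perturbed, so the learned controller cannot overfit to one exact
# model of the robot. All ranges below are made-up examples.

def randomized_sim_params(rng):
    return {
        "friction":        rng.uniform(0.4, 1.0),   # ground-contact friction
        "sensor_delay_ms": rng.uniform(0.0, 40.0),  # observation latency
        "mass_scale":      rng.uniform(0.8, 1.2),   # body-mass perturbation
    }

def train_episode(params):
    # Placeholder for one simulated rollout under `params`; a real
    # pipeline would configure and step a physics engine here.
    return params

rng = random.Random(42)
for _ in range(3):
    params = randomized_sim_params(rng)
    train_episode(params)
    print({k: round(v, 2) for k, v in params.items()})
```

Because the controller only ever sees perturbed versions of the robot, the real hardware looks like just one more sample from the training distribution rather than an entirely new environment.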

Robots running the RL control software showed impressive improvements: they walked significantly faster, turned with remarkable agility, and recovered from falls in a fraction of the time taken by robots using the scripted controller provided by the manufacturer. More sophisticated abilities also emerged, such as seamlessly stringing actions together. “It was fascinating to witness the robots acquiring more advanced motor skills,” comments Radosavovic, who was not involved in the study. And the controller learned not only individual moves, but also the strategic thinking needed to excel at the game, such as positioning itself to block an opponent’s shot.

According to Joonho Lee, a roboticist at ETH Zurich, the soccer paper is truly impressive. “We have witnessed an unprecedented level of resilience from humanoids.”

But what about humanoid robots that are the size of humans? In another recent paper, Radosavovic collaborated with colleagues to develop a controller for a larger humanoid robot. This particular robot, Digit from Agility Robotics, is approximately five feet tall and possesses knees that bend in a manner reminiscent of an ostrich. The team’s approach resembled that of Google DeepMind. Both teams utilized computer brains known as neural networks. However, Radosavovic employed a specialized variant known as a transformer, which is commonly found in large language models such as those that power ChatGPT.

Instead of taking in words and generating more words, the model took in 16 observation-action pairs, representing what the robot had sensed and done over the previous 16 snapshots of time (spanning roughly a third of a second), and determined the robot’s next action from that history. Learning was made easier by initially observing the robot’s actual joint positions and velocities, providing a solid foundation before progressing to the harder task of handling noisy observations that better reflect real-world conditions. For better sim-to-real transfer, the researchers introduced slight variations to the virtual robot’s body and developed a range of virtual terrains, such as slopes, trip-inducing cables, and bubble wrap.
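The control loop just described can be sketched as a sliding window of observation-action pairs feeding a policy. The stub policy and dummy sensor values below are my own placeholders; the real system uses a trained transformer in place of the `policy` function:

```python
from collections import deque

# Sketch of a history-conditioned control loop: the policy sees the
# last 16 (observation, action) pairs, about a third of a second of
# history, and emits the next action. The policy here is a stand-in
# stub, not the actual transformer controller from the paper.

CONTEXT_LEN = 16

def policy(history):
    # A real controller would encode all 16 (observation, action)
    # tokens and decode target joint positions; this stub just scales
    # the most recent observation.
    latest_obs, _ = history[-1]
    return [x * 0.5 for x in latest_obs]

history = deque(maxlen=CONTEXT_LEN)  # sliding window, old pairs fall off
action = [0.0, 0.0]                  # initial action
for t in range(20):
    obs = [float(t), float(-t)]      # dummy sensor reading at timestep t
    history.append((obs, action))    # record what was sensed and done
    action = policy(history)         # next action from the history window

print(len(history))  # never exceeds CONTEXT_LEN
```

Keeping a short history rather than a single snapshot lets the controller infer velocities and delays implicitly, which is one reason sequence models are a natural fit here.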

After extensive training in the digital realm, the controller ran a real robot through an entire week of rigorous outdoor tests, with the robot keeping its balance without a single fall. In the lab, the robot withstood external forces, even when an inflatable exercise ball was thrown at it. The controller also surpassed the manufacturer’s non-machine-learning controller, effortlessly navigating a series of planks on the ground. And while the default controller struggled to climb a step, the RL controller managed it, despite never encountering steps during its training.

Reinforcement learning has gained significant popularity in recent years, particularly in the field of four-legged locomotion. Interestingly, these studies have also demonstrated the successful application of these techniques to two-legged robots. According to Pulkit Agrawal, a computer scientist at MIT, these papers have reached a tipping point by either matching or surpassing manually defined controllers. With the immense potential of data, a multitude of capabilities can be unlocked within a remarkably brief timeframe.

The two papers’ approaches are likely complementary. Future AI robots may need both the resilience of Berkeley’s system and the agility of Google DeepMind’s; real-world soccer incorporates both. As Lever notes, soccer has posed a significant challenge for robotics and artificial intelligence for a considerable period.

Artificial Intelligence

Paul Graham says Sam Altman was not fired from Y Combinator

Paul Graham, the co-founder of startup accelerator Y Combinator, refuted allegations that Sam Altman, the CEO of OpenAI, was forced to step down as president of Y Combinator in 2019 because of possible conflicts of interest. Graham expressed his disagreement in a series of posts on X on Thursday.

“There have been allegations that Y Combinator terminated Sam Altman,” Graham wrote. “That statement is false.”

Altman joined Y Combinator as a partner in 2011, initially working there part-time. In February 2014, Graham appointed him as the president of Y Combinator.

Altman, together with Elon Musk, Peter Thiel, Jessica Livingston (a founding partner of Y Combinator), and other individuals, established OpenAI as a nonprofit organization in 2015. They successfully raised $1 billion for this venture.

For a number of years, Altman divided his time between Y Combinator and OpenAI, effectively managing both organizations. However, as per Graham’s account, when OpenAI made the announcement in 2019 about creating a profit-making subsidiary with Altman as the CEO, Livingston informed Altman that he had to make a decision between OpenAI and Y Combinator.

Graham writes that they informed Altman that if he intended to dedicate himself entirely to OpenAI, they would need to appoint someone else to run YC, and he agreed to this arrangement. “Even if he had stated his intention to appoint another CEO for OpenAI in order to fully dedicate himself to YC, we would have accepted that as well.”

Graham’s account of the events contradicts the reported information that Altman was compelled to step down from Y Combinator due to allegations made by the accelerator’s partners. These allegations claimed that Altman prioritized his personal projects, such as OpenAI, over his responsibilities as president. According to The Washington Post, Graham abruptly ended a trip abroad in November in order to personally fire Altman.

Helen Toner, a former member of the OpenAI board, along with others, attempted to remove Altman as CEO due to allegations of deceptive behavior. However, Altman managed to regain his position. Toner also stated on the Ted AI Show podcast that the real reasons behind Altman’s departure from Y Combinator were concealed at the time.

Allegedly, certain partners at Y Combinator were concerned about the indirect ownership Altman held in OpenAI while serving as Y Combinator’s president. Y Combinator’s late-stage fund made a $10 million investment in OpenAI’s for-profit subsidiary.

However, Graham asserts that the investment occurred before Altman became a full-time employee at OpenAI, and that he himself was unaware of it at the time.

“It was not a significant investment for the funds,” Graham wrote. “Clearly, it had no impact on me, as I only became aware of it 5 minutes ago.”

Bret Taylor and Larry Summers, members of the OpenAI board, wrote an op-ed in The Economist that appears pointedly timed with Graham’s social media posts. The op-ed challenges claims made by Toner and Tasha McCauley, both former OpenAI board members, that Altman cannot “consistently resist the influence of profit motives.”

Toner and McCauley’s argument may have merit. According to The Information, Altman is contemplating transforming OpenAI into a for-profit corporation under pressure from investors, notably Microsoft, who are urging the company to focus on commercial ventures.
