Robotics

NASA is ready to fully showcase the RoboSimian at the DARPA Robotics Challenge Finals

It seems like NASA is everywhere these days, including at the DARPA Robotics Challenge Finals. The space agency’s Jet Propulsion Laboratory division is scheduled to compete this weekend with its intriguing, yet somewhat creepy RoboSimian. True to its name, the RoboSimian is a monkey-like robot that many expect to win the competition outright, although much is still uncertain considering that it will have to go up against a field of equally interesting machines. The DARPA Robotics Challenge Finals will pit a total of 24 teams from all over the world and their robots against each other for $1 million in Pomona, California. While most of the participating robots have been designed to look as humanoid as possible, NASA went in a completely different direction with the RoboSimian, one of the few machines there that uses four limbs for locomotion instead of just two.

At first glance, it may seem like this is just another “who can build the best robot” contest, and to some extent it is. However, each team has designed its robot with a higher purpose in mind, such as helping out in situations that would be too dangerous for humans. NASA’s RoboSimian, for example, was designed by the Jet Propulsion Laboratory as a first responder that can enter disaster zones and navigate all types of terrain. To achieve this, the chimp-like robot was equipped with four rotating limbs with which it can walk across debris or even climb over it if need be. In addition, RoboSimian can move equally well forward, backward, or sideways, and all of its limbs are capable of using a variety of tools. What’s more, NASA also gave the robot multiple sets of eyes distributed across its body to maximize visibility.

NASA’s Jet Propulsion Laboratory has certainly built an impressive machine, but the DARPA Robotics Challenge Finals will test it to its absolute limits. If RoboSimian wants to win, it will have to open valves, cut holes in walls, navigate difficult terrain, and even drive a vehicle. Despite all of this, the folks over at JPL are pretty confident in their creation and expect RoboSimian to beat its competitors. With experienced teams from all over the world and several Atlas models to go up against, the mechanical monkey will have to perform exceptionally if it wants to bag the $1 million and the bragging rights that come with winning such a difficult competition.

Artificial Intelligence

Reinforcement learning AI has the potential to introduce humanoid robots into the real world

AI tools like ChatGPT are revolutionizing our digital experiences, but the next frontier is bringing AI interactions into the physical world. Humanoid robots, trained with the right AI, could be incredibly useful in settings such as factories, space stations, and nursing homes. Two recent papers in Science Robotics highlight the potential of reinforcement learning to bring robots like these into existence.

According to Ilija Radosavovic, a computer scientist at the University of California, Berkeley, there has been remarkable advancement in AI within the digital realm thanks to tools like GPT, but he believes AI in the physical world holds immense potential for transformation.

The cutting-edge software that governs the movements of bipedal bots frequently employs a technique known as model-based predictive control. This approach has produced highly advanced systems, such as Boston Dynamics’ Atlas robot, known for its impressive parkour abilities. However, programming these robot brains requires considerable human expertise, and they struggle to handle unfamiliar situations. Reinforcement learning, in which an AI learns through trial and error to perform sequences of actions, may prove a more effective approach.
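The trial-and-error loop at the heart of reinforcement learning can be illustrated with a toy example. The sketch below is a minimal tabular Q-learning agent on a made-up one-dimensional “walk to the goal” task; the states, rewards, and hyperparameters are illustrative assumptions, not details from either Science Robotics paper.

```python
# Toy tabular Q-learning: learn, by trial and error, to walk right
# along a 5-cell track to reach a goal. All details are illustrative.
import random

N_STATES = 5          # positions 0..4; position 4 is the goal
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move, clipping at the track edges; reward 1 on reaching the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

random.seed(0)
for _ in range(500):  # episodes of trial and error
    s = 0
    while s != N_STATES - 1:
        if random.random() < EPS:                       # explore
            a = random.choice(ACTIONS)
        else:                                           # exploit
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        nxt, r = step(s, a)
        best_next = max(q[(nxt, x)] for x in ACTIONS)
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
        s = nxt

# The greedy policy that emerges: step right (+1) from every state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
```

No human had to script the walking behavior here; the agent discovered it by being rewarded for success, which is the same principle the robot controllers scale up.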

According to Tuomas Haarnoja, a computer scientist at Google DeepMind and coauthor of one of the Science Robotics papers, the team aimed to test the limits of reinforcement learning on real robots. They chose to create software for OP3, a toy robot made by Robotis, with the goal of teaching it to walk and play one-on-one soccer.

“Soccer provides a favorable setting for exploring general reinforcement learning,” states Guy Lever of Google DeepMind, who coauthored the paper. It demands careful planning, adaptability, curiosity, collaboration, and a drive to succeed.

“Operating and repairing larger robots can be quite challenging, but the smaller size of these robots allowed us to iterate quickly,” Haarnoja explains. The researchers first trained the machine-learning software on virtual robots before deploying it on real ones. This technique, called sim-to-real transfer, helps ensure that the software is prepared for the challenges it may face in the real world, such as robots falling over and breaking.

The virtual bots were trained in two stages. In the first, the team trained one AI to get the virtual robot up off the ground and another to score goals without falling over. The AIs were fed data including the robot’s joint positions and movements, as well as the positions of other objects in the game as captured by external cameras. (In a recently published preprint, the team developed a version of the system that relies on the robot’s own vision instead.) The AIs had to output new joint positions; when these worked out well, their internal parameters were updated to encourage more of the same behavior. In the second stage, the researchers trained an AI to imitate the first two AIs and to score against opponents of similar skill (versions of itself).
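The two-stage recipe (train specialist skills, then distill them into one generalist) can be sketched in miniature. In the toy example below, two hand-coded “teacher” functions stand in for the get-up and goal-scoring AIs, and a simple “student” learns to imitate whichever teacher applies; the linear models and regimes are invented for illustration and bear no relation to DeepMind’s actual networks.

```python
# Toy two-stage setup: two hand-coded "teacher" skills are distilled
# into one "student" by regressing the student's output toward
# whichever teacher governs the current situation. Purely illustrative.
import random

def teacher_get_up(obs):
    """Stage-1 skill A (placeholder for the get-up policy)."""
    return -0.5 * obs

def teacher_score(obs):
    """Stage-1 skill B (placeholder for the goal-scoring policy)."""
    return 0.8 * obs + 0.2

# Stage 2: a tiny linear student per regime imitates the teachers.
w = {"down": 0.0, "up": 0.0}
b = {"down": 0.0, "up": 0.0}
LR = 0.1

random.seed(1)
for _ in range(4000):
    obs = random.uniform(-1, 1)
    regime = "down" if obs < 0 else "up"   # e.g. fallen vs. upright
    target = teacher_get_up(obs) if regime == "down" else teacher_score(obs)
    err = (w[regime] * obs + b[regime]) - target
    w[regime] -= LR * err * obs            # gradient step on squared error
    b[regime] -= LR * err

# The student now reproduces each teacher in its own regime:
# w["down"] ~ -0.5, b["down"] ~ 0.0, w["up"] ~ 0.8, b["up"] ~ 0.2
```

The real system replaces these linear stubs with deep networks and adds self-play in the second stage, but the distillation idea, one policy absorbing several specialist policies, is the same.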

To fine-tune the control software, known as a controller, for the real-world robots, the researchers varied different elements of the simulation, such as friction, sensor delays, and body-mass distribution. In addition to scoring goals, the AI was rewarded for minimizing knee torque, to avoid injuring the robots.
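Varying the simulation this way is commonly called domain randomization. The sketch below shows the general idea, sampling a fresh set of physics parameters for each training episode; the parameter names and ranges are assumptions for illustration, not values from the paper.

```python
# Domain randomization sketch: every training episode gets its own
# slightly different physics. Parameter names/ranges are assumptions.
import random

def sample_sim_params(rng):
    return {
        "friction": rng.uniform(0.4, 1.0),          # ground friction
        "sensor_delay_ms": rng.uniform(0.0, 40.0),  # simulated latency
        "mass_scale": rng.uniform(0.9, 1.1),        # +/-10% body mass
    }

rng = random.Random(42)
episodes = [sample_sim_params(rng) for _ in range(1000)]
# A controller trained across all of these variations cannot overfit
# to one exact simulator, which eases the jump to real hardware.
```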

Robots running the RL control software showed impressive improvements: they walked significantly faster, turned with remarkable agility, and recovered from falls in a fraction of the time required by robots using the manufacturer’s scripted controller. More sophisticated abilities also emerged, such as seamlessly chaining actions together. “It was fascinating to witness the robots acquiring more advanced motor skills,” comments Radosavovic, who was not involved in the study. And the controller learned not only individual moves, but also the strategic thinking needed to excel at the game, such as positioning itself to block an opponent’s shot.

According to Joonho Lee, a roboticist at ETH Zurich, the soccer paper is truly impressive. “We have witnessed an unprecedented level of resilience from humanoids.”

But what about humanoid robots that are the size of humans? In another recent paper, Radosavovic and colleagues developed a controller for a larger humanoid robot: Digit from Agility Robotics, which stands about five feet tall and has knees that bend in a manner reminiscent of an ostrich. The team’s approach resembled Google DeepMind’s in that both used neural networks, but Radosavovic employed a specialized variant called a transformer, the architecture behind large language models such as those that power ChatGPT.

Instead of taking in words and producing more words, the model took in 16 observation-action pairs (what the robot had sensed and done over the previous 16 snapshots of time, covering roughly a third of a second) and produced the robot’s next action. To make learning easier, training began with observations of the robot’s actual joint positions and velocities before progressing to the harder task of handling observations with added noise, which better reflect real-world conditions. For better sim-to-real transfer, the researchers introduced slight variations to the virtual robot’s body and developed a range of virtual terrains, such as slopes, trip-inducing cables, and bubble wrap.
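The controller’s interface, a sliding window of the last 16 observation-action pairs in and one action out, can be sketched as follows. The averaging “policy” below is a stand-in stub for the learned transformer, and the scalar observations are invented for illustration.

```python
# Sketch of the controller's interface: keep the last 16
# observation-action pairs and emit the next action. The averaging
# "policy" is a stand-in stub for the learned transformer.
from collections import deque

CONTEXT = 16  # roughly a third of a second of history

class HistoryPolicy:
    def __init__(self):
        # zero-padded history of (observation, action) pairs
        self.history = deque([(0.0, 0.0)] * CONTEXT, maxlen=CONTEXT)

    def act(self, observation):
        # Stand-in for the transformer: just average recent observations.
        action = sum(obs for obs, _ in self.history) / CONTEXT
        self.history.append((observation, action))
        return action

policy = HistoryPolicy()
actions = [policy.act(obs) for obs in [1.0] * 20]
# actions[0] is 0.0 (all-zero history); later actions rise toward 1.0
```

The fixed-length deque mirrors the transformer’s context window: old pairs fall off the back as new ones arrive, so the controller always conditions on the same short slice of recent history.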

After extensive training in the digital realm, the controller operated a real robot through an entire week of rigorous outdoor tests without a single fall. In the lab, the robot withstood external forces, even an inflatable exercise ball thrown at it. The controller surpassed the manufacturer’s non-machine-learning controller, effortlessly navigating a series of planks on the ground. And while the default controller struggled to climb a step, the RL controller succeeded, despite never having encountered steps during its training.

Reinforcement learning has gained significant popularity in recent years, particularly for four-legged locomotion. Notably, these studies show that the same techniques also work for two-legged robots. According to Pulkit Agrawal, a computer scientist at MIT, the papers mark a tipping point, with learned controllers matching or surpassing manually defined ones. With enough data, a multitude of capabilities can be unlocked within a remarkably brief timeframe.

The approaches of the two papers are likely complementary. Future AI robots will need both the resilience of Berkeley’s system and the agility of Google DeepMind’s, and real-world soccer incorporates both. As Lever notes, soccer has posed a significant challenge for robotics and artificial intelligence for a long time.

 

Engineering

DARPA has announced the first test of an extraordinary uncrewed submarine that takes inspiration from the manta ray

The latest cutting-edge innovation from the Defense Advanced Research Projects Agency, commonly known as DARPA, is a colossal uncrewed submarine inspired by the manta ray, from the same innovators behind hypersonic air-breathing weapons, submarine-detecting shrimp, and robot jazz musicians. Northrop Grumman’s prototype has just finished its first in-water trial.

The submarine is designed to transport substantial loads across extensive distances beneath the water’s surface without any human crew aboard. During deployment, it can enter a state of “hibernation,” remaining attached to the seabed to conserve energy.

In 2022, Northrop Grumman stated that their design for the project would serve DARPA’s objective of generating “strategic surprise.” We believe it is safe to assert that they have successfully accomplished that objective.

In February and March of this year, DARPA conducted a comprehensive test of the prototype uncrewed underwater vehicle (UUV) off the coast of Southern California.

“The successful and comprehensive testing of the Manta Ray confirms that the vehicle is prepared to progress towards real-world operations after being quickly assembled in the field using modular subsections,” stated Dr. Kyle Woerner, the DARPA program manager for Manta Ray. “The integration of cross-country modular transportation, on-site assembly, and subsequent deployment showcases a unique capability for an extra-large unmanned underwater vehicle (UUV).”

For now, “extra-large” is about as specific as we can get. New Atlas reports that DARPA and Northrop Grumman have so far kept most of the craft’s technical details confidential. However, online images appear to reveal concealed propulsors, an antenna, water inlets, and possibly maneuvering thrusters.

By examining the images, we can gain an understanding of the size and observe that its sleek curves truly resemble the animal it is named after—and perhaps even a few science fiction creations as well.

Manta rays, of which there are several species, are found in bodies of water worldwide. Numerous reports of these creatures actively interacting with divers and snorkelers show that they are sociable and intelligent. However, it was the elegant movement of the manta rays that truly motivated the engineers behind the new UUV, upholding a longstanding practice of drawing design inspiration from nature.

Once deployed, the vehicle glides through the water using efficient buoyancy-driven propulsion, according to Woerner.

Another significant benefit of the Manta Ray UUV, emphasized by both DARPA and Northrop Grumman, is its ability to be transported in separate components and quickly reassembled at the desired location. The prototype was shipped from its build site in Maryland to the opposite side of the country and assembled in the field.

According to Woerner, transporting the vehicle directly to its intended area of operation helps to save energy that would otherwise be used during transit.

DARPA is presently collaborating with the US Navy to determine the subsequent actions for this technology. The exact timeline for the deployment of Manta Ray in actual water remains undisclosed.

Artificial Intelligence

Boston Dynamics has retired its Atlas robot, showcasing its most impressive moments

Boston Dynamics is retiring its hydraulic robot Atlas after years of pushing the limits. To bid it adieu, the innovative firm has compiled a video montage showcasing the mechanoid marvel’s most remarkable moments, encompassing comical dance routines, impressive acrobatic maneuvers, and a handful of unsuccessful attempts.

In a video released on April 16, Boston Dynamics said that Atlas has spent nearly a decade igniting our creativity, motivating future generations of roboticists, and surpassing technical obstacles in the field, and that it is now time for its hydraulic Atlas robot to rest and unwind.

“Please review all the achievements we have made so far with the Atlas platform,” the company added.

Boston Dynamics, a robotics company based in Massachusetts, created Atlas for the Defense Advanced Research Projects Agency (DARPA), the Pentagon’s advanced technology division. Initially, it was conceived as a component of a prize competition with the aim of accelerating the progress of a humanoid robot capable of aiding in search and rescue missions.

Upon its public introduction in 2013, Atlas required a tether for stability and could walk in a straight line. Almost.

A 1-year-old child can barely walk and falls down a lot, and so did Atlas. “As you observe these machines and draw comparisons to science fiction, it is important to bear in mind that this represents our current technological capabilities,” stated Gill Pratt, a program manager at DARPA who was involved in the design and funding of Atlas, in a 2013 interview with the New York Times.

Much has changed since then. Throughout Atlas’s development, the engineers at Boston Dynamics meticulously refined its hardware and algorithms, enabling it to perform with ease physical tasks that most people would find difficult.

The most recent version of Atlas has a height of 150 cm, which is a little less than 5 feet, and a weight of 89 kilograms, equivalent to 196 pounds. With the help of its 28 hydraulic joints, this machine can achieve speeds of up to 2.5 meters (nearly 8 feet) per second. Additionally, it is capable of executing somersaults, athletic jumps, and 360° spins.

It is also equipped with an array of sensors that let it accurately perceive its immediate surroundings and respond in real time. If an obstacle is placed in its path, for example, the robot will identify the problem and navigate around it. If you push it with a pole, it will elegantly adjust its body to stay upright.

Boston Dynamics has not explained its decision to discontinue its renowned robot. Some analysts have suggested that the company is preparing to release a new product, while others have wondered whether Atlas had become a financial liability. While the company has successfully marketed its other inventions, such as the dog-like robot Spot, to various companies for diverse purposes, Atlas was never offered for sale.

According to IEEE Spectrum, Boston Dynamics was careful to say that it is retiring the hydraulic Atlas robot. Does that wording hint that a non-hydraulic Atlas is next on the schedule? For now, there is no way to know.

It is uncertain what the future holds for the robots developed by Boston Dynamics, but we can only hope that it does not involve a rebellion by these machines.
