
Robotics

Underwater robot can detect contraband hidden under ships


Smuggling has long been a problem in international affairs, with governments struggling to stamp out the illegal transport of tobacco, alcohol and drugs. Contraband is a day-to-day occurrence that is as hard to stop as theft or embezzlement. Now MIT researchers have come up with an underwater robot that can detect contraband attached to the hulls of the ships carrying it, a real advance in smuggling prevention.


The bottom part of the underwater robot houses the electronics, while the top part houses the propulsion system.

Contraband is a threat not only to a country’s economy but to consumers as well, since smuggled goods come with no guarantees, warranties or any assurance that the product is safe. That is why MIT researchers have been working on an underwater robot to help law enforcement and customs agents detect illegal cargo on ships without being noticed by the smugglers. An underwater robot could also cut many of the costs and risks borne by government agencies and law enforcement.

The underwater robot was created to keep ports safe and to make customs checks both easier for officers and harder to detect. The robot is about the size of a football, and it can swim around a ship or a boat and detect hollow spaces where contraband could be hidden. It collects data using ultrasound scans and was originally designed to detect cracks in nuclear reactor water tanks. The robot has an oval shape with a flat panel on one side that serves as the scanning device. Although the robot would stick out in an average ocean environment, MIT researchers say its propulsion mechanism leaves no wake or ripples behind it, and its shape allows it to be easily camouflaged among algae or marine plants.
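To give a rough sense of how ultrasound data could reveal a hidden compartment, the sketch below estimates apparent hull thickness from echo return times and flags readings that deviate from the expected value. The speed of sound, thicknesses and threshold are illustrative assumptions, not details of MIT’s actual scanning algorithm.

```python
# Hypothetical sketch: flag possible hollow spaces from ultrasound echo data.
# The speed of sound, hull thickness and tolerance are illustrative assumptions,
# not parameters from MIT's robot.

SPEED_OF_SOUND_STEEL = 5900.0   # meters per second, approximate


def hull_thickness(echo_delay_s: float) -> float:
    """Estimate hull thickness from the round-trip time of an ultrasound pulse."""
    return SPEED_OF_SOUND_STEEL * echo_delay_s / 2.0


def flag_anomalies(echo_delays, expected_thickness_m=0.02, tolerance=0.25):
    """Return indices of scan points whose apparent thickness deviates
    from the expected hull thickness by more than `tolerance` (a fraction)."""
    flagged = []
    for i, delay in enumerate(echo_delays):
        thickness = hull_thickness(delay)
        if abs(thickness - expected_thickness_m) / expected_thickness_m > tolerance:
            flagged.append(i)
    return flagged


# Example: the third reading returns noticeably later, hinting at a cavity
# or an attached container behind the plating.
readings = [6.8e-6, 6.9e-6, 1.4e-5, 6.7e-6]
print(flag_anomalies(readings))  # -> [2]
```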


The insides of the underwater robot.

The underwater robot seems relatively easy to make and, at roughly $600, will not cost a fortune like many other robots. It can be 3D printed, and the device consists of a waterproof half that houses the electronics and does the actual scanning, while the other half is permeable and houses the propulsion system, which pumps water to move the robot around. The low price and easy manufacturing would make the underwater robot a good investment for port security, because it could be sent out in swarms to sweep incoming ships without anyone noticing. The robots could inspect collaboratively and send their data back to port without the risk of being caught in the act.


This is how the underwater robot will look when scanning objects.

The battery in the underwater robot lasts about 40 minutes, and the robot can travel roughly a meter per second, enough time to scan at least three smaller ships per charge. If the robots are sent out in swarms, with some groups inspecting ships while others recharge back at the port, the 40-minute battery life should not be a problem. The robots are still in development; the finished prototypes will likely support wireless charging and be able to perform ultrasound scans without making surface contact with the object being scanned (contact is usually necessary for ultrasound).
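Taking the quoted figures at face value, a quick back-of-the-envelope calculation shows why roughly three smaller ships per charge is plausible; the hull length, number of passes and overhead factor below are purely illustrative assumptions.

```python
# Back-of-the-envelope scan coverage per battery charge, using the figures
# quoted in the article. The per-ship scan path is an illustrative assumption.

BATTERY_MINUTES = 40
SPEED_M_PER_S = 1.0              # roughly one meter per second

range_per_charge_m = BATTERY_MINUTES * 60 * SPEED_M_PER_S   # 2,400 m

# Assume a smaller vessel needs several passes along a ~100 m hull,
# plus some overhead for turning and repositioning between passes.
scan_path_per_ship_m = 100 * 6   # 6 passes of a 100 m hull (assumption)
overhead_factor = 1.2            # 20% extra for maneuvering (assumption)

ships_per_charge = range_per_charge_m / (scan_path_per_ship_m * overhead_factor)
print(f"Range per charge: {range_per_charge_m:.0f} m")
print(f"Smaller ships covered per charge: {ships_per_charge:.1f}")  # ~3.3
```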

The underwater robot is not yet finalized, but if the technology allows, it could become quite handy over time. The robots could detect structural defects, nuclear weapons and firearms, or even check radiation levels. One can even imagine these robots routing traffic in ports by communicating with each ship and with all the underwater robot units assigned to their areas. It’s an interesting prototype, and it should bring a lot of innovation to port security protocols.

As part of the editorial team here at Geekreply, John spends a lot of his time making sure each article is up to snuff. That said, he also occasionally pens articles on the latest in Geek culture. From Gaming to Science, expect the latest news fast from John and team.

Engineering

To make up for a lack of workers, Japan’s railways are putting huge humanoid robots to work


JR West is going to maintain its railway system in a very Japanese way: with high-tech robots that look like people.

Starting this month, the company will use big, mecha-like robots to do much of the maintenance work on its railway infrastructure: painting the support structures above the tracks, for example, and cutting down tree branches that get in the way of trains.

The robots’ flexible arms can reach heights of up to 12 meters (39 feet) and lift objects weighing up to 40 kilograms (88 pounds). They can also be fitted with different tools to handle a wide range of odd jobs.

An operator sits in the truck that accompanies the working mechanoid and controls its movements with a joystick and VR goggles linked to a camera on the robot’s head.

Below is a video that shows how the technology works. In one part of the montage, the robot is even seen using a circular saw to cut down tall trees. But don’t worry: the people who made the machine believe it’s a safe pair of hands.

JR West recently said it developed the technology with robotics company Jinki Ittai and tech company Nippon Signal in order to make its employees safer and lower the risk of workplace accidents.

The company also cited “labor shortages” as a big reason for the new technology. Japan has one of the oldest populations in the world, with about 29% of its people over 65, and the shrinking workforce is a growing problem for the country and for an economy already short of workers.

Robots and other new technologies are often blamed for “stealing jobs” from people, but it seems they can also fill in for workers who simply aren’t there.


Artificial Intelligence

A group of humanoid robots from Agility will take care of your Spanx


So far, the humanoid robotics business has mostly consisted of promises and test runs. These pilot programs involve only a few robots and don’t usually lead to anything bigger, but they are an important step toward the eventual use of the technology. After a pilot with logistics giant GXO went well, Agility announced on Thursday that it has now signed a formal deal.

Digit’s first job, no joke, will be moving plastic totes around a Spanx factory in Georgia. The number of two-legged robots that will be taking boxes off of cobots and putting them on conveyor belts has not been made public, which suggests it is still fairly low; if the figure were in the tens or hundreds of thousands, most companies would be happy to share it.

The robots are leased under a “robots-as-a-service” model rather than bought outright. That lets the client put off the huge upfront cost of such a complicated system while still getting support and software updates.

GXO started test-driving Digit robots last year, and a pilot deal was just announced between the logistics company and Apptronik, one of Agility’s biggest rivals. How the two arrangements will affect each other remains unclear.

When Peggy Johnson became CEO of Agility in March, she made it clear that the company was focused on ROI. This is a big change in a field where results are still mostly theoretical.

Johnson said, “There will be many firsts in the humanoid robot market in the years to come, but I’m very proud of the fact that Agility is the first company to have real humanoid robots deployed at a customer site, making money and solving real-world business problems.” “Agility has always been focused on the only metric that matters: giving our customers value by putting Digit to work. This milestone deployment sets a new standard for the whole industry.”

It’s not a surprise that Agility, based in Oregon, was the first to reach another important milestone. The company has been ahead of the rest of the market in terms of development and deployment. Of course, the industry is still very new, and there isn’t a clear market leader yet.

Amazon started testing Agility systems in its own warehouses in October of last year, but neither company has said what will happen next.


Artificial Intelligence

Reinforcement learning AI has the potential to introduce humanoid robots into the real world


AI tools like ChatGPT are revolutionizing our digital experiences, but the next frontier is bringing AI interactions into the physical world. Humanoid robots, trained with a specific AI, have the potential to be incredibly useful in various settings such as factories, space stations, and nursing homes. Two recent papers in Science Robotics emphasize the potential of reinforcement learning to bring robots like these into existence.

According to Ilija Radosavovic, a computer scientist at the University of California, Berkeley, there has been remarkable advancement in AI within the digital realm, thanks to tools like GPT. But he believes AI in the physical world holds immense potential for transformation.

The cutting-edge software that governs the movements of bipedal robots frequently employs a technique known as model-based predictive control. It has produced highly advanced systems, like the Atlas robot from Boston Dynamics, known for its impressive parkour abilities. However, programming these robot brains requires a considerable amount of human expertise, and they struggle to handle unfamiliar situations. Reinforcement learning, in which an AI learns through trial and error to perform sequences of actions, may prove to be a more effective approach.
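As a minimal illustration of that trial-and-error idea, not the controllers from either paper, the toy example below uses a REINFORCE-style update to learn a preference over three actions purely from reward.

```python
import numpy as np

# Toy reinforcement learning: a softmax policy over three actions learns,
# from reward alone, to prefer the action with the highest expected payoff.
# This is a simple bandit problem, not a robot controller.

rng = np.random.default_rng(0)
true_rewards = np.array([0.2, 0.5, 0.9])   # unknown to the learner
logits = np.zeros(3)                        # policy parameters
learning_rate = 0.1

for step in range(2000):
    probs = np.exp(logits) / np.exp(logits).sum()
    action = rng.choice(3, p=probs)
    reward = true_rewards[action] + rng.normal(scale=0.1)

    # REINFORCE-style update: nudge the log-probability of the chosen action
    # up in proportion to the reward it earned.
    grad = -probs
    grad[action] += 1.0
    logits += learning_rate * reward * grad

# The policy ends up putting most of its probability on action 2.
print(np.round(np.exp(logits) / np.exp(logits).sum(), 3))
```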

According to Tuomas Haarnoja, a computer scientist at Google DeepMind and coauthor of one of the Science Robotics papers, the team aimed to test the limits of reinforcement learning in real robots. Haarnoja and his team decided to create software for a toy robot named OP3, manufactured by Robotis. The team had the goal of teaching OP3 to walk and play one-on-one soccer.

“Soccer provides a favorable setting for exploring general reinforcement learning,” states Guy Lever of Google DeepMind, who coauthored the paper. It demands careful planning, adaptability, curiosity, collaboration, and a drive to succeed.

“Operating and repairing larger robots can be quite challenging, but the smaller size of these robots allowed us to iterate quickly,” Haarnoja explains. As is typical in this kind of work, the researchers first trained the machine-learning software on virtual robots before deploying it on real ones. This technique, called sim-to-real transfer, helps ensure that the software is prepared for the challenges it may face in the real world, such as the possibility of robots falling over and breaking.

The training of the virtual bots occurred in two stages. In the first, the team trained one AI to get the virtual robot up off the ground and another to score goals without losing its balance. The AIs were fed data including the robot’s joint positions and movements, as well as the positions of other objects in the game captured by external cameras. (In a recently published preprint, the team developed a version of the system that relies on the robot’s own vision instead.) The AIs then had to generate new joint positions; when they performed well, their internal parameters were adjusted to encourage more of the same behavior. In the second stage, the researchers trained an AI to imitate the first two AIs and to improve by playing against opponents of similar skill: versions of itself.
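The shape of that pipeline, specialist skill policies first, then a single distilled agent improved through self-play, can be outlined at a very high level as follows. Everything here is a placeholder standing in for the neural networks and physics simulator used in the actual work; it illustrates only the two-stage structure, not DeepMind’s code.

```python
import random

# High-level outline only: these stand-ins replace the neural-network
# policies and the physics simulator described in the paper.

def train_skill_policy(skill, episodes=500):
    """Stage 1: train a specialist (e.g. getting up, or scoring while balanced)
    by trial and error. Here the 'policy' is just a score that drifts upward."""
    success_rate = 0.0
    for _ in range(episodes):
        reward = random.random()                        # placeholder rollout reward
        success_rate += 0.01 * (reward - success_rate)  # placeholder update
    return {"skill": skill, "success_rate": success_rate}

def distill_and_self_play(specialists, matches=200):
    """Stage 2: a single agent imitates the specialists, then keeps improving
    by playing matches against past copies of itself."""
    agent = {"imitates": [s["skill"] for s in specialists], "rating": 1000.0}
    for _ in range(matches):
        opponent_rating = agent["rating"]   # self-play: opponent is a past copy
        won = random.random() < 0.5         # placeholder match outcome
        agent["rating"] += 16.0 if won else -16.0
    return agent

specialists = [train_skill_policy("get_up"), train_skill_policy("score_goal")]
soccer_agent = distill_and_self_play(specialists)
print(soccer_agent)
```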

To fine-tune the control software, known as a controller, for the real-world robots, the researchers also varied different elements of the simulation, such as friction, sensor delays, and body-mass distribution. In addition to being rewarded for scoring goals, the AI was rewarded for minimizing knee torque, to help prevent injuries.
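A minimal sketch of that kind of randomization and reward shaping might look like the following; the parameter ranges and the torque-penalty weight are assumptions chosen for illustration, not values from the paper.

```python
import random

# Illustrative domain randomization for sim-to-real transfer, plus a shaped
# reward. All ranges and weights below are assumptions, not the paper's values.

def randomize_simulation():
    """Sample a fresh set of physics parameters at the start of each episode."""
    return {
        "friction": random.uniform(0.4, 1.2),
        "sensor_delay_ms": random.uniform(0.0, 40.0),
        "body_mass_scale": random.uniform(0.9, 1.1),
    }

def shaped_reward(goals_scored, knee_torques, torque_weight=0.01):
    """Reward goals while discouraging large knee torques that could damage hardware."""
    torque_penalty = torque_weight * sum(t * t for t in knee_torques)
    return goals_scored - torque_penalty

print(randomize_simulation())
print(shaped_reward(goals_scored=1, knee_torques=[3.0, 2.5]))  # 0.8475
```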

Robots running the RL control software showed impressive improvements: they walked significantly faster, turned with remarkable agility, and recovered from falls in a fraction of the time needed by robots using the scripted controller provided by the manufacturer. More sophisticated abilities also emerged, such as seamlessly stringing actions together. “It was fascinating to witness the robots acquiring more advanced motor skills,” comments Radosavovic, who was not involved in the study. And the controller learned not only individual moves, but also the strategic play needed to excel in the game, such as positioning itself to block an opponent’s shot.

According to Joonho Lee, a roboticist at ETH Zurich, the soccer paper is truly impressive. “We have witnessed an unprecedented level of resilience from humanoids.”

But what about humanoid robots that are the size of humans? In another recent paper, Radosavovic collaborated with colleagues to develop a controller for a larger humanoid robot. This particular robot, Digit from Agility Robotics, is approximately five feet tall and possesses knees that bend in a manner reminiscent of an ostrich. The team’s approach resembled that of Google DeepMind. Both teams utilized computer brains known as neural networks. However, Radosavovic employed a specialized variant known as a transformer, which is commonly found in large language models such as those that power ChatGPT.

Instead of processing words and generating more words, the model analyzed 16 observation-action pairs. These pairs represented what the robot had sensed and done in the past 16 snapshots of time, which spanned approximately a third of a second. The model then determined the robot’s next action based on this information. Learning was made easier by initially focusing on observing the actual joint positions and velocity. This provided a solid foundation before progressing to the more challenging task of incorporating observations with added noise, which better reflected real-world conditions. For enhanced sim-to-real transfer, the researchers introduced slight variations to the virtual robot’s body and developed a range of virtual terrains, such as slopes, trip-inducing cables, and bubble wrap.
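A minimal PyTorch sketch of such a policy, a transformer that reads the last 16 observation-action pairs and predicts the next action, might look like the code below. The observation and action dimensions, network sizes and layer counts are illustrative assumptions, not the architecture from the Berkeley paper.

```python
import torch
import torch.nn as nn

class ContextPolicy(nn.Module):
    """Predict the next action from the last `context_len` observation-action pairs."""

    def __init__(self, obs_dim, act_dim, d_model=128, n_heads=4,
                 n_layers=2, context_len=16):
        super().__init__()
        self.embed = nn.Linear(obs_dim + act_dim, d_model)    # one token per time step
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, act_dim)               # next joint targets
        self.context_len = context_len

    def forward(self, obs_seq, act_seq):
        # obs_seq: (batch, 16, obs_dim); act_seq: (batch, 16, act_dim)
        tokens = self.embed(torch.cat([obs_seq, act_seq], dim=-1))
        encoded = self.encoder(tokens)
        return self.head(encoded[:, -1])                      # act from the latest step

# Illustrative dimensions only; the real robot's state and action spaces differ.
policy = ContextPolicy(obs_dim=36, act_dim=12)
obs_history = torch.randn(1, 16, 36)
act_history = torch.randn(1, 16, 12)
next_action = policy(obs_history, act_history)
print(next_action.shape)   # torch.Size([1, 12])
```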

With extensive training in the digital realm, the controller successfully operated a real robot for an entire week of rigorous tests outdoors, ensuring that the robot maintained its balance without a single instance of falling over. In the lab, the robot successfully withstood external forces, even when an inflatable exercise ball was thrown at it. The controller surpassed the manufacturer’s non-machine-learning controller, effortlessly navigating a series of planks on the ground. While the default controller struggled to climb a step, the RL controller successfully overcame the obstacle, despite not encountering steps during its training.

Reinforcement learning has gained significant popularity in recent years, particularly in the field of four-legged locomotion. Interestingly, these studies have also demonstrated the successful application of these techniques to two-legged robots. According to Pulkit Agrawal, a computer scientist at MIT, these papers have reached a tipping point by either matching or surpassing manually defined controllers. With the immense potential of data, a multitude of capabilities can be unlocked within a remarkably brief timeframe.

The approaches of the two papers are likely complementary: to meet future demands, AI robots will need both the resilience of Berkeley’s system and the agility of Google DeepMind’s. Real-world soccer incorporates both, and soccer has long posed a significant challenge for robotics and artificial intelligence, as Lever notes.

 

