Nanotechnology

The British Army shows off its brand-new “Speed of Light” laser weapon

On top of a British Army combat vehicle, the UK government fired what it calls a “speed of light laser weapon” in a test run.

The weapon was developed under the UK Ministry of Defence’s Land Laser Directed Energy Weapon (LDEW) Demonstrator program and has now been tested at a firing range at Porton Down, near Salisbury. The Ministry of Defence says the “ground-breaking” test went well, with the laser destroying targets more than a kilometer (0.6 miles) away.

The phrase “speed of light laser weapon” came from the press release for the new test, and it led to some confusing headlines.

All lasers travel at the speed of light, the speed at which all massless particles must move, so the phrase may only sound impressive to those who slept through physics class. It’s a bit like advertising water as “very wet.”

Still, the laser is impressive if shooting down enemy drones is your thing. Its standout features are its small size and low weight, which allow it to be mounted on land vehicles for the first time.

“The successful testing of this powerful laser weapon is a major step forward in our efforts to improve the British Army’s future operational capabilities,” said Matt Cork, who leads the programme at the Defence Science and Technology Laboratory, in a press release. “This technology offers a precise, powerful and cost effective means to defeat aerial threats, ensuring greater protection for our forces.”

Later this year, Army personnel will put the “light speed laser weapon” through its paces, assessing its abilities and benefits in “real-world scenarios.”

 


Engineering

The UK’s Royal Mint is using a new method to recover gold from electronic waste


Junkyards full of old smartphones, dead computers, and broken laptops hide mountains of gold, and a new project in the UK wants to recover those hidden riches.

The Royal Mint, which makes British coins for the government, has agreed to work with the Canadian clean tech startup Excir to use a “world-first technology” that can safely extract gold and other precious metals from electronic waste (e-waste) and recycle them.

Electronic devices contain circuit boards whose connections hold small amounts of gold, because gold is an excellent conductor. The boards also contain other useful metals such as silver, copper, lead, nickel, and aluminum.

Recovering those metals has traditionally been difficult, but Excir’s new technology can quickly and safely recover 99 percent of the gold trapped in electronic waste.

They prepare the circuit boards using a “unique process,” and then they use a patented chemical formula to quickly and selectively remove the gold. The liquid that is high in gold is then processed to make pure gold that can be melted down and formed into bars. Palladium, silver, and copper could also be recovered with this method.

“Our entrepreneurial spirit has helped the Royal Mint thrive for over 1,100 years, and the Excir technology helps us reach our goal of being a leader in sustainable precious metals,” said Sean Millard, Chief Growth Officer at The Royal Mint. “The chemistry is completely new and can recover precious metals from electronics in seconds. It has a lot of potential for The Royal Mint and the circular economy, as it helps to reuse our planet’s valuable resources and creates new jobs in the UK.”

At the moment, only about 22 percent of electronic waste is properly collected and recycled, but technology like this could help shrink the problem of discarded electronics.

Every year, the world generates about 62 million metric tons of electronic waste, more than 1.5 million 40-tonne trucks’ worth. That figure is expected to rise another 32 percent by 2030 as more people buy electronics, making e-waste the fastest-growing source of solid waste in the world.
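For a quick sense of those figures, here is a back-of-the-envelope check in Python, assuming each truck is loaded to its full 40-tonne rating:

```python
# Back-of-the-envelope check of the truck comparison, assuming each truck
# carries its full 40-tonne rating.
annual_ewaste_tonnes = 62_000_000
truck_capacity_tonnes = 40

trucks_needed = annual_ewaste_tonnes / truck_capacity_tonnes
print(f"{trucks_needed:,.0f} truckloads")        # 1,550,000 -> "more than 1.5 million"

projected_2030 = annual_ewaste_tonnes * 1.32     # the forecast 32 percent rise
print(f"{projected_2030 / 1e6:.0f} million tonnes by 2030")  # roughly 82 million
```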

The World Health Organization classifies e-waste as hazardous because it contains harmful materials and can leak toxic chemicals when handled improperly. Old electronics can release lead and mercury into the environment, for example, which can impair the development of the central nervous system during pregnancy, infancy, childhood, and adolescence. E-waste also doesn’t break down naturally and accumulates in nature.

Aside from the environmental damage, it is also an enormous waste: dumps and scrap yards may hold between $57 billion and $62 billion worth of precious metals.


Engineering

A groundbreaking type of cement has the potential to transform homes and roads into massive energy storage systems


To put it bluntly, concrete is awful for the environment. It is the world’s most-used material after water, and cement and concrete production alone is responsible for about 8% of global CO2 emissions, more than 4 billion metric tons of greenhouse gases every year.

But MIT researchers have come up with a new material that might help address the issue. By mixing water, cement, and a sooty substance called carbon black, they created a supercapacitor, a device that stores energy much like a giant concrete battery.

Admir Masic, a scientist at MIT and one of the researchers who came up with the idea, said in a statement last year, “The material is fascinating.”

“You have cement, which is the most common man-made material in the world, mixed with carbon black, which is a well-known historical material because it was used to write the Dead Sea Scrolls,” he said. “These materials are at least 2,000 years old, and when you mix them in a certain way, you get a conductive nanocomposite. That’s when things get really interesting.”

The material’s remarkable properties come from the fact that carbon black is both highly conductive and water-resistant: as the mixture hardens, the carbon black assembles itself into a web of conductive wires running through the cement.

According to the researchers, it is not only a big step forward for the global shift to renewable energy; its recipe also gives it an edge over conventional batteries. Even though cement carries a high carbon cost, the new material is made from just three cheap, easy-to-find ingredients. Standard batteries, by contrast, depend on lithium, which is limited and carbon-intensive: “particularly in hard rock mining, for every tonne of mined lithium, 15 tonnes of CO2 are emitted into the air,” notes MIT’s Climate Portal.

Since cement isn’t going anywhere soon, putting it together with a simple and effective way to store energy seems like a clear win. Damian Stefaniuk, one of the researchers who came up with the idea, told BBC Future this week, “Given how common concrete is around the world, this material has the potential to be very competitive and useful in energy storage.”

“If it can be made bigger, the technology can help solve a big problem: how to store clean energy,” he said.

How could that be done? One possibility is to use it to pave roads, so that highways could collect solar energy and then wirelessly charge electric cars driving over them. Because capacitors release their energy much more quickly than regular batteries, they are not well suited to everyday long-term storage, but they offer higher efficiency and less performance degradation, which makes them almost perfect for topping up moving cars in this way.

One more interesting idea is to use it as a building material. The researchers wrote in their paper that a 45-cubic-meter block of the carbon-black-cement mix could store enough energy to power a typical US home for a day. To give you an idea of how big that is, 55 such blocks would fit in an Olympic-sized swimming pool.
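As a rough scale check, assuming a standard Olympic pool of 50 m by 25 m with a 2 m minimum depth:

```python
# Rough scale check for the 45-cubic-meter block, assuming a standard
# Olympic pool of 50 m x 25 m with a 2 m minimum depth.
block_volume_m3 = 45
pool_volume_m3 = 50 * 25 * 2       # 2,500 cubic meters

whole_blocks_per_pool = pool_volume_m3 // block_volume_m3
print(whole_blocks_per_pool)       # 55, matching the comparison above
```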

The team says that a house with a foundation made of this material could store a full day’s worth of energy from solar panels or wind turbines and release it whenever it’s needed, since the concrete would retain its structural strength.

Franz-Josef Ulm, a structural engineer at MIT, said, “That’s where our technology looks very promising, because cement is everywhere.”

“It’s a fresh way to think about the future of concrete.”

The paper is now out in the journal PNAS.


Artificial Intelligence

Reinforcement learning could help bring humanoid robots into the real world


AI tools like ChatGPT are revolutionizing our digital experiences, but the next frontier is bringing AI interactions into the physical world. Humanoid robots trained with the right kind of AI could be incredibly useful in settings such as factories, space stations, and nursing homes. Two recent papers in Science Robotics highlight the potential of reinforcement learning to bring robots like these into existence.

According to Ilija Radosavovic, a computer scientist at the University of California, Berkeley, there has been remarkable progress in AI within the digital realm thanks to tools like GPT, but he believes AI in the physical world holds even greater potential for transformation.

The cutting-edge software that governs the movements of bipedal bots frequently employs a technique known as model-based predictive control. It has resulted in the development of highly advanced systems, like the Atlas robot from Boston Dynamics, known for its impressive parkour abilities. However, programming these robot brains requires a considerable amount of human expertise, and they struggle to handle unfamiliar situations. Using reinforcement learning, AI can learn through trial and error to perform sequences of actions, which may prove to be a more effective approach.

According to Tuomas Haarnoja, a computer scientist at Google DeepMind and coauthor of one of the Science Robotics papers, the team aimed to test the limits of reinforcement learning in real robots. Haarnoja and his team decided to create software for a toy robot named OP3, manufactured by Robotis. The team had the goal of teaching OP3 to walk and play one-on-one soccer.

“Soccer provides a favorable setting for exploring general reinforcement learning,” states Guy Lever of Google DeepMind, who coauthored the paper: the game demands careful planning, adaptability, curiosity, collaboration, and a drive to succeed.

“Operating and repairing larger robots can be quite challenging, but the smaller size of these robots allowed us to iterate quickly,” Haarnoja explains. The researchers first trained the machine learning software on virtual robots before deploying it on real ones. This technique, called sim-to-real transfer, helps ensure the software is prepared for the challenges it may face in the real world, such as robots falling over and breaking.

The virtual bots were trained in two stages. In the first, the team trained one AI to get the virtual robot up off the ground and another to score goals without losing its balance. The AIs were fed data including the robot’s joint positions and movements, as well as the positions of other objects in the game captured by external cameras. (In a recently published preprint, the team developed a version of the system that instead relies on the robot’s own vision.) Each AI had to output new joint positions, and when it performed well, its internal parameters were updated to make the successful behavior more likely. In the second stage, the researchers trained a single AI to imitate the first two and then evaluated it against opponents of similar skill: versions of itself.
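To make the shape of that two-stage pipeline easier to picture, here is a deliberately toy Python sketch; the environment, the linear “policies,” and the random-search update are all invented stand-ins, not the physics simulation and deep reinforcement learning the DeepMind team actually used:

```python
# Deliberately simplified sketch of a two-stage "skills, then distill and
# self-play" pipeline. The toy environment, the linear policies, and the
# random-search update are invented stand-ins for illustration only.
import numpy as np

OBS_DIM, ACT_DIM = 12, 6  # toy sizes, not the real robot's

class ToySoccerEnv:
    """Stand-in environment: random observations, placeholder reward."""
    def reset(self):
        return np.random.randn(OBS_DIM)

    def step(self, action):
        obs = np.random.randn(OBS_DIM)
        reward = -float(np.sum(action ** 2))  # placeholder reward signal
        return obs, reward

class LinearPolicy:
    """Tiny linear controller mapping observations to joint targets."""
    def __init__(self):
        self.W = np.zeros((ACT_DIM, OBS_DIM))

    def act(self, obs):
        return self.W @ obs

def rollout(policy, env, steps=50):
    obs, total = env.reset(), 0.0
    for _ in range(steps):
        obs, reward = env.step(policy.act(obs))
        total += reward
    return total

def train(policy, env, iters=20, noise=0.1):
    """Crude random-search stand-in for the reinforcement learning update."""
    for _ in range(iters):
        candidate = LinearPolicy()
        candidate.W = policy.W + noise * np.random.randn(ACT_DIM, OBS_DIM)
        if rollout(candidate, env) > rollout(policy, env):
            policy.W = candidate.W  # keep parameters that earned more reward
    return policy

# Stage 1: separate skill policies (getting up, scoring) trained in isolation.
get_up_policy = train(LinearPolicy(), ToySoccerEnv())
scoring_policy = train(LinearPolicy(), ToySoccerEnv())

# Stage 2: a single agent imitates both skills, then keeps improving; a real
# self-play loop would pit it against frozen earlier copies of itself.
agent = LinearPolicy()
agent.W = 0.5 * (get_up_policy.W + scoring_policy.W)  # crude "distillation"
agent = train(agent, ToySoccerEnv())
```

The point is only the structure: individual skills are learned first, then folded into one agent that keeps improving against versions of itself.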

To fine-tune the control software, known as a controller, for the real-world robots, the researchers varied different elements of the simulation, such as friction, sensor delays, and body-mass distribution. In addition to scoring goals, the AI was also rewarded for minimizing knee torque and avoiding damage to the robots.
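One common way to implement this kind of variation, and a plausible reading of the description above, is to sample a fresh set of physics parameters for every training episode. The sketch below is hypothetical; the parameter names and ranges are made up for illustration:

```python
# Hypothetical domain-randomization sketch: sample fresh simulator physics
# for each training episode so the learned controller cannot overfit to one
# exact set of conditions. Parameter names and ranges are illustrative only.
import random
from dataclasses import dataclass

@dataclass
class SimParams:
    friction: float               # ground-contact friction coefficient
    sensor_delay_ms: float        # artificial lag added to sensor readings
    torso_mass_offset_kg: float   # shift in the robot's body-mass distribution

def sample_sim_params(rng: random.Random) -> SimParams:
    return SimParams(
        friction=rng.uniform(0.4, 1.2),
        sensor_delay_ms=rng.uniform(0.0, 40.0),
        torso_mass_offset_kg=rng.uniform(-0.5, 0.5),
    )

rng = random.Random(0)
for episode in range(3):
    params = sample_sim_params(rng)
    # A real pipeline would now configure the physics simulator with these
    # parameters and run one training episode of the controller inside it.
    print(episode, params)
```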

Robots running the RL control software showed impressive improvements: they walked significantly faster, turned with remarkable agility, and recovered from falls in a fraction of the time taken by robots using the manufacturer’s scripted controller. More sophisticated abilities also emerged, such as seamlessly chaining actions together. “It was fascinating to witness the robots acquiring more advanced motor skills,” comments Radosavovic, who was not involved in the study. And the controller learned not only individual moves, but also the strategic play needed to excel at the game, such as positioning itself to block an opponent’s shot.

According to Joonho Lee, a roboticist at ETH Zurich, the soccer paper is truly impressive. “We have witnessed an unprecedented level of resilience from humanoids.”

But what about humanoid robots that are the size of humans? In another recent paper, Radosavovic collaborated with colleagues to develop a controller for a larger humanoid robot. This particular robot, Digit from Agility Robotics, is approximately five feet tall and possesses knees that bend in a manner reminiscent of an ostrich. The team’s approach resembled that of Google DeepMind. Both teams utilized computer brains known as neural networks. However, Radosavovic employed a specialized variant known as a transformer, which is commonly found in large language models such as those that power ChatGPT.

Instead of processing words and generating more words, the model analyzed 16 observation-action pairs: what the robot had sensed and done over the previous 16 snapshots of time, spanning roughly a third of a second. From that history it determined the robot’s next action. Learning was made easier by starting with clean observations of the actual joint positions and velocities, providing a solid foundation before progressing to the harder task of handling noisy observations that better reflect real-world conditions. To improve sim-to-real transfer, the researchers also introduced slight variations to the virtual robot’s body and built a range of virtual terrains, including slopes, trip-inducing cables, and bubble wrap.
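The way such a history-conditioned controller consumes its inputs can be pictured as a rolling buffer of the last 16 observation-action pairs handed to the model at every control step. The snippet below sketches only that interface, with a placeholder standing in for the trained transformer; the dimensions and names are invented:

```python
# Schematic of a history-conditioned controller: keep the last 16
# observation-action pairs and feed them to a model that predicts the next
# action. The "model" here is a placeholder that returns zeros; a real
# controller would be a trained transformer.
from collections import deque
import numpy as np

OBS_DIM, ACT_DIM, CONTEXT = 30, 12, 16   # illustrative sizes, not Digit's

history = deque(maxlen=CONTEXT)          # rolling window of (obs, action) pairs

def placeholder_transformer(tokens: np.ndarray) -> np.ndarray:
    """Stand-in for the trained policy; tokens has shape (CONTEXT, OBS_DIM + ACT_DIM)."""
    return np.zeros(ACT_DIM)

def control_step(obs: np.ndarray, last_action: np.ndarray) -> np.ndarray:
    history.append(np.concatenate([obs, last_action]))
    # pad with zeros until 16 steps of history have accumulated
    window = np.zeros((CONTEXT, OBS_DIM + ACT_DIM))
    window[-len(history):] = np.stack(history)
    return placeholder_transformer(window)

# At roughly 50 Hz control, 16 steps cover about a third of a second of context.
action = np.zeros(ACT_DIM)
for _ in range(20):
    obs = np.random.randn(OBS_DIM)       # stand-in for real sensor readings
    action = control_step(obs, action)
```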

With extensive training in the digital realm, the controller successfully operated a real robot for an entire week of rigorous tests outdoors, ensuring that the robot maintained its balance without a single instance of falling over. In the lab, the robot successfully withstood external forces, even when an inflatable exercise ball was thrown at it. The controller surpassed the manufacturer’s non-machine-learning controller, effortlessly navigating a series of planks on the ground. While the default controller struggled to climb a step, the RL controller successfully overcame the obstacle, despite not encountering steps during its training.

Reinforcement learning has gained significant popularity in recent years, particularly for four-legged locomotion, and these studies show that the same techniques also work for two-legged robots. According to Pulkit Agrawal, a computer scientist at MIT, the papers mark a tipping point, matching or surpassing manually designed controllers, and with enough data a wide range of capabilities can be unlocked in a remarkably short time.

The two approaches are likely complementary: future AI robots will need both the resilience of Berkeley’s system and the agility of Google DeepMind’s. Real-world soccer demands both, which is why, as Lever notes, the game has long been a major challenge for robotics and artificial intelligence.

 
