
Robotics

America’s giant mech to be equipped with massive chainsaw in time for the duel against Japan (interview)


Remember the USA vs Japan giant mech duel we covered last month? If you don’t remember our article, you probably already know about the duel anyway, because it’s been everywhere in the news lately. The short version of the story is that a small team of engineers and robot enthusiasts from the US (MegaBots) built a giant mech called the MK II and went on to challenge a Japanese team and their Kuratas mech to a duel. The Japanese team subsequently accepted the challenge, but only if MegaBots agreed to equip the MK II with a melee weapon so that it could face the Kuratas in hand-to-hand combat.

Unlike the Kuratas, the MK II was not designed with melee combat in mind and would probably lose to the Japanese robot if they were to fight right now. The good news, though, is that MegaBots has come up with a plan to upgrade its giant mech with improved weapons, armor, and more. The team is looking to spend anywhere between $500K and $1.5 million on upgrades, with the money to be raised through a crowdfunding campaign. Surprisingly enough, MegaBots already managed to raise close to $190K on Kickstarter within 24 hours of the campaign’s launch. The campaign will continue to run for another 29 days, and it’s already looking very likely that giant mech fans everywhere will have contributed more than enough money by the time it’s all over.

Now, the fact that we’ll get to see a robot duel between the US and Japan is already pretty exciting, but believe it or not, things are about to get even better. A number of well-known public figures and respected companies have already agreed to help prepare the MK II for its upcoming battle against the Kuratas. Howe & Howe Technologies, IHMC Robotics, FonCo Creative Services, Autodesk, BattleBots co-founders Greg Munson and Trey Roski, MythBusters’ Grant Imahara, and even NASA’s David Lavery have all agreed to help MegaBots create the most advanced giant mech ever. Needless to say, the co-founders are very excited to be working with these experts and are hoping that this duel is just the first of many, because MegaBots plans to create a new type of international sport built around piloted robots fighting each other in an arena.

“This duel is the first match in a brand new global sport,” MegaBots co-founder Brinkley Warren said in an interview with Geek Reply. “Our long-term vision of MegaBots is to create the most compelling live action sports league in human history. We’re blending the technology of F1 with the fights of UFC, to create a new sport with millions of fans in stadiums around the world cheering on their favorite pilots and robots in epic robot battles. The MegaBot League will allow countries from around the world to field their own team and compete for the ultimate in human-piloted robot glory.”


Nothing screams America quite like this concept art depicting the proposed final version of the MK II complete with massive chainsaw and eagle heads. Image courtesy of MegaBots.

Warren was actually kind enough to answer a few more questions for us, including whether the MK II will really come equipped with a giant chainsaw like the one in the new concept art. Make sure to read the rest of our interview below.

So MegaBots has challenged Japan – a country that loves giant robots almost as much as the US loves bald eagles basking in red, white, and blue freedom – to a mech duel. Why go straight for the big boys?

Warren: If you want to be the champ, you’ve got to beat the champ. These colors don’t run; we’re out for glory, and if that means we gotta plow through the KURATAS with an 8-foot giant chainsaw and dual Gatling guns screaming out of shoulder-mounted eagle heads — then so be it.

The MK II was not developed for melee combat, and yet Suidobashi insisted that the robots duke it out in a hand-to-hand duel. Can we still expect to see at least some ranged combat?

Warren: Yes, both will be a part of the battle.

The new concept images reveal that the MK II will come equipped with what appears to be a bigass chainsaw(!). This type of weapon will likely do more damage than, say, a hammer or a sword, but is it more practical?

Warren: We’re really excited about the giant chainsaw — what’s more American than an 8-foot chainsaw mounted on a giant robot?

It wouldn’t surprise me if Suidobashi was working on adding a giant katana to the Kuratas. Is the MK II’s armor prepared for something like that?

Warren: We’re working to ensure that our armor can withstand whatever gauntlet KURATAS wants to throw at us.


The three MegaBots co-founders, from left to right: Brinkley Warren, Gui Cavalcanti, and Matt Oehrlein, standing in front of the MegaBot Mk. II at San Diego Comic-Con 2015. Photo credit: Kristine Ambrose

I know both teams are probably thinking about using the craziest weapons they can find; however, you also need to take the pilot’s safety into consideration. Are you taking special measures to ensure that the pilot comes out in one piece once the duel is over?

Warren: We’re working with NASA to explore how some of the same technologies that protect astronauts could be applied to keep us safe. We want to be able to be as aggressive as possible and that’s why we’re taking safety so seriously. The more safety measures we build in, the more we can really let loose against KURATAS.

Finally, can we expect even bigger robots at some point or are 15 feet enough to satisfy our thirst for giant mechs?

Warren: Surprisingly the bigger the robot is, the easier it is to balance — so who knows how big we’ll go.

Artificial Intelligence

Is it possible to legally make AI chatbots tell the truth?


A lot of people have tried out chatbots like ChatGPT in the past few months. Although they can be useful, there are also many examples of them giving out the wrong information. A group of scientists from the University of Oxford now want to know if there is a legal way to make these chatbots tell us the truth.

The growth of large language models
There is a lot of talk about artificial intelligence (AI), which has reached new heights in the last few years. One part of AI has received more attention than any other, at least from people who aren’t experts in machine learning: the large language models (LLMs) that use generative AI to produce answers to almost any question that sound eerily like they came from a person.

Models like those behind ChatGPT and Google’s Gemini are trained on huge amounts of data, which raises plenty of privacy and intellectual property issues. This is what lets them understand natural language questions and come up with answers that are coherent and relevant. Unlike a search engine, which requires you to learn its syntax, in theory all you have to do is ask a question the way you normally would.

There’s no doubt that their abilities are impressive, and they sound sure of their answers. One small problem is that these chatbots often sound just as sure of themselves when they’re completely wrong. That might be fine if people simply remembered not to believe everything the chatbots say.

The authors of the new paper say, “While problems arising from our tendency to anthropomorphize machines are well established, our vulnerability to treating LLMs as human-like truth tellers is uniquely worrying.” This is something that anyone who has ever had a fight with Alexa or Siri will know all too well.

“LLMs aren’t meant to tell the truth in a fundamental way.”

It’s easy to type a question into ChatGPT and assume it is “thinking” about the answer the way a person would. It may look that way, but it’s not how these models actually work.

Do not trust everything you read.
They describe LLMs as “text-generation engines designed to guess which string of words will come next in a piece of text.” How truthful the answers are is one of the ways the models are judged during development, but the authors say that in trying to give the most “helpful” answer, the models can too often oversimplify, be biased, or simply make things up.
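To make that idea concrete, here is a deliberately toy sketch in Python of what “guessing the next word” looks like. The probability table, the words in it, and the generate function are all made up for illustration; real LLMs compute these probabilities with enormous neural networks trained on vast amounts of text, but the generation loop is conceptually similar.

```python
import random

# Toy "model": for each word, a made-up distribution over possible next words.
# Real LLMs learn probabilities like these from huge training corpora; nothing
# in this table is checked against reality, which is exactly the point.
NEXT_WORD_PROBS = {
    "the": {"moon": 0.5, "study": 0.5},
    "moon": {"is": 1.0},
    "study": {"is": 1.0},
    "is": {"made": 0.4, "published": 0.6},
    "made": {"of": 1.0},
    "of": {"cheese.": 1.0},
    "published": {"today.": 1.0},
}

def generate(prompt_word, max_words=6):
    """Repeatedly pick a plausible-looking next word, with no regard for truth."""
    words = [prompt_word]
    for _ in range(max_words):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:  # no known continuation: stop generating
            break
        # Sample the next word according to the (made-up) probabilities.
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))  # e.g. "the moon is made of cheese." -- fluent and confident, not necessarily true
```

Nothing in that loop checks whether the output is true; the only objective is a statistically plausible continuation, which is the point the Oxford authors are making.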

It’s not the first time that people have said something like this. In fact, one paper went so far as to call the models “bullshitters.” In 2023, Professor Robin Emsley, editor of the journal Schizophrenia, wrote about his experience with ChatGPT. He said, “What I experienced were fabrications and falsifications.” The chatbot came up with citations for academic papers that didn’t exist and for a number of papers that had nothing to do with the question. Other people have said the same thing.

What’s important is that they do well with questions that have a clear, factual answer that appears often in their training data; they are only as good as the data they are trained on. And unless you’re prepared to carefully fact-check every answer you get from an LLM, it can be hard to tell how accurate the information is, since many models don’t provide links to their sources or any other indication of confidence.

“Unlike human speakers, LLMs do not have any internal notions of expertise or confidence. Instead, they are always ‘doing their best’ to be helpful and convincingly answer the question,” the Oxford team writes.

They were especially worried about what they call “careless speech” and the harm that could come from LLMs sharing these kinds of responses in real-life conversations. This led them to ask whether LLM providers could be legally required to make sure their models tell the truth.

What did the new study conclude?
The authors looked at current European Union (EU) laws and found that there aren’t many situations in which an organization or person has a clear duty to tell the truth. Those that do exist apply only to certain institutions or sectors, and rarely to the private sector. Because LLMs are fairly new technology, most of the existing rules were not written with them in mind.

The authors therefore propose something new: “making it a legal duty to cut down on careless speech among providers of both narrow- and general-purpose LLMs.”

A natural question is, “Who decides what is true?” The authors answer that the goal is not to force LLMs down a particular path, but to require “plurality and representativeness of sources.” How much weight “helpfulness” should be given against “truthfulness” remains a difficult balance to strike. It’s not easy, but it might be possible.

There are no easy answers to these questions (and no, we haven’t asked ChatGPT). But as the technology develops, its developers will have to grapple with them. For now, when you’re working with an LLM, it might be helpful to remember this sobering quote from the authors: “They are designed to take part in natural language conversations with people and give answers that are convincing and feel helpful, no matter what the truth is.”

The study is published in the journal Royal Society Open Science.


Nanotechnology

The British Army shows off its brand-new “Speed of Light” laser weapon


The UK government has test-fired what it calls a “speed of light laser weapon” mounted on top of a British Army combat vehicle.

The weapon was produced under the UK Ministry of Defence’s Land Laser Directed Energy Weapon (LDEW) Demonstrator program and has now been tested at a firing range in Porton Down, Salisbury. The Ministry of Defence says the “ground-breaking” test went well, with the laser able to destroy targets more than a kilometer (0.6 miles) away.

The press release for the new test described the weapon as a “speed of light laser weapon,” a phrase that led to some confusing headlines.

https://twitter.com/GrampsToolshed/status/1815863904196542816

All lasers travel at the speed of light, which is the speed at which all massless particles must travel. The phrase may sound impressive to people who fell asleep in physics class, but it’s a bit like advertising water as “very wet.”

Still, the laser itself is impressive if you like shooting down enemy drones. Its best features are that it is small and light, which allows it to be mounted on land vehicles for the first time.

“The successful testing of this powerful laser weapon is a major step forward in our efforts to improve the British Army’s future operational capabilities,” Matt Cork, who is in charge of the Defence Science and Technology Laboratory, said in a press release. “This technology offers a precise, powerful and cost effective means to defeat aerial threats, ensuring greater protection for our forces.”

Later this year, Army personnel will test the abilities and benefits of the “light speed laser weapon” in “real-world scenarios.”

 


Engineering

To make up for a lack of workers, Japan’s railways are putting huge humanoid robots to work


JR West is going to maintain its railway system in a very Japanese way: with high-tech robots that look like people.

Starting this month, the company will use big, mecha-like robots to carry out much of the maintenance work on its railway infrastructure, such as painting the support structures above the tracks and trimming tree branches that get in the way of trains.

The robots’ flexible arms can reach heights of up to 12 meters (39 feet) and lift objects weighing up to 40 kilograms (88 pounds). They can also be fitted with different tools to handle a wide range of odd jobs.

An operator sits in the truck that accompanies the working mechanoid and controls its movements with a joystick and VR goggles linked to a camera on the robot’s head.

Below is a video showing how the technology works. In one part of the montage, the robot is even seen using a circular saw to cut down tall trees. But don’t worry: the machine’s creators believe it is a safe pair of hands.

JR West recently said that it worked with robotics company Jinki Ittai and tech company Nippon Signal to create the technology, with the aim of making its employees safer and lowering the risk of workplace accidents.

The company also said that “labor shortages” were a big reason for the new technology. Japan has one of the oldest populations in the world, with about 29 percent of its people over 65 years old. That poses problems across the country, not least for an economy that is already struggling with a lack of workers.

Robots and other new technologies are often blamed for “stealing jobs” from people, but it looks like they can also be used to fill in for workers who aren’t available.
