
Technology

The head of the UN warns the world not to let AI control nuclear weapons


The head of the United Nations (UN), António Guterres, has urged the world not to let artificial intelligence (AI) play any part in decisions to use nuclear weapons.

“Humanity is on a knife’s edge,” Guterres said at a meeting of the Arms Control Association (ACA). He said that the risk of nuclear war was at “heights not seen since the Cold War.”

In his speech, Guterres said: “States are in a qualitative arms race. Technologies like artificial intelligence are multiplying the danger, and the reckless threat of nuclear disaster has returned as nuclear blackmail. At the same time, the regimes designed to prevent the use, testing, and spread of nuclear weapons are weakening. Dear friends, we need disarmament now.”

The Secretary-General urged all countries to disarm, with those that already have nuclear weapons leading the way.

“I also urge the United States and the Russian Federation to get back to the negotiating table, fully implement the new START treaty, and agree on its successor,” he said. “Until these weapons are eliminated, all countries must agree that any decision on nuclear use is made by humans, not machines or algorithms.”

That last part might sound like a distant, hypothetical threat, but automation already played a role in the Cold War.

A “dead hand” system, designed to guarantee that the Soviet Union could retaliate even if its leadership were wiped out by a nuclear strike, watched for signs that a nuclear weapon had hit the superpower by monitoring seismic activity, radiation levels, and changes in air pressure. If the system detected such a strike, it would then check whether lines of communication to top Soviet officials were still open.

If they were, it would stand down after 15 minutes, since people were still alive who could decide on a retaliatory strike. If the lines were down, lower-level operators of the dead-hand system, sheltered in a bunker, would be granted the authority to launch nuclear weapons, and the fate of the world would rest with a junior officer and a computer system.
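The decision procedure described above can be sketched in a few lines of code. This is purely illustrative, following only the simplified logic in the text; the real system's details remain secret, and the function name and its returned states are hypothetical.

```python
# Hypothetical sketch of the "dead hand" decision logic as described above.
# Not the real system -- its actual workings have never been made public.

def dead_hand_check(strike_detected: bool, command_lines_open: bool) -> str:
    """Return the system's action given sensor and communication state."""
    if not strike_detected:
        # No seismic, radiation, or air-pressure signature of a strike.
        return "standby"
    if command_lines_open:
        # Leadership is reachable: humans decide; the system stands down
        # after 15 minutes.
        return "stand down"
    # Leadership unreachable: bunker operators receive launch authority.
    return "delegate launch authority"
```

The key property is that the machine never launches on its own; at worst it transfers the decision to surviving humans.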

You can tell the system never fired, because you are still alive. On September 26, 1983, however, a Soviet early-warning system appeared to detect five nuclear missiles heading toward the Soviet Union. Stanislav Petrov, the duty officer, judged it to be a false alarm and declined to report an incoming attack to Soviet command. The “detection” was in fact sunlight glinting off high-altitude clouds, which in the satellite data looked like it might have been a strike.

Handing decisions that could wipe out humanity to algorithms or AI may not be a good idea. Left to their own devices, they might already have ended the world over a trick of light on clouds.


Engineering

Self-driving cars are safe, as long as you don’t need them to turn


A new study assessing the safety of autonomous vehicles (AVs) found that while they outperform humans in some everyday driving tasks, they are not yet as good as humans when turning or driving in low light.

We need to trust that our cars are safe before we can simply get in and let them take us where we need to go. The hope is that one day they will drive better than humans: cars don’t get tired, grow irritated with other drivers, or lose focus daydreaming, after all.

Tests of the technology have been run all over the world, and we now have a lot of data from semi-autonomous systems operating in real-life traffic. The new study from the University of Central Florida compared accident data from 35,113 human-driven vehicles (HDVs) with data from 2,100 vehicles equipped with Advanced Driving Systems (ADS) and Advanced Driver Assistance Systems (ADAS). The goal was to find out how safe AVs and HDVs are in different situations.

In general, the team found that AVs are safer than human drivers, though there are a few big exceptions.

“The analysis suggests that accidents involving vehicles equipped with advanced driving systems generally have a lower chance of occurring than accidents involving human-driven vehicles in most of the similar accident scenarios,” the team said in their paper.

AVs did better than HDVs at routine traffic tasks such as keeping to their lanes and adjusting to the flow of traffic, and they had fewer accidents while doing them: the odds of a sideswipe accident in an AV were about 0.2 times those of an HDV, and the odds of a rear-end accident about 0.5 times.

In other traffic situations, though, humans are still better than AI.

“Based on the model estimation results, it can be concluded that ADS [automated driving systems] in general are safer than HDVs in most accident scenarios for their object detection and avoidance, precision control, and better decision-making,” the team said.

“However, the chances of an ADS accident happening at dawn or dusk or when turning are 5.250 and 1.988 times higher, respectively, than the chances of an HDV accident under the same conditions.” The likely causes are reduced situational awareness in difficult driving conditions and the systems’ limited accumulated driving experience.
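For readers unfamiliar with figures like the 5.250 and 1.988 above, a short sketch shows how an odds ratio scales a baseline accident probability. The baseline rate below is a made-up assumption for illustration, not a number from the paper.

```python
# Illustrative only: converting an odds ratio (like the study's 5.250 for
# dawn/dusk) into a probability, given an assumed baseline accident rate.

def apply_odds_ratio(p_baseline: float, odds_ratio: float) -> float:
    """Scale the odds of an event by `odds_ratio` and return the new probability."""
    odds = p_baseline / (1 - p_baseline)   # probability -> odds
    new_odds = odds * odds_ratio           # apply the ratio
    return new_odds / (1 + new_odds)       # odds -> probability

p_hdv = 0.01  # hypothetical HDV accident probability in some scenario
p_ads_dawn_dusk = apply_odds_ratio(p_hdv, 5.250)  # ADS at dawn or dusk
p_ads_turning = apply_odds_ratio(p_hdv, 1.988)    # ADS while turning
```

With a 1% baseline, the dawn/dusk odds ratio of 5.250 works out to roughly a 5% accident probability, a substantial jump even though the ratio applies to odds rather than directly to probabilities.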

Pinpointing these key problem areas should help researchers improve AV performance, and new approaches to hazard detection in these conditions would be especially valuable.

“At dawn and dusk, for instance, the sun’s shadows and reflections may confuse sensors, making it hard for them to distinguish between objects and identify potential hazards,” they wrote. “Furthermore, the fluctuating light conditions can impact the accuracy of object detection and recognition algorithms used by AVs, which can result in false positives or negatives.”

The study may disappoint supporters of self-driving cars waiting for the crossover point at which AVs beat human drivers outright. But when performance does improve, the improvement can be rolled out to every AV at once: researchers who find a better way to handle turns can push it to whole fleets through software updates, something we cannot do with people.

We hope that one day we can get into AVs without having to worry about lights changing or other people on the road getting distracted.

The study can be found in Nature Communications.


Artificial Intelligence

A new study suggests that ChatGPT may have passed the Turing test


René Descartes, a French philosopher who may or may not have been high on pot, had an interesting thought in 1637: can a machine think? Alan Turing, an English mathematician and computer scientist, gave the answer to this 300-year-old question in 1950: “Who cares?” He said a better question was what would become known as the “Turing test”: if there was a person, a machine, and a human interrogator, could the machine ever trick the human interrogator into thinking it was the person?

Turing changed the question in this way 74 years ago. Now, researchers at the University of California, San Diego, think they have the answer. A new study that had people talk to either different AI systems or another person for five minutes suggests that the answer might be “yes.”

“After a five-minute conversation, participants in our experiment were no better than random at identifying GPT-4. This suggests that current AI systems can deceive people into believing they are human,” says the preprint paper, which has not yet undergone peer review. “These results likely set a lower bound on the potential for deception in more naturalistic settings, where people may not be alert to the possibility of deception, or focused exclusively on detecting it.”

Even though this is a big event that makes headlines, it’s not a milestone that everyone agrees on. The researchers say that Turing first thought of the imitation game as a way to test intelligence, but “many objections have been raised to this idea.” People, for example, are known for being able to humanize almost anything. We want to connect with things, whether they’re people, dogs, or a Roomba with googly eyes on top of it.

Also, it’s notable that GPT-4, along with GPT-3.5, which was also tested, persuaded judges it was human only about half of the time, which isn’t much better than random chance. What does this result really mean?

As it turns out, one of the AI systems the team built into the experiment was ELIZA, included as a baseline. Created at MIT in the mid-1960s, she was one of the first programs of her kind: impressive for her time, but with little in common with modern large language model (LLM)-based systems.

“ELIZA could only give pre-written answers, which greatly limited what it could do. It might fool someone for five minutes, but the limitations would soon become clear,” Nell Watson, an AI researcher at the Institute of Electrical and Electronics Engineers (IEEE), told Live Science. “Language models are endlessly flexible; they can synthesize responses on a huge range of topics, speak in particular languages or sociolects, and portray themselves with character-driven personality and values.” A significant step up from anything a person, however clever and careful, could program by hand.

That made ELIZA ideal as a control. How do you account for test subjects who lazily guess “human” or “machine” at random? If ELIZA scores no better than chance, the test is probably not being taken seriously, because she isn’t that good. And how do you measure how much of the effect is simply people ascribing human traits to things? However much ELIZA persuades people, that is roughly the size of that effect.

In fact, ELIZA scored only 22 percent, convincing just over 1 in 5 judges that she was human. Since test subjects could reliably tell some computers from people, but not ChatGPT, the researchers argue, it becomes far more plausible that ChatGPT has genuinely passed the Turing test.
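The comparison against chance can be made concrete with a rough normal-approximation z-score. The sample size below is a hypothetical assumption, not the study's actual number of judgments; the pass rates are the approximate figures reported above.

```python
# Rough sketch of the chance-baseline logic: is a system's "judged human"
# rate distinguishable from a coin flip? The sample size n is hypothetical.
import math

def z_vs_chance(rate: float, n: int, chance: float = 0.5) -> float:
    """Normal-approximation z-score of an observed rate against a chance baseline."""
    se = math.sqrt(chance * (1 - chance) / n)  # standard error under chance
    return (rate - chance) / se

n = 200                     # assumed number of judgments, for illustration
z_gpt4 = z_vs_chance(0.50, n)   # GPT-4 at ~50%: indistinguishable from chance
z_eliza = z_vs_chance(0.22, n)  # ELIZA at 22%: far below chance
```

Under this assumed sample size, a 50% rate yields a z-score of zero (pure chance), while 22% sits many standard errors below it, which is why the ELIZA result reads as judges genuinely detecting the machine.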

So, does this mean we’re entering a new era of AI that acts like humans? Are computers smarter than people now? Maybe, but we probably shouldn’t make our decisions too quickly.

The researchers say, “In the end, it seems unlikely that the Turing test provides either necessary or sufficient evidence for intelligence. At best, it provides probabilistic support.” The people who took part weren’t even looking for what you might call “intelligence”; the paper says they “were more focused on linguistic style and socio-emotional factors than more traditional notions of intelligence such as knowledge and reasoning.” This “could reflect interrogators’ latent assumption that social intelligence has become the human trait that is most difficult for machines to copy.”

Which brings up a scary question: is the fall of humans the bigger problem than the rise of machines?

“Real humans were actually more successful, convincing interrogators that they were human two-thirds of the time,” the paper’s co-author, Cameron Jones, told Tech Xplore. “Our results suggest that in the real world, people might not be able to reliably tell if they’re talking to a human or an AI system.”

“In the real world, people might be less aware that they could be talking to an AI system, so the rate of deception might be even higher,” he warned. “This makes me wonder what AI systems will be used for in the future, whether automating client-facing bots and customer service jobs, or spreading misinformation and fraud.”

A preprint of the study is available on arXiv; it has not yet been peer reviewed.


Astronomy

The exciting Lunar Standstill will be streamed live from Stonehenge


Stonehenge is one of those famous landmarks that people find endlessly fascinating. Its alignment with the sun at the solstices is unmistakable, but no one is sure what the monument was for. Over the next few months, however, scientists will examine a different kind of alignment: some of its stones may line up with the lunar standstill.

Things move around in the sky. Because Earth’s axis is tilted with respect to its orbit, the points on the horizon where the sun rises and sets shift through the year. Stonehenge is arranged so that the first rays of dawn on the summer solstice and the last rays of sunset on the winter solstice both pass through the middle of the monument.

But outside the stone circle stand the so-called station stones, whose purpose is unknown. They seem to be linked not to the sun but to the Moon. The positions of moonrise and moonset shift too, because the Moon’s orbit is tilted relative to Earth’s, much as the sun’s rising point shifts. But the Moon’s cycle does not repeat every year: it takes 18.6 years to complete.

At the peak of that cycle, the Moon’s rising and setting points swing between declinations of 28.725 degrees north and 28.725 degrees south within a single month. This peak is called the major lunar standstill, or lunistice, and the next one arrives in January 2025. Scientists will therefore visit Stonehenge several times over the coming months, including during the major standstill, to work out how the monument might align with our natural satellite.

“I think the moon in general would have been very important to them,” said Heather Sebire, senior property curator at Stonehenge. “And you know, when there was a full moon, the extra light may have let them do things they couldn’t at other times.”

The suspected link to the lunar standstill involves the four station stones that stand outside the main circle, only two of which survive. Together they form a rectangle, and some researchers believe its orientation outside the circle may be connected to the Moon’s extremes.

During a minor standstill, by contrast, the Moon’s rising and setting points range only between 18.134 degrees north and south. The next minor standstill will occur in 2034.
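The two standstill figures can be roughly derived with simple arithmetic: the Moon's extreme declinations come from adding or subtracting its orbital inclination to Earth's axial tilt. The constants below are standard mean values, so the results only approximate the more precise 28.725° and 18.134° quoted in the article, which include finer corrections.

```python
# Back-of-the-envelope derivation of the standstill declination extremes.
# Mean values; the precise figures in the text include additional corrections.
OBLIQUITY = 23.44          # Earth's axial tilt, degrees
LUNAR_INCLINATION = 5.14   # tilt of the Moon's orbit to the ecliptic, degrees

# Major standstill: the two tilts add; minor standstill: they subtract.
major = OBLIQUITY + LUNAR_INCLINATION  # ~28.6 deg north/south
minor = OBLIQUITY - LUNAR_INCLINATION  # ~18.3 deg north/south
```

The 18.6-year period is simply how long the Moon's orbital plane takes to precess once, swinging the extremes between these two limits.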

As archaeologists continue to investigate this intriguing alignment, Stonehenge is inviting everyone to join in. As usual, visitors will be able to enter the circle for the solstice, which this year is the earliest since 1796. The following day, however, will be all about the lunistice.


The standstill moonrise itself can only be watched online. From the comfort of home, you can follow the livestream and wonder, along with the researchers, whether this great monument was aligned with the Moon as well as the sun.

