
Science

Breakthrough Prize ceremony will be aired live on National Geographic Channel


The Breakthrough Prizes are a series of annual international awards that honor outstanding achievements in physics, mathematics and the life sciences. They were founded in 2012 by Silicon Valley innovators Mark Zuckerberg and Priscilla Chan (Facebook), Sergey Brin (Google), Anne Wojcicki (23andMe), Jack Ma and Cathy Zhang (Alibaba), Yuri Milner (DST Global) and Julia Milner.

The prizes are awarded to scientists who have made remarkable achievements in their fields, and each Breakthrough Prize laureate receives a $3 million prize at an official ceremony. For the 2016 Breakthrough Prizes, the ceremony will be held on Sunday, November 8, and broadcast live in the US on the National Geographic Channel, with a one-hour version scheduled for Sunday, November 29, on FOX. It will be produced by acclaimed producer Don Mischer.

The ceremony, held in Silicon Valley, will be hosted by Cosmos executive producer and Family Guy creator Seth MacFarlane. This year's event will also feature the Breakthrough Junior Challenge for students, in which a student with an original presentation of a scientific idea or principle will be awarded $400,000. Since 2013, the Breakthrough Prize has awarded more than $160 million to its recipients.

“With the unparalleled global reach of the National Geographic brand and the power of FOX, we’re taking Breakthrough Prize to the widest possible audience,” said executive producer and director of television Don Mischer. “With Seth hosting, we look to have fun celebrating the world’s foremost leaders in physics, life sciences and mathematics in hopes of inspiring a new generation of disruptors.”

For more information about the Breakthrough Prize, visit the link below.

https://breakthroughprize.org/


Physics

According to physics, your enemy’s enemy is actually your friend


People are social animals, and their relationships are complicated and constantly changing. Several fields of study have tried to explain how these social networks work and how they evolve over time. One of them is social balance theory, first put forward in the 1940s. Using statistical physics, researchers have now been able to confirm it.

As the name suggests, social balance theory is built on the idea of balance: people want the relationships in their networks to be balanced and try to keep them that way, and a few rules determine whether the system is. Positive relationships are balanced; negative or mixed ones are not. The classical model rests on a simple distinction: good relationships are "friends" and bad relationships are "enemies."

First, a friend of a friend is still a friend. (These are idealized rules, so don't immediately think of that friend of yours you can't stand.) Another rule says that a friend of an enemy is an enemy, and of course, an enemy of a friend is also an enemy; we have to protect our friends. The last rule is a little subtler: an enemy of an enemy is a friend. The new analysis shows that real networks largely follow these rules, but the scientists had to add considerable complexity to their model before it could capture them.
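These rules can be captured with the usual sign convention from balance theory: label each relationship +1 (friends) or -1 (enemies), and call a triangle of three people balanced when the product of its signs is positive. The sketch below is ours, purely illustrative, and not the code from the study.

```python
# Illustrative sketch of classical social balance in a triad.
# Relationships are +1 (friends) or -1 (enemies); a triangle is
# "balanced" when the product of its three signs is positive.
from itertools import product

def is_balanced(ab: int, bc: int, ca: int) -> bool:
    """Return True if the triangle A-B-C is balanced."""
    return ab * bc * ca > 0

for signs in product((+1, -1), repeat=3):
    label = "balanced" if is_balanced(*signs) else "unbalanced"
    print(signs, label)

# All-friends triangles and triangles with exactly two enemy links come
# out balanced, matching "a friend of a friend is a friend" and
# "an enemy of my enemy is my friend".
```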

It is finally possible to say that social networks match the expectations set 80 years ago, said Bingjie Hao, the study's first author, from Northwestern University. "Our results can also be used in many different ways in the future." Thanks to the mathematics, the researchers can put constraints on the connections and account for what each entity in the system wants, which will help when modeling systems other than social networks.

Two ingredients proved essential to the new model: in real life not everyone knows everyone else, and some people are more positive than others. When both constraints are applied, the resulting social networks match the ones Fritz Heider predicted 80 years ago.

"We always thought this social intuition worked, but we didn't know why," said István Kovács, the study's senior author. "All that was left was to do the math." Plenty of studies have tested the idea, but they did not all point to the same conclusion; researchers kept getting it wrong for decades because real life is messy. The team realized it had to deal with both problems at the same time: "who knows whom" and the fact that some people are simply friendlier than others.

The study is published in the journal Science Advances.


Astronomy

The first lunar railway could be developed within the next ten years


For people to live permanently on the Moon's surface, they will need to be able to use the Moon's own resources; not everything can be brought from Earth. But a base is unlikely to have everything it needs right on site, so some things will have to be moved around. Vehicles (well, buggies) on the Moon are nothing new, but scientists are now considering a very different idea: a railway system that floats.

The project is called FLOAT, which stands for "Flexible Levitation on a Track." The goal is autonomous, reliable and efficient payload transport: moving payloads from spacecraft landing zones to the base, and from mining sites to the places where resources are extracted or where the regolith is used for construction.

What is interesting about the technology is that the tracks are not fixed. They are unrolled directly onto the lunar regolith, so FLOAT needs very little site preparation. Levitating robots will move along the tracks; since they have no wheels or legs, they avoid the sharp, abrasive regolith and the damage it causes.

The flexible film track has a graphite layer that enables diamagnetic levitation and a flex-circuit layer that generates electromagnetic thrust. An optional third layer is a solar panel that powers the system in sunlight. The robots may come in different sizes, but the team estimates they could move 100 tons of material over several kilometers every day.
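To put the 100-ton figure in perspective, here is a rough back-of-envelope sketch of what such a daily target could imply for the size of the robot fleet. The track length, robot speed and per-robot payload below are our own assumptions, not NASA specifications.

```python
# Back-of-envelope look at the quoted FLOAT target of ~100 tons/day
# moved over several kilometers. All per-robot figures below are
# illustrative assumptions, not NASA specifications.

ROUTE_LENGTH_KM = 5.0    # assumed one-way track length ("several kilometers")
ROBOT_SPEED_KPH = 1.8    # assumed cruise speed (~0.5 m/s)
ROBOT_PAYLOAD_T = 0.03   # assumed 30 kg of regolith per robot
DAILY_TARGET_T = 100.0   # daily target quoted in the article

round_trip_h = 2 * ROUTE_LENGTH_KM / ROBOT_SPEED_KPH
tons_per_robot_per_day = (24 / round_trip_h) * ROBOT_PAYLOAD_T
robots_needed = DAILY_TARGET_T / tons_per_robot_per_day

print(f"Round trip: {round_trip_h:.1f} h")
print(f"Per robot: {tons_per_robot_per_day:.2f} t/day")
print(f"Robots needed: {robots_needed:.0f}")
```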

Six NASA Innovative Advanced Concepts (NIAC) projects have been advanced to Phase II, and FLOAT is one of them; others include a new way to get astronauts to Mars quickly and a concept for a liquid space telescope. For FLOAT, Phase II will focus on designing and building a scaled-down version of the system to be tested in a Moon-like environment. This will also help establish how the lunar environment affects the tracks and robots, and what else is needed to turn the idea into reality.

In a statement, John Nelson, NIAC program executive at NASA Headquarters in Washington, said, “These different, science fiction-like ideas make up a great group of Phase II studies.” “Our NIAC fellows always amaze and inspire us. This class makes NASA think about what’s possible in the future.”

Each of these projects received $600,000 to continue studying its feasibility. FLOAT is led by Ethan Schaler of NASA's Jet Propulsion Laboratory. If the system keeps proving itself, it could become an important part of life on the Moon by the 2030s.

Phase I projects have also been announced. The ideas include new designs for telescopes, ways to make Mars less dangerous, and even a group of very small spacecraft that could reach our nearest stars in 20 years.


Science

Chasing Perfection in a Long-Standing Theoretical Computer Science Problem


For most of us, memories of learning our multiplication tables have turned into a running joke. "You won't have a calculator in your pocket every day as an adult," we were warned. Well, Mrs. Hickinbottom, that turned out to be wrong: today nearly everyone carries not only a calculator in their pocket but access to the entire accumulated knowledge of humanity.

Mathematicians and computer scientists, however, are not most people. For them, another kind of multiplication has been around since at least the early 19th century: matrix multiplication. And even in our era of advanced technology, it remains a challenging task.

But does it have to be? Two recent results, one from November 2023 and another published in January, suggest that the answer is no, or at least not as much as previously believed.

The challenges associated with matrix multiplication
Firstly, let's address the question: what precisely is a matrix? Regrettably, the answer is rather less exciting than the movies would have you believe.

In essence, a matrix is a rectangular array of numbers or other mathematical objects, such as symbols, expressions, or even other matrices, arranged in rows and columns. Being able to manipulate them is crucial throughout mathematics and science because of their enormous range of applications.


Multiplying two matrices produces another matrix, but only if certain conditions are met: the number of columns in the left matrix must equal the number of rows in the right matrix.


It is crucial to get this right because, unlike regular multiplication, the matrix operation is not commutative. This means that the order in which the matrices are multiplied is significant. When given two matrices A and B, it is possible to calculate the matrix product AB but not necessarily the product BA. Even if both products are calculable, there is no guarantee that they will yield the same result.
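A quick way to see both rules in action, the shape requirement and the lack of commutativity, is a short NumPy snippet. This is purely illustrative; the matrices are arbitrary.

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])      # 2 x 3
B = np.array([[1, 0],
              [0, 1],
              [2, 2]])         # 3 x 2

# A has 3 columns and B has 3 rows, so A @ B is defined (a 2 x 2 result)...
print(A @ B)
# ...and B @ A happens to be defined too (3 x 3), but gives a different matrix.
print(B @ A)

# With a 2 x 2 matrix on the right, the product A @ C is not defined at all:
C = np.array([[1, 1],
              [1, 1]])         # 2 x 2
try:
    A @ C                      # 3 columns vs. 2 rows: shape mismatch
except ValueError as err:
    print("Shapes don't match:", err)
```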


With those requirements out of the way, how do you actually work out the product of two matrices? In mathematical notation, the answer looks like this:

 

If A is an m-by-n matrix and B is an n-by-p matrix, their product C = AB is the m-by-p matrix whose entries are

$$ c_{ij} = \sum_{k=1}^{n} a_{ik}\, b_{kj}, \qquad i = 1, \dots, m, \quad j = 1, \dots, p. $$

Which, we admit, may not be particularly enlightening on its own. So let's look at an example.

 

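As a small illustrative example (the numbers are ours, chosen for simplicity), take two 2-by-2 matrices:

$$
A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \qquad
B = \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix}
$$

$$
AB = \begin{pmatrix}
1\cdot 5 + 2\cdot 7 & 1\cdot 6 + 2\cdot 8 \\
3\cdot 5 + 4\cdot 7 & 3\cdot 6 + 4\cdot 8
\end{pmatrix}
= \begin{pmatrix} 19 & 22 \\ 43 & 50 \end{pmatrix}
$$

Even in this tiny case, each of the four entries needs two multiplications and an addition, eight multiplications in total.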

By now, you may have noticed that matrix multiplication takes significantly more effort than ordinary multiplication, and you would be absolutely right. That's one reason it would be so useful to have a computer program do all of this work for us. Unfortunately, even that solution comes with its own set of challenges.

The gradual advance of progress
According to an article from 2005 by Sara Robinson for SIAM News, researchers have long been searching for an efficient method to multiply matrices, a crucial operation that often slows down important algorithms.

The article goes on to note that faster matrix multiplication would yield more efficient algorithms for many common linear algebra problems, including inverting matrices, solving systems of linear equations, and computing determinants. Even some basic graph algorithms run only as fast as matrix multiplication does.

So how fast is it? Historically, progress in this area has been frustratingly slow. For reasonably large matrices, say 100 rows and columns each, the number of multiplications needed to find their product with the standard method already reaches 1,000,000, and the cost grows not linearly but cubically with size. Put simply: adding just one extra row and column to those matrices adds more than 30,000 multiplications to the problem.
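A short sketch of the standard schoolbook method (ours, for illustration) makes the cubic growth explicit by counting the scalar multiplications it performs.

```python
# Schoolbook matrix multiplication with a counter for scalar multiplications,
# to show the cubic growth described above (illustrative sketch).

def matmul_naive(A, B):
    n, m, p = len(A), len(B), len(B[0])
    assert len(A[0]) == m, "columns of A must equal rows of B"
    C = [[0] * p for _ in range(n)]
    mults = 0
    for i in range(n):
        for j in range(p):
            for k in range(m):
                C[i][j] += A[i][k] * B[k][j]
                mults += 1
    return C, mults

for size in (100, 101):
    identity = [[1 if i == j else 0 for j in range(size)] for i in range(size)]
    _, mults = matmul_naive(identity, identity)
    print(size, mults)   # 100 -> 1,000,000    101 -> 1,030,301
```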

There has been extensive research over the years into speeding up the task. Many specialists believe it should eventually be possible to multiply a pair of 100-by-100 matrices in around 10,000 steps, but no fewer. How to get there, however, remains a major open problem in computer science.

“The objective of this research,” stated Renfei Zhou, a theoretical computer science student at Tsinghua University and co-author of the recent papers, in an interview with Quanta Magazine earlier this year, “is to explore the extent to which a value close to two can be reached, and to determine if it is theoretically attainable.”

We have made progress. Ever since 1969, when mathematician Volker Strassen revolutionized matrix multiplication with a more efficient algorithm, the time exponent has significantly decreased to below 2.4. In simpler terms, it now takes fewer than 64,000 calculations to multiply 100-by-100 matrices together. However, progress in this field has been challenging. According to François Le Gall, a computer scientist from Nagoya University, advancements since the late eighties have been minimal and incredibly hard to achieve.

So, you might be wondering, what’s the reason for our excitement over this latest improvement? From a purely numerical perspective, the gain is not particularly significant.

Making the best even better
To understand the problem that was solved between November and January, we need to look at what was going on before that. It turned out to be a bit of a mess.

Two big steps forward were made in 1986 and 1987. First, Volker Strassen (yes, that Strassen again) came up with what is now called the “laser method” for matrix multiplication. Then, a year later, computer scientist Shmuel Winograd and cryptographer Don Coppersmith made an algorithm that built on Strassen’s work and made it better.

Combining the two methods produces a very clever result. Back in the 1960s, Strassen was the first to notice that if you rewrite matrices A and B as block matrices, that is, matrices whose entries are themselves matrices, you can compute their product A∙B = C in fewer than n³ calculations, as long as you perform the right ones.
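Strassen's original trick is concrete enough to sketch: split each matrix into four blocks and combine them with seven block multiplications instead of the obvious eight, which, applied recursively, gives an exponent of log2(7) ≈ 2.807 instead of 3. The code below is a standard textbook rendering of that idea, restricted to power-of-two sizes and falling back to ordinary multiplication for small blocks; it is not the laser-method machinery discussed next.

```python
# Sketch of Strassen's 1969 algorithm for square matrices whose size is a
# power of two: 7 block multiplications per level instead of 8.
import numpy as np

def strassen(A, B, cutoff=64):
    n = A.shape[0]
    if n <= cutoff:                  # small blocks: ordinary multiplication
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]

    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)

    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.block([[C11, C12], [C21, C22]])

A = np.random.rand(256, 256)
B = np.random.rand(256, 256)
print(np.allclose(strassen(A, B), A @ B))   # True
```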

Coppersmith and Winograd's algorithm then helps you figure out exactly which calculations those are. MIT computer scientist Virginia Vassilevska Williams, a co-author of one of the new papers, told Quanta that their algorithm "tells me what to multiply, what to add, and what entries go where. It's just a plan for making C from A and B."

This is where the laser method comes in. Coppersmith and Winograd's algorithm is great, but it's not perfect: it often generates redundant information, with blocks "overlapping" in places. Computer scientists use the laser method to "cut away" these duplicates. Le Gall said it "typically works very well" and "generally finds a good way to kill a subset of blocks to remove the overlap."

But sometimes the laser cuts away too much, like an early-2000s beautician let loose on a perfectly good pair of eyebrows. "A faster matrix multiplication algorithm is the result of being able to keep more blocks without overlap," Le Gall told Quanta, and that is exactly the idea behind the method from Ran Duan's team, the authors of the November 2023 paper.

Making the scales equal again
By changing how the laser method assigns weight to the blocks in a matrix, making them more likely to be kept rather than cut away, the team achieved the largest reduction in the matrix multiplication exponent in more than a decade.

Don’t get too excited yet; they only lowered it from 2.373 to 2.372. But that’s not really the point: what really excites computer scientists is not the outcome but the way the team accomplished it. Le Gall told Quanta that after almost forty years of very small improvements to the same set of algorithms, “they found that, well, we can do better.”

We don't yet know how much further this will go, but if you're wondering what these ground-breaking results will be used for in real life, you may be disappointed. Algorithms based on the laser method are already "galactic algorithms," so called because they are never used to solve any actual problem on Earth. And unless something hugely unexpected happens with quantum computing, the same will be true of the new, improved versions.

"We never run the method," Zhou said. "We only study it."
