
Science

Pursuing Perfection in a Long-Standing Computer Science Problem


For most of us these days, memories of learning our multiplication tables have become a running joke. “You won’t have a calculator with you every day as an adult,” we were warned. Well, Mrs. Hickinbottom, that turned out to be wrong: today nearly everyone carries not just a calculator in their pocket but access to the entire accumulated store of human knowledge.

Mathematicians and computer scientists, however, are not most people. Since at least the early 19th century they have been grappling with a different kind of multiplication entirely: matrix multiplication. And even with all our modern technology, it remains a genuinely difficult task.

But does it have to be? Two recent results, one from November 2023 and another published in January, suggest the answer is no, or at least that it is not quite as hard as we previously believed.

The trouble with matrix multiplication
First, let’s deal with the obvious question: what exactly is a matrix? Sadly, the answer is rather less exciting than the movie would have you believe.

In essence, a matrix is a rectangular array of numbers or other mathematical objects, such as symbols, expressions, or even other matrices, arranged in rows and columns. Being able to manipulate them is essential throughout mathematics and science, because their applications are so wide-ranging.


Multiplying two matrices produces another matrix, but only if certain conditions are met: the number of columns in the left matrix must equal the number of rows in the right matrix.
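In symbols: if A has m rows and n columns and B has n rows and p columns, then the product exists and has m rows and p columns:

$$A_{m \times n}\, B_{n \times p} = C_{m \times p}.$$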


Getting this right matters because, unlike ordinary multiplication, matrix multiplication is not commutative: the order of the factors is significant. Given two matrices A and B, the product AB may be defined while BA is not, and even when both products exist, there is no guarantee they will be equal.
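For instance, take

$$A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \qquad B = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.$$

Both AB and BA exist here, yet

$$AB = \begin{pmatrix} 2 & 1 \\ 4 & 3 \end{pmatrix} \neq \begin{pmatrix} 3 & 4 \\ 1 & 2 \end{pmatrix} = BA.$$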


So, with all those requirements satisfied, how do you actually work out the product of two matrices? In mathematical notation, the answer looks like this: each entry of the product C = AB is given by

$$c_{ij} = \sum_{k=1}^{n} a_{ik}\, b_{kj},$$

where n is the shared dimension, the number of columns of A and rows of B.

Which, we admit, may not be especially illuminating on its own. So let’s look at an example, multiplying a pair of 2-by-2 matrices:

$$\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}\begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix} = \begin{pmatrix} 1\cdot 5 + 2\cdot 7 & 1\cdot 6 + 2\cdot 8 \\ 3\cdot 5 + 4\cdot 7 & 3\cdot 6 + 4\cdot 8 \end{pmatrix} = \begin{pmatrix} 19 & 22 \\ 43 & 50 \end{pmatrix}.$$

Each entry of the result comes from pairing a row of the first matrix with a column of the second, multiplying the matching entries, and adding them up.

By now you may have noticed that matrix multiplication takes considerably more work than ordinary multiplication, and you would be absolutely right. That’s one reason it would be so useful to have a computer do it all for us. Unfortunately, even that turns out to have its problems.
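To make the workload concrete, here is a minimal Python sketch of the schoolbook algorithm (our illustration, not code from the papers discussed here): it checks the dimension rule and then runs the triple loop, which is where the roughly n³ scalar multiplications for two n-by-n matrices come from.

```python
def matmul(A, B):
    """Schoolbook matrix product: C[i][j] = sum over k of A[i][k] * B[k][j]."""
    rows_a, cols_a = len(A), len(A[0])
    rows_b, cols_b = len(B), len(B[0])
    if cols_a != rows_b:  # columns of the left matrix must match rows of the right
        raise ValueError("incompatible shapes: cannot multiply these matrices")
    # Triple loop: roughly rows_a * cols_b * cols_a scalar multiplications in total
    return [[sum(A[i][k] * B[k][j] for k in range(cols_a))
             for j in range(cols_b)]
            for i in range(rows_a)]

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```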

The slow march of progress
According to an article from 2005 by Sara Robinson for SIAM News, researchers have long been searching for an efficient method to multiply matrices, a crucial operation that often slows down important algorithms.

The article continues: faster matrix multiplication would yield more efficient algorithms for many common linear algebra problems, including matrix inversion, solving systems of linear equations, and computing determinants. Even some basic graph algorithms run only as fast as matrix multiplication does.

So the obvious question is: just how fast can it be done? The historical answer, regrettably, is “not very”. For matrices of any real size, say 100 rows and columns each, the number of multiplications needed to compute their product quickly climbs to 1,000,000 and beyond. Worse, that growth is not linear but cubic: add just one extra row and column to those matrices, and the problem demands more than 30,000 additional multiplications.
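As a quick sanity check on those figures, the schoolbook method uses n³ scalar multiplications for two n-by-n matrices, so

$$100^3 = 1{,}000{,}000, \qquad 101^3 - 100^3 = 1{,}030{,}301 - 1{,}000{,}000 = 30{,}301.$$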

Plenty of research effort over the years has gone into speeding the task up. Many specialists in the field believe we will eventually hit a limit where multiplying a pair of 100-by-100 matrices takes around 10,000 steps, but not fewer. How to actually get there, however, remains one of computer science’s significant open challenges.

“The objective of this research,” stated Renfei Zhou, a theoretical computer science student at Tsinghua University and co-author of the recent papers, in an interview with Quanta Magazine earlier this year, “is to explore the extent to which a value close to two can be reached, and to determine if it is theoretically attainable.”

We have made progress. Since 1969, when mathematician Volker Strassen revolutionized matrix multiplication with a more efficient algorithm, the exponent in the running time has crept down to below 2.4; in simpler terms, it now takes fewer than 64,000 calculations to multiply two 100-by-100 matrices together. But progress has been hard-won: according to François Le Gall, a computer scientist at Nagoya University, the advances since the late eighties have been minimal and incredibly hard to achieve.
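Those step counts follow straight from the exponent: an exponent of ω means roughly 100^ω operations for 100-by-100 matrices, so

$$100^{3} = 1{,}000{,}000, \qquad 100^{2.4} \approx 63{,}096, \qquad 100^{2} = 10{,}000,$$

which is where the “fewer than 64,000” figure above and the hoped-for “around 10,000” come from.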

So, you might be wondering, what’s the reason for our excitement over this latest improvement? From a purely numerical perspective, the gain is not particularly significant.

Making the best even better
To understand the problem that was solved between November and January, we need to look at what was going on before that. It turned out to be a bit of a mess.

Two big steps forward came in 1986 and 1987. First, Volker Strassen (yes, that Strassen again) devised what is now called the “laser method” for matrix multiplication. A year later, computer scientist Shmuel Winograd and cryptographer Don Coppersmith built on Strassen’s work with an algorithm that improved it further.

Combining these two methods gives a very clever result. Back in the 1960s, Strassen had been the first to notice that if you rewrite matrices A and B as block matrices, that is, as matrices whose elements are themselves other matrices, you can compute their product A∙B = C in fewer than n³ calculations, as long as you do the right ones.
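Strassen’s original 1969 trick is simple enough to sketch: split each matrix into four blocks, and seven block products turn out to suffice where the obvious approach needs eight; applied recursively, that pulls the exponent down from 3 to log₂7 ≈ 2.81. Here is a minimal Python version (an illustration assuming square matrices whose size is a power of two; this is Strassen’s recursion, not the laser method):

```python
import numpy as np

def strassen(A, B):
    """Multiply two square matrices (size a power of two) with Strassen's
    seven-multiplication block scheme."""
    n = A.shape[0]
    if n == 1:
        return A * B  # 1x1 base case: plain scalar multiplication
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Seven block products instead of the naive eight
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    # Reassemble the four blocks of C from the seven products
    C = np.empty((n, n), dtype=A.dtype)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

# Sanity check against NumPy's built-in product on a random 8x8 example
A = np.random.randint(0, 10, (8, 8))
B = np.random.randint(0, 10, (8, 8))
assert np.array_equal(strassen(A, B), A @ B)
```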

Figuring out which calculations those are is where Coppersmith and Winograd come in. Their algorithm, MIT computer scientist Virginia Vassilevska Williams, a co-author of one of the new papers, told Quanta, “tells me what to multiply, what to add, and what entries go where. It’s just a plan for making C from A and B.”

This is where the laser method comes in. Coppersmith and Winograd’s algorithm is great, but it’s not perfect: it tends to produce redundant information, with blocks “overlapping” in places. Computer scientists use the laser to “cut out” these copies. As Le Gall put it, it “typically works very well” and “generally finds a good way to kill a subset of blocks to remove the overlap.”

But sometimes the laser cuts away too much, like an overzealous early-2000s beautician let loose on a pair of eyebrows. “A faster matrix multiplication algorithm is the result of being able to keep more blocks without overlap,” Le Gall told Quanta. Duan’s team, the group behind the new work that includes Zhou, built their method on exactly this idea.

Making the scales equal again
By changing how the laser method assigns weight to the blocks in a matrix, making them more likely to be kept than cut away, the team reduced the matrix multiplication exponent by the largest margin in more than a decade.

Don’t get too excited yet; they only lowered it from 2.373 to 2.372. But that’s not really the point: what really excites computer scientists is not the outcome but the way the team accomplished it. Le Gall told Quanta that after almost forty years of very small improvements to the same set of algorithms, “they found that, well, we can do better.”

We don’t yet know how much better things will get, but if you’re wondering what these ground-breaking results will be used for in real life, you might be let down. The algorithms built on the laser method are already “galactic algorithms”, so named because they are never used to solve any actual problem on Earth. And unless something hugely unexpected happens with quantum computing, the same will go for the new, improved versions.

Zhou said: “We never run the method. We look into it.”



Engineering

Testing the longest quantum network on existing fiber optics in Boston


Imagine a world where information can be transmitted securely across the globe, free from the prying eyes of hackers. That is the promise of the quantum internet, whose power comes from the realm of quantum mechanics, making it a groundbreaking advance with immense potential for the future of telecommunications. There have been obstacles to overcome, but there has also been notable progress, exemplified by a recent achievement from researchers at Harvard University.

Using fiber optics already running beneath the city of Boston, the team demonstrated the longest transmission of its kind between two nodes. The fiber path covered a total distance of 35 kilometers (22 miles), looping around the entire city. The two nodes connected by this closed path sat on different floors of the same building, so the fiber route was far from the shortest possible, which makes it all the more intriguing.

Quantum information has been transmitted over longer distances before, but the advances showcased in this experiment bring us closer to a practical quantum internet. The real breakthrough lies in the nodes, not merely in the use of existing optical fibers.

A typical fiber-optic network relies on signal repeaters, devices that combine optical receivers, electrical amplifiers, and optical transmitters. The signal is received, converted into electrical form, then turned back into light and sent on its way, extending the reach of the original signal. And in its present form, that approach is simply not suitable for a quantum internet.


The issue lies not in the technology but in the fundamental principles of physics: quantum information cannot be copied in that manner, a property that is also part of what makes it so secure. The Harvard system instead uses nodes that function as miniature quantum computers, each able to store, process, and transfer information. Although it consists of only two nodes, this is the most extensive quantum network ever achieved with nodes capable of such functionality.

“Demonstrating the ability to entangle quantum network nodes in a bustling urban environment is a significant milestone in enabling practical networking between quantum computers,” stated Professor Mikhail Lukin, the senior author.

At each node, a tiny quantum computer is built from a small piece of diamond containing a flaw in its atomic arrangement known as a silicon-vacancy center. At temperatures close to absolute zero, the silicon-vacancy center can catch, store, and entangle bits of quantum information, making it an ideal choice for a node.

“Given the existing entanglement between the light and the first node, it has the capability to transmit this entanglement to the second node,” explained Can Knaut, a graduate researcher in Lukin’s lab. “This phenomenon is known as photon-mediated entanglement.”

The study has been published in the prestigious journal Nature.


Astronomy

NASA’s flyby of Europa shows that “something” is moving under the ice


Europa’s surface bears marks showing that its icy crust is at the mercy of the water below. Most importantly, Juno’s recent visit revealed what might be plume activity. If that is real, it would let future missions sample the moon’s interior ocean without having to land.

Although it has been almost two years since Juno made its closest approach to Europa, its data is still being analyzed. And although Juno has been orbiting Jupiter since 2016, the five pictures it took on September 29, 2022, were the closest views of Europa since Galileo’s last visit in 2000.

Some might call that a shocking lack of interest in one of the Solar System’s most interesting worlds, but the long gap did at least offer a good way to see how things had changed over time.

Europa is the smoothest object in the Solar System, a smoothness owed to the ocean beneath its icy crust. Still, it’s not featureless: Juno saw deep depressions with steep walls, 20 to 50 kilometers (12 to 31 miles) wide, as well as fracture patterns thought to be signs of “true polar wander.”

In a statement, Dr. Candy Hansen of the Planetary Science Institute said, “True polar wander occurs if Europa’s icy shell is separated from its rocky interior. This puts a lot of stress on the shell, which causes it to break in predictable ways.”

True polar wander refers to the idea that the icy shell sitting on top of Europa’s ocean rotates slightly faster than the rest of the moon, dragged along by the moving water below. Those ocean currents are most likely driven by heat from Europa’s rocky interior, which is warmed by the gravitational kneading of Jupiter and its larger moons, effectively turning Europa into a giant stress ball.

That interplay between ocean and ice can stretch and compress parts of the shell, creating the cracks and ridges that have been observed since Voyager 2’s visit.

A group under Hansen’s direction has been examining images of Europa’s southern hemisphere. “This is the first time that these fracture patterns have been mapped in the southern hemisphere,” she said. “This suggests that true polar wander has a bigger effect on Europa’s surface geology than was thought before.”

Not all of the revisions to Europa’s map are down to ocean currents, though; it seems optical tricks can fool even NASA. “Crater Gwern is no longer there,” Hansen said. “JunoCam data showed that Gwern, which was once thought to be a 13-mile-wide impact crater and one of Europa’s few known impact craters, was actually a group of ridges that crossed each other to make an oval shadow.”

But Juno gives more than it takes away. The team is particularly interested in a feature they are calling the Platypus, named for its shape rather than for being an unlikely collection of parts. Ridges along its edge appear to be collapsing into it, which the scientists think may be because pockets of salt water have partially broken through the icy shell.


These pockets would make fascinating indirect targets of study for the Europa Clipper, but even more intriguing are the dark stains that cryovolcanic activity might have left behind.

“These features suggest the possibility of current surface activity and the existence of liquid water beneath the surface on Europa,” stated Heidi Becker from the Jet Propulsion Laboratory. There is evidence of such activity in the geysers of Enceladus, but there is still uncertainty regarding whether it is currently happening on Europa.

Such activity would make it possible to sample the interior ocean for signs of life simply by flying through a plume and collecting ice flakes, without the need for landing or drilling.

At some point in the past, features on Europa’s surface appear to have shifted by more than 70 degrees, for reasons that remain unknown; at present, however, polar wander produces only minor adjustments.


Bionics

A new and potentially fatal COVID-linked syndrome has appeared


A rare but deadly autoimmune disorder is flaring up in the north of England, and new research suggests the outbreak may be linked to COVID-19. The disease, anti-MDA5-positive dermatomyositis, was mostly found in Asian populations before the pandemic, but it is now becoming more common among white people in Yorkshire.

The illness is caused by antibodies that target MDA5 (melanoma differentiation-associated protein 5) and is linked to progressive interstitial lung disease, which scars lung tissue. Between 2020 and 2022, doctors in Yorkshire reported 60 cases of MDA5 autoimmunity, the highest number ever seen there; eight people died as a result.

When the researchers examined this sudden rise in cases, they found that it coincided with the main waves of COVID-19 infection during the pandemic’s peak years. That caught their attention right away, because MDA5 is an RNA receptor that plays a key role in detecting the SARS-CoV-2 virus.

The study authors write, “This is to report a rise in the rate of anti-MDA5 positivity testing in our region (Yorkshire) in the second year of the COVID-19 pandemic. This was noteworthy because this entity is not commonly found in the UK.” They say this is likely a sign of “a distinct form of MDA5+ disease in the COVID-19 era.” They have named it “MDA5-autoimmunity and Interstitial Pneumonitis Contemporaneous with COVID-19” (MIP-C).

To work out how this newly described syndrome arises, the researchers used tools that look for shared traits among people in the same medical cohort. In this way, they found that people with MDA5 autoimmunity also tended to have high levels of interleukin-15 (IL-15), a cytokine that drives inflammation.

The author of the study, Pradipta Ghosh, said in a statement that IL-15 “can push cells to the brink of exhaustion and create an immunologic phenotype that is very, very often seen as a hallmark of progressive interstitial lung disease, or fibrosis of the lung.”

Overall, only eight of the 60 patients had previously tested positive for COVID-19, which suggests that many of the others may have had asymptomatic infections they were never aware of. If so, even mild infections with no noticeable symptoms might be enough to trigger MDA5 autoimmunity.

The researchers say, “Given that the highest number of positive MDA5 tests happened after the highest number of COVID-19 cases in 2021 and at the same time as the highest number of vaccinations, these results suggest an immune reaction or autoimmunity against MDA5 after exposure to SARS-CoV-2 and/or vaccines.”

Ghosh says the phenomenon is probably not confined to Yorkshire: reports of MIP-C are now coming in from all over the world.

The study is published in the journal eBioMedicine.

