Without the protein molecules that support vital biological functions, including photosynthesis, enzymatic degradation, sight, and the immune system, life on Earth would not exist as we know it. And as with other aspects of nature, humankind is still discovering all the different kinds of proteins that are actually out there. Instead of scouring the planet’s most inhospitable regions in search of novel microorganisms that might harbor new types of organic molecules, Meta researchers created the ESM Metagenomic Atlas, a first-of-its-kind metagenomic database. The model behind it can predict protein structures up to 60 times faster than current protein-folding AIs.
The “Meta” in “metagenomics,” by the way, really is a coincidence. The study of “the structure and function of complete nucleotide sequences extracted and studied from all the organisms (usually microorganisms) in a bulk sample” is a relatively new but very real field of science. These techniques work a bit like gas chromatography, in that the goal is to determine what is present in a given sample; they are frequently used to characterize the bacterial communities residing on our skin or in the soil.
The NCBI, the European Bioinformatics Institute, and the Joint Genome Institute have all launched similar databases, which have already compiled billions of previously unknown protein sequences. According to a press release from the company, Meta is providing “a revolutionary protein-folding strategy that utilizes huge language models to generate the first comprehensive understanding of the structures of proteins in a metagenomics database at the scale of hundreds of millions of proteins.” The issue is that, even though advances in genomics have identified the sequences of a large number of novel proteins, simply knowing a sequence does not explain how it folds into a functional molecule, and working that out experimentally can take anywhere from a few months to a few years for each molecule. No one has time for that.
“The ESM Metagenomic Atlas will enable scientists to search and analyze the structures of metagenomic proteins at the scale of hundreds of millions of proteins,” the Meta research team wrote on TK. “This can help researchers to identify structures that have not been characterized before, search for distant evolutionary relationships, and discover new proteins that can be useful in medicine and other applications.”
Proteins are a bit like language: their constituent parts can be strung together in any way you like, but only certain orderings produce a functional molecule, a coherent molecular “sentence.” Although the analogy isn’t exact, Meta’s system significantly enhances our ability to parse the syntax and grammar of organic chemistry. A protein’s sequence describes how, according to the rules of physics, the molecule folds into a complicated three-dimensional shape, the scientists said, and those sequences contain statistical patterns that reveal details about the protein’s folded structure.
In particular, Meta’s Evolutionary Scale Modeling AI uses masked language modeling, a type of self-supervised learning, to treat gene sequences like a game of Mad Libs for organic chemistry. “We trained a language model using the sequences of millions of natural proteins,” the research team stated. “With this approach, the model must accurately fill in the blanks in a passage of text, such as ‘To _ or not to _, that is the _.’ Using millions of different proteins, we trained a language model to fill in the blanks in a protein sequence like ‘GL_KKE_AHY_G.’”
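The masked-modeling idea can be illustrated with a minimal toy sketch. To be clear, this is not Meta’s ESM-2 code: the sequences are made up, and the “model” here is just a unigram-frequency guesser standing in for a real neural network, but the objective — restore residues that have been blanked out of a sequence — is the same.

```python
from collections import Counter

# Toy masked-language-modeling setup: blanks ("_") in a protein sequence
# must be filled in. The "model" is just the most common residue in the
# training data -- a deliberately crude stand-in for a learned predictor.
train = ["GLVKGEMGHYG", "GAVKGELGHYG"]  # invented sequences, G-rich

def unigram_model(sequences):
    # Count every residue across the training set and return the most
    # frequent one as the guess for any masked position.
    counts = Counter(aa for seq in sequences for aa in seq)
    return counts.most_common(1)[0][0]

def fill_masks(seq, model_guess):
    # Replace each masked position "_" with the model's guess.
    return seq.replace("_", model_guess)

guess = unigram_model(train)
print(fill_masks("G_VKKE_AHY_G", guess))  # prints "GGVKKEGAHYGG"
```

A real masked language model conditions each guess on the surrounding context rather than on global frequencies, which is what lets it pick up the structural patterns the researchers describe.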
ESM-2, the resulting “protein language model,” has 15 billion parameters, making it the largest model of its kind to date. Running on a cluster of roughly 2,000 GPUs, this new structure prediction capability enabled the team to predict structures for the more than 600 million metagenomic proteins in the atlas in just two weeks. So much for months and years.
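A quick back-of-envelope check shows just how fast that is per GPU. Using the approximate figures reported above (roughly 600 million proteins, about 2,000 GPUs, two weeks; these are the article’s round numbers, not exact cluster specs):

```python
# Back-of-envelope throughput implied by the reported figures.
proteins = 600_000_000     # metagenomic proteins folded
gpus = 2_000               # approximate cluster size
seconds = 14 * 24 * 3600   # two weeks, in seconds

per_gpu_second = proteins / (gpus * seconds)
print(f"{per_gpu_second:.2f} proteins per GPU-second")      # about 0.25
print(f"{1 / per_gpu_second:.1f} GPU-seconds per protein")  # about 4.0
```

In other words, each GPU would be turning out a predicted structure every few seconds, compared with the months or years of lab work a single experimental structure can take.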
ChatGPT Will Soon “See, Hear, And Speak” With Its Latest AI Update
A major update to ChatGPT lets the chatbot respond to images and voice conversations. The AI will hear your questions, see the world, and respond.
OpenAI, the company behind ChatGPT and DALL-E, announced the “multimodal” update in a blog post on Monday, saying it will add voice and image features to ChatGPT Plus and Enterprise over the next two weeks.
The post said it would be available for other groups “soon after.” It was unclear when it would be added to free versions.
Part of this update will work much like Siri or Alexa: you can ask a question aloud and hear the answer.
Anyone who’s used ChatGPT knows its AI isn’t a sterile search engine. It can find patterns and solve complex problems creatively and conversationally.
According to OpenAI, these abilities could be expanded in ways like this: “Snap a picture of a landmark while traveling and have a live conversation about what’s interesting about it.” At home, take pictures of your fridge and pantry and ask for recipe suggestions to decide what to make for dinner. After dinner, help your child with a math problem by taking a photo, circling the part of the problem set they are stuck on, and having the chatbot share hints.
This development “opens doors to many creative and accessibility-focused applications,” said OpenAI. They added that it will pose “new risks, such as the potential for malicious actors to impersonate public figures or commit fraud.”
The update currently only allows voice chat with AI trained with specific voice actors. It seems you can’t ask, “Read this IFLScience article in the voice of Stephen Hawking.”
That said, current AI technology is certainly capable of it.
Track People and Read Through Walls with Wi-Fi Signals
Recent research has shown that your Wi-Fi router’s signals can be used as a sneaky surveillance system to track people and read text through walls.
Recently, Carnegie Mellon University computer scientists developed a deep neural network that digitally maps human bodies using Wi-Fi signals.
It works a bit like radar: sensors pick up Wi-Fi radio waves reflected around the room by a person walking through it, and a machine-learning algorithm processes that data into a surprisingly accurate image of the moving human body.
“The results of the study reveal that our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches, by utilizing WiFi signals as the only input,” the researchers wrote in a December 2022 pre-print paper.
Despite how intrusive it sounds, the team claims this experimental technology is actually “privacy-preserving” compared with a camera: the algorithm can detect only rough body positions, not facial features or appearance, so it could provide a way to monitor people anonymously.
They write, “This technology may be scaled to monitor the well-being of elder people or just identify suspicious behaviors at home.”
Recent research at the University of California Santa Barbara showed another way Wi-Fi signals can be used to spy through walls. They used similar technology to detect Wi-Fi signals through a building wall and reveal 3D alphabet letters.
Imaging still objects with Wi-Fi is much harder, since there is no motion to track. “We then took a completely different approach to this challenging problem by tracing the edges of the objects,” said UC Santa Barbara electrical and computer engineering professor Yasamin Mostofi.
A Futurist Predicts Human Immortality by 2030
Ray Kurzweil, a computer scientist and futurist, has set specific timelines for humanity’s immortality and AI’s singularity. If his predictions are correct, you can live forever, provided you survive the next seven years.
Back in 1990, before any of it was obvious, Kurzweil correctly predicted that a computer would beat the human world chess champion by 2000, as well as the rise of portable computers and smartphones, the shift to wireless technology, and the explosion of the internet.
He even went back and graded his 20-year-old predictions in 2010. He claims that of the 147 predictions he made in 1990 for the years leading up to 2010, 115 were “entirely correct,” 12 were essentially correct, and only 3 were entirely wrong.
Of course, he has miscalculated too, predicting, for instance, self-driving cars by 2009.
Though bold (and probably wrong), his immortality claims shouldn’t be dismissed out of hand: Kurzweil has been making predictions like this for years and has stuck to his original dates.
“2029 is the consistent date I have predicted for when an AI will pass a valid Turing test and therefore achieve human levels of intelligence,” Kurzweil said in 2017. “I have set the date 2045 for the ‘Singularity’ which is when we will multiply our effective intelligence a billion fold by merging with the intelligence we have created.”
Kurzweil predicts we will “advance human life expectancy” by “more than a year every year” by 2030. Part of the progress toward the singularity 15 years later will involve nanobots in our bloodstream repairing our bodies and connecting our brains to the cloud. When this happens, we will be able to send videos (or emails, if you want to dwell on the duller aspects of being a freaking cyborg) from our brains and back up our memories.
Kurzweil believes the singularity will make humans “godlike” rather than a threat.
“We’ll be funnier. Our sexiness will increase. We’ll express love better,” he said in 2015.
“If I want to access 10,000 computers for two seconds, I can do that wirelessly,” he said, “and my cloud computing power multiplies ten thousandfold. That’s how we’ll extend our neocortex.”
“Say I’m walking along and Larry Page comes toward me, and I need a clever response, but the 300 million modules in my neocortex won’t cut it. I need a billion for two seconds. Just as I can multiply my smartphone’s intelligence thousands-fold today, I’ll be able to access that in the cloud.”
Nanobots can already deliver drug payloads into brain tumors, and brain-computer interfaces now let paralyzed patients spell out sentences and monkeys play Pong. But without dramatic advances in the next few years, it’s unlikely we’ll get from there to cloud-connected brains within seven years.
For now, we remain far from that future, with human-AI interaction still happening the old-fashioned way. Only time will tell how accurate Kurzweil is; fortunately, by his own predictions, there will be plenty of it.