Engineering
New concrete that doesn’t need cement could cut carbon emissions in the construction industry
Concrete may be one of the most common building materials, but it is far from the most environmentally friendly. That’s why scientists and engineers have been searching for greener alternatives, and they may have found one: concrete that doesn’t need cement.
Cement, a crucial ingredient in concrete, is a problem in itself: its production ranks as the third-largest source of human-caused carbon emissions globally. In recent years, however, a multitude of techniques for producing more environmentally friendly concrete have surfaced. One proposed method uses industrial waste and steel slag as CO2-reducing additives in the concrete mixture. Another uses spent coffee grounds to strengthen the concrete while reducing the amount of sand required.
Now one company has devised a technique for producing cement-free concrete that is ready for commercial use.
The concrete is claimed to be carbon-negative, avoiding approximately 1 metric ton of carbon emissions for every metric ton used. If that claim holds up, the cement-free binder would be a noteworthy substitute for Portland cement. According to BGR, the new concrete also meets all the industry standards of traditional cement concrete, so there is no compromise on strength or durability.
While still in its early stages, the technology looks encouraging. C-Crete Technologies, a materials science company that holds the patents for the novel concrete, has already used approximately 140 tons of the new cast-in-place (pourable) material in recent construction projects.
In September 2023, the US Department of Energy awarded the company an initial grant of almost $1 million, followed shortly by another $2 million, to advance its technology. It has also picked up numerous awards that are helping it scale up its operations.
Widespread adoption of cement-free concrete in future construction projects could significantly change the industry’s environmental footprint. C-Crete appears to be one of the few companies exploring these alternatives at scale so far, but others will likely follow in the near future.
Artificial Intelligence
Google DeepMind Shows Off A Robot That Plays Table Tennis At A Fun “Solidly Amateur” Level
Have you ever wanted to play table tennis but had no one to play with? We have good news for you: Google DeepMind has just shown off a robot that could give you a run for your money in a game. But don’t worry about being thrashed; the engineers say their robot plays at a “solidly amateur” level.
From scary faces to robo-snails that work together to the now happily retired Atlas, it seems we’re always just one step away from another amazing robotics breakthrough. Yet there is still plenty that humans can do that robots haven’t come close to matching.
Engineers are still striving to build machines that can match humans in speed and performance on physical tasks. With their table-tennis-playing robot, a team at DeepMind has taken a step toward that goal.
In their new preprint, which has not yet been peer reviewed, the team notes that competitive matches are often incredibly dynamic, involving complex movements, rapid hand-eye coordination, and high-level strategies that adapt to the opponent’s strengths and weaknesses. Pure strategy games such as chess, which robots already excel at (though with… mixed results), lack these features; games like table tennis have them in abundance.
Human players spend years of practice getting good at the game. The DeepMind team wanted to build a robot that could genuinely compete with a human opponent and make the game fun for both sides, and they say theirs is the first to achieve those goals.
They developed a library of “low-level skills” and a “high-level controller” that picks the best skill for each situation. As the team explained in the announcement of their work, the skill library contains a range of table tennis techniques, such as forehand and backhand serves. The controller combines descriptions of these skills with information about how the game is going and its opponent’s ability to choose the best skill it can physically execute.
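DeepMind hasn’t released the system’s code, but the overall shape of that architecture can be sketched in a few lines of Python. Everything below is hypothetical and for illustration only: the skill names, the GameState fields, and the scoring heuristic are made up; only the general structure, a library of low-level skills plus a controller that selects among them using game state and opponent information, comes from the announcement.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical game state; the real system tracks far richer information
# (ball trajectory, spin, opponent statistics, and so on).
@dataclass
class GameState:
    ball_to_backhand: bool   # is the incoming ball headed to our backhand side?
    opponent_skill: float    # 0.0 (beginner) .. 1.0 (advanced)

# The "skill library": each low-level skill is a policy the robot can execute.
# Here they are stubs that simply name the stroke they would perform.
def forehand_drive(state: GameState) -> str:
    return "forehand drive"

def backhand_block(state: GameState) -> str:
    return "backhand block"

SKILLS: Dict[str, Callable[[GameState], str]] = {
    "forehand_drive": forehand_drive,
    "backhand_block": backhand_block,
}

# Hypothetical skill descriptors: metadata the high-level controller uses
# to judge which skill fits the current situation.
DESCRIPTORS = {
    "forehand_drive": {"side": "forehand", "aggression": 0.8},
    "backhand_block": {"side": "backhand", "aggression": 0.2},
}

def high_level_controller(state: GameState) -> str:
    """Pick and execute the skill whose descriptor best matches the game state."""
    def score(name: str) -> float:
        desc = DESCRIPTORS[name]
        side_match = 1.0 if (desc["side"] == "backhand") == state.ball_to_backhand else 0.0
        # Toy heuristic: play more aggressively against weaker opponents.
        aggression_fit = 1.0 - abs(desc["aggression"] - (1.0 - state.opponent_skill))
        return side_match + aggression_fit

    best = max(SKILLS, key=score)
    return SKILLS[best](state)

print(high_level_controller(GameState(ball_to_backhand=True, opponent_skill=0.3)))
# -> "backhand block"
```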
The robot started out with a small amount of data on human play, was then trained in simulation, picking up new skills through reinforcement learning, and continued to learn and adapt by playing against people. Watch the video below to see the results for yourself.
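Reinforcement learning here simply means the robot improved by trial and error: act, observe a reward, update. The sketch below is not DeepMind’s setup; it is a deliberately tiny, hypothetical Q-learning example with a made-up “rally” environment and two stroke choices, just to show the shape of that loop.

```python
import random

ACTIONS = ["forehand", "backhand"]

def play_point(state: int, action: str) -> float:
    """Toy environment: reward 1.0 when the stroke matches what the state demands."""
    target = "forehand" if state == 0 else "backhand"
    return 1.0 if action == target else 0.0

# Tabular Q-values: Q[state][action] estimates the expected reward.
Q = {s: {a: 0.0 for a in ACTIONS} for s in (0, 1)}
alpha, epsilon = 0.1, 0.2  # learning rate, exploration rate

for _ in range(2000):
    state = random.choice((0, 1))
    if random.random() < epsilon:                        # explore sometimes
        action = random.choice(ACTIONS)
    else:                                                # otherwise exploit what we know
        action = max(ACTIONS, key=lambda a: Q[state][a])
    reward = play_point(state, action)
    # One-step update: nudge the estimate toward the observed reward.
    Q[state][action] += alpha * (reward - Q[state][action])

print(Q)  # Q-values approach 1.0 for the correct stroke in each state
```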
“It’s really cool to see the robot play against players of all skill levels and styles. Our goal was for the robot to be at an intermediate level when we started, and it really did that; all of our hard work paid off,” said Barney J. Reed, a professional table tennis coach who helped with the project. “I think the robot was even better than I thought it would be.”
The team held matches in which the robot took on 29 people whose skills ranged from beginner to advanced+. The games followed standard rules, with one important exception: the robot could not physically serve the ball.
The robot won every game against beginners but lost every game against advanced and advanced+ players. Against intermediate opponents it won 55 percent of the time, which led the team to conclude it had reached an intermediate level of human play.
Importantly, all of the opponents, regardless of their ability, found the matches “fun” and “engaging”; they even enjoyed exploiting the robot’s weaknesses. The more skilled players suggested that a system like this could make a better training partner than a ball thrower.
A robot team probably won’t appear at the Olympics any time soon, but as a training tool this could prove genuinely useful. Who knows what the future holds?
The preprint is available on arXiv.
Engineering
The UK’s Royal Mint is using a new method to recover gold from electronic waste
Mountains of gold lie hidden in junkyards full of old smartphones, dead computers, and broken laptops. A new project in the UK aims to find and use these hidden riches.
The Royal Mint, which makes British coins for the government, has partnered with the Canadian clean tech startup Excir to deploy a “world-first technology” that can safely recover gold and other precious metals from electronic waste (e-waste) and recycle them.
Electronic devices contain circuit boards with small amounts of gold in their connections, since gold is an excellent conductor. The boards also hold other valuable metals such as silver, copper, lead, nickel, and aluminum.
Recovering those metals has traditionally been difficult, but Excir’s new technology can quickly and safely reclaim 99 percent of the gold trapped in electronic waste.
The circuit boards are prepared using a “unique process,” after which a patented chemical formula quickly and selectively extracts the gold. The gold-rich liquid is then processed into pure gold that can be melted down and cast into bars. The method could also recover palladium, silver, and copper.
“Our entrepreneurial spirit has helped the Royal Mint do well for over 1,100 years, and the Excir technology helps us reach our goal of being a leader in sustainable precious metals. The chemistry is completely new and can get precious metals back from electronics in seconds. It has a lot of potential for The Royal Mint and the circular economy, as it helps to reuse our planet’s valuable resources and creates new jobs in the UK,” said Sean Millard, Chief Growth Officer at The Royal Mint.
At the moment, only about 22 percent of electronic waste is collected, stored properly, and recycled, but technology like this could help shrink the mountain of old electronics.
Every year, the world produces about 62 million metric tons of electronic waste, enough to fill more than 1.5 million 40-tonne trucks. That figure is expected to rise by another 32 percent by 2030 as more people buy electronics, making e-waste the fastest-growing source of solid waste in the world.
The World Health Organization classifies e-waste as hazardous waste because it contains harmful materials and can leach toxic chemicals when handled improperly. Old electronics can release lead and mercury into the environment, for example, substances that can impair the development of the central nervous system in the womb and during infancy, childhood, and adolescence. E-waste also doesn’t biodegrade, so it accumulates in nature.
Beyond the environmental harm, it is also an enormous waste of resources: dumps and scrap yards may hold between $57 billion and $62 billion worth of precious metals.
Artificial Intelligence
Is it possible to legally make AI chatbots tell the truth?
A lot of people have tried out chatbots like ChatGPT in recent months. Useful as they can be, there is no shortage of examples of them confidently serving up wrong information. Now a group of researchers from the University of Oxford is asking whether there is a legal route to making these chatbots tell us the truth.
The rise of large language models
Artificial intelligence (AI) has soared to new heights in the past few years, and one branch of it has drawn more attention than any other, at least from people who aren’t machine learning experts: large language models (LLMs), which use generative AI to produce answers to almost any question that sound eerily human.
Models like those behind ChatGPT and Google’s Gemini are trained on huge amounts of data, which itself raises a host of privacy and intellectual property issues. That training is what lets them understand natural language questions and generate coherent, relevant answers. Unlike a search engine, where you have to learn query syntax, in theory you simply ask a question the way you would ask a person.
Their capabilities are undoubtedly impressive, and they sound supremely confident in their answers. One small problem: these chatbots sound just as confident when they are completely wrong, which might be fine if we humans could only remember not to believe everything they say.
“While problems arising from our tendency to anthropomorphize machines are well established, our vulnerability to treating LLMs as human-like truth tellers is uniquely worrying,” the authors of the new paper write, something anyone who has ever argued with Alexa or Siri will know all too well.
“LLMs aren’t meant to tell the truth in a fundamental way.”
It’s easy to type a question into ChatGPT and assume it is “thinking” about the answer the way a person would. It looks that way, but it is not how these models actually work.
Don’t trust everything you read
The authors note that LLMs “are text-generation engines designed to guess which string of words will come next in a piece of text.” Truthfulness is only one of the measures by which the models are judged during development, and in trying to give the most “helpful” answer possible, they can all too often oversimplify, reproduce biases, or simply make things up.
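That “guess the next word” framing is easy to demonstrate. Here is a deliberately tiny, hypothetical sketch in Python: a bigram model built from a three-sentence corpus, nothing remotely like a real LLM, but enough to show that such an engine optimizes for plausible continuations rather than true ones.

```python
import random
from collections import defaultdict

# A miniature next-word predictor: count which word follows which in a
# tiny corpus, then generate by sampling a likely continuation. Real LLMs
# use neural networks over tokens, but the objective is similar in
# spirit: predict a plausible next token, not a true one.
corpus = (
    "the capital of france is paris . "
    "the capital of france is lyon . "   # a plausible-sounding falsehood
    "the capital of spain is madrid . "
).split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, n: int = 6) -> str:
    words = [start]
    for _ in range(n):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))  # sample a continuation
    return " ".join(words)

print(generate("the"))
# Some of the time this emits "the capital of france is lyon": fluent,
# confident, and wrong -- the failure mode the authors are worried about.
```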
This is not the first time such claims have been made; one paper went so far as to call the models “bullshitters.” In 2023, Professor Robin Emsley, editor of the journal Schizophrenia, wrote about his experience with ChatGPT: “What I experienced were fabrications and falsifications,” he said. The chatbot produced citations for academic papers that did not exist, and for others that had nothing to do with the question. Others have reported the same experience.
Crucially, LLMs do well on questions that have a clear, factual answer appearing frequently in their training data; they are only as good as the data they are trained on. And unless you are prepared to carefully fact-check any answer an LLM gives you, it can be hard to gauge its accuracy, since many models do not cite sources or give any indication of confidence.
“Unlike human speakers, LLMs do not have any internal notions of expertise or confidence. Instead, they are always ‘doing their best’ to be helpful and convincingly answer the question,” the Oxford team writes.
The researchers are especially worried about what they call “careless speech” and the harm that LLMs sharing such responses in real-world conversations could cause. That concern led them to ask whether LLM providers could be legally required to make their models tell the truth.
What did the new study conclude?
The authors examined current European Union (EU) law and found few clear situations in which an organization or person is obliged to tell the truth. Those that do exist apply only to specific institutions or sectors, and rarely to the private sector. Because LLMs are a fairly new technology, most existing rules were not written with them in mind.
The authors therefore propose a new approach: “making it a legal duty to cut down on careless speech among providers of both narrow- and general-purpose LLMs.”
A natural objection is, “Who decides what is true?” The authors’ answer is that the aim is not to force LLMs down one particular path, but to require “plurality and representativeness of sources.” Fundamentally, it comes down to how much weight “helpfulness” should be given relative to “truthfulness.” That balance is not easy to strike, but it may be possible.
To be clear, there are no easy answers to these questions (and no, we didn’t ask ChatGPT). But as the technology develops, developers will have to grapple with them. In the meantime, when working with an LLM, it may be worth remembering this sobering observation from the authors: “They are designed to take part in natural language conversations with people and give answers that are convincing and feel helpful, no matter what the truth is.”
The study is published in the journal Royal Society Open Science.