Most people think of NVIDIA as just a maker of computer graphics cards. However, NVIDIA also works on artificial intelligence (AI) software, robotics, and self-driving cars, and soon the company might be synonymous with public safety as well, all because of its video analytics platform, NVIDIA Metropolis.
Yesterday, NVIDIA proposed the creation of an “AI City” using NVIDIA Metropolis. The premise is quite simple: by 2020, countless video cameras will employ NVIDIA’s Jetson TX2 module. The cameras will record video and analyze it with AI capable of “deep learning,” a machine learning technique that uses layered neural networks to let the AI learn from data on its own. The cameras will then send their data to data centers and cloud networks built on NVIDIA Tesla GPUs. Ideally, this data will serve multiple applications, such as helping search for missing children or pets, feeding real-time alternate routes to GPS systems, and reporting traffic accidents.
According to NVIDIA, cameras without the Jetson TX2 Module can still participate in NVIDIA Metropolis. While the proposed AI cameras can analyze videos on the spot, thus letting them save on bandwidth by only transmitting metadata, regular cameras can still send videos to NVIDIA Jetson or Tesla-powered devices for deep analysis. NVIDIA claims that its GPU-powered system is twenty times more efficient than similar CPU-powered systems.
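To make the bandwidth trade-off concrete, here is a minimal Python sketch of that split. Every name in it (the classes, the stand-in inference function, the uplink call) is hypothetical, not an NVIDIA API: a Jetson-style camera analyzes frames on-device and transmits only metadata, while a basic camera streams raw frames for the data center to analyze.

```python
# Illustrative sketch only: none of these names are NVIDIA APIs.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "vehicle", "person"
    confidence: float
    camera_id: str

def run_local_model(frame: bytes, camera_id: str) -> list[Detection]:
    # Stand-in for on-device deep-learning inference on the Jetson module.
    return [Detection("vehicle", 0.91, camera_id)]

def send_to_datacenter(payload) -> None:
    # Stand-in for the uplink to a Tesla-powered data center.
    print(f"uplink: {payload!r}")

def edge_camera_loop(frame: bytes, camera_id: str) -> None:
    """Jetson-style camera: analyze on-device, transmit only metadata."""
    detections = run_local_model(frame, camera_id)
    send_to_datacenter(detections)   # a few bytes instead of a video stream

def basic_camera_loop(frame: bytes, camera_id: str) -> None:
    """Non-Jetson camera: ship the raw frame; the data center does the work."""
    send_to_datacenter(frame)        # full-bandwidth video uplink

edge_camera_loop(frame=b"<jpeg bytes>", camera_id="cam-042")
basic_camera_loop(frame=b"<jpeg bytes>", camera_id="cam-043")
```

The design choice the sketch highlights is where inference happens: on the edge device, only the detections travel over the network; otherwise the whole video stream does.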
NVIDIA has already partnered with numerous companies to make NVIDIA Metropolis a reality. These companies include:
- Aeryon (makes unmanned camera drones)
- Aqueti (makes the Mantis Camera, a 100 megapixel camera that has fifty times the resolution of an HD camera)
- Avigilon (develops easy-to-use remote viewing software and HD security cameras)
- BriefCam (creates a video synopsis technology that reviews videos and can index events by time)
- Dahua (makes security cameras that can detect and recognize people and cars)
- Hikvision (makes security cameras that can detect and recognize people and cars)
- Milestone (develops GPU-powered video management software)
- MotionDSP (develops programs that analyze videos for forensic evidence)
- Motionloft (creates real-time person- and vehicle-tracking software)
- Netradyne (makes vision-based hardware that aids driver performance)
- Robotic Assistance Devices (develops autonomous robots that aid in public safety and monitoring)
- SenseTime (creates face, object, and attribute recognition technology)
- VIMOC (creates software that captures and processes sensory data)
The NVIDIA Metropolis system has the potential to improve public safety. Only time will tell if people accept it.
ChatGPT Will Soon “See, Hear, And Speak” With Its Latest AI Update
A major update to ChatGPT lets the chatbot respond to images and voice conversations. The AI will hear your questions, see the world, and respond.
OpenAI, the company behind ChatGPT and DALL-E, announced the “multimodal” update in a blog post on Monday, saying it will roll out voice and image features to ChatGPT Plus and Enterprise over the next two weeks.
The post said it would be available for other groups “soon after.” It was unclear when it would be added to free versions.
Part of the update works much like Siri or Alexa: you ask a question out loud and the chatbot speaks the answer back.
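For the curious, here is a rough sketch of what such a voice loop looks like when stitched together from OpenAI’s public Python SDK. This is only an approximation under stated assumptions: the ChatGPT app’s internals aren’t public, and the model and voice names below are placeholders.

```python
# Approximate speech-to-text -> chat -> text-to-speech loop.
# Model and voice names are assumptions, not ChatGPT's actual internals.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Transcribe the spoken question (Whisper).
with open("question.wav", "rb") as audio:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

# 2. Answer the transcribed question.
reply = client.chat.completions.create(
    model="gpt-4",  # assumed model choice
    messages=[{"role": "user", "content": transcript.text}],
)
answer = reply.choices[0].message.content

# 3. Synthesize the reply as audio.
speech = client.audio.speech.create(model="tts-1", voice="alloy", input=answer)
speech.stream_to_file("answer.mp3")
```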
Anyone who’s used ChatGPT knows its AI isn’t a sterile search engine. It can find patterns and solve complex problems creatively and conversationally.
OpenAI offered examples of how this could expand those abilities: “Snap a picture of a landmark while traveling and have a live conversation about what’s interesting about it.” At home, you can photograph the contents of your fridge and pantry and ask for recipe ideas. After dinner, you can help your child with a math problem by taking a photo of the worksheet, circling the problem set, and having ChatGPT share hints.
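As an illustration of the image half, this is roughly what a multimodal request looks like through OpenAI’s API; it’s a sketch, not the ChatGPT app itself, and the model name and image URL are placeholders.

```python
# Send an image and a text question in a single message.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # assumed vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What's interesting about this landmark?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/landmark.jpg"}},  # placeholder
        ],
    }],
)
print(response.choices[0].message.content)
```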
This development “opens doors to many creative and accessibility-focused applications,” said OpenAI. They added that it will pose “new risks, such as the potential for malicious actors to impersonate public figures or commit fraud.”
For now, voice chat is limited to a handful of AI voices trained with specific voice actors. It seems you can’t ask, “Read this IFLScience article in the voice of Stephen Hawking,” even though current AI technology could, in principle, achieve that.
Track People and Read Through Walls with Wi-Fi Signals
Recent research has shown that your Wi-Fi router’s signals can be used as a sneaky surveillance system to track people and read text through walls.
Recently, Carnegie Mellon University computer scientists developed a deep neural network that digitally maps human bodies using Wi-Fi signals.
It works a little like radar: an array of sensors picks up the Wi-Fi radio waves that reflect off a person walking around the room, and a machine learning algorithm processes that data into a surprisingly accurate image of the moving human body.
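As a toy sketch of that pipeline (not the CMU architecture, whose details are in the paper), a window of Wi-Fi channel measurements can be treated as an image-like tensor and fed through a small convolutional network that outputs a coarse body heatmap. All shapes and layer sizes below are invented for illustration.

```python
# Toy WiFi-to-body-heatmap network; every dimension here is made up.
import torch
import torch.nn as nn

class WiFiPoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            # input: (batch, antenna_pairs=9, subcarriers=30, time=32)
            nn.Conv2d(9, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 1, kernel_size=1),  # collapse to one spatial map
            nn.Sigmoid(),                     # per-cell "body present" score
        )

    def forward(self, csi: torch.Tensor) -> torch.Tensor:
        return self.net(csi)

csi_window = torch.randn(1, 9, 30, 32)  # one window of simulated measurements
heatmap = WiFiPoseNet()(csi_window)
print(heatmap.shape)                     # torch.Size([1, 1, 30, 32])
```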
“The results of the study reveal that our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches, by utilizing WiFi signals as the only input,” the researchers wrote in a December 2022 pre-print paper.
Despite the obvious concerns about intrusion, the team claims this experimental technology is actually “privacy-preserving” compared to a camera: the algorithm can only detect rough body positions, not facial features or appearance, so it could provide a new way to monitor people anonymously.
They write, “This technology may be scaled to monitor the well-being of elder people or just identify suspicious behaviors at home.”
Recent research at the University of California, Santa Barbara showed another way Wi-Fi signals can be used to spy through walls. Researchers there used similar technology to read Wi-Fi signals through a building wall and reveal the shapes of 3D alphabet letters placed behind it.
Imaging still objects with Wi-Fi is difficult precisely because nothing is moving. “We then took a completely different approach to this challenging problem by tracing the edges of the objects,” said UC Santa Barbara electrical and computer engineering professor Yasamin Mostofi.
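To give a flavor of the edge idea (a toy illustration, not the UC Santa Barbara method), a gradient filter over a simulated through-wall Wi-Fi power map picks out object outlines, which is roughly why edges make motionless shapes recoverable:

```python
# Toy edge tracing on a simulated 2-D Wi-Fi power map.
import numpy as np

rng = np.random.default_rng(0)
power_map = np.zeros((40, 40))
power_map[10:30, 15:25] = 1.0                      # a block-shaped occluder
power_map += 0.05 * rng.standard_normal((40, 40))  # measurement noise

gy, gx = np.gradient(power_map)            # spatial gradients
edges = np.hypot(gx, gy) > 0.3             # strong gradients = object outline
print(edges.astype(int))
```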
A Futurist Predicts Human Immortality by 2030
Ray Kurzweil, a computer scientist and futurist, has set specific timelines for humanity’s immortality and AI’s singularity. If his predictions are correct, you could live forever, provided you survive the next seven years.
Kurzweil has a track record: in 1990 he correctly predicted that a computer would beat the human world chess champion by 2000, and he foresaw the rise of portable computers and smartphones, the shift to wireless technology, and the explosion of the Internet before any of it was obvious.
He even graded his own 20-year-old predictions in 2010. He claims that of the 147 predictions he made in 1990 for the years leading up to 2010, 115 were “entirely correct,” 12 were essentially correct, and only 3 were entirely wrong.
Of course, he sometimes misses: he predicted self-driving cars would arrive by 2009.
Though bold (and probably wrong), the immortality claim shouldn’t be dismissed out of hand. Kurzweil has made predictions like this for years and has stuck to his initial dates.
“2029 is the consistent date I have predicted for when an AI will pass a valid Turing test and therefore achieve human levels of intelligence,” Kurzweil said in 2017. “I have set the date 2045 for the ‘Singularity’ which is when we will multiply our effective intelligence a billion fold by merging with the intelligence we have created.”
Kurzweil predicts we will “advance human life expectancy” by “more than a year every year” by 2030. Part of the progress toward the singularity 15 years later will involve nanobots in our bloodstream repairing our bodies and connecting our brains to the cloud. When this happens, we will be able to send videos (or emails, if you want to dwell on the duller aspects of being a freaking cyborg) straight from our brains and back up our memories.
Kurzweil believes the singularity will make humans “godlike” rather than a threat.
“We’ll be funnier. Our sexiness will increase. We’ll express love better,” he said in 2015.
“If I want to access 10,000 computers for two seconds, I can do that wirelessly,” he said, “and my cloud computing power multiplies ten thousandfold. We’ll use our neocortex.”
“Say I’m walking along and Larry Page approaches, and I need a clever response, but the 300 million modules in my neocortex won’t cut it. I need a billion for two seconds. Just like I can multiply my smartphone’s intelligence thousands-fold today, I’ll be able to access that in the cloud.”
Some of the pieces exist in embryonic form: nanobots can already deliver drug payloads into brain tumors, brain-computer interfaces let paralyzed patients spell out sentences, and monkeys can play Pong with their minds. But without dramatic advances, it’s unlikely any of that adds up to cloud-connected brains within seven years.
Kurzweil himself says that future remains distant, with most human-AI interaction still happening the old-fashioned way. Only time will tell how accurate he is. Fortunately, his predictions promise plenty of it.