
Virtual assistants on our phones have become increasingly common, though widespread use still hasn't taken off the way companies hoped. There's something that feels awkward about giving voice commands to your phone, especially in public, and there are plenty of situations where I could have used a virtual assistant but simply googled the information instead. That said, virtual assistants are getting smarter and will likely become a staple of our day-to-day lives. For a long time the big players were Apple's Siri and Google Now, but Microsoft's Cortana has recently come onto the scene. Now that Cortana is available on Android phones, does it surpass the utility of Google Now?

Cortana

Cortana on Android has a variety of features, but is it worth using over Google Now? We'll take a look at some of the pros and cons of the virtual assistant to help you make an informed decision.

My Day

The first option when you open Cortana is called “My Day”. This section of the application provides information it thinks will be relevant to your current day, and it learns over time what information is most useful for you and adjusts its display accordingly. It includes appointments, weather, news headlines, places to eat near your location, and more.

Reminders

Cortana makes it pretty easy to set up reminders: open the app and either enter a time to be reminded or have the app remind you when you arrive at a certain location. Location-based reminders are a particularly cool feature, and one I'm looking forward to testing when I give Cortana another go.
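Under the hood, a location-based reminder is essentially a geofence check: compare the phone's current coordinates against a saved trigger point and fire when you get close enough. Here's a toy Python sketch of that idea (the coordinates, radius, and reminder text are made up for illustration; this is not how Cortana actually implements the feature):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Hypothetical reminder: fire when the user comes within 200 m of a saved spot.
REMINDER = {"text": "Buy milk", "lat": 47.6205, "lon": -122.3493, "radius_km": 0.2}

def check_reminder(current_lat, current_lon, reminder=REMINDER):
    # Trigger the reminder once the current position falls inside the geofence.
    if haversine_km(current_lat, current_lon, reminder["lat"], reminder["lon"]) <= reminder["radius_km"]:
        print(f"Reminder: {reminder['text']}")

check_reminder(47.6206, -122.3490)  # inside the geofence -> prints the reminder
```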

Meetings

When you enter the Meetings section, you can give Cortana permission to access your calendar, and the app then turns into an agenda-style view of it. Tapping any appointment opens it in Google Calendar, and there are options to connect other calendar services such as Microsoft Exchange.

Cortana also functions similarly to Siri in that it can answer basic questions and do web searches for you based on voice commands.

Google Now

Google Now seeks to preempt the information you need, displaying what it thinks you need to know before you even ask for it. It works via a series of cards that surface relevant data based on what you're currently doing. It's less a voice-activated assistant and more a board of relevant information.

Here's a rundown of what Google Now can do:

Tell you the weather

Tell you how to get home by car

Tell you how to get home on foot

Tell you how to get home by bus or train

Remind you about calendar events

Notify you of emailed dispatch notices, flight times, and similar updates

Give you updates on your sports teams

Give you stock updates

Offer information based on your web searches

Let you search the web by voice or typing

Launch contextual assistance based on what's on screen (Android 6.0)

Identify music

Play music

Which one’s better?

The two assistants are difficult to compare because they serve fairly different purposes. If you're looking for a voice-activated personal assistant that integrates seamlessly with your Windows PC, Cortana might be the way to go. If you're looking for smart contextual information personalized to your current situation, Google Now might pull ahead.

Personally, the more I use Cortana, the more I like it. However, that comes with a caveat: Cortana works really well when it actually works, and at other times it's hit or miss. As Microsoft continues to make Cortana smarter, I think it will pull even further ahead of Google Now.



ChatGPT Will Soon “See, Hear, And Speak” With Its Latest AI Update


A major update to ChatGPT lets the chatbot respond to images and voice conversations. The AI will hear your questions, see the world, and respond.

OpenAI, the company behind ChatGPT and DALL-E, announced the "multimodal" update in a blog post on Monday, saying voice and image features will roll out to ChatGPT Plus and Enterprise over the next two weeks.

The post said the features would be available to other users "soon after," though it was unclear when they would come to the free version.

Part of the update works much like Siri or Alexa: you can ask a question out loud and get a spoken answer.

Anyone who’s used ChatGPT knows its AI isn’t a sterile search engine. It can find patterns and solve complex problems creatively and conversationally.


According to OpenAI, the update could expand these abilities further: "Snap a picture of a landmark while traveling and have a live conversation about what's interesting about it." Trying to decide what to make for dinner? Take pictures of your fridge and pantry and ask for a recipe. After dinner, photograph your child's math homework, circle the problem they're stuck on, and have ChatGPT share hints.
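For developers, OpenAI already exposes similar image understanding through its API. The sketch below only illustrates that route and is not taken from the ChatGPT announcement: the model name, image URL, and prompt are assumptions, and you'd need your own API key available in the environment.

```python
# Hedged sketch: asking a vision-capable GPT model about a photo via OpenAI's API.
# The model name, image URL, and prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumed vision-capable model; substitute whatever is current
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What landmark is this, and what's interesting about it?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/landmark.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```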

This development “opens doors to many creative and accessibility-focused applications,” said OpenAI. They added that it will pose “new risks, such as the potential for malicious actors to impersonate public figures or commit fraud.”

For now, voice chat only works with a set of AI voices trained with specific voice actors, so it seems you can't ask it to "read this IFLScience article in the voice of Stephen Hawking."

Other current AI voice technology, however, can already achieve that.


Track People and Read Through Walls with Wi-Fi Signals


Recent research has shown that your Wi-Fi router’s signals can be used as a sneaky surveillance system to track people and read text through walls.

Carnegie Mellon University computer scientists recently developed a deep neural network that digitally maps human bodies using Wi-Fi signals.

It works a little like radar: sensors pick up Wi-Fi radio waves reflected around the room by a person moving through it, and a machine learning algorithm processes that data into a surprisingly accurate image of the moving body.

“The results of the study reveal that our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches, by utilizing WiFi signals as the only input,” the researchers wrote in a December 2022 pre-print paper.
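To make that pipeline a bit more concrete, here's a toy PyTorch sketch of the general idea: Wi-Fi channel measurements from a few antennas go in, and a coarse heatmap of where bodies are comes out. The tensor shapes and layers are invented for illustration and have nothing to do with the CMU team's actual architecture.

```python
# Toy illustration of the Wi-Fi-to-pose idea: map channel measurements from a few
# antennas to a coarse spatial heatmap of body positions. Shapes and layers are
# invented for illustration; this is not the CMU model.
import torch
import torch.nn as nn

class WifiPoseNet(nn.Module):
    def __init__(self, n_antennas=3, n_subcarriers=30, time_steps=100, grid=32):
        super().__init__()
        self.encoder = nn.Sequential(                 # treat the capture window as a
            nn.Conv2d(n_antennas, 32, 3, padding=1),  # 2-D "image": subcarriers x time
            nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.decoder = nn.Sequential(                 # project to a spatial heatmap
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, grid * grid),
            nn.Sigmoid(),                             # per-cell "body present" score
        )
        self.grid = grid

    def forward(self, csi):                           # csi: (batch, antennas, subcarriers, time)
        return self.decoder(self.encoder(csi)).view(-1, self.grid, self.grid)

model = WifiPoseNet()
fake_csi = torch.randn(1, 3, 30, 100)                 # one fake capture window
heatmap = model(fake_csi)                             # (1, 32, 32) occupancy heatmap
print(heatmap.shape)
```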

Despite the obvious concerns about intrusion, the team claims the experimental technology is "privacy-preserving" compared with a camera: the algorithm can only detect rough body positions, not faces or appearance, so it could offer a way to monitor people anonymously.

They write, “This technology may be scaled to monitor the well-being of elder people or just identify suspicious behaviors at home.”

Recent research at the University of California, Santa Barbara showed another way Wi-Fi can be used to spy through walls: the team used a similar technique to read Wi-Fi signals passing through a building wall and reveal the shapes of 3D alphabet letters on the other side.

Imaging still objects with Wi-Fi is difficult because there is no motion to pick up. "We then took a completely different approach to this challenging problem by tracing the edges of the objects," said UC Santa Barbara electrical and computer engineering professor Yasamin Mostofi.


A Futurist Predicts Human Immortality by 2030


Ray Kurzweil, a computer scientist and futurist, has put specific dates on humanity reaching immortality and AI reaching the singularity. If his predictions are correct, all you have to do to live forever is survive the next seven years.

In 1990, before any of it was obvious, Kurzweil correctly predicted that a computer would beat the human world chess champion by 2000, along with the rise of portable computers and smartphones, the shift to wireless technology, and the explosion of the Internet.

He even went back and graded his own predictions in 2010, claiming that of the 147 predictions he made in 1990 for the years leading up to 2010, 115 were "entirely correct," 12 were essentially correct, and only 3 were entirely wrong, with the remainder falling somewhere in between.

Of course, he has missed before, such as predicting self-driving cars by 2009.

Though bold (and probably wrong), the immortality claim shouldn't be dismissed out of hand: Kurzweil has been making predictions like this for years and has stuck to his original dates.

“2029 is the consistent date I have predicted for when an AI will pass a valid Turing test and therefore achieve human levels of intelligence,” Kurzweil said in 2017. “I have set the date 2045 for the ‘Singularity’ which is when we will multiply our effective intelligence a billion fold by merging with the intelligence we have created.”

Kurzweil predicts we will "advance human life expectancy" by "more than a year every year" by 2030, a threshold often called longevity escape velocity. Part of this progress toward the singularity 15 years later will involve nanobots in our bloodstream repairing our bodies and connecting our brains to the cloud. When this happens, we could send videos (or emails, if you want to dwell on the duller aspects of being a freaking cyborg) straight from our brains and back up our memories.
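That "more than a year every year" threshold is what makes the immortality math work: if remaining life expectancy grows faster than time passes, it never runs out. Here's a toy calculation of the arithmetic (the starting expectancy and gain rate are invented purely for illustration, not Kurzweil's figures):

```python
# Toy illustration of "longevity escape velocity": if medicine adds more than one year
# of remaining life expectancy per calendar year, remaining expectancy never declines.
# Starting value and gain rate are invented for illustration.
remaining_years = 30.0   # assumed remaining life expectancy today
annual_gain = 1.2        # assumed years of expectancy added per calendar year after 2030

for year in range(2030, 2041):
    remaining_years += annual_gain - 1.0   # one year passes, but medicine adds 1.2 back
    print(year, round(remaining_years, 1))
# Remaining expectancy rises by 0.2 per year instead of falling -- it never reaches zero.
```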

Kurzweil believes the singularity will make humans “godlike” rather than a threat.

"We'll be funnier. Our sexiness will increase. We'll express love better," he said in 2015.

“If I want to access 10,000 computers for two seconds, I can do that wirelessly,” he said, “and my cloud computing power multiplies ten thousandfold. We’ll use our neocortex.”

“I’m walking along and Larry Page comes, and I need a clever response, but 300 million modules in my neocortex won’t work. One billion for two seconds. Just like I can multiply my smartphone’s intelligence thousands-fold today, I can access that in the cloud.”

Nanobots can already deliver drug payloads into brain tumors, and brain-computer interfaces now let paralyzed patients spell out sentences and monkeys play Pong, but without dramatic advances in the next few years it's unlikely we'll get anywhere near Kurzweil's vision within seven.

For now, we remain far from the future Kurzweil describes, with most human-AI interaction still happening the old-fashioned way. Time will tell how accurate he is; fortunately, his predictions promise plenty of it.
