
Elon Musk, Stephen Hawking likely wrong to fear advanced artificial intelligence


Artificial intelligence has become a very popular topic these days, thanks in no small part to well-known public figures such as Elon Musk and Stephen Hawking. Unfortunately, most of what we hear about the subject is pretty scary and usually ends with humanity being either destroyed or enslaved once our robotic creations inevitably rise up against us. But what if artificial intelligence could stay on our side even after it becomes much smarter than we are? For some reason, most people seem to think that is an unlikely scenario, Elon Musk, Stephen Hawking, and even Bill Gates included. The good news is that not everybody feels the same way, and there are in fact quite a few people who view AI as a great opportunity rather than a threat to humanity.

One of these people is Professor Lawrence Krauss, a world-renowned theoretical physicist who also happens to be the author of multiple bestselling books, the director of the Origins Project, and, among other things, one of the first scientists to propose the concept of “dark energy.” In short, the man knows his stuff. Krauss recently weighed in with his own thoughts on the possibility of advanced artificial intelligence, and for once the prospect doesn’t seem quite so grim. He believes that computers will indeed become conscious one day but, contrary to many predictions, that day won’t necessarily mark the beginning of the end for mankind.

An interesting point made by Professor Krauss is that AI could eventually have emotions, just like us. Since artificial intelligence is expected to become very smart thanks to its ability to learn new things at an extremely fast rate, who’s to say it won’t end up learning what it means to be sad, happy, or afraid? If that’s the case, it’s not far-fetched to think that artificial intelligence would feel empathy for living things rather than the urge to destroy them that we see in most movies. The physicist also mentioned that he is aware of the concerns expressed by his colleague Stephen Hawking and others. Although he understands why some people are scared that AI could cause major problems for us at some point, Krauss is an optimist and thinks a lot of good can come out of this technology as well. Here’s hoping he’s right on this one. In his own words:

“Machines are useful because they’re tools that help us do what we want to do. And I think computation machines are good examples of that. One has to be very careful in creating machines to not assume they’re more capable than they are. That’s true in cars. That’s true in vehicles that we make. That’s true in weapons we create. That’s true in defensive mechanisms we create. And so to me the dangers of AI are mostly due to the fact that people may assume the devices they create are more capable than they are and don’t need more control and monitoring. I guess I find the opportunities to be far more exciting than the dangers. The unknown is always dangerous but ultimately machines and computational machines are improving our lives in many ways.”

Besides, at the moment artificial intelligence is far less advanced than people give it credit for. We may need to start worrying in a decade or so, but for now AI robots apparently can’t even fold laundry properly, according to Professor Krauss. Admittedly, Facebook does have rather impressive artificial intelligence that can generate some pretty realistic images; however, that software is unlikely to become sentient anytime soon.


Spyware maker pcTattletale announces it is ceasing operations and shutting down following a data breach


The creator of the spyware application pcTattletale says his company is now defunct and permanently closed after a data breach over the weekend.

The shutdown came shortly after a hacker defaced the spyware maker’s website and posted links to large amounts of data taken from pcTattletale’s servers, including databases of customer information and some of the data stolen from victims.

pcTattletale was a covert surveillance application, commonly referred to as “stalkerware,” due to its capability to monitor individuals without their awareness. This app enabled the user to remotely access screenshots of the target’s Android or Windows device, as well as their confidential information, from any location worldwide. pcTattletale marketed its spyware application as a means to monitor employees while also openly endorsing its capability to surreptitiously observe spouses and domestic partners without their consent, which is a violation of the law.

According to the data breach notification site Have I Been Pwned, the app, which is no longer in operation, had a total of 138,000 customers who had registered to use the service.
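For those who want to verify figures like this, Have I Been Pwned exposes breach metadata through its public, unauthenticated v3 API. Below is a minimal lookup sketch in Python; the breach slug “pcTattletale” and the prompt-free requests client are assumptions for illustration, not confirmed details of the HIBP catalog.

    import requests

    # The single-breach endpoint of the HIBP v3 API requires no API key,
    # but HIBP rejects requests that lack a User-Agent header.
    # "pcTattletale" as the breach slug is an assumption for illustration.
    url = "https://haveibeenpwned.com/api/v3/breach/pcTattletale"
    resp = requests.get(url, headers={"User-Agent": "breach-lookup-example"}, timeout=10)
    resp.raise_for_status()

    breach = resp.json()
    # Breach records include fields such as Title, BreachDate, and PwnCount.
    print(breach["Title"], breach["BreachDate"], breach["PwnCount"])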

The hacker claimed on the vandalized website that pcTattletale’s servers could be manipulated to disclose the private keys for its Amazon Web Services account. These keys were utilized by the spyware manufacturer to store a vast number of screenshots of the devices on which the spyware was installed.

pcTattletale’s website is currently inaccessible

Bryan Fleming, the founder of pcTattletale, told me via text message on Tuesday that he can no longer access the company’s Amazon Web Services account.

Fleming said he deleted all of the company’s data as a precaution, given the risk that the breach could expose his customers’ information.

Fleming stated that the account has been closed and the servers have been deleted.

An examination of the compromised data reveals that pcTattletale stored over 300 million screenshots of victims’ devices on its Amazon S3 storage server, spanning several years. I verified that there were screenshots from pcTattletale-monitored devices available to the public.

Amazon appears to have taken action against the spyware maker. The Amazon S3 storage server that pcTattletale previously used for storing device screenshots now returns the error code “AllAccessDisabled,” which Amazon uses to block all access to a customer’s account, including by the customer themselves, who must contact Amazon support to resolve it. Fleming would not say whether AWS had shut down the account, and AWS spokesperson Grant Milne also declined to comment.
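For context, “AllAccessDisabled” is a standard error code that S3 returns once Amazon has locked a bucket or account down, so anyone probing the bucket sees the same refusal. Here is a minimal sketch of what such a check might look like in Python with boto3; the bucket name is hypothetical, and the anonymous-access configuration is an assumption about how an outside researcher would probe a bucket they do not own.

    import boto3
    from botocore import UNSIGNED
    from botocore.config import Config
    from botocore.exceptions import ClientError

    # Hypothetical bucket name, used purely for illustration.
    BUCKET = "example-screenshot-bucket"

    # Anonymous (unsigned) requests, as an outside researcher would make.
    s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

    try:
        # Any simple request against the bucket surfaces the error code.
        s3.list_objects_v2(Bucket=BUCKET, MaxKeys=1)
        print("Bucket is reachable")
    except ClientError as err:
        # A bucket Amazon has locked down reports "AllAccessDisabled".
        print("S3 refused the request:", err.response["Error"]["Code"])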

Fleming said he did not keep a copy of the data, and he did not explain why the company deleted it without first notifying the people whose information was compromised in the breach. He stopped responding to our inquiries.

The situation at pcTattletale is not exceptional. Spyware apps are notorious for buggy software and for inadvertently exposing or leaking the data they collect, and poor security practices have previously led federal regulators to ban stalkerware makers from the surveillance industry.

FTC spokesperson Juliana Gruenwald Henderson declined to provide any information regarding the agency’s investigation into pcTattletale.

Other spyware makers have shut down after comparable security breaches. The Poland-based spyware LetMeSpy closed in June 2023 after a cyberattack compromised its systems and its customers’ data was deleted. A New York state investigation also led to the shutdown of the spyware apps PhoneSpector and Highster.

 


Apple’s Design Award nominees focus on small businesses and independent designers, but they mostly ignore AI (except for Arc)


With its selection of finalists for the Apple Design Awards, Apple is honoring indie apps and startups rather than larger tech companies, including those behind AI chatbots.

Amid scrutiny from lawmakers and regulators over its App Store model, Apple is using its annual list of the most exceptional and technologically advanced software on its platform to recognize smaller developers. ChatGPT is notably absent from the list of finalists. Instead, Apple favors small to midsize app developers such as Copilot Money, SmartGym, Crouton (a recipe app), Procreate Dreams (a creative app), Gentler Streak, and others, along with venture-backed startups like Rooms (a creativity app) and Arc Search (a reimagined web browser).

The latter has integrated artificial intelligence (AI) with an agent that performs browsing tasks on your behalf, and it includes a new feature, called “Call Arc,” that lets you ask questions simply by raising the phone to your ear. Notably, it is the only app on the list that explicitly leans on the technology that has drawn so much attention, in the App Store and the wider tech industry alike, over the past year.

Despite racking up high download numbers since its launch last year, ChatGPT was not named “app of the year” by either Apple or Google. The Apple Design Awards would have given Apple another chance to acknowledge the innovation, but once again it was passed over.

Apple’s selection of finalists includes indie games such as Rytmos from Floppy Club, a Copenhagen-based game developer; finity, a match-three puzzle game available on Apple Arcade; The Wreck from The Pixel Hunt, an independent game studio based in Paris; The Bear from Mucks Games, a group of creative individuals from Germany; and several others.

The non-game apps highlighted by Apple this year are primarily indie efforts. For instance, India-based independent developer RhythmicWorks Software created the meditation timer Meditate. A small group of independent developers from Italy, led by Nicholas Mariniello, created Sunlitt, a sun-tracking app. Dudel Draw, a drawing app, is developed by the indie outfit Silly Little Apps in the U.S. Isuru Wanasinghe, an Australian developer, created the journaling app Gratitude. Last but not least, Rooms, a creative app for designing imaginative spaces in an 8-bit style, was built by ex-Googlers and is backed by a16z; Apple has nominated it in two categories, making it doubly blessed.

There are, however, some prominent developers on the list, such as Neowiz from South Korea, nominated for its game “Lies of P”; 505 Games’ “Death Stranding Director’s Cut”; HoYoverse, the creator of “Genshin Impact,” for “Honkai: Star Rail”; and Activision’s “Call of Duty: Warzone Mobile.” That said, when titles use Apple technologies like MetalFX or optimizations built specifically for its M1 and later chips (or perhaps incorporate in-app purchases!), Apple’s decision is at least partially influenced.

Additional titles receiving recognition this year include What the Car?, NYT Games, Hello Kitty Island Adventure, Cityscapes: Sim Builder, How We Feel, Ahead: Emotions Coach, The Bear, Lost in Play, Wavelength, Little Nightmares, and a few select apps and games specifically designed for the Vision Pro, such as Blackbox, Loóna, Synth Riders, djay, NBA, and Sky Guide. Significantly, a number of these applications were initially developed for iOS and subsequently adapted for Vision Pro.

An “Inclusivity” section also nods to Apple’s worldwide app community, which includes developers in the European Union (EU), where the Digital Markets Act is now in force. Apple’s nominations in this section span a range of apps and games. They include “oko” from Belgium, designed specifically for low-vision users; “Complete Anatomy 2024” from Ireland, which focuses on diversity; and “Tiimo” from Denmark, which caters to neurodivergent users. The nominated games include “Unpacking” from the digital storefront Humble Bundle, “Quadline,” developed by Kovalov Ivan from Ukraine, and “Crayola Adventures.”

 


AI models have favorite numbers because they simulate human-like behavior


AI models continually astonish us, not just with what they can do but with what they can’t, and why. A noteworthy quirk of these systems is that they pick random numbers much the way humans do, which is to say, imperfectly.

But first, what does that even mean? Can’t people pick numbers randomly? And how can you tell whether someone is doing it well or not? This is actually a long-standing, well-documented human limitation: we overthink and misread randomness.

Ask a person to predict the outcome of 100 coin flips, then compare their predictions with 100 actual coin flips. You can usually tell the two apart because, counterintuitively, the real flips look less random. There will often be a run of six or seven heads or tails in a row, something human predictors almost never include in their 100 guesses.
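A quick simulation makes the point. This is just an illustrative Python sketch; the 100-flip count comes from the example above, and the trial count is arbitrary.

    import random

    def longest_run(flips):
        # Length of the longest streak of identical outcomes.
        best = run = 1
        for prev, cur in zip(flips, flips[1:]):
            run = run + 1 if cur == prev else 1
            best = max(best, run)
        return best

    # Simulate many sequences of 100 fair coin flips and look at streak lengths.
    trials = 10_000
    streaks = [longest_run([random.choice("HT") for _ in range(100)])
               for _ in range(trials)]

    print("average longest streak:", sum(streaks) / trials)                     # typically around 7
    print("share with a streak of 6+:", sum(s >= 6 for s in streaks) / trials)  # usually around 0.8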

It’s the same story when you ask someone to pick a number between 0 and 100. People hardly ever choose 1 or 100. Multiples of 5 are rare, as are numbers with repeating digits like 66 and 99. These don’t feel like “random” choices to us, because they embody some quality: small, big, distinctive. Instead, we usually pick a number ending in 7, generally from somewhere in the middle.

Psychology is full of examples of this kind of predictability, but that doesn’t make it any less strange when AIs do the same thing.

Indeed, some curious engineers at Gramener ran an informal but fascinating experiment in which they simply asked several major LLM chatbots to pick a random number between 0 and 100.

The results, reader, were not random.


Each of the three models tested had a consistent “favorite” number that always came back as its answer in the most deterministic mode, but which still showed up most often even at higher “temperatures,” the setting that increases the variability of a model’s output.
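As a rough sketch of how a test like this can be reproduced with one of the models, here is an illustrative Python snippet against the OpenAI API. The model name, prompt wording, and sample count are assumptions, as is an OPENAI_API_KEY in the environment; the Gramener team’s exact setup may have differed.

    from collections import Counter
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def pick_numbers(temperature, samples=100):
        # Ask the model for a "random" number repeatedly and tally the answers.
        counts = Counter()
        for _ in range(samples):
            resp = client.chat.completions.create(
                model="gpt-3.5-turbo",
                temperature=temperature,
                messages=[{
                    "role": "user",
                    "content": "Pick a random number between 0 and 100. Reply with only the number.",
                }],
            )
            counts[resp.choices[0].message.content.strip()] += 1
        return counts

    # Temperature 0 is the most deterministic setting; 1.0 allows more varied output.
    print(pick_numbers(temperature=0.0).most_common(5))
    print(pick_numbers(temperature=1.0).most_common(5))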

OpenAI’s GPT-3.5 Turbo has a strong preference for the number 47. Previously, it favored 42, a number made famous by Douglas Adams’ novel The Hitchhiker’s Guide to the Galaxy as the answer to the ultimate question of life, the universe, and everything.

Anthropic’s Claude 3 Haiku went with 42, while Gemini has a preference for 72.

Significantly, all three models exhibited a bias similar to that of humans in the other numbers they chose, even when the temperature was high.

All of them tended to avoid numbers that were either too low or too high; Claude, in particular, never went above 87 or below 27, and even those were outliers. Repeated-digit numbers such as 33, 55, and 66 were studiously avoided, though 77, which ends in 7, did appear. Round numbers were almost entirely absent, save for one instance when Gemini, at its maximum temperature, unexpectedly chose 0.

Why should this be? AIs are not human. Why would they care what “seems” random? Have they finally attained consciousness, and is this how they show it?

No. The answer, as is usually the case with these things, is that we are anthropomorphizing a step too far. These models do not care what is and is not random; they do not know what “randomness” is. They answer this question the same way they answer every other one: by looking at the training data and repeating what was most often written after a question that looks like “pick a random number.” The more often an answer appears there, the more often the model repeats it.

They would rarely see 100 given as an answer in their training data, because it is an infrequent response, so from the model’s perspective 100 is simply not an acceptable answer to that question. With no actual ability to reason and no understanding of numbers, it can only answer like the stochastic parrot it is. Likewise, LLMs have tended to struggle with basic arithmetic, such as multiplying a few numbers together, because it is highly improbable that the specific calculation “112 multiplied by 894, then multiplied by 32, equals 3,204,096” appears anywhere in their training data. More recent models, however, will detect that a math problem is present and hand it off to a subroutine.
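To make that mechanism concrete, here is a toy Python sketch of frequency-driven sampling with a temperature knob. The frequency table is invented purely for illustration; it simply encodes the human biases described above (favoring numbers that end in 7, shunning round and repeated-digit numbers), and the sampling rule stands in for what a real model does over its whole vocabulary.

    import math
    import random

    # Invented counts standing in for how often each answer follows
    # "pick a random number" in a training corpus.
    frequencies = {"47": 150, "37": 120, "73": 90, "42": 80, "63": 40, "50": 3, "100": 2}

    def sample(freqs, temperature=1.0):
        # Lower temperature sharpens the distribution toward the most frequent
        # answer; higher temperature flattens it, letting rarer answers through.
        weights = [math.exp(math.log(count) / temperature) for count in freqs.values()]
        return random.choices(list(freqs), weights=weights, k=1)[0]

    print([sample(frequencies, temperature=0.1) for _ in range(5)])  # almost always "47"
    print([sample(frequencies, temperature=2.0) for _ in range(5)])  # noticeably more varied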

This is a perfect illustration of LLM habits and of the apparent humanity they can display. It is worth remembering that these systems have effectively been trained to imitate human behavior, even if that was never the original purpose, which is why pseudanthropy is so hard to avoid or prevent.

In the headline I suggested that these models behave as if they think they are people, but that is a little misleading. As we frequently emphasize, they do not think at all. Yet in their responses they are always imitating people, with no need for knowledge or reasoning. Whether you are asking for a chickpea salad recipe, investment advice, or a random number, the process is the same. The results feel human because they come from human-produced content and are then remixed, for your convenience, and of course for the bottom line of big AI.
