
Artificial Intelligence

GeekReply News Round-up: October 15-21





This has been an eventful week. Some might argue nothing especially substantial happened, but we got plenty of loot box talk, along with some other oddities in the realms of tech and gaming. So let's run through the week's major developments.

  • Nintendo Switch gets new tricks

On October 18, the Nintendo Switch received system version 4.0.0. The update adds the ability to transfer profiles to other Switch consoles, as well as video capture: briefly holding the screen capture button records the last 30 seconds of gameplay, which you can then trim and upload. For now, capture only works in The Legend of Zelda: Breath of the Wild, Mario Kart 8 Deluxe, ARMS, and Splatoon 2.

  • Pokémon GO gets spooky

Niantic is on its way to bringing everyone's favorite generation to Pokémon GO, and from October 20 to November 3 the game will see some new spooky friends. Not only are there new Ghost- and Dark-type Pokémon to catch, but also a special "Witch" Pikachu. Players' capture rewards will be doubled, and every player will receive a Mimikyu hat, because every holiday event needs a free hat. So get ready to be the ghost buster you always wanted to be.

In 2014, a man wearing a Pikachu hat and carrying a Pikachu doll jumped the White House fence before being apprehended. While his reasons were never fully explained, his would-be successor, Curtis Combs, did it for attempted fame: Combs donned a full Pikachu suit and jumped the fence while recording himself, planning to upload the footage to YouTube.

  • Starborne

Heads up, MMORTS fans: there's a new game on the horizon, Starborne. This free-to-play title is the ambition of Solid Clouds, an indie studio based in Iceland and headed by designers like Ásgeir Ásgeirsson, EVE Online's former art director, and Hrafnkell Óskarsson, who played a major role in EVE's lore and design.

They have set out to create the first 3D MMORTS. While currently in an alpha build, the game shows promise, with single-player activities, a card-based upgrade system, and five player classes: scout, bomber, industrial, offensive, and defensive.

The Starborne client is being developed on Windows, but the game is planned to go cross-platform at release, covering macOS, iOS, Android, Linux, and browsers. The test rounds, however, are Windows-only.

  • EA shuts down Visceral Games

EA seems to have become a serial studio killer, shutting down Visceral Games for good as well. The decision sparked outrage, as Visceral was one of the most beloved studios in gaming. Creatives such as the director of God of War weighed in, expressing disgust toward EA in a tweet.

  • Toyota's talking AI

Toyota's new models are getting a notable feature: a capable AI assistant that can talk with drivers to learn their preferences. Toyota frames this as a first step toward building far more capable autonomous cars.

This concludes this week's news round-up. Be on the lookout for more as we go along here at GeekReply.

I always wanted to be a journalist who listens: the voice of the unspoken, someone heavily involved in the gaming community. From leading a competitive multi-branch team to organizing tournaments for the competitive scene to taking part in many gaming communities, I want to keep moving forward as a journalist.



Meta's newly formed AI advisory council consists entirely of white men





Meta recently announced the formation of an AI advisory council composed entirely of white men. What else could we expect? For years, women and people of color have been saying they are overlooked and marginalized in artificial intelligence, despite being qualified and having made significant contributions to the field's development.

Meta did not immediately respond to our inquiry about the advisory board's diversity.

This new advisory board differs in composition from Meta's actual board of directors and its Oversight Board, both of which prioritize diversity in gender and racial representation. The AI board was not elected by shareholders and has no fiduciary duty. Meta told Bloomberg that the board would offer insights and recommendations on technological advancements, innovation, and strategic growth opportunities, and that it would meet regularly.

It's notable that the AI advisory council consists solely of businesspeople and entrepreneurs, with no ethicists or anyone from an academic or deep research background. While the executives from Stripe, Shopify, and Microsoft certainly have experience bringing products to market, AI is a uniquely complex field that requires specialized expertise. It's a high-stakes endeavor with potentially far-reaching consequences, especially for marginalized communities.

Sarah Myers West, managing director of the AI Now Institute, a nonprofit that studies the social effects of AI, told me that it’s important to “critically examine” the companies that are making AI to “make sure the public’s needs are served.”

“This technology makes mistakes a lot of the time, and we know from our own research that those mistakes hurt communities that have been discriminated against for a long time more than others,” she said. “We should set a very, very high bar.”

AI's harms fall on women far more often than on men. In 2019, Sensity AI found that 96% of AI deepfake videos online were nonconsensual sexually explicit videos. Generative AI has since spread widely, and women are still bearing the brunt of it.

In a notable January incident, explicit deepfake videos of Taylor Swift, created without her consent, spread across X. One post racked up hundreds of thousands of likes and 45 million views. Platforms like X have historically failed to protect women in these situations, but because Swift is one of the most influential women in the world, X took action, blocking search terms like "taylor swift ai" and "taylor swift deepfake."

But if this happens to you and you are not a global superstar, you may be out of luck. There are plenty of reports of middle and high school students creating explicit deepfakes of their classmates. The technology has existed for a while, but it has become far more accessible: you no longer need to be technically savvy to download apps explicitly marketed for removing clothing from photos of women or swapping their faces into pornographic content. NBC reporter Kat Tenbarge found that Facebook and Instagram displayed advertisements for an app called Perky AI, which billed itself as a tool for creating explicit images.

Two of the ads, which apparently evaded Meta's detection until Tenbarge brought the matter to the company's attention, showed images of celebrities Sabrina Carpenter and Jenna Ortega with their bodies intentionally obscured, encouraging users to prompt the app to digitally remove their clothing. One used a photograph of Ortega taken when she was only 16 years old.

Allowing Perky AI to advertise was not an isolated incident. Meta's Oversight Board has opened investigations into the company's handling of complaints about AI-generated sexually explicit content.

It is crucial to include the perspectives of women and people of color in the development of artificial intelligence products. Historically, marginalized groups have been systematically excluded from participating in the creation of groundbreaking technologies and research, leading to catastrophic outcomes.

A clear illustration is the historical exclusion of women from clinical trials until the 1970s, resulting in the development of entire fields of research without considering the potential effects on women. A 2019 study conducted by the Georgia Institute of Technology revealed that black individuals, specifically, experience the consequences of technology that is not designed with their needs in mind. For instance, self-driving cars are more prone to colliding with black individuals due to the difficulty their sensors may face in detecting black skin.

Algorithms trained on biased data simply replicate the prejudices humans instilled in them. We are already watching AI systems perpetuate and intensify racial discrimination in areas such as employment, housing, and criminal justice. Voice assistants struggle to understand diverse accents, and AI detectors frequently flag writing by non-native English speakers as machine-generated, as Axios has highlighted, because English is the primary language AI is trained on. Facial recognition systems flag black people as potential criminal-suspect matches more often than white people.

The present advancement of AI reflects the same power structures of class, race, gender, and Eurocentrism evident in other domains, and leaders do not appear to be paying attention to the problem. If anything, they are reinforcing it. Investors, founders, and tech leaders are so fixated on rapid progress and disruptive innovation that they fail to grasp the potential harms of generative AI, currently the hottest AI technology. A McKinsey report suggests that AI could automate around half of jobs that do not require a four-year college degree and pay over $42,000 annually, jobs disproportionately held by minority workers.

There is legitimate concern about whether an all-white-male team at one of the world's most influential tech companies, racing to build AI that saves the world, can advise on products for everyone when it represents only a narrow demographic. Developing technology that is accessible to every single person takes a substantial, concerted effort. Building AI systems that are both safe and inclusive demands research and understanding at an intersectional societal level, and it is hard to see this advisory board helping Meta get there. Where Meta falls short, another startup may step in.



Microsoft releases Copilot+ PCs because it wants to turn Windows into an AI operating system





Microsoft wants to make generative AI a big part of Windows and the PCs that run it.

This week, at a keynote ahead of its annual Build developer conference, the company showed off a new line of Windows computers it calls Copilot+ PCs. They ship with generative AI-powered features, like Recall, that help users find apps, files, and other content they've seen before. Copilot, Microsoft's generative AI brand, will soon be built far more deeply into Windows 11. New Microsoft Surface devices are on the way, too.

Here's a round-up of all the important news.

The Copilot+ PCs


Copilot+ PCs are Microsoft's idea of the best Windows hardware for AI. All of them have dedicated chips, called NPUs (neural processing units), that power AI features like Recall. They also come with at least 16GB of RAM and SSD storage.

Qualcomm's Snapdragon X Elite and Plus chips will power the first Copilot+ PCs. Microsoft says these chips can deliver up to 15 hours of web browsing and 20 hours of video playback. Intel and AMD are also working on processors for Copilot+ devices, which manufacturers such as Acer, Asus, Dell, HP, Lenovo, and Samsung will build.

Some Copilot+ PCs can be preordered today for as little as $999.

Surface Laptop and Surface Pro
The new Surface Laptop and Surface Pro are designed to be fast and long-lasting.


The most recent iteration of the Surface Laptop, which comes in either a 13.8- or 15-inch size, has undergone a redesign featuring sleeker screen bezels and contemporary aesthetics. According to the company, the device has a battery life of up to 22 hours and is claimed to be 86% faster than the Surface Laptop 5. Additionally, it is compatible with Wi-Fi 7 and features a touchpad with haptic feedback.


Regarding the new Surface Pro, Microsoft claims that it offers a performance boost of up to 90% compared to the previous-generation Surface Pro (the Surface Pro 9). It also features a new OLED display with HDR, Wi-Fi 7 (with the option for 5G connectivity), and an improved ultrawide front-facing camera. In addition, the keyboard of the device has been strengthened with extra carbon fiber and now includes haptic feedback.


Windows 11’s upcoming Recall feature has the ability to retain information about apps and content that a user accessed on their PC, even if it was weeks or months ago. This feature can be helpful in locating specific information, such as in a Discord chat where the user was discussing potential clothing purchases. Recall’s timeline feature allows users to easily review their recent work and navigate through files, such as PowerPoint presentations, to find relevant information for their searches.

Microsoft claims that Recall has the capability to establish connections between colors, images, and other elements, enabling users to search for virtually anything on their personal computers using natural language, similar to the technology employed by startup Rewind. The company asserts that all user data linked to Recall is maintained as private and stored on the device without being utilized for training AI models, which is of significant importance.

Here is additional information from Microsoft: “Your snapshots are stored exclusively on your personal computer and are not shared or stored elsewhere.” To delete specific snapshots, modify and remove time intervals, or pause the process, you can access the corresponding options in the Settings menu. Alternatively, you can directly perform these actions by clicking on the icon located in the System Tray on your Taskbar. You also have the option to apply filters that prevent certain apps and websites from being saved.

Photo manipulation and real-time language translations
Windows now incorporates a greater amount of artificial intelligence (AI) than ever before, with certain AI features available exclusively on the new Copilot+ PCs.

The Super Resolution feature has the ability to automatically enhance and enlarge old photos. Copilot now has the capability to analyze images and provide users with suggestions for creative compositions. Users can utilize the Cocreator feature to generate images and instruct the AI model to modify or transform the image based on their drawings.


Additionally, live captions with live translations can translate any audio that passes through a PC, regardless of its source (such as YouTube or a local file), into the user’s preferred language. Live translations will initially support approximately 40 languages, including English, Spanish, Mandarin, and Russian.

The Windows Copilot Runtime
Enabling features like Recall and Super Resolution is the Windows Copilot Runtime, a set of approximately 40 generative AI models that Microsoft refers to as “a new layer” of Windows. Similar to an IT project manager, the Windows Copilot Runtime enables generative AI-powered apps, including third-party apps, to run on an individual Copilot+ PC without relying on an internet connection. The semantic index and a vector-based system are responsible for making this possible.
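The "semantic index and vector-based system" Microsoft mentions is, generically, vector search: content is embedded as numeric vectors, and a natural-language query is matched to the most similar one. The sketch below is purely illustrative and assumes nothing about Microsoft's actual implementation; the `embed()` function is a toy bag-of-words stand-in for a real embedding model.

```python
# Illustrative sketch of vector-based semantic search, the general technique
# behind natural-language lookup over local content. Not Microsoft's code.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(index: dict, query: str) -> str:
    """Return the name of the indexed snapshot that best matches the query."""
    q = embed(query)
    return max(index, key=lambda name: cosine(index[name], q))

# Index a few hypothetical "snapshots" of on-screen text, then query them
# in natural language.
snapshots = {
    "discord_chat": "chatting about a blue plaid jacket to buy",
    "slides": "quarterly sales PowerPoint presentation deck",
    "recipe": "pasta recipe with garlic and tomatoes",
}
index = {name: embed(text) for name, text in snapshots.items()}
print(search(index, "that jacket I was talking about buying"))  # discord_chat
```

A production system would swap in a learned embedding model and an approximate-nearest-neighbor index, but the matching step remains the same similarity comparison.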

According to Microsoft’s announcement, CapCut, the well-known video editor from TikTok owner ByteDance, will use the Windows Copilot Runtime to improve the effectiveness of its AI capabilities.



OpenAI asked to use Scarlett Johansson's voice, she says





OpenAI is pulling one of the voices ChatGPT uses, known as Sky, after users pointed out that it sounded like Scarlett Johansson, the company announced on Monday. In response, Johansson said in a statement that she had hired legal counsel to look into the Sky voice and find out exactly how it was made. OpenAI used the Sky voice last week to show off its new GPT-4o model.

In a blog post, the company said its AI voices are not meant to imitate famous people; Sky's voice, it said, belongs to a different professional actress using her own natural speaking voice. "To protect their privacy, we can't say who does our voice work."

Last week, a video of the demo went viral on social media as people noted how much the voice sounded like Scarlett Johansson's. Jokes circulated about how flirtatious the voice was, with some comparing it to a male fantasy.

Many noted that the flirty voice sounds a lot like Johansson's turn as a seductive virtual assistant in the 2013 movie "Her," in which Joaquin Phoenix plays a man who falls in love with the virtual assistant.

The company hasn’t said anything about the similarity between Sky’s voice and Johansson’s, but OpenAI CEO Sam Altman tweeted the word “Her” after the event.

OpenAI's demo last week was meant to show off the chatbot's improved conversational abilities, but it went viral because the sultry voice laughed at almost everything an OpenAI employee said. At one point, it told the employee, "Wow, that's quite the outfit you're wearing." At another, after being complimented, the chatbot replied, "Stop it, you're making me blush."

In a blog post, OpenAI says it wants its chatbot voices to sound "approachable" and "inspire trust," with a tone that is "warm, engaging, confidence-building, and charismatic."

In the future, OpenAI says it will “add more voices to ChatGPT to better match the wide range of users’ interests and preferences.”

Johansson's full statement:

"Last September, Sam Altman asked me to voice the current ChatGPT 4.0 system. He told me he believed that by voicing the system, I could help bridge tech companies and creatives and help consumers feel comfortable with the seismic shift between humans and AI. He said he felt my voice would be comforting to people.

After much consideration and for personal reasons, I turned down the offer. Nine months later, my friends, family, and the general public all noted how much the newest system, named 'Sky,' sounded like me.

When I heard the released demo, I was shocked, angered, and in disbelief that Mr. Altman would pursue a voice that sounded so much like mine that even my closest friends and news outlets could not tell the difference. By tweeting the word 'her,' Mr. Altman even insinuated that the resemblance was intentional, referencing the movie in which I voiced Samantha, a chatbot who becomes close with a human.

Two days before the ChatGPT 4.0 demo was released, Mr. Altman contacted my agent, asking me to reconsider. Before we could connect, the system was out there.

Their actions forced me to hire legal counsel, who sent two letters to Mr. Altman and OpenAI laying out what they had done and asking them to detail exactly how they created the 'Sky' voice. As a result, OpenAI reluctantly agreed to take down the 'Sky' voice.

Many of us are grappling with deepfakes and how to protect our own likenesses, work, and identities right now. I believe these are questions that deserve complete answers. I look forward to resolution in the form of transparency and the passing of appropriate legislation to help protect individual rights."

OpenAI shared this statement from Altman: "Sky's voice is not Scarlett Johansson's, and it was never meant to sound like hers." He said the company cast the voice actor behind Sky before reaching out to Johansson, and that out of respect for her, it has paused using Sky's voice in its products, adding that OpenAI is sorry it didn't communicate better.
