
Meta’s new AI advisory council is composed entirely of white men


Meta recently announced the formation of an AI advisory council composed entirely of white men. What else did we expect? For years, women and people of color have been saying they are ignored and sidelined in artificial intelligence, despite being qualified and despite playing significant roles in the field’s development.

Meta did not immediately respond to our request for comment on the board’s diversity.

The new advisory board differs in composition from Meta’s actual board of directors and its Oversight Board, both of which are more diverse in gender and racial representation. Shareholders did not elect this AI board, and it has no fiduciary duty. Meta told Bloomberg that the board would offer insights and recommendations on technological advancements, innovation, and strategic growth opportunities, and that it would meet periodically.

Notably, the AI advisory council consists entirely of businesspeople and entrepreneurs, with no ethicists or anyone from an academic or deep research background. While the executives from Stripe, Shopify, and Microsoft have plenty of experience bringing products to market, AI is a different kind of undertaking: a high-stakes field whose mistakes can have far-reaching consequences, especially for marginalized communities.

Sarah Myers West, managing director of the AI Now Institute, a nonprofit that studies the social effects of AI, told me that it’s important to “critically examine” the companies that are making AI to “make sure the public’s needs are served.”

“This technology makes mistakes a lot of the time, and we know from our own research that those mistakes hurt communities that have been discriminated against for a long time more than others,” she said. “We should set a very, very high bar.”

AI’s harms fall disproportionately on women. In 2019, Sensity AI found that 96% of AI deepfake videos online were nonconsensual sexually explicit videos. Generative AI has since become far more widespread, and women still bear the brunt of it.

In January, nonconsensual explicit deepfakes of Taylor Swift went viral on X; one post racked up hundreds of thousands of likes and 45 million views. Social platforms like X have historically failed to protect women from this kind of abuse, but because Swift is one of the most powerful women in the world, X responded by banning search terms such as “taylor swift ai” and “taylor swift deepfake.”

But if this happens to you and you’re not a global pop sensation, you may be out of luck. There are numerous reports of middle and high school students creating explicit deepfakes of their classmates. The technology has existed for a while, but it has become far more accessible: you no longer need to be technically savvy to download apps explicitly marketed to remove clothing from photos of women or to swap their faces into pornography. NBC reporter Kat Tenbarge found that Facebook and Instagram ran advertisements for an app called Perky AI, which billed itself as a tool for creating explicit images.

Two of the ads, which apparently escaped Meta’s detection until Tenbarge flagged them to the company, showed images of celebrities Sabrina Carpenter and Jenna Ortega with their bodies blurred out, urging users to prompt the app to digitally remove their clothing. One of the ads used a photograph of Ortega taken when she was only 16 years old.

The decision to let Perky AI advertise was not an isolated incident. Meta’s Oversight Board has since opened investigations into the company’s handling of reports of sexually explicit AI-generated content.

Including the perspectives of women and people of color in the development of AI products is crucial. Marginalized groups have historically been shut out of the creation of world-changing technologies and research, with disastrous results.

A clear example: women were largely excluded from clinical trials until the 1970s, meaning entire fields of research developed without accounting for how treatments would affect women. A 2019 Georgia Institute of Technology study showed that Black people in particular bear the consequences of technology not built with them in mind: self-driving cars were found to be more likely to hit them because their sensors have more difficulty detecting darker skin.

Algorithms trained on biased data simply reproduce the prejudices humans baked into them. More broadly, we are already seeing AI systems perpetuate and amplify racial discrimination in employment, housing, and criminal justice. Voice assistants struggle to understand a range of accents, and AI detectors often flag the writing of non-native English speakers as machine-generated because, as Axios has noted, English is AI’s native tongue. Facial recognition systems flag Black people as likely matches for criminal suspects more often than they do white people.

AI’s current development reflects the same power structures around class, race, gender, and Eurocentrism that we see elsewhere, and it seems leaders aren’t paying attention. Quite the opposite: they are reinforcing it. Investors, founders, and tech leaders are so fixated on moving fast and disrupting that they fail to grasp how generative AI, the AI technology of the moment, could deepen the harm. A McKinsey report suggests AI could automate roughly half of all jobs that don’t require a four-year degree and pay over $42,000 annually, jobs disproportionately held by minority workers.

There is legitimate concern about whether an all-white-male team at one of the world’s most influential tech companies, racing to build world-saving AI, can advise on products for everyone when its members represent only one narrow demographic. Building technology that works for every single person takes a massive, concerted effort. The work of making AI safe and inclusive, from research to understanding society at an intersectional level, is so demanding that it’s hard to see this advisory board helping Meta get it right. Where Meta falls short, another company could step in.


A new study suggests that ChatGPT may have passed the Turing test


In 1637, the French philosopher René Descartes posed an interesting question: can a machine think? In 1950, the English mathematician and computer scientist Alan Turing offered his answer to this 300-year-old question: who cares? A better question, he argued, was what would become known as the “Turing test”: given a person, a machine, and a human interrogator, could the machine ever fool the interrogator into thinking it was the person?

That is how Turing reframed the question 74 years ago. Now, researchers at the University of California, San Diego think they have the answer. In a new study, people chatted for five minutes with either an AI system or another person, and the results suggest the answer might be “yes.”

“Participants in our experiment were no better than chance at identifying GPT-4 after a five-minute conversation, suggesting that current AI systems are capable of deceiving people into believing that they are human,” the researchers write in the preprint paper, which has not yet undergone peer review. “These results likely set a lower bound on the potential for deception in more naturalistic settings, where people may not be alert to the possibility of deception or focused exclusively on detecting it.”

Headline-grabbing as that is, it isn’t a milestone everyone agrees on. The researchers note that while Turing originally conceived the imitation game as a measure of intelligence, “many objections have been raised to this idea.” Humans, for one, are notorious for anthropomorphizing almost anything: we want to connect with things, whether they’re people, dogs, or a Roomba with googly eyes stuck on top.

It’s also worth noting that GPT-4, along with GPT-3.5, which was tested as well, convinced humans it was a person only about half the time, barely better than random chance. So what does the result actually mean?

As it turns out, the researchers built a control into the experiment: ELIZA. Created at MIT in the mid-1960s, ELIZA was one of the first programs of its kind, impressive for its era, but it bears little resemblance to modern large language model (LLM)-based systems.

“ELIZA was limited to canned, pre-written responses, which greatly constrained what it could do. It might fool someone for five minutes, but the limitations soon become clear,” Nell Watson, an AI researcher at the Institute of Electrical and Electronics Engineers (IEEE), told Live Science. “Language models are endlessly flexible: they can synthesize answers across a broad range of topics, speak in particular languages or sociolects, and portray themselves with character-driven personality and values. It’s an enormous step forward from something hand-programmed by a human being, no matter how cleverly and carefully.”
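To make that contrast concrete, here is a toy Python sketch of the kind of keyword-matching, canned-response loop ELIZA relied on. The patterns below are invented for illustration; the original program used a much richer hand-written script, but the mechanism is the same.

import random
import re

# Invented rules in the spirit of ELIZA: match a keyword pattern,
# then fill a pre-written template with the captured text.
RULES = [
    (r"\bI feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"\bI am (.+)", ["Why do you say you are {0}?"]),
    (r"\bbecause (.+)", ["Is that the real reason?"]),
]
FALLBACKS = ["Please go on.", "Tell me more.", "How does that make you feel?"]

def eliza_reply(user_input: str) -> str:
    for pattern, responses in RULES:
        match = re.search(pattern, user_input, re.IGNORECASE)
        if match:
            return random.choice(responses).format(*match.groups())
    # No keyword matched: fall back to a generic canned prompt.
    return random.choice(FALLBACKS)

print(eliza_reply("I feel tired all the time"))
# e.g. "Why do you feel tired all the time?"

However elaborate the script, every possible response is a template a human wrote in advance, which is exactly the limitation Watson describes.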

That made ELIZA the perfect control. How do you account for test subjects who zone out and simply pick “human” or “machine” at random? If ELIZA scores about the same as chance, the test probably isn’t being taken seriously, because ELIZA isn’t that good. And how much of the effect is just people ascribing human traits to anything that talks to them? However much ELIZA convinced them, that’s roughly your answer.

In fact, ELIZA scored only 22%, convincing barely 1 in 5 people that it was human. Because test subjects could reliably distinguish some computers from people, but not GPT-4, the researchers write, it becomes much more plausible that ChatGPT really has passed the Turing test.

So, does this mean we’re entering a new era of human-like AI? Are computers now smarter than people? Maybe, but we shouldn’t rush to that conclusion.

The researchers say, “In the end, it seems unlikely that the Turing test provides either necessary or sufficient evidence for intelligence. At best, it provides probabilistic support.” The people who took part weren’t even looking for what you might call “intelligence”; the paper says they “were more focused on linguistic style and socio-emotional factors than more traditional notions of intelligence such as knowledge and reasoning.” This “could reflect interrogators’ latent assumption that social intelligence has become the human trait that is most difficult for machines to copy.”

Which raises an unsettling question: is the bigger problem the rise of the machines, or the fall of humans?

“Real humans were actually more successful, convincing interrogators that they were human two-thirds of the time,” the paper’s co-author, Cameron Jones, told Tech Xplore. “Our results suggest that in the real world, people might not be able to reliably tell if they’re talking to a human or an AI system.”

“In the real world, people might not be as aware that they’re talking to an AI system, so the rate of deception could be even higher,” he warned. “This makes me wonder what AI systems will be used for in the future, whether they’re used to automate customer-facing jobs, run bots, or spread misinformation and fraud.”

A draft of the study is available on arXiv; it has not yet been peer-reviewed.


The Threads API for developers is now live


Meta finally released its long-awaited Threads API today, so developers can start building apps that use it. Third-party developers will now be able to create new experiences around the platform.

Mark Zuckerberg also posted about the launch of the API, saying, “The Threads API is now widely available and will be coming to more of you soon.”

Threads engineer Jesse Chen wrote in a blog post that developers can now use the new API to publish posts, fetch their own content, and build reply-management tools. In other words, developers can let users hide or unhide replies, or respond to specific ones.

The API also includes analytics, letting developers see metrics such as views, likes, replies, reposts, and quotes at both the media and account level, the company said.
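For a rough sense of how these pieces fit together, here is a minimal Python sketch of the two-step publish flow and a per-post insights call, based on the endpoint shapes in Meta’s Threads API documentation. The access token, user ID, and metric names are placeholders; treat the exact parameters as assumptions to verify against the official docs.

import requests

ACCESS_TOKEN = "YOUR_THREADS_ACCESS_TOKEN"  # placeholder; obtained via Meta's OAuth flow
USER_ID = "me"                               # placeholder; or a numeric Threads user ID
BASE = "https://graph.threads.net/v1.0"

# Step 1: create a media container for a text post.
container = requests.post(
    f"{BASE}/{USER_ID}/threads",
    params={
        "media_type": "TEXT",
        "text": "Hello from the Threads API!",
        "access_token": ACCESS_TOKEN,
    },
).json()

# Step 2: publish the container created above.
published = requests.post(
    f"{BASE}/{USER_ID}/threads_publish",
    params={"creation_id": container["id"], "access_token": ACCESS_TOKEN},
).json()
print("Published post ID:", published["id"])

# Fetch per-post insights (views, likes, replies, reposts, quotes).
insights = requests.get(
    f"{BASE}/{published['id']}/insights",
    params={
        "metric": "views,likes,replies,reposts,quotes",
        "access_token": ACCESS_TOKEN,
    },
).json()
print(insights)

The container-then-publish pattern appears to mirror Meta’s existing Instagram content publishing flow, so developers who have built on that API should find it familiar.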

Instagram head Adam Mosseri first revealed the company’s work on the Threads API in October 2023. It initially launched as a closed beta with partners including Techmeme, Sprinklr, Sprout Social, Social News Desk, Hootsuite, and a handful of other developers. At the time, Chen said Meta planned to open the API to more developers in June, and the company has kept that promise.

Along with the launch of the new API, the company also put out an open-source reference app on GitHub so developers can play with it.

2023 was a hard year for third-party developers building tools for social networks, as platforms like Twitter (now X) and Reddit restricted or shut down API access to varying degrees. Decentralized social networks like Mastodon and Bluesky, by contrast, have been more open to developers. With more than 150 million users, Meta’s Threads is the most popular of the new wave of social networks, and now that it supports the fediverse and offers an API, third-party developers can build some great social media experiences around it.


Apple says it intends to work with Google’s Gemini in the future


After the WWDC 2024 keynote, which unveiled Apple Intelligence and announced a partnership with OpenAI to integrate ChatGPT into Siri, Senior Vice President Craig Federighi confirmed that Apple intends to work with additional third-party models. The first example the executive offered was Google’s Gemini.

“In the future, we look forward to integrating with other models, such as Google Gemini,” Federighi said during a post-keynote discussion. He quickly noted that the company has nothing to announce today, but that this is the general direction it’s heading.

OpenAI’s ChatGPT is set to become the first external model integrated later this year. Apple says users will be able to access it without creating an account or paying for premium service. As for how the platform ties into the updated iOS 18 version of Siri, Federighi confirmed that the voice assistant will prompt users before going beyond Apple’s own internal models.

“Now you can accomplish this right through Siri, without grabbing any other tools,” the Apple executive said. “But it was important to us that Siri asks you first before going out to ChatGPT. Then you can have a conversation with ChatGPT, and if there’s any relevant data in your request that you want to share, we ask, ‘Would you like to send this photo?’ From a privacy standpoint, you always maintain control and have complete visibility.”
