
Artificial Intelligence

A.I. Accurately Detected Cancer 86% of the Time.


The fight against cancer continues to benefit from advances in technology, and researchers in Yokohama, Japan appear to have made a notable new development in cancer research: an artificial intelligence system that could help detect colorectal cancer before benign tumors turn malignant.

The system works by examining a colorectal polyp magnified 500 times to spot its variations, then cross-referencing those variations against a database of more than 30,000 images of pre-cancerous and cancerous cells that were used to train the machine-learning program.
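To make that description concrete, here is a minimal, hypothetical sketch of the kind of inference step involved: a convolutional image classifier scoring a magnified polyp image as neoplastic or non-neoplastic. The backbone, labels, and input size are illustrative assumptions only and do not reflect the actual Showa University system.

import torch
import torch.nn.functional as F
from torchvision import models

LABELS = ["non-neoplastic", "neoplastic"]  # assumed two-class output

# Stand-in backbone with an untrained two-class head. The real system was
# trained on roughly 30,000 endocytoscopy images, so its architecture and
# weights would differ; this only sketches the shape of the inference step.
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, len(LABELS))
model.eval()

# Random tensor standing in for a 500x-magnified polyp image (1 x 3 x 224 x 224).
image = torch.rand(1, 3, 224, 224)

with torch.no_grad():
    probs = F.softmax(model(image), dim=1)[0]

idx = int(probs.argmax())
print(f"Predicted: {LABELS[idx]} ({float(probs[idx]):.1%})")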

With that training behind it, the AI can assess a polyp in under a second. This is the first time a system of this kind has been used for this specific purpose with this sort of training, and the results are impressive: an overall accuracy of 86 percent.

“Overall, 306 polyps were assessed real-time by using the AI-assisted system, providing a sensitivity of 94 percent, specificity of 79 percent, accuracy of 86 percent, and positive and negative predictive values of 79 percent and 93 percent respectively, in identifying neoplastic changes,” said the project's lead researcher, Dr. Yuichi Mori.
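For readers less familiar with these terms, the short sketch below shows how sensitivity, specificity, accuracy, and the predictive values are computed from a confusion matrix. The counts are hypothetical, chosen only so the rounded results land near the reported percentages; they are not the study's raw data.

# Hypothetical confusion matrix for a two-class polyp assessment.
# These counts are illustrative only, NOT the study's raw data; they are
# chosen so the rounded metrics roughly match the reported figures.
tp, fn = 134, 9    # neoplastic polyps correctly / incorrectly flagged
tn, fp = 128, 35   # non-neoplastic polyps correctly / incorrectly cleared

total = tp + fn + tn + fp             # 306 polyps in the reported series

sensitivity = tp / (tp + fn)          # true positive rate
specificity = tn / (tn + fp)          # true negative rate
accuracy    = (tp + tn) / total
ppv         = tp / (tp + fp)          # positive predictive value
npv         = tn / (tn + fn)          # negative predictive value

print(f"n={total}  sensitivity={sensitivity:.0%}  specificity={specificity:.0%}  "
      f"accuracy={accuracy:.0%}  PPV={ppv:.0%}  NPV={npv:.0%}")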

Dr. Mori is based at Showa University in Japan, and the study was presented at the United European Gastroenterology Conference in Barcelona, another demonstration of how artificial intelligence has been advancing the medical field in recent years.

It's worth reminding readers that colorectal cancer is the second deadliest form of cancer, behind only lung cancer. The reason is that in its later stages, cancerous cells can pass through the thin tissue of the colon, rectum, and intestine directly into the bloodstream, allowing the disease to spread much more quickly.

“We believe these results are acceptable for clinical application and our immediate goal is to obtain regulatory approval for the diagnostic system,” he said in a statement to Inverse. Because this work has the potential to push survival rates even higher, the sooner it is put to use refining the AI's capabilities, the better.


Artificial Intelligence

Meta’s latest innovation allows users to seamlessly share images from their Ray-Ban smart glasses to their Instagram Story


Meta has just announced the addition of new hands-free functionality to its Ray-Ban smart glasses. One exciting feature is the ability to effortlessly share images from smart glasses to Instagram Stories, eliminating the need to use a phone.

Once you’ve captured a photo using the smart glasses, simply command, “Hey Meta, share my most recent photo on Instagram.” Alternatively, you can instruct Meta to post a photo to Instagram by saying, “Hey Meta, take a new photo in the moment.”

The introduction of the new feature brings to mind the Snap Spectacles, which were first released in 2016. These smart glasses enabled users to effortlessly capture photos and videos, which could then be shared directly to their Snapchat Stories.

Meta’s Ray-Ban smart glasses now have seamless integrations with Amazon Music and the meditation app Calm, allowing for a truly hands-free experience.

Streaming music from Amazon Music is now easier than ever. Simply say “Hey Meta, play Amazon Music” and enjoy your favorite tunes without reaching for your phone. With touch or voice controls, you have the convenience of controlling your audio playback while keeping your phone safely tucked away in your pocket.

Users can now access the new hands-free Calm integration by simply saying “Hey Meta, play the Daily Calm” on their smart glasses. This allows for easy access to mindfulness exercises and self-care content.

Furthermore, Meta is broadening its range of styles, now available in 15 countries, including the U.S., Canada, Australia, and various parts of Europe. The expansion features the Skyler style in Shiny Chalky Gray with Gradient Cinnamon Pink Lenses, Skyler in Shiny Black with Transitions Cerulean Blue Lenses, and Headliner Low Bridge Fit in Shiny Black with Polar G15 Lenses. You can find the glasses on both Meta’s and Ray-Ban’s websites.

The introduction of the new features follows the recent AI enhancement of the smart glasses, which occurred just a month ago. Meta has introduced a cutting-edge multimodal AI feature for their smart glasses, allowing users to effortlessly inquire about their surroundings. For example, if you come across a menu in French, the smart glasses can utilize their integrated camera and Meta AI to provide real-time text translation.

The concept behind the launch is to enable the smart glasses to function as a personal AI assistant beyond your smartphone, resembling Humane’s Ai pin.


Artificial Intelligence

Elon Musk’s artificial intelligence company, xAI, has secured $6 billion in funding from Valor, a16z, and Sequoia


Elon Musk’s AI startup, xAI, announced today that it has secured an impressive $6 billion in a recent funding round. This substantial investment positions xAI as a formidable player in the rapidly growing AI industry, allowing it to confidently challenge competitors such as OpenAI, Microsoft, and Google.

Valor Equity Partners, Vy Capital, Andreessen Horowitz, Sequoia Capital, Fidelity, Prince Alwaleed Bin Talal and Kingdom Holding are some of the investors who have contributed to xAI’s Series B funding, as mentioned in a blog post by the startup.

The funding confirms an April report that xAI intended to raise $6 billion. At the time, xAI was reported to be finalizing a round that would value the company at $18 billion. xAI is a relatively new company that emerged from the social network X, and it remains unclear whether X itself has also invested in the venture.

Musk confirmed that the round valued the company at $18 billion pre-money, that is, before the new funding was added.

Musk is an AI industry pioneer, known for his early involvement and prominent presence in the field. Tesla, the car company he leads, is the leading manufacturer of electric vehicles with advanced self-driving technologies, and he co-founded OpenAI, a startup he has backed with significant financial contributions. His enthusiasm for OpenAI has cooled recently, however. In March, he filed a lawsuit against OpenAI and its co-founder Sam Altman, claiming that they had strayed from their original mission and had essentially become a closed-source subsidiary of Microsoft. He has also alleged that Google has coded bias into its AI products.

After founding xAI, Musk unveiled Grok 1.0, a chatbot that competes with ChatGPT, in November. The company then made the model available to Premium+ subscribers on X, who pay $16 a month. In April, xAI released the newer Grok 1.5 model, extended access to Premium subscribers on X, and showcased Grok's multimodal capabilities. Earlier this year, the company also open-sourced the Grok model, though without any training code.

xAI intends to utilize the funds raised in the recent financing round to bring its initial range of products to the market, establish cutting-edge infrastructure, and expedite the research and development of upcoming technologies, as stated in the blog post. The company may seek partnerships to expand the reach of Grok to users outside of X.


Artificial Intelligence

Meta’s newly formed AI advisory council consists entirely of white men


Meta recently announced the formation of an AI advisory council composed entirely of white men. What else could we possibly expect? For years, women and people of color have been voicing their concerns about being overlooked and marginalized in the field of artificial intelligence, despite their qualifications and significant contributions to its development.

Meta did not immediately respond to our inquiry about the diversity of the advisory board.

This new advisory board differs in composition from Meta’s actual board of directors and its Oversight Board, both of which prioritize diversity in gender and racial representation. Shareholders did not elect the AI board, and it has no fiduciary duty. Meta told Bloomberg that the board would provide insights and recommendations on technological advancements, innovation, and strategic growth opportunities, and that it would meet on a regular basis.

It’s notable that the AI advisory council consists solely of businesspeople and entrepreneurs, not ethicists or anyone with an academic or deep research background. While the executives from Stripe, Shopify, and Microsoft certainly have experience bringing numerous products to market, AI is a different, more complex undertaking that calls for specialized expertise. It’s a high-stakes endeavor with potentially far-reaching consequences, especially for marginalized communities.

Sarah Myers West, managing director of the AI Now Institute, a nonprofit that studies the social effects of AI, told me that it’s important to “critically examine” the companies that are making AI to “make sure the public’s needs are served.”

“This technology makes mistakes a lot of the time, and we know from our own research that those mistakes hurt communities that have been discriminated against for a long time more than others,” she said. “We should set a very, very high bar.”

AI’s harms fall on women far more often than on men. In 2019, Sensity AI found that 96% of AI deepfake videos online were non-consensual sexually explicit videos. Since then, generative AI has spread widely, and women are still the ones bearing the brunt of it.

In a notable incident in January, explicit deepfake videos of Taylor Swift, created without her consent, spread widely on X; one post garnered hundreds of thousands of likes and 45 million views. Social platforms such as X have historically failed to protect women from these situations, but because Taylor Swift is one of the most influential women in the world, X responded by blocking search terms like “taylor swift ai” and “taylor swift deepfake.”

But if this happens to you and you are not a global superstar, you may well be out of luck. There are plenty of reports documenting middle school and high school students creating explicit deepfakes of their classmates. While this technology has existed for some time, it has become increasingly accessible: one no longer needs advanced technical skills to download apps explicitly marketed for removing clothing from photos of women or swapping their faces into pornographic content. According to NBC reporter Kat Tenbarge, Facebook and Instagram displayed advertisements for an app called Perky AI, which described itself as a tool for creating explicit images.

Two of the advertisements, which purportedly evaded Meta’s detection until Tenbarge brought the matter to the company’s attention, featured images of celebrities Sabrina Carpenter and Jenna Ortega with their bodies intentionally obscured, encouraging users to prompt the application to digitally remove their clothing. The advertisements featured a photograph of Ortega taken when she was only 16 years old.

Allowing Perky AI to advertise was not an isolated incident, either. Meta’s Oversight Board has opened investigations into the company’s mishandling of reports of AI-generated sexually explicit content.

It is crucial to include the perspectives of women and people of color in the development of artificial intelligence products. Historically, marginalized groups have been systematically excluded from participating in the creation of groundbreaking technologies and research, leading to catastrophic outcomes.

A clear illustration is the historical exclusion of women from clinical trials until the 1970s, which meant entire fields of research developed without considering how treatments would affect women. A 2019 study by the Georgia Institute of Technology found that black people in particular bear the consequences of technology not designed with them in mind: self-driving cars, for instance, are more likely to hit them because their sensors have a harder time detecting darker skin.

Algorithms trained on biased data simply reproduce the prejudices humans have built into them. We are already seeing AI systems perpetuate and intensify racial discrimination in areas such as employment, housing, and criminal justice. Voice assistants struggle to understand diverse accents and often flag the work of non-native English speakers as AI-generated because, as Axios has noted, English is AI’s primary language. And facial recognition systems flag black people as possible matches for criminal suspects more often than white people.

The present advancement of AI reflects the same power structures around class, race, gender, and Eurocentrism that are evident in other domains, and leaders do not appear to be paying enough attention; if anything, they are reinforcing it. Investors, founders, and tech leaders are so fixated on rapid progress and disruptive innovation that they fail to grasp the potential negative consequences of generative AI, currently the most popular AI technology. According to a McKinsey report, AI has the potential to automate around 50% of jobs that do not require a four-year college degree and pay over $42,000 annually, jobs that are more commonly held by minority workers.

It is reasonable to question whether a team composed entirely of white men at one of the world’s most influential tech companies, racing to build AI that it says will save the world, can advise on products meant for everyone when they represent only a narrow demographic. Building technology that works for every single person will take a massive, deliberate effort. The work of constructing AI systems that are safe and inclusive, including research and understanding at an intersectional societal level, is so complex that it is apparent this advisory board will not help Meta get there. Where Meta falls short, another startup has the potential to emerge.
