

Generative AI is coming to healthcare, and not everyone is enthusiastic about it





Generative artificial intelligence (AI), which can create and analyze images, text, audio, video, and other forms of data, is increasingly making its way into healthcare, pushed by both big technology companies and startups.

Google Cloud is partnering with Highmark Health, a Pittsburgh-based nonprofit healthcare company, on generative AI tools designed to personalize the patient intake experience. Amazon's AWS division is working with unnamed customers on using generative AI to analyze medical records for "social determinants of health." And Microsoft Azure is helping Providence, a nonprofit healthcare network, build a generative AI system that automatically triages messages from patients and routes them to care providers.

Notable generative AI startups in healthcare include Ambience Healthcare, which is developing a generative AI application for medical professionals; Nabla, an ambient AI assistant for practitioners; and Abridge, which builds analytics tools for medical documentation.

The broad enthusiasm for generative AI shows up in the investments flowing into healthcare-focused generative AI efforts. Collectively, generative AI healthcare startups have raised tens of millions of dollars in venture capital so far, and the vast majority of health investors say generative AI has significantly influenced their investment strategies.

But both professionals and patients are divided over whether healthcare-focused generative AI is ready for widespread deployment.

People may not want generative AI
In a recent Deloitte survey, only 53% of U.S. consumers said they believed generative AI could improve healthcare, for example by making care more accessible or shortening appointment wait times. Fewer than half said they expected generative AI to make medical care more affordable.

Andrew Borkowski, chief AI officer at the VA Sunshine Healthcare Network, the U.S. Department of Veterans Affairs' largest health system, thinks the skepticism is warranted. Borkowski cautioned that deploying generative AI now may be premature, given the technology's significant limitations and the open questions about its effectiveness.

"One of the key issues with generative AI is its inability to handle complex medical queries or emergencies," he said. "Its finite knowledge base, which lacks up-to-date clinical information, and its lack of human expertise make it unsuitable for providing comprehensive medical advice or treatment recommendations."

Several studies suggest there is truth to those claims.

A study in the journal JAMA Pediatrics found that ChatGPT, OpenAI's AI chatbot, which some healthcare organizations have piloted for limited use cases, made errors diagnosing pediatric diseases 83% of the time. And in testing OpenAI's GPT-4 as a diagnostic assistant, physicians at Beth Israel Deaconess Medical Center in Boston observed that the model ranked the wrong diagnosis as its top answer in nearly two-thirds of cases.

Today's generative AI systems also struggle with the medical administrative tasks that are part of clinicians' daily workflows. On MedAlign, a benchmark that evaluates how well generative AI can summarize patient health records and search across notes, GPT-4 failed 35% of the time.

OpenAI and other generative AI vendors warn against relying on their models for medical advice. But Borkowski and others argue the vendors could do more. "Relying solely on generative AI for healthcare could lead to misdiagnoses, inappropriate treatments, or even life-threatening situations," Borkowski said.

Jan Egger, who leads AI-guided therapies at the University of Duisburg-Essen's Institute for AI in Medicine, which studies applications of emerging technology in patient care, shares Borkowski's concerns. He believes the only safe way to use generative AI in healthcare today is under the close supervision of a physician.

"The results can be completely wrong, and it's becoming harder and harder to stay aware of that," Egger said. "Sure, generative AI can be used, for example, for pre-writing discharge letters. But physicians have a responsibility to check it and make the final call."

Generative AI can perpetuate stereotypes
Generative AI in healthcare can be particularly harmful when it perpetuates stereotypes.

In a 2023 study, researchers at Stanford Medicine tested ChatGPT and other generative-AI-powered chatbots on questions about kidney function, lung capacity, and skin thickness. The co-authors found that ChatGPT was not only frequently wrong but also repeated long-debunked falsehoods about biological differences between Black and white people, untruths that have been known to lead medical practitioners to misdiagnose patients.

The irony is that the patients most likely to be discriminated against by generative AI in healthcare are also those most likely to use it.

The Deloitte survey found that people who lack healthcare coverage (largely people of color, according to a KFF study) are more willing to try generative AI for tasks like finding a doctor or getting mental health support. If the AI's recommendations are marred by bias, it could exacerbate inequities in treatment.

However, some experts argue that generative AI is improving in this regard.

In a study published in late 2023, Microsoft researchers reported achieving 90.2% accuracy on four challenging medical benchmarks using GPT-4. Vanilla GPT-4 could not reach that score on its own; the researchers used prompt engineering, crafting prompts that steer GPT-4 toward the desired outputs, to boost the model's score by as much as 16.2 percentage points. (Microsoft, it's worth noting, is a major investor in OpenAI.)
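To make the idea of prompt engineering concrete, here is a minimal, purely illustrative sketch of one common pattern on multiple-choice medical benchmarks: prepending worked examples (few-shot prompting) and asking the model to reason step by step before answering. The function name, example content, and prompt layout are all hypothetical, not taken from the Microsoft study.

```python
# Illustrative sketch of a few-shot, chain-of-thought prompt for a
# multiple-choice benchmark. All names and example content are made up.

def build_prompt(question: str, choices: list[str], examples: list[dict]) -> str:
    """Assemble a prompt with worked examples followed by the target question."""
    parts = []
    # Few-shot section: each example shows the question, options,
    # a short reasoning trace, and the correct answer letter.
    for ex in examples:
        parts.append(f"Question: {ex['question']}")
        for label, choice in zip("ABCD", ex["choices"]):
            parts.append(f"{label}. {choice}")
        parts.append(f"Reasoning: {ex['reasoning']}")
        parts.append(f"Answer: {ex['answer']}\n")
    # Target question, with an instruction nudging step-by-step reasoning.
    parts.append(f"Question: {question}")
    for label, choice in zip("ABCD", choices):
        parts.append(f"{label}. {choice}")
    parts.append("Let's think step by step before giving the final answer.")
    return "\n".join(parts)

example = {
    "question": "Which vitamin deficiency causes scurvy?",
    "choices": ["Vitamin A", "Vitamin B12", "Vitamin C", "Vitamin D"],
    "reasoning": "Scurvy results from impaired collagen synthesis, "
                 "which depends on ascorbic acid.",
    "answer": "C",
}

prompt = build_prompt(
    "Which electrolyte disturbance is most associated with U waves on an ECG?",
    ["Hyperkalemia", "Hypokalemia", "Hypercalcemia", "Hyponatremia"],
    [example],
)
print(prompt)
```

The assembled string would then be sent to the model; techniques like these change only the input, not the model itself, which is why the researchers could lift a "vanilla" model's benchmark score without retraining it.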

Beyond chatbots
But querying a chatbot isn't the only thing generative AI is good for. Some researchers argue that medical imaging could benefit greatly from the technology.

In July, a group of scientists published a paper in Nature on a method called complementarity-driven deferral to clinical workflow (CoDoC), which is designed to figure out when medical imaging specialists should rely on AI for diagnoses rather than traditional techniques. According to the co-authors, CoDoC outperformed specialists while reducing clinical workflows by 66%.

In November, a Chinese research team demonstrated Panda, an AI model designed to detect potential pancreatic lesions in X-rays. A study showed Panda to be highly accurate in classifying these lesions, which are often caught too late for surgical intervention.

Arun Thirunavukarasu, a clinical research fellow at the University of Oxford, said nothing about generative AI in particular precludes its deployment in healthcare settings.

"There are practical short- and mid-term applications for generative AI technology, such as text correction, automatic documentation of notes and letters, and improved search features to optimize electronic patient records," he said. "There's no reason why generative AI technology, if effective, couldn't be deployed in these sorts of roles immediately."

Rigorous science
But while generative AI shows promise in narrow areas of medicine, experts like Borkowski point to technical and compliance hurdles that must be cleared before generative AI can be useful, and trusted, as an all-around assistive healthcare tool.

"The use of generative AI in healthcare raises significant privacy and security concerns," Borkowski said. "The sensitive nature of medical data and the potential for misuse or unauthorized access pose severe risks to patient confidentiality and trust in the healthcare system. Additionally, the regulatory and legal landscape surrounding generative AI in healthcare is still evolving, with questions about liability, data protection, and the practice of medicine by non-human entities still unresolved."

Even Thirunavukarasu, optimistic as he is about generative AI in healthcare, stresses that patient-facing tools need a rigorous scientific foundation.

"Pragmatic randomized controlled trials demonstrating clinical benefit are needed to justify deploying patient-facing generative AI, particularly where there will be no direct clinician oversight," he said. "Proper governance going forward is essential to capture any unanticipated harms following deployment at scale."

The World Health Organization recently released guidelines advocating for rigorous science and human oversight in healthcare generative AI, along with audits, transparency, and impact assessments conducted by independent third parties. The WHO's stated goal is for a diverse cohort of people to participate in the development of healthcare generative AI and have a chance to raise concerns and provide input at every stage of the process.

"Until the concerns are adequately addressed and appropriate safeguards are put in place," Borkowski said, "the widespread adoption of medical generative AI could potentially harm patients and the healthcare industry as a whole."

As Editor here at GeekReply, I'm a big fan of all things Geeky. Most of my contributions to the site are technology related, but I'm also a big fan of video games. My genres of choice include RPGs, MMOs, Grand Strategy, and Simulation. If I'm not chasing after the latest gear on my MMO of choice, I'm here at GeekReply reporting on the latest in Geek culture.


Threads finally launches its own fact-checking program





Meta's newest social network, Threads, is launching its own fact-checking program after piggybacking on Instagram and Facebook's fact-checking networks for a time.

Adam Mosseri, the head of Instagram, said the company recently rolled out a feature that lets fact-checkers review and label false content on Threads. However, Mosseri didn't specify when the program took effect or whether it is limited to certain regions.

It is not clear which of Meta's fact-checking partners are working with Threads. We have asked the company for more details and will update the story if we hear back.

The upcoming U.S. elections appear to be the main driver behind the move. India is currently in the middle of its general elections, but it would be unusual for a social network to roll out a fact-checking program mid-cycle rather than before the elections begin.

In December, Meta announced its intention to implement the fact-checking program on Threads.

"Currently, we match fact-check ratings from Facebook or Instagram to Threads, but our goal is for fact-checking partners to have the ability to review and rate misinformation on the app," Mosseri said in a post at the time.



Mark Zuckerberg says Threads now has 150 million monthly active users





Threads, Meta's rival to X (formerly Twitter), is growing steadily. On the company's Q1 2024 earnings call, Mark Zuckerberg said the social network now has more than 150 million monthly active users, up from 130 million in February.

Since the last quarterly earnings call, Threads has made significant progress integrating with ActivityPub, the decentralized protocol that powers networks like Mastodon. In March, the company let U.S.-based users 18 and older connect their accounts to the fediverse, making their posts visible on other servers.

By June, the company intends to open its API to a broad range of developers, letting them build experiences around the social network. It remains unclear, however, whether Threads will allow developers to build full third-party clients.

Meta recently introduced its AI chatbot across platforms including Facebook, Messenger, WhatsApp, and Instagram. Threads was conspicuously absent from that list, possibly because it lacks built-in direct messaging.

On Wednesday, Threads began testing a feature that automatically archives users' posts after a set period. Users can also manually archive or unarchive individual posts to control whether they are publicly visible.

At around nine months old, Threads has steadily grown its user base. It isn't a full substitute for X, however; Instagram head Adam Mosseri said in October that Threads would not "amplify news on the platform." Still, Meta's social network keeps gaining ground: according to app analytics firm Apptopia, Threads now has more daily active users in the U.S. than X, as Business Insider reported earlier this week.



TikTok Shop brings secondhand luxury fashion to the U.K.





TikTok Shop, TikTok's social commerce marketplace, is launching a new category dedicated to secondhand luxury goods in the United Kingdom, putting it in direct competition with established platforms like The RealReal, Vestiaire Collective, Depop, Poshmark, and Mercari. The offering has been available on TikTok Shop in the U.S. for more than six months.

The new category lets customers in the U.K. buy pre-owned luxury clothing, designer handbags, and other accessories directly within the TikTok app. At launch, the selection is limited to five British brands: Sellier, Luxe Collective, Sign of the Times, HardlyEverWornIt, and Break Archive.

Since launching in 2022, TikTok Shop has generated around $1 billion in merchandise sales. Despite that success, some argue TikTok Shop is dragging down the short-form video platform, alleging that counterfeit and low-quality goods are flooding the marketplace. Buying secondhand luxury goods online carries an especially high risk of counterfeits, a problem even major e-commerce players like Amazon and eBay struggle to contain.

Like other resale marketplaces, TikTok Shop has an anti-counterfeit policy that promises a full refund if a seller is confirmed to have sold a fake item. Bloomberg has reported that the company is in talks with luxury goods conglomerate LVMH to step up its anti-counterfeiting efforts.

In the U.S., every secondhand brand on TikTok Shop must hold certificates from third-party authenticators. TikTok has partnered with authentication providers Entrupy and Real Authentication to verify designer handbags sold on the platform.

Meanwhile, a TikTok spokesperson told me that the five British brands each have their own in-house authentication process. The company declined to say when it will begin accepting secondhand sellers beyond these five.

TikTok Shop's new used-luxury category is a calculated move to tap the expanding market for pre-owned high-end goods: global sales of secondhand designer items were valued at roughly $49.3 billion (€45 billion) in 2023.

The expansion also aligns with consumers' growing embrace of preloved fashion and opens up a wider customer base for secondhand brands in the U.K. Secondhand fashion's popularity on TikTok is evident: more than 144,000 posts use the hashtag #secondhandfashion, amassing nearly 1.2 billion views.

Today's announcement comes shortly after the U.S. House of Representatives passed a bill that would force ByteDance to sell TikTok or face a U.S. ban, and the bill appears to be gaining traction in the Senate. A ban would significantly affect American merchants who sell on the app; according to the company, the short-form video platform generated $14.7 billion for small- and mid-size businesses in 2023.
