Apps
Generative AI is coming to healthcare, and not everyone is enthusiastic about it
Generative artificial intelligence (AI), capable of producing and analyzing images, text, audio, videos, and other forms of data, is becoming more prevalent in the healthcare industry, with both large technology companies and startups driving the trend.
Google Cloud is partnering with Highmark Health, a nonprofit healthcare company based in Pittsburgh, to develop generative AI tools that aim to customize the patient intake process. Amazon’s AWS division is collaborating with undisclosed clients to explore the application of generative AI in analyzing medical records for “social determinants of health.” Microsoft Azure is assisting Providence, a non-profit healthcare network, in constructing a generative AI system that can automatically prioritize and assign messages from patients to care professionals.
Notable generative AI startups in healthcare include Ambience Healthcare, which is developing a generative AI application for clinicians; Nabla, an ambient AI assistant for practitioners; and Abridge, which builds analytics tools for medical documentation.
Investment reflects the broad interest: healthcare businesses working with generative AI have collectively raised tens of millions of dollars in venture capital so far, and the vast majority of health investors say generative AI has significantly influenced their investment strategies.
But experts and patients alike are divided on whether healthcare-focused generative AI is ready for widespread deployment.
People may not want generative AI
In a recent Deloitte survey, only 53% of U.S. consumers said they believe generative AI can improve healthcare, for example by making it more accessible or shortening appointment wait times. Fewer than half said they expect generative AI to make medical care more affordable.
Andrew Borkowski, the chief AI officer of the VA Sunshine Healthcare Network, which is the largest health system of the U.S. Department of Veterans Affairs, believes that the pessimism is justified. Borkowski cautioned that the implementation of generative AI may be premature due to its substantial limitations and worries over its effectiveness.
“One of the major problems with generative AI is its inability to handle complex medical questions or emergencies,” he said. “Its limited knowledge base, which lacks up-to-date clinical information, and its lack of human expertise make it unsuitable for providing comprehensive medical advice or treatment recommendations.”
Multiple studies suggest those concerns are warranted
A study in the journal JAMA Pediatrics found that ChatGPT, OpenAI's AI chatbot, which some healthcare organizations have piloted for limited use cases, made errors in 83% of pediatric diagnoses. And when physicians at Beth Israel Deaconess Medical Center in Boston evaluated OpenAI's GPT-4 as a diagnostic assistant, the model got the top diagnosis wrong in almost two-thirds of cases.
Today's generative AI systems also struggle with the medical administrative tasks that are part of clinicians' everyday workflows. On MedAlign, a benchmark that measures how well generative AI can summarize patient health records and search through notes, GPT-4 failed 35% of the time.
OpenAI and other generative AI vendors warn against relying on their models for medical advice. But Borkowski and others say the vendors could do more. “Relying exclusively on generative AI for healthcare could lead to misdiagnoses, inappropriate treatments, or even life-threatening situations,” Borkowski said.
Jan Egger, who leads AI-guided therapies at the University of Duisburg-Essen's Institute for AI in Medicine, which studies applications of emerging technology in patient care, shares Borkowski's concerns. He says the only safe way to use generative AI in healthcare today is under the close supervision of a physician.
“The results can be highly inaccurate, and it is becoming increasingly difficult to stay aware of this,” Egger said. Generative AI can certainly be used, he added, for example to pre-write discharge letters, but it is physicians' responsibility to check the output and make the final call.
Generative AI can perpetuate stereotypes
Generative AI in healthcare can be particularly harmful when it perpetuates stereotypes.
In a 2023 study, researchers at Stanford Medicine tested ChatGPT and other generative AI-powered chatbots on questions about kidney function, lung capacity, and skin thickness. The co-authors found that ChatGPT not only gave wrong answers but also repeated long-standing false beliefs about biological differences between Black and white people, falsehoods that have led medical professionals to misdiagnose patients.
The irony is that the patients most likely to be discriminated against by generative AI in healthcare are also the ones most likely to use it.
The Deloitte survey found that people without healthcare coverage, who are disproportionately people of color according to a KFF study, are more willing to try generative AI for tasks such as finding a doctor or getting mental health support. If the AI's recommendations are marred by bias, that could worsen disparities in treatment.
Nevertheless, certain experts contend that generative AI is making progress in addressing this issue.
In a study published in late 2023, Microsoft researchers reported reaching 90.2% accuracy with GPT-4 on four challenging medical benchmarks. Vanilla GPT-4 could not reach that score on its own; rather, the researchers say that prompt engineering, crafting specific prompts to steer GPT-4 toward the desired output, lifted the model's score by as much as 16.2 percentage points. (It is worth mentioning that Microsoft is a significant investor in OpenAI.)
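To give a rough sense of what prompt engineering can mean in practice, a prompt might combine an instruction, a few worked examples, and a request for step-by-step reasoning before the final answer. The sketch below is a minimal, hypothetical illustration of that idea in Kotlin; the function name and exemplar format are assumptions for the example, not the Microsoft researchers' actual method.

    // Hypothetical sketch: compose a few-shot, step-by-step style prompt.
    // The wording and exemplar structure are illustrative only.
    fun buildMedicalPrompt(question: String, exemplars: List<Pair<String, String>>): String =
        buildString {
            appendLine("You are a careful clinical reasoning assistant.")
            appendLine("Work through each question step by step before giving a final answer.")
            appendLine()
            for ((exampleQuestion, workedAnswer) in exemplars) {
                appendLine("Question: $exampleQuestion")
                appendLine("Answer: $workedAnswer")
                appendLine()
            }
            appendLine("Question: $question")
            append("Answer:")
        }

The point is simply that how a question is framed, and what context accompanies it, can measurably shift a model's accuracy.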
Expanding beyond the capabilities of chatbots
Chatbots are not the only place where generative AI might prove useful, though. Several researchers argue that medical imaging could benefit greatly from the technology.
In a paper published in Nature in July, a group of scientists described a method called complementarity-driven deferral to clinical workflow (CoDoC), which is designed to work out when medical imaging specialists should rely on AI for a diagnosis and when they should use conventional techniques. According to the co-authors, CoDoC performed better than specialists while reducing clinical workflows by 66%.
In November, a Chinese research team demoed Panda, an AI model designed to detect potential pancreatic lesions in X-rays. A study showed Panda to be highly accurate in classifying these lesions, which are often caught too late for surgical intervention.
Arun Thirunavukarasu, a clinical research fellow at the University of Oxford, said there is nothing about generative AI that inherently prevents it from being deployed in healthcare settings.
“There are practical uses for generative AI technology in the short and medium term, such as text correction, automatic documentation of notes and letters, and improved search features to optimize electronic patient records,” he said. “If generative AI technology is effective, there is no reason not to deploy it in these kinds of roles straight away.”
Rigorous science is needed
However, although generative AI demonstrates potential in some limited domains of medicine, experts such as Borkowski highlight the technological and compliance obstacles that need to be addressed before generative AI can be effectively utilized and relied upon as a comprehensive supportive healthcare tool.
“The utilization of generative AI in healthcare raises substantial concerns regarding privacy and security,” Borkowski said. The sensitivity of medical data and the potential for misuse or unauthorized access pose serious risks to patient confidentiality and trust in the healthcare system, he added. Moreover, the regulatory and legal framework for generative AI in healthcare is still evolving, with questions about liability, data protection, and the practice of medicine by non-human entities yet to be resolved.
Thirunavukarasu, who is quite optimistic about the use of generative AI in healthcare, emphasizes the importance of having a strong scientific foundation for patient-facing solutions.
“Because there will not be any direct clinician oversight, there need to be pragmatic randomized controlled trials demonstrating clinical benefit to justify the deployment of patient-facing generative AI,” he said. “It is crucial to have effective governance in place to address any unforeseen negative consequences that may arise after widespread implementation.”
The World Health Organization recently released guidelines that call for rigorous science and human oversight of generative AI in healthcare, along with audits, transparency, and impact assessments by independent third parties. The WHO's guidelines also call for diverse groups of people to be involved in developing generative AI for healthcare, with opportunities to raise concerns and provide input throughout the process.
“Unless the concerns are adequately addressed and appropriate safeguards are put in place,” Borkowski said, “the widespread adoption of generative AI in medicine could be harmful to patients and the healthcare industry as a whole.”
Android
Google Chrome now has a ‘picture-in-picture’ feature
Google is preparing a notable change to how its Chrome browser works on Android, as new browsers from startups like Arc make the market more competitive. The company said on Wednesday that it is adding a feature called “Minimized Custom Tabs” that lets users tap to switch between a native app and their web content. When they do, the Custom Tab shrinks into a small window that floats above the native app's content.
The feature builds on Custom Tabs, an Android browser capability that lets app developers offer a customized browser experience directly inside their app, so users don't have to open a separate browser or a WebView, which doesn't support all of the web platform's features. Because Custom Tabs let users keep browsing without leaving the app, they help developers keep users engaged instead of losing them to an external browser.
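For context, launching a Custom Tab from an app today takes only a few lines using the androidx.browser library; the new minimized behavior is then handled by Chrome itself rather than by app code. Below is a minimal sketch, with the function name and URL chosen only for illustration:

    import android.content.Context
    import android.net.Uri
    import androidx.browser.customtabs.CustomTabsIntent

    // Open a URL in a Custom Tab so the user stays inside the host app.
    // On Chrome M124 and later, the user can minimize this tab into a
    // floating picture-in-picture style window.
    fun openInCustomTab(context: Context, url: String) {
        val customTabsIntent = CustomTabsIntent.Builder()
            .setShowTitle(true) // display the page title in the toolbar
            .build()
        customTabsIntent.launchUrl(context, Uri.parse(url))
    }

    // Example call from an Activity (hypothetical URL):
    // openInCustomTab(this, "https://example.com/subscribe")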
Turning the Custom Tab into a picture-in-picture window could make switching to web content feel more natural, as if the user never left the native app. Developers who send customers to a website to sign up for accounts or subscriptions may also find the change useful, since it makes it easier for users to move back and forth between the website and the native app.
Once shrunk into the picture-in-picture window, the Custom Tab can be pushed to the side of the screen. When the page is full screen, users can tap a down arrow to return it to the picture-in-picture window.
The new web experience arrives as Google works to make it easier for Android users to reach the web, including through AI-powered features such as Circle to Search and other integrations that let people circle or highlight items on screen to search for them.
The change is coming to the newest version of Chrome (M124), and developers who already use Chrome's Custom Tabs will get it automatically. Google says the change only affects Chrome, but it hopes other browser makers will adopt similar behavior.
Apps
Threads finally launches its own fact-checking program
Meta’s latest social network, Threads, is launching its own fact-checking initiative after leveraging Instagram and Facebook’s networks for a brief period.
Adam Mosseri, the head of Instagram, said that the company has recently rolled out a feature that lets fact-checkers review and label false content on Threads. However, Mosseri did not specify exactly when the program launched or whether it is limited to certain regions.
It is not clear which of Meta's fact-checking partner organizations are working with Threads. We have asked the company for more information and will update the story if we hear back.
The upcoming U.S. elections appear to be the main driving force behind the decision. India is currently in the midst of its general election, but it is unlikely that a social network would roll out a fact-checking program in the middle of an election cycle rather than starting it before the polls opened.
In December, Meta announced its intention to implement the fact-checking program on Threads.
“At present, we align the fact-check ratings from Facebook or Instagram with Threads. However, our objective is to empower fact-checking partners to evaluate and assign ratings to misinformation on the application,” Mosseri stated in a post during that period.
Apps
Mark Zuckerberg says Threads now has 150 million monthly active users
Threads, Meta's alternative to X (formerly Twitter), is growing steadily. On the company's Q1 2024 earnings call, Mark Zuckerberg said the social network now has more than 150 million monthly active users, up from 130 million in February.
Since the last quarterly earnings call, Threads has made significant progress integrating with ActivityPub, the decentralized protocol that powers networks such as Mastodon. In March, the company gave U.S.-based users who are 18 or older the ability to connect their accounts to the fediverse, so their posts can be seen on other servers.
By June, the business intends to make its API available to a broad range of developers, enabling them to create experiences centered on the social network. Nevertheless, it remains uncertain whether Threads will enable developers to create comprehensive third-party clients.
Meta just rolled out its AI chatbot across platforms including Facebook, Messenger, WhatsApp, and Instagram. Threads was conspicuously missing from that list, perhaps because it lacks built-in direct messaging.
On Wednesday, Threads introduced a new test feature that automatically archives users' posts after a set period of time. Users can also add individual posts to the archive or remove them from it, making those posts publicly visible again.
Threads is around nine months old, and Meta has steadily grown its user base. Threads is not a drop-in replacement for X, though: Instagram head Adam Mosseri said explicitly in October that Threads would not “amplify news on the platform.” Still, Meta's social network continues to gain ground. According to app analytics company Apptopia, Threads now has more daily active users in the U.S. than X, as Business Insider reported earlier this week.