
Lung cancer is a huge killer worldwide. In the United States there are around 225,000 new cases every year, and it is responsible for almost a quarter of all cancer deaths. The problem is even worse in China, where lung cancer causes approximately 600,000 deaths every year. That burden is a big part of why the Chinese startup Infervision has created an AI to detect lung cancer.

Infervision offers three separate tools. The Intelligent CT Assisted Diagnosis (AI-CT) is designed to assist in early-stage lung cancer screening: it detects and highlights potential cancer nodules on a series of CT images, letting doctors identify them more easily. The goal is to improve early diagnosis of lung cancer and give patients a better chance of survival.
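Infervision has not published the architecture behind AI-CT, but highlighting suspicious regions on CT slices with a convolutional network is a well-established pattern, and a rough sketch helps make the workflow concrete. The Python sketch below is a minimal, hypothetical illustration, not Infervision's system: TinyNoduleNet stands in for a real trained detector, and the threshold and data shapes are assumptions for demonstration only.

```python
# Hypothetical sketch of CNN-based nodule highlighting on CT slices.
# TinyNoduleNet is a stand-in for a real trained detector; it is NOT
# Infervision's model, and the threshold is an arbitrary placeholder.
import numpy as np
import torch
import torch.nn as nn

class TinyNoduleNet(nn.Module):
    """Maps a CT slice to a per-pixel probability of nodule-like tissue."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

def highlight_nodules(volume: np.ndarray, threshold: float = 0.8):
    """Yield (slice index, binary mask) for slices with suspicious pixels."""
    model = TinyNoduleNet().eval()  # in practice: load trained weights here
    with torch.no_grad():
        for i, ct_slice in enumerate(volume):                   # volume: (slices, H, W)
            x = torch.from_numpy(ct_slice).float()[None, None]  # -> (1, 1, H, W)
            prob = model(x)[0, 0].numpy()
            mask = prob > threshold
            if mask.any():
                yield i, mask  # a radiologist reviews the flagged slice

# Example with a fake 4-slice, 64x64-pixel scan.
for idx, mask in highlight_nodules(np.random.rand(4, 64, 64)):
    print(f"slice {idx}: {mask.sum()} suspicious pixels")
```

In a production viewer, the mask would be overlaid on the slice for the radiologist rather than printed.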

The Intelligent X-ray Assisted Diagnosis (AI-DR) is designed to help radiologists identify lesions in the lung. The AI can detect more than 20 different kinds of lesions and has even spotted ones that radiologists missed. AI-DR is meant to take on the difficult, time-consuming screening work and free radiologists to focus on more complex problems.

The final tool in Infervision's arsenal is the AI Scholar, an intelligent deep learning research platform designed to let doctors with no computing background take advantage of complex learning algorithms. The AI Scholar can process more than 100 images at once and gives doctors access to new tools.

Image: Air pollution and heavy smoking in China are expected to raise the number of lung cancer cases to 800,000 per year by 2020. Credit: Latin Times

Why is Infervision pushing so hard to build an AI to detect lung cancer? The company hopes to address the growing problem that lung cancer presents in China. Massive air pollution is driving more and more cases, and doctors are hard pressed to cope. Lung cancer is particularly difficult to treat unless it is caught very early, and in many parts of the world the resources for proper screening simply aren't there.

Infervision hopes that an AI to detect lung cancer can relieve some of the strain on doctors by handing the repetitive, time-consuming work of identifying cancer to software that assists radiologists in detection. While lung cancer is an especially acute problem in China, early detection could free up precious medical resources in other countries as well, and using AI to detect lung cancer and other forms of cancer is a step in the right direction.

Infervision has demonstrated the value of AI in assisting medical professionals, processing roughly 100,000 CT scans and 100,000 X-rays over the last year, which is phenomenal. Infervision's efforts show that the automation revolution can be positive, and the hope is that the company will extend its deep learning methods to other areas of medicine.



Google fully embraces generative AI at its Google Cloud Next event


Some 30,000 people gathered in Las Vegas this week to hear the latest from Google Cloud, and what they heard was generative AI, nearly nonstop. Google Cloud is primarily a vendor of cloud infrastructure and platform services, but if you didn't know that already, you could easily have missed it amid the flood of AI news.

Google put on an impressive show, but it was notable that, much like Salesforce at its New York City event last year, the company said little about its core business except in the context of generative AI.

Google unveiled a range of AI enhancements designed to help customers take advantage of the Gemini large language model (LLM) and improve productivity across the platform. It is a commendable goal, and throughout the Day 1 main keynote and the subsequent Developer Keynote, Google packed in demo after demo to illustrate what these solutions can do.

But many of the demos were overly simplistic, even allowing for the time constraints of a keynote. They also leaned heavily on use cases inside the Google ecosystem, even though most companies keep a significant portion of their data in repositories unaffiliated with Google.

Some of the examples did not even seem to need AI. In one e-commerce demo, the presenter phoned the vendor to complete an online purchase; the scenario was designed to show off a sales bot's communication skills, but in reality the customer could have completed the task just as easily on the website.

Generative AI does have powerful applications, such as generating code, analyzing content and answering questions about it, and combing log data to determine the cause of a website outage. The company has also produced task- and role-based agents that put generative AI to work for individual developers, creatives, employees, and others.

But when it comes to building AI tools with Google's models, as opposed to using the ones Google and other vendors have built for their customers, I couldn't help noticing that the company was downplaying many of the challenges that can stand in the way of a successful generative AI implementation. However much vendors play down the difficulty, integrating any advanced technology into a large business is a significant undertaking.

Significant transformation is never simple
As with earlier waves of technology over the previous 15 years, such as mobile, cloud, containerization, and marketing automation, the claims of potential benefits have been plentiful. But each of those shifts brought its own complexity, and large corporations move more cautiously than the hype suggests. AI looks likely to demand significantly more effort and resources than Google or the other major vendors are openly acknowledging.

If past technology revolutions have taught us anything, it is that they tend to generate outsized excitement followed by widespread disappointment. Years after their introduction, we still see prominent corporations that, despite having every opportunity, are merely experimenting with these sophisticated technologies, or abstaining from them entirely.

Companies fail to capitalize on technological innovation for many reasons: organizational inertia; a rigid technology stack that makes adopting newer solutions hard; and people inside the company, whether in legal, HR, IT, or elsewhere, who resist even well-intentioned initiatives for reasons that often come down to internal politics.

Vineet Jain, CEO of Egnyte, a company specializing in storage, governance, and security, sees two kinds of companies: those that have already made a substantial shift to the cloud, for whom adopting generative AI will be relatively easy, and those that have been slow to embrace new technologies, for whom it is likely to be a struggle.

He talks with numerous firms that still rely predominantly on on-premises technology and have a long way to go before they can even consider the potential benefits of AI. "We engage with numerous 'late' cloud adopters who have either not initiated or are in the initial stages of their pursuit of digital transformation," Jain said.

The arrival of AI may push these organizations to look hard at digital transformation, he said, though they could struggle given how far behind they are. "Before incorporating AI, these companies must first address and resolve the existing issues and establish a robust data security and governance framework," he stated.

It always comes back to the data
Major vendors like Google present these solutions as though implementing them is simple, but apparent simplicity on the surface does not mean things aren't complex behind the scenes. A notion I encountered repeatedly this week is that the quality of the data used to feed Gemini and other large language models is crucial: if the input data is poor, the output of generative AI will be poor too.

The process begins with collecting and analyzing data. If your data is not in order, it will be difficult to prepare it for training the LLMs on your specific use case. Kashif Rahamatullah, a principal at Deloitte who oversees the firm's Google Cloud practice, was impressed by Google's announcements but noted that companies without clean, organized data may struggle to use generative AI solutions. "The initial AI conversation often transitions into a focus on data cleaning and consolidation, as this is crucial for maximizing the benefits of generative AI," Rahamatullah explained.

To that end, Google has built generative AI tools to help data engineers construct pipelines that connect to data sources inside and outside the Google ecosystem. "The purpose is to enhance the efficiency of data engineering teams by automating the labor-intensive tasks associated with data movement and preparation for these models," explained Gerrit Kazmaier, Google's vice president and general manager for database, data analytics, and Looker, in an interview.
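Google did not show what these generated pipelines look like internally, but the clean-and-consolidate step Rahamatullah and Kazmaier describe is conceptually simple. Here is a minimal sketch in Python; the file names, column names, and cleanup rules are illustrative assumptions, not anything Google announced.

```python
# Hypothetical clean-and-consolidate step before data reaches an LLM.
# Source files and column names are made up for illustration.
import pandas as pd

def build_corpus(crm_csv: str, tickets_csv: str) -> pd.DataFrame:
    crm = pd.read_csv(crm_csv)
    tickets = pd.read_csv(tickets_csv)

    # Normalize two differently shaped sources to one schema.
    crm = crm.rename(columns={"client": "customer", "notes": "text"})
    tickets = tickets.rename(columns={"requester": "customer", "body": "text"})
    merged = pd.concat([crm[["customer", "text"]], tickets[["customer", "text"]]])

    # Basic hygiene: trim whitespace, drop empties and exact duplicates.
    merged["text"] = merged["text"].astype(str).str.strip()
    merged = merged[merged["text"] != ""].drop_duplicates()
    return merged.reset_index(drop=True)
```

The unglamorous part is the point: most of the work is schema reconciliation and deduplication, which is exactly where companies with messy data get stuck.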

That should help with data integration and data cleansing, particularly at firms that are already well along in their digital transformation. But for companies like those Jain describes, which have made little progress on that front, Google's tools could pose yet another challenge.

AI also presents hurdles that go beyond implementation, whether a company is building an application on an existing model or attempting to create a customized model of its own, according to Andy Thurai, an analyst at Constellation Research. "During the implementation of either solution, companies must consider governance, liability, security, privacy, ethical and responsible use, and compliance with these implementations," Thurai said. And none of that is trivial.

Executives, IT pros, developers, and the other attendees at GCN this week may have come looking for insight into where Google Cloud is headed next. But if they weren't shopping for AI, or if their organizations weren't ready for it, they may have left Sin City overwhelmed by Google's intense focus on the technology. It could be a long time before organizations lacking digital sophistication can take full advantage of these technologies, beyond the more packaged solutions offered by Google and other vendors.



Generative AI is coming to healthcare, and not everyone is thrilled about it


Generative artificial intelligence (AI), which can create and analyze images, text, audio, video, and other forms of data, is increasingly making its way into healthcare, pushed by Big Tech firms and startups alike.

Google Cloud is partnering with Highmark Health, a Pittsburgh-based nonprofit healthcare company, on generative AI tools designed to personalize the patient intake experience. Amazon's AWS division is working with undisclosed clients on applying generative AI to analyze medical records for "social determinants of health." And Microsoft Azure is helping Providence, a nonprofit healthcare network, build a generative AI system to automatically triage messages from patients and route them to care providers.

Prominent generative AI startups in healthcare include Ambience Healthcare, which is developing a generative AI app for clinicians; Nabla, an ambient AI assistant for practitioners; and Abridge, which builds analytics tools for medical documentation.

The broad enthusiasm for generative AI is reflected in the money flowing into healthcare-focused generative AI efforts. Healthcare startups working with generative AI have collectively raised tens of millions of dollars in venture capital to date, and the overwhelming majority of health investors say generative AI has substantially influenced their investment strategies.

But experts and patients alike are divided on whether healthcare-focused generative AI is ready for widespread deployment.

People may not want generative AI
Only about half (53%) of U.S. consumers said they believe generative AI could improve healthcare, for example by making it more accessible or cutting appointment wait times, according to a recent Deloitte survey. Fewer than half said they expected generative AI to make medical care more affordable.

Andrew Borkowski, chief AI officer at the VA Sunshine Healthcare Network, the largest health system of the U.S. Department of Veterans Affairs, believes the pessimism is justified. Deploying generative AI now may be premature, he cautioned, given its substantial limitations and the open questions about its effectiveness.

"A major problem with generative AI is its incapacity to effectively address intricate medical inquiries or urgent situations," he said. "Due to its limited knowledge base, which lacks current clinical information, and its lack of human expertise, it is not suitable for offering comprehensive medical advice or treatment recommendations."

Multiple studies suggest those concerns have merit.

A study in the journal JAMA Pediatrics found that ChatGPT, OpenAI's AI chatbot, which some healthcare organizations have piloted for limited use cases, made errors diagnosing pediatric diseases 83% of the time. And when physicians at Beth Israel Deaconess Medical Center in Boston evaluated OpenAI's GPT-4 as a diagnostic assistant, the model ranked the wrong diagnosis first in nearly two out of three cases.
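Error rates like these come from straightforward evaluation arithmetic: compare the model's diagnoses against clinician-adjudicated ground truth and count the misses. A minimal sketch in Python, with made-up cases:

```python
# How a diagnostic error rate is typically computed. The cases below
# are invented for illustration, not drawn from either study.
def diagnostic_error_rate(predictions, ground_truth):
    assert len(predictions) == len(ground_truth)
    wrong = sum(p.lower() != g.lower() for p, g in zip(predictions, ground_truth))
    return wrong / len(predictions)

preds = ["asthma", "croup", "sepsis"]
truth = ["bronchiolitis", "croup", "kawasaki disease"]
print(f"error rate: {diagnostic_error_rate(preds, truth):.0%}")  # -> 67%
```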

Today's generative AI systems also struggle with the medical administrative tasks that are an integral part of clinicians' daily workflows. On the MedAlign benchmark, which measures how well generative AI can do things like summarize patient health records and search across notes, GPT-4 failed 35% of the time.

OpenAI and other generative AI vendors warn against relying on their models for medical advice. But Borkowski and others argue more could be done. "Relying solely on generative AI for healthcare may result in incorrect diagnoses, unsuitable treatments, or potentially life-threatening circumstances," said Borkowski.

Jan Egger, who leads AI-guided therapies at the University of Duisburg-Essen's Institute for AI in Medicine, which studies applications of emerging technology in patient care, shares Borkowski's worries. He argues that the only safe way to use generative AI in healthcare today is under the close supervision of a physician.

"The results can be highly inaccurate, and it is becoming increasingly challenging to remain cognizant of this," Egger said. Generative AI can certainly be used, for example to pre-write discharge letters, but physicians have a duty to verify the output and make the final call.

Generative AI can reinforce and perpetuate stereotypes
Generative AI in healthcare can do real harm when it perpetuates stereotypes.

In a 2023 study, a team of researchers at Stanford Medicine tested ChatGPT and other generative AI-powered chatbots on questions about kidney function, lung capacity, and skin thickness. The co-authors found that ChatGPT not only gave incorrect answers but also perpetuated long-standing false beliefs about biological differences between Black and white people, falsehoods that have been known to lead medical professionals to misdiagnose patients.

The paradox is that the people most likely to be discriminated against by generative AI in healthcare are also among those most likely to use it.

The Deloitte poll found that people without healthcare coverage, disproportionately people of color, according to a KFF study, are more willing to use generative AI for tasks such as finding a doctor or getting mental health support. If the AI's suggestions are tainted by bias, it could worsen disparities in how people are treated.

Still, some experts argue that generative AI is improving on this front.

In a Microsoft study published in late 2023, researchers reported achieving 90.2% accuracy on four challenging medical benchmarks using GPT-4. Vanilla GPT-4 could not reach that score on its own; rather, the researchers say that prompt engineering, crafting prompts that steer GPT-4 toward the desired outputs, lifted the model's performance by as much as 16.2 percentage points. (It is worth noting that Microsoft is a major investor in OpenAI.)
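Microsoft's full pipeline, published as "Medprompt," is more elaborate, but the basic move in this kind of prompt engineering is to prepend worked examples and ask the model to reason step by step before committing to an answer. A simplified, hypothetical sketch follows; the example case and wording are illustrative, not Microsoft's actual prompts.

```python
# Simplified few-shot, chain-of-thought prompt construction.
# The worked example is invented for illustration.
FEW_SHOT = [
    ("A 5-year-old presents with a barking cough and stridor. Diagnosis?",
     "Step 1: Barking cough plus stridor suggests upper-airway narrowing. "
     "Step 2: In this age group, that pattern is classic for croup. "
     "Answer: croup"),
]

def build_prompt(question: str) -> str:
    parts = [f"Q: {q}\n{a}" for q, a in FEW_SHOT]
    parts.append(f"Q: {question}\nThink step by step, then end with 'Answer: <diagnosis>'.")
    return "\n\n".join(parts)

print(build_prompt("A 2-year-old has fever and a sandpaper-like rash. Diagnosis?"))
```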

Expanding beyond the chatbot
But asking a chatbot questions is not the only thing generative AI is good for. Some researchers argue that medical imaging stands to benefit greatly from its capabilities.

In a paper published in Nature in July, a group of scientists introduced complementarity-driven deferral to clinical workflow (CoDoC), a system built to determine when medical imaging specialists should rely on AI for diagnostics rather than conventional techniques. According to the co-authors, CoDoC outperformed specialists while reducing clinical workflows by 66%.
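CoDoC's actual deferral rule is learned from data, but the core idea, accepting the AI's call only when it is decisively confident and routing everything in between to a human specialist, can be sketched with a simple confidence band. The thresholds below are arbitrary placeholders, not values from the paper.

```python
# Simplified confidence-band deferral, loosely inspired by CoDoC.
# Thresholds are placeholders, not the paper's learned decision rule.
def route_case(ai_probability: float, low: float = 0.1, high: float = 0.9) -> str:
    if ai_probability >= high:
        return "accept AI: flag as positive for specialist confirmation"
    if ai_probability <= low:
        return "accept AI: negative, continue routine workflow"
    return "defer: send to a medical imaging specialist"

for p in (0.02, 0.55, 0.97):
    print(f"p={p:.2f} -> {route_case(p)}")
```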

In November, a Chinese research team showcased Panda, an AI model designed to detect potential pancreatic lesions in X-rays. A study showed Panda to be highly accurate in classifying these lesions, which are frequently caught too late for surgical intervention.

Arun Thirunavukarasu, a clinical research fellow at the University of Oxford, said there is nothing inherent to generative AI that precludes its deployment in healthcare settings.

"In the short- and mid-term, there are practical uses for generative AI technology such as text correction, automatic documentation of notes and letters, and enhanced search capabilities to optimize electronic patient records," he said. "If generative AI technology is effective, there is no justification for not immediately implementing it in these types of roles."

"Rigorous science"
But while generative AI shows promise in specific, narrow areas of medicine, experts like Borkowski point to technical and compliance hurdles that must be overcome before generative AI can be useful, and trusted, as an all-around assistive healthcare tool.

"The utilization of generative AI in healthcare raises substantial concerns regarding privacy and security," Borkowski said. The sensitivity of medical data and the potential for misuse or unauthorized access pose serious risks to patient confidentiality and trust in the healthcare system, he added, and the legal and regulatory framework for generative AI in healthcare is still evolving, with open questions around liability, data protection, and the practice of medicine by non-human entities.

Even Thirunavukarasu, optimistic as he is about generative AI in healthcare, stresses that patient-facing solutions need a strong scientific foundation.

"Because there will not be any direct clinician oversight, there need to be pragmatic randomized controlled trials demonstrating clinical benefit to justify the use of patient-facing generative AI," he said. "It is crucial to have effective governance in place to address any unforeseen negative consequences that may arise after widespread implementation."

The World Health Organization recently released guidelines calling for rigorous science and human oversight in generative AI for healthcare, as well as audits, transparency, and impact assessments by independent third parties. The WHO guidelines also aim to ensure that a diverse group of people participates in the development of generative AI for healthcare, with opportunities to voice concerns and provide input throughout the process.

"Unless the concerns are adequately resolved and appropriate precautions are implemented," Borkowski stated, "the extensive adoption of medical generative AI could potentially pose harm to patients and the healthcare industry as a whole."



AT&T notifies regulators after customer data breach


AT&T has begun notifying U.S. state authorities and regulators of a security breach after confirming that millions of customer records recently dumped online are genuine.

In a mandatory filing with the attorney general's office in Maine, the U.S. telecom giant disclosed that it sent letters notifying more than 51 million people that their personal data was compromised in the breach, including more than 90,000 Maine residents. AT&T has also notified California's attorney general of the breach.

AT&T, the largest telecommunications company in the United States, said the compromised data included customers' full names, email addresses, mailing addresses, dates of birth, phone numbers, and Social Security numbers.

The leaked customer information dates from mid-2019 and earlier. AT&T said the databases contained accurate information on more than 7.9 million of its current customers.

AT&T's acknowledgment came about three years after a portion of the data first surfaced online, which hampered any meaningful analysis at the time. Last month, the full cache of 73 million leaked customer records was posted online, allowing customers to verify that their data was genuine. Some of the records contained duplicates.

The leaked data also included encrypted account passcodes, which can be used to access customer accounts.

Shortly after the full dataset was published, a security researcher warned that the encrypted passcodes found in the leaked data were easy to decipher. AT&T reset the account passcodes after being alerted to the risk to customers on March 26; news of the breach was held back until AT&T had finished resetting the passcodes of affected customers.
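The researcher's finding is plausible if the passcodes were protected with an unsalted, fast hash: account passcodes are typically short numeric codes, so an attacker can precompute every possibility once and reverse any leaked value by lookup. The sketch below illustrates that failure mode; AT&T has not disclosed its actual scheme, so the four-digit space and the use of SHA-256 here are assumptions for illustration.

```python
# Why "encrypted" short passcodes can leak: with no salt and a fast
# hash, all 10,000 four-digit codes are precomputable. Illustrative
# only; this is not AT&T's disclosed scheme.
import hashlib

def weak_hash(code: str) -> str:
    return hashlib.sha256(code.encode()).hexdigest()

# Attacker's one-time lookup table over the whole four-digit space.
table = {weak_hash(f"{n:04d}"): f"{n:04d}" for n in range(10_000)}

leaked = weak_hash("2468")   # what a breached database row might hold
print(table[leaked])         # -> "2468", recovered instantly
```

A unique per-user salt and a deliberately slow KDF such as bcrypt or Argon2 would defeat the precomputed table, though such a small code space remains brute-forceable per account, which is why resetting the passcodes was the right response.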

AT&T ultimately conceded that the leaked data belongs to its customers, including roughly 65 million former customers.

State data breach notification laws require companies to disclose breaches affecting large numbers of residents to U.S. attorneys general. In its filings in Maine and California, AT&T said it is offering affected customers identity theft protection and credit monitoring services.

AT&T has yet to determine the origin of the leak.

