
Panning for Insights: AI’s Promise and Pitfalls in Healthcare

by Bryan Allinson, Principal Halo Ventures

March 15, 2025


Introduction


Among the wide range of fields adopting artificial intelligence (AI), medicine presents tremendous potential while balancing excitement against regulatory caution and perceived risk.

The rapid evolution of AI has heightened both excitement and caution within the medical community. Much like prospectors in the 19th-century Gold Rush searching for riches, healthcare organizations are navigating a complex landscape of data, regulations, and ethical considerations, all in pursuit of transformative improvements in patient care.

This document explores the AI Gold Rush in Healthcare, including:

  • The Expanding Role of AI in Healthcare

  • Medicine is “Different” relative to other industries

  • Navigating the AI Hype

  • Empowering Clinical Leadership

  • Strategic Considerations by Healthcare Organizations for Resiliency

  • Recommendations for Clinical Staff using Health AI Solutions


The Expanding Role of AI in Healthcare


AI's impact in medicine has broadened significantly, with applications spanning from operational efficiency to advanced diagnostics. Recent advancements in 2024-2025 include:

  • Real-time clinical documentation through AI-assisted scribing tools

  • Enhanced AI-driven imaging analysis for early disease detection

  • Predictive analytics integrating patient history, genetics, and biomarkers

  • AI-supported treatment decision-making personalized to patient profiles

  • Automation of administrative workflows such as claims processing and patient scheduling

  • AI-powered clinical trial recruitment optimizing patient matching and diversity


There is virtually no area of medicine and care delivery that is not already being touched by AI. While some AI applications address narrowly defined tasks using a single data modality, others seek to assist broad organizational decision making, disease diagnosis, prognostic evaluation, treatment planning, and other complex applications.


The range of applications includes, but is not limited to:

  • Capturing the dictation of medical notes

  • Synthesizing patient interviews and laboratory test results to write notes directly, with limited or no clinician intervention

  • Assisting caregivers in making medical insurance claims and helping payors adjudicate them

  • Interpreting images, including radiographs, histology, MRIs, CT scans, and optic fundi

  • Analyzing and interpreting large research databases containing information ranging from laboratory findings to clinical data

  • Risk-stratifying patients to predict disease onset, progression, and/or complications

  • Assessing individual genetic profiles, medical history, and treatment responses

  • Supporting administrative automation and workflow optimization, including scheduling, billing, and managing electronic health records (EHRs), through tools such as chatbots and virtual assistants

  • Predicting drug efficacy and the risk of drug interactions

  • Establishing highly tailored patient cohorts for clinical trials, medical management, population health, reporting to accrediting bodies, and outcomes research


Medicine is “Different” relative to other industries


While the AI opportunity is massive, there are serious ethical, governance, and regulatory considerations critical to the design, implementation, and integration of AI applications in the healthcare environment. Because of these concerns, new applications need to adhere to the same standards applied to other medical technologies across all areas of healthcare.


To achieve this, AI solutions need to stand up to the same level of rigor in testing as that used in other areas of medicine. However, this approach can present challenges, such as the "dataset shift" that results when there is a mismatch between the data on which an AI system was developed and the data on which it is deployed.
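
To make "dataset shift" concrete, here is a minimal sketch, assuming synthetic data and a hypothetical lab-value feature, that uses a two-sample Kolmogorov-Smirnov test to flag when a feature's distribution in deployment has drifted away from the development data:

```python
# Illustrative sketch only: a simple two-sample check for dataset shift.
# The feature and threshold below are hypothetical, not from any
# specific deployment.
import numpy as np
from scipy.stats import ks_2samp

def flag_dataset_shift(train_col: np.ndarray, live_col: np.ndarray,
                       alpha: float = 0.01) -> bool:
    """Compare a feature's training distribution against the
    distribution observed in deployment."""
    stat, p_value = ks_2samp(train_col, live_col)
    return p_value < alpha  # True -> distributions differ; review the model

# Synthetic example: a lab value drifts upward after deployment.
rng = np.random.default_rng(0)
train_lab = rng.normal(loc=100.0, scale=15.0, size=5_000)  # development data
live_lab = rng.normal(loc=112.0, scale=15.0, size=1_000)   # deployment data
print(flag_dataset_shift(train_lab, live_lab))  # True: shift detected
```

In practice, monitoring teams track many features and correct for multiple comparisons; the point is that shift can be detected programmatically before it silently degrades model performance.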


Despite AI's potential, healthcare adoption remains cautious due to:

  • Algorithmic Transparency: The complexity of deep learning models continues to make AI a "black box" in clinical decision-making. Advances in neural networks keep pushing the boundaries of what AI can do, yet it is often difficult to understand how a specific prediction is generated, and this opacity erodes trust in AI, especially in a field as highly regulated as healthcare. Providing clarity with respect to the underlying algorithmic processes and their interpretation is paramount. Efforts are underway to improve explainability and ensure AI-generated recommendations are interpretable, and some vendors show the user how the AI generated its results and ask the user to verify them (a minimal illustration of one explainability technique appears after this list).

  • Data Fragmentation: The lack of interoperability between electronic health records (EHRs) and disparate data sources remains a significant challenge in AI deployment.

  • Regulatory Challenges: The evolving regulatory landscape, including new FDA guidelines and global AI governance frameworks, has slowed AI adoption as organizations work to ensure compliance.

  • Workforce Concerns: The potential displacement of clinical roles, particularly in radiology and pathology, has sparked debates about AI's role as an augmentation tool rather than a replacement for healthcare professionals.
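
As a minimal illustration of explainability, the sketch below (synthetic data, hypothetical feature names) uses permutation importance to estimate how much a "black box" classifier actually relies on each input, one simple way to make model behavior more interpretable:

```python
# Illustrative sketch only: permutation importance as one way to peer
# into a "black box" model. Data, model, and features are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic cohort: two informative features plus one noise feature.
X = rng.normal(size=(2_000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2_000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle one feature at a time; the drop in held-out accuracy estimates
# how much the model actually leans on that feature.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, score in zip(["age", "systolic_bp", "noise"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Techniques like this do not fully open the black box, but they give clinicians a defensible, quantitative answer to the question of what a model is actually paying attention to.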


Barriers to adoption of AI in medicine include:

  • Data access: Medical data is often difficult to collect and access. If the data collection process interrupts workflows, then medical staff resist those processes. Another issue is that data coming from divergent sources, such as different hospitals or clinics, is stored in non-standardized formats and is difficult to pool (see the harmonization sketch after this list). The result is that data collection is localized and fragmented. Without large, high-quality data sets, it can be difficult to build useful AI models. This, in turn, means that healthcare providers may be slower to take up the technology. A recent survey of healthcare leaders found that 80% do not fully trust the data they rely on for everyday use.

  • Regulatory barriers: Privacy regulations can make it difficult to collect and pool healthcare data, and it may be too difficult to use real health data to train AI models as quickly or effectively as in other industries. The regulatory approval process for a new medical technology takes time, and the technology receives substantial scrutiny before wide adoption; health innovations are notoriously slow and can take years to navigate the required approval processes. Another issue is liability, as health providers are concerned about tort law. Finally, regulation in healthcare is, appropriately, more cautious than regulation in many other industries.

  • Misaligned incentives: Another barrier is the perceived disruption to mid-level clinical and operating staff relative to their roles and corresponding workflows. For example, there is no shortage of warnings about radiologists losing their jobs. In 2016, Geoffrey Hinton, who later won computer science's highest award, the Turing Award, for his work on neural networks, said that "We should stop training radiologists now; it is just completely obvious deep learning is going to do better than radiologists." While the prediction was informed by advances in AI-based image diagnosis, there are still plenty of radiologists, along with the operational teams supporting them.
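
To illustrate the pooling problem, here is a minimal sketch, with invented site formats and field names, that harmonizes differently encoded lab results from two hypothetical hospitals into a common schema:

```python
# Illustrative sketch only: pooling records from two hypothetical sites
# that encode the same lab result differently. All field names invented.
from dataclasses import dataclass

@dataclass
class HarmonizedLab:
    patient_id: str
    test: str
    value_mg_dl: float

def from_site_a(rec: dict) -> HarmonizedLab:
    # Site A reports glucose in mg/dL under the key "glu".
    return HarmonizedLab(rec["mrn"], "glucose", float(rec["glu"]))

def from_site_b(rec: dict) -> HarmonizedLab:
    # Site B reports glucose in mmol/L; convert to mg/dL (x 18.016).
    return HarmonizedLab(rec["patient"], "glucose",
                         float(rec["glucose_mmol"]) * 18.016)

pooled = [
    from_site_a({"mrn": "A-001", "glu": "110"}),
    from_site_b({"patient": "B-042", "glucose_mmol": "6.1"}),
]
print(pooled)
```

Real-world harmonization relies on standards such as HL7 FHIR and involves far messier mappings, but the unit conversions and field renames above capture the essence of the work.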


Navigating the AI Hype


Clinical leaders must sift through the noise, separating myths and overblown promises from the actual tools that deliver tangible benefits. The strategies outlined below equip clinicians to critically analyze AI solutions, assessing their efficacy, safety, and alignment with patient care objectives.


The challenge lies in distinguishing between genuinely impactful AI tools backed by robust evidence and those that overpromise and underdeliver, often referred to as "snake oil." Clinical leaders increasingly seek alignment with organization-wide objectives, question unsubstantiated claims, seek references from peer organizations, and look for evidence-based deployment.


The mismatch between the inflated rhetoric surrounding AI capabilities and practical realities on the ground reflects, in part, the extent to which stakeholders in medicine have been unprepared for, and relatively naive in their response to, the self-serving promotional activities of entrepreneurial entities looking to capture a portion of the $3 trillion United States healthcare market:

  • There is general distrust of AI in healthcare, as both industry-wide and discrete organizational knowledge surrounding AI in medicine is inflated with hype.

  • Some have speculated that AI will render doctors and providers obsolete. Yet systematic reviews of AI applications in medicine find that few are ready for clinical deployment. The abject failure of IBM Watson as an AI solution, for example, is well documented.


This environment creates a dynamic in which it can be difficult for clinicians, administrators, policy makers, and other stakeholders to use a detailed understanding of the capabilities and, most importantly, the limits of current AI systems to counter the bandwagon effects of inflated enthusiasm.


Healthcare leaders must navigate the growing divide between AI capabilities and exaggerated marketing claims. Key strategies include:

  • Evaluating AI tools based on peer-reviewed evidence and regulatory approvals

  • Ensuring AI aligns with clinical workflows rather than disrupting them

  • Avoiding premature adoption of AI solutions without adequate validation

  • Establishing AI review committees within healthcare organizations to assess new technologies


Empowering Clinical Leadership through AI Committees


To effectively navigate the AI hype, some health organizations have recently announced the formation of special advisory committees focused on streamlining organization-wide alignment and addressing the barriers described above. Their charge is to evaluate the proliferation of AI solutions against decision frameworks such as ethics and governance guidelines and principles for adoption.


World Health Organization – Ethics and Governance Framework


The World Health Organization’s publication, “Ethics & Governance of Artificial Intelligence for Health”, is based on feedback from leading experts in healthcare, ethics, digital technology, law, and patient and human rights. The report identifies the ethical challenges and risks associated with the use of artificial intelligence for health and sets out six consensus principles to ensure AI works to the public benefit of all countries. It also contains a set of recommendations to ensure that the governance of artificial intelligence for health maximizes the promise of the technology and holds all stakeholders, in both the public and private sectors, accountable and responsive to the healthcare workers who will rely on these technologies and to the communities and individuals whose health will be affected by their use.

  • Protecting human autonomy: Humans should remain in control of health care systems and medical decisions.

  • Promoting human well-being and safety and the public interest: The designers of AI technologies should satisfy regulatory requirements for safety, accuracy, and efficacy for well-defined use cases or indications.

  • Ensuring transparency, explainability, and intelligibility: Transparency requires that sufficient information be published or documented before the design or deployment of an AI technology.

  • Fostering responsibility and accountability: It is the responsibility of stakeholders to ensure that AI is used under appropriate conditions and by appropriately trained people.

  • Ensuring inclusiveness and equity: Inclusiveness requires that AI for health be designed to encourage the widest possible equitable use and access, irrespective of age, sex, gender, income, race, ethnicity, sexual orientation, ability, or other characteristics protected under human rights codes.

  • Promoting AI that is responsive and sustainable: Designers, developers, and users should continuously and transparently assess AI applications during actual use to determine whether AI responds adequately and appropriately to expectations and requirements.

Table 1 – Principles of AI Ethics in Healthcare (World Health Organization)


U.S. Veterans Administration - Principles for Adopting (or Rejecting) AI Solutions Framework


Institutions such as the Veterans Administration are establishing principles for the adoption of AI: Relevance, Usability, Risk, Regulatory, Technical Requirements, and Financial.


Relevance

  • What specific medical problem is the AI application intended to solve?

  • Is this intended for research or clinical purposes?

  • Who is the end user?

Usability

  • How will the AI application be integrated into the clinical workflow?

  • How was the model trained?

  • Was the model trained with a well-balanced data set?

  • Was organizational patient data used in model training?

  • Does the AI tool have interpretability?

Risk

  • What are the risks associated with implementation?

  • What are the benefits of clinical adoption?

  • Is there potential for bias?

  • How will potential errors be addressed?

Regulatory

  • Does the AI tool comply with agency and VISN/facility regulations?

  • Is the AI tool compliant with data protection and privacy standards for the VA?

  • How will quality assurance be maintained?

Technical requirements

  • Will the AI tool meet the IT requirements (i.e., hardware specifications, cloud vs. host, and network security issues)?

  • Who will provide technical oversight and maintenance?

Financial

  • Does the clinical benefit justify cost?

  • What is the licensing model?

  • Is there an appropriate sole source justification?

  • What are maintenance costs?

Table 2 – Principles for Adopting (or Rejecting) AI Solutions (Veterans Administration)
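
One way such a framework can be made operational is to encode it as a structured checklist. The sketch below is a hypothetical illustration rather than an actual VA tool; the questions are abbreviated from Table 2 and the pass/needs-review logic is invented:

```python
# Illustrative sketch only: encoding an adoption checklist like Table 2
# as a reviewable structure. Questions abbreviated; scoring is invented.
CHECKLIST = {
    "Relevance": ["Specific medical problem identified?",
                  "Research or clinical use stated?", "End user identified?"],
    "Usability": ["Fits clinical workflow?", "Training data balanced?",
                  "Model interpretable?"],
    "Risk": ["Implementation risks assessed?", "Bias potential reviewed?",
             "Error-handling plan in place?"],
    "Regulatory": ["Meets privacy and data-protection standards?",
                   "Quality assurance process defined?"],
    "Technical": ["Meets IT requirements?", "Oversight and maintenance assigned?"],
    "Financial": ["Clinical benefit justifies cost?", "Maintenance costs known?"],
}

def review(answers: dict[str, list[bool]]) -> dict[str, str]:
    """Mark a principle 'pass' only when every question is answered yes."""
    return {p: ("pass" if all(a) else "needs review") for p, a in answers.items()}

# Hypothetical review of a candidate tool:
answers = {p: [True] * len(qs) for p, qs in CHECKLIST.items()}
answers["Risk"][1] = False  # bias review still incomplete
print(review(answers))
```

A committee would of course capture evidence and discussion alongside each answer; the value of the structure is that no category can be silently skipped.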


Strategic Considerations by Healthcare Organizations for Resiliency 


Health system leaders apply a common set of strategic considerations when deciding which AI solutions to adopt:

  • Is AI Aligned with Mission and Vision? Integrating AI in healthcare should align with the core mission and vision of the organization. AI's role should complement and enhance the delivery of patient-centered care, improve outcomes, and foster innovation while aligning with the healthcare provider's overarching goals.

  • Does AI Empower Data-Driven Decision Making? Faced with an increasing volume of complex decisions based on divergent data sources, health systems look to knowledge-driven analytics and intelligence teams. These teams create and document knowledge from datasets, uncovering patterns, assessing trends, identifying correlations, and surfacing other valuable insights that inform healthcare decision making and operations at both the individual patient and broad geographic population levels (see the brief sketch after this list). AI solution vendors should empower leaders with actionable knowledge and insights, including improved patient outcomes, enhanced market opportunities, optimized clinical operations, an enhanced ability to respond to crises, and advances in medical research.

  • Are Operational Workflows Enhanced? AI implementation should greatly enhance clinical operational workflows. There is a strong need to overcome the latent status quo barriers that hold back AI adoption, rooted in perceptions of risk and uncertainty. As such, providing clinicians tangible benefits well above and beyond the status quo is paramount. Clinical and operational workflow improvements must be documented, tangible, and meaningful to teams. Often this involves education, training, and learning for resiliency.

  • Can the AI Scale Across the Organization? AI tools need to be able to handle immense volumes of data efficiently while maintaining accuracy and performance to meet the demands of a large-scale healthcare environment. AI solutions that have multiple end user types may be more resilient because multiple solution champions and budgets can be accessed simultaneously. Additionally, shared learning is possible.
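
As a small, hypothetical illustration of the kind of analysis such teams perform, the sketch below computes a trend and a correlation over a synthetic patient table (all column names and values are invented):

```python
# Illustrative sketch only: lightweight analytics over synthetic data.
import pandas as pd

df = pd.DataFrame({
    "age": [54, 61, 47, 70, 66, 58],
    "hba1c": [6.1, 7.4, 5.8, 8.0, 7.1, 6.5],
    "readmitted_30d": [0, 1, 0, 1, 1, 0],
})

# Trend: 30-day readmission rate by age band.
df["age_band"] = pd.cut(df["age"], bins=[40, 55, 65, 75])
print(df.groupby("age_band", observed=True)["readmitted_30d"].mean())

# Correlation: does glycemic control track with readmission?
print(df["hba1c"].corr(df["readmitted_30d"]))
```

Production analytics run over far larger, governed datasets, but the workflow is the same: frame a question, compute the pattern, and put the answer in front of a decision maker.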


Recommendations for Clinical Staff using Health AI Solutions


Considering these factors, care and attention should be given to the medical professionals and healthcare workers using AI so that they can properly capitalize on opportunities and assess actual risk.


These teams should receive sufficient technical, managerial, and administrative support, capacity-building, regulatory protection (when appropriate), and training in the many uses and advantages of AI technologies and in navigating the ethical challenges of AI. AI curricula should be seamlessly integrated into current programs and workflows.


Teams will need to be formed so that they are proficient in AI foundations. Physicians, nurses, providers, and support teams will require a wider range of competence to apply AI in clinical practice, including a better understanding of mathematical concepts, the fundamentals of AI, data science, health data provenance, curation, integration, and governance, and the ethical and legal issues associated with the use of AI for health. Such training and contextual learning will help teams separate the hype from reality and understand opportunities and limitations, thereby enhancing trust in AI. Good support and training will ensure that healthcare workers and physicians can avoid common pitfalls such as automation bias when using AI technologies. This approach also builds customer resilience for the AI vendor, as AI tools are adopted, understood, and used.

AI vendor-customer relationships that factor in these recommendations are positioned for success. Health organizations and AI vendors that promote these recommendations will ultimately be successful, while those that ignore these factors risk misalignment, leading to frustration, disuse, and lack of adoption.


Healthcare professionals must be equipped with the necessary knowledge and skills to interact effectively with AI systems. Recommended actions include:

  • Providing training on AI fundamentals, bias mitigation, and interpretability

  • Ensuring AI tools complement clinical judgment rather than replacing it

  • Establishing AI competency teams to evaluate AI system performance in real-world settings

  • Encouraging cross-disciplinary collaboration between AI developers and healthcare providers


END OF ARTICLE


1.      Beam A., Drazen J., Kohane I., Leong T., et al., Artificial Intelligence in Medicine, N Engl J Med 2023;388:1220-1221

2.      Acosta J., Falcone G., Rajpurkar P., Topol E., Multimodal biomedical AI, Nature Medicine 2022;28:1773–1784

3.      Finlayson SG, Subbaswamy A, Singh K, et al. The clinician and dataset shift in artificial intelligence, N Engl J Med 2021;385:283-286.

4.      J Clin Med. 2025 Feb 27;14(5):1605

8.      Front Artif Intell. 2022; 5: 879603. Published online 2022 May 30. doi: 10.3389/frai.2022.879603

9.      Deep 6 AI Cohort Builder, www.deep6.ai

11. Topol E., Deep Medicine, 2019

13. Can Healthcare IT Save Babies?, 3 Jan 2008

14. Galasso A., Luo H., Punishing Robots: Issues in the Economics of Tort Liability and Innovation in Artificial Intelligence, 2019, http://www.nber.org/books/agra-1, http://www.nber.org/chapters/c14035

16. London A., Artificial intelligence in medicine: Overcoming or recapitulating structural challenges to improving patient care?, Cell, https://doi.org/10.1016/j.xcrm.2022.100622

17. Goldhahn J., Rampton V., Spinas G., Could artificial intelligence make doctors obsolete?, BMJ. 2018; 363 https://doi.org/10.1136/bmj.k4563

18. Roberts M., Driggs D., Thorpe M., Gilbey J., Yeung M., Ursprung S., Schönlieb C., Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans, Nat. Mach. Intell. 2021;3:199-217

19. Nagendran M., Chen Y., Lovejoy C.A., Gordon A.C., Komorowski M., Harvey H., Topol E.J., Ioannidis J.P., Collins G.S., Maruthappu M., Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies. BMJ. 2020; 368: m689

20. O’Leary L., How IBM’s Watson went from the future of health care to sold off for parts, Slate, 2022, https://slate.com/technology/2022/01/ibm-watson-health-failure-artificial-intelligence.html

21. Ethics and governance of artificial intelligence for health, World Health Organization, 28 June 2021, https://www.who.int/publications/i/item/9789240029200

23. Reddy, S., The Growing Importance of Data Analytics in Health Informatics, HIMSS, https://www.himss.org/resources/growing-importance-data-analytics-health-informatics

24. Paranjape K., Schinkel M., Panday R.N., Car J., Nanayakkara P., Introducing artificial intelligence training in medical education, JMIR Med Educ. 2019;5(2):e16048

26. The Topol review: Preparing the healthcare workforce to deliver the digital future, London: National Health Service; 2019 (https://topol.hee.nhs.uk/, accessed 23 August 2020)

