🥼 The Primer on the State of Clinical AI: Progress Without Integration
How AI is entering hospitals, and why clinicians need to be part of the conversation
Welcome back to your weekly dose of AI news for Life Science!
What’s your biggest time sink in the early drug discovery process?
Artificial intelligence is sweeping through healthcare. Investment is rising, publications are being released daily, and new tools are starting to enter real clinical settings. As the pace accelerates, one question keeps resurfacing: what’s the role of clinicians in all of this? And more importantly, how involved do they need to be for AI to actually improve care?
This piece, by medical student Maria Aquilue Dies, looks at the gap between clinicians and AI development, how current tools are being adopted, and what needs to happen next to build a truly collaborative, clinically grounded AI ecosystem.
📚 The Knowledge Gap
Clinicians and AI developers do collaborate. Large hospitals with research units host multidisciplinary teams, and medical AI companies often call on clinicians as consultants. However, significant differences between the clinical and technological domains persist, and the two fields often operate in parallel rather than in close integration.
When clinicians participate, their role tends to be more advisory than hands-on, in part because many of the underlying concepts (machine learning fundamentals, deep learning architectures, neural networks, transformers) remain unfamiliar territory. Without a solid understanding of how these systems work, it’s harder to contribute meaningfully to design decisions, evaluation criteria, or safe deployment strategies.
This gap has some meaningful consequences:
Promising applications remain unrealised because clinicians, the people closest to the problems, are not always fully aware of the extent of what AI can offer.
Tools under development progress without vital clinical insights, leading to solutions that miss workflow realities or fail to account for common pitfalls.
Validated tools struggle to reach patients because many clinicians aren’t aware they exist, and therefore may not request them, or don’t feel confident using them.
And importantly: clinicians are ultimately responsible for the ethical use of AI in patient care. Without literacy in how these systems behave, informed oversight becomes impossible.
To illustrate these points, consider the daily burden of “data dredging” through fragmented medical records to synthesise a complex patient’s history. An LLM-based chatbot integrated directly into the Electronic Health Record (EHR) interface could streamline this instantly, yet many hospitals have still not implemented one, simply because clinicians are unaware that such a dynamic interaction is even possible. When these tools are developed in isolation, they often fail to adapt to the specific nuances of different medical specialties, or feature cumbersome, poorly integrated interfaces, leading to immediate abandonment. Furthermore, without a foundational understanding of the technology, clinicians may either reject the tool out of “black box” scepticism or, conversely, over-rely on its output without realising the system might prioritise repetitive data over clinically significant outliers.
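To make the idea concrete, here is a minimal sketch of the record-synthesis step such a chatbot would perform. Everything here is hypothetical: the `Note` schema and `build_summary_prompt` function are illustrative, and the vendor-specific LLM call (which would need privacy and governance review) is deliberately left out.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Note:
    """One fragment of the patient's record (hypothetical schema)."""
    written: date
    source: str   # e.g. "cardiology", "ED", "primary care"
    text: str

def build_summary_prompt(notes: list[Note]) -> str:
    """Order scattered notes chronologically and assemble a single prompt.

    This replaces the manual "data dredging" step; the LLM call itself
    is intentionally omitted from this sketch.
    """
    ordered = sorted(notes, key=lambda n: n.written)
    fragments = [f"[{n.written.isoformat()} | {n.source}] {n.text}" for n in ordered]
    return (
        "Summarise this patient's history for a clinician, flagging "
        "clinically significant outliers rather than repeated findings:\n"
        + "\n".join(fragments)
    )

notes = [
    Note(date(2024, 3, 2), "cardiology", "Echo: EF 35%; started on an ACE inhibitor."),
    Note(date(2023, 11, 18), "ED", "Presented with dyspnoea; BNP elevated."),
]
prompt = build_summary_prompt(notes)
```

Note the instruction to flag outliers over repetitions: it directly targets the over-reliance failure mode described above, though in practice prompt wording alone would not be a sufficient safeguard.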
Yet there’s a positive side to all this. Despite limited formal training, clinicians are generally optimistic about AI. Many want to try new tools and collaborate with developers. They simply lack the pathways, education, and institutional structures to do it.
This is why medical training needs to evolve. First and foremost in medical school, but also throughout residency and continuing professional development. When clinicians understand the technology shaping their field, they push for safer, more useful systems, and adoption becomes smoother and more equitable.
🧭 Illustrating the Transition
The Present Landscape
Today’s AI adoption in hospitals is uneven. Most clinicians use only a narrow set of tools, often as end‑users rather than co‑designers. But even these early steps show how AI can shift workflows in meaningful ways.
Take medical chatbots, arguably the most widely adopted tool.
Before these LLM-based tools were adopted, reviewing evidence in difficult cases was a slow and repetitive process of diving into large repositories and manually filtering papers in search of a specific nugget of information. Moreover, the result was usually poorly tailored to the specific patient’s age, comorbidities, or context.
Now clinicians can ask a question with all relevant context and receive a focused, case‑specific answer grounded in the literature. The process is not only faster in a setting where acting quickly can save lives, but often more individualized, drawing on diverse sources and integrating nuances that would be cumbersome to search manually. That said, these systems are not infallible: outputs still require clinical verification, institutional oversight, and a clear understanding of their limitations, particularly around hallucinations, data privacy, and regulatory accountability.
Additionally, while chatbots may be the most widespread application, a broader range of AI tools is being slowly but steadily deployed in hospitals. According to the U.S. FDA, more than 1,000 AI/ML-enabled medical devices have now received regulatory authorisation, the majority concentrated in radiology and imaging.
Some examples that stand out are:
AI medical imaging systems: widely used in breast cancer screening and in the detection of pathologies in chest X‑rays, such as pneumonia or lung cancer. Some hospitals use AI to automatically calculate brain tumor volume from MRI scans, replacing error‑prone manual measurements.
Risk‑stratification models, ICU monitoring systems, tools for surgery planning and supervision (e.g., back surgery planning or neurosurgery phase recognition), and automatic EHR generation from clinician–patient conversations.
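The tumor-volume example above is worth unpacking, because the arithmetic behind it is simple once a segmentation model has produced a mask. A sketch of that final step, with a toy mask in place of a real MRI segmentation:

```python
import numpy as np

def tumor_volume_ml(mask: np.ndarray, voxel_spacing_mm: tuple[float, float, float]) -> float:
    """Volume of a binary segmentation mask in millilitres.

    `mask` is a 3D array whose nonzero voxels belong to the tumour;
    `voxel_spacing_mm` is the voxel size taken from the MRI header.
    The hard part, producing the mask, is the segmentation model's job.
    """
    voxel_volume_mm3 = float(np.prod(voxel_spacing_mm))
    n_voxels = int(np.count_nonzero(mask))
    return n_voxels * voxel_volume_mm3 / 1000.0  # 1 mL = 1000 mm^3

# Toy example: a 10x10x10-voxel cube at 1 mm isotropic spacing = 1.0 mL.
mask = np.zeros((64, 64, 64), dtype=np.uint8)
mask[20:30, 20:30, 20:30] = 1
volume = tumor_volume_ml(mask, (1.0, 1.0, 1.0))  # → 1.0 mL
```

This is exactly the kind of computation that replaces error-prone manual measurement: given a reliable mask, the volume follows deterministically, and the clinical question shifts to whether the segmentation itself can be trusted.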
Nonetheless, an important challenge remains: what gets adopted first depends less on clinical need and more on hospital culture, peer influence, and the enthusiasm of early adopters.
What a Clinically Grounded AI System Looks Like
These applications are encouraging, but far from the end goal. A truly effective AI ecosystem requires two fundamental transformations.
First, clinicians must play an active role in guiding AI development, backed by an education that emphasises AI literacy. This is essential to address two of the most relevant challenges in AI applications: ethical adoption and creating tools that address real-world clinical needs.
Second, AI must be embedded into workflows, not stacked awkwardly on top of existing systems. The future shouldn’t be a patchwork of isolated apps scattered across hospitals. It needs:
Seamless integration with electronic health records
Interconnected tools across healthcare facilities
Equitable deployment and sustainable funding
This creates real-time, longitudinal support across the entire patient journey.
Once this ecosystem exists, current applications can evolve dramatically. Medical chatbots could link directly to patient records for pre-visit summaries, inpatient follow-up, and real-time guidance. AI imaging tools could feed into multidisciplinary diagnostic pipelines, track changes over time, or predict disease progression and treatment response.
These are not futuristic dreams, but natural extensions of what we already have. Clinicians who step forward to advocate for the necessary infrastructure, education, and collaborations will be key to this transition.
💡How to Promote Safe and Effective AI Adoption
We now know that AI in medical settings holds incredible potential to help, but also a clear need for informed clinical supervision. In this context, the question becomes how the main actors in this change, clinicians and institutions (be they public or private), can take the necessary steps to make the transition as smooth as possible.
The first step: make AI literacy a standard part of medical training
AI-driven systems will only become more prevalent in clinical settings, so it is the responsibility of medical school administrators to update curricula to include practical, hands-on AI training and safety principles. Hospitals should also run programs ensuring that practising physicians can access this education through residency and continuing education.
Even clinicians without formal positions in these institutions can contribute by providing mentorship to students and colleagues, and by advocating for the integration of AI training into clinical sessions, hospital programs, and conferences.
I have had the opportunity to witness the reality of this situation firsthand during my medical school years. In my faculty, as it stands, the only structured exposure to AI in medicine is relegated to a couple of elective courses. While many of us have encountered the topic through clinical sessions or specific research initiatives (such as a final year project), it remains absent from the formal, core curriculum. Looking ahead, it is essential that these concepts move beyond optional seminars and become a foundational part of our training, whether by updating existing modules or, ideally, establishing dedicated courses within the standard medical program.
Time to work: create structured spaces for collaboration
Research teams and companies developing medical AI applications should build formal multidisciplinary teams and establish fast, effective feedback mechanisms to ensure smooth development and integration. Hospital coordinators and chiefs of service can further support this process by offering incentives to departments that adopt AI responsibly and safely.
Clinicians, for their part, can actively contribute by seeking opportunities to participate in AI projects and initiating conversations within their departments, sharing insights on workflow, safety, and usability.
A way to ensure safety: build clearer frameworks for evaluating clinical AI
Regulatory bodies and hospital administrations should develop clear institutional guidelines, create accessible evaluation benchmarks and standardise performance reporting in clinical research. Clinicians can then build on this foundation by learning to critically assess AI tool reports and real‑world validation, and by participating in committees that help shape how these tools are evaluated and adopted.
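One concrete skill behind "critically assessing AI tool reports" is translating a vendor's headline sensitivity and specificity into what a positive flag actually means at local disease prevalence. A minimal sketch, using Bayes' rule with made-up numbers for illustration:

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value: the probability that a positive AI flag
    is a true positive, at a given disease prevalence (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# A tool reporting 90% sensitivity / 95% specificity looks strong on paper,
# but in a low-prevalence screening setting most of its flags are false alarms.
low_prev = ppv(0.90, 0.95, 0.01)   # screening population: PPV ≈ 0.15
high_prev = ppv(0.90, 0.95, 0.20)  # high-risk referral clinic: PPV ≈ 0.82
```

This is why standardised performance reporting matters: the same model can be genuinely useful in one deployment context and mostly noise in another, and a report that omits the evaluation population's prevalence hides that distinction.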
Inspire adoption: highlight successful real‑world use cases
Sharing insights about new tools and the development process in multidisciplinary teams is essential to encourage implementation in an equitable and consistent manner.
This can happen on both large and small scales: professional societies should consider including clear, clinician-friendly examples of successful implementation in practice in conferences and journals, while clinicians can contribute by discussing new tools and personal experiences with colleagues. A key way to stay updated on the state of the art of AI applications in medicine is through trusted newsletters and the AI-in-healthcare sections of major journals.
✨ The Future is Bright
Despite the fears sparked by early developments in AI, recent reports show that these applications can excel at assisting doctors and patients. They can reduce burdens and improve outcomes across healthcare, but only if they are thoughtfully developed, safely regulated, and responsibly deployed.
Fortunately, positive medical AI applications continue to advance at an extraordinary pace. But their true impact depends on healthcare professionals: their knowledge, their involvement, and their willingness to guide this technology responsibly.
The next generation of medical AI shouldn’t be built just for clinicians. It should be built with and by them. And if we get that right, the future of medicine won’t just be faster or more efficient. It will be smarter, safer, and far more humane.
Thanks for reading Kiin Bio Weekly!
💬 Get involved
We’re always looking to grow our community. If you’d like to get involved, contribute ideas or share something you’re building, fill out this form or reach out to me directly.
Subscribe now to stay at the forefront of AI in Life Science and keep up with this upcoming season of deep dives.
Connect With Us
Have questions on this or suggestions for our next deep dive? We’d love to hear from you!
📧 Email Us | 📲 Follow on LinkedIn | 🌐 Visit Our Website