The session content below is arranged as a series of visuals with Socratic cues and links to complex, messy, conversational TLDR answers that may have been the speaker's thoughts in the past. The speaker is likely to use these visuals to convey his thoughts extempore, from whatever comes to mind at that moment and likely influenced by his past thoughts, so the session is unlikely to be exactly the same as what is represented here. These have been shared in advance to reduce lecture time and increase interaction time with the session participants.
"Never doubt that a small group of thoughtful, committed individuals can change the world. In fact, it's the only thing that ever has." (attributed to Margaret Mead)
Session learning Goals:
Short term?
Long term?
Objectives?
Creativity
Human centred management
Hands on interactive
pre and post AI
What is cognition?
What is dual processing theory of cognition?
What is decision?
Image with CC licence: https://commons.m.
And the image of the sickle is also contained in an important writing tool of science, the question mark symbol, a very important cognitive cutting instrument of scientific scepticism:
Creative commons license: https://en.m.
What is intelligence?
Animal intelligence vs plant cognition?
Complex rhetorical TLDR: https://medicinedepartment.blogspot.com/2025/11/visual-4-what-is-intelligence-gim.html?m=1
What was clinical decision making like in the pre-AI-LLM era, just a few years back?
Video demo of our patient centered, clinical decision making lab:
Recent re-upload:
https://youtu.be/ZKoljY2UBHI?si=UYUfpTD7JGOgoQhA
Original upload:
https://youtu.be/xvE5b8Xk3vM?si=dqDlPQgA_EP2L7zT
Video demo of a single patient's decision making:
https://youtu.be/csF8VQbOYRo?si=mlbHXIyD5A-29uqf
What is it like now?
Hands on demonstration of human clinical decision making with AI in the loop:
Is it AI in the loop or humans in the loop?
Image CC licence: https://commons.m.wikimedia.org/wiki/File:Rock_Shelter_8,_Bhimbetka_02.jpg#mw-jump-to-license
Rhetoric: Did human animals invent AI beginning with asynchronous intelligence, through their ability to use cave-painting tech to convert multidimensional real-life data into two-dimensional data on an x-y-axis cave wall, which later evolved to paper and electronic media, so that they could eventually manage their lives better, since artistic modelling was easier in a two-dimensional virtual plane than in a multidimensional real one?
Unquote: https://userdrivenhealthcare.blogspot.com/2025/08/udlco-crh-reducing-multidimensional.html?m=1
More complex TLDR rhetoric: https://medicinedepartment.blogspot.com/2025/11/visual-5-then-and-now-what-was-clinical.html?m=1
A layered approach to clinical decision making:
We are all apprentices in a craft where no one ever becomes a master.
— Ernest Hemingway, The Wild Years
Human, Scientific and Machine layers:
Anatomy of cognitive layers:
Physiology of cognitive layers in clinical decision making: enter Bloom's taxonomy!
[Diagram residue: Bloom's taxonomy levels — Remember, Understand, Apply, Analyze, Evaluate, Create]
More complex TLDR rhetoric along with team member attribution for the decision tree diagram as well as copyright attribution for the Bloom's diagram here: https://medicinedepartment.blogspot.com/2025/11/visual-6layered-approach-to-clinical.html?m=1
Human clinical decision making with AI in the loop:
The human layer and UX interface
- "Sometimes the smallest things take up the most room in your heart." — Winnie the Pooh
- Above was Winnie the Pooh translating the Chandogya Upanishad:
- छान्दोग्य उपनिषद् ८.१.३: अथ य एषोऽणिमैतदात्म्यमिदं सर्वम्। तत् सत्यम्। स आत्मा। तत् त्वम् असि श्वेतकेतो इति।
- (English: "That which is the subtle essence, this whole world has it as its self. That is the truth. That is the Self. That thou art, Śvetaketu.")
How do we de-identify, as per HIPAA, the entire data that is captured into our System 2 healthcare data processing ecosystem?
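One way to make the question concrete: a minimal, hypothetical Python sketch of scrubbing a few of the HIPAA Safe Harbor identifier classes (dates, phone numbers, emails) from free-text notes. This is illustrative only, not a certified de-identification tool, and covers only 3 of the 18 Safe Harbor categories; the pattern names and example note are invented for demonstration.

```python
import re

# Regexes for a few HIPAA Safe Harbor identifier classes (illustrative subset).
PATTERNS = {
    "DATE": re.compile(r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b"),
    "PHONE": re.compile(r"\b\d{10}\b|\b\d{3}[- ]\d{3}[- ]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def deidentify(text: str) -> str:
    """Replace recognised identifiers with bracketed placeholder labels."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Patient reviewed on 12/03/2024, contact 9876543210, mail me@example.com"
print(deidentify(note))
# Patient reviewed on [DATE], contact [PHONE], mail [EMAIL]
```

A real pipeline would need all 18 Safe Harbor categories (names, geographic subdivisions, medical record numbers, and so on), which usually calls for NLP-based named-entity recognition rather than regexes alone.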
Can missing the smallest things sometimes take up the most room in our workflow?
Are the smallest things, sometimes the smallest pieces in the puzzle, most rewarding in terms of learning and illness outcomes?
Is the work of AI LLMs as just a machine translator in our multilingual workflow small enough?
Consent form: Machine translation provides an added feature to our informed patient consent form that allows a single click translation to any global language!
Let me know if the Konkani seems right!
In case it's not, we have a manual backup here, used routinely for the majority of our patients:
The above is one layer of explainability and raising awareness about patient rights including right to privacy.
Assignment: Get your LLMs to go through the consent forms linked above and check if they are DPDP compliant; if not, ask for a better draft of the above consent form to make it DPDP compliant.
Daily events in clinical decision making
and
visual data capture and representation
to
generate quick human insights and prevent TLDR
In a human-centered learning ecosystem with AI in the loop, is manual translation more common?
Above is a layer of manual human-to-human translation, as well as intermittent problems in an otherwise complex patient with comorbidities (to be discussed again in the next layer of AI-driven analysis).
Again, this patient does have comorbidities related to his metabolic syndrome, such as heart failure, but intermittent, simple human requirements of explainability manifest in his daily sharing through his advocate, such as the one here that manifests in his sleep, and Meta AI helps not just to translate it but also to explain it well.
The role of AI-driven infographics in explainability:
Speaker's thoughts: A picture speaks more than a thousand words?
A video can be time consuming though!
Assignment: Ask your LLMs to gather all the patient data from the case report linked above and rearrange it, using AI-driven removal of exact date stamps and their replacement with unidentifiable event timelines comprising labels such as "Day 1…n" and "season of year 1…n".
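The date-relabelling step of this assignment can be sketched deterministically, without an LLM. Below is a minimal, hypothetical Python example that maps each distinct date in a case narrative to a relative "Day n" label, preserving event order while removing the identifying dates; the dd/mm/yyyy stamp format and the sample report text are assumptions for demonstration.

```python
import re
from datetime import datetime

# Assumes date stamps in dd/mm/yyyy form; other formats would need more patterns.
DATE_RE = re.compile(r"\b\d{2}/\d{2}/\d{4}\b")

def relabel_dates(text: str) -> str:
    """Replace exact dates with 'Day n' labels, numbered in chronological order."""
    dates = sorted({datetime.strptime(d, "%d/%m/%Y") for d in DATE_RE.findall(text)})
    label = {d.strftime("%d/%m/%Y"): f"Day {i}" for i, d in enumerate(dates, start=1)}
    return DATE_RE.sub(lambda m: label[m.group(0)], text)

report = "Admitted 02/01/2024. Fever on 05/01/2024. Discharged 09/01/2024."
print(relabel_dates(report))
# Admitted Day 1. Fever on Day 2. Discharged Day 3.
```

Note that "Day n" here counts distinct documented dates, not elapsed calendar days; computing true day offsets from the first date would be a small variation on the same idea.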
This patient is an example of how simple human explainability, backed by scientific evidence, can provide a new lease of life to a patient with myocardial infarction, who travelled the long distance to our college just for that explainability, strengthening his prior trust in us!
Past published work on similar patient:
LLM textual explanation, followed by translation and then a text-to-voice file for the patient's advocate, who, like most of us, also suffers from TLDR:
The above demonstrates AI-driven support for insulin dose calculation through human learning around carb counting, accounting for the insulin correction (sensitivity) factor and the insulin-to-carb ratio to decide the total pre-meal insulin dose with scientific accuracy.
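The arithmetic behind that calculation can be sketched in a few lines of Python. This is a teaching illustration only, not medical advice or our lab's actual tool: the default target glucose, insulin-to-carb ratio (ICR), and insulin sensitivity factor (ISF) below are invented example values, since these parameters are individualised per patient by the treating team.

```python
def premeal_insulin_dose(carbs_g, current_bg, target_bg=120, icr=10, isf=50):
    """Total pre-meal dose = carb dose + correction dose, rounded to whole units.

    carbs_g    : grams of carbohydrate in the planned meal
    current_bg : current blood glucose (mg/dL)
    icr        : grams of carbohydrate covered by 1 unit of insulin
    isf        : mg/dL glucose drop expected per 1 unit of insulin
    """
    carb_dose = carbs_g / icr
    # Correction dose only applies when glucose is above target.
    correction = max(0.0, (current_bg - target_bg) / isf)
    return round(carb_dose + correction)

# 60 g carb meal at glucose 220 mg/dL: 60/10 + (220-120)/50 = 6 + 2 = 8 units
print(premeal_insulin_dose(60, 220))  # 8
```

The value of the AI-in-the-loop step is less in this arithmetic than in helping the patient and advocate understand where each number (carb estimate, ICR, ISF) comes from.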
The Scientific analytical cutting layer:
"You write for meropenem. Again. Here's what that voice doesn't tell you: in doing so, you've just contributed to a crisis that's killing more people than you might save."
Unquoted above from the link below:
https://www.linkedin.com/
Explainability, trust and layers of clinical decision making in the pre- and current AI LLM eras:
EBM layer: This layer is the one our clinical decision making lab is largely engaged in although the other two layers are no less important.
We have already shared something around those in our previous demos particularly our two video links shared above.
Human layer: This is the most important layer where clinical decision making actually happens at multiple human stakeholder levels:
Below are recent examples of the limits of scientific explainability and its effect on human trust.
How much Trust building can one achieve through Human clinical decision making with AI in the loop?
Human mistrust due to persistent uncertainty from scientifically limited explainability?
Images of subclinical hypothyroidism patient data:

Human full trust in spite of persistent uncertainty due to scientifically limited explainability
Can AI act as a guard rail for human mistrust due to lack of communication and explainability?
Rhetoric: https://medicinedepartment.blogspot.com/2025/11/visual-10-explainability-trust-and.html?m=1
And last but not least!
Machine layers:
The machine algorithm will see you now?
Amazon "Help me Decide"!
👆 Quantitative AI-driven clinical decision making is currently here?
Is this analogous to clinical decision making:
Key takeaways:
Amazon "Help Me Decide" uses AI to analyze your browsing history (the patient's clinical history) and preferences (note the word "preferences" in Sackett's classic definition of EBM) to recommend the right product (diagnostic or therapeutic: lab or imaging, pharmacological or non-pharmacological therapy) for you with just one tap.
The tool helps customers pick the right product, quickly.
(System 2 decision making fast-tracked to System 1, and closer to tech singularity)?
Personalized recommendations include clear explanations of why a product is right for you based on your specific needs and preferences.
Personalized precision medicine with explainability to gain trust!
Who owns the data that trains these algorithms?
Did patients consent to its use?
Can we trace how a prediction was made, or who’s responsible when it’s wrong?
Unquoted from below:
Is the DPDP Act a national trust charter?
The Act's intent isn't to burden innovation; it's to humanize it?
It recognizes that in a connected nation, trust is infrastructure.
Unquoted from below:
Rhetoric: https://medicinedepartment.blogspot.com/2025/11/visual-11-and-last-but-not-least.html?m=1
Is synthetic intelligence (SI) scarier than AI?
Is decision making a cyclical process?
“Language is needed because we don’t know how to communicate. When we know how to, by and by, language is not needed.”