Conversational transcripts from various Web 2.0 groups, in response to the journal club link and quote posted in the groups:
"What is the **essential bridge** between AI’s computational power and the uncharted territory of individual patient care.
Until AI can *truly* grapple with uncertainty (not just rare cases), physicians won’t be replaced—because **medicine’s hardest problems aren’t puzzles to solve, but human experiences to navigate.**
The hype around "85% accuracy" obscures a deeper truth: **A diagnosis and treatment isn’t the destination—it’s one step in a uniquely human journey.**"
[17/07, 10:42]cm: Follow-up to the above by users (human agents) in the AIHI group:
[17/07, 09:33]hu1: India has missed many buses. We look only at constraints and problems and not the opportunities or benefits. We are missing the point that AI has the potential for *more for less* and Sustainable Universal Health Care (SUHC).
[17/07, 10:09]hu2: Probably the buses have restricted mobility due to too many missing bridges
[17/07, 10:17]hu1: Once a doctor told me - we doctors don't treat a patient. We remove all hurdles for a person to heal and get back to his/her life. The question we should ask is whether AI can help remove these hurdles.
[17/07, 10:25] hu2: Yes continuing with the previous analogy of restricted buses, the hurdles here are the absence of bridges and there's no reason why AI can't be that bridge
[17/07, 10:29]hu3: There is definitely some merit in your argument but, with all due respect, AI will answer as per its dataset and training.
We need the right materials for that bridge to be useful and sustainable
[17/07, 10:30]hu3: Else, we have seen too many bridges collapse. With higher reach and impact, the devastation multiplies
[17/07, 10:37]hu2: In essence, to summarise drastically, one needs perfect engineering solutions to deliver ideal medical solutions?
[17/07, 10:55]hu4: We love the bottom of the 2x2 matrix - that's our DNA. We profess to be in the top right corner - supported by cheer and rhetoric!
[17/07, 11:26]cm: More flow of rhetoric from the top right of the 2x2 values, action and execution matrix 👇
[17/07, 10:51]hu3: In essence, it would be a marriage between the two wherein doctors should resist the urge to become engineers and engineers should resist the urge to play doctor.
It may be imperfect like most marriages but that's the only way I see it
[17/07, 11:13]hu2: Integrative medicine in the true sense!
[17/07, 11:15]hu1: I thought AI (LLMs like Gemini or ChatGPT) use the same data (literature, books etc) as humans do. The difference is humans use very limited content whereas these LLMs use all that is known to humans. 😊
[17/07, 11:22]hu3:
This takes me back to a 1948 novel
The Count of Monte Cristo
If you've read it
[17/07, 11:23]hu2: Slight modification:
Humans use a lot of data that they capture for themselves and never share with any LLMs or textbooks (the deep dark web of medical cognition), and this personal data capture and processing (in the past aka clinical experience: "oh dear, he's such an experienced doctor!"...) may sometimes overtrump current-gen LLMs (starved of the patient-particular data that some very experienced doctors have)!
[17/07, 11:25]hu3: Agreed 💯%
[17/07, 11:25]hu3: The famous dictum:
The eyes see what the mind knows
[17/07, 11:26]hu3: It's not just about knowledge, as we have seen a meritorious student turn out an average doctor
And an average student an awesome doctor
[17/07, 11:27]hu3: The trick lies somewhere in between
[17/07, 11:29]hu5: The ability to get the best out of generative AI solutions such as ChatGPT, Gemini, Claude, Perplexity, etc. is critically dependent on crafting prompts that are *fit for purpose*. Until now, the human mind has been needed to determine, for example, when and which type of thinking should be used for crafting the prompts step by step to get the best out of a particular LLM, while knowing the constraints and limitations of that LLM. For example, when to use the Theory of Constraints, or TRIZ, or Tree of Thought, etc. in combination with, say, Mind Maps and/or Fishbone diagrams for this purpose. The art and science of prompt engineering is a new challenge or constraint that doctors (and all other human users) need to learn and master. The LLM is a paintbrush, the user is the painter, and all users have access to all the different types of paints and colours.
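As a minimal sketch of what such *fit for purpose* prompt crafting might look like in code (the framework templates and the `craft_prompt` helper below are illustrative assumptions, not any real library's API):

```python
# A minimal sketch, assuming hypothetical framework templates; this is
# not any vendor's real API.

FRAMEWORKS = {
    # Tree of Thought: ask the model to branch and evaluate alternatives.
    "tree_of_thought": (
        "Consider the problem below. Generate three distinct lines of "
        "reasoning, evaluate each for plausibility, then choose the best.\n"
        "Problem: {problem}"
    ),
    # Fishbone: organise contributing causes into fixed categories first.
    "fishbone": (
        "For the problem below, list possible causes under these headings: "
        "patient factors, medication, environment, process, measurement.\n"
        "Problem: {problem}"
    ),
}

def craft_prompt(problem: str, framework: str) -> str:
    """The human picks the thinking framework; the template shapes the ask."""
    return FRAMEWORKS[framework].format(problem=problem)

# The user is the painter, the LLM the brush: the framework (brushstroke)
# is chosen before the model is ever called.
print(craft_prompt(
    "Unexplained evening blood sugar spikes despite stable insulin doses",
    framework="fishbone",
))
```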
[17/07, 11:31]hu2: The trick for AI engineers now would be to devise a snare that can capture what the average awesome experienced doctor knows, by feeding more and more patient-particular experiential data to the average LLMs. Currently they have just some average empirical data from randomised controlled trials at best, which may not be able to match the precision-medicine, patient-particular data in the average awesome experienced doctor's head!
Again, one must admit that the average awesome experienced doctors may all become extinct soon, so AI can then rule with no human competition at all.
[17/07, 11:53]hu1: This hypothesis holds only for the few individuals who possess unique, unpublished *tacit knowledge*, whereas LLMs are democratizing both *explicit & tacit knowledge*.
[17/07, 11:43]hu1: Let me repeat what I tell my students.
1) Humans are good at asking questions, whereas machines are good at answering questions.
2) If you ask the right question, there is no guarantee the answer will be right. However, if the question is wrong, the answer will always be wrong. 😊
[17/07, 11:44]hu6: 1948? Nope. Early nineteenth century. Alexandre Dumas. Incidentally, his father was black and he was of mixed descent. He faced a lot of criticism. Also, the character Abbé Faria is based on a real abbot of Goan descent. He first made elaborate observations about the power of the mind and its ability to influence situations.
[17/07, 11:48]hu3: 1846
[17/07, 11:48]hu3: My bad
[17/07, 11:49]hu3: I used to have a 1948 print of it. While shifting house during the 2008 floods, it got washed away somewhere
[17/07, 11:51] GJ: 1. *"Humans are good at asking questions, whereas machines are good at answering questions."*
This aligns directly with the idea that *the human is the strategist and the LLM is the tool*. The *quality, clarity, and intent* of the *question* (i.e., prompt) are everything. Generative AI systems like ChatGPT, Gemini, or Claude do not originate curiosity or purpose—they *respond*. So the *burden of direction* lies squarely with the human.
In the earlier analogy:
* The *LLM is the paintbrush*.
* The *human is the painter*.
* And *the prompt is the brushstroke*, shaped by the question you ask.
So, *prompt engineering is essentially the craft of asking intelligent questions in the right structure and sequence*.
2. *"If you ask the right question, there is no guarantee the answer will be right. But if the question is wrong, the answer will always be wrong."*
This cuts to the heart of both the *limitations of LLMs* and the *responsibility of the human*.
* Even a well-formed prompt might produce an imperfect or hallucinated answer—due to the LLM’s *inherent limitations* (knowledge cut-off, no understanding, no reasoning in a human sense).
* But if the prompt is ambiguous, misleading, poorly scoped, or based on incorrect assumptions, *the AI will still answer—confidently—just wrongly*. This is especially risky in high-stakes domains (medicine, law, policy).
This echoes the "fit-for-purpose" principle discussed earlier: *effective interaction with AI begins with correctly framing the problem*. The *wrong prompt can derail the entire output*, while a good one at least gives you a chance to iterate and refine.
*So Why Is This Profoundly Important?*
Your two rules highlight:
* *Why humans remain essential.*
* *Why asking good questions is a critical 21st-century skill.*
* *Why LLMs are not decision-makers—but amplifiers of human intent.*
Together with the previous response, your quote *elegantly summarizes* why prompt engineering isn’t just a technical skill—it’s a thinking skill. A *thinking in questions* skill.
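To make the iterate-and-refine point above concrete, here is a minimal sketch; the `ask_llm()` stub is a placeholder for any chat-completion client, not a specific vendor API, and both prompts are illustrative only:

```python
# Sketch of the "burden of direction": the human supplies and refines
# the question, the model only answers.

def ask_llm(prompt: str) -> str:
    # Placeholder: substitute any chat-completion client here.
    return f"[model response to: {prompt[:48]}...]"

# A wrongly framed question yields a confidently wrong answer:
vague = "Is the insulin working?"

# A scoped question at least gives you something to iterate on:
scoped = (
    "Given a 9:15pm dose of 6u glargine and a 9:30pm glucose of 196 mg/dL, "
    "list (a) possible causes of the elevated reading and (b) what further "
    "data would distinguish them. State your uncertainty."
)

def refine(prompt: str, rounds: int = 3) -> str:
    """Iterate: answer, ask what the question left ambiguous, and fold
    the gaps back into the next version of the prompt."""
    answer = ""
    for _ in range(rounds):
        answer = ask_llm(prompt)
        gaps = ask_llm(f"What does this question leave ambiguous?\n{prompt}")
        prompt = f"{prompt}\nPlease also address: {gaps}"
    return answer

print(ask_llm(vague))    # answers confidently, however poor the framing
print(refine(scoped))    # the well-scoped prompt supports iteration
```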
[17/07, 11:58]hu2: Great paintbrush analogy! 👏
Here's another one with a spin to promote "agentic hybrid human and AI user driven healthcare":
In the Iron Age there were great developers who developed the sword, and yet it was the best sword user who became king!
[17/07, 11:52]hu3: Surprisingly, I have noted an aversion among students to asking questions
[17/07, 11:53]hu3: Which I feel has been critical to gaining knowledge through the ages. Gurukuls used to have debates for entry and graduation
[17/07, 11:55]hu7: Loving the discussion. May chip in, in some time, with my 2 paisa.
[17/07, 11:57]hu3: I used to start my lecture with
Less knowledge
Less confusion
More knowledge
More confusion
(Pause)
No knowledge
No confusion
So, if you're not confused, ask yourself: did I learn anything?
[17/07, 12:02]hu1: Let me state the process of learning and knowledge.
1) Unconscious Incompetence
2) Conscious Incompetence
3) Conscious Competence
4) Unconscious Competence
Point 4 above relates to *System 1 thinking*
[17/07, 12:02] hu1: I must stop here, and sorry for the lecture on psychology, philosophy, and AI. 🙏
[17/07, 12:03]hu3: Not at all
[17/07, 12:04]hu3: Isn't that the purpose of this group?
[17/07, 12:17]hu2: Just to share some examples (from our current real-time regular workflow) of LLM limitations in knowing, due to its inability to capture data, or human inability to capture all data, which currently gives the impression that our medical-cognition cosmological space is largely dark matter!
[16/07, 23:58] Patient Advocate 3F Diabetes1: 1.30pm blood sugar 250
1.30pm 4.5u lispro insulin
1.35pm lunch with same dish as previous, minus egg, plus fish
6.00pm sour curd
6.30pm cucumber
9.15pm 6u lantus insulin
9.30pm blood sugar 196
9.30pm 2.5u lispro insulin
9.35pm dinner with same dish as lunch
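As an aside, here is a minimal sketch of how such a free-text log could be turned into structured, machine-readable records; the `LogEntry` layout below is an assumed schema, purely for illustration:

```python
# A minimal sketch, assuming a made-up LogEntry schema: one way the
# free-text log above could become structured records that a program
# (or an LLM pipeline) can actually consume.

import re
from dataclasses import dataclass

@dataclass
class LogEntry:
    time: str    # e.g. "1.30pm"
    event: str   # e.g. "blood sugar", "lispro insulin", "meal/other"
    value: str   # e.g. "250", "4.5u", or free text for meals

RAW = """1.30pm blood sugar 250
1.30pm 4.5u lispro insulin
1.35pm lunch with same dish as previous, minus egg, plus fish
9.15pm 6u lantus insulin
9.30pm blood sugar 196"""

entries = []
for line in RAW.splitlines():
    time, rest = line.split(" ", 1)
    if (m := re.match(r"blood sugar (\d+)", rest)):
        entries.append(LogEntry(time, "blood sugar", m.group(1)))
    elif (m := re.match(r"([\d.]+u) (\w+) insulin", rest)):
        entries.append(LogEntry(time, f"{m.group(2)} insulin", m.group(1)))
    else:
        # Meals and everything else stay as free text for now.
        entries.append(LogEntry(time, "meal/other", rest))

for e in entries:
    print(e)
```

Even this toy parser shows where the "dark matter" hides: the meal lines resist structuring, and that is exactly the patient-particular experiential data the discussion above says current LLMs are starved of.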
[17/07, 12:07]pm: How much lantus left currently? @Meta AI what are the chances of reduced effectiveness of lantus dose due to long term storage over a prolonged period?
[17/07, 12:07] Meta AI: To determine the amount of Lantus left, we'd need more information about the patient's current prescription and usage. However, I can guide you on how to calculate it.
*Calculating Remaining Lantus Dose:*
1. Check the prescription label or consult with the patient's healthcare provider to determine the prescribed dosage.
2. Calculate the total amount of Lantus used since the last refill.
3. Subtract the total amount used from the initial amount prescribed.
Regarding the effectiveness of Lantus due to long-term storage:
*Storage and Handling:*
Lantus should be stored in the refrigerator at 36°F to 46°F (2°C to 8°C) until the expiration date on the label. Once in use, it can be stored at room temperature (up to 86°F or 30°C) for up to 28 days, away from direct heat and light.¹
*Reduced Effectiveness:*
The effectiveness of Lantus may be reduced if:
- *Expired*: Using Lantus past its expiration date.
- *Improper Storage*: Exposing Lantus to extreme temperatures, direct sunlight, or freezing.
- *Contamination*: Not following proper injection techniques or using contaminated needles.
To minimize risks, it's essential to follow proper storage and handling procedures and consult with a healthcare provider if there are concerns about the effectiveness of Lantus.
*Factors Affecting Lantus Effectiveness:*
- *Dosage*: Higher dosages may result in a greater blood sugar-lowering effect.
- *Timing of Administration*: Taking Lantus at the same time each day can help maintain consistent blood sugar control.
- *Food Intake*: Eating a balanced diet and monitoring food intake can impact blood sugar levels.
- *Physical Activity*: Regular physical activity can lower blood sugar levels and improve overall health.
- *Other Medications*: Taking other medications can impact blood sugar levels and the effectiveness of Lantus.
- *Medical Conditions*: Certain medical conditions can impact blood sugar levels and the effectiveness of Lantus.
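For illustration, the three-step remaining-dose calculation Meta AI outlines could be sketched as follows; the 300-unit pen size (3 mL × 100 U/mL) and the days-in-use figure are assumptions for illustration, while the 6u nightly dose is taken from the log above:

```python
# The three-step "remaining dose" calculation, made concrete under
# stated assumptions; not dosing advice.

PEN_CAPACITY_UNITS = 300      # one 3 mL, 100 U/mL pen (assumed size)
NIGHTLY_DOSE_UNITS = 6.0      # 9:15pm lantus dose from the log
days_in_use = 20              # hypothetical: days since the pen was opened

# Steps 2 and 3 of the guidance: total used, then subtract from initial.
used = NIGHTLY_DOSE_UNITS * days_in_use
remaining = max(0.0, PEN_CAPACITY_UNITS - used)
print(f"Used {used:.0f}u; roughly {remaining:.0f}u left in the pen.")

# The same guidance caps in-use pens at 28 days at room temperature,
# so the remaining units are only usable inside that window.
days_left_by_expiry = max(0, 28 - days_in_use)
print(f"Usable for about {days_left_by_expiry} more days before discard.")
```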
[17/07, 12:10]pm: @Meta AI can you support this hypothesis about lantus insulin efficacy reduction due to prolonged storage with some scientific literature such as long term observational data with a comparator?
[17/07, 12:10] Meta AI: Lantus insulin efficacy can indeed be affected by storage conditions. Here's what you need to know:
*Storage Guidelines:*
- Unopened Lantus vials or pens should be refrigerated at 36°F to 46°F (2°C to 8°C).
- Opened Lantus vials or pens can be stored at room temperature, below 86°F (30°C), for up to 28 days, away from direct heat and light.
*Factors Affecting Efficacy:*
- *Temperature Exposure:* Storing Lantus above 86°F (30°C) or freezing it can impact its effectiveness.
- *Expiration Date:* Using Lantus past its expiration date can lead to reduced efficacy.
- *Contamination:* Improper handling or using contaminated needles can compromise Lantus effectiveness.
*Signs of Compromised Lantus:*
- Visible changes, such as cloudiness, discoloration, or particles
- Unexpected fluctuations in blood sugar levels
While there's no direct link to specific long-term observational studies, it's clear that proper storage is crucial for maintaining Lantus efficacy. Healthcare providers recommend adhering to storage guidelines and monitoring insulin for any visible changes.¹ ²
To confirm whether Lantus storage has impacted its effectiveness in a specific situation, consult with a healthcare provider for personalized guidance. They can assess the insulin's storage conditions and provide recommendations on usage or replacement.³
[17/07, 12:29]hu5: Prompting starts with but goes far beyond asking the right questions.