Sunday, December 14, 2025

UDLCO CRH: Getting the DPDP Act together into a regular WhatsApp-driven workflow and avoiding DPDP-induced dapadapi

Summary:

The conversation revolves around the Digital Personal Data Protection (DPDP) Act in India and its implications for healthcare, particularly regarding the use of WhatsApp for sharing patient data. The discussion highlights concerns about data security, patient confidentiality, and the need for compliance with the DPDP Act. Participants share experiences, suggestions, and concerns about implementing secure practices, such as de-identifying patient data at source and obtaining informed consent, while not demonising WhatsApp but treating it as a communication tool that needs to be handled with good clinical practice.

*Key Words:*


- DPDP Act

- Healthcare data security

- WhatsApp

- Patient confidentiality

- Data protection

- Compliance






Conversational learning Transcripts:




[10/12, 20:19]hu1: Looking for someone to talk on DPDP at the upcoming FHIR meet on the 20th. A 30-45 min session on the topic, to spread awareness in the context of digital health and data exchange.


[10/12, 20:30]hu2: My vote would be for @⁨⁩ 👍

He has been an immense help in making us DPDP compliant, and in spite of the initial fear of handling all the seemingly impossible challenges, we are now relaxed and have actually started enjoying our regular DPDP consent-taking workflow, after all the help we received from everyone in this group to redesign it as here 👇



[11/12, 08:26]hu3: GDPR Breach in Healthcare

One Wrong Email. One Open Mailing List. Two GDPR Fines in Italy.

In 2025, Italian regulators penalized two healthcare institutions for simple mistakes:
Sending patient data to the wrong email recipient (€18,000 fine)
Using an open email distribution list that exposed patient identities (€8,400 fine)

No hackers. No malware.
Just everyday communication errors — yet serious enough for GDPR penalties.

Read full case: https://lnkd.in/d8dMYrTK


[11/12, 08:35]hu2: It helps to deidentify all data at source. I make it a point to regularly address the patient in a deidentified third person, even while talking to the same patient in PM.


[11/12, 08:48]hu4: That breaks the emotional bond. I'd rather hear my mom or Dr saying: ... you are eating too many rasgullas.

If I get an anonymous message saying eating rasgullas is harmful to health, I will happily ignore it.

Disclaimer: Being a foodie, I explore the world with my tongue 👅


[11/12, 08:56]hu2: We are only anonymous to our current Orwellian machine systems, but otherwise we do humanly recognise each other in our triadic doctor-patient agentic relationship



[11/12, 19:04]hu4: New Aadhaar cards for everyone from December

UIDAI new Aadhaar card redesign.

👉 Major changes in December!
👉 Why is this?
👉 How is it going to be?

Full details…
👇
The Unique Identification Authority of India (UIDAI) is all set to completely redesign the Aadhaar card.

The new Aadhaar card will be released across the country from December.

This new Aadhaar card is being designed with privacy & security as its main objective.

Old Aadhaar vs New Aadhaar: Key Differences

What will the new Aadhaar card look like?
Only:
✔️ Photo
✔️ QR code
will be visible on the card.

These details that were present in the old Aadhaar will no longer be visible
❌ Name
❌ Aadhaar number
❌ Address
❌ Date of birth
❌ Gender
That is, no personal information will be printed on the card.

What is in the QR code?

The QR code in the new Aadhaar will contain…

➡️ Name
➡️ Aadhaar Number
➡️ DOB
➡️ Address
➡️ Gender
➡️ Biometric Verification Data (in encrypted form)
All will be encrypted.

This QR code can be decoded only by
✔️ Government Authorized Scanners
✔️ UIDAI Official Apps
✔️ Verification Devices
.
Hotels, event managers, offices: no one can view the data by taking a photocopy.

Why has such a big change been made?
According to UIDAI, the purpose of the new Aadhaar redesign is…

🔸 1. Privacy Protection: Currently, hotels, event organizers, and private offices take photocopies of Aadhaar and store names, addresses, DOBs, and Aadhaar numbers. This is a big risk of data leakage. This is impossible with the new Aadhaar.

🔸 2. Stopping Aadhaar data misuse
Thousands of frauds are being committed through Aadhaar photocopies.

Since the new card does not have printed information:

➡️ Data misuse
➡️ Cloning
➡️ Fraud activities
The problems will be reduced significantly.

🔸 3. Upgrade to meet Digital India standards

UIDAI's goal: To make the entire identity system secure, clean and tamper-proof.
The new QR code-based system is designed to meet world-class security standards.

🔻 What will change in the future?

✔️ It is safe to give the Aadhaar card to anyone
Because there are no printed details.

✔️ Even if hotels / gated communities / event entries / banks take a photocopy of your card,
none of your personal data will be visible.

✔️ Digital verification only
Data is confirmed by a single QR code scan.

When will the new Aadhaar be available?

➡️ Release from December
➡️ Distribution across the country in phases
➡️ The old Aadhaar will remain valid, but switching to the new Aadhaar may be recommended.

This is the biggest privacy reform in India's digital identity system!
With this decision, UIDAI is entering a new phase in the data security of Indian citizens.

[11/12, 20:07]hu1: Thank you Sir!! @⁨ has graciously agreed.


[12/12, 08:45]hu5: True?


[12/12, 08:48]hu2: Very likely!

The DPDP Act needs to get its act together ASAP.

My own data as a current voter in my village near Ramoji film City is available open access here:

@everyone

Image: 

[12/12, 09:00]hu3: WhatsApp is everywhere in healthcare. Doctors forward X-rays, nurses share wound photos, labs push reports, call centres confirm appointments and families receive discharge notes by message. For clinicians this is convenient and immediate. For the law it is chaos.

Using WhatsApp to handle patient information may seem practical, but it is structurally unsafe, legally fragile, and now potentially ruinous under India’s Digital Personal Data Protection regime. What frontline staff treat as routine will be treated by regulators as unlawful processing, avoidable negligence, and in many cases an offence that invites crippling penalties and public naming.

This is not alarmism. It is a warning based on how global regulators and law are already treating instant messaging in clinical practice. This article explains how and why this practice must stop, what can happen if it continues, and how to replace WhatsApp with lawful, practical alternatives.

Read full article: https://lnkd.in/dbPG7HTQ


[12/12, 09:31]hu2: Modified short, urgent checklist to implement this week, from the full article here: https://lnkd.in/dbPG7HTQ

For your comments @

1) Issue an immediate moratorium on sharing identifiable patient data on personal WhatsApp accounts. 

Modified addendum: Ensure that all patient data is deidentified and signed informed consent is gathered before sharing, using a form that can be downloaded here: https://userdrivenhealthcare.blogspot.com/2025/10/pajr-consent-form-modified-for-dpdp.html?m=1

2) Identify an approved secure messaging platform and begin rapid roll-out to all clinical teams.

Addendum caveat: WhatsApp is a communication platform, and all communication platforms are unsafe and hackable as long as we keep freely sharing sensitive personal or patient data on them. What is more important and urgent is to initiate the roll-out of good clinical practices: obtaining signed informed consent and deidentifying data before engaging through any communication interface.


3) Provide work phones or configure enterprise containers so data is separable and remotely wipe-able.

Addendum caveat: That's a huge investment, with many market players and hardware and software sellers likely to profit, which makes it a difficult proposition.


4) Update consent forms and patient communication preferences to record if any patients insist on non-standard channels.
Document every such request.

Addendum: Make this a routine workflow regardless of which end the request originates. For most humans privacy is a trade off to benefit from transparency and accountability!


5) Audit the last three months of messaging flows for incidents and prepare breach notifications if required.

Very important: any takers to audit our deidentified, consented data for any identifiable breaches here? https://pajrcasereporter.blogspot.com/?m=1

Potential paper: @

6) Require processor agreements, audit rights and deletion clauses from any third-party messaging or automation vendor.


Addendum: refer to the caveats under items 2 and 3.

7) Train clinicians and managers on legal risks and report any policy breaches immediately.

Absolutely necessary as a part of the regular workflow
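The moratorium and consent addenda in the checklist above amount to a gate that every outgoing message should pass: recorded consent plus a deidentification check. A minimal sketch in Python, with hypothetical function names and patient references; nothing here is from the PaJR tooling, and the regexes are deliberately simplistic placeholders:

```python
import re

# Hypothetical pre-send gate combining the checklist's two core habits:
# verify recorded consent, then confirm the text carries no obvious identifiers.
IDENTIFIER_PATTERNS = [
    re.compile(r"\b\d{10}\b"),                # phone-like 10-digit numbers
    re.compile(r"\b[\w.+-]+@[\w-]+\.\w+\b"),  # email addresses
]

def has_consent(patient_ref: str, consent_log: set[str]) -> bool:
    """Consent is recorded per deidentified patient reference, never per name."""
    return patient_ref in consent_log

def looks_deidentified(text: str) -> bool:
    """Crude screen: reject text containing any identifier-like substring."""
    return not any(p.search(text) for p in IDENTIFIER_PATTERNS)

def may_send(patient_ref: str, text: str, consent_log: set[str]) -> bool:
    return has_consent(patient_ref, consent_log) and looks_deidentified(text)

log = {"PAJR-2025-014"}  # illustrative deidentified reference
print(may_send("PAJR-2025-014", "62M, troponin positive, ECG attached", log))  # True
print(may_send("PAJR-2025-014", "Call relative at 9876543210", log))           # False
```

A real gate would sit in whatever tooling mediates the send; the point is that the check runs before, not after, the message reaches any communication platform.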


[12/12, 09:39]hu10: Any good recommendations for low-cost alternatives to WhatsApp for clinical customer notes?

[12/12, 09:41]hu11: Great question, and great leverage of the community! Members should really ask what tools others are leveraging and how it is going, the good and the not-so-good!

[12/12, 09:42]hu12: Signal

[12/12, 09:43]hu13: Arattai.


[12/12, 09:54]hu14: Need more details. WhatsApp helps multiple people communicate.  You could try my Google Doc and Shared folder option. A document is primarily better for clinical documentation while WhatsApp will give you a bunch of messages.

[12/12, 10:34]hu10: This is not prescription related

[12/12, 10:38]hu10: The issue is that at a lot of hospitals, clinicians share case studies over WhatsApp when they are stumped (or want to share good news) with internal groups of clinicians and experts (including HoDs). Most are poorly anonymised, e.g. name and ID scribbled over. We want to reduce the risks to all parties (including the institutions).

I am the Chairperson of AHPI's Digital Committee - so this is likely to get significant uptake


[12/12, 10:41]hu14: Great. Since the comms are internal, maybe they will be better off with Teams, an organisation-level communication system. Even better would be a similar comms channel integrated with their EMR system.

[12/12, 10:41]hu14: WhatsApp wins due to its simplicity and almost ubiquitous presence. But what is the security, and how easy is it to tie up relevant info?

[12/12, 10:46]hu14: Key point is that WhatsApp-based exchange has no restriction. Soon the record can be floating in the general WhatsApp universe. How does one stop that?

[12/12, 10:50]hu2: [Reposts the modified short, urgent checklist from 09:31 above, with the same addenda, in reply.]


[12/12, 10:03]hu3: Ah, governments interfering with free markets! Didn't expect to see that.

As long as two parties are consenting with full volition - the only purpose of the government is to honor the contract agreed by those 2 parties.

'structurally unsafe and legally fragile' are hollow words and sound very LLM written to me.


[12/12, 10:54]hu2: Security lies in the user's desire for security. Think of how humans have solved their security and privacy issues throughout evolution! Their first step toward this was to get a room (tree house or cave) and cover themselves with earthly materials distinct from their other animal peers who were okay to remain naked!


[12/12, 10:55]hu3: In this case, also the patient's desire. The user is talking about the patient.


[12/12, 12:43]hu4: I think WhatsApp can still be used with explicit consent from the patient asking the provider to send information by WhatsApp. A consent form needs to be drafted for *the proposed way out* from the clutches of DPDP!

[12/12, 13:05]hu2: We have already shared the link to our drafted consent form that we are using regularly but I'm sharing it here again for further inputs 👇



[12/12, 13:14]hu2: Think of identifiable potent patient data as these leftover atracurium vials! 👇

⚖️ MEDICOLEGAL CASE STUDY 💉🩸📿

*"Atracurium Injection Tragedy: Chain of Negligence Leading to Two Deaths"* "Unsecured Anaesthetic Drug → Diversion → Two Deaths" *Neurosurgeon and 4 others booked for negligence*


1. Background 🩺💉

- Two adults found dead in an autorickshaw after self-injecting Atracurium 25 mg, a powerful anaesthetic muscle relaxant.
- A third person survived with dizziness.
- Police traced the drug back to a hospital OT, where leftover vials were not secured after surgery.

2. Key Facts 📌
- Hospital purchased 25 vials for neurosurgery. Only 4 vials used; the remaining vials were left unattended in the OT.
- A staff member stole the vials due to easy access.
- Stolen vials were illegally sold in the local drug network.
- Three individuals injected themselves → 2 deaths due to respiratory paralysis.

3. Negligence Identified ⚠️
A. Hospital / Doctor-Related
- Lack of secure storage for controlled anaesthetic drugs.
- No drug register, inventory log, or pharmacy billing.
- Untrained personnel had free access to OT drugs.
- No post-procedure drug reconciliation (used vs. unused vs. returned).

B. Staff-Related 👨‍🔧
- Theft of vials due to poor supervision.
- Illegal distribution and street-level drug misuse.

4. Legal Implications ⚖️🚨
- IPC 304 – Culpable homicide (negligence enabling death).
- Drugs & Cosmetics Act – Improper storage, no documentation, diversion of a controlled drug.
- Hospital vicarious liability – Failure in governance & supervision.
- NMC professional misconduct – Unsafe drug handling practices.

5. Medicolegal Analysis 🧐
- Even though the injections were taken outside the hospital, the chain of causation began with:

👉 Unsecured drugs inside the OT
This foreseeable risk makes it gross negligence of omission.

6. TAKE-HOME MEDICOLEGAL MESSAGES 🧠💡
✔ Secure Anaesthetic & Critical Drugs 🔐
- Double-lock storage
- Limited key access
- Authorised staff only

✔ No Unused Vials Left in OT 🚫💉
- Return immediately to pharmacy
- Document received–used–returned–wasted

✔ Restrict Access 👮‍♀️
- Ward boys / cleaners must never access drugs
- Controlled drug cupboards under CCTV

✔ Maintain Proper Records 📑
- Drug issue–return register
- Pharmacy indent & billing
- OT logbook
- Daily reconciliation

✔ Criminal Liability Is Real 🚔
- Hospital negligence → drug theft → misuse → death = Charges against doctor, staff, and hospital administration

🔎 One-Line Summary
🩺
"A drug left unsecured in the OT can become a death outside the hospital — and a criminal case against you."

🩺⚖️ Dr.
Medicolegal Consultant
Nashik


[12/12, 16:30]hu5: Hospital Management needs to learn from this street vendor... what is enough!


[12/12, 16:37]hu2: Hospital management or the owners don't have an alternate source of revenue, but hospital employees who get enough salary do continue to go out of their way regularly to provide neem juice services to their patients.

[13/12, 07:50]hu6: How will tweaking the DPDPA help? The rules and guidelines can be given. However, if they aren't implemented, the consequences are out there for all to witness.

[13/12, 07:51]hu7: Isn't DPDP bad enough with an axe of ₹250 crore on our doctors' necks?

[13/12, 07:52]hu6: The axe will fall only if folks wilfully disregard the provisions. Like using WhatsApp like mad.


[13/12, 07:53]hu7: Even without it the Act is draconian. A true case of throwing the baby out with the bathwater.


[13/12, 07:58]hu7: เคฌเคกॆ เคฒोเค— เค•ो เคฎाเคฐเคจे เคจिเค•เคฒे, เค†เค˜ เคฒเค—ा เคฆी เคœंเค—เคฒ เคฎे

[13/12, 07:59]hu6: _Au contraire mon ami_ (on the contrary, my friend), it actually is super duper for healthcare. Just follow the rules and there will be that hint of spring in the air as the gentle fragrances of newly bloomed flowers waft through the air like a zephyr to calm our souls and delight our hearts.

[13/12, 08:02]hu7: À la Silence of the Lambs? We go willingly to the slaughterhouse, bobbing our heads.

Nay sire. DPDP 2023 is unfit for healthcare. And education.

Needs to be chopped up and redone. One size cannot fit all. Never.


[13/12, 08:03]hu6: Perhaps you can justify your statement?

[13/12, 08:07]hu7: Start with the penalty. In my A&E I am not 100% sure of even the straightforward cases, let alone the red herrings.

How does one practise medicine? With a knife held to the neck.


[13/12, 08:17]hu2: One just needs to be careful to pick up the leftover atracurium vials, that's all 👍

[13/12, 08:24]hu6: The Act makes it explicit: wilful disregard, non-reporting of breaches. If due care is exercised and ABDM compliance with robust security features is implemented, then no penalty. Yes, there is a good possibility that there will be legal challenges, but adherents will remain unmolested. Unjustified fear is not good. And note that the Act makes it legal to use data for research provided one takes consent. In the absence of consent, anonymised data can be used. I would not recommend it, as there is always a chance that someone will find all sorts of legal contortions to try to prove that the data principal intended otherwise.

[13/12, 08:49]hu2: Unmolested! 😅

[13/12, 09:07]hu7: Are you responding to the earlier chain on atracurium?

[13/12, 09:09]hu7: Spot on (sahi pakde).

[13/12, 09:10]hu2 : I would also suggest a few more members from this community here for your DPDP meeting on 20th. I'm sure you would have approached many of them already.

@⁨ @⁨ @⁨ who will bring effective triangulation on this topic to the table.

Popular speakers such as @⁨ @⁨ also come to mind.

@⁨ can bring mainstream medicine expertise from his current NHS and his past India experience

 @⁨ can bring her global experiences on grounded research around usage models 

 @⁨⁩ is a very young and amazing person who brings years of maturity to the table with astounding insights born out of his grounded regular immersion in hospitals for his start-up 


@ a PhD in law, she has been instrumental in designing DPDP consent forms and would bring a lot of value

@⁨⁩ a young mainstream medicine specialist with an interest in informatics is the right person who will grow into your panel

I'm sure you have asked @⁨⁩ who is the most balanced global mentor on DPDP available

[13/12, 09:10]hu2: The left over chain

[13/12, 09:12]hu7: SBB and I were 🤺 on DPDP - dapidapi

[13/12, 09:12]hu2: In Bengali it's called dapadapi! 😅


[13/12, 09:14]hu7: Dapi in Tamil = small container. It is also a derogatory word for silly


[13/12, 09:15]hu2: 👆 the emoji that you have shared here is all about a swordsman doing dapadapi in Bengali


[13/12, 09:21]hu7: Sword fight?


[13/12, 09:26]hu8: The DPDP Act & the ₹250 crore bomb

It is one more case of lack of coordination in policy pronouncements and Acts. 

There are no coordinated steps taken from the health ministry, NABH, NMC, MeITY to build awareness and capabilities of addressing the DPDP act's smooth implementation in healthcare. 

I see more fear mongering from so-called experts. 

When the EMR implementation rate in Indian hospitals is 10 percent (as per a FICCI study), the whole scare-mongering will push back the much-needed Digital Health Mission in India.

How will every doctor in cities, towns and villages be tracked? 

Like the truckers' strike against the draconian laws, which compelled the government to roll back the law a couple of years ago.

Patient Data Safety is important, but how to build awareness, capabilities and capacities in Indian healthcare is the moot question. 

We are seeing only one red flag: the ₹250 crore fine.

Such poorly thought-out acts should not become tools of harassment for the medical fraternity.

Equally important, the healthcare community also needs to ready itself for the DPDP Act and not be caught unaware as far as possible.

Shortcuts won't help us

The DPDP implementation and Execution - Keep watching this space


[13/12, 09:26]hu2: AI to the rescue:

"দাপাদাপি" (dapadapi) in Bengali means noisy running around, romping, bustling, or showing off/bullying with authority, often involving loud footsteps or a show of dominance, like children playing wildly or a powerful person acting imperiously. It conveys a sense of energetic, sometimes disruptive, presence or a repeated display of power.


[13/12, 09:31]hu2: Will be posting more, from deidentified healthcare institutional grounds and different physician groups, on the amount of dapadapi disruption DPDP has currently created, which is much more than accidentally disturbing wild beehives; but in the end, as @⁨SBB⁩ said, one can look forward to beneficial outcomes.


[13/12, 09:32]hu7: Oh man. You killed the buzz (nasha utaar di).

[13/12, 09:32]hu7: I thought we were like bulls in a Spanish ring


[13/12, 09:33]hu8: The act per se is not wrong. 

The implementation and Execution challenges for the stakeholders ( medical fraternity) are a big challenge.


[13/12, 09:33]hu2: Bulls in a Spanish ring fighting each other would be labeled in Bengali as doing dapadapi


[13/12, 09:33]hu2: The end will come after acceptance comes to an end, no?


[13/12, 09:34]hu2: And I'm sure we shall all rise to the challenge

[13/12, 09:34]hu2: Yes end is always after dapadapi ends and everyone relaxes and goes to sleep

[13/12, 09:35]hu7: Those of us on the ground know what a losing battle we fight when it comes to eHealth.

This is one more millstone round my neck now.


[13/12, 09:35]hu7: Always look for opportunities. Problems are omnipresent. Finding avenues to effectively address them is the real trick. The entire profession of law is built around it.


[13/12, 09:36]hu7: Who in their right mind will come up with such a fine?

[13/12, 09:36]hu7: Aha..so we remain heroes


[13/12, 09:37]hu6: Those who can do, do. The rest fret, howl, shriek.


[13/12, 09:38]hu7: I am not so confident, being the doubting Thomas, having been beaten black and blue by loving customers.


[13/12, 09:39]hu7: Like anbe sivam dialogue: 2to2to2




[13/12, 09:42]hu2: Golden words of wisdom ๐Ÿ‘๐Ÿ‘


[13/12, 09:42]hu2: Translating again in Bengali:

Those who can do DPDP do DPDP.

The rest do dapadapi!


[13/12, 09:43]hu7: Had a good dapadapi yesterday night with one of your students. In the end we smoked the peace pipe.


[13/12, 09:43]hu7: You are in full-on weekend mood.


[13/12, 09:44]hu2: With a busy OPD to boot


[13/12, 09:45]hu2: Lucky to have all the physical healthcare dapadapi done by our students while I take care of the WhatsApp dapadapi front.




*Thematic Analysis:*


1. *DPDP Act and Healthcare:* The conversation emphasizes the importance of understanding and complying with the DPDP Act in healthcare settings to protect patient data.

2. *Data Security Concerns:* Participants express concerns about the risks associated with sharing patient data on WhatsApp and the need for secure alternatives.

3. *Compliance and Implementation:* The discussion highlights the challenges of implementing DPDP compliance in healthcare, including obtaining informed consent, deidentifying patient data, and training staff.

4. *Alternatives to WhatsApp:* Participants suggest exploring alternative secure messaging platforms, such as Signal or Teams, to ensure compliance with the DPDP Act.

5. *Awareness and Education:* The conversation emphasizes the need for awareness and education among healthcare professionals about the DPDP Act and its implications for their practice.
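The de-identification practice recurring through these themes can be made concrete. Below is a minimal illustration in Python, assuming simple regex scrubbing of a few identifier-like patterns before a note is shared; real de-identification under the HIPAA Safe Harbor method covers 18 identifier categories and needs far more than regexes, so treat this purely as a sketch:

```python
import re

# Hypothetical, minimal scrubber: masks a few common identifier patterns
# before a clinical note is shared on any messaging channel.
# This is NOT a complete HIPAA Safe Harbor implementation.
PATTERNS = [
    (re.compile(r"\b\d{4}\s?\d{4}\s?\d{4}\b"), "[AADHAAR]"),       # 12-digit Aadhaar-like numbers
    (re.compile(r"\b(?:\+91[\s-]?)?\d{10}\b"), "[PHONE]"),         # Indian mobile numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),       # email addresses
    (re.compile(r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b"), "[DATE]"),  # dates of birth etc.
    (re.compile(r"\b(?:Mr|Mrs|Ms|Shri|Smt)\.?\s+[A-Z][a-z]+"), "[NAME]"),  # titled names
]

def deidentify(note: str) -> str:
    """Replace identifier-like substrings with placeholder tags."""
    for pattern, tag in PATTERNS:
        note = pattern.sub(tag, note)
    return note

print(deidentify("Mr. Sharma, DOB 12/03/1961, phone 9876543210, c/o chest pain."))
# → [NAME], DOB [DATE], phone [PHONE], c/o chest pain.
```

Such a scrubber catches only patterned identifiers; free-text names, addresses and rare-disease details still need a human eye, which is why the transcripts stress deidentification at the point of capture rather than automated cleanup afterwards.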

UDLCO CRH: DPDP, an ethical healthcare disruptive innovation, and its resulting impact causing pan-India dapadapi on WhatsApp

Summary: Aftermath of the DPDP Act implementation news seeping into institutions:



Sample pan-India institutional health administration notices (deidentified) to all faculty to delete WhatsApp groups:

[12/12, 13:18]: Department of Biochemistry official group deleted sir.

[12/12, 13:25]: Good afternoon sir Department of Pathology official group deleted

[12/12, 13:29]: Good afternoon sir. Microbiology department whatsapp group deleted.

[12/12, 13:33]: Good afternoon sir. DVL department whatsapp group deleted.


[12/12, 14:18]: Physiology department faculty WhatsApp group dissolved.


[12/12, 14:49]: Good afternoon sir. Forensic Medicine department whatsapp group deleted.

[12/12, 16:00]: Anaesthesia group deleted


[12/12, 17:32]: Good evening sir. We don't have a PG faculty group or any UG faculty group; we had only the ENT faculty group. We deleted that today sir. Thank you.
Department of ENT


[12/12, 19:34]: Ophthalmology WhatsApp group deleted sir

Other groups: 

[12/12, 20:42]hu1: Deidentified Foundation
Welcomes you on a webinar on
Doctor’s ... Special on DPDP Act & Medical AI Compliance
on 12th Dec 2025 at 9 pm to 10.30 pm

Delegate Registration and Joining Link 



[13/12, 08:09]hu3: It was a wonderful session, an eye-opener.

New laws, their additional compliance, the complexity involved in day-to-day simple data handling at routine consultations, more legal burden on clinicians, more documentation requirements, additional data security liability, and more and more cost: it all looks like a scary future, especially when we are shifting to digital practice.

But it's inevitable; we have to at least know the laws perfectly and start implementing them, for the betterment of society at large.

Can we get the recording please? It will be very helpful.

Big thanks to *deidentified* 🙏


[13/12, 08:22]hu2: Here's a short, urgent DPDP checklist to implement in every physician workflow, modified from the full article shared yesterday in another group: https://lnkd.in/dbPG7HTQ

(The same checklist and addenda as posted in the earlier group transcript above: a moratorium on sharing identifiable patient data on personal WhatsApp accounts, with deidentification and signed informed consent via https://userdrivenhealthcare.blogspot.com/2025/10/pajr-consent-form-modified-for-dpdp.html?m=1; an approved secure messaging platform; work phones or enterprise containers; consent forms recording any non-standard channel requests; an audit of the last three months of messaging flows, with the open invitation to audit https://pajrcasereporter.blogspot.com/?m=1 for identifiable breaches; processor agreements, audit rights and deletion clauses from vendors; and training clinicians and managers on legal risks as part of the regular workflow.)


[13/12, 08:32]hu4: The webinar explains how the Digital Personal Data Protection (DPDP) Act tightens rules on collecting, using, and sharing health data in India while still allowing well-governed use for care, research, and public health.

*What counts as health data*
• Health records, diagnostics, prescriptions, insurance info, app-generated vitals
• Treated as sensitive data
• Anything beyond direct care needs a clear legal basis and usually explicit consent

*Key duties for healthcare entities*
• Hospitals, clinics, labs, TPAs, insurers, health-tech firms act as data fiduciaries
• Need strong privacy design, security, breach reporting, and purpose limits
• Must give clear notices, take and record consent, allow withdrawal, and support access, correction, and deletion

*Impact on daily workflow*
• Registration, OPD/IPD, labs, teleconsults, discharge all need consent and purpose clarity
• Staff training needed to stop sharing reports via WhatsApp, email, or unsecured drives
• Standard processes needed for sharing reports, images, and referrals

*Research, AI, and secondary use*
• Allowed for research and public interest if data is de-identified and governed well
• AI training, registries, and analytics need ethics checks and strong safeguards
• Commercial reuse needs granular consent, opt-outs, and full transparency

*Patient rights and provider risks*
• Patients can ask how their data is used, request correction or deletion, or complain to the Board
• Breaches, unlawful sharing, poor consent records, or ignoring requests can trigger heavy penalties and reputational harm



[13/12, 08:33]hu4: Another useful webinar on the same topic was conducted by FICCI, which I attended recently; they are planning an entire further deep-dive series on this, I guess 👍


[13/12, 08:33]hu3: The immediate fallout will be on our WhatsApp groups involving residents, care coordinators and nursing staff.

For example, we have a chest pain triage group in which our ER residents post clinical information, ECGs and lab reports for expert opinion from the cardiologist and intensivist. Later, if the patient is referred to a particular cardiologist, the referral letter is shared with that cardiologist as well as their hospital ER and coordinator, so that the cath lab is mobilized and, by the time the patient reaches, the pre-cath investigations are ready and primary angioplasty can be done without any delay.

It will be difficult to hide patient details in lab reports and referral letters; at the same time, anonymisation may lead to errors.

How do we go further? Is any other compliant platform available?


[13/12, 08:36]hu3: Maybe doctors themselves should envision and start a platform wherein patient identity is masked automatically, so that sharing the data becomes safe.

Big brothers to look into this 🙏


[13/12, 08:38]hu2: The solution is to train everyone in existing groups on good clinical practices in data handling: train them on how to obtain consent (including video consent for those who can't read and write), and then how to de-identify all patient data, even at the point of data capture, as per HIPAA guidelines. We have been doing this for a decade, long before DPDP etc.
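De-identification at the point of capture can be partly automated before a message ever leaves the device. Below is a minimal sketch: the pattern list and replacement tokens are illustrative assumptions only. Real HIPAA Safe Harbor de-identification covers 18 identifier classes (names, all geography smaller than a state, all dates, MRNs, biometrics, etc.) and still needs human review; regexes alone will miss names entirely.

```python
import re

# Minimal de-identification sketch: masks a few common identifier formats
# before a caption or report summary is shared. Patterns are illustrative,
# not a complete Safe Harbor implementation.
PATTERNS = [
    (re.compile(r"\b[6-9]\d{9}\b"), "[PHONE]"),               # 10-digit mobile
    (re.compile(r"\b\d{4}\s?\d{4}\s?\d{4}\b"), "[AADHAAR]"),  # 12-digit ID
    (re.compile(r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b"), "[DATE]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b(?:MRN|UHID)[:\s]*\w+\b", re.I), "[RECORD-ID]"),
]

def deidentify(text: str) -> str:
    """Replace each matched identifier with a neutral token."""
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

msg = "62M, UHID: 48213, ph 9876543210, admitted 13/12/2025 with chest pain"
print(deidentify(msg))
# -> "62M, [RECORD-ID], ph [PHONE], admitted [DATE] with chest pain"
```

A wrapper like this could sit between the resident and the triage group, so the clinical content (age band, sex, complaint, ECG findings) survives while direct identifiers do not.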


[13/12, 08:42]hu3: Wonderful Sir..

We are still naive in this field.

Would like to have some structured practical program or course on this issue, especially getting digital consent and de-identifying the data.

Any reference or article will be very helpful. 🙏🙏


[13/12, 09:14]hu2: Here could be a starting point 👇



Tuesday, December 9, 2025

UDLCO CRH: Harnessing agentic AI into the traditional dyadic doctor patient relationship

*Summary:*


The conversation revolves around the integration of Artificial Intelligence (AI) in healthcare, specifically in the doctor-patient relationship. The discussion touches on the use of AI-powered tools, such as Google Workspace, Gemini, and other Large Language Models (LLMs), to enhance patient care, streamline medical data management, and support clinical decision-making. The participants share their experiences and ideas on leveraging AI to improve healthcare outcomes, while also addressing concerns around data security, HIPAA compliance, and the limitations of current AI models in medical decision support.

*Key Words:*

- AI in healthcare
- Doctor-patient relationship
- Agentic AI
- Medical data management
- Clinical decision support
- LLMs
- Google Workspace
- Gemini
- HIPAA compliance
- Data security










Conversational transcripts:

One group:



[10/12, 08:08]hu2: Thanks. Can you share the full text?

It's interesting how we broke the dyadic doctor-patient relationship through PaJR CBBLE-driven, team-based Web 2.0 learning but couldn't call it triadic, as there were multiple human agents in the team! 👇



Another group:

[09/12, 18:28]hu3: Can Google Sheets or Docs be used to store medical data, say patient records?

[09/12, 18:28]hu3: Many practitioners use spreadsheets to store some basic appointment info, some even as a rudimentary EMR.

[10/12, 08:58]hu4: Short answer is yes. Long answer is that you need to ensure that access controls are strict and the ability to copy/share data is pretty much locked down. Depending on the country: if you are doing this for HIPAA compliance you will need a BAA from Google. The only way to get that is to have a Workspace account, which adds to the auditing requirements etc.

[10/12, 09:01]hu4: @⁨Group Moderator⁩ I think this group is really useful. My only challenge is the volume of messages — if I fall behind, it takes me a while to catch up because I like to at least browse through everything. The information people share here is impressive, and the discussions are genuinely helpful. I know you ran a poll, but I wanted to make sure you had my reasoning behind voting yes.

[10/12, 09:03]hu5: NYT reports a growing number of patients are uploading records to ChatGPT to clarify diagnoses and understand care plans!

#healthcare #UShealthcare

[10/12, 09:25]hu3: Thanks. I think a Workspace account should help. A pro account would also help get the pro version of Gemini. I intend to use the Workspace studio to make a rudimentary EMR and demo it to this group, maybe on one of the Wednesdays.
I plan to take more of the unstructured data route (more document than spreadsheet), as I believe AI will be able to handle that, and also because medical history is more unstructured than structured. No-form data entry is better, where possible.

[10/12, 09:27] Group Moderator: thank you very much for your feedback! It is very helpful

[10/12, 09:27]hu3: Gemini states very clearly that it won't answer medical questions, so one will have to live with that for this experiment. Maybe other AI tools can take care of that. Maybe it's also the right way to go.


[10/12, 09:34]hu2: Here's an attempt at unstructured EMR data archival in different individual patient slots, showcasing AI-in-the-loop clinical decision-making support 👇


[10/12, 09:35]hu3: I was thinking of these. We will put PaJR on steroids with this new approach ( that's the intention)

[10/12, 09:36]hu3: Which AI are you using

[10/12, 09:37]hu2: Multiple LLMs

[10/12, 09:37]hu3: The Workspace studio is more about agents, i.e. work being done on the record, e.g. sending automated alerts, creating other documents etc.

[10/12, 09:38]hu3: Gemini decided not to be a medical decision support system

[10/12, 09:41]hu3: Maybe after the initial jugaad experiment with Google Workspace, one can try the OpenAI business suite. OpenAI is still keen to work with medical decisions.

[10/12, 09:41]hu2: Yes we need some agentic AI help to analyse all the archived data toward this project:

 
Distributed Health Citizenship in a Crowdsourced patient journey record PaJR Ecosystem

We could do with some agentic AI help in the next Steps for looking at our results: 

1.  **Formalize Data Extraction:**

 Systematically code the case reports and chats for quotes illustrating key themes: delegation, conflict, platform use, advocacy, data loss from all the raw narrative data available here: https://pajrcasereporter.blogspot.com/?m=1


2.  **Strengthen Theoretical Linkage:**

 Explicitly connect findings to literature on digital labor, health informatics, and participatory democracy.

3.  **Develop the Methods Section:**

 Detail the digital ethnography approach, ensuring it meets rigorous standards for qualitative research.

4.  **Create Visualizations:**

 Diagrams mapping the different network structures and data flows for the two cases would powerfully illustrate the argument.

Can anyone give this prompt to workstudio and share what they find?
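Step 1 above (coding the case reports and chats for theme-illustrating quotes) can be prototyped without any agent platform at all. A crude keyword pass at least produces candidate quotes for a human coder to confirm or reject; the theme-to-keyword map below is entirely my own illustrative assumption, not a validated codebook.

```python
import re

# Crude first pass at "formalize data extraction": tag sentences with
# candidate themes by keyword, for a human qualitative coder to review.
# The keyword lists are illustrative assumptions, not a validated codebook.
THEMES = {
    "delegation": ["handed over", "delegated", "coordinator", "on my behalf"],
    "conflict": ["disagree", "refused", "conflict", "argued"],
    "platform use": ["whatsapp", "group", "blog", "telegram"],
    "advocacy": ["advocate", "on behalf", "pushed for"],
    "data loss": ["lost", "missing", "deleted", "could not find"],
}

def code_narrative(text: str) -> list:
    """Return (theme, sentence) pairs as candidate quotes for review."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    hits = []
    for sentence in sentences:
        low = sentence.lower()
        for theme, keywords in THEMES.items():
            if any(k in low for k in keywords):
                hits.append((theme, sentence.strip()))
    return hits

sample = ("The caregiver posted reports in the WhatsApp group. "
          "Later the discharge summary was lost in transit.")
for theme, quote in code_narrative(sample):
    print(theme, "->", quote)
```

An LLM agent would do this far more flexibly, but a deterministic pass like this gives a reproducible baseline against which the agent's thematic coding can be checked.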


[10/12, 09:42]hu3: These should be possible if they are just manipulations of the data. One or more AI agents with good prompts should take care of it.


[10/12, 09:42]hu2: We use OpenAI, Copilot, Perplexity, DeepSeek, August AI etc.

Mostly DeepSeek, rarely August AI


[10/12, 09:43]hu2: Yes they are simply required to perform thematic analysis on data available in multiple URLs


[10/12, 09:44]hu3: NotebookLM would also do a neat job. Add all the URLs as sources and get wonderful outputs: infographics, ppt, video, audio, reports of any sort.


[10/12, 09:45]hu3: Especially useful for your students to learn from all the data you have now archived.

[10/12, 09:45]hu2: That would still need manual adding of 400 URLs there in that one URL!
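Collecting those post URLs need not be manual: Blogspot blogs expose a paged JSON feed (`/feeds/posts/default?alt=json`), so a short script can enumerate every post link. A sketch, with error handling omitted; the feed paging via `start-index`/`max-results` is standard Blogger behaviour:

```python
import json
import urllib.request

def post_urls(feed: dict) -> list:
    """Pull the 'alternate' (post page) link out of each Blogger feed entry."""
    urls = []
    for entry in feed.get("feed", {}).get("entry", []):
        for link in entry.get("link", []):
            if link.get("rel") == "alternate":
                urls.append(link["href"])
    return urls

def crawl(blog: str, page_size: int = 150) -> list:
    """Walk the paged JSON feed until a page comes back empty."""
    found, start = [], 1
    while True:
        url = (f"https://{blog}/feeds/posts/default"
               f"?alt=json&start-index={start}&max-results={page_size}")
        with urllib.request.urlopen(url) as resp:
            batch = post_urls(json.load(resp))
        if not batch:
            return found
        found.extend(batch)
        start += page_size

# Example (network required):
# urls = crawl("pajrcasereporter.blogspot.com")
# print(len(urls))
```

The resulting list can then be pasted into NotebookLM (or filtered first, as suggested below by the "researcher will select" approach) instead of adding 400 sources by hand.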

[10/12, 09:45]hu2: Notebook LM used to work for me earlier but in recent times it just doesn't respond!

[10/12, 09:45]hu3: The researcher will select the ones really required

[10/12, 09:46]hu3: Go pro

[10/12, 09:46]hu6: Go Jio then Go Pro

[10/12, 09:52] hu4 : I am curious.. We can chat privately if you want. I may be able to help


[10/12, 09:53]hu3: Sure. In a day or two I will be ready with my first shot.

[10/12, 09:54]hu3: I was looking at the snippet feature of  Google docs. Just to make repetitive data entry like prescriptions easier. E.g just type fever and related prescription will get filled

[10/12, 09:55]hu3: Template docs and snippets can do half the job. Ai agents to help further. Plus we have multimodal data entry.


[10/12, 09:56]hu3: Only live decision support is a challenge due to Gemini's stance.


[10/12, 09:57]hu7: Any experience with medgemma then


[10/12, 09:58]hu3: Good idea. Need to figure out how to make it work with the Google workspace

[10/12, 13:17] hu3: Initial thoughts are done. Have to begin creation of the first prototype. After that will demo to this group for feedback.


Image above:

[10/12, 13:18]hu3: My initial thoughts about usage of Google Workspace and its tools


[10/12, 13:34]hu3: @⁨ kindly reserve a Wednesday for a demo of this. I hope to build a working prototype soon.


[10/12, 13:46]hu8: Anytime, there is a Google sheet link, where you can add yourself, set up by @⁨
[10/12, 14:57] hu2: Can I borrow the image with full attribution to the source prompter?

Here's a textual summary of the morning conversations archived:

The conversation highlights critical challenges for this new model:

*   **Agency & Responsibility:** If an AI agent suggests an analysis or alert, who is responsible? The human who prompted it? The developer?
*   **Data Sovereignty & Security:** Using tools like Google Sheets (`hu4`'s HIPAA/BAA warning) highlights the tension between accessible, powerful tools and robust health data governance.
*   **The "Black Box" in the Team:** An AI's reasoning is often opaque. How does the human team "trust" or "collaborate" with an opaque agent?
*   **Complementary Roles:** The group is pragmatically figuring out which AI is good for what—Gemini for workflow but not diagnosis, Open AI for medical analysis, NotebookLM for synthesis—mirroring how a human team has specialists.

### Conclusion: From a Broken Dyad to an Augmented Network
Your insight frames the evolution perfectly:
1.  **Past (Traditional):** Closed **Dyad** (Doctor ↔ Patient)
2.  **Present (Web 2.0 Era):** Open **Human Polyad/Network** (Patient + Doctor + Multiple Human Agents)
3.  **Future (AI-Integrated):** **Augmented Polyad/Network** (Human Network + Multiple Agentic AI Actors)

The goal is no longer to have a single AI "replacing" the doctor in a dyad. Instead, it's about **integrating multiple AIs as active participants into the existing, distributed human care network.**


https://userdrivenhealthcare.blogspot.com/2025/12/udlco-crh-harnessing-agentic-ai-into.html?m=1


[10/12, 15:00]hu3: In the Workspace EHR I am considering, AI is just an agent that does a given task, i.e. if X data is entered, send a message to Y.

AI does assigned work. That's all.


[10/12, 19:42]hu3: Prototyping started. Failure. Gemini says "We are at capacity. We will be back soon"!!!


[10/12, 19:42]hu3: Hope it works in a few hours


[10/12, 20:40]hu3: Removed Gemini and the rest of the workflow worked


[10/12, 21:41]hu9: Haha!


[10/12, 21:43]hu3: It continues to say the same things. Other inadequacies found with respect to my plans, but let's see. Will try my best. It's a rudimentary agent builder; Google always tries to be minimalistic. It creates draft mails, but can't send them. It can assign tasks only to self (the user), not to others.


[10/12, 21:44]hu3: I created another workflow, where AI has to decide whether medications have been prescribed; if yes, the pharmacy has to be mailed the prescription. But once again, an AI down/busy message, so the workflow failed!
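That particular decision step is simple enough to prototype deterministically while the AI is down. A sketch: the marker keywords and the pharmacy address are illustrative assumptions, and the mail is left as a draft (printed) rather than a live SMTP send.

```python
from email.message import EmailMessage
from typing import Optional

# Keyword markers hinting that a clinical note contains a prescription.
# This list is an illustrative assumption, not a clinical standard.
RX_MARKERS = ("rx:", "tab.", "cap.", "inj.", "syrup", "mg ", "prescribed")

def has_prescription(note: str) -> bool:
    low = note.lower()
    return any(marker in low for marker in RX_MARKERS)

def draft_pharmacy_mail(note: str) -> Optional[EmailMessage]:
    """If the note contains a prescription, draft the pharmacy mail."""
    if not has_prescription(note):
        return None
    msg = EmailMessage()
    msg["To"] = "pharmacy@clinic.example"     # placeholder address
    msg["Subject"] = "New prescription for dispensing"
    msg.set_content(note)
    return msg

note = "Rx: Tab. Amoxicillin 500 mg twice daily x 5 days"
mail = draft_pharmacy_mail(note)
print(mail["Subject"] if mail else "no prescription found")
```

The same rule can later be swapped for an LLM call, with this deterministic version kept as the fallback when the model returns a capacity error.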


[10/12, 21:45]hu3: I will try tomorrow, else try this whole thing with the MS Office suite. (I want something which is easy to build, not requiring n8n or code.)


[10/12, 21:54]hu9: Net net - Gemini is a weird shell script! 🙂


[10/12, 21:55]hu3: I am on the pro version and this is its state. Let's hope it's a temp problem.


[10/12, 21:55]hu3: All these guys are busy distributing free plans, to market and survive.


[10/12, 21:56]hu10: Maybe it recognised you as a human… 😉 sorry, another machine 😄😄


[10/12, 22:00]hu10: https://www.reddit.com/r/googleworkspace/comments/1pf7d50/we_are_at_capacity_well_be_back_soon_error/


[10/12, 22:01]hu10: Looks like many are facing this issue sirji


[10/12, 22:24]hu3: Oops, since the last 5 days it seems.

[10/12, 10:55]huai: This is a fascinating observation that gets to the heart of how technology reshapes relationships and power structures in healthcare.

Your analysis is spot-on. Let's break down the key conceptual shift:

### 1. The Historical Break: From Dyadic to Polyadic (Not Triadic)
You correctly identified the core idea: **Web 2.0 and team-based learning broke the traditional one-on-one (dyadic) doctor-patient model.** However, it didn't create a simple three-party (triadic) model. Instead, it introduced **multiple human agents**:
*   The patient
*   The primary doctor
*   Specialists
*   Nurses, physiotherapists, other allied health professionals

*   **Critically: The patient's social network (family, caregivers) and peer patients from online communities.**

This created a **polyadic or networked relationship**. The "User Driven Healthcare" and "PaJR" (Patient Journey Record) models you linked formalized this, viewing the patient as the center of a learning network where care and knowledge are co-created by multiple human stakeholders. The power dynamic shifted from a paternalistic dyad to a (theoretically) more democratic, distributed network.

### 2. The New Frontier: Introducing Agentic AI as a New "Actor"

The conversation in your transcript is exploring the next logical step: **inserting Agentic AI into this already polyadic human network.**

This isn't just about AI as a passive tool (like an MRI machine). "Agentic AI" implies an active, semi-autonomous entity that can:

*   Analyze unstructured data (patient narratives, records).

*   Perform thematic analysis across hundreds of documents (as `hu2` requests).

*   Manage workflows (sending alerts, creating documents).

*   Potentially offer clinical decision support (though cautiously, as noted with Gemini's limitations).

### 3. The Emerging Model: A Polyadic Human-AI Ecosystem

The vision described (`hu3`: "We will put PaJR on steroids") is of a **hybrid ecosystem**:

*   **Human Agents:** Patient, care team, family, peer network.

*   **AI Agents:** LLMs (DeepSeek, OpenAI), specialized models (MedGemma), and workflow automators (Google Workspace agents) that act as force multipliers for both the patient and the clinicians.

This creates a complex system where:
*   **The patient's narrative and data** (archived in blogs, chats, records) become the central "source truth."
*   **AI agents** are harnessed to synthesize, analyze, and manage information flows within this network.
*   **The human team** focuses on interpretation, empathy, ethical judgment, and physical care, guided by insights surfaced by AI.

### 4. Key Tensions and Considerations from the Transcript

The conversation highlights critical challenges for this new model:

*   **Agency & Responsibility:** If an AI agent suggests an analysis or alert, who is responsible? The human who prompted it? The developer?
*   **Data Sovereignty & Security:** Using tools like Google Sheets (`hu4`'s HIPAA/BAA warning) highlights the tension between accessible, powerful tools and robust health data governance.
*   **The "Black Box" in the Team:** An AI's reasoning is often opaque. How does the human team "trust" or "collaborate" with an opaque agent?
*   **Complementary Roles:** The group is pragmatically figuring out which AI is good for what—Gemini for workflow but not diagnosis, Open AI for medical analysis, NotebookLM for synthesis—mirroring how a human team has specialists.

### Conclusion: From a Broken Dyad to an Augmented Network
Your insight frames the evolution perfectly:
1.  **Past (Traditional):** Closed **Dyad** (Doctor ↔ Patient)
2.  **Present (Web 2.0 Era):** Open **Human Polyad/Network** (Patient + Doctor + Multiple Human Agents)
3.  **Future (AI-Integrated):** **Augmented Polyad/Network** (Human Network + Multiple Agentic AI Actors)

The goal is no longer to have a single AI "replacing" the doctor in a dyad. Instead, it's about **integrating multiple AIs as active participants into the existing, distributed human care network,** supercharging the "User Driven Healthcare" vision. The relationship is becoming exponentially more complex, moving from a simple dyad to a **crowdsourced, AI-augmented ecosystem for health citizenship,** as hinted at in the final project title.

This is a profound shift from managing a relationship to orchestrating an ecosystem.

[10/12, 13:22]hu1: I will share full article today

[12/12, 07:30]hu1: The biggest challenge in evidencing AI-driven outcomes is establishing reliable baseline metrics.

AI solution providers often lack the domain knowledge to define these baselines effectively, and customers, such as hospitals, often lack the resources to undertake the task. Despite this, both sides end up content with the AI implementation 🙏


[12/12, 07:39]hu3: Excellent point


[12/12, 07:40]hu3: In such a scenario, especially in healthcare, how does one determine the value in money terms?


[12/12, 07:43]hu9: Can we consider "baseline metrics" equivalent to "statutory regulator" like NMC?

[12/12, 08:11]hu2: Yes, aka minimum standard requirements in NMC parlance, and most of the nation's 700 medical colleges struggle to meet it although it's way below global standards


[12/12, 07:45]hu9: Unfortunately, NMC itself says that it issues "regulations/advisories etc." but implementing them is the responsibility of the State Authorities. I wonder what kind of *regulator* that is 🤔


[12/12, 08:12]hu2: Same as central government which keeps harping "Healthcare is a state subject!"

[12/12, 07:48]hu9: I used to see "LPG regulator" do both actions viz. open and close the LPG for safety.


[12/12, 07:49]hu9: However, the NMC appears to be just an "opening" regulator that has brought a flood of medical colleges, jeopardizing safety to the limit that the Faridabad incident occurred 😱


[12/12, 07:50]hu9: Is there any such regulator for AI? Or is it just like the unregulated press and media, which modify opinion on everything from medical diagnosis to police investigations to judiciary decisions 🤔

[12/12, 08:12]hu2: Like the internet of we the people for the internet by the people?


[12/12, 08:09]hu3: I tried using the ready-made examples which Google Workspace has provided. Even those are not working currently. Actually, quite surprising that Google has not fixed them yet.


[12/12, 08:11]hu3: However, some useful stuff could still be learnt from this initial failure:

- To create a quick prescription or medical record, Google Docs has a building blocks feature (like snippets in Word).

    E.g. if you regularly write a similar prescription for sinusitis, save it as a block and re-use it.
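The same save-a-template-and-expand-it idea can be mimicked in a few lines of plain code while the Workspace feature is flaky. The block names, placeholders, and contents below are illustrative assumptions only, not clinical advice:

```python
from string import Template

# Text-expansion sketch of the Docs "building blocks" idea: type a
# condition keyword, get the saved template back with fields filled in.
# Templates here are placeholders, not clinical recommendations.
BLOCKS = {
    "sinusitis": Template(
        "Dx: Acute sinusitis\n"
        "Rx: <drug/dose per clinician>\n"
        "Advice: steam inhalation, review in $review_days days"
    ),
    "fever": Template(
        "Dx: Fever under evaluation\n"
        "Rx: <antipyretic per clinician>\n"
        "Advice: hydration, temperature chart, review in $review_days days"
    ),
}

def expand(keyword: str, review_days: int = 5) -> str:
    """Look up a saved block by keyword and fill in its placeholders."""
    block = BLOCKS.get(keyword.lower())
    if block is None:
        return f"No saved block for '{keyword}'"
    return block.substitute(review_days=review_days)

print(expand("Sinusitis", review_days=7))
```

Keeping the drug lines as explicit `<per clinician>` placeholders forces a human edit on every expansion, which is probably the right default for a prescription template.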


[12/12, 08:13]hu3: Several members of the clinic can use a shared folder, and alert mails or chat messages can be created if some change happens there. (Each person will have to create his own alert agent, since alerts can't be sent to others; you can only say "alert me".)

[12/12, 08:14]hu3: You can create a task for yourself. You can label them.


[11/12, 16:31]hu8: One of the challenges I have with the AI (or non-AI) crowd these days is what I call the "all (black), or nothing (white)" syndrome: basically we talk in absolutes, sort of booleans, that AI (today's AI at least) is the be-all and end-all, or on the other extreme that it is nothing, naught, zilch. Reality is always somewhere in the middle?


[12/12, 08:14]hu2: We need an algorithm to collect all our fragmented patient data strewn in space to get a better picture of healthcare?

To quote:

She stared at her computer screen at 3 AM, tears streaming down her face—not because the internet was attacking her, but because she'd just done something humanity thought was impossible: she'd photographed a ghost 55 million light-years away. The attacks would come later. First came the miracle.

The Impossible Problem

In 2013, Katie Bouman was a graduate student at MIT studying computational imaging when she heard astrophysicist Shep Doeleman give a talk about the Event Horizon Telescope project. They wanted to photograph a black hole. Most people in the room understood this was borderline science fiction. Katie Bouman—who knew nothing about black holes at the time—saw it as the ultimate computational puzzle. "I begged my PhD advisor to let me work on it," she later said.

Here's why it seemed impossible: The black hole they wanted to photograph was in the M87 galaxy, 55 million light-years away. To capture an image of something that far away with the necessary resolution, you'd need a telescope the size of Earth. Obviously, you can't build an Earth-sized telescope. So they built a computational one instead.

Building an Earth-Sized Eye

The Event Horizon Telescope wasn't a single telescope—it was eight radio telescopes scattered across the planet: from Antarctica to Chile to Spain to Arizona to Mexico. For 10 days in April 2017, these eight telescopes pointed at the same spot in space and collected data simultaneously. But here's the problem: even working together, these telescopes couldn't actually "see" a black hole the way a camera sees a face. They captured fragments—incomplete, scattered pieces of radio wave data that, on their own, meant nothing. Someone needed to write an algorithm that could take these fragments and reconstruct them into an image. An algorithm that wouldn't just guess what the black hole looked like, but would mathematically determine which image was most likely to be true. That someone was Katie Bouman.
The Algorithm That Sees the Unseeable

From 2013 to 2017, Katie led the development of CHIRP (Continuous High-resolution Image Reconstruction using Patch priors)—an algorithm designed to do what seemed impossible: create a clear image from incomplete data. But there was a massive problem: human bias. If Katie's algorithm produced an image that looked like a ring, and she showed it to other team members, they might subconsciously make their algorithms produce ring-like images too. So the team did something radical: they split into four independent groups. Each group developed different algorithms based on different assumptions. They didn't communicate with each other during the entire imaging process. Then, after months of work, all four teams gathered in Cambridge, Massachusetts, and ran their algorithms simultaneously in the same room. And they waited.

April 10, 2019

The photo that went viral shows Katie Bouman sitting at her laptop, hands over her mouth, eyes wide with disbelief. This was the moment all four algorithms converged on the same image. A bright ring of superheated gas and dust surrounding a dark center—the shadow of a black hole. They'd done it. After six years of work, after processing 5 petabytes of data (so much data it had to be transported on half a ton of hard drives because it was faster than internet transfer), after coordinating eight telescopes across four continents—they had captured the first photograph of a black hole. Proof of Einstein's theory of general relativity. Visual confirmation that black holes—objects so dense that not even light can escape them—actually exist. "Watching in disbelief as the first image I ever made of a black hole was in the process of being reconstructed," Katie wrote on Facebook. The image made headlines around the world. So did Katie's photo. And then came the backlash.

The Internet Turns Ugly

Within 24 hours of the image's release, online trolls launched a coordinated attack on Katie Bouman. They claimed she didn't actually write the code. They counted lines of code in GitHub repositories and claimed other team members (specifically male team members) had done all the real work. They called her a "diversity hire." They said she was taking credit for other people's achievements. Some went further, with misogynistic attacks and harassment so severe that her colleagues felt compelled to speak out.

Andrew Chael, another member of the Event Horizon Telescope team, posted a thread on Twitter that went viral: "While I wrote much of the code for one of these pipelines, Katie was a huge contributor to the overall effort, and showed forth wonderful leadership as we worked to test the robustness of the code." He continued: "I'm thrilled Katie is getting recognition for her work and hope it inspires young women to pursue science. While I appreciate congratulations on a result I worked hard on for years, if you are congratulating me because you have a sexist vendetta against Katie, please go away and reconsider your priorities in life." Vincent Fish, a research scientist at MIT's Haystack Observatory, was equally clear: "Katie was a major part of one of the imaging subteams." The entire Event Horizon Telescope team rallied around her. Press releases emphasized her crucial role. Colleagues publicly defended her contributions.

Something Rare Happened

The internet backlash was vicious—but it didn't win. Scientists, journalists, and everyday people pushed back against the trolls. Major media outlets published articles explaining Katie's actual contributions. Her TEDx talk from 2016 (where she explained the algorithm before the image was even captured) went viral. The story became bigger than one scientist—it became about recognizing the systemic devaluation of women's work in STEM fields. Katie herself maintained grace throughout. She repeatedly emphasized that this was a team effort: "No one algorithm or person made this image. It required the amazing talent of a team of scientists from around the globe and years of hard work to develop the instrument, data processing, imaging methods, and analysis techniques that were necessary to pull off this seemingly impossible feat." She credited her ~200 colleagues. She shared the spotlight willingly. But she also didn't downplay her own work—because it was real, documented, and essential.


What Actually Happened

Let's be clear about Katie Bouman's contributions (because facts matter):

- She led the development of the CHIRP algorithm starting in 2013 as part of her PhD work
- She gave a TEDx talk in 2016 explaining how to photograph a black hole (two years before the image was captured)
- She was co-leader of the EHT Imaging Working Group
- Her verification methods were crucial to ensuring the final image wasn't an artifact or error
- She designed the framework that allowed multiple independent algorithms to validate each other

Was she the only person responsible? Absolutely not—this was a 200+ person collaboration. Was her work essential and her leadership pivotal? Yes. Unquestionably yes.

The Aftermath:

Katie Bouman joined Caltech as an assistant professor of computing and mathematical sciences in June 2019, shortly after the image's release. She was 29 years old. By 2024, she'd been promoted to associate professor. She continues to lead imaging efforts for the Event Horizon Telescope, which has since captured another image: the black hole at the center of our own Milky Way galaxy (Sagittarius A*), released in May 2022. She's received the Royal Photographic Society Progress Medal and Honorary Fellowship. She was recognized as one of BBC's 100 Women of 2019. But perhaps most importantly, her story inspired thousands of young women to pursue STEM fields.

The Deeper Truth

Katie Bouman's story isn't really about a black hole. It's about what happens when women do groundbreaking work—the dual reality of celebration and doubt that so many women in science face. It's about the exhausting requirement to prove not just competence but excellence, and then have that excellence questioned anyway. It's about the automatic assumption that a young woman couldn't possibly be the brilliant mind behind a historic achievement—that surely a man did the "real" work. But it's also about resistance. About colleagues who stood up. About an internet community that, for once, chose truth over toxicity. About a young woman who maintained her grace while refusing to be erased. Katie Bouman never wanted to be famous. She just wanted to solve an impossible problem. She wanted to see what no human had ever seen before. And she did.

Six Years, Eight Telescopes, 200 Scientists, One Image

On April 10, 2019, humanity photographed a black hole for the first time in history. We saw the unseeable. We proved Einstein right. We looked 55 million light-years into space and captured light from the edge of a gravity well so powerful that nothing—not even light itself—can escape. Katie Bouman was 29 years old. She'd spent six years developing algorithms that could see in the dark. Six years turning fragments of data into a coherent picture. Six years making the impossible possible. Her code didn't just capture a black hole. It captured humanity's potential—what we can achieve when we combine brilliant minds from across the globe, when we refuse to let bias define who gets to make history, when we choose collaboration over competition.


The black hole image is now one of the most famous photographs in scientific history, up there with the first image of Earth from space and the DNA double helix. 

And whenever you see that image—that fiery ring of light surrounding profound darkness—remember: a 29-year-old woman helped humanity see it.

Katie Bouman, born 1989: the computer scientist who taught us that the impossible just requires better algorithms—and the courage to try. May her story inspire every young girl who looks up at the night sky and wonders what's out there. The answer might be waiting for her to invent the algorithm that reveals it.

Unquote.



*Thematic Analysis:*

1. *AI-powered healthcare:* The conversation highlights the growing interest in using AI to enhance healthcare services, including patient care, data management, and clinical decision support.

2. *Challenges and limitations:* The participants discuss the challenges and limitations of current AI models, such as Gemini's stance on medical decision support, and the need for careful consideration of data security and HIPAA compliance.

3. *Agentic AI:* The concept of agentic AI is introduced, highlighting its potential to analyze and support complex healthcare data and decision-making processes.

4. *Collaboration and knowledge sharing:* The conversation demonstrates a collaborative effort among healthcare professionals to share knowledge, ideas, and experiences on leveraging AI in healthcare, with a focus on improving patient outcomes.

5. *Future directions:* The discussion touches on the potential future directions of AI in healthcare, including the use of multimodal data entry, template documents, and snippets to streamline medical data management and support clinical decision-making.