Saturday, August 3, 2024

UDLCO: Patient privacy, signed informed consent, litigation, depersonalization, ideal science and the singularity of Benhur effect

 UDLCO summary:


The first WhatsApp group discussion transcript begins with a news piece about a legal penalty for doctors who shared their patient's OT videos over WhatsApp; various aspects of the evolution of patient privacy and confidentiality protections are discussed. The second transcript begins with the first discussion being shared as an interjection when a similar reference to patient privacy is made; valuable links related to the Indian patient privacy act and open-source software are discussed. The third, among global medical educators, begins with a clinician sharing the medical education benefits of case sharing, with opposing voices around the ethics of case sharing. It appears that patient privacy and confidentiality are significant trade-offs against transparency and accountability in medical education and practice. Past published work on similar themes here: https://userdrivenhealthcare.blogspot.com/2016/02/the-indian-national-ehr-standards-and.html?m=1

Keyword UDLCO glossary:

http://userdrivenhealthcare.blogspot.com/2023/11/glossary-of-user-driven-healthcare.html?m=1

UDLCO transcripts:


[02/08, 12:15] RB: This article describes and formally defines the Benhur effect derived from a 1950s movie in terms of achieving perfect healthcare outcomes through various means. 

In many ways it follows as a logically contradictory and paradoxical solution to a much more famous effect from another movie in the 1950s called the Rashomon effect.

The actual manifestation of the Benhur effect (as felt in the movie and elucidated above) can happen to all stakeholders in a patient centred, team based learning group and is described in a quote below:

Read the full article here 👇




[03/08, 10:25] R : The Kerala High Court has dismissed a petition filed by a doctor and hospital staff who approached the Court to quash proceedings initiated against them for allegedly taking and sharing videos and images of a woman undergoing a cesarean operation to deliver three children, through WhatsApp.

The crime was registered against the doctors under Section 354(C) (Voyeurism) of IPC, Sections 66(E) (Punishment for violation of Privacy) and 67 (Punishment for publishing or transmitting obscene material in electronic form) of the Information Technology Act.

Justice A. Badharudeen observed that a prima facie case was made out against petitioners involving serious allegations. The Court thus declined to quash the proceedings against them.

“That apart, the WhatsApp videographs and photographs of the cesarean procedure sent by the accused were collected during investigation to justify the involvement of the petitioners in this crime, prima facie. Thus the matter shall go for trial, and, therefore, the quashment sought for cannot be considered. In such a case, involving very serious offences, quashment of the proceedings could not be resorted to. Hence this petition fails and is accordingly dismissed.”

The first petitioner is an Anesthesiologist and the second petitioner is a hospital staff and both of them were employed at the Government Taluk Hospital, Payyannur in Kannur district.

The de facto complainant underwent a cesarean operation in the year 2014 and delivered triplets. The specific allegation was that the first petitioner videographed and the second petitioner took images of the cesarean operation. It is alleged that the petitioners shared these videos and images through WhatsApp.

The petitioners submitted that the identity of the de facto complainant could not be ascertained from the videos or images and thus sought to quash the proceedings.

The Public Prosecutor opposed quashing the proceedings and stated that the petitioners outraged the modesty of the de facto complainant by sharing images and videos of her undergoing a cesarean operation.

The Court noted that the videos and images were recovered by the police from the mobile phones of the petitioners during the investigation.

The Court stated that the matter has to go to trial and cannot be quashed.

As such, the Court dismissed the petition.

Counsel for Petitioners: Advocates I.V.Pramod, K.V.Sasidharan, Saira Souraj P.

Counsel for Respondents: Advocates M.Baiju Noel, T.S.Likhitha, Shinto Sabastian, Public Prosecutor M.P.Prasanth

Citation: 2024 LiveLaw (Ker) 502

Case Title: Sunil P P v State of Kerala

Case Number: CRL.MC NO. 4223 OF 2022

Click here to read/download Order


[03/08, 10:29] RNR: As I always maintain, never ever share patient photos, especially for corporate promotional reasons!

[03/08, 10:30] PS: A welcome move.

[03/08, 10:33] RNR: The rule existed ever since Indian Medical Council Act was formulated.

Violation is a fashionable corporate trend.

Just like politicians, doctors too are taking liberties with confidentiality.


[03/08, 10:41] PS: Some things we understand intuitively.

In the field of plastic surgery there are some in India and abroad who make videos in the OT to post on social media.
The status of patient consent for these is unknown.

A Faciomax surgeon from the south daily posts videos of his consults, anaesthesia and surgery on YouTube. It's hard to believe that so many patients are okay with being used like this, with consent.

Lastly, there are professional groups where photos are shared for academic discussion or self-promotion - but WhatsApp can leak too, as is evident from this case


[03/08, 10:47] RB: Deidentified patient data capture is an integral part of any clinical project and publication can only proceed after there is signed informed consent.

However, currently, due to improved big data analytics, patient data is becoming more and more granular; every individual patient's data becomes deidentified into horcruxes and gets into a central case-based reasoning engine toward data-driven, patient-centred precision medicine.

Here's one regular workflow toward that end 👇
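As one illustration of the deidentification step in such a workflow, here is a minimal sketch. The field names, denylist, and masking patterns are assumptions for illustration only, not the actual workflow referred to above; real de-identification (e.g. of the eighteen HIPAA Safe Harbor identifier classes) needs far more than this.

```python
import re

# Hypothetical direct-identifier fields; a real EMR would map these from its schema.
DIRECT_IDENTIFIERS = {"name", "phone", "address", "hospital_id"}

def deidentify(record: dict) -> dict:
    """Drop direct-identifier fields and scrub obvious PHI patterns from free text."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    notes = clean.get("notes", "")
    # Mask 10-digit phone numbers and dd/mm/yyyy dates left inside narrative text
    notes = re.sub(r"\b\d{10}\b", "[PHONE]", notes)
    notes = re.sub(r"\b\d{2}/\d{2}/\d{4}\b", "[DATE]", notes)
    clean["notes"] = notes
    return clean

record = {
    "name": "A. Patient",
    "phone": "9876543210",
    "notes": "Admitted on 02/08/2024, contact 9876543210. Fever for 3 days.",
    "diagnosis": "dengue",
}
print(deidentify(record))
```

The denylist removes structured identifiers outright, while the regexes only catch identifiers that leak into narrative text; free-text scrubbing is exactly where such rule-based approaches fail and where signed informed consent remains essential.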



[03/08, 11:31] CE: Kerala is known for litigation


[03/08, 11:32] PS: Thank God for Gods own country


[03/08, 11:45] RNR: Under the new personal healthcare data protection acts, patients reserve the right to request removal of their data from any database at anytime. This is now a legal requirement, even if the data were de-identified.  It is even more important with genomic data that can be commercially exploited.


[03/08, 11:48] RB: They can request removal of their deidentified data only if someone can identify with reasonable certainty that the data is theirs else we risk removing someone else's data!


[03/08, 11:50] RNR: It has been pointed out that with blockchain technology, personal data can always be identified…

[03/08, 11:51] RB: Yes but has there been a demonstration?


[03/08, 11:52] RNR: Since the new legal requirements need blockchain technology to be implemented at every step, it is a bit of a problem to de-identify patient data…


[03/08, 11:54] RB : Legal issues need optimising


[03/08, 12:00] RNR: By optimising, you mean science as a business should be given an edge over the right to personal data? I am aware that this is the predominant mood with Digitisers, Big Data Crunchers and AI developers… Many governments, fortunately, take the opposite view…


[03/08, 12:04] RB: No I meant science as a right for every citizen, which means they also need to contribute right!

Ideally for science they may need to contribute not just their life events trajectory (external medicine) data but also their internal medicine (lab and imaging) data and finally their autopsy data!


[03/08, 12:13] RNR: Profiling citizens has been the dream of every police department...

It has served those in power very well

Both can perhaps be justified, but can be contested in a court of law..

Moving towards depersonalisation as the goal of science was Hitler's dream...

He was a strong proponent of eugenics... 

He even allowed the so-called Aryan doctors to irradiate the ovaries and testes of healthy Jews in the name of science....

Closer to our times are the Canadian Compulsory Sterilization Act of mentally subnormal citizens and the US Tuskegee Study of syphilis - striking examples of barbarity in health sciences...

One should tread cautiously even when serving an evil master, albeit 'Science'!


[03/08, 12:32] RB: Depersonalization and profiling may not be the same as deidentified patient data, as science need not be interested in the political affiliation of the patient. All it needs to know are the life event factors (external medicine) that led to the patient's past and current illness and will influence his/her future outcomes!

This would not only enable science to develop better solutions to patient problems but also understand and predict how they evolve.

There is no doubt that one has to tread carefully and carry everyone along inclusively in this journey, and hence the greater need for transparency and accountability in Science's endeavours such that it collectively realises its follies in time!

🙂🙏

[03/08, 12:34] RNR: What altruism, Dr. Biswas!

Time and again, it has been proved that business thrives in the altruism of _others!_


[03/08, 12:36] RNR: The belief that science has an answer to every malady is called _blind faith!_

Sounds familiar, doesn't it!


[03/08, 12:42] RB: Agree!

It's just that it satisfies some of our human curiosity to know, and we often call it science if it can demonstrate components of a design that experiments and provides a measured comparator. But yes, at the end of the day it's just a tool fed by duality!

At singularity all these may fall apart or even come together as a gestalt, aka Benhur effect, linked above.

Group discussion 2:

Summary: The breakthrough insight for UX/UI enthusiasts here is that if all hospitals start doing this regularly for each and every patient, then it will become very difficult for even patients to identify themselves. At best, even if they do, they will never be able to prove it and will simply be left with a lingering uncertainty


Transcript:

[07/08, 10:37] +1 (805): This is an old study. But, it didn't find EHRs to help with improved patient outcomes except in specific settings. Don't recollect EHRs changing much in last 10 years


[07/08, 10:38] RB: Yes it's time to change our EMR design!

[07/08, 10:39] SJ: Rethink EMR design radically

[07/08, 10:40] AK: Yes, radical disruption is needed, to bring about any real change in the inert system.

[07/08, 10:40] RB : We are now actioning our rethought EMRs everyday


[07/08, 10:43] +1 (805): Ya no form is the way to go. With this much summarization and content extraction possibilities of LLMs we don't need to impose structure on data. We can parse it out into structure


[07/08, 10:45] +1 (805): We've been working with 20 - 50 page medical documents, and the content extraction rate and quality is surprising. If anyone has access to data both in text and structured format, we could build a dataset and demonstrate how accurate models are in extracting structure from text
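A deterministic toy stand-in for the kind of text-to-structure extraction described above. A real pipeline would use an LLM on long medical documents; the note, field names, and patterns here are invented purely to illustrate "parsing structure out of text":

```python
import json
import re

# Invented example of a short clinical note in free text.
NOTE = "62 year old male with fever for 5 days. BP 110/70 mmHg, pulse 104/min. Hb 9.8 g/dL."

def extract(note: str) -> dict:
    """Pull a few structured fields out of narrative text with simple patterns."""
    patterns = {
        "age": r"(\d+)\s*year",
        "bp": r"BP\s*(\d+/\d+)",
        "pulse": r"pulse\s*(\d+)",
        "hb": r"Hb\s*([\d.]+)",
    }
    out = {}
    for field, pat in patterns.items():
        m = re.search(pat, note, re.IGNORECASE)
        out[field] = m.group(1) if m else None
    return out

print(json.dumps(extract(NOTE)))
```

The regex version only works for fields whose surface form is predictable; the point made above is that LLMs generalise this step to messy 20-50 page documents, so the structure can be recovered after the fact instead of being imposed on the clinician through forms.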

[07/08, 10:50] RB: Please feel free to utilise more than 2000 medical records from each one of our 1000 health professional students' online learning portfolios, here in our dashboard 👇


[07/08, 10:54] +1 (805): Thanks a lot, Dr. Biswas!

[07/08, 10:49] VG: So, who is going to enter lot of text? Something like voice to text?

[07/08, 10:49] VG: Every doctor would have a decent mic to speak into, for every patient?


[07/08, 10:49] +1 (805): Ambient listening is already happening in healthcare


[07/08, 10:50] +1 (805): The quality of mics available today is stunning. Also blind source separation (isolating desired speaker from background) has been beaten to death over the decades.

[07/08, 10:52] RB: And that should come with her mobile phone

[07/08, 11:05] VG: Great idea.

[07/08, 11:15] VG: One question in that. Phone is typically a personal device, while EHR is running on a hospital network behind a firewall. How could this operationally win? Should we build an app on the phones that has the same authentication over internet as EHR, and move voice note from phone to EHR servers?

[07/08, 11:24] RB: One upside of allowing the doctor to take personal ownership of his/her patient's data is that it makes for a stronger commitment to the team-based, patient-centred learning relationship an optimal EMR is supposed to facilitate

[07/08, 11:25] Aye: Privacy advocates won't be happy


[07/08, 11:26] RB: More masala here 👇



[07/08, 11:34] Aye: We must not comment on a matter sub judice.

From academic interests, if the data was anonymized, then how did the party get knowledge of it?


[07/08, 11:39] RB: Are you talking about one particular legal episode in those conversational transcripts? They are freely available online in newspaper links I'm sure


[07/08, 11:40] Ay: The C-section case on top



[07/08, 11:45] Ay: From the link

The petitioners submitted that the identity of the de facto complainant could not be ascertained from the videos or images and thus sought to quash the proceedings.

... 

Hence, how did the party get to know


[07/08, 11:49] RB: The patient can always identify herself/himself.

Hence the need for a strict signed informed consent before any kind of patient data sharing albeit deidentified

[07/08, 11:53] Aye: Consent needed, yes, but about the patient identifying themselves - I disagree.

Anonymized information should not be identifiable by the patient. It can then be identified by others too.

If needed, there should be a "cooling off period" if there are unique demographics that the patient can identify with.

For the rarest of rare cases, data usage should be indemnified by the party.

In this case, it was a C-section.


[07/08, 11:58] RB: The only way for the patient to not be able to identify her own information would be to hide it from her! There's no other way. Once a patient is shown her data she would invariably be able to identify it, unless she has language barriers or the images are microscopic and she's not a trained pathologist


[07/08, 12:00] Aye: Not really.

Will share with actual examples to understand your perspective.


[07/08, 12:09] Aye: It will be worthwhile to illustrate the counterpoints of view with data, and that will take a while.

However, here's a C-section video. How would a patient identify herself from this beyond reasonable doubt?


Caveat: Even in this video there could have been some post-editing done to improve anonymization further, to mask identifiable patterns


[07/08, 12:21] RB: Easily, if it's shown to her with the information that it was filmed in the hospital where she had her own caesarean, on a date close to her operation date.

However, the breakthrough insight for UX/UI enthusiasts here is that if all hospitals start doing this regularly for each and every patient, then it will become very difficult for even patients to identify themselves. At best, even if they do, they will never be able to prove it and will simply be left with a lingering uncertainty
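The intuition in the message above resembles k-anonymity: if each record's quasi-identifiers are coarsened until it is indistinguishable from at least k-1 others, no individual can single out, let alone prove, which record is theirs. A minimal sketch, with invented quasi-identifiers and bucket widths:

```python
from collections import Counter

# Invented records; "age" and "ward" play the role of quasi-identifiers.
records = [
    {"age": 23, "ward": "A"}, {"age": 27, "ward": "A"},
    {"age": 24, "ward": "A"}, {"age": 31, "ward": "B"},
    {"age": 36, "ward": "B"}, {"age": 38, "ward": "B"},
]

def generalise(rec: dict) -> tuple:
    """Coarsen quasi-identifiers: exact age -> decade band."""
    lo = (rec["age"] // 10) * 10
    return (f"{lo}-{lo + 9}", rec["ward"])

def is_k_anonymous(recs, k: int) -> bool:
    """True if every generalised quasi-identifier combination occurs >= k times."""
    counts = Counter(generalise(r) for r in recs)
    return all(c >= k for c in counts.values())

print(is_k_anonymous(records, 2))
```

The "lingering uncertainty" described above is just this property at scale: when every combination of coarsened attributes is shared by many patients, a viewer of any one record, including its own subject, cannot pin it to a person.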


[07/08, 12:24] Aye: Exactly, hence the cooling off period may also be needed.

The English Court of Appeal has an often-cited ruling on the usage of anonymized data.


[10/08, 00:44] RS +1 (904): Seems some of you are interested in generative AI software solutions - give Amazon Q (it used to be called CodeWhisperer) a try (https://aws.amazon.com/q/) - I have compared it with Microsoft's Copilot, OpenAI's Codex, Meta's CodeLlama and a few others - while each has its strengths and accuracy varies, Amazon Q is probably the most user-friendly and intuitive


[10/08, 00:45] RS +1 (904): Just a quick disclaimer though, I am with the Amazon AGI and AWS org and it's one of our primary products.. so you may say I am biased.. but we have compared our product with all the competitors very thoroughly to give the proper picture to the customer.. and as you know, at Amazon we are known for customer obsession 🙂


[10/08, 00:49] RS +1 (904): Btw, just my 2 cents from the above communication on no-code development and GenAI-based software code - these two are very different paradigms, as rightly pointed out by Abhishek - no-code dev tools have existed for a while in the form of GUI-based code dev - from 3GL to 4GL to 5GL to 6GL (as the GL increases, code dev becomes more intuitive) - probably Microsoft was and still is the leader in such tools.. GenAI-based code development is purely LLM-based and generative/creative in nature



[10/08, 04:27] Ay: Of course biased... But what's wrong with that 🙂

Thanks for the list of code genAI


[10/08, 09:53] SJ: What would you like to get NABH to do for digital health?

[10/08, 10:22] ATP: Asian customers

[10/08, 10:22] ATP: my experience as CD - Clinical Director GE HealthcareIT, South Asia

[10/08, 10:22] ATP: India, Nepal, Bhutan, Sri Lanka, Bangladesh and Maldives


[10/08, 10:23] ATP: they want everything. but free of cost

[10/08, 10:29] JG: That’s quite interesting, thanks for sharing. Does the coding assistant support multiple languages? Writing comments and then having code completion akin to IntelliSense on steroids should be quite a useful function of generative AI.


[10/08, 10:33] T: Thanks Dr Oommen

I would like NABH to raise the bar on their Achievement & Excellence category for their final standards on HIS/EMR. This will, at least, bring some parity with the global standards

1. Under the Care of Patients (COP) category, the EMR should be based on clinical protocols. CDSS, CDM & AI will need a good base of protocol-based data to be effective.
The draft standards mention CDSS, but how can we use CDSS without a good clinical-protocol-based EMR/EHR?

2. Under MOM - The software should provide drug-related alerts. CPOE physician drug order sets must be compliant with clinical protocols and drug alerts as far as possible
3. Under FPM - The draft standards only talk about vendor payments. They are totally silent on accounts receivable. Delayed insurance payouts are a major reason for sickness in the hospital industry, and hence there is a need for a Revenue Cycle Management module

Also, mature MNC HIS have strong finance modules. At least some thought can be given to this.
ERP, BI & BSC applications must be encouraged

4. Telemedicine - For the home health care module, the beneficiary identification standards must be stringent to prevent billing frauds & misuse.
5. For DOM & IMS - Technical experts' opinion is required.

But use of clinical informatics, health informatics, ERP, BI +/- Visual BI, BSC & blockchain etc. must be promoted here


[10/08, 10:44] RB: This is our NABH-accredited hospital 👇

And this is our showcased EMR after deidentification 👇



[10/08, 10:54] SB: Now you see why I am so vehemently opposed to “free” stuff like FOSS? You pay peanuts, you get monkeys. And a one-way ticket to massive fines, prison time, loss of business and a good life ruined. It’s your life anyway, so I can only shake my head in disbelief and walk away. Voilà!


[10/08, 11:07] AB: I am guessing all those who have used FOSS solutions like GNU Health, Bahmni, OpenMRS, openEHR all went to jail


[10/08, 11:28] AB: Chamberlain forceps! 150 years of keeping simple tech inaccessible and proprietary, away from people - nobody went to jail.


[10/08, 11:45] SB: They might soon have to. Didn’t you attend the DPDP Act webinar by NRCeS last evening?


[10/08, 11:45] SB: You’re missing the point

[10/08, 11:51] Ay: DPDP : FOSS

Didn't get the connect, Sir


[10/08, 11:52] AB: Yes, quite familiar with DPDP. In fact, I was part of a team from IDHN that sent out a comprehensive response https://www.idhnet.org/publications/responses-to-the-government-of-indias-joint-parliamentary-committees-consultation-on-the-personal-data-protection-bill/ .. and I wonder why DPDP will only affect FOSS. If at all, FOSS will only bring transparency about DPDP compliance


[10/08, 11:58] AB: the PDF link seems to be broken - you can see the doc from here - https://mittalsouthasiainstitute.harvard.edu/wp-content/uploads/2020/03/IDHN-Comments-on-PDP-Bill.pdf


[10/08, 11:59] AB: and this is the Health data management policy from ABDM - https://abdm.gov.in:8081/uploads/health_management_policy_bac9429a79.pdf .. notice the alignment with PDP


[10/08, 13:04] SB: Tell me, are you a medical doctor?


[10/08, 13:06] AB: no. why?


[10/08, 13:19] SB: Then you have absolutely no clue as to what you’re talking about.


[10/08, 13:20] AB: In that response? Sir, please check the list of people who responded. It was a collaborative effort and peer reviewed first. Am sure you recognize some of them! I am guessing they have no clue as well?


[10/08, 13:23] AB: If you can point out anything that I have said without a clue, maybe I can learn and educate myself. If not, should I assume it's more of an "ego" thing? I don't go lecturing people on clinical protocol, but I do know a little bit about technology! Anyway, if it's about "ego" - then I see no point in responding. Apologies in advance


[10/08, 13:29] SB: I was expecting this reply. How utterly predictable! As I said… you can never understand. Unless one is a medical doctor who deals or has dealt with having a person’s innermost secrets confided to him/her and has taken decisions in a split second that made the difference between life and death, it’s just not possible to understand the criticality of ethical principles and practices. We have a very important duty _primum non nocere_. Unless compelled, no medical professional in their right senses will ever touch any FOSS solution for their day to day practice. And when anyone gets into any legal entanglements and suffers the consequences, bye bye any business from that solution.


[10/08, 13:53] AB: No sir, I don't understand. I know I said I would not respond if you can't provide me evidence, as I would assume it to be for some other reason. Having said that - the limited list of FOSS that I shared earlier is used across thousands of facilities in hundreds of countries, in some of the biggest underserved regions, in clinical trials, and by the largest humanitarian healthcare services in the world... I am guessing you have a very low opinion of them, or think they were compelled/forced/biased. And in many, many cases they have super competent legal counsel. (Although I still can't understand the relation of DPDP or legal entanglements specific to FOSS and absolved by COTS.) I rest my case, Sir. (Since you knew what my reply was to be, I am guessing you are "antaryami" and probably also know this response in advance)


[10/08, 14:57] SB: I will discuss this and hopefully lots more face to face. If you’re in Delhi-NCR, ping me. We’ll have a jaw over lunch. 🙂


[10/08, 15:03] Ay: Can we bet on the match? 🙂


[10/08, 15:03] AB: Happy to have a rational/logical discussion. But I love my jaw 🙂


[10/08, 16:19] SB: Having “a jaw over a meal” means having a conversation over meal.

[10/08, 16:24] SB: Duel? Nope. It will be a rational debate, much more civilised than what goes on in the various legislatures of the country.

Group discussion 3:


Transcripts:

[07/08, 18:24] ORL: I've been using AI in medical education and research since very early in its establishment, so I feel very satisfied. Indeed it makes academic work very fruitful, and in an incredibly short time; I can say a work which takes a week of hard effort can be achieved in one hour with AI.

[07/08, 18:44] ORL: Of course, being an expert in the field, you'll definitely catch up the odds

[07/08, 19:44] ORL: Being an ... I use AI to comment on patients' photos and endoscopic videotapes, and to prepare materials for our students on the subjects of concern, as materials for assessment and as stimulation for further reading to find out areas of deficiency in their knowledge. Sometimes through AI I ask (it) to share similar stories from the net and prepare different scenarios on the same subject for future teaching for undergraduate and postgraduate students, to simulate reality.

[07/08, 19:53] ORL: Patient’s photos and endoscopic videotapes


[07/08, 20:05] ME1: Is it ethical to upload patient materials like photos and videotapes to an AI tool which is still a black box?

[07/08, 20:12] ME2: It's unethical as per the principles of bioethics

[07/08, 20:13] ORL: Of course, after removal of all ID marks and after taking patients' consent to spread the material for teaching purposes. Otherwise, dear, how can we enrich the environment of the AI box if you and I don't share our experiences? How can we access the vast knowledge through AI if we don't supply it? Do you think it generates this knowledge itself?


[07/08, 20:16] ME3: Whenever we take consent, we tell the patient or participant that they can take consent back at any time. So consent is ongoing - can you take the data back from the AI if, after a month or a year, the patient withdraws consent? We can only supply what's ours - also, does your institutional policy allow that, considering there are confidentiality and other institutional policies related to patient data? Just raising these questions for my knowledge and not as a critique.


[07/08, 20:18] ME2: I would not trust something I do not know anything about. Also, someone told me that if you are not paying for the product then you are the product. Many AI tools are free and we still don't know much about them


[07/08, 20:19] ME2: So just a word of caution ⚠️

[07/08, 20:25] +964 750 136 1306: AI in healthcare and medical research can indeed raise bioethical concerns, but it doesn't inherently break the principles of bioethics. The impact of AI on bioethics largely depends on how it is designed, implemented, and regulated. Here are some key bioethical principles and how AI can align with or challenge them:

1. **Autonomy**: AI can support patient autonomy by providing personalized health information and decision-making tools. However, it can also undermine autonomy if patients are not fully informed about how AI systems work or if they feel pressured to follow AI recommendations.

2. **Beneficence**: AI has the potential to greatly benefit patients by improving diagnostic accuracy, predicting disease outbreaks, and personalizing treatments. The challenge is ensuring that AI systems are designed to maximize patient well-being and are free from biases that could harm certain groups.

3. **Non-maleficence**: To avoid harm, AI systems must be rigorously tested and validated. There are concerns about the potential for AI to make errors or perpetuate biases, which could lead to harmful outcomes. Continuous monitoring and updating of AI systems are crucial to mitigate these risks.

4. **Justice**: AI can promote justice by making healthcare more accessible and affordable. However, there is a risk of exacerbating existing inequalities if AI systems are only available to certain populations or if they are biased against marginalized groups.

5. **Privacy and Confidentiality**: AI systems often require large amounts of data, raising concerns about patient privacy and data security. Ensuring robust data protection measures and obtaining informed consent for data use are essential to uphold these principles.

In summary, while AI has the potential to enhance healthcare and medical research, it must be carefully managed to ensure it aligns with bioethical principles. Ongoing dialogue among technologists, ethicists, healthcare professionals, and patients is crucial to navigate these challenges.

Do you have any specific concerns or scenarios in mind where AI might conflict with bioethical principles?


[07/08, 20:29] ORL: Almost all well-established inventions were criticized at the beginning; people tend to reject what they don't know.


[07/08, 20:54] ORL: Life is never either white or black


[07/08, 20:57] ME2: The article is saying that as well: “In the bioethical approach to new technologies, it is essential to associate the principle of precaution with the principle of hope (Patrão-Neves 2021)”


[08/08, 16:04] ME4: Being vigilant with using AI in our everyday careers: besides taking clear consent from any person whose details we want to upload to any LLM, make sure no “patient identifiers” are attached. Also, to improve data privacy, I suggest using the “Temporary Chat” mode so no “memory” is kept by the AI after the chat is closed, as described in this editorial:



[09/08, 21:24] ME5: Similar issues discussed here 👇



[09/08, 22:50] ME6: This is worrying because, despite the debate regarding de-identification, patient data is likely to still be traceable to individuals, especially considering progressing technology.

The discussion of blockchain and data protection suggests a worrisome tendency for research and "scientific progress" to gain supremacy over people's right to privacy.


[10/08, 08:15] ME5: Another interesting debate brewing here. 
🙂🙏

Let me update the above link with more discussions around this same topic in other groups. 

Yes, the only way to maintain privacy would be to resist technology from touching our lives, and the first step would be to throw away this mobile phone on which one could be reading this message. 😅




[11/08, 19:49] ME5: Thanks for sharing this interesting piece! 🙂🙏

Well, if only patient data were just numbers alone, this could have been feasible?

But patient data are largely textual stories! Can they be Gödelized? I doubt it!

Nevertheless it was an interesting read and I was particularly intrigued by the 1985 paper mentioned and I quote from your link shared above:

"The Knowledge Complexity of Interactive Proof-Systems".This paper introduced... and conceived the concept of knowledge complexity, a measurement of the amount of knowledge about the proof transferred from the prover to the verifier."

Will try to find out more about it.



[11/08, 20:49] ME5: Absolutely agree with this!

I'm assuming it's ChatGPT or Claude talking.

I guess we simply need to look at the age-old, deidentified case report model of data sharing over the last 500 years, and how that has eventually scaled to become the foundation of all our current medical knowledge?

More about it in a different flavour here 👇
