*Summary:*
The conversation revolves around the integration of Artificial Intelligence (AI) in healthcare, specifically in the doctor-patient relationship. The discussion touches on the use of AI-powered tools, such as Google Workspace, Gemini, and other Large Language Models (LLMs), to enhance patient care, streamline medical data management, and support clinical decision-making. The participants share their experiences and ideas on leveraging AI to improve healthcare outcomes, while also addressing concerns around data security, HIPAA compliance, and the limitations of current AI models in medical decision support.
*Key Words:*
- AI in healthcare
- Doctor-patient relationship
- Agentic AI
- Medical data management
- Clinical decision support
- LLMs
- Google Workspace
- Gemini
- HIPAA compliance
- Data security
Image CC licence: https://commons.wikimedia.org/wiki/File:Model_based_utility_based.png#mw-jump-to-license
Conversational transcripts:
One group:
[09/12, 22:59]hu1: https://aisel.aisnet.org/jais/vol26/iss6/5/
[10/12, 08:08]hu2: Thanks. Can you share the full text?
It's interesting how we broke the dyadic doctor-patient relationship through PaJR CBBLE-driven, team-based Web 2.0 learning, but couldn't call it triadic as there were multiple human agents in the team!
Another group:
[09/12, 18:28]hu3: Can Google Sheets or Docs be used to store medical data, say patient records?
[09/12, 18:28]hu3: Many practitioners use spreadsheets to store some basic appointment info, some even as a rudimentary EMR.
[10/12, 08:58]hu4: Short answer is yes. Long answer is that you need to ensure that access controls are strict and the ability to copy/share data is pretty much locked down. Depending on the country you are doing this in, for HIPAA compliance you will need a BAA from Google. The only way to get that is to have a Workspace account, which adds to the auditing requirements etc.
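A minimal sketch of hu4's "yes, but lock it down" answer in code, assuming a Workspace setup where a single service account is the only identity with edit access to the appointment sheet; the key file name, sheet title, and column layout below are all illustrative:

```python
# Illustrative sketch only: log appointments to a Google Sheet through a
# service account, so individual user credentials are never shared around.
# Assumes the gspread library and a service-account key that has been granted
# editor access to this one sheet (nothing domain-wide).
import gspread

def log_appointment(patient_id: str, date: str, reason: str) -> None:
    client = gspread.service_account(filename="clinic-service-account.json")  # hypothetical key file
    sheet = client.open("Clinic Appointments").sheet1  # hypothetical sheet name

    # Store only the minimum necessary fields; richer clinical notes belong in
    # a system covered by a BAA, not in an ad-hoc spreadsheet.
    sheet.append_row([patient_id, date, reason])

if __name__ == "__main__":
    log_appointment("PT-0042", "2025-12-12", "Follow-up: sinusitis")
```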
[10/12, 09:01]hu4: @Group Moderator I think this group is really useful. My only challenge is the volume of messages — if I fall behind, it takes me a while to catch up because I like to at least browse through everything. The information people share here is impressive, and the discussions are genuinely helpful. I know you ran a poll, but I wanted to make sure you had my reasoning behind voting yes.
[10/12, 09:03]hu5: NYT reports a growing number of patients are uploading records to ChatGPT to clarify diagnoses and understand care plans!
#healthcare #UShealthcare
[10/12, 09:25]hu3: Thanks. I think a Workspace account should help. A Pro account would also help get the Pro version of Gemini. I intend to use the Workspace Studio to make a rudimentary EMR and demo it to this group. Maybe on one of the Wednesdays.
I plan to take more of the unstructured data route (more documents than spreadsheets), as I believe AI will be able to handle that. Also because medical history is more unstructured than structured. No-form data entry is better, where possible.
[10/12, 09:27] Group Moderator: thank you very much for your feedback! It is very helpful
[10/12, 09:27]hu3: Gemini states very clearly that it won't answer medical questions, so one will have to live with that for this experiment. Maybe other AI tools can take care of that. Maybe it's also the right way to go.
[10/12, 09:34]hu2: Here's an attempt at unstructured EMR data archival in different individual patient slots, showcasing AI-in-the-loop clinical decision-making support
[10/12, 09:35]hu3: I was thinking of these. We will put PaJR on steroids with this new approach (that's the intention).
[10/12, 09:36]hu3: Which AI are you using
[10/12, 09:37]hu2: Multiple LLMs
[10/12, 09:37]hu3: The Workspace Studio is more about agents, i.e. work being done on the record, e.g. sending automated alerts, creating other documents etc.
[10/12, 09:38]hu3: Gemini decided not to be a medical decision support system
[10/12, 09:41]hu3: Maybe after the initial jugaad experiment with Google Workspace, one can try the OpenAI Business suite. OpenAI is still keen to work with medical decisions.
[10/12, 09:41]hu2: Yes we need some agentic AI help to analyse all the archived data toward this project:
Distributed Health Citizenship in a Crowdsourced patient journey record PaJR Ecosystem
We could do with some agentic AI help with the next steps for looking at our results:
1. **Formalize Data Extraction:**
Systematically code the case reports and chats for quotes illustrating key themes: delegation, conflict, platform use, advocacy, data loss from all the raw narrative data available here: https://pajrcasereporter.blogspot.com/?m=1
2. **Strengthen Theoretical Linkage:**
Explicitly connect findings to literature on digital labor, health informatics, and participatory democracy.
3. **Develop the Methods Section:**
Detail the digital ethnography approach, ensuring it meets rigorous standards for qualitative research.
4. **Create Visualizations:**
Diagrams mapping the different network structures and data flows for the two cases would powerfully illustrate the argument.
Can anyone give this prompt to Workspace Studio and share what they find?
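One rough way to run the prompt above outside Workspace Studio is a small script that fetches a few of the archived case-report pages and asks a general-purpose LLM for quotes on the listed themes. This is a sketch only: the model name, URL list, truncation limit, and prompt wording are placeholders, and the output would still need manual qualitative coding.

```python
# Sketch of step 1 ("Formalize Data Extraction") with a generic LLM API.
# Model name and URLs are placeholders; results need human verification.
import requests
from openai import OpenAI

URLS = [
    "https://pajrcasereporter.blogspot.com/?m=1",
    # ...add individual case-report post URLs here
]
THEMES = "delegation, conflict, platform use, advocacy, data loss"

client = OpenAI()  # expects OPENAI_API_KEY in the environment

for url in URLS:
    page_text = requests.get(url, timeout=30).text[:15000]  # crude truncation
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are coding qualitative healthcare narratives."},
            {"role": "user",
             "content": f"Extract verbatim quotes illustrating these themes: "
                        f"{THEMES}.\n\nSource page:\n{page_text}"},
        ],
    )
    print(url, "\n", response.choices[0].message.content, "\n")
```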
[10/12, 09:42]hu3: These should be possible if they are just manipulations of data. One or more AI agents with good prompts should take care of it.
[10/12, 09:42]hu2: We use OpenAI, Copilot, Perplexity, DeepSeek, August AI etc.
Mostly DeepSeek, rarely August AI.
[10/12, 09:43]hu2: Yes they are simply required to perform thematic analysis on data available in multiple URLs
[10/12, 09:44]hu3: NotebookLM would also do a neat job. Add all the URLs as sources and get wonderful outputs: infographics, ppt, video, audio, reports of any sort.
[10/12, 09:45]hu3: Especially useful for your students to learn from all the data you have now archived.
[10/12, 09:45]hu2: That would still need manual adding of 400 URLs there in that one URL!
[10/12, 09:45]hu2: Notebook LM used to work for me earlier but in recent times it just doesn't respond!
[10/12, 09:45]hu3: The researcher will select the ones really required
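Collecting those 400-odd post URLs need not be manual. A short sketch, assuming the blog exposes the standard public Blogger feed endpoint (pagination size and field names follow the usual Blogger feed conventions), could list them for the researcher to filter before adding selected ones to NotebookLM:

```python
# Sketch: list post URLs from a Blogger blog's public JSON feed so a
# researcher can pick which ones to add to NotebookLM as sources.
import requests

BLOG = "https://pajrcasereporter.blogspot.com"

def list_post_urls(blog: str) -> list[str]:
    urls, start = [], 1
    while True:
        feed = requests.get(
            f"{blog}/feeds/posts/default",
            params={"alt": "json", "max-results": 100, "start-index": start},
            timeout=30,
        ).json()
        entries = feed.get("feed", {}).get("entry", [])
        if not entries:
            break
        for entry in entries:
            for link in entry.get("link", []):
                if link.get("rel") == "alternate":  # the post's canonical URL
                    urls.append(link["href"])
        start += len(entries)
    return urls

if __name__ == "__main__":
    for url in list_post_urls(BLOG):
        print(url)
```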
[10/12, 09:46]hu3: Go pro
[10/12, 09:46]hu6: Go Jio then Go Pro
[10/12, 09:52] hu4 : I am curious.. We can chat privately if you want. I may be able to help
[10/12, 09:53]hu3: Sure. In a day or two I will be ready with my first shot.
[10/12, 09:54]hu3: I was looking at the snippets feature of Google Docs, just to make repetitive data entry like prescriptions easier. E.g. just type fever and the related prescription will get filled in.
[10/12, 09:55]hu3: Template docs and snippets can do half the job, AI agents can help further. Plus we have multimodal data entry.
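The Docs snippets feature itself is point-and-click, but as a rough illustration of the same "type fever, get the prescription filled" idea, a script could push a saved template into a consultation document through the Docs API. Everything below is an assumption for illustration: the credential file, document ID, and template strings are placeholders, not real prescriptions.

```python
# Illustration of snippet-style entry: insert a saved prescription template
# into a Google Doc via the Docs API (google-api-python-client).
from google.oauth2.service_account import Credentials
from googleapiclient.discovery import build

TEMPLATES = {  # placeholder template text, to be replaced by the clinician
    "fever": "Rx (fever template): <drug, dose, duration>\n",
    "sinusitis": "Rx (sinusitis template): <drug, dose, duration>\n",
}

def insert_snippet(doc_id: str, keyword: str) -> None:
    creds = Credentials.from_service_account_file(
        "clinic-service-account.json",  # hypothetical key file
        scopes=["https://www.googleapis.com/auth/documents"],
    )
    docs = build("docs", "v1", credentials=creds)
    docs.documents().batchUpdate(
        documentId=doc_id,
        body={"requests": [{
            "insertText": {
                "location": {"index": 1},  # start of the document body
                "text": TEMPLATES[keyword],
            }
        }]},
    ).execute()

insert_snippet("YOUR_DOC_ID", "fever")
```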
[10/12, 09:56]hu3: Only live decision support is a challenge due to Gemini's stance.
[10/12, 09:57]hu7: Any experience with MedGemma then?
[10/12, 09:58]hu3: Good idea. Need to figure out how to make it work with Google Workspace.
[10/12, 13:17] hu3: Initial thoughts are done. Have to begin creation of the first prototype. After that, will demo to this group for feedback.
Image above:
[10/12, 13:18]hu3: My initial thoughts about usage of Google Workspace and its tools
[10/12, 13:34]hu3: @ kindly reserve a Wednesday for a demo of this. I hope to build a working prototype soon.
[10/12, 13:46]hu8: Anytime, there is a Google sheet link, where you can add yourself, set up by @
[10/12, 14:57] hu2: Can I borrow the image with full attribution to the source prompter?
Here's a textual summary of the morning conversations archived:
The conversation highlights critical challenges for this new model:
* **Agency & Responsibility:** If an AI agent suggests an analysis or alert, who is responsible? The human who prompted it? The developer?
* **Data Sovereignty & Security:** Using tools like Google Sheets (`hu4`'s HIPAA/BAA warning) highlights the tension between accessible, powerful tools and robust health data governance.
* **The "Black Box" in the Team:** An AI's reasoning is often opaque. How does the human team "trust" or "collaborate" with an opaque agent?
* **Complementary Roles:** The group is pragmatically figuring out which AI is good for what—Gemini for workflow but not diagnosis, OpenAI for medical analysis, NotebookLM for synthesis—mirroring how a human team has specialists.
### Conclusion: From a Broken Dyad to an Augmented Network
Your insight frames the evolution perfectly:
1. **Past (Traditional):** Closed **Dyad** (Doctor ↔ Patient)
2. **Present (Web 2.0 Era):** Open **Human Polyad/Network** (Patient + Doctor + Multiple Human Agents)
3. **Future (AI-Integrated):** **Augmented Polyad/Network** (Human Network + Multiple Agentic AI Actors)
The goal is no longer to have a single AI "replacing" the doctor in a dyad. Instead, it's about **integrating multiple AIs as active participants into the existing, distributed human care network.**
https://userdrivenhealthcare.blogspot.com/2025/12/udlco-crh-harnessing-agentic-ai-into.html?m=1
[10/12, 15:00]hu3: In the Workspace EHR I am considering, AI is just an agent which does a given task, i.e. if X data is entered, send a message to Y.
AI does the assigned work. That's all.
[10/12, 19:42]hu3: Prototyping started. Failure. Gemini says "We are at capacity. We will be back soon"!!!
[10/12, 19:42]hu3: Hope it works in a few hours
[10/12, 20:40]hu3: Removed Gemini and the rest of the workflow worked
[10/12, 21:41]hu9: Haha!
[10/12, 21:43]hu3: It continues to say the same thing. Other inadequacies found with respect to my plans, but let's see. Will try my best. It's a rudimentary agent builder. Google always tries to be minimalistic. It creates draft mails but can't send them. It can assign tasks only to self (the user), not to others.
[10/12, 21:44]hu3: I created another workflow, where the AI has to decide whether medications have been prescribed. If yes, the pharmacy has to be mailed the prescription. But once again, the "AI is down/busy" message appeared, so the workflow failed!
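Outside the Workspace agent builder, the same two-step flow can be sketched in plain code: ask an LLM whether a note contains a prescription, and if yes prepare (not send) an email to the pharmacy, mirroring the draft-only limitation noted above. The model name, pharmacy address, and sample note are placeholders, and the LLM's yes/no call would still need clinician review.

```python
# Sketch of the failed Workspace flow in plain Python: an LLM flags whether a
# clinical note contains a prescription; if yes, a pharmacy email is drafted.
from email.message import EmailMessage
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def prescription_present(note: str) -> bool:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": "Answer strictly YES or NO: does this clinical note "
                       f"contain a medication prescription?\n\n{note}",
        }],
    )
    return reply.choices[0].message.content.strip().upper().startswith("YES")

def draft_pharmacy_mail(note: str) -> EmailMessage | None:
    if not prescription_present(note):
        return None
    msg = EmailMessage()
    msg["To"] = "pharmacy@example.com"          # placeholder address
    msg["Subject"] = "Prescription for dispensing"
    msg.set_content(note)                       # draft only; sending stays a human, audited step
    return msg

print(draft_pharmacy_mail("Patient X: fever for 3 days. Rx: <medication details>."))
```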
[10/12, 21:45]hu3: I will try tomorrow. Else, try this whole thing with the MS Office suite. (I want something which is easy to build, not requiring n8n or code.)
[10/12, 21:54]hu9: Net net - Gemini is a weird shell script!
[10/12, 21:55]hu3: I am on the Pro version and this is its state. Let's hope it's a temporary problem.
[10/12, 21:55]hu3: All these guys are busy distributing free plans to market themselves and survive.
[10/12, 21:56]hu10: Maybe it recognised you as a human… sorry, another machine
[10/12, 22:00]hu10: https://www.reddit.com/r/googleworkspace/comments/1pf7d50/we_are_at_capacity_well_be_back_soon_error/
[10/12, 22:01]hu10: Looks like many are facing this issue sirji
[10/12, 22:24]hu3: Oops, it has been like this for the last 5 days, it seems.
[10/12, 10:55]huai: This is a fascinating observation that gets to the heart of how technology reshapes relationships and power structures in healthcare.
Your analysis is spot-on. Let's break down the key conceptual shift:
### 1. The Historical Break: From Dyadic to Polyadic (Not Triadic)
You correctly identified the core idea: **Web 2.0 and team-based learning broke the traditional one-on-one (dyadic) doctor-patient model.** However, it didn't create a simple three-party (triadic) model. Instead, it introduced **multiple human agents**:
* The patient
* The primary doctor
* Specialists
* Nurses, physiotherapists, other allied health professionals
* **Critically: The patient's social network (family, caregivers) and peer patients from online communities.**
This created a **polyadic or networked relationship**. The "User Driven Healthcare" and "PaJR" (Patient Journey Record) models you linked formalized this, viewing the patient as the center of a learning network where care and knowledge are co-created by multiple human stakeholders. The power dynamic shifted from a paternalistic dyad to a (theoretically) more democratic, distributed network.
### 2. The New Frontier: Introducing Agentic AI as a New "Actor"
The conversation in your transcript is exploring the next logical step: **inserting Agentic AI into this already polyadic human network.**
This isn't just about AI as a passive tool (like an MRI machine). "Agentic AI" implies an active, semi-autonomous entity that can:
* Analyze unstructured data (patient narratives, records).
* Perform thematic analysis across hundreds of documents (as `hu2` requests).
* Manage workflows (sending alerts, creating documents).
* Potentially offer clinical decision support (though cautiously, as noted with Gemini's limitations).
### 3. The Emerging Model: A Polyadic Human-AI Ecosystem
The vision described (`hu3`: "We will put PaJR on steroids") is of a **hybrid ecosystem**:
* **Human Agents:** Patient, care team, family, peer network.
* **AI Agents:** LLMs (DeepSeek, OpenAI), specialized models (MedGemma), and workflow automators (Google Workspace agents) that act as force multipliers for both the patient and the clinicians.
This creates a complex system where:
* **The patient's narrative and data** (archived in blogs, chats, records) become the central "source of truth."
* **AI agents** are harnessed to synthesize, analyze, and manage information flows within this network.
* **The human team** focuses on interpretation, empathy, ethical judgment, and physical care, guided by insights surfaced by AI.
### 4. Key Tensions and Considerations from the Transcript
The conversation highlights critical challenges for this new model:
* **Agency & Responsibility:** If an AI agent suggests an analysis or alert, who is responsible? The human who prompted it? The developer?
* **Data Sovereignty & Security:** Using tools like Google Sheets (`hu4`'s HIPAA/BAA warning) highlights the tension between accessible, powerful tools and robust health data governance.
* **The "Black Box" in the Team:** An AI's reasoning is often opaque. How does the human team "trust" or "collaborate" with an opaque agent?
* **Complementary Roles:** The group is pragmatically figuring out which AI is good for what—Gemini for workflow but not diagnosis, OpenAI for medical analysis, NotebookLM for synthesis—mirroring how a human team has specialists.
### Conclusion: From a Broken Dyad to an Augmented Network
Your insight frames the evolution perfectly:
1. **Past (Traditional):** Closed **Dyad** (Doctor ↔ Patient)
2. **Present (Web 2.0 Era):** Open **Human Polyad/Network** (Patient + Doctor + Multiple Human Agents)
3. **Future (AI-Integrated):** **Augmented Polyad/Network** (Human Network + Multiple Agentic AI Actors)
The goal is no longer to have a single AI "replacing" the doctor in a dyad. Instead, it's about **integrating multiple AIs as active participants into the existing, distributed human care network,** supercharging the "User Driven Healthcare" vision. The relationship is becoming exponentially more complex, moving from a simple dyad to a **crowdsourced, AI-augmented ecosystem for health citizenship,** as hinted at in the final project title.
This is a profound shift from managing a relationship to orchestrating an ecosystem.
[10/12, 13:22]hu1: I will share full article today
[12/12, 07:30]hu1: The biggest challenge in evidencing AI-driven outcomes is establishing reliable baseline metrics.
AI solution providers often lack the domain knowledge to define these baselines effectively, and customers, such as hospitals, often lack the resources to undertake the task. Despite this, both sides end up content with the AI implementation.
[12/12, 07:39]hu3: Excellent point
[12/12, 07:40]hu3: In such a scenario, especially in healthcare, how does one determine the value in money terms?
[12/12, 07:43]hu9: Can we consider "baseline metrics" equivalent to "statutory regulator" like NMC?
[12/12, 08:11]hu2: Yes, aka minimum standard requirements in NMC parlance, and most of the nation's 700 medical colleges struggle to meet them, although they are way below global standards.
[12/12, 07:45]hu9: Unfortunately, NMC itself says that it issues "regulations/advisories etc." but implementing them is the responsibility of the State Authorities. I wonder what kind of *regulator* is that?
[12/12, 08:12]hu2: Same as central government which keeps harping "Healthcare is a state subject!"
[12/12, 07:48]hu9: I used to see "LPG regulator" do both actions viz. open and close the LPG for safety.
[12/12, 07:49]hu9: However, the NMC appears as just an "opening" regulator that has brought a flood of Medical Colleges, jeopardizing safety to the limit that the Faridabad incident occurred.
[12/12, 07:50]hu9: Is there any such regulator for AI? Or is it just like the unregulated press and media, which modify opinion on everything from medical diagnosis to police investigation to judicial decisions?
[12/12, 08:12]hu2: Like the internet of we the people for the internet by the people?
[12/12, 08:09]hu3: I tried using the ready-made examples which Google Workspace has provided. Even those are not working currently. Actually, it is quite surprising that Google has not fixed them yet.
[12/12, 08:11]hu3: However, some useful stuff could still be learnt from this initial failure:
- To create a quick prescription or medical record, Google Docs has a blocks feature (like snippets in Word).
E.g. if you regularly write a similar prescription for sinusitis, save it as a block and re-use it.
[12/12, 08:13]hu3: Several members of the clinic can use a shared folder, and alert mails or chat messages can be created if some change happens there. (Each person will have to create their own alert agent, since it can't be sent to others; you can only say "alert me".)
[12/12, 08:14]hu3: You can create a task for yourself. You can label them.
[11/12, 16:31]hu8: One of the challenges I have with the AI (or non-AI) crowd these days is what I call the "all (black) or nothing (white) syndrome" -- basically we talk in absolutes, sort of booleans: that AI (today's AI, at least) is the be-all and end-all, or, on the other extreme, that it is nothing, naught, zilch. Reality is always somewhere in the middle?
[12/12, 08:14]hu2: We need an algorithm to collect all our fragmented patient data strewn in space to get a better picture of healthcare?
To quote:
She stared at her computer screen at 3 AM, tears streaming down her face—not because the internet was attacking her, but because she'd just done something humanity thought was impossible: she'd photographed a ghost 55 million light-years away. The attacks would come later. First came the miracle.
The Impossible Problem
In 2013, Katie Bouman was a graduate student at MIT studying computational imaging when she heard astrophysicist Shep Doeleman give a talk about the Event Horizon Telescope project. They wanted to photograph a black hole. Most people in the room understood this was borderline science fiction. Katie Bouman—who knew nothing about black holes at the time—saw it as the ultimate computational puzzle. "I begged my PhD advisor to let me work on it," she later said. Here's why it seemed impossible: the black hole they wanted to photograph was in the M87 galaxy, 55 million light-years away. To capture an image of something that far away with the necessary resolution, you'd need a telescope the size of Earth. Obviously, you can't build an Earth-sized telescope. So they built a computational one instead.
Building an Earth-Sized Eye
The Event Horizon Telescope wasn't a single telescope—it was eight radio telescopes scattered across the planet: from Antarctica to Chile to Spain to Arizona to Mexico. For 10 days in April 2017, these eight telescopes pointed at the same spot in space and collected data simultaneously. But here's the problem: even working together, these telescopes couldn't actually "see" a black hole the way a camera sees a face. They captured fragments—incomplete, scattered pieces of radio wave data that, on their own, meant nothing. Someone needed to write an algorithm that could take these fragments and reconstruct them into an image. An algorithm that wouldn't just guess what the black hole looked like, but would mathematically determine which image was most likely to be true. That someone was Katie Bouman.
The Algorithm That Sees the Unseeable
From 2013 to 2017, Katie led the development of CHIRP (Continuous High-resolution Image Reconstruction using Patch priors)—an algorithm designed to do what seemed impossible: create a clear image from incomplete data. But there was a massive problem: human bias. If Katie's algorithm produced an image that looked like a ring, and she showed it to other team members, they might subconsciously make their algorithms produce ring-like images too. So the team did something radical: they split into four independent groups. Each group developed different algorithms based on different assumptions. They didn't communicate with each other during the entire imaging process. Then, after months of work, all four teams gathered in Cambridge, Massachusetts, and ran their algorithms simultaneously in the same room. And they waited.
April 10, 2019
The photo that went viral shows Katie Bouman sitting at her laptop, hands over her mouth, eyes wide with disbelief. This was the moment all four algorithms converged on the same image. A bright ring of superheated gas and dust surrounding a dark center—the shadow of a black hole. They'd done it. After six years of work, after processing 5 petabytes of data (so much data it had to be transported on half a ton of hard drives because it was faster than internet transfer), after coordinating eight telescopes across four continents—they had captured the first photograph of a black hole. Proof of Einstein's theory of general relativity.
Visual confirmation that black holes—objects so dense that not even light can escape them—actually exist. "Watching in disbelief as the first image I ever made of a black hole was in the process of being reconstructed," Katie wrote on Facebook. The image made headlines around the world. So did Katie's photo. And then came the backlash.
The Internet Turns Ugly
Within 24 hours of the image's release, online trolls launched a coordinated attack on Katie Bouman. They claimed she didn't actually write the code. They counted lines of code in GitHub repositories and claimed other team members (specifically male team members) had done all the real work. They called her a "diversity hire." They said she was taking credit for other people's achievements. Some went further, with misogynistic attacks and harassment so severe that her colleagues felt compelled to speak out. Andrew Chael, another member of the Event Horizon Telescope team, posted a thread on Twitter that went viral: "While I wrote much of the code for one of these pipelines, Katie was a huge contributor to the overall effort, and showed forth wonderful leadership as we worked to test the robustness of the code." He continued: "I'm thrilled Katie is getting recognition for her work and hope it inspires young women to pursue science. While I appreciate congratulations on a result I worked hard on for years, if you are congratulating me because you have a sexist vendetta against Katie, please go away and reconsider your priorities in life." Vincent Fish, a research scientist at MIT's Haystack Observatory, was equally clear: "Katie was a major part of one of the imaging subteams." The entire Event Horizon Telescope team rallied around her. Press releases emphasized her crucial role. Colleagues publicly defended her contributions.
Something Rare Happened
The internet backlash was vicious—but it didn't win. Scientists, journalists, and everyday people pushed back against the trolls. Major media outlets published articles explaining Katie's actual contributions. Her TEDx talk from 2016 (where she explained the algorithm before the image was even captured) went viral. The story became bigger than one scientist—it became about recognizing the systemic devaluation of women's work in STEM fields. Katie herself maintained grace throughout. She repeatedly emphasized that this was a team effort: "No one algorithm or person made this image. It required the amazing talent of a team of scientists from around the globe and years of hard work to develop the instrument, data processing, imaging methods, and analysis techniques that were necessary to pull off this seemingly impossible feat." She credited her ~200 colleagues. She shared the spotlight willingly. But she also didn't downplay her own work—because it was real, documented, and essential.
What Actually Happened
Let's be clear about Katie Bouman's contributions (because facts matter):
She led the development of the CHIRP algorithm starting in 2013 as part of her PhD work
She gave a TEDx talk in 2016 explaining how to photograph a black hole (two years before the image was captured)
She was co-leader of the EHT Imaging Working Group
Her verification methods were crucial to ensuring the final image wasn't an artifact or error
She designed the framework that allowed multiple independent algorithms to validate each other
Was she the only person responsible? Absolutely not—this was a 200+ person collaboration. Was her work essential and leadership pivotal? Yes. Unquestionably yes.
The Aftermath:
Katie Bouman joined Caltech as an assistant professor of computing and mathematical sciences in June 2019, shortly after the image's release. She was 29 years old. By 2024, she'd been promoted to associate professor. She continues to lead imaging efforts for the Event Horizon Telescope, which has since captured another image: the black hole at the center of our own Milky Way galaxy (Sagittarius A*), released in May 2022. She's received the Royal Photographic Society Progress Medal and Honorary Fellowship. She was recognized as one of BBC's 100 Women of 2019. But perhaps most importantly, her story inspired thousands of young women to pursue STEM fields.
The Deeper Truth: Katie Bouman's story isn't really about a black hole. It's about what happens when women do groundbreaking work—the dual reality of celebration and doubt that so many women in science face. It's about the exhausting requirement to prove not just competence but excellence, and then have that excellence questioned anyway. It's about the automatic assumption that a young woman couldn't possibly be the brilliant mind behind a historic achievement—that surely a man did the "real" work. But it's also about resistance. About colleagues who stood up. About an internet community that, for once, chose truth over toxicity. About a young woman who maintained her grace while refusing to be erased. Katie Bouman never wanted to be famous. She just wanted to solve an impossible problem. She wanted to see what no human had ever seen before. And she did.
Six Years, Eight Telescopes, 200 Scientists, One Image
On April 10, 2019, humanity photographed a black hole for the first time in history. We saw the unseeable. We proved Einstein right. We looked 55 million light-years into space and captured light from the edge of a gravity well so powerful that nothing—not even light itself—can escape. Katie Bouman was 29 years old. She'd spent six years developing algorithms that could see in the dark. Six years turning fragments of data into a coherent picture. Six years making the impossible possible. Her code didn't just capture a black hole. It captured humanity's potential—what we can achieve when we combine brilliant minds from across the globe, when we refuse to let bias define who gets to make history, when we choose collaboration over competition.
The black hole image is now one of the most famous photographs in scientific history, up there with the first image of Earth from space and the DNA double helix.
And whenever you see that image—that fiery ring of light surrounding profound darkness—remember: a 29-year-old woman helped humanity see it. Katie Bouman: born 1989. The computer scientist who taught us that the impossible just requires better algorithms—and the courage to try. May her story inspire every young girl who looks up at the night sky and wonders what's out there. The answer might be waiting for her to invent the algorithm that reveals it.
Unquote.
*Thematic Analysis:*
1. *AI-powered healthcare:* The conversation highlights the growing interest in using AI to enhance healthcare services, including patient care, data management, and clinical decision support.
2. *Challenges and limitations:* The participants discuss the challenges and limitations of current AI models, such as Gemini's stance on medical decision support, and the need for careful consideration of data security and HIPAA compliance.
3. *Agentic AI:* The concept of agentic AI is introduced, highlighting its potential to analyze and support complex healthcare data and decision-making processes.
4. *Collaboration and knowledge sharing:* The conversation demonstrates a collaborative effort among healthcare professionals to share knowledge, ideas, and experiences on leveraging AI in healthcare, with a focus on improving patient outcomes.
5. *Future directions:* The discussion touches on the potential future directions of AI in healthcare, including the use of multimodal data entry, template documents, and snippets to streamline medical data management and support clinical decision-making.

