Every AI feature you enable is a governance decision you’re making — whether you know it or not
This post is part of the Telehealth & Healthcare Security Series. It builds on the foundational concepts in Why HIPAA Compliance Isn’t Enough: A NIST CSF 2.0 Guide for Telehealth, The Compliance Trap, and The AI Governance Lifecycle Framework — and focuses on the specific governance challenges that AI-powered telehealth features introduce into healthcare environments.
The ambient scribe was saving providers 90 minutes a day. It was also sending every word spoken in every patient encounter to a third-party AI service that the practice had never evaluated, the BAA didn’t cover, and the patients didn’t know about.
The practice administrator discovered it by accident — reviewing a new invoice from a vendor she didn’t recognize. Three months of audio transcription charges, billed per minute, for every telehealth visit conducted on their platform. The telehealth vendor had quietly enabled an “AI-enhanced documentation” feature in a routine platform update. No notification. No consent workflow. No BAA amendment. Just a toggle switched on by default, funneling patient conversations through an AI pipeline that the practice had never mapped, vetted, or disclosed to a single patient.
This is what AI governance failure looks like in healthcare. Not a dramatic breach headline — a silent expansion of who has access to your patients’ most sensitive conversations, hidden behind a feature that providers love because it genuinely makes their work easier.
A single telehealth visit with AI features enabled creates a data flow that most practices have never documented, never evaluated, and never disclosed to patients:
Step 1: Patient Speaks
Audio captured by telehealth platform. The patient describes symptoms, medication history, mental health concerns — everything said in the visit is recorded as raw audio.
Step 2: Audio Sent to AI Transcription Service (Vendor B)
Patient audio leaves the telehealth platform. Raw audio is transmitted to a third-party transcription service the practice may not have evaluated. Data crosses a system boundary.
Step 3: Processed by LLM (Vendor C’s Infrastructure)
Transcribed text is sent to a large language model hosted on yet another vendor’s cloud infrastructure. The model processes the full clinical conversation to generate structured output.
Step 4: Transcript Returned to Platform
AI-generated clinical note returned. The structured transcript and draft note are sent back to the telehealth platform. The provider sees a polished summary — but doesn’t see the pipeline behind it.
Step 5: Auto-Summary Generated & Pushed to EHR
Summary automatically written to the medical record. The AI-generated note is pushed into the EHR, potentially without human review. It becomes part of the permanent clinical record.
Step 6: Copy Retained for “Model Improvement”
AI vendor retains a copy of patient data. Buried in the terms of service: the vendor retains transcripts, audio, or both for “service improvement” and model training. Your patients’ conversations are now training data.
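To make that pipeline auditable rather than abstract, here is a minimal sketch in Python of the same six hops as a data inventory. The vendor names, field choices, and retention values are illustrative assumptions, not a description of any specific product:

```python
from dataclasses import dataclass

@dataclass
class DataHop:
    """One hop in an AI pipeline: who holds patient data, and in what form."""
    step: int
    holder: str        # entity in possession of the data at this hop
    data_form: str     # raw audio, transcript, draft note, etc.
    baa_covered: bool  # is this holder covered by a signed BAA?
    retention: str     # what this holder keeps after processing

# Hypothetical reconstruction of the six steps above.
AMBIENT_SCRIBE_FLOW = [
    DataHop(1, "Telehealth platform", "raw audio", True, "session recording"),
    DataHop(2, "Transcription service (Vendor B)", "raw audio", False, "unknown"),
    DataHop(3, "LLM host (Vendor C)", "full transcript", False, "unknown"),
    DataHop(4, "Telehealth platform", "draft clinical note", True, "note"),
    DataHop(5, "EHR", "final note", True, "permanent record"),
    DataHop(6, "AI vendor", "transcript/audio copy", False, "indefinite ('model improvement')"),
]

# Any hop without verified BAA coverage or a known retention term is a gap.
for hop in AMBIENT_SCRIBE_FLOW:
    if not hop.baa_covered or hop.retention == "unknown":
        print(f"Step {hop.step}: {hop.holder} is an ungoverned hop")
```

Every hop where BAA coverage is false or retention is unknown is a finding to carry into the intake form later in this post.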
Most practices don’t adopt AI deliberately. They discover it’s already running. The telehealth platform they signed a BAA with two years ago has since added AI capabilities that fundamentally change where patient data goes, who processes it, and what it’s used for. Here are the features most likely already active — or one update away from activation — in your environment.
Ambient AI Scribes
What it does: Listens to the entire patient-provider conversation in real time, then generates a structured clinical note — SOAP format, assessment, plan, and all. Providers love it because it eliminates hours of documentation.
The governance gap: The scribe captures everything said during the visit. That audio is transmitted to an AI vendor — often a sub-processor the practice has never evaluated. Many ambient scribe vendors retain conversation data and may use it for model training. The BAA with your telehealth platform almost certainly does not cover this downstream AI vendor.
Patient-Facing AI Chatbots
What it does: Handles pre-visit triage, symptom collection, appointment scheduling, and patient FAQ responses. Engages patients before they ever speak with a provider.
The governance gap: These chatbots collect PHI before the visit even starts. Patients disclose symptoms, medication lists, and mental health concerns to an AI system that may store and process this data independently of the EHR. Patients often don’t realize they’re interacting with AI, and the data collected may not be subject to the same access controls as the clinical record.
Automated Visit Summaries
What it does: Creates a plain-language summary of the visit for patients — what was discussed, what was decided, next steps. Sent to the patient portal or via email after the appointment.
The governance gap: Convenient for patients, but who reviews these summaries for clinical accuracy before they’re sent? Where are they stored? If the AI hallucinates a diagnosis or medication instruction, that misinformation is now in the patient’s hands — and potentially in the medical record. The liability implications are uncharted.
AI-Assisted Medical Coding
What it does: Analyzes clinical notes to suggest CPT and ICD-10 billing codes. Promises to reduce coding errors and increase reimbursement accuracy.
The governance gap: To suggest codes, the AI needs access to full clinical documentation — diagnoses, procedures, provider reasoning. This means yet another AI system with access to complete patient records. If the coding AI is hosted by a third-party vendor, that’s another data pipeline that needs a BAA, security evaluation, and data retention review.
Predictive Analytics
What it does: Risk scoring, population health analysis, no-show prediction, readmission risk assessment. Uses aggregated patient data to identify patterns and predict outcomes.
The governance gap: Predictive analytics aggregate patient data in ways patients never anticipated and almost certainly didn’t consent to. A patient who shared symptoms for treatment didn’t agree to be scored by a risk algorithm. These systems can introduce bias, create discriminatory patterns, and make clinical decisions based on correlations that no human reviewed. The data aggregation itself may violate minimum necessary standards.
Before any AI feature touches patient data, five questions must be answered. Not as a compliance exercise — as a governance practice. Each question maps to a concrete risk that, left unaddressed, creates liability, regulatory exposure, and patient harm potential.
Question 1: Where does patient data actually go?
Map every data hop from patient to AI output and back. Where does the audio go? Who transcribes it? What infrastructure hosts the AI model? Where is the output stored? Does any copy remain with any vendor after processing? Most practices cannot answer this question for a single AI feature — and they have three or more active.
Action: Create a data flow diagram for every AI feature. If you can’t trace patient data from capture to final storage and deletion, you don’t have governance — you have hope.
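The diagram itself needs no special tooling. Continuing the hypothetical inventory sketched earlier, this emits Graphviz DOT text that any Graphviz viewer can render, with ungoverned hops drawn in red:

```python
def to_dot(flow: list[DataHop]) -> str:
    """Render a hop inventory (see the earlier sketch) as Graphviz DOT."""
    lines = ["digraph ai_feature {", "  rankdir=LR;"]
    for hop in flow:
        color = "black" if hop.baa_covered else "red"  # red = ungoverned hop
        lines.append(f'  n{hop.step} [label="{hop.holder}\\n{hop.data_form}", color={color}];')
    for src, dst in zip(flow, flow[1:]):
        lines.append(f"  n{src.step} -> n{dst.step};")
    lines.append("}")
    return "\n".join(lines)

print(to_dot(AMBIENT_SCRIBE_FLOW))  # paste the output into any Graphviz viewer
```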
Question 2: Is patient data used for model training?
If “yes,” your patients’ conversations may be improving a commercial AI model that serves thousands of other organizations. Is that disclosed? Consented? Legal? Many vendor agreements include broad language granting rights to use “de-identified” data for model improvement — but the de-identification methods are rarely specified, and re-identification risk is real.
Action: Review every AI vendor agreement for training data clauses. Demand explicit opt-out where available. If the vendor won’t commit to zero training data usage in writing, escalate to legal.
Question 3: What data is retained, and for how long?
AI vendors often retain data for “service improvement” indefinitely. What does “retained” actually mean? Is it the raw audio? The transcript? The model weights trained on your data? What’s the actual retention period, and can you request deletion? Most vendors bury retention terms in supplementary data processing agreements that nobody reads.
Action: Obtain written retention schedules from every AI vendor. Verify deletion capabilities. Include retention limits in your BAA and data processing agreement.
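As a sketch of what “verify” can look like in practice, assuming you have transcribed each vendor’s stated retention terms into machine-readable form (the categories and ceilings below are illustrative, not prescriptive):

```python
from datetime import timedelta

# Hypothetical policy ceilings per data category; set these from your own DPA.
RETENTION_LIMITS = {
    "raw_audio": timedelta(days=0),   # deleted immediately after inference
    "transcript": timedelta(days=30),
    "logs": timedelta(days=365),
}

def retention_violations(vendor_terms: dict) -> list:
    """Flag every vendor retention term that exceeds the policy ceiling."""
    return [
        category for category, kept in vendor_terms.items()
        if kept > RETENTION_LIMITS.get(category, timedelta(days=0))
    ]

# Example: a vendor DPA that keeps transcripts for two years.
print(retention_violations({"raw_audio": timedelta(0), "transcript": timedelta(days=730)}))
# -> ['transcript']
```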
Question 4: Who reviews AI output before it enters the medical record?
Auto-generated clinical notes must be reviewed by a provider before they become part of the medical record. But “must” and “does” are different things. When the AI generates a note that looks correct and the provider is 12 patients behind schedule, how thorough is that review? What if the AI hallucinates a medication, a diagnosis, or a procedure that never happened?
Action: Establish a mandatory human review policy for all AI-generated clinical content. Define who is responsible, what constitutes adequate review, and what the liability framework looks like when AI output contains errors.
Question 5: What security and contractual assurances does the vendor provide?
SOC 2 Type II? HIPAA BAA that explicitly covers AI processing? Sub-processor disclosure? Data processing agreement with clear terms? Many AI features are provided by sub-processors that your primary telehealth vendor has engaged — vendors you’ve never evaluated, never signed a BAA with, and may not even know exist.
Action: Demand a complete sub-processor list from your telehealth vendor. Verify BAA coverage for every entity that touches patient data. Require SOC 2 Type II reports from AI vendors and review them annually.
This is the question that separates governance-mature organizations from everyone else: Does your AI vendor use your patients’ data to train its models?
The answer is more nuanced than “yes” or “no,” and most vendor agreements are deliberately ambiguous. Understanding the difference between data usage categories is essential for any practice using AI in clinical workflows.
AI vendors use patient data in three distinct ways, and each category has fundamentally different governance implications:
Category 1: Inference-Only Processing
Definition: Data is used solely to process your specific request. The patient’s audio goes in, the transcript comes out, and the data is deleted after processing. The model itself does not change or learn from your data.
Governance implication: Lowest risk category, but still requires BAA coverage, encryption in transit, and verification that deletion actually occurs. “Inference only” must be a contractual commitment, not a marketing claim.
Category 2: Fine-Tuning on Your Data
Definition: Your data is used to adjust model parameters — improving the model’s performance for specific tasks. The model “learns” from your patients’ conversations and retains that learning permanently. Your data shapes a model that serves other customers.
Governance implication: Significant risk. Patient data is embedded in model weights that cannot be “deleted” in any meaningful sense. Once a model is fine-tuned on your data, the only way to remove its influence is to retrain the model from scratch — which no vendor will do for a single customer.
Category 3: Data Retention for Future Training
Definition: Data is stored indefinitely and may be used for training future model versions. Your patients’ conversations are sitting in a training dataset, waiting to be incorporated into the next model release.
Governance implication: Highest risk. This is the category most vendor agreements default to unless you explicitly opt out. The data is stored, potentially indefinitely, in an environment you don’t control. It may be combined with data from other organizations. And once it’s used for training, it’s irreversible. Look for this language: “to improve our services,” “for research and development,” “to enhance model performance.”
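One way to keep reviewers honest about which category a vendor actually falls into is to encode the taxonomy as a shared vocabulary. A minimal sketch, with risk notes summarizing the discussion above:

```python
from enum import Enum

class DataUsage(Enum):
    """The three categories above, as a vocabulary for vendor reviews."""
    INFERENCE_ONLY = "inference-only processing"
    FINE_TUNING = "fine-tuning on your data"
    RETAINED_FOR_TRAINING = "data retention for future training"

# Governance posture per category, summarizing the discussion above.
RISK_NOTES = {
    DataUsage.INFERENCE_ONLY: "lowest risk; still requires a BAA and verified deletion",
    DataUsage.FINE_TUNING: "significant risk; influence on model weights is irreversible",
    DataUsage.RETAINED_FOR_TRAINING: "highest risk; the contractual default unless you opt out",
}

for usage, note in RISK_NOTES.items():
    print(f"{usage.value}: {note}")
```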
What to look for in contracts: Search every AI vendor agreement for these terms: “model improvement,” “service enhancement,” “de-identified data,” “aggregate data,” “research purposes,” and “training data.” If any of these appear without an explicit opt-out mechanism, your patients’ data is likely being used to train a commercial AI model.
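A crude first pass can be automated. This sketch scans exported contract text for the phrases above; the filename is hypothetical, and a hit means “send to legal,” not “proven violation”:

```python
import re

# The phrases listed above, plus room for terms from your own agreements.
RED_FLAGS = [
    "model improvement", "service enhancement", "de-identified data",
    "aggregate data", "research purposes", "training data",
    "improve our services", "research and development", "enhance model performance",
]

def scan_agreement(text: str) -> list:
    """Return every red-flag phrase found in the contract text, case-insensitively."""
    return [phrase for phrase in RED_FLAGS
            if re.search(re.escape(phrase), text, re.IGNORECASE)]

# Hypothetical usage: export the agreement to plain text first.
with open("vendor_agreement.txt") as f:
    hits = scan_agreement(f.read())
if hits:
    print("Escalate to legal. Found:", ", ".join(hits))
```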
No AI feature should touch patient data without completing this evaluation. The AI Feature Intake Form ensures that every new AI capability is assessed for privacy, security, and operational risk before deployment. This is not optional governance — it is the minimum standard for any practice introducing AI into clinical workflows.
| Evaluation Criteria | Details Required | Responsible Party |
|---|---|---|
| Feature Name & Vendor | What AI feature is being introduced? Which vendor provides it? Is it a primary vendor or sub-processor? | Requestor |
| Data Flow Map | Where does patient data go? List every system, vendor, and infrastructure component in the pipeline from data capture to final storage/deletion. | IT / Security |
| PHI Involvement | Does the feature process, store, or transmit PHI? What types of PHI (audio, text, clinical notes, billing data)? At what volume? | Compliance |
| Training Data Usage | Does the vendor use patient data for model training? Can it be opted out? Is the opt-out contractually binding? What data is retained post-inference? | Legal / Compliance |
| Vendor Security Posture | SOC 2 Type II report current? BAA in place that explicitly covers AI processing? Sub-processor disclosures provided? Penetration testing results available? | Security / Legal |
| Data Retention | How long does the vendor retain data? What categories (raw input, processed output, logs)? Can data be deleted on request? Is deletion verifiable? | Compliance |
| Patient Consent | Does current patient consent cover this AI feature? Is additional disclosure needed? How will patients be informed that AI is processing their visit data? | Legal / Clinical |
| Human Review Requirement | Who reviews AI output before it becomes part of the medical record? What is the review standard? What happens when errors are found? | Clinical Leadership |
| Accuracy / Hallucination Risk | Has the AI output been validated for clinical accuracy? What error rate is acceptable? How are hallucinations detected and corrected? Is there an audit trail? | Clinical Leadership |
| Operational Ownership | Who owns this feature going forward? Who monitors it for performance and compliance? Who has authority to disable it if issues arise? | Practice Manager |
| Risk Assessment Summary | Overall risk rating (Low / Medium / High) based on data sensitivity, vendor maturity, patient impact, and regulatory exposure. Conditions for approval documented. | Security / Fractional CISO |
| Approval | Sign-off by clinical lead, compliance officer, and security. No AI feature goes live without documented approval from all three parties. | All Parties |
How to use this form: Every AI feature — whether new or discovered already running — must go through this intake process. For features already in production, conduct the evaluation retroactively and document the gaps. For features that cannot pass evaluation, develop a remediation plan with a timeline or disable the feature until it can meet the standard.
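If intake lives in a ticketing system or even a simple script, the form’s sign-off rule can be enforced in code instead of by memory. A sketch with illustrative field names, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class AIFeatureIntake:
    """A machine-readable cut of the intake form above (fields abbreviated)."""
    feature: str
    vendor: str
    data_flow_mapped: bool = False
    baa_covers_ai_processing: bool = False
    training_opt_out_in_writing: bool = False
    retention_schedule_on_file: bool = False
    patient_disclosure_updated: bool = False
    human_review_policy_defined: bool = False
    approvals: set = field(default_factory=set)  # e.g. {"clinical", "compliance", "security"}

    def may_go_live(self) -> bool:
        """Enforce the form's rule: every control in place, plus sign-off
        from clinical, compliance, and security."""
        controls = all([
            self.data_flow_mapped, self.baa_covers_ai_processing,
            self.training_opt_out_in_writing, self.retention_schedule_on_file,
            self.patient_disclosure_updated, self.human_review_policy_defined,
        ])
        return controls and {"clinical", "compliance", "security"} <= self.approvals
```

A feature discovered already in production starts with every control flag false; `may_go_live()` stays false until the retroactive evaluation closes each gap and all three parties sign off.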
A quick self-assessment:
- Can you name every AI feature currently active in your telehealth stack?
- Do you know whether your AI vendor uses patient data for model training?
- Does your patient consent form disclose AI processing of their visit data?
- Who reviews AI-generated clinical notes before they become part of the medical record?
- Have you mapped the data flow for your ambient scribe or AI transcription service?
- Does your BAA with the telehealth platform cover AI sub-processors?
If you answered “no” or “I don’t know” to any of these questions, your AI governance has gaps that create regulatory, legal, and patient safety risk.
Venuto, J. (2026, February). The compliance trap: Why “HIPAA compliant” medical groups still get hacked. Security Medic Consulting Blog. https://sm911.github.io/CFS-2.0/govern/gv-rm/hipaa_compliance_trap.html
Venuto, J. (2026, February). Why HIPAA compliance isn’t enough: A NIST CSF 2.0 guide for telehealth. Security Medic Consulting Blog. https://sm911.github.io/CFS-2.0/govern/gv-rm/telehealth_nist_csf_hipaa_guide.html
Venuto, J. (2026, March). Protecting Hudson Valley patients: Why telehealth providers are moving from “checklist” to “governance.” Security Medic Consulting Blog. https://sm911.github.io/CFS-2.0/govern/gv-rm/telehealth_hudson_valley_governance.html
National Institute of Standards and Technology. (2024). Cybersecurity framework 2.0. https://www.nist.gov/cyberframework
U.S. Department of Health and Human Services. (2024). HIPAA security rule. https://www.hhs.gov/hipaa/for-professionals/security/index.html
AI features are already in your telehealth stack. The question isn’t whether to use them — it’s whether you’re governing them before they create liability you can’t unwind.
A focused evaluation of the AI features in your telehealth environment: data flow mapping, vendor risk assessment, training data analysis, consent gap identification, and a prioritized governance roadmap — tailored to your practice’s size and risk profile.
Hudson Valley CISO
A Division of Security Medic Consulting
Fractional CISO Services | AI Governance | Healthcare Security