Last quarter I conducted a technology risk assessment for a regional accounting firm in Westchester County. Sixty employees, three partners, a respectable managed service provider handling their infrastructure. The firm was pursuing a SOC 2 Type II attestation to satisfy a growing number of clients who required it. They had a current risk register. It covered their servers, their backup systems, their endpoint protection, their access controls. It did not mention AI anywhere. Not once.

Over the course of a two-week assessment, we identified fourteen AI-powered tools in active use across the firm. Tax preparers were using AI document extraction to process client records. The audit team had adopted an AI-assisted workpaper review tool through a vendor integration they hadn’t flagged to IT. Three different AI writing assistants were being used for client correspondence. Two partners were using AI transcription for client meeting notes. The marketing coordinator was generating content with a generative AI platform that retained all inputs for model training.

None of these tools appeared in the vendor inventory. None had been through the firm’s third-party risk assessment process. None were covered by the firm’s data handling policies. And critically, several were processing client financial data, personally identifiable information, and in at least one case, tax return data subject to IRS Publication 4557 safeguarding requirements. The risk register that the partners had signed off on described a firm that did not use AI. The firm that existed in reality was running fourteen AI tools with client data flowing through all of them.

The risk register is the document your auditors will examine. If it doesn’t reflect what’s actually happening in your organization, it isn’t a risk management tool. It’s a liability.

01

The Invisible Technology Layer

AI adoption in mid-market organizations follows a pattern that is fundamentally different from previous technology waves. Enterprise software required procurement approval, IT installation, and budget allocation. Cloud services at least appeared on credit card statements or subscription invoices. AI tools often require none of these. An employee signs up with a personal email, uses a free tier, and integrates it into their workflow within minutes. From the organization’s perspective, nothing happened. From a risk perspective, everything changed.

Gartner’s latest forecast projects that AI-related issues will drive half of all incident response efforts by 2028. That projection encompasses data exposure through AI tools, AI-generated content errors with legal or regulatory consequences, adversarial manipulation of AI systems, and the cascading failures that occur when AI-dependent processes break in ways that weren’t anticipated because the AI dependency wasn’t documented.

The accounting firm’s situation was not unusual. It was representative. Every mid-market organization I assess in 2026 has the same gap: the risk register describes the technology the IT department manages, and the employees have adopted an entire parallel layer of AI tools that IT doesn’t know about. The gap between those two realities is where the incidents will come from.

The Visibility Problem

AI tools often leave no footprint on managed endpoints. They run in browser tabs, operate through vendor integrations, and authenticate with personal credentials. Traditional IT asset management doesn’t see them.

The Classification Problem

Employees don’t think of many AI tools as “AI.” A document extraction feature embedded in their tax software, an AI-powered search in their email client, a smart template in their CRM—these don’t register as AI adoption in the employee’s mental model.

The Data Flow Problem

Every AI tool that processes client data creates an undocumented data flow. Where does the input go? Does the vendor retain it? Is it used for model training? These questions have regulatory answers that most organizations haven’t asked.

02

What the Auditors Will Ask

The accounting firm was pursuing SOC 2 Type II. Their auditor had already signaled that AI governance would be a focus area. The AICPA’s 2025 guidance on AI in the context of SOC engagements makes clear that organizations must identify, assess, and control risks associated with AI systems that process data relevant to the trust services criteria. If an AI tool processes data that is in scope for the engagement, the tool’s risks are in scope for the assessment—whether or not the organization documented it.

For organizations subject to other frameworks, the exposure is parallel. HIPAA-covered entities must account for all systems that process PHI, including AI tools employees adopted without authorization. PCI DSS requires an inventory of all systems that store, process, or transmit cardholder data—including AI-powered analytics tools that a marketing team may have connected to payment data. NYDFS 23 NYCRR 500 requires a current asset inventory and risk assessment that covers all information systems. An AI tool processing nonpublic personal information is an information system.

AI Tools We Commonly Find Undocumented

Meeting transcription: Records and transcribes client meetings. Audio and transcripts stored on vendor servers. Client names, financial details, strategic discussions all captured. No DPA in place.
Document extraction: Reads uploaded documents (tax returns, financial statements, medical records) and extracts structured data. Document content transmitted to vendor for processing. Retention policy unknown.
Email drafting assistants: Generate and edit client correspondence, reading the email thread for context. Client communications, including sensitive details, used as model input.
Vendor-embedded AI: Features within existing software that use AI to enhance functionality—smart search, automated categorization, predictive analytics. Often enabled by default in software updates without explicit user adoption.
AI coding/automation tools: Used by IT staff or power users to automate workflows. May process data from multiple systems, creating undocumented integration points.

03

From Invisible to Governed: A Practical Path

The accounting firm needed to move from “we don’t use AI” to “we know exactly what AI we use and how we govern it” before their SOC 2 audit. We did it in thirty days. The approach was the same one that works for any mid-market organization facing this gap: discover, classify, decide, document.

Week 1: Discovery. We ran a three-pronged discovery process. First, a technical scan: DNS logs and web proxy data for connections to known AI provider domains (OpenAI, Anthropic, Google AI, Microsoft Copilot, and approximately forty other services). Second, a department-by-department walkthrough with managers, asking not “do you use AI?” but “walk me through how you process a typical client engagement from intake to deliverable.” The AI tools surface in the workflow description even when people don’t think of them as AI. Third, a vendor inventory review: we examined every SaaS subscription and checked release notes for AI features added in the last twelve months.
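
The technical scan is nothing exotic. Here is a minimal sketch in Python of the DNS-log pass, assuming a CSV export with client_ip and query columns and an abbreviated, illustrative domain list (the real scan covered roughly forty services); adapt both to whatever your resolver or proxy actually emits:

```python
# Sketch of the DNS-log discovery pass. The log format and domain list
# are illustrative assumptions; extend both for a real assessment.
import csv
from collections import defaultdict

# Abbreviated example list; the real scan covered ~40 services.
AI_DOMAINS = {
    "openai.com", "chatgpt.com", "anthropic.com", "claude.ai",
    "gemini.google.com", "copilot.microsoft.com", "otter.ai",
}

def matches_ai_provider(query: str) -> str | None:
    """Return the provider domain a DNS query falls under, if any."""
    name = query.rstrip(".").lower()
    for domain in AI_DOMAINS:
        if name == domain or name.endswith("." + domain):
            return domain
    return None

def scan_dns_log(path: str) -> dict[str, set[str]]:
    """Map each matched provider domain to the internal clients that queried it.

    Assumes a CSV export with 'client_ip' and 'query' columns.
    """
    hits: dict[str, set[str]] = defaultdict(set)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            provider = matches_ai_provider(row["query"])
            if provider:
                hits[provider].add(row["client_ip"])
    return dict(hits)

if __name__ == "__main__":
    for provider, clients in sorted(scan_dns_log("dns_queries.csv").items()):
        print(f"{provider}: {len(clients)} internal client(s)")
```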

Week 2: Classification. For each discovered tool, we documented: what data it processes, where that data goes, whether the vendor retains inputs, whether the tool is used in a client-facing or decision-making capacity, and what the vendor’s data handling terms say. We tiered the tools by risk: Tier 1 (processes client PII or regulated data), Tier 2 (processes internal business data), Tier 3 (no sensitive data). Tier 1 tools got immediate attention. Tier 3 tools got documented and monitored.
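
To make the classification concrete, here is a sketch of what a per-tool record can look like, with the tiering rule encoded exactly as defined above. The record type and field names (AIToolRecord, processes_client_pii, and so on) are illustrative assumptions, not a prescribed schema:

```python
# Illustrative week-2 classification record. The tier property encodes
# the rule above: Tier 1 = client PII or regulated data, Tier 2 =
# internal business data, Tier 3 = nothing sensitive.
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    name: str
    data_processed: str            # e.g. "client meeting audio and transcripts"
    data_destination: str          # where inputs are transmitted
    vendor_retains_inputs: bool
    used_for_training: bool        # do vendor terms allow training on inputs?
    client_facing: bool
    processes_client_pii: bool     # includes regulated data (PHI, tax data, NPI)
    processes_internal_data: bool

    @property
    def tier(self) -> int:
        if self.processes_client_pii:
            return 1
        if self.processes_internal_data:
            return 2
        return 3

# The transcription service from the narrative, as a record:
transcriber = AIToolRecord(
    name="AI meeting transcription",
    data_processed="client meeting audio and transcripts",
    data_destination="vendor cloud storage",
    vendor_retains_inputs=True,
    used_for_training=True,
    client_facing=True,
    processes_client_pii=True,
    processes_internal_data=True,
)
assert transcriber.tier == 1  # Tier 1: immediate attention
```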

Week 3: Decision. For each Tier 1 tool: approve with controls, replace with an approved alternative, or prohibit. The partners made these decisions based on risk assessments we prepared—not blanket prohibition, but risk-informed governance. Two tools were approved with additional controls (DPA executed, data retention limits negotiated). One was replaced with a firm-provisioned alternative. The AI transcription service was prohibited after we discovered its terms allowed training on user inputs—a non-starter for client meeting content.
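
The decision logic reduces to a triage heuristic. The sketch below encodes the pattern from this engagement: training on client inputs is a non-starter, and retention without contractual limits means added controls. The actual decisions came from written risk assessments and partner judgment, not a script:

```python
# Triage heuristic mirroring the week-3 decision pattern described above.
# A sketch of the reasoning, not the process: the real calls were made by
# the partners from written risk assessments.
def recommend(tier: int, used_for_training: bool, vendor_retains_inputs: bool) -> str:
    if tier != 1:
        return "document and monitor"
    if used_for_training:
        # Vendor terms allow training on user inputs: unacceptable for client data.
        return "prohibit, or replace with a firm-provisioned alternative"
    if vendor_retains_inputs:
        return "approve with controls: execute a DPA, negotiate retention limits"
    return "approve with standard controls"

# The transcription tool from the narrative: Tier 1, trains on inputs.
print(recommend(tier=1, used_for_training=True, vendor_retains_inputs=True))
# -> prohibit, or replace with a firm-provisioned alternative
```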

Week 4: Documentation. We updated the risk register to include all fourteen AI tools and added AI governance controls to the firm’s information security policy. We also created a lightweight AI tool intake process (one-page request form, three-day turnaround) so employees have a path to request new tools without bypassing governance, and we briefed all staff on the new expectations.
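
One detail worth stealing: have the one-page form capture the same fields the classification step needs, so an approved request drops straight into the risk register. A minimal sketch, with assumed field names:

```python
# Sketch of a lightweight intake record behind the one-page request form.
# Field names are assumptions; the point is that the form asks the same
# questions the classification step needs.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class AIToolIntakeRequest:
    tool_name: str
    requested_by: str
    business_purpose: str
    data_to_be_processed: str      # drives the tier assignment on review
    vendor_terms_url: str          # reviewer checks retention and training clauses
    submitted: date = field(default_factory=date.today)

    @property
    def decision_due(self) -> date:
        # Three-day turnaround per the intake SLA (calendar days, for simplicity).
        return self.submitted + timedelta(days=3)

req = AIToolIntakeRequest(
    tool_name="AI document summarizer",
    requested_by="audit associate",
    business_purpose="summarize engagement workpapers",
    data_to_be_processed="client financial statements",
    vendor_terms_url="https://example.com/terms",
)
print(f"Decision due by {req.decision_due}")
```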

The 30-Day AI Governance Sprint

Week 1, Discover: technical scans, department walkthroughs, and a vendor inventory review to surface every AI tool in use.
Week 2, Classify: document each tool’s data flows and vendor terms, then tier by the sensitivity of the data it processes.
Week 3, Decide: approve with controls, replace, or prohibit, based on written risk assessments.
Week 4, Document: update the risk register, add policy controls, stand up the intake process, brief staff.

04

The Competitive Advantage of Getting This Right

The accounting firm’s partners initially viewed the AI governance sprint as a compliance cost, something they had to do for the SOC 2 audit. Their perspective shifted when two things happened. First, a competing firm in their market lost a client engagement after failing to answer due diligence questions about AI data handling during a proposal process. The prospect specifically asked whether the firm’s AI tools were documented and governed. The competing firm couldn’t answer. Second, one of the accounting firm’s own clients, a financial services company, sent an updated vendor questionnaire that included six new questions about AI governance. Having just completed the sprint, the firm answered every question definitively. The client told them they were one of only three vendors out of twenty who could.

This is the shift that mid-market organizations need to understand about AI governance. It is not purely a defensive compliance exercise. Clients, prospects, and regulators are beginning to differentiate between organizations that govern their AI use and those that don’t. Being able to demonstrate a current AI inventory, documented data flows, and a functioning governance process is becoming a competitive qualifier—not just a compliance checkbox.

The organizations that treat AI governance as a competitive advantage—not a compliance burden—are the ones winning proposals, retaining clients, and building trust in a market where trust is increasingly scarce.

The fourteen AI tools the accounting firm was using weren’t the problem. Several of them delivered genuine productivity gains that the firm wanted to keep. The problem was that no one had evaluated them, documented them, or established the controls necessary to use them responsibly. Thirty days of focused work transformed “we don’t use AI” into “we use AI deliberately, with documented governance, and we can prove it.” That transformation is available to any organization willing to look honestly at what’s already happening inside its walls.

References

Gartner. (2025). Predicts 2026: AI Will Drive Half of Incident Response Efforts by 2028.

AICPA. (2025). Guidance on AI Considerations in SOC Engagements.

Infosecurity Magazine. (2026, March 23). Most Cybersecurity Staff Don’t Know How Fast They Could Stop a Cyber-Attack on AI Systems.