Imagine signing off on an update to your customer support portal and later discovering that an AI agent the development team added to the portal has been quietly sending snippets of support tickets to an unsecured LLM for processing. Or picture a mid-level analyst pasting a spreadsheet of customer emails into a consumer chatbot because they wanted a quick summary. These aren’t far-fetched hypotheticals; they are the ordinary, unnerving moments that happen when organizations treat AI like another app instead of an ecosystem with its own behaviors, dependencies, and legal gravity.
I typically wear two hats when thinking about AI: a lawyer’s and a nascent technologist’s (smile). That double view makes a simple, uncomfortable truth clear: uncertainty about what AI is doing is not an abstract governance problem. It shows up in contracts, litigation, audit trails, and the emergency-response war room when something goes wrong.
The technology side is prodigiously creative. Vibe coding tools turn a human brainstorm into runnable code in seconds. Agentic systems – AI that can plan, act, and follow up across APIs – promise to automate complex workflows once confined to whiteboard dreams. These advances have real business value: faster iteration, new product features, and smarter internal processes. But with that value comes opacity. Where code has visible commit history and human authors, models have prompt histories, training data provenance (when you can get it), and emergent behaviors that defy simple explanation.
From a legal and risk perspective, opacity matters because law and liability are built on causal connections: who did what, when, and why? When an AI recommends the wrong diagnosis, misallocates funds, or invents a reference and someone relies on it, courts and regulators will want the causal chain. If your engineering team can’t trace the chain because the model is a black box, you have a potentially very expensive explanation problem on your hands.
The Long List of Potential Harms
There are several flavors of potential harm to imagine. First, data leakage: consumer models, unless explicitly restricted, can absorb prompts in ways that make proprietary or personal data part of their training signals. Second, hallucination: models present falsehoods with the confidence of authority. That can be a reputational grenade for a brand or a compliance violation in regulated sectors. Third, insecure outputs: vibe-coded snippets might include copied dependencies or weak authentication logic. Fourth, autonomous action: agentic systems operating without constraints can escalate a mistake across multiple systems faster than an ops team can respond.
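For the technically inclined, a minimal sketch can make the data-leakage point concrete: a small redaction gate that scrubs obvious identifiers from a prompt before it ever leaves your environment. The regex patterns and the send_to_external_model stub below are hypothetical placeholders, not any vendor’s actual API, and a handful of regular expressions is no substitute for a vetted data-loss-prevention control.

```python
import re

# Illustrative patterns only; real redaction needs vetted PII detection,
# not a handful of regular expressions.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def redact(text: str) -> str:
    """Replace obvious identifiers with placeholders before the prompt
    crosses the organizational boundary."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text


def send_to_external_model(prompt: str) -> str:
    # Stub standing in for whichever vendor SDK your team actually uses.
    raise NotImplementedError


def ask_external_model(prompt: str) -> str:
    # Data is filtered deliberately on the way out, not cleaned up after the fact.
    return send_to_external_model(redact(prompt))
```

The broader point stands regardless of the specific patterns: data should be filtered deliberately before it leaves your environment, not explained away afterward.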
None of those outcomes requires bad intentions. They can, and often do, arise from misaligned incentives, hurried deployments, or simple ignorance. A product manager wants a faster UX. An engineer wants to ship. A business leader wants cost savings. These are reasonable intentions, but when the “what” and the “who” of an AI’s behavior are unknown, good intentions can produce bad outcomes.
So, what does it feel like to encounter that unknown? To a lawyer, it looks like a contract missing critical language: who can use the service, whether customer data may be used for training, and what audit rights exist. To a security officer, it looks like telemetry that stops at a vendor boundary, leaving blind spots in the company’s threat surface. To a product owner, it looks like a feature that cannot be credibly explained to customers or auditors.
Accountability Is Key
The real management problem here is accountability. Namely, who owns that AI? In many organizations, ownership is diffuse. Marketing adopts a new assistant without IT knowing. The customer success team experiments with a vendor tool. Procurement signs a subscription without reading the data use provisions. Adversarial actors and regulatory regimes don’t care about internal silos. They operate on outcomes.
That’s why thinking about AI integration means thinking about lines of authority and the contours of risk as much as it means thinking about model architecture. You don’t need to be a machine-learning engineer to ask clarifying questions that matter. Ask not only what the model does, but what it is allowed to do, what data it touches, whether it learns from interactions, and how those interactions are logged. Ask whether outputs are auditable and whether the vendor will allow a third-party review under NDA. Ask whether the system can be contained, and whether there is a clear “off” button if the AI’s behavior becomes unacceptable.
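For what it’s worth, the “allowed to do” and “off button” questions translate almost directly into code. Here is a minimal sketch, assuming a hypothetical agent with made-up tool names and a simple feature flag standing in for a real kill switch; it illustrates the pattern, not anyone’s production design.

```python
import logging

logger = logging.getLogger("agent_gate")

# Explicit allow-list: anything not named here is denied by default.
ALLOWED_ACTIONS = {"search_kb", "draft_reply"}  # hypothetical tool names

# The "off" button: in practice this would come from a config service or
# feature flag that the ops team can flip without redeploying anything.
AGENT_ENABLED = True


def execute_action(action: str, payload: dict, tools: dict):
    """Run an agent-requested action only if the agent is switched on and the
    action is on the allow-list, logging every decision for the audit trail."""
    if not AGENT_ENABLED:
        logger.warning("Agent disabled; refused action %s", action)
        raise RuntimeError("Agent is currently switched off")
    if action not in ALLOWED_ACTIONS:
        logger.warning("Blocked unapproved action %s", action)
        raise PermissionError(f"Action not permitted: {action}")
    logger.info("Executing %s with payload keys %s", action, list(payload))
    return tools[action](**payload)
```

The value of the deny-by-default posture is that new capabilities have to be consciously granted, which is exactly the conversation about authority and risk this section is arguing for.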
There are emotional and cultural dimensions too. Teams that celebrate speed above all else will see AI as a productivity hack rather than a durable system requiring maintenance, monitoring, and policy. Engineering teams that prize experimentation may resist constraints that feel like red tape. Legal teams that prioritize risk avoidance can be seen as brakes on innovation. None of these responses is wrong in isolation; the challenge is aligning them so that the organization captures AI’s upside without being surprised by its downside.
The future will increasingly be shaped by hybrid realities: customers won’t tolerate opaque decisioning in some contexts and will demand flawless UX in others. Regulators are moving from curiosity to scrutiny. Insurance markets will want traceable controls before underwriting AI-driven risks. All of those forces favor visibility and intentionality.
A Call to Intentionality
This is not a call to paralysis; it is a call to intentionality and transparency. The point isn’t to slow innovation with exhaustive processes; it’s to develop a habit of understanding and questioning. Before you sign off on a new AI integration, consider three preliminary questions, not as a checklist to be gamed but as prompts to shape the conversation across legal, security, product, and procurement:
- Who, precisely, is accountable if this system behaves badly?
- What data does this system see, and what happens to that data after it leaves our environment?
- How will we know, quickly and reliably, that the AI has gone off course?
Those questions (and others) will push teams toward the kind of transparency that scales: model provenance, prompt and output logging, clear contractual terms about data use and ownership, and operational readiness for containment and remediation. They don’t require solving explainability for every neural weight, but they do require practical mechanisms that let a human tell a credible story about an AI’s lifecycle.
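To show what “practical mechanisms” can mean in the smallest possible terms, here is a sketch of prompt-and-output logging with provenance metadata. It assumes a local JSON-lines audit file and a hypothetical call_model stub; a real deployment would ship these records to a proper logging pipeline and think carefully about what gets stored.

```python
import json
import time
import uuid
from pathlib import Path

AUDIT_LOG = Path("ai_audit_log.jsonl")  # in practice, ship records to your log pipeline


def call_model(prompt: str) -> str:
    # Stub standing in for whichever model or vendor SDK you actually use.
    raise NotImplementedError


def logged_completion(prompt: str, model_id: str, user: str) -> str:
    """Record who asked what, of which model, when, and what came back, so a
    human can later tell a credible story about the exchange."""
    record = {
        "request_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "user": user,
        "model_id": model_id,   # provenance: which model and version answered
        "prompt": prompt,       # consider redacting before you log it
        "output": call_model(prompt),
    }
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return record["output"]
```

None of this solves explainability, but it gives the humans in the room a timestamped record to reason from when something goes wrong.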
In the end, asking “Do you know what your AI is doing?” is less about fear and more about stewardship. We are building systems that will act on behalf of organizations, affect customers, and touch privacy and safety in ways we are only beginning to map. If you want those systems to be trustworthy – legally defensible, operationally reliable, and ethically sound – start by admitting what you do and don’t know. Curiosity, paired with disciplined inquiry, is the minimal competence any organization should ask of itself in an era when an assistant can code, an agent can act, and a single prompt can ripple through your business in minutes.
That combination of curiosity and disciplined inquiry is the beginning of accountability. It’s also the beginning of better outcomes for our enterprises and for our customers.
Take care out there!
***
A Disclaimer – Please do not consider this to be legal advice, but the musings of someone who frequently thinks about these types of issues. All situations are different, and you should always consult your own legal counsel.