Data handling is usually the first unresolved question.
AI tool due diligence
EvidenceOps reviews AI vendors across data handling, training use, retention, model providers, controls, workflow dependency, pricing, and unresolved verification questions.
Why AI tools feel different
The tool may look harmless until customer data, employee notes, confidential files, or operational decisions start flowing through it.
Teams need to know what is stored, what is used for training, where data goes, and who the underlying processors are.
An AI assistant for drafts has a different risk profile than an AI tool shaping customer, finance, legal, or HR workflows.
AI tools often enter through small pilots. The risk grows when informal usage becomes operational reliance.
Evidence layer
The review maps vendor claims to public documents, trust-center evidence, privacy terms, product docs, and verification questions. It covers:
- Whether customer inputs, outputs, files, or logs may train models
- How long data is stored and whether deletion is documented
- Underlying model providers, subprocessors, hosting regions
- SSO, admin permissions, audit logs, workspace settings
- Human-in-the-loop needs, error exposure, reliance level
- Allowed use cases, blocked use cases, vendor questions
Decision logic
The useful answer is often conditional: which use cases are allowed, which data is excluded, and which controls must be verified first.
The same vendor can be acceptable for low-risk drafts and unacceptable for sensitive customer records.
If the vendor says data is protected, the review checks where that is stated and what remains ambiguous.
The brief turns open questions into vendor checks, pilot tests, and internal policy constraints.
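The conditional answer described above can be sketched as a small policy table keyed by use case and data class. The vendor name, data classes, and verdicts here are illustrative, not taken from any real review:

```python
# Conditional approval sketch: the same vendor can be acceptable for
# low-risk drafts and unacceptable for sensitive records.
POLICY = {
    # (vendor, data_class) -> verdict
    ("draftbot", "public_marketing"): "allowed",
    ("draftbot", "customer_records"): "blocked",
    ("draftbot", "internal_notes"): "verify_first",  # pending vendor checks
}


def check(vendor: str, data_class: str) -> str:
    # Default-deny: any combination not explicitly reviewed is blocked.
    return POLICY.get((vendor, data_class), "blocked")


print(check("draftbot", "public_marketing"))  # allowed
print(check("draftbot", "finance_data"))      # blocked (never reviewed)
```

The default-deny lookup reflects the logic of the brief: approval is scoped to specific use cases, and anything outside the reviewed set stays blocked until verified.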
Starter checklist
Before a pilot expands, ask:
- What customer or employee data will the tool store, and for how long?
- Are inputs, outputs, files, or logs used to train models?
- Who are the underlying model providers and subprocessors, and where is data hosted?
- Does the workspace support SSO, admin permissions, and audit logs?
- Which use cases are allowed, which are blocked, and where is a human in the loop?
These questions create the moment of useful friction before a promising AI tool becomes an uncontrolled process dependency.
EvidenceOps