AI tool due diligence

Before an AI tool enters the workflow, know what risk enters with it.

EvidenceOps reviews AI vendors across data handling, training use, retention, model providers, controls, workflow dependency, pricing, and unresolved verification questions.

Why AI tools feel different

AI vendor risk is often hidden inside workflow convenience.

The tool may look harmless until customer data, employee notes, confidential files, or operational decisions start flowing through it.

Data handling is usually the first unresolved question.

Teams need to know what is stored, what is used for training, where data goes, and who the underlying processors are.

Output risk depends on the workflow.

An AI assistant for drafts has a different risk profile than an AI tool shaping customer, finance, legal, or HR workflows.

Policy fit matters before adoption spreads.

AI tools often enter through small pilots. The risk grows when informal usage becomes operational reliance.

Evidence layer

What gets checked before an AI tool rollout.

The review maps vendor claims to public documents, trust-center evidence, privacy terms, product docs, and verification questions.

Training use

Whether customer inputs, outputs, files, or logs may train models

Retention

How long data is stored and whether deletion is documented

Model chain

Underlying model providers, subprocessors, hosting regions

Controls

SSO, admin permissions, audit logs, workspace settings

Workflow risk

Human-in-the-loop needs, error exposure, reliance level

Rollout conditions

Allowed use cases, blocked use cases, vendor questions

Decision logic

A good AI tool decision is rarely a simple yes or no.

The useful answer is often conditional: which use cases are allowed, which data is excluded, and which controls must be verified first.

Define the allowed use case

The same vendor can be acceptable for low-risk drafts and unacceptable for sensitive customer records.

Separate vendor claims from policy assumptions

If the vendor says data is protected, the review checks where that is stated and what remains ambiguous.

Translate risk into rollout conditions

The brief turns open questions into vendor checks, pilot tests, and internal policy constraints.
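One way to picture a conditional decision is as a structured record rather than a binary flag. The sketch below is illustrative only; the field names are assumptions, not an EvidenceOps schema:

```python
from dataclasses import dataclass, field

@dataclass
class RolloutDecision:
    """Conditional approval: allowed uses, excluded data, open checks."""
    vendor: str
    allowed_use_cases: list[str] = field(default_factory=list)
    blocked_use_cases: list[str] = field(default_factory=list)
    excluded_data: list[str] = field(default_factory=list)
    verify_before_rollout: list[str] = field(default_factory=list)

    def is_unconditional(self) -> bool:
        # A decision is only a plain "yes" once nothing remains to verify.
        return not self.verify_before_rollout

# Hypothetical vendor and conditions, for illustration only.
decision = RolloutDecision(
    vendor="ExampleAI",
    allowed_use_cases=["internal drafts"],
    blocked_use_cases=["customer records", "HR files"],
    excluded_data=["PII", "contracts"],
    verify_before_rollout=["written training opt-out", "deletion SLA"],
)
print(decision.is_unconditional())  # False: two vendor checks still open
```

The point of the structure is that "approved" is the empty-verification case, not the default.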

Starter checklist

Five questions before approving an AI tool.

These questions create the moment of useful friction before a promising AI tool becomes an uncontrolled process dependency.

Data: What exact data categories may users enter?
Training: Are customer inputs or outputs used to train models?
Controls: Can admins enforce usage rules and audit risky behavior?
Vendor chain: Which subprocessors or model providers receive data?
Rollback: Can the team export, delete, or stop usage without disruption?
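The five questions can work as a simple approval gate: the tool passes only when every answer is documented. A minimal sketch, assuming a free-text answer per question (the keys are illustrative, not a fixed schema):

```python
# The five checklist areas; comments summarize the question each covers.
CHECKLIST = [
    "data",          # exact data categories users may enter
    "training",      # whether inputs/outputs train models
    "controls",      # admin enforcement and audit logging
    "vendor_chain",  # subprocessors / model providers receiving data
    "rollback",      # export, delete, or stop usage without disruption
]

def open_questions(answers: dict[str, str]) -> list[str]:
    """Return checklist items that still lack a documented answer."""
    return [q for q in CHECKLIST if not answers.get(q, "").strip()]

# Two answers documented, three still open.
answers = {
    "data": "non-sensitive drafts only",
    "training": "opt-out confirmed in the DPA",
}
print(open_questions(answers))  # ['controls', 'vendor_chain', 'rollback']
```

An empty `open_questions` result is the "useful friction" cleared, not a substitute for the review itself.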

EvidenceOps

When a vendor decision needs to be defended internally, this is the moment.

Request scope