Lora Vaughn | Vaughn Cyber Group

Your AI Vendor Said Their Model Is Accurate, Explainable, and Compliant. Did They Prove It?


Tags: community-banks, ai-governance, compliance, vendor-selection

Every AI vendor demo looks great. The model catches the thing it’s supposed to catch. The dashboard is clean. The ROI slide is compelling.

What you don’t see in the demo is what the model misses, what it was trained on, how it behaves in your specific environment, and what happens when a regulator asks you to explain a decision it made.

Community banks are getting pitched AI tools right now at a pace that’s hard to keep up with. Fraud detection. Loan underwriting assistance. Anomaly monitoring. Customer service automation. The use cases are real and some of the tools are genuinely good.

But standard vendor due diligence wasn’t built for AI. The questions you’d ask a SaaS vendor about uptime, SOC 2 reports, and data retention don’t get you what you need when the thing you’re buying makes decisions you’re responsible for.

AI vendor due diligence is a different animal

When you buy a traditional security tool, the question is mostly: does it do what it says? You can test it. You can compare outputs. You can run a proof of concept and see whether it catches the threats it claims to catch.

AI is harder. The model’s behavior depends on what it was trained on, how it was validated, how often it’s updated, and how it performs specifically in your environment, with your data, against your threat profile. A model trained on fraud patterns from large regional banks may perform very differently at a $500M community bank with a different customer base and transaction mix.

The vendor’s accuracy numbers are real. They’re just not necessarily your accuracy numbers.
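
As a rough illustration of what “your accuracy numbers” means in practice, here is a minimal sketch of scoring a proof of concept against your own labeled history. It assumes you can export the vendor model’s decisions on a sample of your own past transactions; the file and column names are hypothetical.

```python
# Minimal sketch: compare a vendor model's flags against your own labeled history.
# Assumes a CSV export with hypothetical columns "actual_fraud" and "vendor_flagged" (0/1).
import csv

def evaluate_poc(path: str) -> dict:
    tp = fp = fn = tn = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            actual = row["actual_fraud"] == "1"
            flagged = row["vendor_flagged"] == "1"
            if flagged and actual:
                tp += 1          # fraud the model caught
            elif flagged and not actual:
                fp += 1          # good transactions it flagged anyway
            elif actual:
                fn += 1          # fraud it missed
            else:
                tn += 1
    return {
        "precision": tp / (tp + fp) if (tp + fp) else 0.0,
        "recall": tp / (tp + fn) if (tp + fn) else 0.0,
        "false_positive_rate": fp / (fp + tn) if (fp + tn) else 0.0,
    }

print(evaluate_poc("poc_results.csv"))
```

The code isn’t the point. The point is that precision, recall, and false-positive rate on your transactions are the numbers that matter, not the ones on the vendor’s slide.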

And unlike a firewall rule you can inspect, most AI models are not fully explainable. When a model flags a transaction, you may not be able to see exactly why. That matters when a customer disputes a decision. It matters more when an examiner asks you to walk them through your model risk management program.

What community banks need to ask

Before you sit through a demo, know what you’re evaluating. Here are the questions that matter.

What was the model trained on, and does it match your environment?

Ask for specifics. Not “financial services data.” What type of institutions? What asset sizes? What transaction volumes? What geography? If they can’t tell you, the accuracy numbers they’re showing you are aspirational, not predictive.

How do you handle model drift?

AI models degrade over time as the environment changes. Fraud patterns shift. Customer behavior shifts. A model that was accurate 18 months ago may have blind spots today. Ask how often the model is retrained, what triggers a retraining, and how they notify customers when model performance changes materially.
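
To make drift concrete, here is a minimal sketch of one common in-house check you can run regardless of what the vendor does: compare the distribution of model scores today against the distribution when you deployed. The file names, bin count, and alert threshold are assumptions for illustration.

```python
# Minimal sketch: population stability index (PSI) between scores at deployment
# and scores today. Bin count, threshold, and file names are illustrative assumptions.
import numpy as np

def psi(baseline_scores, current_scores, bins: int = 10) -> float:
    edges = np.histogram_bin_edges(baseline_scores, bins=bins)
    expected, _ = np.histogram(baseline_scores, bins=edges)
    actual, _ = np.histogram(current_scores, bins=edges)
    expected = np.clip(expected / expected.sum(), 1e-6, None)  # proportions, no zero bins
    actual = np.clip(actual / actual.sum(), 1e-6, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))

baseline = np.load("scores_at_deployment.npy")   # hypothetical score archives
current = np.load("scores_last_quarter.npy")
drift = psi(baseline, current)
# A PSI above roughly 0.25 is commonly treated as a material shift worth investigating.
if drift > 0.25:
    print(f"PSI {drift:.2f}: score distribution has shifted; review model performance.")
```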

What does explainability actually look like?

Don’t accept “explainable AI” as a checkbox. Ask them to show you. If the model flags a transaction, what can you produce to document why? Will that documentation satisfy an OCC or FDIC examiner? Will it hold up if a customer challenges the decision under fair lending requirements?
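
As a concrete target for that conversation, here is a minimal sketch of the kind of per-decision record you could require for every flagged transaction, whether the vendor produces it or your own integration does. Every field name and value here is a hypothetical placeholder; the point is that each decision can be reconstructed later for an examiner or a customer dispute.

```python
# Minimal sketch: a per-decision record archived for every flagged transaction.
# Field names and values are hypothetical placeholders.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    transaction_id: str
    model_name: str
    model_version: str
    score: float
    threshold: float
    decision: str
    top_factors: list[tuple[str, float]]   # which inputs pushed the score up, and by how much
    decided_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = DecisionRecord(
    transaction_id="TX-000123",
    model_name="vendor-fraud-model",
    model_version="2024.07",
    score=0.91,
    threshold=0.85,
    decision="flagged",
    top_factors=[("velocity_24h", 0.41), ("new_payee", 0.22)],
)
print(json.dumps(asdict(record), indent=2))   # store alongside the transaction record
```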

What’s your model risk management documentation?

Regulators have had guidance on model risk management since SR 11-7 came out in 2011. That guidance applies to models you buy, not just models you build. Ask the vendor what documentation they provide to support your model risk management program. If they look at you blankly, that’s your answer.

How do you handle adverse outcomes?

Every model makes mistakes. The question is what happens when it does. Ask the vendor to walk you through a real example where their model produced a harmful outcome and what they did about it. If they can’t answer this, they either haven’t had one (unlikely) or they don’t disclose them (a problem).

The regulatory angle you can’t ignore

Community banks face regular examination cycles without the dedicated model risk teams that larger institutions can put in front of examiners. Your regulators are going to ask about any AI tools you’re using. They’re going to want to see your model risk management documentation. They’re going to ask how you validated the model, how you monitor its performance, and how you ensure it isn’t producing discriminatory outcomes.

If your AI vendor can’t support those conversations with documentation, you’re taking on their liability. The vendor gets the contract. You get the MRA (Matter Requiring Attention).

Before you deploy any AI tool in a decision-making capacity, know what your examiner will ask and confirm your vendor can help you answer it. Not after the fact. Before you sign.

Start with requirements, not demos

The community banks that get this right aren’t the ones who watched the most demos. They’re the ones who built their requirements before they talked to vendors.

What decisions will this model touch? What regulatory requirements apply to those decisions? What do you need to document to satisfy examiners? What does performance monitoring look like in practice? What’s your exit plan if the model stops performing?

Answer those questions first. Then use the demos to see whether vendors can meet your requirements, not to figure out what to buy.

The AI tools in this space are genuinely useful. But useful, compliant, and appropriate for your specific institution are three different bars. Make vendors clear all three before you write the check.


Questions about AI vendor evaluation or model risk management at your institution? This is work I do with community banks. Let’s talk.

Need Help Getting Exam-Ready?

Vaughn Cyber Group helps community banks build security programs that satisfy examiners and actually protect your institution.

Consulting services are delivered through Vaughn Cyber Group.