Security is a Vendor Assessment Problem
Why Vendor Assessments Usually Suck and How We Can Do Them Better
In this post, I cover:
How companies tend to do security assessments of the vendors and technology they use
Why these security assessments usually suck for all parties involved
How we can do better, with a lot less time and effort
Programming Note: I had my honeymoon for most of September and spent a lot of October and early November moving, which limited my writing time. We’re now resuming our usual cadence of posts every 2-ish weeks.
Vendor Security Assessments
Organizations, especially your customers, sometimes feel the need to assess the security of their vendors. Often this is a requirement of their compliance frameworks, though sometimes it's a genuine desire to reduce vendor risk. This typically involves:
Asking for your compliance certifications and reports (and, occasionally, looking at them).
Asking you for various documents like your policies, system architecture, penetration test reports, etc.
Giving you a spreadsheet (or worse, a janky website) of security questions for you to answer.
Having calls with you to discuss any issues they perceived in your responses, or just to get all the answers on a call, because why be efficient with anyone's time?
Some companies do intense, in-depth self-audits of critical vendors, but that's probably not you, so it's not worth going into right now.
This has a lot of problems, has made a lot of people very angry, and is widely regarded as a bad move. There is plenty of nihilism in the vendor security space. The smart people at Latacora have written about this before. I'm sure some of us have seen the various posts that Daniel Miessler has written, arguing that you can't really evaluate the potential for compromise from a vendor, so you should assume they're all compromised, barely bother evaluating them, and try to reduce the blast radius.
I agree with that last step; assuming compromise should be part of how you evaluate vendors. But I also think it's limited: you can reduce the impact of many vendors, but you’ll always have core vendors where you can’t. And it removes security quality as a point of comparison in your selection process, which feels wrong; if security doesn't factor into purchasing decisions, why would companies pay me to secure their products well?
I'm not nearly as nihilistic. I think we can get some real value out of vendor assessments in much less time.
Why Vendor Assessments Suck
This could be its own series of blog posts, so quickly:
They’re Fakeable: You can say whatever you want on questionnaires or on calls. It's often faster and more effective to lie; explaining your nuanced reasons for not doing some common ask takes time, potentially several tedious calls with the prospect. Saying, “Yes, of course we do that," doesn't take any time at all. Even the audits and penetration tests are pretty gameable. Many auditors are “startup friendly”1 and want your money, so they’ll let you get by with things they shouldn’t. You control the scope you present to your auditors and testers.
They’re Low Signal: Even when you don't lie, the signal is low. Most questions are binary (yes or no), lack nuance, and often cover the same things your audit did (the entire CAIQ standard questionnaire can be answered by "Yes, we have a SOC 2").
They’re Performative: The reviewer isn’t getting signal from this, so they're probably just throwing the certificates and documentation in a folder somewhere to show their auditor that they manage vendor risk. The questions bloat because people want to be able to say they managed the risk of some scary new vuln by adding a question about it.2 This is why we still get Log4Shell questions in 2024.
They’re Time-Consuming: All that bloat and performative documentation takes a long time to write, and a long time for the buyer to analyze. Entire teams exist at larger companies to do the sales-side answering and the buy-side review. They could all be doing something more productive.
Assessment Goals
If I could have an oracle to tell me the truth about a vendor, I'd want to know:
Their security leadership is competent and driving things in the right direction, and their executive & engineering leadership is bought in on that direction
Their application and infrastructure, SDLC, and vulnerability management processes are mature enough to minimize the chance of vulnerable products
Their system design, detection, and response capabilities are enough to reduce the impact of an attack on their systems
Their internal processes protect the security and privacy of my data and systems from their employees
Sadly, we don't have that oracle. In the real world, we have constraints: we need to do vendor assessments quickly, and most of what we can check is easily faked. Here's what we can do:
A Lean Assessment Process
The ideal process for most companies looks like:
A short questionnaire
Look at their bug bounty program, if they have one
Examine their architecture, threat model, and other security collateral
Okay fine. Glance at their compliance reports and policies for a second or two
Some quick research on the company
You can't learn everything the perfect oracle can. You're restricted to determining if the security team seems competent, well-resourced, and listened to. You can only do so much here, so our process should be quick.
The Questionnaire
Yes, questionnaires are bad, but hear me out. This should be short. 10 questions max.
There are two goals here:
You assume they may lie to you, so you should ask questions that tell you something even when they do
Often, they won't, so you should do as Miessler quips and ask them if they're axe murderers
This is a bit of a job interview for their security team. I can't be sure they've done the right things, but if they can at least tell me what the right solutions are, that’s something. If they're smart enough (or at least spent all night frantically Googling) to handle my questions, that weeds out many real trash vendors. I ask questions looking for significant detail about how they handle their most likely risks, like the ones covered in the Latacora blog. Since it's short (10 questions or less), I can force significant detail from the vendor.
Here's a version that a helpful but defunct company put out, and I've used something similar at past companies.
Bug Bounty Program
Having a bug bounty program is a better signal than a compliance report or penetration test. It's one of the harder signals to fake. Major providers (Bugcrowd, HackerOne, Intigriti) tend to be better than smaller providers.
The payout structure and total payouts to date are strong indicators of program maturity. A company that offers $100k for high-severity vulns likely has far fewer of them than one that offers $1k, and it attracts better researchers to its program. Just looking at the Microsoft bug bounty programs shows the security maturity of each product: you can be more confident that Hyper-V has no obvious critical vulns at $250k payouts than you can be in .NET at $15k. Just make sure the scope covers the products you're using.
Architecture, Threat Model, Security Collateral, and Audit Reports
Here are the common types of security collateral in rough order of usefulness:
Threat Model: If they have one documented, this is valuable. Here, I mean both the internal threat model of their product and how they feel their product fits into my environment. I want to see what dangers they know about, and what they do to make them less scary. You can compare that with what you're afraid of, and if there's a significant deviation, you can flee.
Architecture Diagram: I mean a real architecture diagram; not something that shows a TLS connection from "user laptop" to "Vendor Cloud Application". You aren't looking for any security information here, though it can lead to useful follow-up questions. You're looking for a coherent, reasonable architecture, while something overly complex or out-of-date will be full of horrors.
Pen Test Reports: Some vendors will actually include their horrifying pen test reports full of non-remediated SQL injection vulns. May as well check.
Audit Reports: I guess make sure there's nothing distressingly silly in the exceptions noted part of the SOC 2? This has never killed a vendor for me, but some vendors have gotten rather weird about me asking, and that killed them.
Collateral that’s basically never useful:
Policy documents: These are the same set of 7 different templates from the big compliance vendors/consultants. Maybe their SDLC or Change Management policy is interesting, if it actually describes their build pipelines in detail. It probably doesn't.
Risk scoring from the likes of SecurityScoreCard, BitSight, and RiskRecon. These all deliver the same thing, which is a report that’s the same as looking at passive Burp Suite scans on the vendor’s login page. They can't do any real testing without getting very sued. Instead, they'll flag that there's a barely risky cipher suite on the careers website or something else pointless.
Security whitepapers: Those are often made by product marketing, not anyone technical. They usually just say “zero trust” over and over again. I have no idea who the intended audience is, but I'm distressed that they're in charge of something important.
Research About the Company
You're looking to determine if the vendor's senior leadership cares about security and empowers their security org. Specifically:
The company's security-oriented features: Mature security features like SSO/auth, access controls, audit logs, etc, indicate the security org at that company is strong. Especially if these features are in the base pricing tier, since this means the desire to make customers secure won out over wanting to take more of their money. Salesforce, AWS, and Google are good examples here. Almost everyone listed at https://sso.tax/ is a bad example.
Open-source code: Companies with open-source components tend to be better. Their code is exposed, so they have to fix their easy-to-find vulns because their customers will scan it themselves and yell at them. I find the type of developers who prefer an OSS-oriented company care more about security issues.3 If you're feeling really fancy, you can audit the code yourself.
C-Suite background: You want to see leaders who come from places that are serious about security. Leadership with a stronger technical bent tends to produce better security (and quality; look at what happened to Boeing once it was run by businessmen instead of engineers).
Security leader(s): Do they seem competent? Technical? Do they come from technical companies with good security programs? Do they have a nice blog?
Security team size and composition: How big is the team? Is it made up more of software engineers (good) or analysts (less good)?
Company outlook: Companies that are struggling are more likely to cut security corners, cut staff, etc. Bad earnings, hiring freezes, or layoffs are all indications to be wary.
Past breaches: Obviously, recent ones are bad, but ones further in the past that the company seems to have learned lessons from can be good.
Security blogs and community engagement: If the security team is active in the community, then they have the slack to solve their security problems. If it’s intelligent engagement, they’re probably smart.
Current open roles: You want to see the security team grow reasonably with the rest of the org. If they aren't, that indicates under-investment.
All but the last two of these are hard to fake.
Application
If the vendor is touching your customer data, production systems, significant business processes, or really confidential internal data like employee PII or maybe significant internal intellectual property4:
Do what I described above. Read their documentation, their answers, etc. Probably schedule a call or two to dive into the questions they glossed over.
Bless the vendor, or not. What counts as risky is a bit up to you and out of the scope of this blog. If it's iffy, work with the purchasing team to define how you can reduce the blast radius of a vendor compromise. If bad, block the vendor.
This should not take you more than 8 person-hours, ideally less, and only occur for a couple of vendors a year, otherwise you’ve scoped things wrong. It’s important this is short. This process is low signal; you’re probably only getting a few percent more information by spending 40 hours instead of 8, so spend that extra time on more productive things.
If the vendor doesn't touch any of those things
Just ask them for their security collateral, no questionnaires or anything.
Google them to see if they had a breach in the past couple of years.
If they weren't breached and have a SOC 2 or ISO 27001 or whatever, put the collateral in your folder of vendor reviews to show your auditors.
Approve them and don't spend a second more thinking about it. This shouldn’t take more than 1-2 hours. You can't spend significant time on the sprint retro tool your engineers want to use.
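The two-track process above can be sketched as a simple triage function. This is purely illustrative: the access categories, time budgets, and `Vendor` fields are my own hypothetical encoding of the rules above, not a standard taxonomy.

```python
# Hypothetical sketch of the triage described above: vendors touching
# sensitive data or systems get the full review; everything else gets
# the lightweight check. Categories and names are illustrative only.
from dataclasses import dataclass

SENSITIVE_ACCESS = {
    "customer_data",
    "production_systems",
    "significant_business_processes",
    "employee_pii",
}

@dataclass
class Vendor:
    name: str
    access: set  # what the vendor touches in your environment
    breached_recently: bool = False
    has_audit_report: bool = False  # SOC 2, ISO 27001, etc.

def triage(vendor: Vendor) -> str:
    """Return which review track a vendor lands in."""
    if vendor.access & SENSITIVE_ACCESS:
        # Full process: questionnaire, bug bounty check, collateral review.
        return "full review (budget ~8 person-hours)"
    if vendor.breached_recently:
        # A recent breach on a low-risk vendor still warrants a closer look.
        return "full review (budget ~8 person-hours)"
    if vendor.has_audit_report:
        # File the collateral for your auditors and move on.
        return "approve (1-2 hours max)"
    return "lightweight review"
```

The point of encoding it this way is that the expensive path should fire only a couple of times a year; if most vendors land in the full-review branch, your scoping is off.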
A couple of other general suggestions:
Try to block vendors only when absolutely necessary. Always prefer a way to reduce the risk of the vendor, rather than outright rejection.5 You don't want your vendor review to burn your political capital; you have so many better places to spend it, and vendor review is just too low signal to be the place to burn it.
When you do block them, make sure you have a way to explain why to the business in a way they can understand. Don’t be a chimp. You taking their joy away for incomprehensible reasons burns political capital like napalm.
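On reducing blast radius: footnote 5 suggests defining how a compromised vendor would abuse its access and alerting on that activity. A minimal sketch of that idea, assuming you have audit logs to query; the event shape, principal names, and action strings here are invented for illustration, and a real rule would run against your actual log pipeline (CloudTrail, workspace audit logs, etc.).

```python
# Hypothetical sketch: alert when a vendor's service principal performs
# an action outside the narrow set you expect from it. All names and
# the event format below are made up for illustration.

EXPECTED_VENDOR_ACTIONS = {
    "vendor-ci-bot": {"s3:GetObject", "s3:PutObject"},
    "vendor-support-role": {"support:DescribeCase"},
}

def suspicious_vendor_events(events):
    """Yield audit-log events where a known vendor principal acted unexpectedly."""
    for event in events:
        allowed = EXPECTED_VENDOR_ACTIONS.get(event.get("principal"))
        if allowed is not None and event.get("action") not in allowed:
            yield event

# Example: the second event looks like post-compromise privilege escalation.
logs = [
    {"principal": "vendor-ci-bot", "action": "s3:GetObject"},
    {"principal": "vendor-ci-bot", "action": "iam:CreateUser"},
]
alerts = list(suspicious_vendor_events(logs))
```

A rule like this is cheap to write per-vendor at onboarding time, and knowing it exists makes "reduce the blast radius" a concrete answer rather than a shrug.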
This doesn't solve everything. But it gets you a fair amount of signal and won't annoy everyone by blocking procurement. I’ve avoided several recent SaaS vendor compromises with this process. And most importantly, you won’t make all of your security peers across the industry miserable with your awful questionnaires.
I'd Love To Hear From You
Do you agree? Disagree? Intensely? Are there cool, clever things you’re doing to better assess your vendors and tools? Please leave a comment below; I'd love to hear it!
Startup-friendly is supposed to mean your auditors understand your fancy modern cloud & kubernetes systems, but it’s almost exclusively code for incredibly easy audits.
Some blame their compliance requirements, like SOC 2 and ISO, for the awful specifics of their vendor assessment process. This isn’t true! You can totally use the process I describe for any major standard.
I’m probably a bit biased since I’m currently at a very OSS-oriented company.
Your intellectual property probably isn’t that significant. The Windows source code leaks every few years and has had zero impact on Microsoft’s bottom line, so unless you’re in a specialized field, it won’t impact yours.
While you can always just reduce access, a robust detection & response program is very useful here. If you can, define how you’d exploit your vendors’ access and write detection rules for that activity. Knowing you can react swiftly to a vendor’s compromise is far better than seeing their SOC 2.