The minimum viable security posture for a 10-person SaaS
Seven controls that cover 90% of real breaches, in the order that removes risk fastest
Your first enterprise prospect sent over a security questionnaire. It's 47 questions long. The honest answer to about 35 of them is 'not yet.' The real question is which of those gaps in your security posture you actually need to close before you respond.
Most startup security guides are written for one of two audiences: companies preparing for SOC 2, or companies that just had a breach. At 10 people, you're in a third situation: real customers, real data, almost no security budget or headcount. The threat model is different too.
Seven controls cover most of the real risk at this size. None of them require a security hire.
The actual threat model at 10 people
The attacks that compromise small B2B software companies don't look like the ones in the movies. The most common vectors are credential stuffing (a bot tries every leaked email and password pair against your admin login until one works), secrets left in a git repository, misconfigured cloud storage that is accidentally public, direct dependencies with known CVEs that nobody updated, and phishing emails that capture credentials.
None of these require sophisticated attackers. None require zero-days. They are opportunistic, often automated, and low-cost to prevent.
The goal at 10 people is not an impenetrable perimeter. It is to avoid being the easiest target and to limit the blast radius when something does go wrong.
Seven controls, in the order that matters
This list is not ordered by what sounds most responsible in a board meeting. It is ordered by what removes risk fastest, for the specific threat model above.
| Control | Primary risk prevented | Setup effort | When |
|---|---|---|---|
| SSO + MFA everywhere | Account compromise via credential theft | 1–2 days | Day 1 |
| Secrets management | Leaked credentials from git or logs | Half day | Day 1 |
| Least-privilege IAM | Full compromise from one leaked key | 1 day | Week 1 |
| Dependency scanning | Supply chain compromise via known CVEs | 30 min (CI setup) | Week 1 |
| Encrypted, tested backups | Ransomware, accidental deletion, data loss | 1 day | Week 2 |
| One incident owner | Slow response compounds breach damage | Zero — just decide | Day 1 |
| Data inventory | Enables all other security decisions | Half day | Day 1 |
SSO and MFA — every identity provider, not just the obvious ones
The most common gap: MFA is on Google Workspace, but not on AWS, GitHub, or Stripe. Every identity provider is an attack surface. A credential phish that gets your AWS root account password causes significantly more damage than one that gets a Google password, because AWS has fewer downstream safeguards.
The checklist is short:
- MFA on AWS root and on every IAM user with console access
- Two-factor authentication required at the GitHub organisation level
- 2FA enabled for every team member with Stripe dashboard access
- The same for any SaaS that touches customer data or billing
The goal is to make credential theft alone insufficient. With the right password and no second factor, an attacker is stopped.
Secrets management from the start
The default workflow ends badly. A secret goes into a .env file, .env goes into .gitignore, and then one day someone adds it back, or creates a new .env.production, or includes a secret in a test fixture, or logs a stack trace with the connection string.
The fix: a secrets store before you need it. AWS Secrets Manager costs $0.40 per secret per month. GCP Secret Manager is free for the first six active secrets. The policy is simple: if it is a secret, it goes in the store. Not in environment variables in your CI/CD provider's UI. Not in comments. Not in any file that gets committed.
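As a quick sanity check alongside the dedicated scanners, a grep for one well-known key pattern catches the most blatant case. This sketch covers only AWS access key IDs; trufflehog and gitleaks detect far more credential types, and the key planted here is fake:

```bash
# Plant a fake AWS access key ID to show what a hit looks like
mkdir -p demo
echo 'AWS_KEY = "AKIAABCDEFGHIJKLMNOP"' > demo/settings.py

# AWS access key IDs start with AKIA followed by 16 uppercase/digit characters
grep -rEn 'AKIA[0-9A-Z]{16}' demo
```

A match here means a credential is sitting in a tracked file; rotate it first, then remove it from history.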
```bash
# Scan git history for secrets
trufflehog git file://. --only-verified

# Alternative with gitleaks
gitleaks detect --source .
```

Least-privilege IAM in your cloud
The pattern to fix: a single IAM user with AdministratorAccess that every developer and every service uses. When that credential leaks, everything is compromised at once.
Least-privilege at 10 people means three specific things. First, no developer has persistent console access to production — when they need it, they assume a role with a short expiry that requires MFA. Second, services have narrow roles: the service that reads from S3 can read from that specific bucket and nothing else. Third, no IAM access keys for programmatic access where IAM roles are available. Keys leak. Roles do not.
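A narrow service role looks like this in AWS policy JSON. This is a sketch, assuming a hypothetical bucket named example-uploads-bucket; the point is the absence of wildcards in both the actions and the resources:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyFromOneBucket",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-uploads-bucket",
        "arn:aws:s3:::example-uploads-bucket/*"
      ]
    }
  ]
}
```

If this credential leaks, the attacker can read one bucket, not delete your databases.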
Dependency scanning on every PR
Your application has 500 to 2,000 transitive dependencies. You are not reading their security advisories.
Dependabot on GitHub, Renovate, or Snyk's free tier will scan automatically and open PRs when a dependency has a known CVE. Setup is 30 minutes. The only discipline required is treating these PRs as real work, not as noise. A critical CVE in a direct dependency that sits open for two months is exactly how supply-chain incidents start at companies this size.
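Dependabot is configured with a single file in the repository. A minimal sketch, assuming an npm project (swap the ecosystem for pip, cargo, bundler, and so on):

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "daily"
```

Security-relevant PRs arrive automatically once this is merged; the daily schedule keeps version-bump noise batched and predictable.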
Encrypted backups you have actually restored
Almost every company has backups. Very few have tested the restore. These are different things.
The test does not need to be elaborate: once a quarter, take last night's backup, restore it to a staging environment, confirm the application comes up correctly. If this takes more than two hours, your recovery time is slower than you think.
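The drill can be sketched at the file level. A real database needs the engine's own tooling (pg_restore, point-in-time recovery, and so on), but the shape is the same: back up, restore to scratch space, verify the result matches. All paths here are illustrative:

```bash
set -e

# Stand-in for last night's data
mkdir -p data
echo "hello" > data/example.txt

# The "backup"
tar czf backup.tgz data

# Restore to a scratch area, never over production
mkdir -p restore
tar xzf backup.tgz -C restore

# Verify the restored copy matches byte for byte
diff -r data restore/data && echo "restore OK"
```

Time this end to end on your real backup. If the answer is "we are not sure how long it takes", that is the finding.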
Encryption matters separately: backups are often stored with broader access controls than production. An attacker who can exfiltrate your backup does not care that your production database is locked down.
One person owns incidents
Not a committee. One person, with a phone number, who has the authority to make calls — including the call to take the service offline.
The incident response plan for a 10-person company does not need to be a 40-page document. It needs to answer five questions: Who gets called first? Who can shut the service down? When do we notify customers? When are we legally required to notify regulators? Where is this document if we cannot access our normal tools?
That last question matters more than it seems. If your incident involves your SSO provider being compromised, you probably cannot log in to Notion to read the runbook.
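A sketch of what the one-pager can look like; every bracketed value is a placeholder to fill in for your own team, and the regulatory line needs checking against your actual jurisdiction:

```text
INCIDENT ONE-PAGER (keep a printed or offline copy)
1. First call:              [name], [phone]
2. Can shut service down:   [name], authority pre-granted
3. Customer notification:   within [N] hours of confirmed data exposure
4. Regulator notification:  [per applicable law, e.g. GDPR's 72-hour rule; confirm with counsel]
5. This document lives at:  [offline location, not behind SSO]
```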
A data inventory — the control that enables everything else
You cannot protect data you do not know about. You cannot answer a security questionnaire about data you have not mapped. You cannot scope your backup retention, your IAM policies, or your breach notification obligations without knowing what data you have and where it lives.
A data inventory at 10 people is a four-column spreadsheet: what data, where it lives, who has access, how sensitive it is. Half a day to build honestly. That half day clarifies every other security decision.
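The four columns, with illustrative rows; the systems and retention periods here are examples, not recommendations:

```csv
data,where it lives,who has access,sensitivity
customer emails,Postgres prod (users table),all engineers + support,PII
payment methods,Stripe (not stored locally),2 admins via Stripe dashboard,high
application logs,CloudWatch (30-day retention),all engineers,medium
```

The moment you write the "who has access" column honestly, the IAM work from earlier scopes itself.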
What a security questionnaire asks that is not on this list
When your first enterprise questionnaire arrives, it will include things not on the list above: penetration testing, written security policies signed by leadership, formal risk registers, change management procedures, vendor assessment programmes.
Most enterprise security teams reviewing small vendors at the top of the funnel are checking for obviously negligent behaviour, not for a mature programme. The absence of MFA is a red flag. The absence of a written penetration testing policy is a question about timeline, not a dealbreaker.
Three things tend to move a questionnaire from red flag to 'we will proceed with qualifications': evidence of MFA on all admin accounts, a one-page description of how you handle an incident, and an honest specific answer about what data you store and how long you keep it.
When this security posture needs to grow
Three triggers mean it is time to move beyond the minimum:
- A data processing agreement with specific security obligations. Read it. DPAs for regulated data often impose controls that are contractual commitments — annual pen tests, specific encryption standards, defined retention periods. Signing means you are committed.
- Regulated data at scale. Handling health, financial, or PII beyond what account management requires changes the threat model in ways the minimum-viable approach does not cover. A gap assessment from a security professional is genuinely useful at this point.
- 30 or more people. At that scale, informal security hygiene stops working. A new engineer onboarding without a checklist might create their own admin IAM user. Access controls that were obvious in a 10-person team become implicit assumptions new people do not share.
At any of these triggers: a gap assessment from a fractional CISO or security consultant. Not a compliance programme yet — a gap assessment. A few hours of their time will identify the three things that matter most for your specific situation, data types, and customers.
The seven controls above are a foundation. Treating them as permanent is the mistake.