The minimum viable security posture for a 10-person SaaS
Six controls that cover most of the real attack surface, and what enterprise security theatre to defer.
Your first enterprise sales call goes well. Then the questionnaire arrives: 47 questions about encryption at rest, penetration testing, SOC 2 status, data residency, and incident response procedures. The engineering team has six people. The company's SaaS security posture is a shared Notion doc from 2023 that nobody has opened since.
This is not a compliance problem. It is a prioritisation problem. Most security guides are written for teams with a dedicated security engineer who can spend three months instrumenting a SIEM, three more getting a pen test scoped, and a year getting SOC 2 Type II. At 10 people, that is not a viable plan. Neither is having nothing.
What follows is the six controls a 10-person SaaS should have before the first serious enterprise conversation. They are ranked by the ratio of attack surface eliminated to implementation time, not by how impressive they look on a questionnaire.
The SaaS security posture problem: most advice assumes a bigger team
The OWASP Top 10, the NIST Cybersecurity Framework, SOC 2 Trust Services Criteria: all correct, all built for teams with time to implement everything on the list. That is not a 10-person startup.
The actual threat model for a small SaaS is narrower than frameworks suggest. You are not protecting against nation-state attackers. You are protecting against:
- Credential stuffing: someone tries your users' reused passwords from a breach elsewhere.
- A secret hardcoded in a git repository — GitHub's automated scanner finds it within minutes of a public commit.
- A compromised package in your npm or PyPI dependency tree.
- A developer who left six months ago and still has production database access.
- Accidental data exposure through a misconfigured S3 bucket or an overly permissive API endpoint.
Five categories. The controls that address these five overlap considerably. You can knock out most of them in a sprint.
Control 1: use a managed auth provider
The fastest way to introduce authentication vulnerabilities is to build the session-management layer yourself. JWT rotation, refresh token handling, PKCE flows, account recovery logic: this is a full-time job to get right, and getting it wrong means account takeover at scale.
Pick one managed auth provider: Clerk, Auth0, Supabase Auth, Cognito, or WorkOS if you need enterprise SSO immediately. Commit to it from day one. The cost is negligible compared to the engineering time a bespoke auth stack requires to build and maintain.
Once you are on managed auth, three rules apply:
- Require MFA for all admin and production access. No exceptions. Make it part of the onboarding checklist for every new hire.
- Set session expiry to 24 hours for privileged sessions. A longer session means a stolen token works for longer.
- Log every login, MFA challenge, and privilege escalation event. You will need that log if something goes wrong.
Control 2: treat every secret like a password
This is the single highest-return control on this list. One hardcoded AWS key committed to a public repository is enough to cause a complete account compromise. Attackers who monitor the GitHub commit firehose act within minutes of a public push.
Four rules, no exceptions:
- No secrets in code. Use environment variables in development and a secrets manager (AWS Secrets Manager, Doppler, HashiCorp Vault) in production. All three have free tiers that work for small teams.
- Install detect-secrets or git-secrets as a pre-commit hook on every developer machine. This catches the mistake before it reaches the remote.
- If a secret was ever committed, even briefly and to a private repo, rotate it immediately. Not eventually. The secret is in the git history, and a private repo shared with a contractor is not the same as private.
- Production credentials do not go into Slack, Notion, or email. Ever.
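The rotation rule is easier to follow when it is already written down as concrete steps. A sketch for an AWS access key using the AWS CLI (the user name and key ID are placeholders):

```shell
# 1. Create a replacement key for the same IAM user
aws iam create-access-key --user-name deploy-bot

# 2. Ship the new key to the secrets manager and CI, then deactivate the old one
aws iam update-access-key --user-name deploy-bot \
  --access-key-id AKIAOLDKEYID --status Inactive

# 3. After confirming nothing still depends on it, delete the old key for good
aws iam delete-access-key --user-name deploy-bot --access-key-id AKIAOLDKEYID
```

The deactivate-then-delete sequence matters: an inactive key can be reactivated if something breaks, a deleted one cannot.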
```shell
# Install detect-secrets
pip install detect-secrets

# Create a baseline file (commit this to the repo)
detect-secrets scan > .secrets.baseline
```

The matching .pre-commit-config.yaml entry:

```yaml
repos:
  - repo: https://github.com/Yelp/detect-secrets
    rev: v1.4.0
    hooks:
      - id: detect-secrets
        args: ['--baseline', '.secrets.baseline']
```

Control 3: automated dependency scanning
A single dependency with a known vulnerability can expose your entire application. The npm, PyPI, and RubyGems ecosystems all have histories of package takeovers, typosquatting attacks, and compromised maintainers. A transitive dependency two levels deep in your tree carries the same risk as a direct one.
The minimum viable setup:
- Enable Dependabot or Renovate on every repository. Both create automated pull requests when a package has a known vulnerability. Merge the security PRs; defer the non-security version bumps if you need to.
- Run npm audit or pip-audit in CI. Fail the build on high-severity findings, not just report them.
- Pin your Docker base image to a specific digest in production builds. FROM node:20-alpine can change silently under you. FROM node:20-alpine@sha256:... cannot.
- Once per quarter, run npm list --depth=0 and question any package you cannot explain. Unused dependencies that nobody added intentionally are worth investigating.
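The second and third rules above reduce to a few commands. A sketch, with the image name and tag as examples:

```shell
# In CI: fail the build on high or critical advisories instead of just printing them
npm audit --audit-level=high

# Resolve the digest a tag currently points at, so the Dockerfile can pin it
docker pull node:20-alpine
docker inspect --format '{{index .RepoDigests 0}}' node:20-alpine
```

Paste the digest that the last command prints into the Dockerfile's FROM line; the build then fails loudly if the image is ever republished, instead of changing silently.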
Dependency scanning takes an afternoon to wire up and runs forever after. The cost of not having it is one compromised transitive dependency triggering a full incident response.
Control 4: least privilege, enforced
Every developer should not have production database write access by default. Every service should not have access to every AWS resource. Every departed employee should lose access on their last day, not when someone remembers to revoke it.
In practice at 10 people:
- IAM policies should be specific, not wildcards. An S3 bucket for user uploads should have exactly one policy: read and write access for the application service that needs it. Not for developers, not for monitoring lambdas, and not a wildcard grant that covers everything.
- Production database write access is not a default for developers. It is an explicit, time-limited grant, with the reason logged.
- Offboarding runs from a checklist, not from memory. Every system your company uses should be on that list. When someone leaves, the checklist runs, including contractors.
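A minimal sketch of the first rule, attaching an inline policy scoped to one bucket (the role, policy, and bucket names are hypothetical):

```shell
# Grant the application role read/write on the uploads bucket and nothing else
aws iam put-role-policy \
  --role-name app-server \
  --policy-name uploads-bucket-rw \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::acme-user-uploads/*"
    }]
  }'
```

The useful habit is the shape of the statement: named actions and a single resource ARN, never "Action": "*".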
Control 5: know where your PII is
You cannot protect what you cannot locate. Before you process any customer personal data, you should be able to answer four questions: what data do you store, where does it live, who can access it, and how long is it retained?
This does not require a data-mapping tool or a data protection officer. A shared spreadsheet with four columns covers the substance for a 10-person team. The discipline of filling it in is what matters.
Once you know where PII lives, three practices follow:
- Log access to sensitive tables. Most managed databases and ORMs expose query audit hooks. Turn them on. You need to know who queried users.email and when.
- Encryption at rest is probably already enabled for any managed database service (RDS, Supabase, PlanetScale, and Neon all default to encrypted storage). Verify it explicitly rather than assuming.
- Do not store what you do not need. Every field of PII you never collect is a field that cannot be exposed. If you collected phone numbers during onboarding and never used them, stop collecting them.
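One way to turn on query auditing, sketched for Postgres with the pgaudit extension. Note the assumptions: pgaudit must be in shared_preload_libraries (managed providers such as RDS generally handle this), and on RDS these settings go in a parameter group rather than through ALTER SYSTEM.

```shell
psql "$DATABASE_URL" <<'SQL'
-- Load the extension, then log every read and write at the session level
CREATE EXTENSION IF NOT EXISTS pgaudit;
ALTER SYSTEM SET pgaudit.log = 'read, write';
SELECT pg_reload_conf();
SQL
```

With this on, a SELECT against users.email produces an audit line naming the statement and the role that ran it, which is exactly the record you want when reconstructing who accessed what.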
Control 6: a one-page incident response plan
When something goes wrong (a credential exposed, a customer data query in the wrong logs, an unexpected login from an unknown geography), the worst time to design the response is during the incident itself.
A one-page plan covers four things:
- What counts as a security incident: credential exposure, data access outside normal patterns, or suspected unauthorised access.
- Who makes the notification decision: one named person, one named backup.
- How to revoke a compromised credential across all systems within 15 minutes. Write this as numbered steps. Test it.
- Where the plan lives: not in Notion, not in Slack. Somewhere accessible when production is down and the usual tools are unreachable. A printed page in the office works. A bookmark on every developer's personal machine works. A pinned message in a channel that predates the incident works.
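The revocation steps are worth drafting now rather than during the incident. A sketch for the AWS portion (user and key names are placeholders; the last step is deliberately provider-specific and is the part to write out per system):

```shell
# 1. Cut off the compromised IAM user immediately with an inline deny-all policy
aws iam put-user-policy --user-name compromised-user \
  --policy-name emergency-deny-all \
  --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Deny","Action":"*","Resource":"*"}]}'

# 2. Deactivate the user's access keys so existing credentials stop working
aws iam update-access-key --user-name compromised-user \
  --access-key-id AKIAEXAMPLEKEYID --status Inactive

# 3. Force re-authentication everywhere by rotating the session signing secret
#    in your auth provider (Clerk, Auth0, etc. each have their own procedure)
```

An explicit Deny overrides every Allow the user has, which is why step 1 is faster and safer than hunting down individual permissions mid-incident.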
A one-page plan tested quarterly is worth more than a 40-page document nobody knows the location of.
| Control | Implement when | Effort | What it prevents |
|---|---|---|---|
| Managed auth provider | Day 1 | Low: pick a provider, stop building | Auth bugs, account takeover |
| Secrets management | Day 1 | Low: pre-commit hook plus env vars | Credential exposure via git or chat |
| Dependency scanning | Before first paying user | Low: enable Dependabot | Supply chain attacks, known CVEs |
| Least privilege IAM | Before first paying user | Medium: audit all policies | Over-exposure, stale access |
| PII inventory + access logging | Before processing personal data | Medium: spreadsheet plus DB audit hooks | Data exposure, compliance gaps |
| Incident response plan | Before first enterprise prospect | Low: write once, test quarterly | Slow, uncoordinated incident response |
What to defer and when to revisit
The controls below are real and worth having eventually. They are not the right use of engineering time at 10 people.
- SOC 2 Type II. This makes sense once enterprise prospects consistently ask for it and deal sizes justify the cost. A SOC 2 audit typically runs $20,000-$50,000 in auditor fees plus significant engineering time. Implement the six controls above first; they complete most of the technical work, and SOC 2 adds the audit evidence layer on top.
- Penetration testing. A pen test against a product that has not yet implemented the six controls above will surface issues you already know about. Get the foundation in place first.
- SIEM and centralised log aggregation. Your cloud provider's native logging is sufficient for a 10-person team. A full SIEM is the right answer for a team with a dedicated operator.
- Zero-trust network architecture. Genuinely the right model at scale, but expensive to implement correctly. Worth planning for once you have multiple production environments to segment.
The pattern: defer controls whose primary function is producing audit and compliance evidence until you have the pipeline that requires that evidence. Implement first the controls whose primary function is preventing bad things from happening.
Security at 10 people is mostly about closing off the five specific ways that 10-person SaaS companies actually get compromised. None of these controls require a security team to implement, and every one of the failures they prevent takes longer to recover from than to prevent. The enterprise questionnaire will arrive eventually, but the controls are worth having well before it does.