Non-human identity: the security problem every 10-person SaaS team ignores
Your OAuth grants, CI/CD tokens, and IAM keys outnumber your employees by orders of magnitude. Most have never been audited.
Every SaaS security guide converges on the same list. MFA on every account. A password manager. Separate production and development environments. Rotate credentials when an engineer leaves. Run npm audit.
These are all fine. They also miss the actual attack surface of a 10-person SaaS team in 2026, which is not the employees.
The identities running in your infrastructure that are not attached to a human — service tokens, API keys, CI/CD credentials, OAuth grants, machine roles — outnumber human identities by roughly 144 to one in the average organisation. A 10-person team will not hit that enterprise ratio, but the count climbs fast regardless: your ten engineers might hold 30 human identities across systems, while your non-human identity count is likely well past a hundred, and possibly several hundred if you have been building actively for more than six months.
Most of those identities have no rotation schedule. Many are over-permissioned. Some you have forgotten you created. Several are accessible from systems with internet-facing attack surface. The checklist does not tell you any of this.
What a non-human identity is, and how many you have
A non-human identity (NHI) is any credential or token that authenticates a process, service, or automated system rather than a person. The category includes:
- Static API keys: the STRIPE_SECRET_KEY, SENDGRID_API_KEY, OPENAI_API_KEY in your .env files and CI/CD secrets
- CI/CD service tokens: GitHub Actions secrets, GitLab CI variables, CircleCI environment variables
- Cloud provider credentials: AWS IAM access keys (the kind that look like AKIA...), GCP service account JSON files
- OAuth application tokens: the access token Zapier holds to your Slack workspace, the refresh token your data pipeline holds to Google Sheets, the token your CRM integration holds to your calendar
- Webhook secrets: shared secrets used to verify incoming Stripe, GitHub, and Twilio webhooks
- Database credentials: the password your application server uses to connect to Postgres
- Deployment secrets: the VERCEL_TOKEN, FLY_API_TOKEN, or Heroku key sitting in your GitHub Actions secrets
A 10-person SaaS team that has been building for a year typically has 20 to 40 GitHub Actions secrets across all repositories, 15 to 30 OAuth integrations across the tools the team uses, 10 to 20 AWS IAM users with programmatic access, and 30 to 60 webhook secrets. Total: somewhere between 75 and 150 non-human identities, conservatively. Every one is a potential entry point.
Your OAuth integration graph is your blast radius
The most common attack vector against SaaS companies in 2025 was not phishing. It was OAuth token compromise through a third-party integration.
The pattern: a third-party SaaS vendor — a CRM plugin, a Slack bot, a data pipeline tool — is compromised or misconfigured. The attacker gains access to the OAuth tokens that tool holds. Those tokens were issued by your application with broad scopes and long expiry (often months, sometimes no expiry at all). The attacker uses them to exfiltrate customer data or pivot into your production systems.
The reason this works at scale is that OAuth grants are invisible in day-to-day operations. Your employees can list them (Settings or Integrations on most SaaS tools) but they rarely do. Grants persist after the integration is no longer in use. The scopes were accepted in a rush. There is no centralised audit log across all the OAuth apps your company has connected to all the SaaS tools you use.
The practical fix is a quarterly integration audit:
- List every OAuth application currently authorised in your company's Slack, Google Workspace, GitHub organisation, Notion, and any SaaS tool with a connected-apps page.
- For each: when was it last used? What scope does it hold? Is this integration still active?
- Revoke anything unused or unrecognised.
- For anything active: does it need the scope it holds? Most webhook sinks need read-only. Most data integrations need specific resources, not workspace-wide access.
This takes about two hours the first time and becomes faster each subsequent quarter. The goal is not to stop connecting third-party tools; it is to keep your integration graph legible.
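For the GitHub organisation, the first step is scriptable. A minimal sketch using the GitHub CLI, assuming gh is authenticated as an admin and my-org is a placeholder:

```bash
# List every GitHub App installed in the organisation, with its permission grants.
# Anything unrecognised or over-scoped gets reviewed and, if needed, revoked
# from the organisation's installed-apps settings page.
gh api orgs/my-org/installations \
  --jq '.installations[] | {app: .app_slug, installed: .created_at, permissions}'
```

Slack, Google Workspace, and Notion surface the equivalent list in their admin consoles; the mechanics differ, the questions do not.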
The five highest-risk NHI patterns
Not all non-human identities carry the same risk. These five patterns account for the majority of meaningful exposure at a small SaaS team, ordered roughly by exploit frequency.
| Pattern | Why it is dangerous | Fix |
|---|---|---|
| Long-lived IAM access keys in CI/CD | Never expire; whatever scope was set at creation; if a key leaks from a debug log or a public repo, the attacker has a persistent cloud foothold | Replace with workload identity federation (OIDC tokens) — see next section |
| Hardcoded secrets in git history | A secret committed and then "deleted" remains in history indefinitely; if the repo was ever public, it was scraped | Secret scanning in CI (Trufflehog or GitGuardian) blocking merges on detected secrets |
| Wildcard IAM policies with no resource scoping | AmazonS3FullAccess means access to every bucket, including backups and uploaded documents | Resource-level IAM policies scoped to the specific buckets, queues, or tables each service actually needs (see the policy sketch after this table) |
| Forgotten OAuth grants from deprecated integrations | Vendor holds a valid long-lived token to your data long after you stopped using the tool; vendor breach means your data is exposed | Quarterly OAuth audit; revoke anything inactive for 90+ days |
| Shared database credentials across workloads | One compromised service means full database access for the attacker | One service account per workload, with minimum schema-level privileges |
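To make the wildcard row concrete: here is what resource-level scoping looks like for a service that only reads and writes one bucket. A sketch; the bucket name and statement ID are illustrative:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "UploadsBucketOnly",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-app-uploads/*"
    }
  ]
}
```

A service holding this policy cannot enumerate other buckets, read your backups, or touch anything outside my-app-uploads, which is the entire point.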
Workload identity federation: eliminating the most dangerous class of secret leaks
The single most impactful change a small team can make is replacing long-lived IAM keys in CI/CD with short-lived credentials via OIDC federation. GitHub Actions, GitLab CI, and CircleCI all support this natively with AWS, GCP, and Azure.
Here is what the GitHub Actions side looks like for AWS:
```yaml
name: Deploy
on:
  push:
    branches: [main]

permissions:
  id-token: write   # enables OIDC token generation
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/GitHubActionsDeployRole
          aws-region: ap-south-1
      # All subsequent steps now have AWS credentials valid for this run only
      - run: aws s3 sync ./dist s3://my-app-bucket --delete
```

On the AWS side, you create an IAM OIDC provider for token.actions.githubusercontent.com, then create an IAM role with a trust policy that restricts assumption to workflows from your specific repository and, optionally, your specific branch. The credentials are issued per workflow run, expire in under an hour, and are never stored anywhere.
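The trust policy is where the repository restriction lives. A minimal sketch, with the account ID, organisation, and repository as placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        },
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:my-org/my-app:ref:refs/heads/main"
        }
      }
    }
  ]
}
```

The sub condition is the restriction that matters: only workflows running in my-org/my-app on the main branch can assume the role.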
The result: you can delete every AKIA... key sitting in your GitHub Actions secrets. If an attacker finds a way to execute code in your workflow, they get a credential that expires before they can pivot usefully. The OIDC token is also tied to the exact repository and ref you configure in the trust policy.
Setup takes about an hour the first time. This is among the changes with the best security-to-effort ratio in this entire space.
Auditing what you already have
Before fixing the posture, you need to understand it. Here is a practical audit sequence for a 10-person team that should take half a day total.
Cloud credentials (1–2 hours). List every IAM user with programmatic access in AWS (or equivalent in GCP/Azure). For each: when were the credentials last rotated? What policies are attached? Is the user still needed? Delete any users representing departed engineers, deprecated workflows, or experiments that concluded. For any remaining AKIA... keys, start the migration to OIDC or, at minimum, set a rotation deadline.
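The first pass is scriptable with the AWS CLI. A sketch; the loop is just convenience around three documented calls:

```bash
# For every IAM user, list access keys and when each was last used.
# "None" in the output means the key has never been used: a deletion candidate.
for user in $(aws iam list-users --query 'Users[].UserName' --output text); do
  for key in $(aws iam list-access-keys --user-name "$user" \
      --query 'AccessKeyMetadata[].AccessKeyId' --output text); do
    last_used=$(aws iam get-access-key-last-used --access-key-id "$key" \
      --query 'AccessKeyLastUsed.LastUsedDate' --output text)
    echo "$user  $key  last used: $last_used"
  done
done
```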
CI/CD secrets (30 minutes). List all secrets in your GitHub organisation and in each repository. For each: who created it? Does this workflow still run? Is this key still valid? Delete anything not actively used. Flag anything that looks like a long-lived cloud provider key for replacement with OIDC.
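The enumeration step in shell, assuming the GitHub CLI is authenticated with admin access and my-org is a placeholder:

```bash
# Organisation-level secrets first, then per-repository Actions secrets.
# gh secret list shows each secret's name and last-updated date.
gh secret list --org my-org
for repo in $(gh repo list my-org --limit 200 --json name --jq '.[].name'); do
  echo "== $repo =="
  gh secret list --repo "my-org/$repo"
done
```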
OAuth integration audit (2 hours). Go to the Connected Apps or Integrations section in Google Workspace admin, Slack admin, GitHub organisation settings, Notion, and any other SaaS tool that surfaces this. For each authorised application: when was it last active? What scope does it hold? Is it still in use? Revoke anything inactive for 90 or more days. Flag anything with write access you cannot immediately explain.
Database credentials (30 minutes). List every database user or role. For each workload that connects: does it use a shared credential with other services? What schema-level permissions does it hold? The goal is one service account per workload with minimum privilege.
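In Postgres, that end state looks like one role per service with grants confined to its schema. A sketch with hypothetical names (a billing_service role, a billing schema):

```sql
-- One role per workload; grants limited to the schema it actually uses.
-- Store the password in a secrets manager, not in this script.
CREATE ROLE billing_service LOGIN PASSWORD 'change-me';
GRANT CONNECT ON DATABASE app TO billing_service;
GRANT USAGE ON SCHEMA billing TO billing_service;
GRANT SELECT, INSERT, UPDATE ON ALL TABLES IN SCHEMA billing TO billing_service;
-- Tables created in the schema later inherit the same grants:
ALTER DEFAULT PRIVILEGES IN SCHEMA billing
  GRANT SELECT, INSERT, UPDATE ON TABLES TO billing_service;
```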
The findings from this audit are more useful than any compliance questionnaire you will fill out this year.
A reordered checklist
The standard checklist items are not wrong. They are just not where a small SaaS team's meaningful risk lives. Here is a reordered version, with the highest-impact items first:
- Secrets scanning in CI: Trufflehog or GitGuardian running on every pull request, blocking merges on detected secrets. Thirty-minute setup; a minimal workflow sketch follows this list.
- Workload identity federation: OIDC-based short-lived credentials in CI/CD for every cloud provider interaction. Eliminates the largest class of credential leak.
- Quarterly OAuth audit: integration graph visibility; revoke unused grants, downscope where possible.
- Resource-scoped IAM policies: no wildcards; specific ARNs for what each service account actually touches.
- Separate database credentials per workload: one service account per service, minimum privilege.
- Branch protection: required reviews, status checks, and signed commits on main and production branches.
- MFA on all human accounts: yes, do this; it is just not the highest-impact first step.
- Password manager: yes, also do this.
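For the first item, the workflow is small enough to show whole. A minimal sketch using the Trufflehog GitHub Action, scanning only the commits a pull request introduces; mark the check as required and a detected secret blocks the merge:

```yaml
name: Secret scan
on: [pull_request]

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # full history, so the base..head range can be scanned
      - uses: trufflesecurity/trufflehog@main
        with:
          base: ${{ github.event.pull_request.base.sha }}
          head: ${{ github.event.pull_request.head.sha }}
```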
The non-human identity work in items one through five is where the actual risk reduction happens. The last two items are necessary hygiene, but not where small SaaS teams get breached.
The interesting thing about this list is that none of it requires a security budget, a dedicated security hire, or a compliance framework. It requires about two days of engineering time to implement, and a calendar reminder every quarter. Most teams skip it because the checklist guides they read never told them to do it.