DPDP compliance for engineers: the four code changes your SaaS actually needs
Most DPDP guides are written for compliance officers. This one is written for the engineer who has to implement it.
The gap between policy documents and sprint tickets
DPDP compliance for engineers starts with a translation problem. When the DPDP Rules 2025 came into force, most engineering teams had the same experience: leadership said "we need to comply," someone printed the rules document, and nothing moved for three months because nobody could turn those thirty-odd sections into sprint tickets.
The rules use language like "reasonable security safeguards" and "appropriate technical measures." Lawyers can work with that. Sprint planning cannot.
This article is a translation. The four sections below map the DPDP Rules' engineering-relevant obligations to specific changes in your codebase. If your product handles personal data of Indian residents, and most B2B SaaS products do, this is what compliance looks like in practice.
One framing note before the code: the rules distinguish between a Data Fiduciary (the entity that determines why data is processed) and a Data Processor (a vendor processing on behalf of the fiduciary). If you're a SaaS product, you're the fiduciary for your customer data and a processor for your customers' end-users. The changes below apply to both roles, though the consent architecture differs slightly depending on which hat you're wearing at a given moment.
What the DPDP Rules actually require of your codebase
The rules enumerate several obligations. The ones that directly produce engineering work are:
- Obtaining and logging consent before processing personal data, with granularity by purpose
- Enabling erasure of personal data on user request
- Knowing where personal data lives: what you hold, why, and for how long
- Notifying the Data Protection Board within 72 hours of a confirmed breach, and affected users without delay
Everything else is either in the legal and contractual layer (data processing agreements, privacy notices) or is standard security hygiene you should already have: encryption at rest and in transit, access controls, and audit logs showing who accessed what. Most teams doing an honest gap assessment find they're 60 to 70 percent there from basic security hygiene. The structural gaps are almost always the consent log and the deletion API.
Change 1: The consent log
The DPDP Act's consent requirement has more teeth than a signup checkbox. You need to be able to prove, at any point, that a specific user gave consent for a specific processing purpose at a specific time, under a specific version of your privacy notice.
The minimum your consent log needs to capture:
- User identifier: a stable internal ID, not a name
- Purpose: a machine-readable string like marketing_email, analytics, or third_party_data_sharing
- Timestamp: ISO 8601, timezone-aware
- Consent version: the version of the privacy notice they consented to
- Channel: web, mobile, or API
What makes this different from a consent_given boolean in your users table is purpose granularity and immutability. Consent logs are appended to, not updated. When a user withdraws consent for marketing_email, you add a withdrawal record. You do not modify the original grant.
```sql
CREATE TABLE consent_events (
    id             BIGSERIAL PRIMARY KEY,
    user_id        UUID NOT NULL,
    purpose        TEXT NOT NULL,
    event_type     TEXT NOT NULL CHECK (event_type IN ('granted', 'withdrawn', 'expired')),
    notice_version TEXT NOT NULL,
    channel        TEXT,
    ip_address     INET,
    created_at     TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- Never UPDATE or DELETE rows; the history is the audit trail.
-- Query current consent state:
CREATE VIEW current_consent AS
SELECT DISTINCT ON (user_id, purpose)
       user_id, purpose, event_type, created_at
FROM consent_events
ORDER BY user_id, purpose, created_at DESC;
```

The append-only constraint is non-negotiable for audit purposes. Most teams who try to simplify this into a consent_preferences JSONB column in the users table end up unable to answer the question "what did user X consent to on date Y?", which is exactly what a regulator will ask.
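The same latest-event-wins logic belongs in application code as a gate before any processing that needs consent. A minimal in-memory sketch, assuming the event shape from the table above (the `ConsentEvent` and `has_consent` names are illustrative, not a prescribed API):

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class ConsentEvent:
    user_id: str
    purpose: str          # e.g. "marketing_email"
    event_type: str       # 'granted' | 'withdrawn' | 'expired'
    notice_version: str
    created_at: datetime


def has_consent(events, user_id: str, purpose: str) -> bool:
    """Latest event for (user, purpose) wins, mirroring the DISTINCT ON view.

    No record at all means no consent: processing is opt-in, never default.
    """
    relevant = [e for e in events if e.user_id == user_id and e.purpose == purpose]
    if not relevant:
        return False
    latest = max(relevant, key=lambda e: e.created_at)
    return latest.event_type == "granted"
```

Calling `has_consent` at the point of use (just before the marketing send, not once at signup) is what makes withdrawal take effect immediately rather than on the next cache refresh.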
For B2B SaaS products where your customers are the fiduciaries and their end-users are the data subjects, you don't need to manage consent flows for end-users directly. But you do need to log the data processing agreement terms and versions attached to each customer account, and produce that record on request.
Change 2: The data-deletion API
Rule 12 of the DPDP Rules gives data principals the right to erasure. When a user requests deletion of their personal data, you must delete it or render it permanently unidentifiable. Pseudonymisation (replacing a name with a UUID while keeping the mapping table) does not count.
The practical architecture is an async job pattern:
- User hits your account-deletion endpoint; you set deletion_requested_at and status = pending_deletion
- A background worker (cron or queue) picks up pending_deletion accounts nightly
- The worker fans out deletion tasks to every service that holds personal data for that user
- Each service reports completion; once all tasks are done, a final audit record is written
The fan-out step is where most implementations break down. Before you can build it, you need to know which services and tables hold personal data, which is exactly what the personal-data register in the next section gives you.
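The four steps above can be sketched as a single worker pass. This is an illustrative skeleton, not a production queue: the service names and the `run_deletion_job` signature are hypothetical, and in a real system each deleter would call a service's deletion endpoint or run targeted queries:

```python
from datetime import datetime, timezone


def run_deletion_job(pending_user_ids, deleters):
    """Fan deletion out to every service holding personal data.

    `deleters` maps a service name to a callable(user_id) that returns
    True on confirmed deletion. A user's request is only marked complete
    when every service reports success; partial results stay pending so
    the nightly run retries them.
    """
    audit = []
    for user_id in pending_user_ids:
        results = {name: delete_fn(user_id) for name, delete_fn in deleters.items()}
        audit.append({
            "user_id": user_id,
            "services": results,
            "complete": all(results.values()),
            "finished_at": datetime.now(timezone.utc).isoformat(),
        })
    return audit
```

The audit record is the part teams forget: when a regulator asks whether user X was deleted, "the job ran that night" is not an answer, but a per-service completion record is.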
Three schema patterns that make deletion tractable:
- Centralise identity. If a user's email appears in 12 tables, deletion requires 12 targeted queries. If personal fields live only in a users table and everything else references user_id, you can rely on cascades.
- Soft-delete first. Set deleted_at and stop returning the user in queries. Run hard deletion after a grace period to handle account-recovery requests or legal holds.
- Build an exemption registry. Consent logs, financial records, and transaction logs may have competing retention obligations. Your deletion job should check this registry before removing records. The DPDP Rules explicitly allow retention where required by other law.
Change 3: The personal-data register
You cannot satisfy deletion requests, consent obligations, or breach notifications without knowing where your personal data lives. Most teams that skip this step discover during an incident that personal data has spread to services they forgot they built.
A personal-data register does not need to be sophisticated. The minimum it needs to capture:
| Field | What to capture | Why it matters |
|---|---|---|
| Data element | e.g. Email address, IP address, Device ID | Defines scope of deletion and consent |
| Classification | Personal data or sensitive personal data | Sensitive data triggers additional obligations |
| Storage location | Table, column, service, cloud region | Required for deletion fan-out |
| Retention period | How long before deletion or anonymisation | Sets the timer for your deletion jobs |
| Processing purpose | What you use this data for | Must match the purposes in your consent log |
| Third-party shares | Which downstream systems receive it | Required for downstream deletion requests |
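One way to keep those six fields honest is to store the register as version-controlled code next to the schema it describes, so it goes through review like any other change. A minimal sketch (the `RegisterEntry` shape and the sample values are illustrative):

```python
from dataclasses import dataclass, field


@dataclass
class RegisterEntry:
    data_element: str              # e.g. "email address"
    classification: str            # "personal" or "sensitive"
    locations: list[str]           # table.column or service:table.column
    retention: str                 # sets the timer for deletion jobs
    purposes: list[str]            # must match purpose strings in the consent log
    third_parties: list[str] = field(default_factory=list)


REGISTER = [
    RegisterEntry(
        data_element="email address",
        classification="personal",
        locations=["users.email", "billing-service:customers.email"],
        retention="30 days after deletion request",
        purposes=["transactional_email", "marketing_email"],
        third_parties=["email-provider"],
    ),
]
```

A register in code can also be asserted against: a CI check can verify that every purpose string in the register exists in the consent log's purpose list, which is exactly the cross-reference the table's "Processing purpose" row demands.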
The hard part is not building the register. It's keeping it current. The most common failure mode is a new service or a new data field being added without an update to the register.
Two practices that work. First, make the register part of your schema-migration checklist: any PR that adds a column to a user-facing table should require a register update before merge. Second, run a quarterly grep across your codebase for common personal-data field names (email, phone, name, address, date_of_birth, ip_address) and verify that every appearance is documented.
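The quarterly grep is simple enough to script. A sketch, assuming schema migrations live in `.sql` files (the field list and function names are illustrative; extend both to match your stack):

```python
import re
from pathlib import Path

# Common personal-data field names; "\b" avoids matching inside longer identifiers.
PII_FIELDS = re.compile(r"\b(email|phone|name|address|date_of_birth|ip_address)\b")


def find_pii_fields(lines):
    """Return (line_no, matched_field) for every hit in a file's lines."""
    hits = []
    for line_no, line in enumerate(lines, start=1):
        match = PII_FIELDS.search(line)
        if match:
            hits.append((line_no, match.group(1)))
    return hits


def scan_sql_files(root: str = "."):
    """Walk migration files and print every undocument-able hit for review."""
    for path in Path(root).rglob("*.sql"):
        for line_no, fieldname in find_pii_fields(path.read_text().splitlines()):
            print(f"{path}:{line_no}: {fieldname}")
```

Every hit either appears in the register or generates a ticket; a clean quarterly run is the evidence that the register is current.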
Change 4: The breach-notification hook
DPDP Rule 22 requires notification to the Data Protection Board within 72 hours of discovering a personal data breach. Rule 23 requires notification to affected users without delay if their rights or interests are at risk. The 72-hour window runs from when you know about the breach, not from when it happened. Detection speed matters.
What this requires in your system:
- Alerting rules that fire on anomalous data access: bulk exports, unusual query volumes on user tables, new IP addresses hitting admin APIs
- An incident-response runbook that defines what constitutes a reportable breach under DPDP and names the person who owns the notification obligation
- A notification pipeline capable of reaching all affected users at scale; check that your transactional email provider is sized for a worst-case scenario before you need it
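The first bullet's bulk-export alert reduces to a threshold over per-actor read volumes from your database audit logs. A deliberately small sketch: the threshold value and the `flag_bulk_exports` name are assumptions, and a real deployment would baseline per-actor rather than use one global number:

```python
# Assumption: anything above this many rows read from user tables per hour
# is anomalous for this system; tune against your own traffic.
BULK_EXPORT_THRESHOLD = 10_000


def flag_bulk_exports(rows_read_by_actor: dict[str, int]) -> list[str]:
    """Return actors whose hourly read volume on user tables looks like an export.

    `rows_read_by_actor` would come from aggregating database audit logs
    over a sliding window; each flagged actor should page a human.
    """
    return [
        actor
        for actor, row_count in rows_read_by_actor.items()
        if row_count >= BULK_EXPORT_THRESHOLD
    ]
```

The point of the sketch is where the clock starts: an alert that fires during the export starts your 72-hour window hours or days earlier than discovering the breach from an external report.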
One practical note: as of mid-2026, the Data Protection Board's formal submission portal is still being stood up. Document your breach response process now anyway. When the portal goes live, you want a runbook ready, not a queue of catch-up filings.
What you're probably already doing that counts
Before scoping this as a ground-up build, run an honest audit against what you already have:
- TLS everywhere: counts toward the technical security measures requirement
- IAM access controls with least-privilege principles: counts toward security safeguards
- Database audit logs showing who ran what query, when: counts
- Regular offsite backups: counts
- Signed DPAs with your cloud providers: counts toward vendor management obligations
Most teams doing this audit find they're closer to compliance than they expected. The gaps are almost always structural: the consent log's append-only design, the deletion job's fan-out, and the data register's completeness. Those are the three places to spend engineering time. Everything else is paperwork layered on top of infrastructure you already have.
The May 2027 deadline creates a false sense of distance. Building a deletion API into a three-year-old schema is harder than building it into a current one: foreign keys are messier, data is more spread out, and the team that wrote the original code may not be around to explain the edge cases. Start with the consent log and the data register. The rest follows from knowing where your data actually is.
Related reading
ONDC, three years in: the number that matters isn't the headline
ONDC crossed 218 million transactions in FY 2025-26. But mobility now drives over half of all orders, and retail — the segment the protocol was built to democratise — peaked in October 2024 and has been falling since.
Bootstrapped or venture-backed: the Indian SaaS calculus in 2026
India hosts the second-largest SaaS ecosystem outside the US. The raise-or-bootstrap question has a different answer in 2026 than it did in 2021. Here's the data behind the shift.