DPDP Act for engineers: what you actually have to change in your code
A practical map of obligations from the DPDP Act 2023 and Rules 2025 to the schema changes, API endpoints, and data flows you need to build
The DPDP Rules 2025 landed on 19 November 2025. Since then, most of the coverage has been written for compliance officers and legal counsel: what the Act says, what it means for your data governance posture, what penalties apply. That framing is useful for someone deciding whether to take the law seriously. It is not useful for the engineer who actually has to ship the changes.
This piece maps the major obligations from the Digital Personal Data Protection Act 2023 and the Rules 2025 to concrete engineering work. It is not a substitute for legal review. Edge cases matter, and your situation may differ. It is a starting point for understanding which tables to add to your Postgres database, which endpoints to build, and which internal processes to put in place before the compliance deadlines arrive.
The timeline you are actually working with
The Rules have a phased rollout. Most obligations (Rules 3 and 5 through 16, covering notice, consent, data principal rights, and security safeguards) come into force 18 months after notification, putting the effective deadline at around May 2027. Rule 4, governing the Consent Manager framework, comes into force 12 months after notification, around November 2026.
May 2027 looks distant. It is not. The schema changes, new API endpoints, and internal processes described here are not things you wire in a sprint. Organisations that wait until late 2026 to start will be retrofitting consent infrastructure onto data models that were never designed for purpose-based processing. That is a painful rewrite. Start with the schema.
Consent: what the Act actually requires of your backend
Section 6 of the Act specifies that consent must be free, specific, informed, unconditional, and unambiguous. The Rules add that the consent notice must be a separate communication, not buried in your terms of service or combined with a general sign-up flow. Each purpose for which data will be processed must be stated individually. And withdrawal must be "as easy as giving consent."
That last requirement is the one most engineers underestimate. If a user consented via a single checkbox on signup, withdrawal must also be achievable with a single action, not buried in a five-step account closure flow. Once consent is withdrawn, processing for that purpose must stop immediately, not in a nightly batch.
The foundational schema change is a consent records table. Every processing activity that relies on consent needs to be traceable to a specific consent record:
```sql
CREATE TABLE consent_records (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID NOT NULL REFERENCES users(id),
purpose TEXT NOT NULL, -- e.g. 'marketing_email', 'analytics', 'service_delivery'
notice_version TEXT NOT NULL, -- ties the record to the exact notice text shown
consented BOOLEAN NOT NULL,
recorded_at TIMESTAMPTZ NOT NULL DEFAULT now(),
revoked_at TIMESTAMPTZ, -- set when consent is withdrawn for this purpose
channel TEXT -- 'web', 'mobile_ios', 'api'
-- do not store raw IPs; hash if you need an audit signal
);
-- Index for fast lookup on rights requests
-- Index for fast lookup on rights requests
CREATE INDEX ON consent_records (user_id, purpose, revoked_at);
```

The purpose column is the critical field. Every table row that holds personal data should carry a purpose code, not just a user_id. Without it, you cannot correctly determine when retention expires (when the stated purpose is served), and you cannot correctly process a consent withdrawal for one purpose without affecting another.
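With that schema, withdrawal becomes a single UPDATE rather than a workflow, and anything that processes personal data gates on the live record. A minimal sketch against the table above; $1 stands in for your application's bound user ID:

```sql
-- Withdrawal: one statement, effective on the next read.
UPDATE consent_records
SET revoked_at = now()
WHERE user_id = $1
  AND purpose = 'marketing_email'
  AND revoked_at IS NULL;

-- Gate every processing job on the latest record for the purpose.
-- No row, or may_process = false: skip this user.
SELECT consented AND revoked_at IS NULL AS may_process
FROM consent_records
WHERE user_id = $1
  AND purpose = 'marketing_email'
ORDER BY recorded_at DESC
LIMIT 1;
```

Gating at query time, rather than flagging users in a nightly batch, is what makes "processing stops immediately" true in practice.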
The six data principal rights and the API they map to
The Act gives data principals six categories of rights. Each maps to a specific API surface:
| Right | What it means | API endpoint | Timing obligation |
|---|---|---|---|
| Information (S. 11) | What personal data do you hold about me, and for what purposes? | GET /me/data-summary | 30-day resolution window |
| Correction (S. 12) | Correct inaccurate or incomplete personal data | PATCH /me/personal-data | Acknowledge within 48 hrs; resolve within 30 days |
| Erasure (S. 12) | Delete personal data when consent is withdrawn or purpose is served | POST /me/erasure-request | Acknowledge within 48 hrs; process within 30 days |
| Consent withdrawal (S. 6) | Withdraw consent for a specific purpose | DELETE /consent/:purpose | Immediately |
| Nomination (S. 14) | Nominate someone to exercise rights on their behalf in case of death or incapacity | POST /me/nomination | Standard processing |
| Grievance redressal (S. 13) | Raise a complaint if the fiduciary does not respond to requests | POST /grievances | Acknowledge within 48 hrs; resolve within 30 days |
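The Information right is the easiest place to start because it is read-only. A minimal sketch of the consent half of GET /me/data-summary, assuming the consent_records schema from earlier; a complete response would also enumerate the categories of personal data held:

```sql
-- Latest consent state per purpose; the newest record wins.
SELECT DISTINCT ON (purpose)
       purpose,
       consented AND revoked_at IS NULL AS active,
       notice_version,
       recorded_at,
       revoked_at
FROM consent_records
WHERE user_id = $1
ORDER BY purpose, recorded_at DESC;
```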
The erasure endpoint deserves more than a 202 Accepted and a ticket in your backlog. User data exists in your primary database, your analytics warehouse, your object storage event logs, your email provider's contact list, your support tool's customer record, and at least two third-party integrations you added in 2022. A proper erasure flow needs to: identify every data surface, trigger deletion or anonymisation across each, log what was processed and when, and notify the user when complete.
One carve-out matters here: legal retention requirements can override an erasure request. If you are required to retain financial transaction records for seven years, you can decline to erase that data, but you must tell the user exactly why, and you must cease using it for any purpose beyond the legal obligation. "We need to keep it" is not a sufficient answer; "we are required to retain it under [specific obligation] until [specific date]" is.
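One way to make that flow auditable is a request record plus one task per data surface, with explicit columns for the legal-retention carve-out. A sketch; the table and column names here are hypothetical, not prescribed by the Act or Rules:

```sql
CREATE TABLE erasure_requests (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id UUID NOT NULL REFERENCES users(id),
  requested_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  acknowledged_at TIMESTAMPTZ, -- the acknowledgement clock
  completed_at TIMESTAMPTZ,    -- set only when every task below is closed
  user_notified_at TIMESTAMPTZ
);

CREATE TABLE erasure_tasks (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  request_id UUID NOT NULL REFERENCES erasure_requests(id),
  data_surface TEXT NOT NULL,  -- 'primary_db', 'warehouse', 'email_provider', ...
  action TEXT NOT NULL,        -- 'deleted', 'anonymised', 'retained_legal_hold'
  legal_hold_reason TEXT,      -- the specific obligation, if retained
  retain_until DATE,           -- the specific date, if retained
  completed_at TIMESTAMPTZ
);
```

The legal_hold_reason and retain_until columns exist so that "we are required to retain it under [specific obligation] until [specific date]" is a query result, not an email thread.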
Deletion is a data model problem before it is an API problem
Most production databases are designed to accumulate. Adding a deleted_at soft-delete column handles the obvious case. It does not handle the shadow copies.
For every table in your schema that references user_id, you need to decide the category:
- Data directly about the user (profile fields, preferences, addresses): delete or anonymise on erasure.
- Transactional records tied to a legal obligation (invoices, signed agreements, financial history): retain for the required period, but cease using them for non-required purposes.
- Aggregated or derived statistics: anonymise at the source if they were ever computed from PII at the row level.
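That categorisation is worth recording as data rather than tribal knowledge. A hypothetical inventory table, one row per table that touches personal data, which an erasure job can iterate over instead of a hard-coded list:

```sql
CREATE TABLE pii_inventory (
  table_name TEXT PRIMARY KEY,
  category TEXT NOT NULL,       -- 'direct', 'transactional_legal', 'derived'
  erasure_action TEXT NOT NULL, -- 'delete', 'anonymise', 'retain'
  retention_period INTERVAL,    -- NULL if retention is tied purely to consent
  legal_basis TEXT              -- the specific obligation, for 'retain' rows
);
```

It doubles as an audit artefact: it shows which surfaces were considered when an erasure request was processed.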
Backups are the hardest part. Your nightly Postgres dump contains the personal data that was deleted from the live database. Your backup retention policy needs to be consistent with your stated retention policy, and that consistency needs to be documented, not assumed. A user who exercises the right to erasure today does not necessarily expect their data to persist in a 90-day rolling backup. Your privacy notice should address this explicitly.
Server logs are the second hardest part. Many teams write logs containing email addresses or user IDs to a log aggregation service that retains data for 12 months by default. If your logs contain personal data, they are in scope. Either redact PII at the log sink, shorten retention, or document the legal basis for retaining it.
Breach notification: the clock starts when you detect it, not when you report it
The Act requires data fiduciaries to notify the Data Protection Board of personal data breaches. The Rules call for an initial intimation to the Board without delay, with detailed information to follow within 72 hours of becoming aware of the breach (a longer period needs the Board's permission); affected data principals must also be told without delay. The safe engineering assumption is: if you detect a breach on Monday morning, someone should be drafting the Board notification the same day.
What you need in place before a breach occurs:
- A breach classification policy: what is a personal data breach (unauthorised access, disclosure, destruction) versus a security incident that does not involve personal data
- A data_breaches log table (sketched after this list): detected_at, affected_user_count, data_types_affected, board_notified_at, principals_notified_at, summary
- Notification templates: one for the Board (technical detail, scope, remediation steps) and one for affected users (plain language, what happened, what to do)
- A named decision-maker: the single person who says "this triggers notification"; do not leave this implicit
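As a sketch, the log table from the list above; the column names are suggestions, not a prescribed format:

```sql
CREATE TABLE data_breaches (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  detected_at TIMESTAMPTZ NOT NULL,
  affected_user_count INTEGER,
  data_types_affected TEXT[],   -- e.g. '{email, phone, address}'
  board_notified_at TIMESTAMPTZ,
  principals_notified_at TIMESTAMPTZ,
  summary TEXT NOT NULL
);
```

The gap between detected_at and board_notified_at is the number a regulator will ask about; being able to produce it with one query beats reconstructing it from chat history.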
The most common failure mode is internal escalation delay. A team discovers a breach, it gets deprioritised because it appears minor, and by the time someone realises it needs reporting, 96 hours have passed. Your incident runbook must put breach classification at the start, not after root cause analysis is complete.
What to build first
You have until May 2027 for most obligations, and November 2026 for the Consent Manager framework. Here is a practical order of priority:
1. Audit your PII surface. Map every table, every log, every third-party integration that receives personal data. This takes longer than you expect; most teams find three or four places they forgot about.
2. Add the consent_records table and purpose codes to all personal data collection. This is the foundation everything else depends on.
3. Build the erasure flow, even manually at first. A checklist your ops team follows is better than no process at all. Automate it once you have mapped all the data surfaces.
4. Add the data rights endpoints. Start with the read-only GET /me/data-summary. Add write endpoints once you have thought through the cascading implications.
5. Prepare for the Consent Manager integration. Rule 4 creates a government-registered intermediary layer for consent signals. Design your consent schema so that wiring it to a Consent Manager is an integration, not a rewrite; think of it as accepting an OAuth-style consent token from an external platform. A schema sketch follows this list.
6. Document your breach notification process. Write it, name the decision-maker, walk through it with the team. Do not leave this until after an incident.
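On point 5: if consent_records carries a source column from day one, accepting consent signals from a Consent Manager later is a new value for that column plus a reference to the external record, not a migration. A sketch, with hypothetical column names:

```sql
ALTER TABLE consent_records
  ADD COLUMN source TEXT NOT NULL DEFAULT 'first_party', -- 'first_party' or 'consent_manager'
  ADD COLUMN external_ref TEXT;                          -- the Consent Manager's record ID, if any
```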
The DPDP Act is not asking you to rebuild your product. It is asking you to add an accountability layer to your existing data model: who holds what, why they hold it, who authorised that, and what happens when authorisation ends. That is achievable engineering work. It starts with the schema, and the time to start is before the compliance deadline is on a Gantt chart.