Postgres Error: 23505 — Duplicate Key Violation
ERROR: duplicate key value violates unique constraint "users_email_key"
DETAIL: Key (email)=(alice@example.com) already exists.
SQLSTATE: 23505
SQLSTATE 23505 is Postgres’s way of saying “I refused to insert this row because doing so would violate a unique constraint.” It’s the most common integrity error in production Postgres and almost always falls into one of three categories: a race between SELECT and INSERT, a non-idempotent retry, or client-generated IDs colliding under load.
The fix is rarely “lock harder” — it’s “express the operation atomically.” INSERT ... ON CONFLICT lets Postgres arbitrate concurrent writers without application-level coordination. For webhooks and event-driven flows, catching 23505 and treating it as success is the simplest possible deduplication strategy and works correctly even at high concurrency.
Why this happens
- SELECT-then-INSERT race condition. Two requests check `SELECT id FROM users WHERE email = $1`, both see no row, both run INSERT. One wins; the other gets 23505. The fix is to use a single atomic `INSERT ... ON CONFLICT` and let the database arbitrate.
- Re-running a non-idempotent migration or import. Re-running a seed script, replaying an event stream, or rerunning a migration after a partial failure all attempt to insert rows that already exist. Without idempotency built in, every retry surfaces 23505.
- Client-generated IDs colliding. Generating IDs in application code (`Date.now()`, short hashes, sequence drift across replicas) eventually collides under load. Use UUID v4 for distributed generation, or let Postgres own the sequence with `BIGSERIAL`/`IDENTITY`.
- Composite unique constraint on a column subset. A `UNIQUE (tenant_id, slug)` constraint is satisfied by any one of those columns being different. If your app forgets to scope by tenant when checking, the duplicate-detection logic misses cross-tenant slug collisions and 23505 fires from the database.
- Concurrent UPSERTs on the same key. Even with ON CONFLICT, two concurrent inserters of the same key serialise — one gets the conflict and runs the DO UPDATE branch. If you wrote `ON CONFLICT DO NOTHING` but expected the row to be present afterwards, you may misread the empty RETURNING and treat success as failure.
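The last pitfall is easy to reproduce. Under `DO NOTHING`, the conflict path returns zero rows, so an empty `RETURNING` means "already exists", not "failed". A minimal sketch (the `users` table and email value are illustrative):

```sql
-- Safe to run from two sessions at once; exactly one inserts.
INSERT INTO users (email, name)
VALUES ('alice@example.com', 'Alice')
ON CONFLICT (email) DO NOTHING
RETURNING id;
-- First writer: one row with the new id.
-- Second writer: zero rows. The row exists, which is success here.
```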
How to fix it
Fixes are ordered by likelihood. Start with the first one that matches your context.
1. Use ON CONFLICT for idempotent upserts
`INSERT ... ON CONFLICT (col) DO UPDATE SET ...` is the canonical Postgres upsert. It atomically inserts or updates without a SELECT round-trip and is race-safe.
```sql
INSERT INTO users (email, name, last_login)
VALUES ($1, $2, now())
ON CONFLICT (email)
DO UPDATE SET
  name = EXCLUDED.name,
  last_login = now()
RETURNING id, email;
```
2. For "create only if new" use ON CONFLICT DO NOTHING + RETURNING
To create a row only when missing, use DO NOTHING. RETURNING will be empty on conflict — query separately for the existing row's id.
```javascript
import { Pool } from 'pg';

const pool = new Pool();

async function getOrCreateUser(email, name) {
  const { rows: inserted } = await pool.query(
    `INSERT INTO users (email, name) VALUES ($1, $2)
     ON CONFLICT (email) DO NOTHING
     RETURNING id`,
    [email, name],
  );
  if (inserted.length) return inserted[0].id;
  // Conflict path: the row already exists, so fetch its id.
  const { rows: existing } = await pool.query(
    'SELECT id FROM users WHERE email = $1',
    [email],
  );
  return existing[0].id;
}
```
3. Catch 23505 in application code as a domain signal
For workflows where duplicates are expected (idempotent webhooks, retried API calls), treat SQLSTATE 23505 as a successful outcome rather than an error. The pg driver exposes it on the error code field.
```javascript
try {
  await pool.query(
    'INSERT INTO webhook_events (event_id, payload) VALUES ($1, $2)',
    [event.id, event.payload],
  );
} catch (err) {
  if (err.code === '23505') {
    // Duplicate event delivery — already processed.
    return;
  }
  throw err;
}
```
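A small helper keeps this check from being copy-pasted across handlers. A sketch (the function name is ours, but `err.code` and `err.constraint` are real fields on pg's `DatabaseError`):

```javascript
// Returns true when an error is a Postgres unique-constraint violation
// (SQLSTATE 23505), optionally narrowed to one named constraint.
function isUniqueViolation(err, constraintName) {
  if (!err || err.code !== '23505') return false;
  // pg's DatabaseError carries the violated constraint's name.
  return constraintName === undefined || err.constraint === constraintName;
}

// Usage inside a catch block:
//   if (isUniqueViolation(err, 'webhook_events_pkey')) return; // duplicate delivery
//   throw err;
```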
4. Switch ID generation to UUID v4 or DB-side sequences
Client-side timestamps and short hashes will eventually collide under load. UUID v4 (16 bytes random) has astronomically low collision probability. Postgres `BIGSERIAL` or `GENERATED BY DEFAULT AS IDENTITY` keep generation server-side and monotonic.
5. Add a partial unique index for soft-delete tables
Tables with soft-delete (`deleted_at IS NOT NULL`) shouldn't enforce uniqueness on tombstoned rows. Use a partial unique index: `CREATE UNIQUE INDEX users_email_active_idx ON users (email) WHERE deleted_at IS NULL;`
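If the column currently carries a full unique constraint (like the `users_email_key` from the error above), the migration looks roughly like this, assuming a `deleted_at` column; note that `CREATE INDEX CONCURRENTLY` cannot run inside a transaction block:

```sql
-- Drop the full-table constraint, then enforce uniqueness only for live rows.
ALTER TABLE users DROP CONSTRAINT users_email_key;
CREATE UNIQUE INDEX CONCURRENTLY users_email_active_idx
  ON users (email)
  WHERE deleted_at IS NULL;
```

One caveat: `ON CONFLICT` infers a partial unique index only when the statement repeats its predicate, i.e. `ON CONFLICT (email) WHERE deleted_at IS NULL`.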
Detection and monitoring in production
Track 23505 separately from generic SQL errors — most are expected (idempotent retries) and should not page. Alarm only when 23505 fires on a constraint that's *meant* to be exclusive (e.g., `payments_idempotency_key_key`) and the upstream caller is supposed to be passing a unique key. Tag by constraint name in your error tracker.
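One way to implement that split, sketched with hypothetical constraint names (the pg driver exposes the violated constraint as `err.constraint`):

```javascript
// Constraints where a duplicate means the upstream caller is misbehaving.
const ALERTABLE_CONSTRAINTS = new Set([
  'payments_idempotency_key_key', // adjust to your schema
]);

// Decide how a 23505 should be reported: 'alert' pages, 'expected' is a metric only.
function classifyUniqueViolation(err) {
  if (err.code !== '23505') return null; // not a unique violation
  return ALERTABLE_CONSTRAINTS.has(err.constraint) ? 'alert' : 'expected';
}
```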
Related errors
- `ECONNREFUSED`. Your application tried to open a TCP connection to Postgres and the OS rejected it — Postgres isn't listening on the host:port you specified, or a firewall blocked the connection.
- `53300` (too_many_connections). Postgres rejected your connection because the server has hit `max_connections`. Each connection consumes ~10MB of RAM plus a backend process; the limit exists to protect the host from memory exhaustion.
- `40P01` (deadlock_detected). Two or more transactions are waiting on locks held by each other in a cycle. Postgres's deadlock detector kills one of them with SQLSTATE 40P01 to break the cycle, rolling back its work.
Frequently asked questions
- What's the difference between 23505 and 23503?
- Does ON CONFLICT acquire a lock on the entire table?
- Can 23505 fire from an UPDATE, not just INSERT?
- How do I find which constraint was violated when there are several?
- Should I use ON CONFLICT DO UPDATE or DO NOTHING?
- Why does ON CONFLICT need an index, not just a column?
- Can 23505 happen on a deferred constraint?
- Is 23505 a sign of a bug or a normal race?
When to escalate to Postgres support
No vendor escalation needed for 23505 — it's a deterministic constraint violation, not a Postgres bug. If 23505 fires on inserts where a collision shouldn't be possible (e.g., fresh UUIDs), check for replica lag (you may be inserting against a stale replica), or check whether a sequence was left behind by a data import: loading rows with explicit IDs (a data-only restore, a manual `COPY`) doesn't advance the backing sequence, so the next server-generated ID collides with an imported row.
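The standard resync for a lagging sequence (table and column names illustrative):

```sql
-- Advance the sequence backing users.id to the table's current maximum,
-- so the next nextval() returns max(id) + 1.
SELECT setval(
  pg_get_serial_sequence('users', 'id'),
  COALESCE((SELECT max(id) FROM users), 1)
);
```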