🚨 Why Dynamic PostgreSQL Connection Pools Work Locally but Fail in Production


If you’ve ever built a Node.js API using PostgreSQL and pg.Pool, you might have faced this confusing situation:

“My code works perfectly on my local machine, but breaks on the server.”

This issue commonly appears when using dynamic PostgreSQL databases together with connection pooling. It often stays hidden during development and surfaces only after deployment.

This article explains why this happens, how to identify the root cause, and what a production‑safe architecture looks like.


🧩 The Problematic Pattern

A common implementation looks like this:

const pools = {}; // cache pools per theme

function getThemePool(theme) {
  if (!pools[theme]) {
    pools[theme] = new pg.Pool({
      user: process.env.DB_USER,
      host: process.env.DB_HOST,
      database: theme, // dynamic database
      password: process.env.DB_PASSWORD,
      port: process.env.DB_PORT,
      max: 10
    });
  }
  return pools[theme];
}

Here, each request selects a theme, and a separate PostgreSQL database (not a table or schema) is opened dynamically, with one cached pool per database.

This works locally — but fails under production load.
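
For context, here is how such a helper is typically consumed. The Express route and table name below are illustrative, not part of the original code:

// Hypothetical Express route: every distinct :theme value
// silently creates and caches a brand-new pool.
app.get('/api/:theme/features', async (req, res) => {
  const pool = getThemePool(req.params.theme);
  const { rows } = await pool.query('SELECT name FROM features LIMIT 100');
  res.json(rows);
});

Nothing in this flow ever closes a pool, which is exactly where the trouble begins.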


✅ Why It Works on Local Machines

Local environments are forgiving:

  • Low traffic

  • No PM2 cluster mode

  • Fewer databases

  • Connection limits that are never approached

As a result, connection exhaustion never becomes visible during development.


❌ Why It Fails in Production

1️⃣ Each Pool Opens Multiple Connections

The line below is the main culprit:

max: 10

This means:

  • One pool = up to 10 PostgreSQL connections

  • Each new database creates a new pool

Example:

6 databases × 10 connections = 60 connections

Now add PM2 cluster mode:

2 workers × 60 = 120 connections

Many production PostgreSQL servers keep the default limit:

max_connections = 100

Result: connection exhaustion and random failures.


2️⃣ Pools Are Never Destroyed

Once a pool is created, it:

  • stays alive

  • holds idle connections

  • is never closed unless .end() is called

Over time, connections accumulate silently.
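
If dynamic pools must exist at all, they at least need an explicit lifecycle. Below is a minimal eviction sketch; the sweep interval, timeout, and lastUsed bookkeeping are assumptions for illustration, not part of the original code:

const lastUsed = {}; // update whenever getThemePool(theme) is called

setInterval(async () => {
  const cutoff = Date.now() - 10 * 60 * 1000; // evict after 10 idle minutes
  for (const theme of Object.keys(pools)) {
    if ((lastUsed[theme] || 0) < cutoff) {
      await pools[theme].end(); // closes every connection held by this pool
      delete pools[theme];
      delete lastUsed[theme];
    }
  }
}, 60 * 1000); // sweep once a minute

Even with eviction in place, this only slows the leak; the pool-per-database design itself remains fragile.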


3️⃣ PM2 Multiplies the Issue

When running:

pm2 start app.js -i max

Each worker process:

  • has its own memory

  • creates its own pool cache

  • opens its own database connections

This dramatically increases the total number of open connections.
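
The multiplication is easy to see with a typical PM2 ecosystem file (the values here are illustrative):

// ecosystem.config.js
module.exports = {
  apps: [{
    name: 'api',
    script: 'app.js',
    instances: 'max',    // one worker per CPU core
    exec_mode: 'cluster'
  }]
};

// Rough connection budget: workers × pools × max
// e.g. 4 cores × 6 databases × 10 = 240 potential connections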


🔥 The Smoking Gun

Even when every dynamic database exists and queries succeed, this configuration is unsafe in production:

max: 10

Dynamic databases + pooled connections = connection explosion.


✅ Immediate Fix (Short‑Term)

If dynamic databases cannot be avoided immediately:

pools[theme] = new pg.Pool({
  user: process.env.DB_USER,
  host: process.env.DB_HOST,
  database: theme,
  password: process.env.DB_PASSWORD,
  port: process.env.DB_PORT,
  max: 1 // critical: one connection per dynamic database
});
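
As an extra stop-gap, node-postgres pools can also be told to shed idle connections quickly. The timeout values below are suggestions, not requirements:

pools[theme] = new pg.Pool({
  user: process.env.DB_USER,
  host: process.env.DB_HOST,
  database: theme,
  password: process.env.DB_PASSWORD,
  port: process.env.DB_PORT,
  max: 1,
  idleTimeoutMillis: 10000,      // close clients idle for more than 10 s
  connectionTimeoutMillis: 5000  // fail fast instead of queueing forever
});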

Also ensure PM2 runs in single‑instance mode:

pm2 start app.js -i 1

This limits damage but does not fix the architectural problem.


✅ The Correct Production Architecture

Instead of multiple databases:

education
health
forest

Use one database with multiple schemas:

gis_catalog
├── education.schools
├── education.colleges
├── health.hospitals
└── forest.reserves

Then use one global pool:

const pool = new pg.Pool({
  user: process.env.DB_USER,
  host: process.env.DB_HOST,
  database: 'gis_catalog',
  password: process.env.DB_PASSWORD,
  port: process.env.DB_PORT,
  max: 10
});

This approach:

  • scales safely

  • avoids connection exhaustion

  • works well with PostGIS and GeoServer

  • follows PostgreSQL best practices
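
With a single pool, the theme becomes a schema name instead of a connection target. One way to route queries safely is sketched below; the schema whitelist and helper name are illustrative:

const SCHEMAS = new Set(['education', 'health', 'forest']); // known schemas only

async function queryTheme(theme, sql, params) {
  if (!SCHEMAS.has(theme)) {
    throw new Error(`Unknown theme: ${theme}`);
  }
  const client = await pool.connect();
  try {
    // Schema names cannot be bound as $1 parameters, so the name is
    // validated against the whitelist above before interpolation.
    await client.query(`SET search_path TO "${theme}", public`);
    return await client.query(sql, params);
  } finally {
    // Reset so the setting does not leak to the next pooled request.
    await client.query('RESET search_path');
    client.release();
  }
}

// Usage: await queryTheme('education', 'SELECT name FROM schools LIMIT 10');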


🧠 Key Takeaway

Dynamic schemas are safe.
Dynamic databases are not suitable for web APIs.

PostgreSQL is optimized for:

  • one database

  • many schemas

  • a small number of well‑managed pools


🛠 Debug Checklist

If your app behaves differently in production, check:

SHOW max_connections;
SELECT COUNT(*) FROM pg_stat_activity;
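
To see which databases are holding the connections, group pg_stat_activity by database:

SELECT datname, COUNT(*) AS connections
FROM pg_stat_activity
GROUP BY datname
ORDER BY connections DESC;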

And in Node.js:

  • pool max value

  • number of dynamic databases

  • PM2 cluster mode


✨ Final Thoughts

This is a classic production‑only bug that rarely appears in tutorials.

If your Node.js + PostgreSQL application works locally but fails on the server, always inspect your connection pooling strategy first.

Sometimes, one innocent line (max: 10) is all it takes to bring a server down.
