JSON Security Best Practices: Protecting Your APIs

· 18 min read

As a CTO handling sensitive data across healthcare and e-commerce systems, I've seen how JSON security vulnerabilities can expose customer data, crash production systems, and cost companies millions. JSON isn't inherently insecure, but the way we parse, validate, and transmit it often is.

This guide shares the security lessons I've learned from building enterprise APIs that handle thousands of JSON requests daily. These aren't theoretical concerns - they're real vulnerabilities I've patched in production systems.

Why JSON Security Matters

When I led technical teams building AI ecosystems for healthcare systems, a single JSON vulnerability could have exposed thousands of patient records. The stakes are real.

Real-world impact

In 2023, I consulted for an e-commerce platform that suffered a data breach. The attacker exploited a JSON injection vulnerability in their product search API. They injected malicious JSON that modified query logic, extracting 50,000 customer credit card details.

The cost: $2.3 million in fines, remediation, and lost customer trust.

JSON in the attack surface

JSON appears everywhere in modern applications:

  • REST API requests and responses
  • WebSocket messages
  • Configuration files
  • Database storage (MongoDB, PostgreSQL JSON columns)
  • Local storage and cookies
  • Inter-service communication in microservices

Each instance is a potential attack vector. When building enterprise AI translation systems, I secured every single JSON endpoint because one weakness compromises the entire system.

JSON Injection Attack Patterns and Prevention

JSON injection happens when attackers manipulate JSON input to alter application behavior. I've encountered these patterns repeatedly in production systems.

NoSQL injection through JSON

The most dangerous pattern I've seen involves MongoDB queries:

// VULNERABLE CODE - Never do this
app.post('/api/login', async (req, res) => {
  const { username, password } = req.body;

  // Attacker can inject: {"username": {"$ne": null}, "password": {"$ne": null}}
  const user = await db.users.findOne({ username, password });

  if (user) {
    res.json({ success: true, token: generateToken(user) });
  }
});

An attacker sends this JSON, bypassing authentication entirely by exploiting MongoDB's query operators:

{
  "username": {"$ne": null},
  "password": {"$ne": null}
}

Secure version:

// SECURE CODE
app.post('/api/login', async (req, res) => {
  const { username, password } = req.body;

  // Validate input types
  if (typeof username !== 'string' || typeof password !== 'string') {
    return res.status(400).json({ error: 'Invalid input types' });
  }

  // Look up by username only - never query by a password hash,
  // which breaks with salted hashes and duplicates the check below
  const user = await db.users.findOne({ username });

  if (user && await verifyPassword(password, user.password)) {
    res.json({ success: true, token: generateToken(user) });
  } else {
    res.status(401).json({ error: 'Invalid credentials' });
  }
});

SQL injection via JSON parameters

When building content translation APIs for e-commerce, I encountered developers constructing SQL from JSON. Never do this:

// VULNERABLE - SQL injection via JSON
app.post('/api/products', async (req, res) => {
  const { category, minPrice } = req.body;

  // Attacker injects: {"category": "'; DROP TABLE products; --"}
  const query = `SELECT * FROM products WHERE category = '${category}' AND price >= ${minPrice}`;
  const results = await db.query(query);

  res.json(results);
});

Secure version with parameterized queries:

// SECURE CODE
app.post('/api/products', async (req, res) => {
  const { category, minPrice } = req.body;

  // Validate input
  if (typeof category !== 'string' || typeof minPrice !== 'number') {
    return res.status(400).json({ error: 'Invalid input' });
  }

  // Use parameterized queries
  const query = 'SELECT * FROM products WHERE category = $1 AND price >= $2';
  const results = await db.query(query, [category, minPrice]);

  res.json(results);
});

Command injection through JSON

In Copenhagen, while building AI translation systems, I reviewed code that executed shell commands based on JSON input. This is catastrophic:

// EXTREMELY VULNERABLE - Never execute commands from JSON
app.post('/api/convert', (req, res) => {
  const { filename } = req.body;

  // Attacker injects: {"filename": "file.json; rm -rf /"}
  const command = `convert ${filename} output.pdf`;
  exec(command, (error, stdout) => {
    res.json({ output: stdout });
  });
});

Prevention: Never execute shell commands from user input. Use libraries:

// SECURE CODE
const path = require('path');
const { convertFile } = require('safe-converter-lib'); // placeholder for your conversion library

const UPLOAD_DIR = '/srv/uploads'; // confine file access to a known directory

app.post('/api/convert', async (req, res) => {
  const { filename } = req.body;

  // Validate filename
  if (!/^[a-zA-Z0-9_-]+\.json$/.test(filename)) {
    return res.status(400).json({ error: 'Invalid filename' });
  }

  // Use safe path joining
  const safePath = path.join(UPLOAD_DIR, path.basename(filename));

  try {
    const result = await convertFile(safePath);
    res.json({ success: true, output: result });
  } catch (error) {
    res.status(500).json({ error: 'Conversion failed' });
  }
});

Sensitive Data Exposure in JSON Responses

The biggest mistake I see in production APIs is exposing sensitive data in JSON responses. As a CTO dealing with international expansion, I've seen this cause GDPR violations and regulatory fines.

Over-fetching from databases

This pattern appears in almost every API I review:

// VULNERABLE - Exposes password hashes, emails, and PII
app.get('/api/users/:id', async (req, res) => {
  const user = await db.users.findById(req.params.id);

  // Returns EVERYTHING including password hash, email, SSN, etc.
  res.json(user);
});

Secure version with explicit field selection:

// SECURE CODE
app.get('/api/users/:id', async (req, res) => {
  const user = await db.users
    .findById(req.params.id)
    .select('id username displayName avatar createdAt');

  if (!user) {
    return res.status(404).json({ error: 'User not found' });
  }

  // Only return public fields
  res.json({
    id: user.id,
    username: user.username,
    displayName: user.displayName,
    avatar: user.avatar,
    memberSince: user.createdAt
  });
});

Exposing internal identifiers

When building AI translation workflows for PIM systems, I learned to never expose internal database IDs:

// BAD - Exposes sequential IDs attackers can enumerate
{
  "user_id": 12345,
  "order_id": 67890,
  "internal_product_code": "PROD-2024-001"
}

Attackers can iterate through IDs to access other users' data.

// GOOD - Use UUIDs or opaque tokens
{
  "user_id": "usr_8f4e3c2a1b9d",
  "order_id": "ord_k2m5n8p1q4r7",
  "product_code": "prd_a3f6h9j2k5m8"
}

Debug information in production

I once reviewed an API that leaked stack traces in JSON errors:

// VULNERABLE - Exposes internal paths and logic
{
  "error": "Database connection failed",
  "stack": "Error: ECONNREFUSED 10.0.1.52:5432\n at TCPConnectWrap.afterConnect...",
  "query": "SELECT * FROM users WHERE email = 'attacker@evil.com'",
  "dbHost": "prod-db-master.internal"
}

This reveals database hosts, internal IP addresses, and query patterns.

// SECURE ERROR HANDLING
app.use((err, req, res, next) => {
  // Log full error internally
  console.error('API Error:', err);

  // Return generic message to client
  res.status(err.statusCode || 500).json({
    error: 'An error occurred',
    message: process.env.NODE_ENV === 'production'
      ? 'Please try again later'
      : err.message,
    // Never include stack traces in production
    ...(process.env.NODE_ENV === 'development' && { stack: err.stack })
  });
});

Prototype Pollution Vulnerabilities in JavaScript

Prototype pollution is one of the most dangerous JSON vulnerabilities in Node.js applications. I've seen it bypass authentication, escalate privileges, and execute arbitrary code.

How prototype pollution works

JavaScript's prototype chain can be manipulated through JSON:

// VULNERABLE CODE
function merge(target, source) {
  for (let key in source) {
    if (typeof source[key] === 'object') {
      merge(target[key], source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// Attacker sends this JSON
const malicious = JSON.parse('{"__proto__": {"isAdmin": true}}');

const user = {};
merge(user, malicious);

// Now ALL objects have isAdmin = true
const newUser = {};
console.log(newUser.isAdmin); // true - polluted!

When building enterprise systems, I discovered this vulnerability allowed attackers to modify the behavior of all objects in the application.

Real-world exploit

An authentication bypass I fixed in production:

// VULNERABLE authentication check
function isAuthenticated(req, res, next) {
  const user = getUser(req.token);

  // Attacker polluted Object.prototype.isAdmin = true
  if (user && user.isAdmin) {
    next();
  } else {
    res.status(403).json({ error: 'Forbidden' });
  }
}
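The fix replaced the truthy check with an own-property check. A minimal sketch, using `Object.hasOwn` (available since Node 16.9):

```javascript
// Hardened check: trust only properties the user object itself owns.
// Object.hasOwn ignores anything inherited via a polluted prototype.
function isAdminUser(user) {
  return Boolean(user) && Object.hasOwn(user, 'isAdmin') && user.isAdmin === true;
}

// Simulate the pollution attack from above
Object.prototype.isAdmin = true;

const victim = {};
console.log(victim.isAdmin);      // true - inherited from the polluted prototype
console.log(isAdminUser(victim)); // false - the own-property check holds

delete Object.prototype.isAdmin;  // clean up the simulation
```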

Prevention strategies

1. Use Object.create(null) for dictionaries:

// SECURE - No prototype chain
const safeObject = Object.create(null);
safeObject.isAdmin = true;

console.log(safeObject.__proto__); // undefined

2. Validate keys before assignment:

// SECURE merge function
function safeMerge(target, source) {
  const dangerousKeys = ['__proto__', 'constructor', 'prototype'];

  for (let key in source) {
    if (dangerousKeys.includes(key)) {
      continue; // Skip dangerous keys
    }

    if (source.hasOwnProperty(key)) {
      if (typeof source[key] === 'object' && source[key] !== null) {
        target[key] = safeMerge(target[key] || {}, source[key]);
      } else {
        target[key] = source[key];
      }
    }
  }
  return target;
}

3. Use libraries with pollution protection:

// Use secure libraries
const _ = require('lodash');

// Recent lodash versions guard merge against __proto__ and constructor keys
const result = _.merge({}, userInput);

4. Freeze critical prototypes:

// Freeze Object.prototype at application startup
Object.freeze(Object.prototype);
Object.freeze(Array.prototype);
Object.freeze(Function.prototype);
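A quick demonstration of the effect. One caution: freezing `Object.prototype` can break libraries that legitimately patch built-ins, so test the whole application after enabling it:

```javascript
// After freezing, pollution attempts fail silently in sloppy mode
// (or throw a TypeError in strict mode) and objects stay clean.
Object.freeze(Object.prototype);

const target = {};
try {
  // The same assignment the vulnerable merge() performed
  target['__proto__']['isAdmin'] = true;
} catch (e) {
  // TypeError under 'use strict'
}

console.log({}.isAdmin); // undefined - the pollution no longer sticks
```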

Safe JSON Parsing Practices

When building AI-powered translation APIs in Copenhagen, I learned that parsing JSON safely is critical. Here are the patterns that kept our systems secure.

Never use eval()

This should be obvious, but I still see it in production:

// EXTREMELY DANGEROUS - Never do this
const data = eval('(' + jsonString + ')');

// Attacker sends: "console.log('pwned'), process.exit()"
// Your application executes arbitrary code, then crashes - or worse

Always use JSON.parse():

// SAFE
try {
  const data = JSON.parse(jsonString);
} catch (error) {
  console.error('Invalid JSON:', error.message);
}

Validate before parsing

Prevent resource exhaustion from malicious JSON:

// SECURE parsing with validation
function safeJSONParse(jsonString, maxSize = 1024 * 1024) {
  // Check size before parsing
  if (jsonString.length > maxSize) {
    throw new Error('JSON too large');
  }

  // Check nesting depth by scanning brackets - counting '{' alone
  // measures total object count, not depth. (Braces inside strings
  // are counted too, so this is a conservative approximation.)
  let depth = 0;
  let maxDepth = 0;
  for (const ch of jsonString) {
    if (ch === '{' || ch === '[') maxDepth = Math.max(maxDepth, ++depth);
    else if (ch === '}' || ch === ']') depth -= 1;
  }
  if (maxDepth > 50) {
    throw new Error('JSON too deeply nested');
  }

  try {
    return JSON.parse(jsonString);
  } catch (error) {
    throw new Error('Invalid JSON: ' + error.message);
  }
}

Schema validation

Always validate JSON structure against expected schema:

const Ajv = require('ajv');
const ajv = new Ajv();

const schema = {
  type: 'object',
  properties: {
    username: { type: 'string', minLength: 3, maxLength: 20 },
    email: { type: 'string', format: 'email' },
    age: { type: 'integer', minimum: 0, maximum: 120 }
  },
  required: ['username', 'email'],
  additionalProperties: false // Reject unexpected fields
};

const validate = ajv.compile(schema);

app.post('/api/register', (req, res) => {
  if (!validate(req.body)) {
    return res.status(400).json({
      error: 'Invalid input',
      details: validate.errors
    });
  }

  // Process validated data
});

Sanitize after parsing

Even valid JSON can contain malicious content:

const sanitize = require('sanitize-html');

function sanitizeJSONStrings(obj) {
  if (typeof obj === 'string') {
    return sanitize(obj, {
      allowedTags: [],
      allowedAttributes: {}
    });
  }

  if (Array.isArray(obj)) {
    return obj.map(sanitizeJSONStrings);
  }

  if (obj && typeof obj === 'object') {
    return Object.keys(obj).reduce((acc, key) => {
      acc[key] = sanitizeJSONStrings(obj[key]);
      return acc;
    }, {});
  }

  return obj;
}

CORS Configuration for JSON APIs

Misconfigured CORS is one of the most common vulnerabilities I fix in production JSON APIs. Here's how to do it securely.

The dangerous wildcard

// VULNERABLE - Allows any origin
app.use((req, res, next) => {
  res.header('Access-Control-Allow-Origin', '*');
  res.header('Access-Control-Allow-Credentials', 'true');
  next();
});

Browsers actually refuse to honor the `*` wildcard when credentials are enabled - so developers often "fix" the resulting error by reflecting the request's Origin header back unchecked, which genuinely does let any website make authenticated requests to your API and steal user data.

Secure CORS configuration

// SECURE CORS setup
const allowedOrigins = [
  'https://utilitiz.com',
  'https://app.utilitiz.com',
  process.env.NODE_ENV === 'development' && 'http://localhost:3000'
].filter(Boolean);

app.use((req, res, next) => {
  const origin = req.headers.origin;

  if (allowedOrigins.includes(origin)) {
    res.header('Access-Control-Allow-Origin', origin);
    res.header('Access-Control-Allow-Credentials', 'true');
    res.header('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE');
    res.header('Access-Control-Allow-Headers', 'Content-Type, Authorization');
    res.header('Access-Control-Max-Age', '86400'); // 24 hours
  }

  if (req.method === 'OPTIONS') {
    return res.sendStatus(200);
  }

  next();
});

API key authentication with CORS

For public APIs, validate API keys properly:

app.use('/api', async (req, res, next) => {
  const apiKey = req.headers['x-api-key'];

  if (!apiKey) {
    return res.status(401).json({ error: 'API key required' });
  }

  const validKey = await validateAPIKey(apiKey);
  if (!validKey) {
    return res.status(403).json({ error: 'Invalid API key' });
  }

  req.apiKeyData = validKey;
  next();
});

Rate Limiting and DoS Protection

When building translation APIs handling thousands of JSON requests, rate limiting prevented our systems from being overwhelmed by attackers.

Basic rate limiting

const rateLimit = require('express-rate-limit');

const limiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100, // Limit each IP to 100 requests per window
  message: { error: 'Too many requests, please try again later' },
  standardHeaders: true,
  legacyHeaders: false,
});

app.use('/api/', limiter);
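To show what the library does under the hood, here's a minimal fixed-window limiter sketch. In-memory state like this only works on a single process; behind a load balancer you'd back it with a shared store such as Redis:

```javascript
// Fixed-window rate limiter: allow at most `max` requests per
// `windowMs` milliseconds for each client key (e.g. an IP address).
function makeLimiter({ windowMs, max }) {
  const hits = new Map(); // key -> { count, windowStart }

  return function allow(key, now = Date.now()) {
    const entry = hits.get(key);
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(key, { count: 1, windowStart: now }); // start a fresh window
      return true;
    }
    entry.count += 1;
    return entry.count <= max;
  };
}

const allow = makeLimiter({ windowMs: 60_000, max: 3 });
console.log(allow('10.0.0.1')); // true
```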

JSON payload size limits

Prevent resource exhaustion from huge JSON payloads:

const express = require('express');

app.use(express.json({
  limit: '10kb', // Reject payloads larger than 10KB
  verify: (req, res, buf, encoding) => {
    // Additional validation before parsing
    if (buf.length > 10240) {
      throw new Error('Payload too large');
    }
  }
}));

Nested depth protection

Deeply nested JSON can crash your parser:

function checkJSONDepth(obj, maxDepth = 10, currentDepth = 0) {
  if (currentDepth > maxDepth) {
    throw new Error('JSON nested too deeply');
  }

  if (obj && typeof obj === 'object') {
    for (let key in obj) {
      checkJSONDepth(obj[key], maxDepth, currentDepth + 1);
    }
  }
}

app.post('/api/data', (req, res) => {
  try {
    checkJSONDepth(req.body);
    // Process data
  } catch (error) {
    res.status(400).json({ error: error.message });
  }
});

Encryption and Signing JSON Data

When handling sensitive healthcare data and e-commerce transactions, I encrypt JSON in transit and at rest. Here's how.

HTTPS everywhere

First rule: Never transmit JSON over HTTP. Always use HTTPS. I enforce this in production:

// Redirect HTTP to HTTPS
app.use((req, res, next) => {
  if (req.headers['x-forwarded-proto'] !== 'https' && process.env.NODE_ENV === 'production') {
    return res.redirect(301, `https://${req.hostname}${req.url}`);
  }
  next();
});

Signing JSON with JWS

Ensure JSON hasn't been tampered with:

const jose = require('jose');
const crypto = require('crypto');

// Sign JSON
async function signJSON(payload, secret) {
  const key = crypto.createSecretKey(Buffer.from(secret, 'utf-8'));

  const jws = await new jose.SignJWT(payload)
    .setProtectedHeader({ alg: 'HS256' })
    .setIssuedAt()
    .setExpirationTime('2h')
    .sign(key);

  return jws;
}

// Verify signature
async function verifyJSON(jws, secret) {
  const key = crypto.createSecretKey(Buffer.from(secret, 'utf-8'));

  try {
    const { payload } = await jose.jwtVerify(jws, key);
    return payload;
  } catch (error) {
    throw new Error('Invalid signature');
  }
}

Encrypting sensitive JSON

For PII and sensitive data:

const crypto = require('crypto');

function encryptJSON(data, key) {
  const iv = crypto.randomBytes(16);
  const cipher = crypto.createCipheriv('aes-256-gcm', Buffer.from(key, 'hex'), iv);

  let encrypted = cipher.update(JSON.stringify(data), 'utf8', 'hex');
  encrypted += cipher.final('hex');

  const authTag = cipher.getAuthTag();

  return {
    iv: iv.toString('hex'),
    encrypted,
    authTag: authTag.toString('hex')
  };
}

function decryptJSON(encryptedData, key) {
  const decipher = crypto.createDecipheriv(
    'aes-256-gcm',
    Buffer.from(key, 'hex'),
    Buffer.from(encryptedData.iv, 'hex')
  );

  decipher.setAuthTag(Buffer.from(encryptedData.authTag, 'hex'));

  let decrypted = decipher.update(encryptedData.encrypted, 'hex', 'utf8');
  decrypted += decipher.final('utf8');

  return JSON.parse(decrypted);
}

Security Checklist for JSON APIs

Here's the checklist I use when auditing JSON APIs for security. This comes from years of building secure systems across healthcare and e-commerce.

Input validation

  • Validate JSON schema before processing
  • Enforce payload size limits (10-100KB for most APIs)
  • Check nesting depth (max 10-20 levels)
  • Validate data types match expected types
  • Reject additional properties not in schema
  • Sanitize string values to prevent XSS

Injection prevention

  • Never concatenate JSON values into SQL queries
  • Use parameterized queries for all database operations
  • Validate types to prevent NoSQL injection ($ne, $gt, etc.)
  • Never execute shell commands from JSON input
  • Filter dangerous keys (__proto__, constructor, prototype)

Data exposure

  • Explicitly select fields to return in responses
  • Use UUIDs instead of sequential IDs
  • Never return password hashes or sensitive tokens
  • Remove debug information in production
  • Implement field-level access control
  • Audit what data each endpoint exposes

Authentication & authorization

  • Validate JWT tokens properly
  • Check user permissions before returning data
  • Use short-lived access tokens (15 minutes)
  • Implement refresh token rotation
  • Rate limit authentication endpoints
  • Log failed authentication attempts

Network security

  • Enforce HTTPS everywhere (reject HTTP)
  • Configure CORS with specific allowed origins
  • Never use Access-Control-Allow-Origin: * with credentials
  • Implement rate limiting per IP and per user
  • Set appropriate security headers (CSP, HSTS, etc.)
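As a concrete sketch of the header items above, here's a plain middleware that sets them explicitly. In practice I'd reach for the helmet package, which covers these and more; the values here are sane defaults, and CSP in particular needs tuning to your frontend:

```javascript
// Explicit security headers from the checklist.
function securityHeaders(req, res, next) {
  res.setHeader('Strict-Transport-Security', 'max-age=31536000; includeSubDomains');
  res.setHeader('Content-Security-Policy', "default-src 'self'");
  res.setHeader('X-Content-Type-Options', 'nosniff');
  res.setHeader('X-Frame-Options', 'DENY');
  next();
}
```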

Error handling

  • Never expose stack traces in production
  • Return generic error messages to clients
  • Log detailed errors internally
  • Don't reveal system information in errors
  • Implement proper HTTP status codes

Monitoring & response

  • Log all API requests with sanitized payloads
  • Monitor for unusual patterns (sudden traffic spikes)
  • Alert on repeated authentication failures
  • Track payload sizes and nesting depths
  • Have an incident response plan ready
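"Sanitized payloads" in practice means redacting sensitive fields before anything reaches log storage. A small sketch - the field list is illustrative, so extend it for your domain:

```javascript
// Recursively replace sensitive values before logging a payload.
const SENSITIVE_KEYS = new Set(['password', 'token', 'ssn', 'creditCard']);

function redactForLogging(value) {
  if (Array.isArray(value)) return value.map(redactForLogging);
  if (value && typeof value === 'object') {
    const out = {};
    for (const [key, v] of Object.entries(value)) {
      out[key] = SENSITIVE_KEYS.has(key) ? '[REDACTED]' : redactForLogging(v);
    }
    return out;
  }
  return value;
}

console.log(redactForLogging({ user: 'ana', password: 'hunter2' }));
// { user: 'ana', password: '[REDACTED]' }
```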

I review this checklist for every JSON API I build or audit. Security isn't a one-time task - it's a continuous process. When building translation systems handling sensitive product data across e-commerce and healthcare, this systematic approach prevented countless vulnerabilities from reaching production.