Authentication, authorization, input validation, rate limiting, CORS, secrets management, and the OWASP API Top 10. What I check before every production deployment.
I've shipped APIs that were wide open. Not maliciously, not lazily — I just didn't know what I didn't know. An endpoint that returned every field in the user object, including hashed passwords. A rate limiter that only checked IP addresses, which meant anyone behind a proxy could hammer the API. A JWT implementation where I forgot to verify the iss claim, so tokens from a completely different service worked just fine.
Every one of those mistakes made it to production. Every one of them got caught — some by me, some by users, one by a security researcher who was kind enough to email me instead of posting it on Twitter.
This post is the checklist I built from those mistakes. I run through it before every production deployment. Not because I'm paranoid, but because I've learned that security bugs are the ones that hurt the most. A broken button annoys users. A broken auth flow leaks their data.
Authentication and authorization get used interchangeably in meetings, in docs, even in code comments. They are not the same thing.
Authentication answers: "Who are you?" It's the login step. Username and password, OAuth flow, magic link — whatever proves your identity.
Authorization answers: "What are you allowed to do?" It's the permission step. Can this user delete this resource? Can they access this admin endpoint? Can they read another user's data?
The most common security bug I've seen in production APIs is not a broken login flow. It's a missing authorization check. The user is authenticated — they have a valid token — but the API never checks whether they're allowed to perform the action they're requesting.
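To make the distinction concrete, here are the two checks as pure logic. This is a minimal sketch; the Principal type and function names are illustrative, not from any framework:

```typescript
interface Principal {
  id: string;
  role: "user" | "admin";
}

// Authentication: do we know who the caller is?
// (Token verification itself is elided here.)
function isAuthenticated(principal: Principal | null): principal is Principal {
  return principal !== null;
}

// Authorization: may THIS caller access THIS user's data?
function canAccessUser(principal: Principal, targetUserId: string): boolean {
  return principal.id === targetUserId || principal.role === "admin";
}
```

A valid token gets a request past isAuthenticated. It says nothing about canAccessUser, and that second check is the one that goes missing.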
JWTs are everywhere. They're also misunderstood everywhere. A JWT has three parts, separated by dots:
header.payload.signature
The header says which algorithm was used. The payload contains claims (user ID, roles, expiration). The signature proves nobody tampered with the first two parts.
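You can inspect those parts yourself by base64url-decoding the first two segments. Decoding is not verifying; never trust a payload you haven't checked the signature on. A small sketch:

```typescript
// Decode (NOT verify) a JWT to inspect its header and payload.
// Useful for debugging only — an attacker controls unverified claims.
function decodeJwtParts(token: string) {
  const [header, payload] = token.split(".");
  const fromB64url = (s: string) =>
    JSON.parse(Buffer.from(s, "base64url").toString("utf8"));
  return { header: fromB64url(header), payload: fromB64url(payload) };
}
```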
Here's a proper JWT verification in Node.js:
import jwt from "jsonwebtoken";
interface TokenPayload {
sub: string;
role: "user" | "admin";
iss: string;
aud: string;
exp: number;
iat: number;
jti: string;
}
function verifyToken(token: string): TokenPayload {
try {
const payload = jwt.verify(token, process.env.JWT_SECRET!, {
algorithms: ["HS256"], // Never allow "none"
issuer: "api.yourapp.com",
audience: "yourapp.com",
clockTolerance: 30, // 30 seconds leeway for clock skew
}) as TokenPayload;
return payload;
} catch (error) {
if (error instanceof jwt.TokenExpiredError) {
throw new ApiError(401, "Token expired");
}
if (error instanceof jwt.JsonWebTokenError) {
throw new ApiError(401, "Invalid token");
}
throw new ApiError(401, "Authentication failed");
}
}

A few things to notice:
algorithms: ["HS256"] — This is critical. If you don't specify the algorithm, an attacker can send a token with "alg": "none" in the header and skip verification entirely. This is the alg: none attack, and it has affected real production systems.
issuer and audience — Without these, a token minted for Service A works on Service B. If you run multiple services sharing the same secret (which you shouldn't, but people do), this is how cross-service token abuse happens.
Specific error handling — Don't return "invalid token" for every failure. Distinguishing between expired and invalid helps the client know whether to refresh or re-authenticate.
Access tokens should be short-lived — 15 minutes is standard. But you don't want users re-entering their password every 15 minutes. That's where refresh tokens come in.
The pattern that actually works in production:
import { randomBytes } from "crypto";
import { redis } from "./redis";
interface RefreshTokenData {
userId: string;
family: string; // Token family for rotation detection
createdAt: number;
}
async function rotateRefreshToken(
oldRefreshToken: string
): Promise<{ accessToken: string; refreshToken: string }> {
  const tokenData = await redis.get(`refresh:${oldRefreshToken}`);
  if (!tokenData) {
    // Token not found under `refresh:` — either expired or already used.
    // If it was already used, this is a potential replay attack:
    // look it up in the used-token record and invalidate the family.
    // (The token itself is opaque random bytes, so the family has to
    // come from server-side state, not from decoding the token.)
    const usedData = await redis.get(`used:${oldRefreshToken}`);
    if (usedData) {
      const parsed = JSON.parse(usedData) as RefreshTokenData;
      await invalidateTokenFamily(parsed.family);
    }
    throw new ApiError(401, "Invalid refresh token");
  }
  const data: RefreshTokenData = JSON.parse(tokenData);
  // Retire the old token immediately — single use only. Keep a `used:`
  // record (same TTL as the refresh window) so replays can be detected.
  await redis
    .multi()
    .del(`refresh:${oldRefreshToken}`)
    .setex(`used:${oldRefreshToken}`, 60 * 60 * 24 * 30, tokenData)
    .exec();
// Generate new tokens
const newRefreshToken = randomBytes(64).toString("hex");
const newAccessToken = generateAccessToken(data.userId);
// Store the new refresh token with the same family
await redis.setex(
`refresh:${newRefreshToken}`,
60 * 60 * 24 * 30, // 30 days
JSON.stringify({
userId: data.userId,
family: data.family,
createdAt: Date.now(),
})
);
return { accessToken: newAccessToken, refreshToken: newRefreshToken };
}
async function invalidateTokenFamily(family: string): Promise<void> {
  // Scan for all tokens in this family and delete them.
  // This is the nuclear option — if someone replays a refresh token,
  // we kill every token in the family, forcing re-authentication.
  // (KEYS blocks Redis; at any real scale, use SCAN or index tokens by family.)
  const keys = await redis.keys(`refresh:*`);
for (const key of keys) {
const data = await redis.get(key);
if (data) {
const parsed = JSON.parse(data) as RefreshTokenData;
if (parsed.family === family) {
await redis.del(key);
}
}
}
}

The token family concept is what makes this secure. Every refresh token belongs to a family (created at login). When you rotate, the new token inherits the family. If an attacker replays an old refresh token, you detect the reuse and kill the entire family. The legitimate user gets logged out, but the attacker doesn't get in.
The localStorage-versus-cookies debate has been going on for years, and the answer is clear: httpOnly cookies for refresh tokens, always.
localStorage is accessible to any JavaScript running on your page. If you have a single XSS vulnerability — and at scale, you will eventually — the attacker can read the token and exfiltrate it. Game over.
httpOnly cookies are not accessible to JavaScript. Period. An XSS vulnerability can still make requests on behalf of the user (because cookies are sent automatically), but the attacker can't steal the token itself. That's a meaningful difference.
// Setting a secure refresh token cookie
function setRefreshTokenCookie(res: Response, token: string): void {
res.cookie("refresh_token", token, {
httpOnly: true, // Not accessible via JavaScript
secure: true, // HTTPS only
sameSite: "strict", // No cross-site requests
maxAge: 30 * 24 * 60 * 60 * 1000, // 30 days
path: "/api/auth", // Only sent to auth endpoints
});
}

The path: "/api/auth" is a detail most people miss. By default, cookies are sent to every endpoint on your domain. Your refresh token doesn't need to go to /api/users or /api/products. Restrict the path, reduce the attack surface.
For access tokens, I keep them in memory (a JavaScript variable). Not localStorage, not sessionStorage, not a cookie. In memory. They're short-lived (15 minutes), and when the page refreshes, the client silently hits the refresh endpoint to get a new one. Yes, this means an extra request on page load. It's worth it.
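In code, the client side of that pattern is small. A sketch, assuming a /api/auth/refresh endpoint that reads the httpOnly cookie and returns a fresh access token (both the endpoint path and the response shape are assumptions):

```typescript
// Held in module scope — never written to localStorage, sessionStorage,
// or a cookie. Gone on page refresh, which is fine: we silently refresh.
let accessToken: string | null = null;

function setAccessToken(token: string): void {
  accessToken = token;
}

function getAccessToken(): string | null {
  return accessToken;
}

// Called on page load (and on 401s): the httpOnly refresh cookie rides
// along automatically and a new short-lived access token comes back.
async function refreshAccessToken(): Promise<void> {
  const res = await fetch("/api/auth/refresh", {
    method: "POST",
    credentials: "include",
  });
  if (!res.ok) throw new Error("Re-authentication required");
  const data = (await res.json()) as { accessToken: string };
  setAccessToken(data.accessToken);
}
```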
The client is not your friend. The client is a stranger who walked into your house and said "I'm allowed to be here." You check their ID anyway.
Every piece of data that comes from outside your server — request body, query parameters, URL params, headers — is untrusted input. It doesn't matter that your React form has validation. Someone will bypass it with curl.
Zod is the best thing that happened to Node.js input validation. It gives you runtime validation with TypeScript types for free:
import { z } from "zod";
const CreateUserSchema = z.object({
email: z
.string()
.email("Invalid email format")
.max(254, "Email too long")
.transform((e) => e.toLowerCase().trim()),
password: z
.string()
.min(12, "Password must be at least 12 characters")
.max(128, "Password too long") // Prevent bcrypt DoS
.regex(
/^(?=.*[a-z])(?=.*[A-Z])(?=.*\d)/,
"Password must contain uppercase, lowercase, and a number"
),
name: z
.string()
.min(1, "Name is required")
.max(100, "Name too long")
.regex(/^[\p{L}\p{M}\s'-]+$/u, "Name contains invalid characters"),
role: z.enum(["user", "editor"]).default("user"),
// Note: "admin" is intentionally not an option here.
// Admin role assignment goes through a separate, privileged endpoint.
});
type CreateUserInput = z.infer<typeof CreateUserSchema>;
// Usage in an Express handler
app.post("/api/users", async (req, res) => {
const result = CreateUserSchema.safeParse(req.body);
if (!result.success) {
return res.status(400).json({
error: "Validation failed",
details: result.error.issues.map((issue) => ({
field: issue.path.join("."),
message: issue.message,
})),
});
}
// result.data is fully typed as CreateUserInput
const user = await createUser(result.data);
return res.status(201).json({ id: user.id, email: user.email });
});

A few security-relevant details:
- max(128) on password — bcrypt has a 72-byte input limit, and some implementations just truncate silently. But more importantly, if you allow a 10MB password, bcrypt will spend significant time hashing it. That's a DoS vector.
- max(254) on email — RFC 5321 limits email addresses to 254 characters. Anything longer isn't a valid email.
- z.enum on role — The enum deliberately omits "admin", so a client can't just send "role": "admin" and hope for the best.

"Just use an ORM" doesn't protect you if you write raw queries for performance. And everyone writes raw queries for performance eventually.
// VULNERABLE — string concatenation
const query = `SELECT * FROM users WHERE email = '${email}'`;
// SAFE — parameterized query
const query = `SELECT * FROM users WHERE email = $1`;
const result = await pool.query(query, [email]);

With Prisma, the tagged-template form of $queryRaw is parameterized automatically, so it's safe. The one that bites you is $queryRawUnsafe with string interpolation:

// VULNERABLE — interpolated string passed to $queryRawUnsafe
const users = await prisma.$queryRawUnsafe(
  `SELECT * FROM users WHERE name LIKE '%${searchTerm}%'`
);

// SAFE — tagged template; Prisma parameterizes the interpolated values
const users = await prisma.$queryRaw`
  SELECT * FROM users WHERE name LIKE ${`%${searchTerm}%`}
`;

MongoDB doesn't use SQL, but it's not immune to injection. If you pass unsanitized user input as a query object, things go wrong:
// VULNERABLE — if req.body.username is { "$gt": "" }
// this returns the first user in the collection
const user = await db.collection("users").findOne({
username: req.body.username,
});
// SAFE — explicitly coerce to string
const user = await db.collection("users").findOne({
username: String(req.body.username),
});
// BETTER — validate with Zod first
const LoginSchema = z.object({
username: z.string().min(1).max(50),
password: z.string().min(1).max(128),
});

The fix is simple: validate input types before they reach your database driver. If username should be a string, assert that it's a string.
If your API serves files or reads from a path that includes user input, path traversal will ruin your week:
import path from "path";
import { access, constants } from "fs/promises";
const ALLOWED_DIR = "/app/uploads";
async function resolveUserFilePath(userInput: string): Promise<string> {
// Normalize and resolve to an absolute path
const resolved = path.resolve(ALLOWED_DIR, userInput);
// Critical: verify the resolved path is still within the allowed directory
if (!resolved.startsWith(ALLOWED_DIR + path.sep)) {
throw new ApiError(403, "Access denied");
}
// Verify the file actually exists
await access(resolved, constants.R_OK);
return resolved;
}
// Without this check:
// GET /api/files?name=../../../etc/passwd
// resolves to /etc/passwd

The path.resolve + startsWith pattern is the correct approach. Don't try to strip ../ manually — there are too many encoding tricks (..%2F, ..%252F, ....//) that will bypass your regex.
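If you factor the containment check into a pure function, it's easy to unit test against traversal attempts (a sketch of the same resolve + startsWith logic):

```typescript
import path from "path";

// Pure, testable predicate: does userInput stay inside allowedDir
// once resolved? Assumes allowedDir is an absolute path.
function isPathContained(allowedDir: string, userInput: string): boolean {
  const resolved = path.resolve(allowedDir, userInput);
  return resolved === allowedDir || resolved.startsWith(allowedDir + path.sep);
}
```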
Without rate limiting, your API is an all-you-can-eat buffet for bots. Brute force attacks, credential stuffing, resource exhaustion — rate limiting is the first defense against all of them.
Token bucket: You have a bucket that holds N tokens. Each request costs one token. Tokens refill at a fixed rate. If the bucket is empty, the request is rejected. This allows bursts — if the bucket is full, you can make N requests instantly.
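In process form, a token bucket is only a few lines. A sketch for illustration; a production limiter lives in Redis so every instance shares the same counts:

```typescript
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,
    private ratePerSec: number,
    now: number = Date.now()
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  // Returns true if the request is allowed (one token consumed).
  tryRemove(now: number = Date.now()): boolean {
    // Refill based on elapsed time, capped at the bucket's capacity
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(
      this.capacity,
      this.tokens + elapsedSec * this.ratePerSec
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

A full bucket allows a burst of capacity requests at once; after that, requests trickle in at ratePerSec.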
Sliding window: Count requests within a moving time window. More predictable, harder to burst through.
I use sliding window for most things because the behavior is easier to reason about and explain to the team:
import { Redis } from "ioredis";
interface RateLimitResult {
allowed: boolean;
remaining: number;
resetAt: number;
}
async function slidingWindowRateLimit(
redis: Redis,
key: string,
limit: number,
windowMs: number
): Promise<RateLimitResult> {
const now = Date.now();
const windowStart = now - windowMs;
const multi = redis.multi();
// Remove entries outside the window
multi.zremrangebyscore(key, 0, windowStart);
// Count entries in the window
multi.zcard(key);
  // Add the current request (we'll remove it again if over the limit)
  const member = `${now}:${Math.random()}`;
  multi.zadd(key, now, member);
  // Set expiry on the key
  multi.pexpire(key, windowMs);
  const results = await multi.exec();
  if (!results) {
    throw new Error("Redis transaction failed");
  }
  const count = results[1][1] as number;
  if (count >= limit) {
    // Over limit — remove the exact member we just added, not everything
    // at this timestamp (concurrent requests can share a millisecond)
    await redis.zrem(key, member);
    // The window resets when the oldest remaining entry ages out
    const oldest = await redis.zrange(key, 0, 0, "WITHSCORES");
    const resetAt = oldest.length ? Number(oldest[1]) + windowMs : now + windowMs;
    return {
      allowed: false,
      remaining: 0,
      resetAt,
    };
  }
return {
allowed: true,
remaining: limit - count - 1,
resetAt: now + windowMs,
};
}

One global rate limit is not enough. Different endpoints have different risk profiles:
interface RateLimitConfig {
window: number;
max: number;
}
const RATE_LIMITS: Record<string, RateLimitConfig> = {
// Auth endpoints — tight limits, brute force target
"POST:/api/auth/login": { window: 15 * 60 * 1000, max: 5 },
"POST:/api/auth/register": { window: 60 * 60 * 1000, max: 3 },
"POST:/api/auth/reset-password": { window: 60 * 60 * 1000, max: 3 },
// Data reads — more generous
"GET:/api/users": { window: 60 * 1000, max: 100 },
"GET:/api/products": { window: 60 * 1000, max: 200 },
// Data writes — moderate
"POST:/api/posts": { window: 60 * 1000, max: 10 },
"PUT:/api/posts": { window: 60 * 1000, max: 30 },
// Global fallback
"*": { window: 60 * 1000, max: 60 },
};
function getRateLimitKey(req: Request, config: RateLimitConfig): string {
const identifier = req.user?.id ?? getClientIp(req);
const endpoint = `${req.method}:${req.path}`;
return `ratelimit:${identifier}:${endpoint}`;
}

Notice: authenticated users are rate-limited by user ID, not IP. This is important because many legitimate users share IPs (corporate networks, VPNs, mobile carriers). If you only limit by IP, you'll block entire offices.
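The getClientIp helper isn't shown above; here's a hedged sketch. It assumes your app sits behind a reverse proxy you control that sets X-Forwarded-For; never trust that header when clients can reach the app directly, because they can forge it:

```typescript
// Minimal X-Forwarded-For parsing. The `req` shape is structural here so
// the function is easy to test; in Express, pass the real Request.
interface IpSource {
  headers: Record<string, string | string[] | undefined>;
  socketAddress?: string;
}

function getClientIp(req: IpSource): string {
  const xff = req.headers["x-forwarded-for"];
  const raw = Array.isArray(xff) ? xff[0] : xff;
  if (raw) {
    // Left-most entry is the original client when the proxy chain is trusted
    return raw.split(",")[0].trim();
  }
  return req.socketAddress ?? "unknown";
}
```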
Always tell the client what's going on:
function setRateLimitHeaders(
res: Response,
result: RateLimitResult,
limit: number
): void {
  res.set({
    "X-RateLimit-Limit": limit.toString(),
    "X-RateLimit-Remaining": result.remaining.toString(),
    "X-RateLimit-Reset": Math.ceil(result.resetAt / 1000).toString(),
  });
  if (!result.allowed) {
    const retryAfter = Math.ceil((result.resetAt - Date.now()) / 1000);
    // Retry-After only applies to rejected requests (res.set chokes on undefined)
    res.set("Retry-After", retryAfter.toString());
    res.status(429).json({
      error: "Too many requests",
      retryAfter,
    });
  }
}

CORS is probably the most misunderstood security mechanism in web development. Half the Stack Overflow answers about CORS are "just set Access-Control-Allow-Origin: * and it works." That's technically true. It's also how you open your API to every malicious site on the internet.
CORS is a browser mechanism. It tells the browser whether JavaScript from Origin A is allowed to read the response from Origin B. That's it.
What CORS does not do:

- Protect your API from non-browser clients. curl, Postman, and server-to-server requests ignore CORS entirely.
- Authenticate or authorize anything. A correct CORS setup is not a substitute for auth.

What CORS does do:

- Control whether JavaScript running on another origin can read your API's responses in the browser.
- Trigger a preflight (OPTIONS) check before "non-simple" cross-origin requests.
// DANGEROUS — allows any website to read your API responses
app.use(cors({ origin: "*" }));
// ALSO DANGEROUS — this is a common "dynamic" approach that's just * with extra steps
app.use(
cors({
origin: (origin, callback) => {
callback(null, true); // Allows everything
},
})
);

The problem with * is that it makes your API responses readable by any JavaScript on any page. If your API returns user data and the user is authenticated via cookies, any website the user visits can read that data.
Even worse: Access-Control-Allow-Origin: * cannot be combined with credentials: true. So if you need cookies (for auth), you literally can't use the wildcard. But I've seen people try to work around this by reflecting the Origin header back — which is equivalent to * with credentials, the worst of both worlds.
import cors from "cors";
const ALLOWED_ORIGINS = new Set([
"https://yourapp.com",
"https://www.yourapp.com",
"https://admin.yourapp.com",
]);
if (process.env.NODE_ENV === "development") {
ALLOWED_ORIGINS.add("http://localhost:3000");
ALLOWED_ORIGINS.add("http://localhost:5173");
}
app.use(
cors({
origin: (origin, callback) => {
// Allow requests with no origin (mobile apps, curl, server-to-server)
if (!origin) {
return callback(null, true);
}
if (ALLOWED_ORIGINS.has(origin)) {
return callback(null, origin);
}
callback(new Error(`Origin ${origin} not allowed by CORS`));
},
credentials: true, // Allow cookies
methods: ["GET", "POST", "PUT", "DELETE", "PATCH"],
allowedHeaders: ["Content-Type", "Authorization"],
exposedHeaders: ["X-RateLimit-Limit", "X-RateLimit-Remaining"],
maxAge: 86400, // Cache preflight for 24 hours
})
);

Key decisions:
- An explicit allowlist in a Set, not a regex — yourapp.com might match evilyourapp.com if your regex isn't anchored properly.
- credentials: true because we use httpOnly cookies for refresh tokens.
- maxAge: 86400 — Preflight requests (OPTIONS) add latency. Telling the browser to cache the CORS result for 24 hours reduces unnecessary round trips.
- exposedHeaders — By default, the browser only exposes a handful of "simple" response headers to JavaScript. If you want the client to read your rate limit headers, you have to explicitly expose them.

When a request isn't "simple" (it uses a non-standard header, a non-standard method, or a non-standard content type), the browser sends an OPTIONS request first to ask for permission. This is the preflight.
If your CORS configuration doesn't handle OPTIONS, preflight requests will fail, and the actual request will never be sent. Most CORS libraries handle this automatically, but if you're using a framework that doesn't, you need to handle it:
// Manual preflight handling (most frameworks do this for you)
app.options("*", (req, res) => {
res.set({
"Access-Control-Allow-Origin": getAllowedOrigin(req.headers.origin),
"Access-Control-Allow-Methods": "GET, POST, PUT, DELETE, PATCH",
"Access-Control-Allow-Headers": "Content-Type, Authorization",
"Access-Control-Max-Age": "86400",
});
res.status(204).end();
});

Security headers are the cheapest security improvement you can make. They're response headers that tell the browser to enable security features. Most of them are a single line of configuration, and they protect against entire classes of attacks.
import helmet from "helmet";
// One line. This is the fastest security win in any Express app.
app.use(
helmet({
contentSecurityPolicy: {
directives: {
defaultSrc: ["'self'"],
scriptSrc: ["'self'"],
styleSrc: ["'self'", "'unsafe-inline'"], // Needed for many CSS-in-JS solutions
imgSrc: ["'self'", "data:", "https:"],
connectSrc: ["'self'", "https://api.yourapp.com"],
fontSrc: ["'self'"],
objectSrc: ["'none'"],
mediaSrc: ["'self'"],
frameSrc: ["'none'"],
upgradeInsecureRequests: [],
},
},
hsts: {
maxAge: 31536000, // 1 year
includeSubDomains: true,
preload: true,
},
referrerPolicy: { policy: "strict-origin-when-cross-origin" },
})
);What each header does:
Content-Security-Policy (CSP) — The most powerful security header. It tells the browser exactly which sources are allowed for scripts, styles, images, fonts, etc. If an attacker injects a <script> tag that loads from evil.com, CSP blocks it. This is the single most effective defense against XSS.
Strict-Transport-Security (HSTS) — Tells the browser to always use HTTPS, even if the user types http://. The preload directive lets you submit your domain to the browser's built-in HSTS list, so even the first request is forced to HTTPS.
X-Frame-Options — Prevents your site from being embedded in an iframe. This stops clickjacking attacks where an attacker overlays your page with invisible elements. Helmet sets this to SAMEORIGIN by default. The modern replacement is frame-ancestors in CSP.
X-Content-Type-Options: nosniff — Prevents the browser from guessing (sniffing) the MIME type of a response. Without this, if you serve a file with the wrong Content-Type, the browser might execute it as JavaScript.
Referrer-Policy — Controls how much URL information is sent in the Referer header. strict-origin-when-cross-origin sends the full URL for same-origin requests but only the origin for cross-origin requests. This prevents leaking sensitive URL parameters to third parties.
After deploying, check your score at securityheaders.com. Aim for an A+ rating. It takes about five minutes of configuration to get there.
You can also verify headers programmatically:
import { describe, it, expect } from "vitest";
describe("Security headers", () => {
it("should include all required security headers", async () => {
const response = await fetch("https://api.yourapp.com/health");
expect(response.headers.get("strict-transport-security")).toBeTruthy();
expect(response.headers.get("x-content-type-options")).toBe("nosniff");
expect(response.headers.get("x-frame-options")).toBe("SAMEORIGIN");
expect(response.headers.get("content-security-policy")).toBeTruthy();
expect(response.headers.get("referrer-policy")).toBeTruthy();
expect(response.headers.get("x-powered-by")).toBeNull(); // Helmet removes this
});
});

The x-powered-by check is subtle but important. Express sets X-Powered-By: Express by default, telling attackers exactly what framework you're using. Helmet removes it.
This one should be obvious, but I still see it in pull requests: API keys, database passwords, and JWT secrets hardcoded in source files. Or committed in .env files that weren't in .gitignore. Once it's in the git history, it's there forever, even if you delete the file in the next commit.
Never commit secrets to git. Not in code, not in .env, not in config files, not in Docker Compose files, not in "just for testing" comments.
Use .env.example as a template. It documents what environment variables are needed, without containing actual values:
# .env.example — commit this
DATABASE_URL=postgresql://user:password@localhost:5432/dbname
JWT_SECRET=your-secret-here
REDIS_URL=redis://localhost:6379
SMTP_API_KEY=your-smtp-key
# .env — NEVER commit this
# Listed in .gitignore

And validate your environment at startup, so the app refuses to boot with missing or malformed config:

import { z } from "zod";
const envSchema = z.object({
DATABASE_URL: z.string().url(),
JWT_SECRET: z.string().min(32, "JWT secret must be at least 32 characters"),
REDIS_URL: z.string().url(),
NODE_ENV: z.enum(["development", "production", "test"]).default("development"),
PORT: z.coerce.number().default(3000),
CORS_ORIGINS: z.string().transform((s) => s.split(",")),
});
export type Env = z.infer<typeof envSchema>;
function validateEnv(): Env {
const result = envSchema.safeParse(process.env);
if (!result.success) {
console.error("Invalid environment variables:");
console.error(result.error.format());
process.exit(1); // Don't start with bad config
}
return result.data;
}
export const env = validateEnv();

For production systems, use a proper secret manager:
The pattern is the same regardless of which one you use: the application fetches secrets at startup from the secret manager, not from environment variables.

Rotate secrets too, especially JWT signing keys. Keep every still-valid key around for verification, but sign new tokens only with the active one:
interface SigningKey {
id: string;
secret: string;
createdAt: Date;
active: boolean; // Only the active key signs new tokens
}
async function verifyWithRotation(token: string): Promise<TokenPayload> {
const keys = await getSigningKeys(); // Returns all valid keys
for (const key of keys) {
try {
return jwt.verify(token, key.secret, {
algorithms: ["HS256"],
}) as TokenPayload;
} catch {
continue; // Try the next key
}
}
throw new ApiError(401, "Invalid token");
}
function signToken(payload: Omit<TokenPayload, "iat" | "exp">): string {
const activeKey = getActiveSigningKey();
return jwt.sign(payload, activeKey.secret, {
algorithm: "HS256",
expiresIn: "15m",
keyid: activeKey.id, // Include key ID in the header
});
);

The OWASP API Security Top 10 is the industry standard list of API vulnerabilities. It's updated periodically, and every item on the list is something I've seen in real codebases. Let me walk through each one.
First up is Broken Object Level Authorization (BOLA), the most common API vulnerability. The user is authenticated, but the API doesn't check whether they have access to the specific object they're requesting.
// VULNERABLE — any authenticated user can access any user's data
app.get("/api/users/:id", authenticate, async (req, res) => {
const user = await db.users.findById(req.params.id);
return res.json(user);
});
// FIXED — verify the user is accessing their own data (or is an admin)
app.get("/api/users/:id", authenticate, async (req, res) => {
if (req.user.id !== req.params.id && req.user.role !== "admin") {
return res.status(403).json({ error: "Access denied" });
}
const user = await db.users.findById(req.params.id);
return res.json(user);
});

The vulnerable version is everywhere. It passes every auth check — the user has a valid token — but it doesn't verify they're authorized to access this specific resource. Change the ID in the URL, and you get someone else's data.
Weak login mechanisms, missing MFA, tokens that never expire, passwords stored in plaintext. This covers the authentication layer itself.
The fix is everything we discussed in the authentication section: strong password requirements, bcrypt with sufficient rounds, short-lived access tokens, refresh token rotation, account lockout after failed attempts.
const MAX_LOGIN_ATTEMPTS = 5;
const LOCKOUT_DURATION = 15 * 60 * 1000; // 15 minutes
async function handleLogin(email: string, password: string): Promise<AuthResult> {
const lockoutKey = `lockout:${email}`;
const attempts = await redis.get(lockoutKey);
if (attempts && parseInt(attempts) >= MAX_LOGIN_ATTEMPTS) {
const ttl = await redis.pttl(lockoutKey);
throw new ApiError(
429,
`Account locked. Try again in ${Math.ceil(ttl / 60000)} minutes.`
);
}
const user = await db.users.findByEmail(email);
if (!user || !(await bcrypt.compare(password, user.passwordHash))) {
// Increment failed attempts
await redis.multi()
.incr(lockoutKey)
.pexpire(lockoutKey, LOCKOUT_DURATION)
.exec();
// Same error message for both cases — don't reveal whether the email exists
throw new ApiError(401, "Invalid email or password");
}
// Reset failed attempts on successful login
await redis.del(lockoutKey);
return generateTokens(user);
}

The comment about "same error message" is important. If your API returns "user not found" for invalid emails and "wrong password" for valid emails with wrong passwords, you're telling an attacker which emails exist in your system.
Returning more data than necessary, or allowing users to modify properties they shouldn't.
// VULNERABLE — returns the entire user object, including internal fields
app.get("/api/users/:id", authenticate, authorize, async (req, res) => {
const user = await db.users.findById(req.params.id);
return res.json(user);
// Response includes: passwordHash, internalNotes, billingId, ...
});
// FIXED — explicit allowlist of returned fields
app.get("/api/users/:id", authenticate, authorize, async (req, res) => {
const user = await db.users.findById(req.params.id);
return res.json({
id: user.id,
name: user.name,
email: user.email,
avatar: user.avatar,
createdAt: user.createdAt,
});
});

Never return entire database objects. Always pick the fields you want to expose. This applies to writes too — don't spread the entire request body into your update query:
// VULNERABLE — mass assignment
app.put("/api/users/:id", authenticate, async (req, res) => {
await db.users.update(req.params.id, req.body);
// Attacker sends: { "role": "admin", "verified": true }
});
// FIXED — pick allowed fields
const UpdateUserSchema = z.object({
name: z.string().min(1).max(100).optional(),
avatar: z.string().url().optional(),
});
app.put("/api/users/:id", authenticate, async (req, res) => {
const data = UpdateUserSchema.parse(req.body);
await db.users.update(req.params.id, data);
});

Your API is a resource. CPU, memory, bandwidth, database connections — they're all finite. Without limits, a single client can exhaust them all.
This goes beyond rate limiting. It includes:
// Limit request body size
app.use(express.json({ limit: "1mb" }));
// Limit query complexity
const MAX_PAGE_SIZE = 100;
const DEFAULT_PAGE_SIZE = 20;
const PaginationSchema = z.object({
page: z.coerce.number().int().positive().default(1),
limit: z.coerce
.number()
.int()
.positive()
.max(MAX_PAGE_SIZE)
.default(DEFAULT_PAGE_SIZE),
});
// Limit file upload size
const upload = multer({
limits: {
fileSize: 5 * 1024 * 1024, // 5MB
files: 1,
},
fileFilter: (req, file, cb) => {
const allowed = ["image/jpeg", "image/png", "image/webp"];
if (allowed.includes(file.mimetype)) {
cb(null, true);
} else {
cb(new Error("Invalid file type"));
}
},
});
// Timeout long-running requests
app.use((req, res, next) => {
  res.setTimeout(30000, () => {
    // Guard against writing after the handler has already responded
    if (!res.headersSent) {
      res.status(408).json({ error: "Request timeout" });
    }
  });
  next();
});

Broken Function Level Authorization is different from BOLA. It's about accessing functions (endpoints) you shouldn't have access to, not objects. The classic example: a regular user discovering admin endpoints.
// Middleware that checks role-based access
function requireRole(...allowedRoles: string[]) {
return (req: Request, res: Response, next: NextFunction) => {
if (!req.user) {
return res.status(401).json({ error: "Not authenticated" });
}
if (!allowedRoles.includes(req.user.role)) {
// Log the attempt — this might be an attack
logger.warn("Unauthorized access attempt", {
userId: req.user.id,
role: req.user.role,
requiredRoles: allowedRoles,
endpoint: `${req.method} ${req.path}`,
ip: req.ip,
});
return res.status(403).json({ error: "Insufficient permissions" });
}
next();
};
}
// Apply to routes
app.delete("/api/users/:id", authenticate, requireRole("admin"), deleteUser);
app.get("/api/admin/stats", authenticate, requireRole("admin"), getStats);
app.post("/api/posts", authenticate, requireRole("admin", "editor"), createPost);Don't rely on hiding endpoints. "Security through obscurity" is not security. Even if the admin panel URL isn't linked anywhere, someone will find /api/admin/users by fuzzing.
Automated abuse of legitimate business functionality. Think: bots buying limited-stock items, automated account creation for spam, scraping product prices.
The mitigations are context-specific: CAPTCHAs, device fingerprinting, behavioral analysis, step-up authentication for sensitive operations. There's no one-size-fits-all code snippet.
If your API fetches URLs provided by the user (webhooks, profile picture URLs, link previews), an attacker can make your server request internal resources:
import { URL } from "url";
import dns from "dns/promises";
import { isPrivateIP } from "./network-utils";
async function safeFetch(userProvidedUrl: string): Promise<Response> {
let parsed: URL;
try {
parsed = new URL(userProvidedUrl);
} catch {
throw new ApiError(400, "Invalid URL");
}
// Only allow HTTP(S)
if (!["http:", "https:"].includes(parsed.protocol)) {
throw new ApiError(400, "Only HTTP(S) URLs are allowed");
}
  // Resolve the hostname (IPv4 and IPv6) and reject private/internal IPs.
  // Note: this is still exposed to DNS rebinding; for full protection,
  // connect to the resolved IP rather than re-resolving inside fetch.
  const addresses = [
    ...(await dns.resolve4(parsed.hostname).catch(() => [] as string[])),
    ...(await dns.resolve6(parsed.hostname).catch(() => [] as string[])),
  ];
  if (addresses.length === 0) {
    throw new ApiError(400, "Could not resolve hostname");
  }
  for (const addr of addresses) {
    if (isPrivateIP(addr)) {
      throw new ApiError(400, "Internal addresses are not allowed");
    }
  }
// Now fetch with a timeout and size limit
const controller = new AbortController();
const timeout = setTimeout(() => controller.abort(), 5000);
try {
const response = await fetch(userProvidedUrl, {
signal: controller.signal,
redirect: "error", // Don't follow redirects (they could redirect to internal IPs)
});
return response;
} finally {
clearTimeout(timeout);
}
}

Key details: resolve the DNS first and check the IP before making the request. Block redirects — an attacker can host a URL that redirects to http://169.254.169.254/ (the AWS metadata endpoint) to bypass your URL-level check. Note that this check is still best-effort: DNS can change between the lookup and the fetch (DNS rebinding), so treat it as one layer of defense, not the only one.
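The isPrivateIP helper above is imported from a local module that isn't shown. A minimal IPv4-only sketch of what it might contain (the ranges are the standard private and special-use blocks; a real version should also handle IPv6, and it fails closed on anything it can't parse):

```typescript
// Illustrative IPv4-only private/special-use address check.
function isPrivateIP(ip: string): boolean {
  const parts = ip.split(".").map(Number);
  if (parts.length !== 4 || parts.some((p) => !Number.isInteger(p) || p < 0 || p > 255)) {
    return true; // not a clean IPv4 address: treat as unsafe
  }
  const [a, b] = parts;
  return (
    a === 0 ||                           // "this" network
    a === 10 ||                          // 10.0.0.0/8
    a === 127 ||                         // loopback
    (a === 169 && b === 254) ||          // link-local / cloud metadata
    (a === 172 && b >= 16 && b <= 31) || // 172.16.0.0/12
    (a === 192 && b === 168)             // 192.168.0.0/16
  );
}
```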
Default credentials left unchanged, unnecessary HTTP methods enabled, verbose error messages in production, directory listing enabled, CORS misconfigured. This is the "you forgot to lock the door" category.
// Don't leak stack traces in production
app.use((err: Error, req: Request, res: Response, next: NextFunction) => {
logger.error("Unhandled error", {
error: err.message,
stack: err.stack,
path: req.path,
method: req.method,
});
if (process.env.NODE_ENV === "production") {
// Generic error message — don't reveal internals
res.status(500).json({
error: "Internal server error",
requestId: req.id, // Include a request ID for debugging
});
} else {
// In development, show the full error
res.status(500).json({
error: err.message,
stack: err.stack,
});
}
});
// Disable unnecessary HTTP methods
app.use((req, res, next) => {
const allowed = ["GET", "POST", "PUT", "DELETE", "PATCH", "OPTIONS"];
if (!allowed.includes(req.method)) {
return res.status(405).json({ error: "Method not allowed" });
}
next();
});

You deployed v2 of the API but forgot to shut down v1. Or there's a /debug/ endpoint that was useful during development and is still running in production. Or a staging server that's publicly accessible with production data.
This isn't a code fix — it's an ops discipline. Maintain a list of all API endpoints, all deployed versions, and all environments. Use automated scanning to find exposed services. Kill what you don't need.
Your API consumes third-party APIs. Do you validate their responses? What happens if a webhook payload from Stripe is actually from an attacker?
import crypto from "crypto";
// Verify Stripe webhook signatures
function verifyStripeWebhook(
payload: string,
signature: string,
secret: string
): boolean {
const timestamp = signature.split(",").find((s) => s.startsWith("t="))?.slice(2);
const expectedSig = signature.split(",").find((s) => s.startsWith("v1="))?.slice(3);
if (!timestamp || !expectedSig) return false;
// Reject old timestamps (prevent replay attacks)
const age = Math.abs(Date.now() / 1000 - parseInt(timestamp));
if (age > 300) return false; // 5 minute tolerance
const signedPayload = `${timestamp}.${payload}`;
const computedSig = crypto
.createHmac("sha256", secret)
.update(signedPayload)
.digest("hex");
  // timingSafeEqual throws if the buffers differ in length, so check that first
  const computed = Buffer.from(computedSig);
  const expected = Buffer.from(expectedSig);
  return computed.length === expected.length && crypto.timingSafeEqual(computed, expected);
}

Always verify signatures on webhooks. Always validate the structure of third-party API responses. Always set timeouts on outgoing requests. Never trust data just because it came from "a trusted partner."
When something goes wrong — and it will — audit logs are how you figure out what happened. But logging is a double-edged sword. Log too little and you're blind. Log too much and you create a privacy liability.
interface AuditLogEntry {
timestamp: string;
action: string; // "user.login", "post.delete", "admin.role_change"
actor: {
id: string;
ip: string;
userAgent: string;
};
target: {
type: string; // "user", "post", "setting"
id: string;
};
result: "success" | "failure";
metadata: Record<string, unknown>; // Additional context
requestId: string; // For correlating with application logs
}
async function auditLog(entry: AuditLogEntry): Promise<void> {
// Write to a separate, append-only data store
// This should NOT be the same database your application uses
await auditDb.collection("audit_logs").insertOne({
...entry,
timestamp: new Date().toISOString(),
});
// For critical actions, also write to an immutable external log
if (isCriticalAction(entry.action)) {
await externalLogger.send(entry);
}
}

Log these events:

- Authentication events: successful and failed logins
- Authorization failures: access-denied responses often indicate probing
- Data mutations: creates, updates, and deletes on sensitive resources
- Admin actions: role changes, permission grants, configuration changes
Never log:

- Passwords, even hashed ones
- Session tokens or full JWTs
- API keys, even truncated ones (sk_live_...abc)
- Credit card numbers, SSNs, or other regulated personal data

A sanitizer helper makes this automatic:

function sanitizeForLogging(data: Record<string, unknown>): Record<string, unknown> {
  // Entries are lowercase because keys are lowercased before the lookup below
  const sensitiveKeys = new Set([
    "password",
    "passwordhash",
    "token",
    "secret",
    "apikey",
    "creditcard",
    "ssn",
    "authorization",
  ]);
const sanitized: Record<string, unknown> = {};
for (const [key, value] of Object.entries(data)) {
if (sensitiveKeys.has(key.toLowerCase())) {
sanitized[key] = "[REDACTED]";
    } else if (Array.isArray(value)) {
      // Recurse into array elements so nested objects still get redacted
      sanitized[key] = value.map((item) =>
        typeof item === "object" && item !== null
          ? sanitizeForLogging(item as Record<string, unknown>)
          : item
      );
    } else if (typeof value === "object" && value !== null) {
      sanitized[key] = sanitizeForLogging(value as Record<string, unknown>);
} else {
sanitized[key] = value;
}
}
return sanitized;
}

If an attacker gains access to your system, one of the first things they'll do is modify the logs to cover their tracks. Tamper-evident logging makes this detectable:
import crypto from "crypto";
let previousHash = "GENESIS"; // The initial hash in the chain
function createTamperEvidentEntry(entry: AuditLogEntry): AuditLogEntry & { hash: string } {
const content = JSON.stringify(entry) + previousHash;
const hash = crypto.createHash("sha256").update(content).digest("hex");
previousHash = hash;
return { ...entry, hash };
}
// To verify the chain integrity:
function verifyLogChain(entries: Array<AuditLogEntry & { hash: string }>): boolean {
let expectedPreviousHash = "GENESIS";
for (const entry of entries) {
const { hash, ...rest } = entry;
const content = JSON.stringify(rest) + expectedPreviousHash;
const computedHash = crypto.createHash("sha256").update(content).digest("hex");
if (computedHash !== hash) {
return false; // Chain is broken — logs have been tampered with
}
expectedPreviousHash = hash;
}
return true;
}

This is the same concept as a blockchain — each log entry's hash depends on the previous entry. If someone modifies or deletes an entry, the chain breaks.
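To see the detection work end to end, here is a standalone round-trip of the same scheme (the entry shape is simplified to a single action field for illustration):

```typescript
import { createHash } from "crypto";

// Each entry's hash covers its content plus the previous hash.
type ChainedEntry = { action: string; hash: string };

function buildChain(actions: string[]): ChainedEntry[] {
  let prev = "GENESIS";
  return actions.map((action) => {
    const hash = createHash("sha256")
      .update(JSON.stringify({ action }) + prev)
      .digest("hex");
    prev = hash;
    return { action, hash };
  });
}

function verifyChain(entries: ChainedEntry[]): boolean {
  let prev = "GENESIS";
  for (const { hash, ...rest } of entries) {
    const computed = createHash("sha256")
      .update(JSON.stringify(rest) + prev)
      .digest("hex");
    if (computed !== hash) return false; // chain broken: tampering detected
    prev = hash;
  }
  return true;
}
```

Editing any entry, even the oldest, changes its computed hash and invalidates every entry after it.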
Your code might be secure. But what about the 847 npm packages in your node_modules? The supply chain problem is real, and it's gotten worse over the years.
# Run this in CI, fail the build on high/critical vulnerabilities
npm audit --audit-level=high
# Fix what can be auto-fixed
npm audit fix
# See what you're actually pulling in
npm ls --all

But npm audit has limitations. It only checks the npm advisory database, and its severity ratings aren't always accurate. Layer additional tools:
# .github/dependabot.yml
version: 2
updates:
- package-ecosystem: "npm"
directory: "/"
schedule:
interval: "weekly"
open-pull-requests-limit: 10
reviewers:
- "your-team"
labels:
- "dependencies"
# Group minor and patch updates to reduce PR noise
groups:
production-dependencies:
patterns:
- "*"
update-types:
- "minor"
          - "patch"

Always commit your package-lock.json (or pnpm-lock.yaml, or yarn.lock). The lockfile pins exact versions of every dependency, including transitive ones. Without it, npm install might pull in a different version than what you tested — and that different version might be compromised.
# In CI, use ci instead of install — it respects the lockfile strictly
npm ci

npm ci fails if the lockfile doesn't match package.json, instead of silently updating it. This catches cases where someone modified package.json but forgot to update the lockfile.
Before adding a dependency, ask:

- Is it actively maintained?
- How many transitive dependencies does it pull in?
- Could I write it myself in a few lines?

is-odd depends on is-number, which depends on kind-of. That's three packages to do something one line of code can do.

// You don't need a package for this:
const isEven = (n: number): boolean => n % 2 === 0;
// Or this:
const leftPad = (str: string, len: number, char = " "): string =>
str.padStart(len, char);
// Or this:
const isNil = (value: unknown): value is null | undefined =>
value === null || value === undefined;

This is the actual checklist I use before every production deployment. It's not exhaustive — security is never "done" — but it catches the mistakes that matter most.
| # | Check | Pass Criteria | Priority |
|---|---|---|---|
| 1 | Authentication | JWTs verified with explicit algorithm, issuer, and audience. No alg: none. | Critical |
| 2 | Token expiration | Access tokens expire in 15 min or less. Refresh tokens rotate on use. | Critical |
| 3 | Token storage | Refresh tokens in httpOnly secure cookies. No tokens in localStorage. | Critical |
| 4 | Authorization on every endpoint | Every data-access endpoint checks object-level permissions. BOLA tested. | Critical |
| 5 | Input validation | All user input validated with Zod or equivalent. No raw req.body in queries. | Critical |
| 6 | SQL/NoSQL injection | All database queries use parameterized queries or ORM methods. No string concatenation. | Critical |
| 7 | Rate limiting | Auth endpoints: 5/15min. General API: 60/min. Rate limit headers returned. | High |
| 8 | CORS | Explicit origin allowlist. No wildcard with credentials. Preflight cached. | High |
| 9 | Security headers | CSP, HSTS, X-Frame-Options, X-Content-Type-Options, Referrer-Policy all present. | High |
| 10 | Error handling | Production errors return generic messages. No stack traces, no SQL errors exposed. | High |
| 11 | Secrets | No secrets in code or git history. .env in .gitignore. Validated at startup. | Critical |
| 12 | Dependencies | npm audit clean (no high/critical). Lockfile committed. npm ci in CI. | High |
| 13 | HTTPS only | HSTS enabled with preload. HTTP redirects to HTTPS. Secure cookie flag set. | Critical |
| 14 | Logging | Auth events, access denied, and data mutations logged. No PII in logs. | Medium |
| 15 | Request size limits | Body parser limited (1MB default). File uploads capped. Query pagination enforced. | Medium |
| 16 | SSRF protection | User-provided URLs validated. Private IPs blocked. Redirects disabled or validated. | Medium |
| 17 | Account lockout | Failed login attempts trigger lockout after 5 tries. Lockout logged. | High |
| 18 | Webhook verification | All incoming webhooks verified with signatures. Replay protection via timestamp. | High |
| 19 | Admin endpoints | Role-based access control on all admin routes. Attempts logged. | Critical |
| 20 | Mass assignment | Update endpoints use Zod schema with allowlisted fields. No raw body spread. | High |
I keep this as a GitHub issue template. Before tagging a release, someone on the team has to check every row and sign off. It's not glamorous, but it works.
Security is not a feature you add at the end. It's not a sprint you do once a year. It's a way of thinking about every line of code you write.
When you write an endpoint, think: "What if someone sends data I don't expect?" When you add a parameter, think: "What if someone changes this to someone else's ID?" When you add a dependency, think: "What happens if this package is compromised next Tuesday?"
You won't catch everything. Nobody does. But running through this checklist — methodically, before every deployment — catches the things that matter most. The easy wins. The obvious holes. The mistakes that turn a bad day into a data breach.
Build the habit. Run the checklist. Ship with confidence.