The full authentication landscape: when to use sessions vs JWT, OAuth 2.0 / OIDC flows, refresh token rotation, passkeys (WebAuthn), and the Next.js auth patterns I actually use.
Authentication is the one area of web development where "it works" is never good enough. A bug in your date picker is annoying. A bug in your auth system is a data breach.
I've implemented authentication from scratch, migrated between providers, debugged token theft incidents, and dealt with the fallout of "we'll fix the security later" decisions. This post is the comprehensive guide I wish I'd had when I started. Not just the theory — the actual trade-offs, the real vulnerabilities, and the patterns that hold up under production pressure.
We'll cover the full landscape: sessions, JWTs, OAuth 2.0, passkeys, MFA, and authorization. By the end, you'll understand not just how each mechanism works, but when to use it and why the alternatives exist.
Sessions versus JWTs is the first decision you'll face, and the internet is full of bad advice about it. Let me lay out what actually matters.
Sessions are the original approach. The server creates a session record, stores it somewhere (database, Redis, memory), and gives the client an opaque session ID in a cookie.
// Simplified session creation
import { randomBytes } from "crypto";
import { cookies } from "next/headers";
interface Session {
userId: string;
createdAt: Date;
expiresAt: Date;
ipAddress: string;
userAgent: string;
}
async function createSession(userId: string, request: Request): Promise<string> {
const sessionId = randomBytes(32).toString("hex");
const session: Session = {
userId,
createdAt: new Date(),
expiresAt: new Date(Date.now() + 24 * 60 * 60 * 1000), // 24 hours
ipAddress: request.headers.get("x-forwarded-for") ?? "unknown",
userAgent: request.headers.get("user-agent") ?? "unknown",
};
// Store in your database or Redis
await redis.set(`session:${sessionId}`, JSON.stringify(session), "EX", 86400);
const cookieStore = await cookies();
cookieStore.set("session_id", sessionId, {
httpOnly: true,
secure: true,
sameSite: "lax",
maxAge: 86400,
path: "/",
});
return sessionId;
}The advantages are real:
The disadvantages are also real:

- Every authenticated request costs a session-store lookup (though Redis makes this cheap).
- In a multi-service architecture, every service needs access to the session store or something that fronts it.
- The session store becomes critical infrastructure: if it's down, nobody is logged in.
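To make the instant-revocation point concrete, here's a minimal sketch of the lookup and revocation side, with an in-memory Map standing in for Redis. The names (`getSession`, `revokeSession`) are illustrative, not from any library:

```typescript
import { randomBytes } from "crypto";

// In-memory store standing in for Redis — illustration only.
const sessionStore = new Map<string, { userId: string; expiresAt: number }>();

function createSession(userId: string, ttlSeconds = 86400): string {
  const sessionId = randomBytes(32).toString("hex");
  sessionStore.set(sessionId, {
    userId,
    expiresAt: Date.now() + ttlSeconds * 1000,
  });
  return sessionId;
}

function getSession(sessionId: string) {
  const session = sessionStore.get(sessionId);
  if (!session) return null; // unknown — or already revoked
  if (session.expiresAt < Date.now()) {
    sessionStore.delete(sessionId); // expired — clean up lazily
    return null;
  }
  return session;
}

function revokeSession(sessionId: string): void {
  // The "instant revocation" advantage: one delete, and the session
  // is dead for every device that held the cookie.
  sessionStore.delete(sessionId);
}
```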
JWTs flip the model. Instead of storing session state on the server, you encode it into a signed token that the client holds.
import { SignJWT, jwtVerify } from "jose";
const secret = new TextEncoder().encode(process.env.JWT_SECRET);
async function createAccessToken(userId: string, role: string): Promise<string> {
return new SignJWT({ sub: userId, role })
.setProtectedHeader({ alg: "HS256" })
.setIssuedAt()
.setExpirationTime("15m")
.setIssuer("https://akousa.net")
.setAudience("https://akousa.net")
.sign(secret);
}
async function verifyAccessToken(token: string) {
try {
const { payload } = await jwtVerify(token, secret, {
issuer: "https://akousa.net",
audience: "https://akousa.net",
});
return payload;
} catch {
return null;
}
}

The advantages:

- Verification is stateless: any service holding the key can validate a token without a store lookup.
- Tokens carry their own claims, which suits service-to-service calls and third-party APIs.
The disadvantages — and these are the ones people gloss over:

- You can't revoke a token before it expires without a server-side blocklist, which reintroduces exactly the state you were trying to avoid.
- The payload is base64url-encoded, not encrypted — anyone holding the token can read every claim.
- Short expiries force you into refresh tokens, rotation, and reuse detection: a whole second system to build and secure.
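One of those disadvantages is worth seeing concretely: the payload is base64url-encoded, not encrypted. No secret is needed to read it — only to verify it:

```typescript
// Decode a JWT payload without any key. Works on any JWT, anywhere.
function decodeJwtPayload(token: string): Record<string, unknown> {
  const payloadB64 = token.split(".")[1];
  return JSON.parse(Buffer.from(payloadB64, "base64url").toString("utf8"));
}

// A token whose payload is {"sub":"user_123","role":"admin"},
// with a junk signature — decoding doesn't care.
const token =
  "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9" +
  ".eyJzdWIiOiJ1c2VyXzEyMyIsInJvbGUiOiJhZG1pbiJ9" +
  ".junk-signature";

console.log(decodeJwtPayload(token)); // { sub: 'user_123', role: 'admin' }
// Readable — but never *trust* these claims without verifying the signature.
```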
My rule of thumb:
Use sessions when: You have a monolithic application, you need instant revocation, you're building a consumer-facing product where account security is critical, or your auth requirements might change frequently.
Use JWTs when: You have a microservices architecture where services need to independently verify identity, you're building API-to-API communication, or you're implementing a third-party authentication system.
In practice: Most applications should use sessions. The "JWTs are more scalable" argument only applies if you actually have a scaling problem that session storage can't solve — and Redis handles millions of session lookups per second. I've seen too many projects choose JWTs because they sound more modern, then build a blocklist and a refresh token system that's more complex than sessions would have been.
Even if you choose session-based auth, you'll encounter JWTs through OAuth, OIDC, and third-party integrations. Understanding the internals is non-negotiable.
A JWT has three parts separated by dots: header.payload.signature
eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.
eyJzdWIiOiJ1c2VyXzEyMyIsInJvbGUiOiJhZG1pbiIsImlhdCI6MTcwOTMxMjAwMCwiZXhwIjoxNzA5MzEyOTAwfQ.
kQ8s7nR2xC...
Header — declares the algorithm and token type:
{
"alg": "RS256",
"typ": "JWT"
}

Payload — contains claims. Standard claims have short names:
{
"sub": "user_123", // Subject (who is this about)
"iss": "https://auth.example.com", // Issuer (who created this)
"aud": "https://api.example.com", // Audience (who should accept this)
"iat": 1709312000, // Issued At (Unix timestamp)
"exp": 1709312900, // Expiration (Unix timestamp)
"role": "admin" // Custom claim
}

Signature — proves the token hasn't been tampered with. Created by signing the encoded header and payload with a secret key.
HS256 (HMAC-SHA256) — symmetric. The same secret signs and verifies. Simple, but every service that needs to verify tokens must have the secret. If any one of them is compromised, an attacker can forge tokens.
RS256 (RSA-SHA256) — asymmetric. A private key signs, a public key verifies. Only the auth server needs the private key. Any service can verify with the public key. If a verification service is compromised, the attacker can read tokens but not forge them.
import { SignJWT, jwtVerify, importPKCS8, importSPKI } from "jose";
// RS256 — use this when multiple services verify tokens
const privateKeyPem = process.env.JWT_PRIVATE_KEY!;
const publicKeyPem = process.env.JWT_PUBLIC_KEY!;
async function signWithRS256(payload: Record<string, unknown>) {
const privateKey = await importPKCS8(privateKeyPem, "RS256");
return new SignJWT(payload)
.setProtectedHeader({ alg: "RS256", typ: "JWT" })
.setIssuedAt()
.setExpirationTime("15m")
.sign(privateKey);
}
async function verifyWithRS256(token: string) {
const publicKey = await importSPKI(publicKeyPem, "RS256");
const { payload } = await jwtVerify(token, publicKey, {
algorithms: ["RS256"], // CRITICAL: always restrict algorithms
});
return payload;
}

Rule: Use RS256 whenever tokens cross service boundaries. Use HS256 only when the same service both signs and verifies.
The most famous JWT vulnerability is the alg: none attack, and it's embarrassingly simple. Some JWT libraries used to:
1. Read the alg field from the header
2. If it said alg: "none", skip signature verification entirely

An attacker could take a valid JWT, change the payload (e.g., set "role": "admin"), set alg to "none", remove the signature, and send it. The server would accept it.
// VULNERABLE — never do this
function verifyJwt(token: string) {
const [headerB64, payloadB64, signature] = token.split(".");
const header = JSON.parse(atob(headerB64));
if (header.alg === "none") {
// "No signature needed" — CATASTROPHIC
return JSON.parse(atob(payloadB64));
}
// ... verify signature
}

The fix is simple: always specify the expected algorithm explicitly. Never let the token tell you how to verify it.
// SAFE — algorithm is hardcoded, not read from the token
const { payload } = await jwtVerify(token, key, {
algorithms: ["RS256"], // Only accept RS256 — ignore the header
});

Modern libraries like jose handle this correctly by default, but you should still explicitly pass the algorithms option as defense in depth.
Related to the above: if a server is configured to accept RS256, an attacker might:

1. Take the server's public key — it's public by definition
2. Craft a token with whatever payload they want and "sign" it with HMAC-SHA256, using the public key bytes as the HMAC secret
3. Set the header to alg: "HS256"

If the server reads the alg header and switches to HS256 verification, the public key (which everyone knows) becomes the shared secret. The signature is valid. The attacker has forged a token.
Again, the fix is the same: never trust the algorithm from the token header. Always hardcode it.
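To make the attack concrete, here's a self-contained sketch of the confusion in action. The `vulnerableVerify` function is hypothetical, written for illustration — it trusts the token's alg header, so it accepts a token HMAC'd with the server's own public-key PEM:

```typescript
import { generateKeyPairSync, createHmac } from "crypto";

const b64url = (s: string) => Buffer.from(s).toString("base64url");

// The server's RSA keypair; the public key is, by definition, public.
const { publicKey } = generateKeyPairSync("rsa", { modulusLength: 2048 });
const publicPem = publicKey.export({ type: "spki", format: "pem" }) as string;

// Attacker: claim HS256 and "sign" with the public PEM as the HMAC secret.
const header = b64url(JSON.stringify({ alg: "HS256", typ: "JWT" }));
const payload = b64url(JSON.stringify({ sub: "user_123", role: "admin" }));
const sig = createHmac("sha256", publicPem)
  .update(`${header}.${payload}`)
  .digest("base64url");
const forged = `${header}.${payload}.${sig}`;

// Vulnerable verifier: lets the token pick the algorithm.
function vulnerableVerify(token: string): boolean {
  const [h, p, s] = token.split(".");
  const alg = JSON.parse(Buffer.from(h, "base64url").toString()).alg;
  if (alg === "HS256") {
    // Falls back to HMAC with the key material it has — the public PEM.
    const expected = createHmac("sha256", publicPem)
      .update(`${h}.${p}`)
      .digest("base64url");
    return s === expected;
  }
  return false; // (RS256 path omitted for brevity)
}

console.log(vulnerableVerify(forged)); // true — the forgery is accepted
```

A verifier that hardcodes `algorithms: ["RS256"]` rejects this token before any signature math happens, because the header claims HS256.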
If you use JWTs, you need a refresh token strategy. Sending a long-lived access token is asking for trouble — if it's stolen, the attacker has access for the entire lifetime.
The pattern:

1. A short-lived access token (15 minutes here) that the client presents on every request
2. A long-lived refresh token (30 days here) whose hash is stored server-side, used only to obtain new access tokens
3. Rotation with reuse detection: every refresh issues a new refresh token and marks the old one as used
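The snippets below call a `hashToken` helper that the post doesn't define. A minimal sketch — SHA-256 is enough here because the input is a 64-byte random value, not a low-entropy password:

```typescript
import { createHash } from "crypto";

// Hash refresh tokens before storing them, so a leaked database doesn't
// hand out usable tokens. The token itself is high-entropy (randomBytes(64)),
// so a plain SHA-256 is sufficient — no need for a slow hash like bcrypt.
async function hashToken(token: string): Promise<string> {
  return createHash("sha256").update(token).digest("hex");
}
```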
import { randomBytes } from "crypto";
interface RefreshTokenRecord {
tokenHash: string;
userId: string;
familyId: string; // Groups related tokens together
used: boolean;
expiresAt: Date;
createdAt: Date;
}
async function issueTokenPair(userId: string) {
const familyId = randomBytes(16).toString("hex");
const accessToken = await createAccessToken(userId);
const refreshToken = randomBytes(64).toString("hex");
const refreshTokenHash = await hashToken(refreshToken);
// Store refresh token record
await db.refreshToken.create({
data: {
tokenHash: refreshTokenHash,
userId,
familyId,
used: false,
expiresAt: new Date(Date.now() + 30 * 24 * 60 * 60 * 1000),
createdAt: new Date(),
},
});
return { accessToken, refreshToken };
}

Every time the client uses a refresh token to get a new access token, you issue a new refresh token and invalidate the old one:
async function rotateTokens(incomingRefreshToken: string) {
const tokenHash = await hashToken(incomingRefreshToken);
const record = await db.refreshToken.findUnique({
where: { tokenHash },
});
if (!record) {
// Token doesn't exist — possible theft
return null;
}
if (record.expiresAt < new Date()) {
// Token expired
await db.refreshToken.delete({ where: { tokenHash } });
return null;
}
if (record.used) {
// THIS TOKEN WAS ALREADY USED.
// Someone is replaying it — either the legitimate user
// or an attacker. Either way, kill the entire family.
await db.refreshToken.deleteMany({
where: { familyId: record.familyId },
});
console.error(
`Refresh token reuse detected for user ${record.userId}, family ${record.familyId}. All tokens in family invalidated.`
);
return null;
}
// Mark current token as used (don't delete — we need it for reuse detection)
await db.refreshToken.update({
where: { tokenHash },
data: { used: true },
});
// Issue new pair with the same family ID
const newRefreshToken = randomBytes(64).toString("hex");
const newRefreshTokenHash = await hashToken(newRefreshToken);
await db.refreshToken.create({
data: {
tokenHash: newRefreshTokenHash,
userId: record.userId,
familyId: record.familyId, // Same family
used: false,
expiresAt: new Date(Date.now() + 30 * 24 * 60 * 60 * 1000),
createdAt: new Date(),
},
});
const newAccessToken = await createAccessToken(record.userId);
return { accessToken: newAccessToken, refreshToken: newRefreshToken };
}

Consider this scenario:

1. An attacker steals a user's refresh token (call it token A) — from a log, a backup, or malware
2. The attacker uses token A and receives a fresh pair, including refresh token B
3. Later, the legitimate user's client tries to refresh with token A — which is now marked as used
Without reuse detection, the user just gets an error. The attacker continues with token B. The user logs in again, never knowing their account was compromised.
With reuse detection and family invalidation: when the user tries to use the already-used token A, the system detects reuse, invalidates every token in the family (including B), and forces both the user and the attacker to re-authenticate. The user gets a "please log in again" prompt and might realize something is wrong.
This is the approach used by Auth0, Okta, and Auth.js. It's not perfect — if the attacker uses the token before the legitimate user does, the legitimate user becomes the one who triggers the reuse alert. But it's the best we can do with bearer tokens.
OAuth 2.0 and OpenID Connect are the protocols behind "Sign in with Google/GitHub/Apple." Understanding them is essential even if you use a library, because when things break — and they will — you need to know what's happening at the protocol level.
OAuth 2.0 is an authorization protocol. It answers: "Can this application access this user's data?" The result is an access token that grants specific permissions (scopes).
OpenID Connect (OIDC) is an authentication layer built on top of OAuth 2.0. It answers: "Who is this user?" The result is an ID token (a JWT) that contains user identity information.
When you "Sign in with Google," you're using OIDC. Google tells your app who the user is (authentication). You might also request OAuth scopes to access their calendar or drive (authorization).
This is the flow you should use for web applications. PKCE (Proof Key for Code Exchange) was originally designed for mobile apps but is now recommended for all clients, including server-side applications.
import { randomBytes, createHash } from "crypto";
// Step 1: Generate PKCE values and redirect the user
function initiateOAuthFlow() {
// Code verifier: random 43-128 character string
const codeVerifier = randomBytes(32)
.toString("base64url")
.slice(0, 43);
// Code challenge: SHA256 hash of the verifier, base64url-encoded
const codeChallenge = createHash("sha256")
.update(codeVerifier)
.digest("base64url");
// State: random value for CSRF protection
const state = randomBytes(16).toString("hex");
// Store both in the session (server-side!) before redirecting
// NEVER put the code_verifier in a cookie or URL parameter
session.codeVerifier = codeVerifier;
session.oauthState = state;
const authUrl = new URL("https://accounts.google.com/o/oauth2/v2/auth");
authUrl.searchParams.set("client_id", process.env.GOOGLE_CLIENT_ID!);
authUrl.searchParams.set("redirect_uri", "https://example.com/api/auth/callback/google");
authUrl.searchParams.set("response_type", "code");
authUrl.searchParams.set("scope", "openid email profile");
authUrl.searchParams.set("state", state);
authUrl.searchParams.set("code_challenge", codeChallenge);
authUrl.searchParams.set("code_challenge_method", "S256");
return authUrl.toString();
}

// Step 2: Handle the callback
async function handleOAuthCallback(request: Request) {
const url = new URL(request.url);
const code = url.searchParams.get("code");
const state = url.searchParams.get("state");
const error = url.searchParams.get("error");
// Check for errors from the provider
if (error) {
throw new Error(`OAuth error: ${error}`);
}
// Verify state matches (CSRF protection)
if (state !== session.oauthState) {
throw new Error("State mismatch — possible CSRF attack");
}
// Exchange the authorization code for tokens
const tokenResponse = await fetch("https://oauth2.googleapis.com/token", {
method: "POST",
headers: { "Content-Type": "application/x-www-form-urlencoded" },
body: new URLSearchParams({
grant_type: "authorization_code",
code: code!,
redirect_uri: "https://example.com/api/auth/callback/google",
client_id: process.env.GOOGLE_CLIENT_ID!,
client_secret: process.env.GOOGLE_CLIENT_SECRET!,
code_verifier: session.codeVerifier, // PKCE: proves we started this flow
}),
});
const tokens = await tokenResponse.json();
// tokens.access_token — for API calls to Google
// tokens.id_token — JWT with user identity (OIDC)
// tokens.refresh_token — for getting new access tokens
// Step 3: Verify the ID token and extract user info
const idTokenPayload = await verifyGoogleIdToken(tokens.id_token);
return {
googleId: idTokenPayload.sub,
email: idTokenPayload.email,
name: idTokenPayload.name,
picture: idTokenPayload.picture,
};
}

Every OAuth/OIDC provider exposes these:

- Authorization endpoint — where you redirect the user to authenticate and consent
- Token endpoint — where your server exchanges the authorization code for tokens
- UserInfo endpoint (OIDC) — returns claims about the authenticated user
- JWKS endpoint — publishes the public keys used to verify ID token signatures
- Discovery document (/.well-known/openid-configuration) — a JSON index of all of the above
The state parameter prevents CSRF attacks on the OAuth callback. Without it:

1. The attacker starts an OAuth flow with their own account and captures the authorization code
2. They trick the victim into visiting https://yourapp.com/callback?code=ATTACKER_CODE
3. The victim's browser completes the flow, silently linking the victim's session to the attacker's account — anything the victim then saves is visible to the attacker

With state: your app generates a random value, stores it in the session, and includes it in the authorization URL. When the callback comes, you verify the state matches. The attacker can't forge this because they don't have access to the victim's session.
Auth.js is what I reach for first in most Next.js projects. It handles the OAuth dance, session management, database persistence, and CSRF protection. Here's a production-ready setup.
// src/lib/auth.ts
import NextAuth from "next-auth";
import Google from "next-auth/providers/google";
import GitHub from "next-auth/providers/github";
import Credentials from "next-auth/providers/credentials";
import { PrismaAdapter } from "@auth/prisma-adapter";
import { prisma } from "@/lib/prisma";
import { verifyPassword } from "@/lib/password";
export const { handlers, auth, signIn, signOut } = NextAuth({
adapter: PrismaAdapter(prisma),
// Use database sessions (not JWT) for better security
session: {
strategy: "database",
maxAge: 30 * 24 * 60 * 60, // 30 days
updateAge: 24 * 60 * 60, // Extend session every 24 hours
},
providers: [
Google({
clientId: process.env.GOOGLE_CLIENT_ID!,
clientSecret: process.env.GOOGLE_CLIENT_SECRET!,
// Request specific scopes
authorization: {
params: {
scope: "openid email profile",
prompt: "consent",
access_type: "offline", // Get refresh token
},
},
}),
GitHub({
clientId: process.env.GITHUB_CLIENT_ID!,
clientSecret: process.env.GITHUB_CLIENT_SECRET!,
}),
// Email/password login (use carefully)
Credentials({
credentials: {
email: { label: "Email", type: "email" },
password: { label: "Password", type: "password" },
},
authorize: async (credentials) => {
if (!credentials?.email || !credentials?.password) {
return null;
}
const user = await prisma.user.findUnique({
where: { email: credentials.email as string },
});
if (!user || !user.passwordHash) {
return null;
}
const isValid = await verifyPassword(
credentials.password as string,
user.passwordHash
);
if (!isValid) {
return null;
}
return {
id: user.id,
email: user.email,
name: user.name,
image: user.image,
};
},
}),
],
callbacks: {
// Control who can sign in
async signIn({ user, account }) {
// Block sign-in for banned users
if (user.id) {
const dbUser = await prisma.user.findUnique({
where: { id: user.id },
select: { banned: true },
});
if (dbUser?.banned) return false;
}
return true;
},
// Add custom fields to the session
async session({ session, user }) {
if (session.user) {
session.user.id = user.id;
// Fetch role from database
const dbUser = await prisma.user.findUnique({
where: { id: user.id },
select: { role: true },
});
session.user.role = dbUser?.role ?? "user";
}
return session;
},
},
pages: {
signIn: "/login",
error: "/auth/error",
verifyRequest: "/auth/verify",
},
});

// src/app/api/auth/[...nextauth]/route.ts
import { handlers } from "@/lib/auth";
export const { GET, POST } = handlers;

// src/middleware.ts
import { auth } from "@/lib/auth";
import { NextResponse } from "next/server";
export default auth((req) => {
const isLoggedIn = !!req.auth;
const isAuthPage = req.nextUrl.pathname.startsWith("/login")
|| req.nextUrl.pathname.startsWith("/register");
const isProtectedRoute = req.nextUrl.pathname.startsWith("/dashboard")
|| req.nextUrl.pathname.startsWith("/settings")
|| req.nextUrl.pathname.startsWith("/admin");
const isAdminRoute = req.nextUrl.pathname.startsWith("/admin");
// Redirect logged-in users away from auth pages
if (isLoggedIn && isAuthPage) {
return NextResponse.redirect(new URL("/dashboard", req.nextUrl));
}
// Redirect unauthenticated users to login
if (!isLoggedIn && isProtectedRoute) {
const callbackUrl = encodeURIComponent(req.nextUrl.pathname);
return NextResponse.redirect(
new URL(`/login?callbackUrl=${callbackUrl}`, req.nextUrl)
);
}
// Check admin role
if (isAdminRoute && req.auth?.user?.role !== "admin") {
return NextResponse.redirect(new URL("/dashboard", req.nextUrl));
}
return NextResponse.next();
});
export const config = {
matcher: [
"/dashboard/:path*",
"/settings/:path*",
"/admin/:path*",
"/login",
"/register",
],
};

// src/app/dashboard/page.tsx
import { auth } from "@/lib/auth";
import { redirect } from "next/navigation";
export default async function DashboardPage() {
const session = await auth();
if (!session?.user) {
redirect("/login");
}
return (
<div>
<h1>Welcome, {session.user.name}</h1>
<p>Role: {session.user.role}</p>
</div>
);
}

"use client";
import { useSession } from "next-auth/react";
export function UserMenu() {
const { data: session, status } = useSession();
if (status === "loading") {
return <div>Loading...</div>;
}
if (status === "unauthenticated") {
return <a href="/login">Sign In</a>;
}
return (
<div>
<img
src={session?.user?.image ?? "/default-avatar.png"}
alt={session?.user?.name ?? "User"}
/>
<span>{session?.user?.name}</span>
</div>
);
}

Passkeys are the most significant authentication improvement in years. They're phishing-resistant, replay-resistant, and eliminate the entire category of password-related vulnerabilities. If you're starting a new project in 2026, you should support passkeys.
Passkeys use public-key cryptography, backed by biometrics or device PINs:

1. During registration, the user's device generates a keypair. The private key never leaves the device.
2. The server stores only the public key.
3. To sign in, the server sends a random challenge; the device signs it after verifying the user locally (Face ID, fingerprint, PIN).
4. The server checks the signature against the stored public key.
No shared secret ever crosses the network. There's nothing to phish, nothing to leak, nothing to stuff.
When a passkey is created, it's bound to the origin (e.g., https://example.com). The browser will only use the passkey on the exact origin it was created for. If an attacker creates a lookalike site at https://exarnple.com, the passkey simply won't be offered. This is enforced by the browser, not by user vigilance.
This is fundamentally different from passwords, where users routinely enter their credentials on phishing sites because the page looks right.
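Under the hood this is ordinary challenge–response signing. A toy sketch with Node's crypto and an Ed25519 keypair — real WebAuthn layers structure (origin binding, flags, counters) on top of the same primitive:

```typescript
import { generateKeyPairSync, sign, verify, randomBytes } from "crypto";

// Registration: the device generates a keypair.
// The private key stays on the device; the server stores the public key.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Login: the server issues a fresh random challenge...
const challenge = randomBytes(32);

// ...the device signs it (after local biometric/PIN verification)...
const signature = sign(null, challenge, privateKey);

// ...and the server checks the signature against the stored public key.
console.log(verify(null, challenge, publicKey, signature)); // true

// A captured signature is useless against the next login's fresh challenge:
console.log(verify(null, randomBytes(32), publicKey, signature)); // false
```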
SimpleWebAuthn is the library I recommend. It handles the WebAuthn protocol correctly and has good TypeScript types.
// Server-side: Registration
import {
generateRegistrationOptions,
verifyRegistrationResponse,
} from "@simplewebauthn/server";
import type {
GenerateRegistrationOptionsOpts,
VerifiedRegistrationResponse,
} from "@simplewebauthn/server";
const rpName = "akousa.net";
const rpID = "akousa.net";
const origin = "https://akousa.net";
async function startRegistration(userId: string, userEmail: string) {
// Get user's existing passkeys to exclude them
const existingCredentials = await db.credential.findMany({
where: { userId },
select: { credentialId: true, transports: true },
});
const options: GenerateRegistrationOptionsOpts = {
rpName,
rpID,
userID: new TextEncoder().encode(userId),
userName: userEmail,
attestationType: "none", // We don't need attestation for most apps
excludeCredentials: existingCredentials.map((cred) => ({
id: cred.credentialId,
transports: cred.transports,
})),
authenticatorSelection: {
residentKey: "preferred",
userVerification: "preferred",
},
};
const registrationOptions = await generateRegistrationOptions(options);
// Store the challenge temporarily — we need it for verification
await redis.set(
`webauthn:challenge:${userId}`,
registrationOptions.challenge,
"EX",
300 // 5 minute expiry
);
return registrationOptions;
}
async function finishRegistration(userId: string, response: unknown) {
const expectedChallenge = await redis.get(`webauthn:challenge:${userId}`);
if (!expectedChallenge) {
throw new Error("Challenge expired or not found");
}
let verification: VerifiedRegistrationResponse;
try {
verification = await verifyRegistrationResponse({
response: response as any,
expectedChallenge,
expectedOrigin: origin,
expectedRPID: rpID,
});
} catch (error) {
throw new Error(`Registration verification failed: ${error}`);
}
if (!verification.verified || !verification.registrationInfo) {
throw new Error("Registration verification failed");
}
const { credential } = verification.registrationInfo;
// Store the credential in the database
await db.credential.create({
data: {
userId,
credentialId: credential.id,
publicKey: Buffer.from(credential.publicKey),
counter: credential.counter,
transports: credential.transports ?? [],
},
});
// Clean up
await redis.del(`webauthn:challenge:${userId}`);
return { verified: true };
}

// Server-side: Authentication
import {
generateAuthenticationOptions,
verifyAuthenticationResponse,
} from "@simplewebauthn/server";
async function startAuthentication(userId?: string) {
let allowCredentials;
// If we know the user (e.g., they entered their email), limit to their passkeys
if (userId) {
const credentials = await db.credential.findMany({
where: { userId },
select: { credentialId: true, transports: true },
});
allowCredentials = credentials.map((cred) => ({
id: cred.credentialId,
transports: cred.transports,
}));
}
const options = await generateAuthenticationOptions({
rpID,
allowCredentials,
userVerification: "preferred",
});
// Store challenge for verification
const challengeKey = userId
? `webauthn:auth:${userId}`
: `webauthn:auth:${options.challenge}`;
await redis.set(challengeKey, options.challenge, "EX", 300);
return options;
}
async function finishAuthentication(
response: any,
expectedChallenge: string,
userId: string
) {
const credential = await db.credential.findUnique({
where: { credentialId: response.id },
});
if (!credential) {
throw new Error("Credential not found");
}
const verification = await verifyAuthenticationResponse({
response,
expectedChallenge,
expectedOrigin: origin,
expectedRPID: rpID,
credential: {
id: credential.credentialId,
publicKey: credential.publicKey,
counter: credential.counter,
transports: credential.transports,
},
});
if (!verification.verified) {
throw new Error("Authentication verification failed");
}
// IMPORTANT: Update the counter to prevent replay attacks
await db.credential.update({
where: { credentialId: response.id },
data: {
counter: verification.authenticationInfo.newCounter,
},
});
return { verified: true, userId: credential.userId };
}

// Client-side: Registration
import { startRegistration as webAuthnRegister } from "@simplewebauthn/browser";
async function registerPasskey() {
// Get options from your server
const optionsResponse = await fetch("/api/auth/webauthn/register", {
method: "POST",
});
const options = await optionsResponse.json();
try {
// This triggers the browser's passkey UI (biometric prompt)
const credential = await webAuthnRegister(options);
// Send the credential to your server for verification
const verifyResponse = await fetch("/api/auth/webauthn/register/verify", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify(credential),
});
const result = await verifyResponse.json();
if (result.verified) {
console.log("Passkey registered successfully!");
}
} catch (error) {
if ((error as Error).name === "NotAllowedError") {
console.log("User cancelled the passkey registration");
}
}
}

Two terms you'll encounter:

- Resident key (discoverable credential) — the passkey stores the user handle on the authenticator itself, so users can sign in without typing a username first. residentKey: "preferred" in the registration options asks for this when the authenticator supports it.
- Attestation — cryptographic proof of which make and model of authenticator created the credential. Unless you have a regulatory reason to allow only specific hardware, you don't need it — hence attestationType: "none".

Even with passkeys, you'll encounter scenarios where MFA via TOTP is needed — passkeys as a second factor alongside passwords, or supporting users whose devices don't support passkeys.
TOTP is the protocol behind Google Authenticator, Authy, and 1Password. It works by:

1. Sharing a secret between the server and the user's authenticator app — the QR code is just this secret wrapped in a URI
2. Both sides computing HMAC-SHA1(secret, floor(unix_time / 30))
3. Truncating the result to a 6-digit code that rotates every 30 seconds
import { createHmac, randomBytes } from "crypto";
// Generate a TOTP secret for a user
function generateTOTPSecret(): string {
const buffer = randomBytes(20);
return base32Encode(buffer);
}
function base32Encode(buffer: Buffer): string {
const alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567";
let result = "";
let bits = 0;
let value = 0;
for (const byte of buffer) {
value = (value << 8) | byte;
bits += 8;
while (bits >= 5) {
result += alphabet[(value >>> (bits - 5)) & 0x1f];
bits -= 5;
}
}
if (bits > 0) {
result += alphabet[(value << (5 - bits)) & 0x1f];
}
return result;
}
// Generate the TOTP URI for QR code
function generateTOTPUri(
secret: string,
userEmail: string,
issuer: string = "akousa.net"
): string {
const encodedIssuer = encodeURIComponent(issuer);
const encodedEmail = encodeURIComponent(userEmail);
return `otpauth://totp/${encodedIssuer}:${encodedEmail}?secret=${secret}&issuer=${encodedIssuer}&algorithm=SHA1&digits=6&period=30`;
}

// Verify a TOTP code
function verifyTOTP(secret: string, code: string, window: number = 1): boolean {
const secretBuffer = base32Decode(secret);
const now = Math.floor(Date.now() / 1000);
// Check current time step and adjacent ones (clock drift tolerance)
for (let i = -window; i <= window; i++) {
const timeStep = Math.floor(now / 30) + i;
const expectedCode = generateTOTPCode(secretBuffer, timeStep);
// Constant-time comparison to prevent timing attacks
if (timingSafeEqual(code, expectedCode)) {
return true;
}
}
return false;
}
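verifyTOTP above relies on a base32Decode that the post never shows. A minimal counterpart to base32Encode, using the same unpadded RFC 4648 alphabet:

```typescript
function base32Decode(input: string): Buffer {
  const alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567";
  let bits = 0;
  let value = 0;
  const bytes: number[] = [];
  for (const char of input.replace(/=+$/, "").toUpperCase()) {
    const index = alphabet.indexOf(char);
    if (index === -1) throw new Error(`Invalid base32 character: ${char}`);
    value = (value << 5) | index; // accumulate 5 bits per character
    bits += 5;
    if (bits >= 8) {
      bytes.push((value >>> (bits - 8)) & 0xff); // emit a full byte
      bits -= 8;
    }
  }
  return Buffer.from(bytes); // leftover bits (< 8) are padding — dropped
}
```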
function generateTOTPCode(secret: Buffer, timeStep: number): string {
// Convert time step to 8-byte big-endian buffer
const timeBuffer = Buffer.alloc(8);
timeBuffer.writeBigInt64BE(BigInt(timeStep));
// HMAC-SHA1
const hmac = createHmac("sha1", secret).update(timeBuffer).digest();
// Dynamic truncation
const offset = hmac[hmac.length - 1] & 0x0f;
const code =
((hmac[offset] & 0x7f) << 24) |
((hmac[offset + 1] & 0xff) << 16) |
((hmac[offset + 2] & 0xff) << 8) |
(hmac[offset + 3] & 0xff);
return (code % 1_000_000).toString().padStart(6, "0");
}
function timingSafeEqual(a: string, b: string): boolean {
if (a.length !== b.length) return false;
const bufA = Buffer.from(a);
const bufB = Buffer.from(b);
return createHmac("sha256", "key").update(bufA).digest()
.equals(createHmac("sha256", "key").update(bufB).digest());
}

Users lose their phones. Always generate backup codes during MFA setup:
import { randomBytes, createHash } from "crypto";
function generateBackupCodes(count: number = 10): string[] {
return Array.from({ length: count }, () =>
randomBytes(4).toString("hex").toUpperCase() // 8-character hex codes
);
}
async function storeBackupCodes(userId: string, codes: string[]) {
// Hash the codes before storing — treat them like passwords
const hashedCodes = codes.map((code) =>
createHash("sha256").update(code).digest("hex")
);
await db.backupCode.createMany({
data: hashedCodes.map((hash) => ({
userId,
codeHash: hash,
used: false,
})),
});
// Return the plain codes ONCE for the user to save
// After this, we only have the hashes
return codes;
}
async function verifyBackupCode(userId: string, code: string): Promise<boolean> {
const codeHash = createHash("sha256")
.update(code.toUpperCase().replace(/\s/g, ""))
.digest("hex");
const backupCode = await db.backupCode.findFirst({
where: {
userId,
codeHash,
used: false,
},
});
if (!backupCode) return false;
// Mark as used — each backup code works exactly once
await db.backupCode.update({
where: { id: backupCode.id },
data: { used: true, usedAt: new Date() },
});
return true;
}

MFA recovery is the part most tutorials skip and most real applications botch. Here's what I implement:

- Backup codes, generated at MFA setup, shown exactly once, and stored hashed
- Email-based recovery that disables MFA only after a mandatory 24-hour waiting period
- Notifications to every contact method on file the moment recovery is initiated
The waiting period is critical. If an attacker has compromised the user's email, you don't want to let them disable MFA instantly. The 24-hour delay gives the legitimate user time to notice the email and intervene.
async function initiateAccountRecovery(email: string) {
const user = await db.user.findUnique({ where: { email } });
if (!user) {
// Don't reveal whether the account exists
return { message: "If that email exists, we've sent recovery instructions." };
}
const recoveryToken = randomBytes(32).toString("hex");
const tokenHash = createHash("sha256").update(recoveryToken).digest("hex");
await db.recoveryRequest.create({
data: {
userId: user.id,
tokenHash,
expiresAt: new Date(Date.now() + 24 * 60 * 60 * 1000), // 24 hours
status: "pending",
},
});
// Send email with recovery link
await sendEmail(email, {
subject: "Account Recovery Request",
body: `
A request was made to disable MFA on your account.
If this was you, click the link below after 24 hours: ...
If this was NOT you, please change your password immediately.
`,
});
return { message: "If that email exists, we've sent recovery instructions." };
}

Authentication tells you who someone is. Authorization tells you what they're allowed to do. Getting this wrong is how you end up on the news.
RBAC (Role-Based Access Control): Users have roles, roles have permissions. Simple, easy to reason about, works for most applications.
// RBAC — straightforward role checks
type Role = "user" | "editor" | "admin" | "super_admin";
const ROLE_PERMISSIONS: Record<Role, string[]> = {
user: ["read:own_profile", "update:own_profile", "read:posts"],
editor: ["read:own_profile", "update:own_profile", "read:posts", "create:posts", "update:posts"],
admin: [
"read:own_profile", "update:own_profile",
"read:posts", "create:posts", "update:posts", "delete:posts",
"read:users", "update:users",
],
super_admin: ["*"], // Careful with wildcards
};
function hasPermission(role: Role, permission: string): boolean {
const permissions = ROLE_PERMISSIONS[role];
return permissions.includes("*") || permissions.includes(permission);
}
// Usage in an API route
export async function DELETE(
request: Request,
{ params }: { params: Promise<{ id: string }> }
) {
const session = await auth();
if (!session?.user) {
return Response.json({ error: "Unauthorized" }, { status: 401 });
}
if (!hasPermission(session.user.role as Role, "delete:posts")) {
return Response.json({ error: "Forbidden" }, { status: 403 });
}
const { id } = await params;
await db.post.delete({ where: { id } });
return Response.json({ success: true });
}

ABAC (Attribute-Based Access Control): Permissions depend on attributes of the user, the resource, and the context. More flexible but more complex.
// ABAC — when RBAC isn't enough
interface PolicyContext {
user: {
id: string;
role: string;
department: string;
clearanceLevel: number;
};
resource: {
type: string;
ownerId: string;
classification: string;
department: string;
};
action: string;
environment: {
ipAddress: string;
time: Date;
mfaVerified: boolean;
};
}
function evaluatePolicy(context: PolicyContext): boolean {
const { user, resource, action, environment } = context;
// Guards come first: they can only deny, so no allow rule below can bypass them.
// (If these ran after the allow rules, "read your own resource" would let a
// user read their own confidential documents without MFA.)
// Classified resources require MFA and minimum clearance
if (resource.classification === "confidential") {
if (!environment.mfaVerified) return false;
if (user.clearanceLevel < 3) return false;
}
// Destructive actions blocked outside business hours (9:00-17:00)
if (action === "delete") {
const hour = environment.time.getHours();
if (hour < 9 || hour >= 17) return false;
}
// Users can always read their own resources
if (action === "read" && resource.ownerId === user.id) {
return true;
}
// Admins can read any resource in their department
if (
action === "read" &&
user.role === "admin" &&
user.department === resource.department
) {
return true;
}
return false; // Default deny
}

This is the single most important authorization principle: check permissions at every trust boundary, not just at the UI level.
// BAD — only checking in the component
function DeleteButton({ post }: { post: Post }) {
const { data: session } = useSession();
// This hides the button, but doesn't prevent deletion
if (session?.user?.role !== "admin") return null;
return <button onClick={() => deletePost(post.id)}>Delete</button>;
}
// ALSO BAD — checking in a server action but not the API route
async function deletePostAction(postId: string) {
const session = await auth();
if (session?.user?.role !== "admin") throw new Error("Forbidden");
await db.post.delete({ where: { id: postId } });
}
// An attacker can still call POST /api/posts/123 directly
// GOOD — check at every boundary
// 1. Hide the button in the UI (UX, not security)
// 2. Check in the server action (defense in depth)
// 3. Check in the API route (the actual security boundary)
// 4. Optionally, check in middleware (for route-level protection)

The UI check is for user experience. The server check is for security. Never rely on only one of them.
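One way to keep the layered checks cheap is a single guard shared by the server action and the route handler. A minimal sketch (the `requireRole` helper and its error messages are hypothetical, not from any library):

```typescript
// Hypothetical shared guard: both the server action and the API route call
// this, so neither layer can silently skip the check.
type Role = "user" | "editor" | "admin" | "super_admin";

interface SessionUser {
  id: string;
  role: Role;
}

function requireRole(
  user: SessionUser | null | undefined,
  allowed: Role[]
): SessionUser {
  if (!user) throw new Error("Unauthorized"); // map to 401 at the boundary
  if (!allowed.includes(user.role)) throw new Error("Forbidden"); // map to 403
  return user;
}
```

Both the server action and the route handler would call `requireRole(session?.user, ["admin"])` before touching the database; the UI check stays purely cosmetic.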
Middleware runs before every matched request. It's a good place for coarse-grained access control:
// "Is this user allowed to access this section at all?"
// Fine-grained checks ("Can this user edit THIS post?") belong in the route handler
// because middleware doesn't have access to the request body or route params easily.
// middleware.ts
import { NextResponse } from "next/server";
import { auth } from "@/auth"; // your exported Auth.js instance; path is illustrative
// Role is the union type defined in the RBAC section above
export default auth((req) => {
const path = req.nextUrl.pathname;
const role = req.auth?.user?.role;
// Route-level access control
const routeAccess: Record<string, Role[]> = {
"/admin": ["admin", "super_admin"],
"/editor": ["editor", "admin", "super_admin"],
"/dashboard": ["user", "editor", "admin", "super_admin"],
};
for (const [route, allowedRoles] of Object.entries(routeAccess)) {
if (path.startsWith(route)) {
if (!role || !allowedRoles.includes(role as Role)) {
return NextResponse.redirect(new URL("/unauthorized", req.nextUrl));
}
}
}
return NextResponse.next();
});

// Next.js only runs middleware on paths matched by this config export.
// Scope it to the protected sections (paths here are illustrative):
export const config = {
matcher: ["/admin/:path*", "/editor/:path*", "/dashboard/:path*"],
};

These are the attacks I see most often in real codebases. Understanding them is essential.
The attack: An attacker creates a valid session on your site, then tricks a victim into using that session ID (e.g., via a URL parameter or by setting a cookie through a subdomain). When the victim logs in, the attacker's session now has an authenticated user.
The fix: Always regenerate the session ID after successful authentication. Never let a pre-authentication session ID carry over to a post-authentication session.
async function login(credentials: { email: string; password: string }, request: Request) {
const user = await verifyCredentials(credentials);
if (!user) throw new Error("Invalid credentials");
// CRITICAL: Delete the old session and create a new one
const oldSessionId = getSessionIdFromCookie(request);
if (oldSessionId) {
await redis.del(`session:${oldSessionId}`);
}
// Create a completely new session with a new ID
const newSessionId = await createSession(user.id, request);
return newSessionId;
}

The attack: A user is logged into your site. They visit a malicious page that makes a request to your site. Because cookies are sent automatically, the request is authenticated.
The modern fix: SameSite cookies. Setting SameSite: Lax (the default in most browsers now) prevents cookies from being sent on cross-origin POST requests, which covers most CSRF scenarios.
// SameSite=Lax covers most CSRF scenarios:
// - Blocks cookies on cross-origin POST, PUT, DELETE
// - Allows cookies on cross-origin GET (top-level navigation)
// This is fine because GET requests shouldn't have side effects
cookieStore.set("session_id", sessionId, {
httpOnly: true,
secure: true,
sameSite: "lax", // This is your CSRF protection
maxAge: 86400,
path: "/",
});

For APIs that accept JSON, you get additional protection for free: the Content-Type: application/json header can't be set by HTML forms, and the browser's CORS preflight stops cross-origin JavaScript from sending custom headers unless your server explicitly allows it.
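That implicit protection can be made explicit with a cheap guard. A sketch (the helper name is mine, not a framework API):

```typescript
// HTML forms can only submit these content types, and they can't set custom
// headers, so a JSON-only endpoint can simply reject anything form-shaped.
const FORM_CONTENT_TYPES = [
  "application/x-www-form-urlencoded",
  "multipart/form-data",
  "text/plain",
];

function couldBeFormSubmission(contentType: string | null): boolean {
  if (!contentType) return true; // no header: be conservative, treat as a form
  const mime = contentType.split(";")[0].trim().toLowerCase();
  return FORM_CONTENT_TYPES.includes(mime);
}
```

A state-changing JSON route would return 403 whenever `couldBeFormSubmission(request.headers.get("content-type"))` is true.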
If you need stronger guarantees (e.g., you accept form submissions), use the double-submit cookie pattern or a synchronizer token. Auth.js handles this for you.
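For reference, the double-submit pattern fits in a few lines. This is a sketch of the idea, not the Auth.js implementation (helper names are mine):

```typescript
import { randomBytes, timingSafeEqual } from "node:crypto";

// Double-submit cookie sketch: the server sets a random token in a cookie,
// and the client echoes it back in a request header. A cross-site form sends
// the cookie automatically but cannot read it, so it can't produce the header.
function issueCsrfToken(): string {
  return randomBytes(32).toString("hex");
}

function isCsrfValid(
  cookieToken: string | undefined,
  headerToken: string | undefined
): boolean {
  if (!cookieToken || !headerToken) return false;
  const a = Buffer.from(cookieToken);
  const b = Buffer.from(headerToken);
  // Constant-time compare to avoid leaking the token byte by byte
  return a.length === b.length && timingSafeEqual(a, b);
}
```

The cookie holding the token must not be HttpOnly (the client has to read it to echo it), which is fine because the token grants nothing by itself.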
The attack: An attacker crafts an OAuth callback URL that redirects to their site after authentication: https://yourapp.com/callback?redirect_to=https://evil.com/steal-token
If your callback handler blindly redirects to the redirect_to parameter, the user ends up on the attacker's site, potentially with tokens in the URL.
// VULNERABLE
async function handleCallback(request: Request) {
const url = new URL(request.url);
const redirectTo = url.searchParams.get("redirect_to") ?? "/";
// ... authenticate the user ...
return Response.redirect(redirectTo); // Could be https://evil.com!
}
// SAFE
async function handleCallback(request: Request) {
const url = new URL(request.url);
const redirectTo = url.searchParams.get("redirect_to") ?? "/";
// Validate the redirect URL
const safeRedirect = sanitizeRedirectUrl(redirectTo, request.url);
// ... authenticate the user ...
return Response.redirect(safeRedirect);
}
function sanitizeRedirectUrl(redirect: string, baseUrl: string): string {
try {
const url = new URL(redirect, baseUrl);
const base = new URL(baseUrl);
// Only allow redirects to the same origin
if (url.origin !== base.origin) {
return "/";
}
// Only allow path redirects (no javascript: or data: URIs)
if (!url.pathname.startsWith("/")) {
return "/";
}
return url.pathname + url.search;
} catch {
return "/";
}
}

If you put tokens in URLs (don't), they'll leak through the Referer header when users click links. This has caused real breaches, including at GitHub.
Rules:
- Never put tokens or session IDs in URLs
- Set Referrer-Policy: strict-origin-when-cross-origin (or stricter)

// In your Next.js middleware or layout
const headers = new Headers();
headers.set("Referrer-Policy", "strict-origin-when-cross-origin");

A less well-known attack: some JWT libraries support a jwk or jku header that tells the verifier where to find the public key. An attacker can:

1. Generate their own key pair
2. Sign a forged token with their own private key
3. Set the jwk header to point to their public key

If your library blindly fetches and uses the key from the jwk header, the signature verifies. The fix: never allow the token to specify its own verification key. Always use keys from your own configuration.
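The safe pattern is easy to demonstrate with a minimal HS256 verifier. This illustrates the principle only; it is not a replacement for a vetted library like jose, and it skips claims validation (exp, aud, iss):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verifies an HS256 JWT using ONLY the server's configured secret.
// The token's header is parsed but never trusted for key material:
// jwk / jku are ignored, and any alg other than HS256 is rejected.
function verifyHs256(
  token: string,
  secret: string
): Record<string, unknown> | null {
  const parts = token.split(".");
  if (parts.length !== 3) return null;
  const [headerB64, payloadB64, signatureB64] = parts;
  let header: { alg?: string };
  try {
    header = JSON.parse(Buffer.from(headerB64, "base64url").toString("utf8"));
  } catch {
    return null;
  }
  if (header.alg !== "HS256") return null; // pin the algorithm
  const expected = createHmac("sha256", secret)
    .update(`${headerB64}.${payloadB64}`)
    .digest();
  const actual = Buffer.from(signatureB64, "base64url");
  if (actual.length !== expected.length || !timingSafeEqual(actual, expected)) {
    return null;
  }
  return JSON.parse(Buffer.from(payloadB64, "base64url").toString("utf8"));
}
```

The key property: nothing in the token influences which key or algorithm is used for verification.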
After years of building authentication systems, here's what I actually use today.
This is my default stack for new projects:
The session strategy is database-backed (not JWT), which gives me instant revocation and simple session management.
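"Instant revocation" is concrete: with database sessions, logging a user out of every device is a single delete against the session store. An in-memory sketch of the shape of that operation (with the Prisma adapter it would be one deleteMany against the Session table):

```typescript
// In-memory stand-in for the session store, to show the shape of revocation.
// With the Auth.js Prisma adapter this would be a single
// prisma.session.deleteMany({ where: { userId } }) call.
const sessionStore = new Map<string, { userId: string }>();

function revokeAllSessions(userId: string): number {
  let revoked = 0;
  for (const [sessionId, session] of sessionStore) {
    if (session.userId === userId) {
      sessionStore.delete(sessionId);
      revoked++;
    }
  }
  return revoked;
}
```

With JWT sessions there is no equivalent: every issued token stays valid until it expires unless you build a denylist.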
// This is my typical auth.ts for a new project
import NextAuth from "next-auth";
import Google from "next-auth/providers/google";
import GitHub from "next-auth/providers/github";
import Passkey from "next-auth/providers/passkey";
import { PrismaAdapter } from "@auth/prisma-adapter";
import { prisma } from "@/lib/prisma";
export const { handlers, auth, signIn, signOut } = NextAuth({
adapter: PrismaAdapter(prisma),
session: { strategy: "database" },
providers: [
Google,
GitHub,
Passkey({
// Auth.js v5 has built-in passkey support
// This uses SimpleWebAuthn under the hood
}),
],
experimental: {
enableWebAuthn: true,
},
});

I reach for a managed auth provider when:
The trade-offs of managed providers:
- jose for JWTs, @simplewebauthn/server for passkeys, bcrypt or argon2 for passwords. Never hand-rolled.
- Session IDs: crypto.randomBytes(32) minimum. UUID v4 is acceptable but has less entropy than raw random bytes.
import { hash, verify } from "@node-rs/argon2";
// Argon2id is the recommended algorithm
// These are reasonable defaults for a web application
async function hashPassword(password: string): Promise<string> {
return hash(password, {
memoryCost: 65536, // 64 MB
timeCost: 3, // 3 iterations
parallelism: 4, // 4 threads
});
}
async function verifyPassword(
password: string,
hashedPassword: string
): Promise<boolean> {
try {
return await verify(hashedPassword, password);
} catch {
return false;
}
}

Why argon2id over bcrypt? Argon2id is memory-hard, which means attacking it requires not just CPU power but also large amounts of RAM. This makes GPU and ASIC attacks significantly more expensive. Bcrypt is still fine — it's not broken — but argon2id is the better choice for new projects.
Before shipping any authentication system, verify:
- Session cookies are HttpOnly, Secure, SameSite=Lax or Strict
- JWT verification pins the expected algorithm (never trusts the token's alg header)
- A Referrer-Policy header is set

This list isn't exhaustive, but it covers the vulnerabilities I've seen most often in production systems.
Authentication is one of those domains where the landscape keeps evolving, but the fundamentals stay the same: verify identity, issue the minimum necessary credentials, check permissions at every boundary, and assume breach.
The biggest shift in 2026 is passkeys going mainstream. Browser support is universal, platform support (iCloud Keychain, Google Password Manager) makes the UX seamless, and the security properties are genuinely superior to anything we've had before. If you're building a new application, make passkeys your primary login method and treat passwords as the fallback.
The second-biggest shift is that rolling your own auth has become harder to justify. Auth.js v5, Clerk, and similar solutions handle the hard parts correctly. The only reason to go custom is when your requirements genuinely don't fit any existing solution — and that's rarer than most developers think.
Whatever you choose, test your auth the way an attacker would. Try replaying tokens, forging signatures, accessing routes you shouldn't, and manipulating redirect URLs. The bugs you find before launch are the ones that don't make the news.