Bun as a runtime, package manager, bundler, and test runner. Real benchmarks, Node.js compatibility gaps, migration patterns, and where I use Bun in production today.
Every few years, the JavaScript ecosystem gets a new runtime and the discourse follows a predictable arc. Hype. Benchmarks. "X is dead." Reality check. Settling into the actual use cases where the new tool genuinely shines.
Bun is in the middle of that arc right now. And unlike most challengers, it's sticking around. Not because it's "faster" (though it often is), but because it's solving a genuinely different problem: the JavaScript toolchain has too many moving parts, and Bun collapses them into one.
I've been using Bun in various capacities for over a year now. Some of it in production. Some of it replacing tools I thought I'd never replace. This post is an honest accounting of what works, what doesn't, and where the gaps still matter.
The first misconception to clear up: Bun is not "a faster Node.js." That framing undersells it.
Bun is four tools in one binary: a JavaScript/TypeScript runtime, a package manager, a bundler, and a test runner.
The key architectural difference from Node.js is the engine. Node.js uses V8 (Chrome's engine). Bun uses JavaScriptCore (Safari's engine). Both are mature, production-grade engines, but they make different tradeoffs. JavaScriptCore tends to have faster startup times and lower memory overhead. V8 tends to have better peak throughput for long-running computations. In practice, these differences are smaller than you'd think for most workloads.
The other major differentiator: Bun is written in Zig, a systems programming language that sits at roughly the same level as C but with better memory safety guarantees. This is why Bun can be so aggressive with performance — Zig gives you the kind of low-level control that C provides without the footgun density of C.
# Check your Bun version
bun --version
# Run a TypeScript file directly — no tsconfig, no compilation step
bun run server.ts
# Install packages
bun install
# Run tests
bun test
# Bundle for production
bun build ./src/index.ts --outdir ./dist
That's one binary doing the job of node + npm + esbuild + vitest. Love it or hate it, that's a compelling reduction in complexity.
Let me be direct about this: Bun's marketing benchmarks are cherry-picked. Not fraudulent — cherry-picked. They show the scenarios where Bun performs best, which is exactly what you'd expect from marketing material. The problem is that people extrapolate from those benchmarks to claim Bun is "25x faster" at everything, which it absolutely is not.
Here's where Bun is genuinely, meaningfully faster:
Startup time. This is Bun's biggest genuine advantage and it's not even close.
# Measuring startup time — run each 100 times
hyperfine --warmup 5 'node -e "console.log(1)"' 'bun -e "console.log(1)"'
# Typical results:
# node: ~40ms
# bun: ~6ms
That's roughly a 6-7x difference in startup time. For scripts, CLI tools, and serverless functions where cold start matters, this is significant. For a long-running server process that starts once and runs for weeks, it's irrelevant.
Package installation. This is the other area where Bun embarrasses the competition.
# Clean install benchmark — delete node_modules and lockfile first
rm -rf node_modules bun.lockb package-lock.json
# Time npm
time npm install
# Real: ~18.4s (typical medium-sized project)
# Time bun
time bun install
# Real: ~2.1s
That's an 8-9x difference, and it's consistent. The reasons are primarily:
bun.lockb is a binary format, not JSON. Faster to read and write.
Bun's built-in HTTP server is fast, but the comparisons need context.
# Quick and dirty benchmark with bombardier
# Testing a simple "Hello World" response
# Bun server
bombardier -c 100 -d 10s http://localhost:3000
# Requests/sec: ~105,000
# Node.js (native http module)
bombardier -c 100 -d 10s http://localhost:3001
# Requests/sec: ~48,000
# Node.js (Express)
bombardier -c 100 -d 10s http://localhost:3002
# Requests/sec: ~15,000
Bun vs. raw Node.js: roughly 2x for trivial responses. Bun vs. Express: roughly 7x, but that's unfair because Express adds middleware overhead. The moment you add real logic — database queries, authentication, JSON serialization of actual data — the gap narrows dramatically.
CPU-bound computation:
// fibonacci.ts — this is engine-bound, not runtime-bound
function fib(n: number): number {
if (n <= 1) return n;
return fib(n - 1) + fib(n - 2);
}
const start = performance.now();
console.log(fib(42));
console.log(`${(performance.now() - start).toFixed(0)}ms`);
bun run fibonacci.ts # ~1650ms
node fibonacci.ts # ~1580ms
Node.js (V8) actually wins slightly here. V8's JIT compiler is more aggressive on hot loops. For CPU-bound work, the engine differences are a wash — sometimes V8 wins, sometimes JSC wins, and the differences are within noise.
Don't trust anyone's benchmarks, including mine. Here's how to measure what matters for your specific workload:
# Install hyperfine for proper benchmarking
brew install hyperfine # macOS
# or: cargo install hyperfine
# Benchmark startup + execution of your actual app
hyperfine --warmup 3 \
'node dist/server.js' \
'bun src/server.ts' \
--prepare 'sleep 0.1'
# For HTTP servers, use bombardier or wrk
# Important: test with realistic payloads, not "Hello World"
bombardier -c 50 -d 30s -l http://localhost:3000/api/users
# Memory comparison
/usr/bin/time -v node server.js # Linux
/usr/bin/time -l bun server.ts # macOS
The rule of thumb: if your bottleneck is I/O (file system, network, database), Bun's advantage is modest. If your bottleneck is startup time or toolchain speed, Bun wins big. If your bottleneck is raw computation, it's a toss-up.
This is where I've fully switched. Even on projects where I run Node.js in production, I use bun install for local development and CI. It's just faster, and the compatibility is excellent.
# Install all dependencies from package.json
bun install
# Add a dependency
bun add express
# Add a dev dependency
bun add -d vitest
# Remove a dependency
bun remove express
# Update a dependency
bun update express
# Install a specific version
bun add express@4.18.2
If you've used npm or yarn, this is entirely familiar. The flags are slightly different (-d instead of --save-dev), but the mental model is identical.
Bun uses bun.lockb, a binary lockfile. This is both its superpower and its biggest friction point.
The good: It's dramatically faster to read and write. The binary format means Bun can parse the lockfile in microseconds, not the hundreds of milliseconds npm spends parsing package-lock.json.
The bad: You can't review it in a diff. If you're on a team and someone updates a dependency, you can't look at the lockfile diff in a PR and see what changed. This matters more than speed advocates want to admit.
# You can dump the lockfile to human-readable format
bun bun.lockb > lockfile-dump.txt
# Or use the built-in text output
bun install --yarn
# This generates a yarn.lock alongside bun.lockb
My approach: I commit bun.lockb to the repo and also generate a yarn.lock or package-lock.json as a readable fallback. Belt and suspenders.
Bun supports npm/yarn-style workspaces:
{
"name": "my-monorepo",
"workspaces": [
"packages/*",
"apps/*"
]
}
# Install dependencies for all workspaces
bun install
# Run a script in a specific workspace
bun run --filter packages/shared build
# Add a dependency to a specific workspace
bun add react --filter apps/web
Workspace support is solid and has improved significantly. The main gap compared to pnpm is that Bun's workspace dependency resolution is less strict — pnpm's strictness is a feature for monorepos because it catches phantom dependencies.
You can drop bun install into almost any existing Node.js project. It reads package.json, respects .npmrc for registry configuration, and handles peerDependencies correctly. The transition is typically:
# Step 1: Delete existing lockfile and node_modules
rm -rf node_modules package-lock.json yarn.lock pnpm-lock.yaml
# Step 2: Install with Bun
bun install
# Step 3: Verify your app still works
bun run dev
# or: node dist/server.js (Bun package manager, Node runtime)
I've done this on a dozen projects and had zero issues with the package manager itself. The only gotcha is if your CI pipeline specifically looks for package-lock.json — you'll need to update it to handle bun.lockb.
This is the section where I have to be the most careful, because the situation changes every month. As of early 2026, here's the honest picture.
The vast majority of npm packages work without modification. Bun implements most Node.js built-in modules:
// These all work as expected in Bun
import fs from "node:fs";
import path from "node:path";
import crypto from "node:crypto";
import { Buffer } from "node:buffer";
import { EventEmitter } from "node:events";
import { Readable, Writable } from "node:stream";
import http from "node:http";
import https from "node:https";
import { URL, URLSearchParams } from "node:url";
import os from "node:os";
import child_process from "node:child_process";
Both CommonJS and ESM work. require() and import can coexist. TypeScript runs without any compilation step — Bun strips types at parse time.
Frameworks: most mainstream options work. Express, for example, runs on Bun's node:http implementation.
The gaps tend to fall into a few categories:
Native addons (node-gyp): If a package uses C++ addons compiled with node-gyp, it might not work with Bun. Bun has its own FFI system and supports many native modules, but the coverage isn't 100%. For example, bcrypt (the native one) has had issues — use bcryptjs instead.
# Check if a package uses native addons
ls node_modules/your-package/binding.gyp # If this exists, it's native
Specific Node.js internals: Some packages reach into Node.js internals like process.binding() or use V8-specific APIs. These won't work in Bun since it runs on JavaScriptCore.
// This will NOT work in Bun — V8-specific
const v8 = require("v8");
v8.serialize({ data: "test" });
// This WILL work — use Bun's equivalent or a cross-runtime approach
const encoded = new TextEncoder().encode(JSON.stringify({ data: "test" }));
Worker threads: Bun supports Web Workers and node:worker_threads, but there are edge cases. Some advanced usage patterns — especially around SharedArrayBuffer and Atomics — can behave differently.
vm module: node:vm has partial support. If your code or a dependency uses vm.createContext() extensively (some template engines do), test thoroughly.
Bun maintains an official compatibility tracker. Check it before committing to Bun for a project:
# Run Bun's built-in compatibility check on your project
bun --bun node_modules/.bin/your-tool
# The --bun flag forces Bun's runtime even for node_modules scripts
My recommendation: don't assume compatibility. Run your test suite under Bun before deciding. It takes five minutes and saves hours of debugging.
# Quick compatibility check — run your full test suite under Bun
bun test # If you use bun test runner
# or
bun run vitest # If you use vitest
This is where Bun gets interesting. Instead of just re-implementing Node.js APIs, Bun provides its own APIs that are designed to be simpler and faster.
Bun.serve() is the API I use most. It's clean, fast, and WebSocket support is built right in.
const server = Bun.serve({
port: 3000,
fetch(req) {
const url = new URL(req.url);
if (url.pathname === "/") {
return new Response("Hello from Bun!", {
headers: { "Content-Type": "text/plain" },
});
}
if (url.pathname === "/api/users") {
const users = [
{ id: 1, name: "Alice" },
{ id: 2, name: "Bob" },
];
return Response.json(users);
}
return new Response("Not Found", { status: 404 });
},
});
console.log(`Server running at http://localhost:${server.port}`);
A few things to notice:
The fetch handler receives a standard Request and returns a standard Response. If you've written a Cloudflare Worker, this feels identical.
Response.json() — built-in JSON response helper.
Bun.serve is a global. No require("http").
Here's a more realistic example with routing, JSON body parsing, and error handling:
import { Database } from "bun:sqlite";
const db = new Database("app.db");
db.run(`
CREATE TABLE IF NOT EXISTS todos (
id INTEGER PRIMARY KEY AUTOINCREMENT,
title TEXT NOT NULL,
completed INTEGER DEFAULT 0,
created_at TEXT DEFAULT (datetime('now'))
)
`);
const server = Bun.serve({
port: process.env.PORT || 3000,
async fetch(req) {
const url = new URL(req.url);
const method = req.method;
try {
// GET /api/todos
if (url.pathname === "/api/todos" && method === "GET") {
const todos = db.query("SELECT * FROM todos ORDER BY created_at DESC").all();
return Response.json(todos);
}
// POST /api/todos
if (url.pathname === "/api/todos" && method === "POST") {
const body = await req.json();
if (!body.title || typeof body.title !== "string") {
return Response.json({ error: "Title is required" }, { status: 400 });
}
const stmt = db.prepare("INSERT INTO todos (title) VALUES (?) RETURNING *");
const todo = stmt.get(body.title);
return Response.json(todo, { status: 201 });
}
// DELETE /api/todos/:id
const deleteMatch = url.pathname.match(/^\/api\/todos\/(\d+)$/);
if (deleteMatch && method === "DELETE") {
const id = parseInt(deleteMatch[1], 10);
db.run("DELETE FROM todos WHERE id = ?", [id]);
return new Response(null, { status: 204 });
}
return Response.json({ error: "Not found" }, { status: 404 });
} catch (error) {
console.error("Request error:", error);
return Response.json({ error: "Internal server error" }, { status: 500 });
}
},
});
console.log(`Server running on port ${server.port}`);
That's a full CRUD API with SQLite in about 50 lines. No Express, no ORM, no middleware chain. For small APIs and internal tools, this is my go-to setup now.
Bun's file API is refreshingly simple compared to fs.readFile():
// Reading files
const file = Bun.file("./config.json");
const text = await file.text(); // Read as string
const json = await file.json(); // Parse as JSON directly
const bytes = await file.arrayBuffer(); // Read as ArrayBuffer
const stream = file.stream(); // Read as ReadableStream
// File metadata
console.log(file.size); // Size in bytes
console.log(file.type); // MIME type (e.g., "application/json")
// Writing files
await Bun.write("./output.txt", "Hello, World!");
await Bun.write("./data.json", JSON.stringify({ key: "value" }));
await Bun.write("./copy.png", Bun.file("./original.png"));
// Write a Response body to a file
const response = await fetch("https://example.com/data.json");
await Bun.write("./downloaded.json", response);
The Bun.file() API is lazy — it doesn't read the file until you call .text(), .json(), etc. This means you can pass Bun.file() references around without incurring I/O costs until you actually need the data.
WebSockets are first-class in Bun.serve():
const server = Bun.serve({
port: 3000,
fetch(req, server) {
const url = new URL(req.url);
if (url.pathname === "/ws") {
const upgraded = server.upgrade(req, {
data: {
userId: url.searchParams.get("userId"),
joinedAt: Date.now(),
},
});
if (!upgraded) {
return new Response("WebSocket upgrade failed", { status: 400 });
}
return undefined;
}
return new Response("Use /ws for WebSocket connections");
},
websocket: {
open(ws) {
console.log(`Client connected: ${ws.data.userId}`);
ws.subscribe("chat");
},
message(ws, message) {
// Broadcast to all subscribers
server.publish("chat", `${ws.data.userId}: ${message}`);
},
close(ws) {
console.log(`Client disconnected: ${ws.data.userId}`);
ws.unsubscribe("chat");
},
},
});
The server.publish() and ws.subscribe() pattern is built-in pub/sub. No Redis, no separate WebSocket library. For simple real-time features, this is incredibly convenient.
This surprised me the most. Bun ships with SQLite built right into the runtime:
import { Database } from "bun:sqlite";
// Open or create a database
const db = new Database("myapp.db");
// WAL mode for better concurrent read performance
db.exec("PRAGMA journal_mode = WAL");
// Create tables
db.run(`
CREATE TABLE IF NOT EXISTS users (
id INTEGER PRIMARY KEY AUTOINCREMENT,
email TEXT UNIQUE NOT NULL,
name TEXT NOT NULL,
created_at TEXT DEFAULT (datetime('now'))
)
`);
// Prepared statements (reusable, faster for repeated queries)
const insertUser = db.prepare(
"INSERT INTO users (email, name) VALUES ($email, $name) RETURNING *"
);
const findByEmail = db.prepare(
"SELECT * FROM users WHERE email = $email"
);
// Usage
const user = insertUser.get({
$email: "alice@example.com",
$name: "Alice",
});
console.log(user); // { id: 1, email: "alice@example.com", name: "Alice", ... }
// Transactions
const insertMany = db.transaction((users: { email: string; name: string }[]) => {
for (const user of users) {
insertUser.run({ $email: user.email, $name: user.name });
}
return users.length;
});
const count = insertMany([
{ email: "bob@example.com", name: "Bob" },
{ email: "carol@example.com", name: "Carol" },
]);
console.log(`Inserted ${count} users`);
This is synchronous SQLite with the performance of a C library (because it is one — Bun embeds libsqlite3 directly). For CLI tools, local-first apps, and small services, built-in SQLite means zero external dependencies for your data layer.
bun test is a drop-in replacement for Jest in most cases. It uses the same describe/it/expect API and supports most Jest matchers.
// math.test.ts
import { describe, it, expect } from "bun:test";
describe("math utilities", () => {
it("adds numbers correctly", () => {
expect(1 + 2).toBe(3);
});
it("handles floating point", () => {
expect(0.1 + 0.2).toBeCloseTo(0.3);
});
});
# Run all tests
bun test
# Run specific file
bun test math.test.ts
# Run tests matching a pattern
bun test --test-name-pattern "adds numbers"
# Watch mode
bun test --watch
# Coverage
bun test --coverage
Bun supports Jest-compatible mocking:
import { describe, it, expect, mock, spyOn } from "bun:test";
import { fetchUsers } from "./api";
// Mock a module
mock.module("./database", () => ({
query: mock(() => [{ id: 1, name: "Alice" }]),
}));
describe("fetchUsers", () => {
it("returns users from database", async () => {
const users = await fetchUsers();
expect(users).toHaveLength(1);
expect(users[0].name).toBe("Alice");
});
});
// Spy on an object method
describe("console", () => {
it("tracks console.log calls", () => {
const logSpy = spyOn(console, "log");
console.log("test message");
expect(logSpy).toHaveBeenCalledWith("test message");
logSpy.mockRestore();
});
});
I use Vitest for this project (and most of my projects). Here's why I haven't fully switched:
Where bun test wins:
bun test starts executing tests faster than Vitest can finish loading its config.
No vitest.config.ts needed for basic setups.
Where Vitest still wins:
For a new project with simple testing needs, I'd use bun test. For an established project with Testing Library, MSW, and complex mocking, I'm keeping Vitest.
bun build is a fast JavaScript/TypeScript bundler. It's not a webpack replacement — it's more in the esbuild category: fast, opinionated, and focused on the common cases.
# Bundle a single entry point
bun build ./src/index.ts --outdir ./dist
# Bundle for different targets
bun build ./src/index.ts --outdir ./dist --target browser
bun build ./src/index.ts --outdir ./dist --target bun
bun build ./src/index.ts --outdir ./dist --target node
# Minify
bun build ./src/index.ts --outdir ./dist --minify
# Generate sourcemaps
bun build ./src/index.ts --outdir ./dist --sourcemap external
const result = await Bun.build({
entrypoints: ["./src/index.ts", "./src/worker.ts"],
outdir: "./dist",
target: "browser",
minify: {
whitespace: true,
identifiers: true,
syntax: true,
},
splitting: true, // Code splitting
sourcemap: "external",
external: ["react", "react-dom"], // Don't bundle these
naming: "[dir]/[name]-[hash].[ext]",
define: {
"process.env.NODE_ENV": JSON.stringify("production"),
},
});
if (!result.success) {
console.error("Build failed:");
for (const log of result.logs) {
console.error(log);
}
process.exit(1);
}
for (const output of result.outputs) {
console.log(`${output.path} — ${output.size} bytes`);
}
Bun supports tree-shaking for ESM:
// utils.ts
export function used() {
return "I'll be in the bundle";
}
export function unused() {
return "I'll be tree-shaken away";
}
// index.ts
import { used } from "./utils";
console.log(used());
bun build ./src/index.ts --outdir ./dist --minify
# The `unused` function won't appear in the output
For building a library or a simple web app's JS bundle, bun build is excellent. For complex app builds with CSS modules, image optimization, and custom chunk strategies, you'll still want a full bundler.
One genuinely unique feature: compile-time code execution via macros.
// build-info.ts — this file runs at BUILD TIME, not runtime
export function getBuildInfo() {
return {
builtAt: new Date().toISOString(),
gitSha: require("child_process")
.execSync("git rev-parse --short HEAD")
.toString()
.trim(),
nodeVersion: process.version,
};
}
// app.ts
import { getBuildInfo } from "./build-info" with { type: "macro" };
// getBuildInfo() executes at bundle time
// The result is inlined as a static value
const info = getBuildInfo();
console.log(`Built at ${info.builtAt}, commit ${info.gitSha}`);
After bundling, getBuildInfo() is replaced with the literal object — no function call at runtime, no import of child_process. The code ran during the build and the result was inlined. This is powerful for embedding build metadata, feature flags, or environment-specific configuration.
This is the question I get asked the most, so let me be very specific.
Bun as a package manager for Next.js — works perfectly:
# Use Bun to install dependencies, then use Node.js to run Next.js
bun install
bun run dev # This actually runs the "dev" script via Node.js by default
bun run build
bun run start
This is what I do for every Next.js project. The bun run <script> command reads the scripts section of package.json and executes it. By default, it uses the system's Node.js for the actual execution. You get Bun's fast package installation without changing your runtime.
Bun runtime for Next.js development:
# Force Next.js to run under Bun's runtime
bun --bun run dev
This works for development in most cases. The --bun flag tells Bun to use its own runtime instead of delegating to Node.js. Hot module replacement works. API routes work. Server components work.
Bun runtime for Next.js production builds:
# Build with Bun runtime
bun --bun run build
# Start production server with Bun runtime
bun --bun run start
This works for many projects, but I've encountered edge cases. My recommendation:
Use Bun as the package manager. Use Node.js as the runtime. This gives you the speed benefits of bun install without any compatibility risk.
{
"scripts": {
"dev": "next dev --turbopack",
"build": "next build",
"start": "next start"
}
}
# Daily workflow
bun install # Fast package installation
bun run dev # Runs "next dev" via Node.js
bun run build # Runs "next build" via Node.js
When Bun's Node.js compatibility reaches 100% for Next.js's internal usage (it's close, but not there yet), I'll switch. Until then, the package manager alone saves me enough time to justify the install.
The official Bun Docker image is well-maintained and production-ready.
FROM oven/bun:1 AS base
WORKDIR /app
# Install dependencies
FROM base AS deps
COPY package.json bun.lockb ./
RUN bun install --frozen-lockfile --production
# Build (if needed)
FROM base AS build
COPY package.json bun.lockb ./
RUN bun install --frozen-lockfile
COPY . .
RUN bun run build
# Production
FROM base AS production
WORKDIR /app
# Don't run as root
RUN addgroup --system --gid 1001 appgroup && \
adduser --system --uid 1001 appuser
USER appuser
COPY --from=deps /app/node_modules ./node_modules
COPY --from=build /app/dist ./dist
COPY --from=build /app/package.json ./
EXPOSE 3000
CMD ["bun", "run", "dist/server.js"]
# Build stage: full Bun image with all dependencies
FROM oven/bun:1 AS builder
WORKDIR /app
COPY package.json bun.lockb ./
RUN bun install --frozen-lockfile
COPY . .
RUN bun build ./src/index.ts --target bun --outdir ./dist --minify
# Runtime stage: smaller base image
FROM oven/bun:1-slim AS runtime
WORKDIR /app
RUN addgroup --system --gid 1001 appgroup && \
adduser --system --uid 1001 appuser
USER appuser
COPY --from=builder /app/dist ./dist
EXPOSE 3000
CMD ["bun", "run", "dist/index.js"]This is one of Bun's killer features for deployment:
# Compile your app into a single executable
bun build --compile ./src/server.ts --outfile server
# The output is a standalone binary — no Bun or Node.js needed to run it
./server
# Ultra-minimal Docker image using compiled binary
FROM oven/bun:1 AS builder
WORKDIR /app
COPY package.json bun.lockb ./
RUN bun install --frozen-lockfile
COPY . .
RUN bun build --compile ./src/server.ts --outfile server
# Final image — just the binary
FROM debian:bookworm-slim
WORKDIR /app
RUN addgroup --system --gid 1001 appgroup && \
adduser --system --uid 1001 appuser
USER appuser
COPY --from=builder /app/server ./server
EXPOSE 3000
CMD ["./server"]
The compiled binary is typically 50-90 MB (it bundles the Bun runtime). That's larger than a Go binary but much smaller than a full Node.js installation plus node_modules. For containerized deployments, the self-contained nature is a significant simplification.
# Node.js image
docker images | grep node
# node:20-slim ~180MB
# Bun image
docker images | grep bun
# oven/bun:1-slim ~130MB
# Compiled binary on debian:bookworm-slim
# ~80MB base + ~70MB binary = ~150MB total
# vs. Alpine with Node.js
# node:20-alpine ~130MB + node_modules
The binary approach eliminates node_modules entirely from the final image. No npm install in production. No supply chain surface area from hundreds of packages. Just one file.
If you're considering moving to Bun, here's the incremental path I recommend:
# Replace npm/yarn/pnpm with bun install
# Change your CI pipeline:
# Before:
npm ci
# After:
bun install --frozen-lockfile
No code changes. No runtime changes. Just faster installs. If anything breaks (it won't), revert by deleting bun.lockb and running npm install.
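If your CI is GitHub Actions, the step swap might look like this. This is a sketch assuming the official oven-sh/setup-bun action; adjust action versions to your setup:

```yaml
# Sketch: replace setup-node + npm ci with setup-bun + bun install
steps:
  - uses: actions/checkout@v4
  - uses: oven-sh/setup-bun@v2
    with:
      bun-version: latest
  - run: bun install --frozen-lockfile
  - run: bun run build
```

The --frozen-lockfile flag plays the same role as npm ci: the build fails if bun.lockb is out of sync with package.json.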
# Use bun to run development scripts
bun run dev
bun run lint
bun run format
# Use bun for one-off scripts
bun run scripts/seed-database.ts
bun run scripts/migrate.ts
Still using Node.js as the runtime for your actual application. But scripts benefit from Bun's faster startup and native TypeScript support.
# Replace vitest/jest with bun test for simple test suites
bun test
# Keep vitest for complex test setups
# (Testing Library, MSW, custom environments)
Run your full test suite under bun test. If everything passes, you've eliminated a devDependency. If some tests fail due to compatibility, keep Vitest for those and use bun test for the rest.
// New microservices or APIs — start with Bun from day one
Bun.serve({
port: 3000,
fetch(req) {
// Your new service here
},
});
Don't migrate existing Node.js services to Bun runtime. Instead, write new services with Bun from the start. This limits your blast radius.
# Only after thorough testing:
# Replace node with bun for existing services
# Before:
node dist/server.js
# After:
bun dist/server.js
I only recommend this for services with excellent test coverage. Run your load tests under Bun before switching production.
Bun handles .env files automatically — no dotenv package needed:
# .env
DATABASE_URL=postgresql://localhost:5432/myapp
API_KEY=sk-test-12345
PORT=3000
// These are available without any import
console.log(process.env.DATABASE_URL);
console.log(process.env.API_KEY);
console.log(Bun.env.PORT); // Bun-specific alternative
Bun loads .env, .env.local, .env.production, etc. automatically, following the same convention as Next.js. One less dependency in your package.json.
Bun's error output has improved significantly, but it's still not as polished as Node.js in some cases:
# Bun's debugger — works with VS Code
bun --inspect run server.ts
# Bun's inspect-brk — pause on first line
bun --inspect-brk run server.ts
For VS Code, add this to your .vscode/launch.json:
{
"version": "0.2.0",
"configurations": [
{
"type": "bun",
"request": "launch",
"name": "Debug Bun",
"program": "${workspaceFolder}/src/server.ts",
"cwd": "${workspaceFolder}",
"stopOnEntry": false,
"watchMode": false
}
]
}
Stack traces in Bun are generally accurate and include source maps for TypeScript. The main debugging gap is that some Node.js-specific debugging tools (like ndb or clinic.js) don't work with Bun.
A few things to think about if you're evaluating Bun for production:
Maturity: Node.js has been in production for 15+ years. Every edge case in HTTP parsing, TLS handling, and stream processing has been found and fixed. Bun is younger. It's well-tested, but the surface area for undiscovered bugs is larger.
Security patches: The Bun team ships updates frequently, but the Node.js security team has a formal CVE process, coordinated disclosure, and a longer track record. For security-critical applications, this matters.
Supply chain: Bun's built-in features (SQLite, HTTP server, WebSockets) mean fewer npm dependencies. Fewer dependencies means a smaller supply chain attack surface. This is a genuine security advantage.
# Compare dependency counts
# A typical Express + SQLite + WebSocket project:
npm ls --all | wc -l
# ~340 packages
# The same functionality with Bun built-ins:
bun pm ls --all | wc -l
# ~12 packages (just your application code)
That's a meaningful reduction in the number of packages you're trusting with your production workload.
A few Bun-specific performance tips:
// Use Bun.serve() options for production tuning
Bun.serve({
port: 3000,
// Increase max request body size (default is 128MB)
maxRequestBodySize: 1024 * 1024 * 50, // 50MB
// Enable development mode for better error pages
development: process.env.NODE_ENV !== "production",
// Reuse port (useful for zero-downtime restarts)
reusePort: true,
fetch(req) {
return new Response("OK");
},
});
// Use Bun.Transpiler for runtime code transformation
const transpiler = new Bun.Transpiler({
loader: "tsx",
target: "browser",
});
const code = transpiler.transformSync(`
const App: React.FC = () => <div>Hello</div>;
export default App;
`);
# Bun's memory usage flags
bun --smol run server.ts # Reduce memory footprint (slightly slower)
# Set max heap size
BUN_JSC_forceRAMSize=512000000 bun run server.ts # ~512MB limit
After a year of using Bun, here are the things that have tripped me up:
// Node.js 18+ fetch and Bun's fetch are slightly different
// in how they handle certain headers and redirects
// Bun follows redirects by default (like browsers)
// Node.js fetch also follows redirects, but the behavior
// with certain status codes (303, 307, 308) can differ
const response = await fetch("https://api.example.com/data", {
redirect: "manual", // Be explicit about redirect handling
});
// Bun exits when the event loop is empty
// Node.js sometimes keeps running due to lingering handles
// If your Bun script exits unexpectedly, something isn't
// keeping the event loop alive
// This will exit immediately in Bun:
setTimeout(() => {}, 0);
// This will keep running:
setTimeout(() => {}, 1000);
// (Bun exits after the timeout fires)
// Bun has its own tsconfig defaults
// If you're sharing a project between Bun and Node.js,
// be explicit in your tsconfig.json:
{
"compilerOptions": {
"target": "ESNext",
"module": "ESNext",
"moduleResolution": "bundler",
"types": ["bun-types"] // Add Bun type definitions
}
}
# Install Bun types
bun add -d @types/bun
# Bun has built-in watch mode
bun --watch run server.ts
# This restarts the process on file changes
# It's not HMR (Hot Module Replacement) — it's a full restart
# But because Bun starts so fast, it feels instant
bunfig.toml Configuration File
## bunfig.toml — Bun's config file (optional)
[install]
# Use a private registry
registry = "https://npm.mycompany.com"
# Scoped registries
[install.scopes]
"@mycompany" = "https://npm.mycompany.com"
[test]
# Test configuration
coverage = true
coverageReporter = ["text", "lcov"]
[run]
# Shell to use for bun run
shell = "bash"After a year of production use, here's where I've settled:
Package manager for all projects — including this Next.js blog. bun install is faster, and the compatibility is essentially perfect. I see no reason to use npm or yarn anymore. pnpm is the only alternative I'd consider (for its strict dependency resolution in monorepos).
Runtime for scripts and CLI tools — Any TypeScript file I need to run once, I run with bun. No compilation step. Fast startup. Built-in .env loading. It's replaced ts-node and tsx in my workflow entirely.
Runtime for small APIs and internal tools — Bun.serve() + bun:sqlite is an incredibly productive stack for internal tools, webhook handlers, and small services. The "one binary, no dependencies" deployment model is compelling.
Test runner for simple projects — For projects with straightforward test needs, bun test is fast and requires zero configuration.
Production Next.js — Not because Bun doesn't work, but because the risk-reward doesn't justify it yet. Next.js is a complex framework with many integration points. I want the most battle-tested runtime under it.
Critical production services — My main API servers run Node.js behind PM2. The monitoring ecosystem, the debugging tools, the operational knowledge — it's all Node.js. Bun will get there, but it's not there yet.
Anything with native addons — If a dependency chain includes C++ native addons, I don't even try Bun. Not worth debugging the compatibility issues.
Teams that aren't familiar with Bun — Introducing Bun as a runtime to a team that's never used it adds cognitive overhead. As a package manager, fine. As a runtime, wait until the team is ready.
Bun's compatibility tracker — When it hits 100% for the Node.js APIs I care about, I'll reassess.
Framework support — Next.js, Remix, and SvelteKit all have varying levels of Bun support. When one of them officially supports Bun as a production runtime, that's a signal.
Enterprise adoption — Once companies with real SLAs are running Bun in production and writing about it, the maturity question is answered.
The 1.2+ release line — Bun is moving fast. Features land every week. The Bun I use today is meaningfully better than the Bun I tried a year ago.
Bun isn't a silver bullet. It won't make a slow app fast and it won't make a poorly designed API well-designed. But it is a genuine improvement in developer experience for the JavaScript ecosystem.
The thing I appreciate most about Bun isn't any single feature. It's the reduction in toolchain complexity. One binary that installs packages, runs TypeScript, bundles code, and runs tests. No tsconfig.json for scripts. No Babel. No separate test runner config. Just bun run your-file.ts and it works.
The practical advice: start with bun install. It's zero risk, immediate benefit. Then try bun run for scripts. Then evaluate the rest based on your specific needs. You don't have to go all-in. Bun works perfectly well as a partial replacement, and that's probably how most people should use it today.
The JavaScript runtime landscape is better with Bun in it. Competition is making Node.js better too — Node.js 22+ has gotten significantly faster, partly in response to Bun's pressure. Everyone wins.