Unit tests, integration tests, component tests with Testing Library, mocking strategies, coverage configuration, and the testing philosophy that actually produces better software.
Most teams write tests the way most people exercise: they know they should, they feel guilty when they don't, and when they finally do it, they push too hard on day one and quit by Friday. The codebase ends up with a scattering of shallow snapshot tests that break every time someone changes a CSS class, a few integration tests that nobody trusts, and a coverage badge in the README that lies.
I've been on both sides. I've shipped projects with zero tests and sweated through every deploy. I've also been on teams that chased 100% coverage and spent more time maintaining tests than writing features. Neither works. What works is writing the right tests, in the right places, with the right tools.
Vitest changed how I think about testing in JavaScript. Not because it invented new concepts — the fundamentals haven't changed since Kent Beck wrote about them decades ago. But because it removed enough friction that writing tests stopped feeling like a chore and started feeling like part of the development loop. When your test runner is as fast as your dev server and uses the same config, the excuses evaporate.
This post is everything I know about testing with Vitest, from initial setup to the philosophy that makes it all worthwhile.
If you've used Jest, you already know most of Vitest's API. That's by design — Vitest is Jest-compatible at the API level. describe, it, expect, beforeEach, vi.fn() — it all works. So why switch?
Jest was built for CommonJS. It can handle ESM, but it requires configuration, experimental flags, and occasional prayer. If you're using import/export syntax (which is everything modern), you've probably fought with Jest's transform pipeline.
Vitest runs on Vite. Vite understands ESM natively. There's no transform step for your source code — it just works. This matters more than it sounds. Half the Jest issues I've debugged over the years trace back to module resolution: SyntaxError: Cannot use import statement outside a module, or mocks not working because the module was already cached in a different format.
If your project uses Vite (and if you're building a React, Vue, or Svelte app in 2026, it probably does), Vitest reads your vite.config.ts automatically. Your path aliases, plugins, and environment variables work in tests without any additional configuration. With Jest, you maintain a parallel configuration that has to stay in sync with your bundler setup. Every time you add a path alias in vite.config.ts, you have to remember to add the corresponding moduleNameMapper in jest.config.ts. It's a small thing, but small things compound.
Vitest is fast. Meaningfully fast. Not "saves you two seconds" fast — "changes how you work" fast. It uses Vite's module graph to understand which tests are affected by a file change and only runs those. Its watch mode uses the same HMR infrastructure that makes Vite's dev server feel instant.
On a project with 400+ tests, switching from Jest to Vitest cut our watch-mode feedback loop from ~4 seconds to under 500ms. That's the difference between "I'll wait for the test to pass" and "I'll glance at the terminal while my fingers are still on the keyboard."
Vitest includes bench() out of the box for performance testing. No separate library needed:
import { bench, describe } from "vitest";
describe("string concatenation", () => {
bench("template literals", () => {
const name = "world";
const _result = `hello ${name}`;
});
bench("string concat", () => {
const name = "world";
const _result = "hello " + name;
});
});
Run with vitest bench. It's not the main event, but it's nice to have performance testing in the same toolchain without installing benchmark.js and wiring up a separate runner.
npm install -D vitest @testing-library/react @testing-library/jest-dom @testing-library/user-event jsdom
Create vitest.config.ts at your project root:
import { defineConfig } from "vitest/config";
import react from "@vitejs/plugin-react";
import path from "path";
export default defineConfig({
plugins: [react()],
test: {
globals: true,
environment: "jsdom",
setupFiles: ["./src/test/setup.ts"],
include: ["src/**/*.{test,spec}.{ts,tsx}"],
alias: {
"@": path.resolve(__dirname, "./src"),
},
css: false,
},
});
Or, if you already have a vite.config.ts, you can extend it:
/// <reference types="vitest/config" />
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";
export default defineConfig({
plugins: [react()],
resolve: {
alias: {
"@": "/src",
},
},
test: {
globals: true,
environment: "jsdom",
setupFiles: ["./src/test/setup.ts"],
},
});
When globals is true, you don't need to import describe, it, expect, beforeEach, etc. — they're available everywhere, just like in Jest. When false, you import them explicitly:
// globals: false
import { describe, it, expect } from "vitest";
describe("math", () => {
it("adds numbers", () => {
expect(1 + 1).toBe(2);
});
});
I use globals: true because it reduces visual noise and matches what most developers expect. If you're on a team that values explicit imports, set it to false — there's no wrong answer here.
If you use globals: true, add Vitest's types to your tsconfig.json so TypeScript recognizes them:
{
"compilerOptions": {
"types": ["vitest/globals"]
}
}
Vitest lets you pick the DOM implementation per test or globally:
- node — No DOM. For pure logic, utilities, API routes, and anything that doesn't touch the browser.
- jsdom — The standard. Full DOM implementation. Heavier but more complete.
- happy-dom — Lighter and faster than jsdom but less complete. Some edge cases (like Range, Selection, or IntersectionObserver) may not work.

I default to jsdom globally and override per-file when I need node:
// src/lib/utils.test.ts
// @vitest-environment node
import { formatDate, slugify } from "./utils";
describe("slugify", () => {
it("converts spaces to hyphens", () => {
expect(slugify("hello world")).toBe("hello-world");
});
});
The setup file runs before every test file. This is where you configure Testing Library matchers and global mocks:
// src/test/setup.ts
import "@testing-library/jest-dom/vitest";
// Mock IntersectionObserver — jsdom doesn't implement it
class MockIntersectionObserver {
observe = vi.fn();
unobserve = vi.fn();
disconnect = vi.fn();
}
Object.defineProperty(window, "IntersectionObserver", {
writable: true,
value: MockIntersectionObserver,
});
// Mock window.matchMedia — needed for responsive components
Object.defineProperty(window, "matchMedia", {
writable: true,
value: vi.fn().mockImplementation((query: string) => ({
matches: false,
media: query,
onchange: null,
addListener: vi.fn(),
removeListener: vi.fn(),
addEventListener: vi.fn(),
removeEventListener: vi.fn(),
dispatchEvent: vi.fn(),
})),
});
The @testing-library/jest-dom/vitest import gives you matchers like toBeInTheDocument(), toHaveClass(), toBeVisible(), and many others. These make assertions on DOM elements readable and expressive.
Every test follows the same structure: Arrange, Act, Assert. Even when you don't write explicit comments, the structure should be visible:
it("calculates total with tax", () => {
// Arrange
const items = [
{ name: "Widget", price: 10 },
{ name: "Gadget", price: 20 },
];
const taxRate = 0.08;
// Act
const total = calculateTotal(items, taxRate);
// Assert
expect(total).toBe(32.4);
});
When I see a test that mixes arrangement, action, and assertion into a single chain of method calls, I know it's going to be hard to understand when it fails. Keep the three phases visually distinct even if you don't add the comments.
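The post doesn't show calculateTotal itself; here's a minimal sketch that would satisfy the test above. The rounding step is my assumption — without it, binary floating-point noise makes 30 * 1.08 come out as 32.400000000000006 and the toBe(32.4) assertion fails:

```typescript
interface Item {
  name: string;
  price: number;
}

// Sum item prices, apply the tax rate, and round to cents so floating-point
// error doesn't leak into strict-equality assertions.
function calculateTotal(items: Item[], taxRate: number): number {
  const subtotal = items.reduce((sum, item) => sum + item.price, 0);
  return Math.round(subtotal * (1 + taxRate) * 100) / 100;
}
```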
There are two schools: it('should calculate total with tax') and it('calculates total with tax'). The "should" prefix is verbose without adding information. When the test fails, you'll see:
FAIL ✕ calculates total with tax
That's already a complete sentence. Adding "should" just adds noise. I prefer the direct form: it('renders loading state'), it('rejects invalid email'), it('returns empty array when no matches found').
For describe blocks, use the name of the unit under test:
describe("calculateTotal", () => {
it("sums item prices", () => { /* ... */ });
it("applies tax rate", () => { /* ... */ });
it("returns 0 for empty array", () => { /* ... */ });
it("handles negative prices", () => { /* ... */ });
});
Read it out loud: "calculateTotal sums item prices." "calculateTotal applies tax rate." If the sentence works, the naming works.
The purist rule says one assertion per test. The practical rule says: one concept per test. These are different.
// This is fine — one concept, multiple assertions
it("formats user display name", () => {
const user = { firstName: "John", lastName: "Doe", title: "Dr." };
const result = formatDisplayName(user);
expect(result).toContain("John");
expect(result).toContain("Doe");
expect(result).toMatch(/^Dr\./);
});
// This is not fine — multiple concepts in one test
it("handles user operations", () => {
const user = createUser("John");
expect(user.id).toBeDefined();
const updated = updateUser(user.id, { name: "Jane" });
expect(updated.name).toBe("Jane");
deleteUser(user.id);
expect(getUser(user.id)).toBeNull();
});
The first test has three assertions but they all verify one thing: the format of a display name. If any assertion fails, you know exactly what's broken. The second test is three separate tests crammed together. If the second assertion fails, you don't know whether creation or updating is broken, and the third assertion never runs.
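For context, here's a formatDisplayName the first test would pass against — a sketch, since the post doesn't include the implementation:

```typescript
interface User {
  firstName: string;
  lastName: string;
  title?: string;
}

// Join title (if present), first name, and last name with single spaces.
function formatDisplayName(user: User): string {
  return [user.title, user.firstName, user.lastName]
    .filter(Boolean)
    .join(" ");
}
```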
Good test suites serve as living documentation. Someone unfamiliar with the code should be able to read the test descriptions and understand the feature's behavior:
describe("PasswordValidator", () => {
describe("minimum length", () => {
it("rejects passwords shorter than 8 characters", () => { /* ... */ });
it("accepts passwords with exactly 8 characters", () => { /* ... */ });
});
describe("character requirements", () => {
it("requires at least one uppercase letter", () => { /* ... */ });
it("requires at least one number", () => { /* ... */ });
it("requires at least one special character", () => { /* ... */ });
});
describe("common password check", () => {
it("rejects passwords in the common passwords list", () => { /* ... */ });
it("performs case-insensitive comparison", () => { /* ... */ });
});
});
When this test suite runs, the output reads like a specification. That's the goal.
Mocking is the most powerful and most dangerous tool in your testing toolkit. Used well, it isolates the unit under test and makes tests fast and deterministic. Used poorly, it creates tests that pass no matter what the code does.
The simplest mock is a function that records its calls:
const mockCallback = vi.fn();
// Call it
mockCallback("hello", 42);
mockCallback("world");
// Assert on calls
expect(mockCallback).toHaveBeenCalledTimes(2);
expect(mockCallback).toHaveBeenCalledWith("hello", 42);
expect(mockCallback).toHaveBeenLastCalledWith("world");
You can give it a return value:
const mockFetch = vi.fn().mockResolvedValue({
ok: true,
json: () => Promise.resolve({ id: 1, name: "Test" }),
});
Or make it return different values on successive calls:
const mockRandom = vi.fn()
.mockReturnValueOnce(0.1)
.mockReturnValueOnce(0.5)
.mockReturnValueOnce(0.9);
When you want to observe a method without replacing its behavior:
const consoleSpy = vi.spyOn(console, "warn");
validateInput("");
expect(consoleSpy).toHaveBeenCalledWith("Input cannot be empty");
consoleSpy.mockRestore();
spyOn keeps the original implementation by default. You can override it with .mockImplementation() when needed but restore the original afterward with .mockRestore().
This is the big one. vi.mock() replaces an entire module:
// Mock the entire module
vi.mock("@/lib/api", () => ({
fetchUsers: vi.fn().mockResolvedValue([
{ id: 1, name: "Alice" },
{ id: 2, name: "Bob" },
]),
fetchUser: vi.fn().mockResolvedValue({ id: 1, name: "Alice" }),
}));
// The import now uses the mocked version
import { fetchUsers } from "@/lib/api";
describe("UserList", () => {
it("displays users from API", async () => {
const users = await fetchUsers();
expect(users).toHaveLength(2);
});
});
Vitest hoists vi.mock() calls to the top of the file automatically. This means the mock is in place before any imports run. You don't need to worry about import order.
If you just want every export replaced with a vi.fn():
vi.mock("@/lib/analytics");
import { trackEvent, trackPageView } from "@/lib/analytics";
it("tracks form submission", () => {
submitForm();
expect(trackEvent).toHaveBeenCalledWith("form_submit", expect.any(Object));
});
Without the factory function, Vitest auto-mocks all exports. Each exported function becomes a vi.fn() that returns undefined. This is useful for modules you want to silence (like analytics or logging) without specifying every function.
This trips up everyone at some point:
const mockFn = vi.fn().mockReturnValue(42);
mockFn();
// mockClear — resets call history, keeps implementation
mockFn.mockClear();
expect(mockFn).not.toHaveBeenCalled(); // true
expect(mockFn()).toBe(42); // still returns 42
// mockReset — resets call history AND implementation
mockFn.mockReset();
expect(mockFn()).toBeUndefined(); // no longer returns 42
// mockRestore — for spies, restores original implementation
const spy = vi.spyOn(Math, "random").mockReturnValue(0.5);
spy.mockRestore();
// Math.random() now works normally again
In practice, use vi.clearAllMocks() in beforeEach to reset call history between tests. Use vi.restoreAllMocks() if you're using spyOn and want originals back:
beforeEach(() => {
vi.clearAllMocks();
});
afterEach(() => {
vi.restoreAllMocks();
});
This is the most important mocking advice I can give: every mock is a lie you're telling your test. When you mock a dependency, you're saying "I trust that this thing works correctly, so I'll replace it with a simplified version." If your assumption is wrong, the test passes but the feature is broken.
// Over-mocked — tests nothing useful
vi.mock("@/lib/validator");
vi.mock("@/lib/formatter");
vi.mock("@/lib/api");
vi.mock("@/lib/cache");
it("processes user input", () => {
processInput("hello");
expect(validator.validate).toHaveBeenCalledWith("hello");
expect(formatter.format).toHaveBeenCalledWith("hello");
});
This test verifies that processInput calls validate and format. But what if processInput calls them in the wrong order? What if it ignores their return values? What if the validation is supposed to prevent the format step from running? The test doesn't know. You've mocked away all the interesting behavior.
The rule of thumb: mock at the boundaries, not in the middle. Mock network requests, file system access, and third-party services. Don't mock your own utility functions unless there's a compelling reason (like they're expensive to run or have side effects).
Testing Library enforces a philosophy: test components the way users interact with them. No checking internal state, no inspecting component instances, no shallow rendering. You render a component and interact with it through the DOM, just like a user would.
import { render, screen } from "@testing-library/react";
import { Button } from "@/components/ui/Button";
describe("Button", () => {
it("renders with label text", () => {
render(<Button>Click me</Button>);
expect(screen.getByRole("button", { name: "Click me" })).toBeInTheDocument();
});
it("applies variant classes", () => {
render(<Button variant="primary">Submit</Button>);
const button = screen.getByRole("button");
expect(button).toHaveClass("bg-primary");
});
it("is disabled when disabled prop is true", () => {
render(<Button disabled>Submit</Button>);
expect(screen.getByRole("button")).toBeDisabled();
});
});
This is where beginners get confused. There are three query variants and each has a specific use case:
getBy* — Returns the element or throws if not found. Use when you expect the element to exist:
// Throws if no button found — test fails with a helpful error
const button = screen.getByRole("button", { name: "Submit" });
queryBy* — Returns the element or null if not found. Use when you're asserting something is NOT present:
// Returns null — doesn't throw
expect(screen.queryByText("Error message")).not.toBeInTheDocument();
findBy* — Returns a Promise. Use for elements that appear asynchronously:
// Waits up to 1000ms for the element to appear
const successMessage = await screen.findByText("Saved successfully");
expect(successMessage).toBeInTheDocument();
Testing Library provides these queries in a deliberate priority order:
- getByRole — The best query. Uses ARIA roles. If your component isn't findable by role, it might have an accessibility problem.
- getByLabelText — For form elements. If your input doesn't have a label, fix that first.
- getByPlaceholderText — Acceptable but weaker. Placeholders disappear when the user types.
- getByText — For non-interactive elements. Finds by visible text content.
- getByTestId — Last resort. Use when no semantic query works.

// Prefer this
screen.getByRole("textbox", { name: "Email address" });
// Over this
screen.getByPlaceholderText("Enter your email");
// And definitely over this
screen.getByTestId("email-input");
The ranking isn't arbitrary. It matches how assistive technology navigates the page. If you can find an element by its role and accessible name, screen readers can too. If you can only find it by a test ID, you might have an accessibility gap.
Don't use fireEvent. Use @testing-library/user-event. The difference matters:
import userEvent from "@testing-library/user-event";
describe("SearchInput", () => {
it("filters results as user types", async () => {
const user = userEvent.setup();
const onSearch = vi.fn();
render(<SearchInput onSearch={onSearch} />);
const input = screen.getByRole("searchbox");
await user.type(input, "vitest");
// user.type fires keydown, keypress, input, keyup for EACH character
// fireEvent.change just sets the value — skipping realistic event flow
expect(onSearch).toHaveBeenLastCalledWith("vitest");
});
it("clears input on escape key", async () => {
const user = userEvent.setup();
render(<SearchInput onSearch={vi.fn()} />);
const input = screen.getByRole("searchbox");
await user.type(input, "hello");
await user.keyboard("{Escape}");
expect(input).toHaveValue("");
});
});
userEvent simulates the full event chain that a real browser would fire. fireEvent.change is a single synthetic event. If your component listens to onKeyDown or uses onInput instead of onChange, fireEvent.change won't trigger those handlers but userEvent.type will.
Always call userEvent.setup() at the beginning and use the returned user instance. This ensures proper event ordering and state tracking.
A realistic component test looks like this:
import { render, screen, within } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
import { TodoList } from "@/components/TodoList";
describe("TodoList", () => {
it("adds a new todo item", async () => {
const user = userEvent.setup();
render(<TodoList />);
const input = screen.getByRole("textbox", { name: /new todo/i });
const addButton = screen.getByRole("button", { name: /add/i });
await user.type(input, "Write tests");
await user.click(addButton);
expect(screen.getByText("Write tests")).toBeInTheDocument();
expect(input).toHaveValue("");
});
it("marks a todo as completed", async () => {
const user = userEvent.setup();
render(<TodoList initialItems={[{ id: "1", text: "Buy groceries", done: false }]} />);
const checkbox = screen.getByRole("checkbox", { name: /buy groceries/i });
await user.click(checkbox);
expect(checkbox).toBeChecked();
});
it("removes completed items when clear button is clicked", async () => {
const user = userEvent.setup();
render(
<TodoList
initialItems={[
{ id: "1", text: "Done task", done: true },
{ id: "2", text: "Pending task", done: false },
]}
/>
);
await user.click(screen.getByRole("button", { name: /clear completed/i }));
expect(screen.queryByText("Done task")).not.toBeInTheDocument();
expect(screen.getByText("Pending task")).toBeInTheDocument();
});
});
Notice: no internal state inspection, no component.setState(), no checking implementation details. We render, we interact, we assert on what the user would see. If the component refactors its internal state management from useState to useReducer, these tests still pass. That's the point.
When a component updates asynchronously, waitFor polls until the assertion passes:
import { render, screen, waitFor } from "@testing-library/react";
it("loads and displays user profile", async () => {
render(<UserProfile userId="123" />);
// Initially shows loading
expect(screen.getByText("Loading...")).toBeInTheDocument();
// Wait for content to appear
await waitFor(() => {
expect(screen.getByText("John Doe")).toBeInTheDocument();
});
// Loading indicator should be gone
expect(screen.queryByText("Loading...")).not.toBeInTheDocument();
});
waitFor retries the callback every 50ms (by default) until it passes or times out (1000ms by default). You can customize both:
await waitFor(
() => expect(screen.getByText("Done")).toBeInTheDocument(),
{ timeout: 3000, interval: 100 }
);
When testing code that uses setTimeout, setInterval, or Date, fake timers let you control time:
describe("Debounce", () => {
beforeEach(() => {
vi.useFakeTimers();
});
afterEach(() => {
vi.useRealTimers();
});
it("delays execution by the specified time", () => {
const callback = vi.fn();
const debounced = debounce(callback, 300);
debounced();
expect(callback).not.toHaveBeenCalled();
vi.advanceTimersByTime(200);
expect(callback).not.toHaveBeenCalled();
vi.advanceTimersByTime(100);
expect(callback).toHaveBeenCalledOnce();
});
it("resets timer on subsequent calls", () => {
const callback = vi.fn();
const debounced = debounce(callback, 300);
debounced();
vi.advanceTimersByTime(200);
debounced(); // reset the timer
vi.advanceTimersByTime(200);
expect(callback).not.toHaveBeenCalled();
vi.advanceTimersByTime(100);
expect(callback).toHaveBeenCalledOnce();
});
});
Important: always call vi.useRealTimers() in afterEach. Fake timers that leak between tests cause the most confusing failures you'll ever debug.
Combining fake timers with React component testing requires care. React's internal scheduling uses real timers, so you often need to advance timers AND flush React updates together:
import { render, screen, act } from "@testing-library/react";
describe("Notification", () => {
beforeEach(() => {
vi.useFakeTimers();
});
afterEach(() => {
vi.useRealTimers();
});
it("auto-dismisses after 5 seconds", async () => {
render(<Notification message="Saved!" autoDismiss={5000} />);
expect(screen.getByText("Saved!")).toBeInTheDocument();
// Advance timers inside act() to flush React updates
await act(async () => {
vi.advanceTimersByTime(5000);
});
expect(screen.queryByText("Saved!")).not.toBeInTheDocument();
});
});
For testing data fetching, Mock Service Worker (MSW) intercepts network requests at the network level. This means your component's fetch/axios code runs exactly as it would in production — MSW just replaces the network response:
import { http, HttpResponse } from "msw";
import { setupServer } from "msw/node";
import { render, screen } from "@testing-library/react";
const server = setupServer(
http.get("/api/users", () => {
return HttpResponse.json([
{ id: 1, name: "Alice", email: "alice@example.com" },
{ id: 2, name: "Bob", email: "bob@example.com" },
]);
}),
http.get("/api/users/:id", ({ params }) => {
return HttpResponse.json({
id: Number(params.id),
name: "Alice",
email: "alice@example.com",
});
})
);
beforeAll(() => server.listen());
afterEach(() => server.resetHandlers());
afterAll(() => server.close());
describe("UserList", () => {
it("displays users from API", async () => {
render(<UserList />);
expect(await screen.findByText("Alice")).toBeInTheDocument();
expect(await screen.findByText("Bob")).toBeInTheDocument();
});
it("shows error state when API fails", async () => {
// Override the default handler for this one test
server.use(
http.get("/api/users", () => {
return new HttpResponse(null, { status: 500 });
})
);
render(<UserList />);
expect(await screen.findByText(/failed to load/i)).toBeInTheDocument();
});
});
MSW is better than mocking fetch or axios directly because:

- Your actual data-fetching code runs unchanged — the mock sits at the network boundary, not in your module graph.
- Tests don't care which HTTP client you use; swap fetch for axios and they still pass.
- Per-test overrides with server.use make error and edge-case scenarios easy to express.
A unit test isolates a single function or component and mocks everything else. An integration test lets multiple units work together and only mocks external boundaries (network, file system, databases).
The truth is: most of the bugs I've seen in production happen at the boundaries between units, not inside them. A function works perfectly in isolation but fails because the caller passes data in a slightly different format. A component renders fine with mock data but breaks when the actual API response has an extra nesting level.
Integration tests catch these bugs. They're slower than unit tests and harder to debug when they fail, but they give more confidence per test.
import { render, screen } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
import { ShoppingCart } from "@/components/ShoppingCart";
import { CartProvider } from "@/contexts/CartContext";
// Only mock the API layer — everything else is real
vi.mock("@/lib/api", () => ({
checkout: vi.fn().mockResolvedValue({ orderId: "ORD-123" }),
}));
describe("Shopping Cart Flow", () => {
const renderCart = (initialItems = []) => {
return render(
<CartProvider initialItems={initialItems}>
<ShoppingCart />
</CartProvider>
);
};
it("displays item count and total", () => {
renderCart([
{ id: "1", name: "Keyboard", price: 79.99, quantity: 1 },
{ id: "2", name: "Mouse", price: 49.99, quantity: 2 },
]);
expect(screen.getByText("3 items")).toBeInTheDocument();
expect(screen.getByText("$179.97")).toBeInTheDocument();
});
it("updates quantity and recalculates total", async () => {
const user = userEvent.setup();
renderCart([
{ id: "1", name: "Keyboard", price: 79.99, quantity: 1 },
]);
const incrementButton = screen.getByRole("button", { name: /increase quantity/i });
await user.click(incrementButton);
expect(screen.getByText("$159.98")).toBeInTheDocument();
});
it("completes checkout flow", async () => {
const user = userEvent.setup();
const { checkout } = await import("@/lib/api");
renderCart([
{ id: "1", name: "Keyboard", price: 79.99, quantity: 1 },
]);
await user.click(screen.getByRole("button", { name: /checkout/i }));
expect(checkout).toHaveBeenCalledWith({
items: [{ id: "1", quantity: 1 }],
});
expect(await screen.findByText(/order confirmed/i)).toBeInTheDocument();
expect(screen.getByText("ORD-123")).toBeInTheDocument();
});
});
In this test, ShoppingCart and CartProvider and their internal components (item rows, quantity selectors, totals display) all work together with real code. The only mock is the API call, because we don't want to make real network requests in tests.
Use unit tests when:

- The logic is complex enough to have meaningful edge cases (parsers, calculations, validation).
- The code is a pure function with clear inputs and outputs.
- A failure needs to point at an exact location.
Use integration tests when:

- You're testing a user-facing feature that spans multiple components.
- The interesting behavior lives in how units interact (context, props, events).
- You want confidence that a refactor didn't break the feature.
In practice, a healthy test suite is heavy on integration tests for features and has unit tests for complex utilities. The components themselves are tested through integration tests — you don't need a separate unit test for every tiny component if the integration test exercises it.
vitest run --coverage
You'll need a coverage provider. Vitest supports two:
# V8 — faster, uses V8's built-in coverage
npm install -D @vitest/coverage-v8
# Istanbul — more mature, more configuration options
npm install -D @vitest/coverage-istanbul
Configure it in your Vitest config:
export default defineConfig({
test: {
coverage: {
provider: "v8",
reporter: ["text", "html", "lcov"],
include: ["src/**/*.{ts,tsx}"],
exclude: [
"src/**/*.test.{ts,tsx}",
"src/**/*.spec.{ts,tsx}",
"src/test/**",
"src/**/*.d.ts",
"src/**/types.ts",
],
thresholds: {
statements: 80,
branches: 75,
functions: 80,
lines: 80,
},
},
},
});
V8 coverage uses the V8 engine's built-in instrumentation. It's faster because there's no code transformation step. But it can be less accurate for some edge cases, especially around ES module boundaries.
Istanbul instruments your source code with counters before running tests. It's slower but more battle-tested and gives more accurate branch coverage. If you're enforcing coverage thresholds in CI, Istanbul's accuracy might matter.
I use V8 for local development (faster feedback) and Istanbul in CI (more accurate enforcement). You can configure different providers per environment if needed.
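One way to switch providers per environment is to branch on an environment variable in the config — a sketch, assuming both provider packages are installed and that your CI sets the conventional CI variable:

```typescript
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    coverage: {
      // Istanbul in CI for accurate threshold enforcement, V8 locally for speed.
      provider: process.env.CI ? "istanbul" : "v8",
      reporter: ["text", "lcov"],
    },
  },
});
```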
Coverage tells you which lines of code were executed during tests. That's it. It does not tell you whether those lines were tested correctly. Consider this:
function divide(a: number, b: number): number {
return a / b;
}
it("divides numbers", () => {
divide(10, 2);
// No assertion!
});
This test gives 100% coverage of the divide function. It also tests absolutely nothing. The test would pass if divide returned null, threw an error, or launched missiles.
Coverage is a useful negative indicator: low coverage means there are definitely untested paths. But high coverage doesn't mean your code is well-tested. It just means every line ran during some test.
Line coverage is the most common metric but branch coverage is more valuable:
function getDiscount(user: User): number {
if (user.isPremium) {
return user.yearsActive > 5 ? 0.2 : 0.1;
}
return 0;
}
A test with getDiscount({ isPremium: true, yearsActive: 10 }) hits every line (100% line coverage) but only tests two of the three branches. The isPremium: false path and the yearsActive <= 5 path are untested.
Branch coverage catches this. It tracks every possible path through conditional logic. If you're going to enforce a coverage threshold, use branch coverage.
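Applied to getDiscount, full branch coverage needs three inputs, one per path — a sketch restating the function so the cases are concrete:

```typescript
interface User {
  isPremium: boolean;
  yearsActive: number;
}

function getDiscount(user: User): number {
  if (user.isPremium) {
    return user.yearsActive > 5 ? 0.2 : 0.1;
  }
  return 0;
}

// One input per branch: premium with long tenure, premium with short
// tenure, and non-premium. Together they exercise every path.
const branchCases: Array<[User, number]> = [
  [{ isPremium: true, yearsActive: 10 }, 0.2],
  [{ isPremium: true, yearsActive: 3 }, 0.1],
  [{ isPremium: false, yearsActive: 10 }, 0],
];
```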
Some code shouldn't be counted in coverage. Generated files, type definitions, configuration — these inflate your metrics without adding value:
// vitest.config.ts
coverage: {
exclude: [
"src/**/*.d.ts",
"src/**/types.ts",
"src/**/*.stories.tsx",
"src/generated/**",
".velite/**",
],
}
You can also ignore specific lines or blocks in your source code:
/* v8 ignore start */
if (process.env.NODE_ENV === "development") {
console.log("Debug info:", data);
}
/* v8 ignore stop */
// Or for Istanbul
/* istanbul ignore next */
function devOnlyHelper() { /* ... */ }
Use this sparingly. If you find yourself ignoring large chunks of code, either those chunks need tests or they shouldn't be in the coverage report to begin with.
Next.js components that use useRouter, usePathname, or useSearchParams need mocks:
import { vi } from "vitest";
vi.mock("next/navigation", () => ({
useRouter: () => ({
push: vi.fn(),
replace: vi.fn(),
back: vi.fn(),
prefetch: vi.fn(),
refresh: vi.fn(),
}),
usePathname: () => "/en/blog",
useSearchParams: () => new URLSearchParams("?page=1"),
useParams: () => ({ locale: "en" }),
}));
For tests that need to verify navigation was called:
import { useRouter } from "next/navigation";
vi.mock("next/navigation", () => ({
useRouter: vi.fn(),
}));
describe("LogoutButton", () => {
it("redirects to home after logout", async () => {
const mockPush = vi.fn();
vi.mocked(useRouter).mockReturnValue({
push: mockPush,
replace: vi.fn(),
back: vi.fn(),
prefetch: vi.fn(),
refresh: vi.fn(),
forward: vi.fn(),
});
const user = userEvent.setup();
render(<LogoutButton />);
await user.click(screen.getByRole("button", { name: /log out/i }));
expect(mockPush).toHaveBeenCalledWith("/");
});
});
For internationalized components using next-intl:
vi.mock("next-intl", () => ({
useTranslations: () => (key: string) => key,
useLocale: () => "en",
}));
This is the simplest approach — translations return the key itself, so t("hero.title") returns "hero.title". In assertions, you check for the translation key rather than the actual translated string. This makes tests language-independent.
If you need actual translations in a specific test:
vi.mock("next-intl", () => ({
useTranslations: () => {
const translations: Record<string, string> = {
"hero.title": "Welcome to My Site",
"hero.subtitle": "Building things for the web",
};
return (key: string) => translations[key] ?? key;
},
}));
Next.js Route Handlers are regular functions that take a Request and return a Response. They're straightforward to test:
import { GET, POST } from "@/app/api/users/route";
import { NextRequest } from "next/server";
describe("GET /api/users", () => {
it("returns users list", async () => {
const request = new NextRequest("http://localhost:3000/api/users");
const response = await GET(request);
const data = await response.json();
expect(response.status).toBe(200);
expect(data).toEqual(
expect.arrayContaining([
expect.objectContaining({ id: expect.any(Number), name: expect.any(String) }),
])
);
});
it("supports pagination via search params", async () => {
const request = new NextRequest("http://localhost:3000/api/users?page=2&limit=10");
const response = await GET(request);
const data = await response.json();
expect(data.page).toBe(2);
expect(data.items).toHaveLength(10);
});
});
describe("POST /api/users", () => {
it("creates a new user", async () => {
const request = new NextRequest("http://localhost:3000/api/users", {
method: "POST",
body: JSON.stringify({ name: "Alice", email: "alice@test.com" }),
});
const response = await POST(request);
const data = await response.json();
expect(response.status).toBe(201);
expect(data.name).toBe("Alice");
});
it("returns 400 for invalid body", async () => {
const request = new NextRequest("http://localhost:3000/api/users", {
method: "POST",
body: JSON.stringify({ name: "" }),
});
const response = await POST(request);
expect(response.status).toBe(400);
});
});
Next.js middleware runs at the edge and processes every request. Test it as a function:
import { middleware } from "@/middleware";
import { NextRequest } from "next/server";
function createRequest(path: string, headers: Record<string, string> = {}): NextRequest {
const url = new URL(path, "http://localhost:3000");
return new NextRequest(url, { headers });
}
describe("middleware", () => {
it("redirects unauthenticated users from protected routes", async () => {
const request = createRequest("/dashboard");
const response = await middleware(request);
expect(response.status).toBe(307);
expect(response.headers.get("location")).toContain("/login");
});
it("allows authenticated users through", async () => {
const request = createRequest("/dashboard", {
cookie: "session=valid-token",
});
const response = await middleware(request);
expect(response.status).toBe(200);
});
it("adds security headers", async () => {
const request = createRequest("/");
const response = await middleware(request);
expect(response.headers.get("x-frame-options")).toBe("DENY");
expect(response.headers.get("x-content-type-options")).toBe("nosniff");
});
it("handles locale detection", async () => {
const request = createRequest("/", {
"accept-language": "tr-TR,tr;q=0.9,en;q=0.8",
});
const response = await middleware(request);
expect(response.headers.get("location")).toContain("/tr");
});
});
Server Actions are async functions that run on the server. Since they're just functions, you can test them directly — but you may need to mock server-only dependencies:
vi.mock("@/lib/db", () => ({
db: {
user: {
update: vi.fn().mockResolvedValue({ id: "1", name: "Updated" }),
findUnique: vi.fn().mockResolvedValue({ id: "1", name: "Original" }),
},
},
}));
vi.mock("next/cache", () => ({
revalidatePath: vi.fn(),
revalidateTag: vi.fn(),
}));
import { updateProfile } from "@/app/actions/profile";
import { revalidatePath } from "next/cache";
describe("updateProfile", () => {
it("updates user and revalidates profile page", async () => {
const formData = new FormData();
formData.set("name", "New Name");
formData.set("bio", "New bio text");
const result = await updateProfile(formData);
expect(result.success).toBe(true);
expect(revalidatePath).toHaveBeenCalledWith("/profile");
});
it("returns error for invalid data", async () => {
const formData = new FormData();
// Missing required fields
const result = await updateProfile(formData);
expect(result.success).toBe(false);
expect(result.error).toBeDefined();
});
});
Most projects need the same providers wrapped around every component. Create a custom render:
// src/test/utils.tsx
import { render, type RenderOptions } from "@testing-library/react";
import type { ReactElement, ReactNode } from "react";
import { ThemeProvider } from "@/contexts/ThemeContext";
import { CartProvider, type CartItem } from "@/contexts/CartContext";
interface CustomRenderOptions extends Omit<RenderOptions, "wrapper"> {
theme?: "light" | "dark";
initialCartItems?: CartItem[];
}
function AllProviders({ children, theme = "light", initialCartItems = [] }: {
children: ReactNode;
theme?: "light" | "dark";
initialCartItems?: CartItem[];
}) {
return (
<ThemeProvider defaultTheme={theme}>
<CartProvider initialItems={initialCartItems}>
{children}
</CartProvider>
</ThemeProvider>
);
}
export function renderWithProviders(
ui: ReactElement,
options: CustomRenderOptions = {}
) {
const { theme, initialCartItems, ...renderOptions } = options;
return render(ui, {
wrapper: ({ children }) => (
<AllProviders theme={theme} initialCartItems={initialCartItems}>
{children}
</AllProviders>
),
...renderOptions,
});
}
// Re-export everything from testing library
export * from "@testing-library/react";
export { renderWithProviders as render };
Now every test file imports from your custom utils instead of @testing-library/react:
import { render, screen } from "@/test/utils";
it("renders in dark mode", () => {
render(<Header />, { theme: "dark" });
// Header and all its children have access to ThemeProvider and CartProvider
});
Vitest works with @testing-library/react's renderHook:
import { renderHook, act } from "@testing-library/react";
import { useCounter } from "@/hooks/useCounter";
describe("useCounter", () => {
it("starts at initial value", () => {
const { result } = renderHook(() => useCounter(10));
expect(result.current.count).toBe(10);
});
it("increments", () => {
const { result } = renderHook(() => useCounter(0));
act(() => {
result.current.increment();
});
expect(result.current.count).toBe(1);
});
it("decrements with floor", () => {
const { result } = renderHook(() => useCounter(0, { min: 0 }));
act(() => {
result.current.decrement();
});
expect(result.current.count).toBe(0); // doesn't go below min
});
});
Error boundaries deserve coverage too. Testing them requires a child component that throws on purpose:
import { render, screen } from "@testing-library/react";
import { vi } from "vitest";
import { ErrorBoundary } from "@/components/ErrorBoundary";
const ThrowingComponent = () => {
throw new Error("Test explosion");
};
describe("ErrorBoundary", () => {
// Suppress console.error for expected errors
beforeEach(() => {
vi.spyOn(console, "error").mockImplementation(() => {});
});
afterEach(() => {
vi.restoreAllMocks();
});
it("displays fallback UI when child throws", () => {
render(
<ErrorBoundary fallback={<div>Something went wrong</div>}>
<ThrowingComponent />
</ErrorBoundary>
);
expect(screen.getByText("Something went wrong")).toBeInTheDocument();
});
it("renders children when no error", () => {
render(
<ErrorBoundary fallback={<div>Error</div>}>
<div>All good</div>
</ErrorBoundary>
);
expect(screen.getByText("All good")).toBeInTheDocument();
expect(screen.queryByText("Error")).not.toBeInTheDocument();
});
});
Snapshot tests have a bad reputation because people use them as a substitute for real assertions. A snapshot of an entire component's HTML output is a maintenance burden — it breaks on every CSS class change, and nobody reviews the diff carefully.
But targeted snapshots can be useful:
import { render } from "@testing-library/react";
import { formatCurrency } from "@/lib/format";
// Good — small, targeted snapshot of a pure function's output
it("formats various currency values consistently", () => {
expect(formatCurrency(0)).toMatchInlineSnapshot('"$0.00"');
expect(formatCurrency(1234.5)).toMatchInlineSnapshot('"$1,234.50"');
expect(formatCurrency(-99.99)).toMatchInlineSnapshot('"-$99.99"');
expect(formatCurrency(1000000)).toMatchInlineSnapshot('"$1,000,000.00"');
});
// Bad — giant snapshot nobody will review
it("renders the dashboard", () => {
const { container } = render(<Dashboard />);
expect(container).toMatchSnapshot(); // Don't do this
});
Inline snapshots (toMatchInlineSnapshot) are better than file snapshots because the expected value is visible right in the test. You can see at a glance what the function returns without opening a separate .snap file.
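For reference, a formatCurrency like the one snapshotted above can be a thin wrapper over Intl.NumberFormat. This is a sketch of one plausible implementation, not necessarily what sits behind those snapshots:

```typescript
// A minimal USD formatter; Intl.NumberFormat handles the grouping,
// decimal padding, and sign placement seen in the snapshots above.
const usd = new Intl.NumberFormat("en-US", {
  style: "currency",
  currency: "USD",
});

function formatCurrency(value: number): string {
  return usd.format(value);
}

console.log(formatCurrency(1234.5)); // "$1,234.50"
```

Because the heavy lifting lives in the platform, the inline snapshots mostly pin down your chosen locale and currency options, which is exactly the kind of output that silently drifts when someone tweaks the formatter.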
Testing behavior rather than implementation is so important that it's worth a dedicated section. Consider two tests for the same feature:
// Implementation test — brittle, breaks on refactors
it("calls setState with new count", () => {
const setStateSpy = vi.spyOn(React, "useState");
render(<Counter />);
fireEvent.click(screen.getByText("+"));
expect(setStateSpy).toHaveBeenCalledWith(expect.any(Function));
});
// Behavior test — resilient, tests what the user sees
it("increments the displayed count when plus button is clicked", async () => {
const user = userEvent.setup();
render(<Counter />);
expect(screen.getByText("Count: 0")).toBeInTheDocument();
await user.click(screen.getByRole("button", { name: "+" }));
expect(screen.getByText("Count: 1")).toBeInTheDocument();
});
The first test breaks if you switch from useState to useReducer, even though the component works exactly the same. The second test only breaks if the component's behavior actually changes. It doesn't care how the count is managed internally — only that clicking "+" makes the number go up.
The litmus test is simple: can you refactor the implementation without changing the test? If yes, you're testing behavior. If no, you're testing implementation.
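One practical way to pass that litmus test is to pull state transitions into a pure reducer and assert on transitions directly. The names below (counterReducer, CounterAction) are illustrative, not taken from the component above:

```typescript
// State transitions as data in, data out: no rendering, no spies.
type CounterAction = { type: "increment" } | { type: "decrement" };

interface CounterState {
  count: number;
  min?: number;
}

function counterReducer(state: CounterState, action: CounterAction): CounterState {
  switch (action.type) {
    case "increment":
      return { ...state, count: state.count + 1 };
    case "decrement": {
      const next = state.count - 1;
      // Clamp at the optional floor, mirroring the useCounter min option.
      return {
        ...state,
        count: state.min !== undefined ? Math.max(state.min, next) : next,
      };
    }
  }
}

console.log(counterReducer({ count: 0 }, { type: "increment" }).count); // 1
console.log(counterReducer({ count: 0, min: 0 }, { type: "decrement" }).count); // 0
```

Whether the component drives this reducer through useReducer, useState, or an external store is invisible to these tests, which is precisely the point.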
Kent C. Dodds proposed the "Testing Trophy" as an alternative to the traditional testing pyramid:
╭─────────╮
│ E2E │ Few — expensive, slow, high confidence
├─────────┤
│ │
│ Integr. │ Most — good confidence-to-cost ratio
│ │
├─────────┤
│ Unit │ Some — fast, focused, low-cost
├─────────┤
│ Static │ Always — TypeScript, ESLint
╰─────────╯
The traditional pyramid puts unit tests at the bottom (lots of them) and integration tests in the middle (fewer). The trophy inverts this: integration tests are the sweet spot. They exercise several units working together, so they catch the wiring bugs that unit tests miss, while staying far cheaper and faster than end-to-end tests.
I follow this distribution in practice: TypeScript catches most of my type errors, I write unit tests for complex utilities and algorithms, integration tests for features and user flows, and a handful of E2E tests for the critical path (sign up, purchase, core workflow).
Tests exist to give you confidence to ship. Not confidence that every line of code runs — confidence that the application works for users. These are different things.
High confidence, high value:
- Integration tests of user flows: render a feature, interact the way a user would, assert on what they see.
- Unit tests of genuinely complex logic: parsers, formatters, reducers, anything with edge cases.
- Tests of error paths: what the user sees when a request fails or a component throws.
Low confidence, time wasters:
- Giant snapshots of entire component trees that nobody reviews.
- Assertions on CSS classes or DOM structure that users never perceive.
- Assertions that useState is called — testing the framework, not behavior.
The question to ask before writing a test: "If this test didn't exist, what bug could slip into production?" If the answer is "none that TypeScript wouldn't catch" or "none that anyone would notice," the test probably isn't worth writing.
Hard-to-test code is usually poorly designed code. If you need to mock five things to test one function, that function has too many dependencies. If you can't render a component without setting up elaborate context providers, the component is too coupled to its environment.
Tests are a user of your code. If your tests struggle to use your API, other developers will too. When you find yourself fighting the test setup, take it as a signal to refactor the code under test, not to add more mocks.
// Hard to test — function does too much
async function processOrder(orderId: string) {
const order = await db.orders.findById(orderId);
const user = await db.users.findById(order.userId);
const inventory = await checkInventory(order.items);
if (!inventory.available) {
await sendEmail(user.email, "out-of-stock", { items: inventory.unavailable });
return { success: false, reason: "out-of-stock" };
}
const payment = await chargeCard(user.paymentMethod, order.total);
if (!payment.success) {
await sendEmail(user.email, "payment-failed", { error: payment.error });
return { success: false, reason: "payment-failed" };
}
await db.orders.update(orderId, { status: "confirmed" });
await sendEmail(user.email, "order-confirmed", { orderId });
return { success: true };
}
// Easier to test — separated concerns
function determineOrderAction(
inventory: InventoryResult,
payment: PaymentResult
): OrderAction {
if (!inventory.available) return { type: "out-of-stock", items: inventory.unavailable };
if (!payment.success) return { type: "payment-failed", error: payment.error };
return { type: "confirmed" };
}
The second version is a pure function. You can test every combination of inventory and payment results without mocking a database, payment provider, or email service. The orchestration logic (fetching data, sending emails) lives in a thin layer that's tested at the integration level.
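Concretely, with assumed shapes for InventoryResult and PaymentResult (the original post doesn't show these types, so the interfaces below are guesses), the whole decision table fits in a few lines of plain data:

```typescript
// Assumed shapes; adjust to match the real domain types.
interface InventoryResult { available: boolean; unavailable: string[] }
interface PaymentResult { success: boolean; error?: string }
type OrderAction =
  | { type: "out-of-stock"; items: string[] }
  | { type: "payment-failed"; error?: string }
  | { type: "confirmed" };

function determineOrderAction(
  inventory: InventoryResult,
  payment: PaymentResult
): OrderAction {
  // Inventory is checked before payment, matching the original flow.
  if (!inventory.available) return { type: "out-of-stock", items: inventory.unavailable };
  if (!payment.success) return { type: "payment-failed", error: payment.error };
  return { type: "confirmed" };
}

// Every branch, no mocks: the decision table as plain values.
const inStock: InventoryResult = { available: true, unavailable: [] };
const outOfStock: InventoryResult = { available: false, unavailable: ["sku-1"] };
const paid: PaymentResult = { success: true };
const declined: PaymentResult = { success: false, error: "card_declined" };

console.log(determineOrderAction(outOfStock, paid).type); // "out-of-stock"
console.log(determineOrderAction(inStock, declined).type); // "payment-failed"
console.log(determineOrderAction(inStock, paid).type); // "confirmed"
```

Note that inventory failure wins even when payment would also fail, a priority rule that is trivial to pin down here and awkward to assert through four mocked services.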
This is the real value of testing: not catching bugs after they're written, but preventing bad designs before they're committed. The discipline of writing tests pushes you toward smaller functions, clearer interfaces, and more modular architecture. The tests are the side effect. The design improvement is the main event.
The worst thing that can happen to a test suite isn't that it has gaps. It's that people stop trusting it. A test suite with a few flaky tests that randomly fail on CI teaches the team to ignore red builds. Once that happens, the test suite is worse than useless — it actively provides false security.
If a test fails intermittently, fix it or delete it. If a test is slow, speed it up or move it to a separate slow-test suite. If a test breaks on every unrelated change, rewrite it to test behavior instead of implementation.
The goal is a test suite where every failure means something real is broken. When developers trust the tests, they run them before every commit. When they don't trust the tests, they bypass them with --no-verify and deploy with crossed fingers.
Build a test suite you'd bet your weekend on. Nothing less is worth maintaining.