
Logging

How Next Starter uses Pino for structured JSON logging in server actions and API routes, with automatic sensitive data redaction in production.

Next Starter uses Pino for server-side logging. Pino is a fast, low-overhead logger that outputs structured JSON, which log aggregation services like Datadog, Logtail, and Axiom can ingest directly.

The logger is configured in lib/logger.ts and behaves differently in development versus production.

Setup

// lib/logger.ts
import pino from "pino";
import pretty from "pino-pretty";

export const logger =
  process.env.NODE_ENV === "development"
    ? pino(
        pretty({
          colorize: true,
          translateTime: "SYS:yyyy-mm-dd HH:MM:ss",
          ignore: "pid,hostname",
        }),
      )
    : pino({
        level: "info",
        redact: {
          paths: [
            "password",
            "token",
            "secret",
            "authorization",
            "cookie",
            "auth",
            "jwt",
          ],
          censor: "[REDACTED]",
        },
      });

In development, logs are formatted with pino-pretty for human-readable terminal output. In production, the logger outputs raw JSON without pretty-printing for compatibility with log aggregation tools.

Pino must be externalized from the Next.js bundle to work correctly. Configure this in next.config.ts:

// next.config.ts
const nextConfig: NextConfig = {
  serverExternalPackages: ["pino", "pino-pretty"],
  // ...
};

Without this setting, Next.js attempts to bundle Pino and pino-pretty, which breaks their dynamic module loading (they rely on runtime requires and worker threads) and causes errors at runtime.

Log Levels

Pino supports the following log levels in order of severity:

Method            Level   Use for
logger.trace()    10      Very granular debugging (below the default level)
logger.debug()    20      Debugging information (below the default level)
logger.info()     30      Normal application events
logger.warn()     40      Something unexpected but recoverable
logger.error()    50      Errors that need attention
logger.fatal()    60      Unrecoverable errors; the process should exit

Both loggers default to level "info", so trace and debug calls are dropped early at negligible cost: Pino checks the level before serializing anything. To see lower-level output during development, set logger.level = "debug" or "trace" after importing the logger.
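The level gate can be pictured as a numeric threshold check. The sketch below is illustrative only, not Pino's actual implementation: each level name maps to the number shown in the table above, and a call is emitted only when its number meets or exceeds the logger's configured level.

```typescript
// Illustrative sketch of the level gate (not pino's actual implementation).
const levels: Record<string, number> = {
  trace: 10,
  debug: 20,
  info: 30,
  warn: 40,
  error: 50,
  fatal: 60,
};

// A call is emitted only if its level meets or exceeds the logger's level.
export const isEnabled = (callLevel: string, loggerLevel: string): boolean =>
  levels[callLevel] >= levels[loggerLevel];

console.log(isEnabled("debug", "info")); // false: debug is below the threshold
console.log(isEnabled("warn", "info")); // true
```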

Basic Usage

Import the logger singleton and call the appropriate level method:

import { logger } from "@/lib/logger";

logger.info("User signed in");
logger.warn("Rate limit approaching for IP");
logger.error("Failed to send email");

Pass structured context as the first argument, with the message as the second:

logger.info({ userId: "abc123", action: "sign-in" }, "User signed in");
logger.error({ userId: "abc123", error: err.message }, "Failed to update profile");

This produces JSON output in production that is easy to query:

{"level":30,"time":1708000000000,"userId":"abc123","action":"sign-in","msg":"User signed in"}
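Because each entry is a standalone JSON object on its own line, ad-hoc filtering is straightforward even without an aggregation service. A minimal sketch (the helper and sample lines are hypothetical, not part of Next Starter):

```typescript
// Parse newline-delimited JSON log lines and filter by a field.
const sampleLog = [
  '{"level":30,"time":1708000000000,"userId":"abc123","action":"sign-in","msg":"User signed in"}',
  '{"level":50,"time":1708000001000,"userId":"def456","msg":"Failed to update profile"}',
].join("\n");

export const entriesForUser = (ndjson: string, userId: string) =>
  ndjson
    .split("\n")
    .filter(Boolean) // skip empty lines
    .map((line) => JSON.parse(line))
    .filter((entry) => entry.userId === userId);

console.log(entriesForUser(sampleLog, "abc123").map((e) => e.msg)); // [ 'User signed in' ]
```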

Using the Logger in Server Actions

Server Actions run on the server, so the logger is safe to use directly. In Next Starter, logger calls inside server actions are wrapped in after() from next/server. This defers logging until after the response has been sent, keeping action response times fast:

// app/actions/settings.ts
"use server";

import { after } from "next/server";
import { logger } from "@/lib/logger";
import { getSession } from "@/lib/server/auth-helpers";

export async function updateUserSettings(data: unknown): Promise<ApiResponse> {
  try {
    const session = await getSession();
    if (!session) return { success: false, error: "Unauthorized" };

    // ... perform update ...
    return { success: true };
  } catch (error) {
    after(() => {
      logger.error(
        { event: "settings_update_failed", err: error },
        "Failed to update user settings",
      );
    });

    return { success: false, error: "Failed to update settings" };
  }
}

The after() wrapper is the standard pattern used throughout the server action files. It ensures the logger call runs after the response is returned, so it does not block the action's return value.

Using the Logger in API Routes

For API routes that process webhooks or external events, log both the incoming event and the outcome:

export async function POST(request: Request) {
  const body = await request.text();
  const signature = request.headers.get("stripe-signature");

  logger.info({ event: "webhook.received" }, "Stripe webhook received");

  if (!signature) {
    logger.warn({ event: "webhook.unsigned" }, "Missing stripe-signature header");
    return new Response("Missing signature", { status: 400 });
  }

  try {
    const event = stripe.webhooks.constructEvent(body, signature, secret);
    logger.info({ type: event.type, id: event.id }, "Webhook verified");
    // handle event...
  } catch (err) {
    logger.error({ error: (err as Error).message }, "Webhook verification failed");
    return new Response("Invalid signature", { status: 400 });
  }

  return new Response(null, { status: 200 });
}

Child Loggers

Use child loggers to attach persistent context to a group of related log calls. This avoids repeating the same fields on every call:

import { logger } from "@/lib/logger";

export async function processUserImport(importId: string, userId: string) {
  const log = logger.child({ importId, userId });

  log.info("Import started");

  for (const row of rows) {
    log.debug({ rowIndex: row.index }, "Processing row");
    // ...
  }

  log.info({ rowCount: rows.length }, "Import complete");
}

Every log call on the child logger automatically includes importId and userId without repeating them.

Sensitive Data Redaction

The production logger automatically redacts the following top-level fields from any logged object, replacing their values with [REDACTED]:

  • password
  • token
  • secret
  • authorization
  • cookie
  • auth
  • jwt

Redaction matches exact paths: with the configuration above, only top-level keys named password, token, and so on are replaced. A sensitive value nested inside another object (for example user.password) is not covered unless you add a nested or wildcard path. For example:

// Safe: the top-level "password" field is automatically redacted in production
logger.info({ userId: "123", password: "hunter2" }, "Login attempt");
// Output: {"level":30,...,"userId":"123","password":"[REDACTED]","msg":"Login attempt"}

To redact additional fields, add their paths to the redact.paths array in lib/logger.ts. Pino supports nested paths using dot notation:

redact: {
  paths: [
    "password",
    "token",
    "secret",
    "authorization",
    "cookie",
    "auth",
    "jwt",
    "creditCard.number",   // nested field
    "*.apiKey",            // apiKey on any top-level object
  ],
  censor: "[REDACTED]",
},

Production Logging Best Practices

Always pass structured context, not interpolated strings.

// Good
logger.error({ userId, error: err.message }, "Failed to send email");

// Avoid — harder to query and filter in log aggregation tools
logger.error(`Failed to send email to user ${userId}: ${err.message}`);

Log at the right level. Reserve error for genuine failures. Use warn for expected problems (rate limits, missing optional config). Use info for significant application events (sign-in, subscription change). Use debug for diagnostic details you only need during development.

Do not log inside loops unless absolutely necessary, as this can produce a large volume of output. Log a summary after the loop completes instead.
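The loop guidance above can be sketched as follows, using a minimal pino-compatible interface so the pattern is visible in isolation (the function and row shape are hypothetical):

```typescript
// Tally outcomes inside the loop; emit one summary log when it finishes.
type Log = { info: (context: object, msg: string) => void };

export function importRows(rows: { ok: boolean }[], log: Log) {
  let failed = 0;
  for (const row of rows) {
    if (!row.ok) failed += 1; // counted, not logged per row
  }
  log.info({ total: rows.length, failed }, "Import finished");
  return { total: rows.length, failed };
}
```

In Next Starter you would pass the shared logger (or a child logger with context such as importId attached) as log.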

Use after() in server actions. Wrap logger calls in after() from next/server so that logging does not delay the response returned to the client.

Do not use the logger in Client Components. Pino is a server-only module. Importing it in a file that is bundled for the browser will cause a build error.
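One way to make that rule enforceable at build time (an optional hardening step, assuming the server-only package is installed) is to mark the logger module itself as server-only:

```typescript
// lib/logger.ts (top of file)
// Importing this module from client-bundled code now fails the build
// with a clear error instead of an opaque Pino bundling failure.
import "server-only";
```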

Connecting to a Log Aggregation Service

In production, pipe stdout to your log aggregation service. Pino outputs newline-delimited JSON, so any service that accepts JSON logs works without additional configuration.

For platforms like Railway, Render, or Fly.io, stdout is collected automatically and can be forwarded to a connected log drain. For self-hosted deployments, configure your Docker setup to forward container stdout to your preferred destination.

No code changes are needed. The production logger already outputs the correct format.
