---
title: Workers Best Practices
description: Code patterns and configuration guidance for building fast, reliable, observable, and secure Workers.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Workers Best Practices

Best practices for Workers based on production patterns, Cloudflare's own internal usage, and common issues seen across the developer community.

## Configuration

### Keep your compatibility date current

The [compatibility\_date](https://developers.cloudflare.com/workers/configuration/compatibility-dates/) controls which runtime features and bug fixes are available to your Worker. Setting it to today's date on new projects ensures you get the latest behavior. Periodically updating it on existing projects gives you access to new APIs and fixes without changing your code.

wrangler.jsonc

```
{
  "name": "my-worker",
  "main": "src/index.ts",
  // Set this to today's date
  "compatibility_date": "2026-05-08",
  "compatibility_flags": ["nodejs_compat"],
}
```

wrangler.toml

```
name = "my-worker"
main = "src/index.ts"
# Set this to today's date
compatibility_date = "2026-05-08"
compatibility_flags = [ "nodejs_compat" ]
```

For more information, refer to [Compatibility dates](https://developers.cloudflare.com/workers/configuration/compatibility-dates/).

### Enable nodejs\_compat

The [nodejs\_compat](https://developers.cloudflare.com/workers/runtime-apis/nodejs/) compatibility flag gives your Worker access to Node.js built-in modules like `node:crypto`, `node:buffer`, `node:stream`, and others. Many libraries depend on these modules, and enabling this flag avoids cryptic import errors at runtime.

wrangler.jsonc

```
{
  "name": "my-worker",
  "main": "src/index.ts",
  // Set this to today's date
  "compatibility_date": "2026-05-08",
  // Enables Node.js built-in modules like node:crypto and node:buffer
  "compatibility_flags": ["nodejs_compat"],
}
```

wrangler.toml

```
name = "my-worker"
main = "src/index.ts"
# Set this to today's date
compatibility_date = "2026-05-08"
# Enables Node.js built-in modules like node:crypto and node:buffer
compatibility_flags = [ "nodejs_compat" ]
```

For more information, refer to [Node.js compatibility](https://developers.cloudflare.com/workers/runtime-apis/nodejs/).

### Generate binding types with wrangler types

Do not hand-write your `Env` interface. Run [wrangler types](https://developers.cloudflare.com/workers/wrangler/commands/general/#types) to generate a type definition file that matches your actual Wrangler configuration. This catches mismatches between your config and code at compile time instead of at deploy time.

Re-run `wrangler types` whenever you add or rename a binding.


```
npx wrangler types
```

```
yarn wrangler types
```

```
pnpm wrangler types
```

src/index.js

```
// ✅ Good: Env is generated by wrangler types and always matches your config
// Do not manually define Env — it drifts from your actual bindings

export default {
  async fetch(request, env) {
    // env.MY_KV, env.MY_BUCKET, etc. are all correctly typed
    const value = await env.MY_KV.get("key");
    return new Response(value);
  },
};
```

src/index.ts

```
// ✅ Good: Env is generated by wrangler types and always matches your config
// Do not manually define Env — it drifts from your actual bindings

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // env.MY_KV, env.MY_BUCKET, etc. are all correctly typed
    const value = await env.MY_KV.get("key");
    return new Response(value);
  },
} satisfies ExportedHandler<Env>;
```

For more information, refer to [wrangler types](https://developers.cloudflare.com/workers/wrangler/commands/general/#types).

### Store secrets with wrangler secret, not in source

Secrets (API keys, tokens, database credentials) must never appear in your Wrangler configuration or source code. Use [wrangler secret put](https://developers.cloudflare.com/workers/configuration/secrets/) to store them securely, and access them through `env` at runtime. For local development, use a `.env` file (and make sure it is in your `.gitignore`). For more information, refer to [Environment variables](https://developers.cloudflare.com/workers/configuration/environment-variables/).

wrangler.jsonc

```
{
  "name": "my-worker",
  "main": "src/index.ts",
  // Set this to today's date
  "compatibility_date": "2026-05-08",
  "compatibility_flags": ["nodejs_compat"],

  // ✅ Good: non-secret configuration lives in version control
  "vars": {
    "API_BASE_URL": "https://api.example.com",
  },

  // 🔴 Bad: never put secrets here
  // "API_KEY": "sk-live-abc123..."
}
```

wrangler.toml

```
name = "my-worker"
main = "src/index.ts"
# Set this to today's date
compatibility_date = "2026-05-08"
compatibility_flags = [ "nodejs_compat" ]

[vars]
API_BASE_URL = "https://api.example.com"
```

To add a secret, run the following command and provide the secret interactively when prompted:


```
npx wrangler secret put API_KEY
```

```
yarn wrangler secret put API_KEY
```

```
pnpm wrangler secret put API_KEY
```

You can also pipe secrets from other tools or environment variables:

Terminal window

```
# Pipe from another CLI tool
npx some-cli-tool --get-secret | npx wrangler secret put API_KEY

# Pipe from an environment variable or .env file
echo "$API_KEY" | npx wrangler secret put API_KEY
```
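At runtime, a secret is just a property on `env`, exactly like a plain var. A minimal sketch of reading the `API_KEY` stored above (the helper function and upstream URL are illustrative, not part of the Workers API):

```typescript
// Illustrative helper: builds an Authorization header from a secret on env.
// Only the binding name appears in code; the value stays out of source control.
function authHeaders(env: { API_KEY: string }): Record<string, string> {
  return { Authorization: `Bearer ${env.API_KEY}` };
}

export default {
  async fetch(request: Request, env: { API_KEY: string }): Promise<Response> {
    const upstream = await fetch("https://api.example.com/data", {
      headers: authHeaders(env),
    });
    return new Response(upstream.body, upstream);
  },
};
```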

For more information, refer to [Secrets](https://developers.cloudflare.com/workers/configuration/secrets/).

### Configure environments deliberately

[Wrangler environments](https://developers.cloudflare.com/workers/wrangler/environments/) let you deploy the same code to separate Workers for production, staging, and development. Each environment creates a distinct Worker named `{name}-{env}` (for example, `my-api-production` and `my-api-staging`).

Environments are independent: bindings and vars must be declared per environment and are not inherited (refer to [non-inheritable keys](https://developers.cloudflare.com/workers/wrangler/configuration/#non-inheritable-keys)). The root Worker (without an environment suffix) is also a separate deployment; if you do not intend to use it, always pass `--env` when deploying.

wrangler.jsonc

```
{
  "name": "my-api",
  "main": "src/index.ts",
  // Set this to today's date
  "compatibility_date": "2026-05-08",
  "compatibility_flags": ["nodejs_compat"],

  // This binding only applies to the root Worker
  "kv_namespaces": [{ "binding": "CACHE", "id": "dev-kv-id" }],

  "env": {
    // Production environment: deploys as "my-api-production"
    "production": {
      "kv_namespaces": [{ "binding": "CACHE", "id": "prod-kv-id" }],
      "routes": [
        { "pattern": "api.example.com/*", "zone_name": "example.com" },
      ],
    },
    // Staging environment: deploys as "my-api-staging"
    "staging": {
      "kv_namespaces": [{ "binding": "CACHE", "id": "staging-kv-id" }],
      "routes": [
        { "pattern": "api-staging.example.com/*", "zone_name": "example.com" },
      ],
    },
  },
}
```

wrangler.toml

```
name = "my-api"
main = "src/index.ts"
# Set this to today's date
compatibility_date = "2026-05-08"
compatibility_flags = [ "nodejs_compat" ]

[[kv_namespaces]]
binding = "CACHE"
id = "dev-kv-id"

[[env.production.kv_namespaces]]
binding = "CACHE"
id = "prod-kv-id"

[[env.production.routes]]
pattern = "api.example.com/*"
zone_name = "example.com"

[[env.staging.kv_namespaces]]
binding = "CACHE"
id = "staging-kv-id"

[[env.staging.routes]]
pattern = "api-staging.example.com/*"
zone_name = "example.com"
```

With this configuration file, to deploy to staging:


```
npx wrangler deploy --env staging
```

```
yarn wrangler deploy --env staging
```

```
pnpm wrangler deploy --env staging
```

For more information, refer to [Environments](https://developers.cloudflare.com/workers/wrangler/environments/).

### Set up custom domains or routes correctly

Workers support two routing mechanisms, and they serve different purposes:

* **[Custom domains](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/)**: The Worker **is** the origin. Cloudflare creates DNS records and SSL certificates automatically. Use this when your Worker handles all traffic for a hostname.
* **[Routes](https://developers.cloudflare.com/workers/configuration/routing/routes/)**: The Worker runs **in front of** an existing origin server. You must have a Cloudflare proxied (orange-clouded) DNS record for the hostname before adding a route.

The most common mistake with routes is missing the DNS record. Without a proxied DNS record, requests to the hostname return `ERR_NAME_NOT_RESOLVED` and never reach your Worker. If you do not have a real origin, add a proxied `AAAA` record pointing to `100::` as a placeholder.

wrangler.jsonc

```
{
  "name": "my-worker",
  "main": "src/index.ts",
  // Set this to today's date
  "compatibility_date": "2026-05-08",
  "compatibility_flags": ["nodejs_compat"],

  // Option 1: Custom domain — Worker is the origin, DNS is managed automatically
  "routes": [{ "pattern": "api.example.com", "custom_domain": true }],

  // Option 2: Route — Worker runs in front of an existing origin
  // Requires a proxied DNS record for shop.example.com
  // "routes": [
  //   { "pattern": "shop.example.com/*", "zone_name": "example.com" }
  // ]
}
```

wrangler.toml

```
name = "my-worker"
main = "src/index.ts"
# Set this to today's date
compatibility_date = "2026-05-08"
compatibility_flags = [ "nodejs_compat" ]

[[routes]]
pattern = "api.example.com"
custom_domain = true
```

For more information, refer to [Routing](https://developers.cloudflare.com/workers/configuration/routing/).

## Request and response handling

### Stream request and response bodies

Streaming large request and response bodies is a best practice in any language: it reduces peak memory usage and improves time-to-first-byte. On Workers it matters even more, because the runtime has a [128 MB memory limit](https://developers.cloudflare.com/workers/platform/limits/), so buffering an entire body with `await response.text()` or `await request.arrayBuffer()` will crash your Worker on large payloads.

For request bodies you do consume entirely (JSON payloads, file uploads), enforce a maximum size before reading. This prevents clients from sending data you do not want to process.
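One way to enforce such a limit, sketched below under the assumption of a 1 MiB cap (the number and handler are illustrative), is to check `Content-Length` before reading the body:

```typescript
const MAX_BODY_BYTES = 1024 * 1024; // 1 MiB cap; tune per endpoint

// Content-Length can be absent (for example, chunked uploads), so a missing
// header is treated as "unknown" here rather than rejected outright.
function bodyTooLarge(contentLength: string | null, maxBytes: number): boolean {
  return contentLength !== null && Number(contentLength) > maxBytes;
}

export default {
  async fetch(request: Request): Promise<Response> {
    if (bodyTooLarge(request.headers.get("Content-Length"), MAX_BODY_BYTES)) {
      return new Response("Payload too large", { status: 413 });
    }
    // Safe to buffer now that the size is bounded
    const payload = await request.json();
    return Response.json({ ok: true, payload });
  },
};
```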

Stream data through your Worker using `TransformStream` to pipe from a source to a destination without holding it all in memory.
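When you need to modify data in flight rather than just pass it through, `pipeThrough` chains transforms without buffering. A sketch that rewrites each text chunk as it streams (the upstream URL is illustrative):

```typescript
// Uppercases each text chunk as it passes through; the full body is never
// held in memory at once.
function uppercaseStream(): TransformStream<string, string> {
  return new TransformStream<string, string>({
    transform(chunk, controller) {
      controller.enqueue(chunk.toUpperCase());
    },
  });
}

export default {
  async fetch(request: Request): Promise<Response> {
    const upstream = await fetch("https://api.example.com/feed");
    const body = upstream.body!
      .pipeThrough(new TextDecoderStream()) // bytes to text
      .pipeThrough(uppercaseStream()) // transform each chunk
      .pipeThrough(new TextEncoderStream()); // text back to bytes
    return new Response(body, upstream);
  },
};
```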

src/index.js

```
// 🔴 Bad: buffers the entire response body in memory
const badHandler = {
  async fetch(request, env) {
    const response = await fetch("https://api.example.com/large-dataset");
    const text = await response.text();
    return new Response(text);
  },
};

// ✅ Good: stream the response body through without buffering
export default {
  async fetch(request, env) {
    const response = await fetch("https://api.example.com/large-dataset");
    return new Response(response.body, response);
  },
};
```

src/index.ts

```
// 🔴 Bad: buffers the entire response body in memory
const badHandler = {
  async fetch(request: Request, env: Env): Promise<Response> {
    const response = await fetch("https://api.example.com/large-dataset");
    const text = await response.text();
    return new Response(text);
  },
} satisfies ExportedHandler<Env>;

// ✅ Good: stream the response body through without buffering
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const response = await fetch("https://api.example.com/large-dataset");
    return new Response(response.body, response);
  },
} satisfies ExportedHandler<Env>;
```

When you need to concatenate multiple responses (for example, fetching data from several upstream APIs), pipe each body sequentially into a single writable stream. This avoids buffering any of the responses in memory.

src/concat.js

```
export default {
  async fetch(request, env, ctx) {
    const urls = [
      "https://api.example.com/part-1",
      "https://api.example.com/part-2",
      "https://api.example.com/part-3",
    ];

    const { readable, writable } = new TransformStream();

    // ✅ Good: pipe each response body sequentially without buffering
    const pipeline = (async () => {
      try {
        for (const url of urls) {
          const response = await fetch(url);
          if (response.body) {
            // pipeTo with preventClose keeps the writable open for the next response
            await response.body.pipeTo(writable, { preventClose: true });
          }
        }
        await writable.close();
      } catch (err) {
        await writable.abort(err);
      }
    })();

    // Keep the pipeline running until it finishes, even after the response returns
    ctx.waitUntil(pipeline);

    // Return the readable side immediately — data streams as it arrives
    return new Response(readable, {
      headers: { "Content-Type": "application/octet-stream" },
    });
  },
};
```

src/concat.ts

```
export default {
  async fetch(
    request: Request,
    env: Env,
    ctx: ExecutionContext,
  ): Promise<Response> {
    const urls = [
      "https://api.example.com/part-1",
      "https://api.example.com/part-2",
      "https://api.example.com/part-3",
    ];

    const { readable, writable } = new TransformStream();

    // ✅ Good: pipe each response body sequentially without buffering
    const pipeline = (async () => {
      try {
        for (const url of urls) {
          const response = await fetch(url);
          if (response.body) {
            // pipeTo with preventClose keeps the writable open for the next response
            await response.body.pipeTo(writable, { preventClose: true });
          }
        }
        await writable.close();
      } catch (err) {
        await writable.abort(err);
      }
    })();

    // Keep the pipeline running until it finishes, even after the response returns
    ctx.waitUntil(pipeline);

    // Return the readable side immediately — data streams as it arrives
    return new Response(readable, {
      headers: { "Content-Type": "application/octet-stream" },
    });
  },
} satisfies ExportedHandler<Env>;
```

For more information, refer to [Streams](https://developers.cloudflare.com/workers/runtime-apis/streams/).

### Use waitUntil for work after the response

[ctx.waitUntil()](https://developers.cloudflare.com/workers/runtime-apis/context/) lets you perform work after the response is sent to the client, such as analytics, cache writes, non-critical logging, or webhook notifications. This keeps your response fast while still completing background tasks.

There are two common pitfalls: destructuring `ctx` (which loses the `this` binding and throws "Illegal invocation"), and exceeding the 30-second `waitUntil` time limit after the response is sent.

src/index.js

```
// 🔴 Bad: destructuring ctx loses the `this` binding
const badHandler = {
  async fetch(request, env, ctx) {
    const { waitUntil } = ctx; // "Illegal invocation" at runtime
    waitUntil(fetch("https://analytics.example.com/events"));
    return new Response("OK");
  },
};

// ✅ Good: send the response immediately, do background work after
export default {
  async fetch(request, env, ctx) {
    const data = await processRequest(request);

    ctx.waitUntil(logToAnalytics(env, data));
    ctx.waitUntil(updateCache(env, data));

    return Response.json(data);
  },
};

// Placeholder for your request-handling logic
async function processRequest(request) {
  return { path: new URL(request.url).pathname };
}

async function logToAnalytics(env, data) {
  await fetch("https://analytics.example.com/events", {
    method: "POST",
    body: JSON.stringify(data),
  });
}

async function updateCache(env, data) {
  await env.CACHE.put("latest", JSON.stringify(data));
}
```

src/index.ts

```
// 🔴 Bad: destructuring ctx loses the `this` binding
const badHandler = {
  async fetch(
    request: Request,
    env: Env,
    ctx: ExecutionContext,
  ): Promise<Response> {
    const { waitUntil } = ctx; // "Illegal invocation" at runtime
    waitUntil(fetch("https://analytics.example.com/events"));
    return new Response("OK");
  },
} satisfies ExportedHandler<Env>;

// ✅ Good: send the response immediately, do background work after
export default {
  async fetch(
    request: Request,
    env: Env,
    ctx: ExecutionContext,
  ): Promise<Response> {
    const data = await processRequest(request);

    ctx.waitUntil(logToAnalytics(env, data));
    ctx.waitUntil(updateCache(env, data));

    return Response.json(data);
  },
} satisfies ExportedHandler<Env>;

// Placeholder for your request-handling logic
async function processRequest(request: Request): Promise<{ path: string }> {
  return { path: new URL(request.url).pathname };
}

async function logToAnalytics(env: Env, data: unknown): Promise<void> {
  await fetch("https://analytics.example.com/events", {
    method: "POST",
    body: JSON.stringify(data),
  });
}

async function updateCache(env: Env, data: unknown): Promise<void> {
  await env.CACHE.put("latest", JSON.stringify(data));
}
```

For more information, refer to [Context](https://developers.cloudflare.com/workers/runtime-apis/context/).

## Architecture

### Use bindings for Cloudflare services, not REST APIs

Cloudflare services such as R2, KV, D1, Queues, and Workflows are available as [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/). Bindings are direct, in-process references that require no network hop, no authentication, and no extra latency. Calling the REST API from inside a Worker instead adds a round trip, API token management, and unnecessary complexity.

src/index.js

```
// 🔴 Bad: calling the REST API from a Worker
const badHandler = {
  async fetch(request, env) {
    const response = await fetch(
      "https://api.cloudflare.com/client/v4/accounts/ACCOUNT_ID/r2/buckets/BUCKET_NAME/objects/my-file",
      { headers: { Authorization: `Bearer ${env.CF_API_TOKEN}` } },
    );
    return new Response(response.body);
  },
};

// ✅ Good: use the binding directly — no network hop, no auth needed
export default {
  async fetch(request, env) {
    const object = await env.MY_BUCKET.get("my-file");

    if (!object) {
      return new Response("Not found", { status: 404 });
    }

    return new Response(object.body, {
      headers: {
        "Content-Type":
          object.httpMetadata?.contentType ?? "application/octet-stream",
      },
    });
  },
};
```

src/index.ts

```
// 🔴 Bad: calling the REST API from a Worker
const badHandler = {
  async fetch(request: Request, env: Env): Promise<Response> {
    const response = await fetch(
      "https://api.cloudflare.com/client/v4/accounts/ACCOUNT_ID/r2/buckets/BUCKET_NAME/objects/my-file",
      { headers: { Authorization: `Bearer ${env.CF_API_TOKEN}` } },
    );
    return new Response(response.body);
  },
} satisfies ExportedHandler<Env>;

// ✅ Good: use the binding directly — no network hop, no auth needed
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const object = await env.MY_BUCKET.get("my-file");

    if (!object) {
      return new Response("Not found", { status: 404 });
    }

    return new Response(object.body, {
      headers: {
        "Content-Type":
          object.httpMetadata?.contentType ?? "application/octet-stream",
      },
    });
  },
} satisfies ExportedHandler<Env>;
```

### Use Queues and Workflows for async and background work

Long-running, retryable, or non-urgent tasks should not block a request. Use [Queues](https://developers.cloudflare.com/queues/) and [Workflows](https://developers.cloudflare.com/workflows/) to move work out of the critical path. They serve different purposes:

**Use Queues when** you need to decouple a producer from a consumer. Queues are a message broker: one Worker sends a message, another Worker processes it later. They are the right choice for fan-out (one event triggers many consumers), buffering and batching (aggregate messages before writing to a downstream service), and simple single-step background jobs (send an email, fire a webhook, write a log). Queues provide at-least-once delivery with configurable retries per message.

**Use Workflows when** the background work has multiple steps that depend on each other. Workflows are a durable execution engine: each step's return value is persisted, and if a step fails, only that step is retried — not the entire job. They are the right choice for multi-step processes (charge a card, then create a shipment, then send a confirmation), long-running tasks that need to pause and resume (wait hours or days for an external event or human approval via `step.waitForEvent()`), and complex conditional logic where later steps depend on earlier results. Workflows can run for hours, days, or weeks.

**Use both together** when a high-throughput entry point feeds into complex processing. For example, a Queue can buffer incoming orders, and the consumer can create a Workflow instance for each order that requires multi-step fulfillment.

src/index.js

```
export default {
  async fetch(request, env) {
    const order = await request.json();

    if (order.type === "simple") {
      // ✅ Queue: single-step background job — send a message for async processing
      await env.ORDER_QUEUE.send({
        orderId: order.id,
        action: "send-confirmation-email",
      });
    } else {
      // ✅ Workflow: multi-step durable process — payment, fulfillment, notification
      const instance = await env.FULFILLMENT_WORKFLOW.create({
        params: { orderId: order.id },
      });
    }

    return Response.json({ status: "accepted" }, { status: 202 });
  },
};
```

src/index.ts

```
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const order = await request.json<{ id: string; type: string }>();

    if (order.type === "simple") {
      // ✅ Queue: single-step background job — send a message for async processing
      await env.ORDER_QUEUE.send({
        orderId: order.id,
        action: "send-confirmation-email",
      });
    } else {
      // ✅ Workflow: multi-step durable process — payment, fulfillment, notification
      const instance = await env.FULFILLMENT_WORKFLOW.create({
        params: { orderId: order.id },
      });
    }

    return Response.json({ status: "accepted" }, { status: 202 });
  },
} satisfies ExportedHandler<Env>;
```
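The consumer side of the "use both together" pattern can be sketched as a queue handler that creates one Workflow instance per message. The message shape and the `FULFILLMENT_WORKFLOW` binding are assumptions carried over from the producer example, not a fixed API:

```typescript
// Illustrative message shape, matching the producer sketch above
interface OrderMessage {
  body: { orderId: string };
  ack(): void;
  retry(): void;
}

const consumer = {
  // The queue buffers bursts of orders; each message hands off to a durable
  // multi-step Workflow for fulfillment.
  async queue(batch: { messages: OrderMessage[] }, env: any): Promise<void> {
    for (const message of batch.messages) {
      try {
        await env.FULFILLMENT_WORKFLOW.create({
          params: { orderId: message.body.orderId },
        });
        message.ack(); // processed: do not redeliver
      } catch {
        message.retry(); // transient failure: redeliver later
      }
    }
  },
};

export default consumer;
```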

For more information, refer to [Queues](https://developers.cloudflare.com/queues/) and [Workflows](https://developers.cloudflare.com/workflows/).

### Use service bindings for Worker-to-Worker communication

When one Worker needs to call another, use [service bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/) instead of making an HTTP request to a public URL. Service bindings are zero-cost, bypass the public internet, and support type-safe RPC.

src/index.js

```
import { WorkerEntrypoint } from "cloudflare:workers";

// The "auth" Worker exposes RPC methods
export class AuthService extends WorkerEntrypoint {
  async verifyToken(token) {
    // Token verification logic
    return { userId: "user-123", valid: true };
  }
}

// The "api" Worker calls the auth Worker via a service binding
export default {
  async fetch(request, env) {
    const token = request.headers.get("Authorization")?.replace("Bearer ", "");

    if (!token) {
      return new Response("Unauthorized", { status: 401 });
    }

    // ✅ Good: call another Worker via service binding RPC — no network hop
    const auth = await env.AUTH_SERVICE.verifyToken(token);

    if (!auth.valid) {
      return new Response("Invalid token", { status: 403 });
    }

    return Response.json({ userId: auth.userId });
  },
};
```

src/index.ts

```
import { WorkerEntrypoint } from "cloudflare:workers";

// The "auth" Worker exposes RPC methods
export class AuthService extends WorkerEntrypoint {
  async verifyToken(
    token: string,
  ): Promise<{ userId: string; valid: boolean }> {
    // Token verification logic
    return { userId: "user-123", valid: true };
  }
}

// The "api" Worker calls the auth Worker via a service binding
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const token = request.headers.get("Authorization")?.replace("Bearer ", "");

    if (!token) {
      return new Response("Unauthorized", { status: 401 });
    }

    // ✅ Good: call another Worker via service binding RPC — no network hop
    const auth = await env.AUTH_SERVICE.verifyToken(token);

    if (!auth.valid) {
      return new Response("Invalid token", { status: 403 });
    }

    return Response.json({ userId: auth.userId });
  },
} satisfies ExportedHandler<Env>;
```
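The binding itself is declared in the calling Worker's configuration. A minimal sketch, assuming the auth Worker is deployed under the name `auth` (the names here are illustrative):

wrangler.jsonc

```
{
  "name": "api",
  "main": "src/index.ts",
  "compatibility_date": "2026-05-08",
  // Binds the deployed "auth" Worker; "entrypoint" selects its exported class
  "services": [
    { "binding": "AUTH_SERVICE", "service": "auth", "entrypoint": "AuthService" }
  ]
}
```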

### Use Hyperdrive for external database connections

Always use [Hyperdrive](https://developers.cloudflare.com/hyperdrive/) when connecting to a remote PostgreSQL or MySQL database from a Worker. Hyperdrive maintains a regional connection pool close to your database, eliminating the per-request cost of TCP handshake, TLS negotiation, and connection setup. It also caches query results where possible.

Create a new `Client` on each request: Hyperdrive manages the underlying pool, so client creation is cheap. Database drivers such as `pg` require the `nodejs_compat` flag.

wrangler.jsonc

```
{
  "name": "my-worker",
  "main": "src/index.ts",
  // Set this to today's date
  "compatibility_date": "2026-05-08",
  "compatibility_flags": ["nodejs_compat"],

  "hyperdrive": [{ "binding": "HYPERDRIVE", "id": "<YOUR_HYPERDRIVE_ID>" }],
}
```

wrangler.toml

```
name = "my-worker"
main = "src/index.ts"
# Set this to today's date
compatibility_date = "2026-05-08"
compatibility_flags = [ "nodejs_compat" ]

[[hyperdrive]]
binding = "HYPERDRIVE"
id = "<YOUR_HYPERDRIVE_ID>"
```

src/index.js

```
import { Client } from "pg";

export default {
  async fetch(request, env, ctx) {
    // ✅ Good: create a new client per request — Hyperdrive pools the underlying connection
    const client = new Client({
      connectionString: env.HYPERDRIVE.connectionString,
    });

    try {
      await client.connect();
      const result = await client.query("SELECT id, name FROM users LIMIT 10");
      // Clean up the connection without delaying the response
      ctx.waitUntil(client.end());
      return Response.json(result.rows);
    } catch (e) {
      console.error(
        JSON.stringify({ message: "database query failed", error: String(e) }),
      );
      return Response.json({ error: "Database error" }, { status: 500 });
    }
  },
};

// 🔴 Bad: connecting directly to a remote database without Hyperdrive
// Every request pays the full TCP + TLS + auth cost (often 300-500ms)
const badHandler = {
  async fetch(request, env) {
    const client = new Client({
      connectionString: "postgres://user:pass@db.example.com:5432/mydb",
    });
    await client.connect();
    const result = await client.query("SELECT id, name FROM users LIMIT 10");
    return Response.json(result.rows);
  },
};
```

src/index.ts

```
import { Client } from "pg";

export default {
  async fetch(
    request: Request,
    env: Env,
    ctx: ExecutionContext,
  ): Promise<Response> {
    // ✅ Good: create a new client per request — Hyperdrive pools the underlying connection
    const client = new Client({
      connectionString: env.HYPERDRIVE.connectionString,
    });

    try {
      await client.connect();
      const result = await client.query("SELECT id, name FROM users LIMIT 10");
      // Clean up the connection without delaying the response
      ctx.waitUntil(client.end());
      return Response.json(result.rows);
    } catch (e) {
      console.error(
        JSON.stringify({ message: "database query failed", error: String(e) }),
      );
      return Response.json({ error: "Database error" }, { status: 500 });
    }
  },
} satisfies ExportedHandler<Env>;

// 🔴 Bad: connecting directly to a remote database without Hyperdrive
// Every request pays the full TCP + TLS + auth cost (often 300-500ms)
const badHandler = {
  async fetch(request: Request, env: Env): Promise<Response> {
    const client = new Client({
      connectionString: "postgres://user:pass@db.example.com:5432/mydb",
    });
    await client.connect();
    const result = await client.query("SELECT id, name FROM users LIMIT 10");
    return Response.json(result.rows);
  },
} satisfies ExportedHandler<Env>;
```

For more information, refer to [Hyperdrive](https://developers.cloudflare.com/hyperdrive/).

### Use Durable Objects for WebSockets

Plain Workers can upgrade HTTP connections to WebSockets, but they lack persistent state and hibernation. If the isolate is evicted, the connection is lost because there is no persistent actor to hold it. For reliable, long-lived WebSocket connections, use [Durable Objects](https://developers.cloudflare.com/durable-objects/) with the [Hibernation API](https://developers.cloudflare.com/durable-objects/best-practices/websockets/). Durable Objects keep WebSocket connections open even while the object is evicted from memory, and automatically wake up when a message arrives.

Use `this.ctx.acceptWebSocket()` instead of `ws.accept()` to enable hibernation. Use `setWebSocketAutoResponse` for ping/pong heartbeats that do not wake the object.

* [  JavaScript ](#tab-panel-9047)
* [  TypeScript ](#tab-panel-9048)

src/index.js

```

import { DurableObject } from "cloudflare:workers";


// Parent Worker: upgrades HTTP to WebSocket and routes to a Durable Object

export default {

  async fetch(request, env) {

    if (request.headers.get("Upgrade") !== "websocket") {

      return new Response("Expected WebSocket", { status: 426 });

    }


    const stub = env.CHAT_ROOM.getByName("default-room");

    return stub.fetch(request);

  },

};


// Durable Object: manages WebSocket connections with hibernation

export class ChatRoom extends DurableObject {

  constructor(ctx, env) {

    super(ctx, env);

    // Auto ping/pong without waking the object

    this.ctx.setWebSocketAutoResponse(

      new WebSocketRequestResponsePair("ping", "pong"),

    );

  }


  async fetch(request) {

    const pair = new WebSocketPair();

    const [client, server] = Object.values(pair);


    // ✅ Good: acceptWebSocket enables hibernation

    this.ctx.acceptWebSocket(server);


    return new Response(null, { status: 101, webSocket: client });

  }


  // Called when a message arrives — the object wakes from hibernation if needed

  async webSocketMessage(ws, message) {

    for (const conn of this.ctx.getWebSockets()) {

      conn.send(typeof message === "string" ? message : "binary");

    }

  }


  async webSocketClose(ws, code, reason, wasClean) {

    // With web_socket_auto_reply_to_close (compat date >= 2026-04-07), the runtime

    // auto-replies to Close frames. Calling close() is safe but no longer required.

    ws.close(code, reason);

  }

}


```

src/index.ts

```

import { DurableObject } from "cloudflare:workers";


// Parent Worker: upgrades HTTP to WebSocket and routes to a Durable Object

export default {

  async fetch(request: Request, env: Env): Promise<Response> {

    if (request.headers.get("Upgrade") !== "websocket") {

      return new Response("Expected WebSocket", { status: 426 });

    }


    const stub = env.CHAT_ROOM.getByName("default-room");

    return stub.fetch(request);

  },

} satisfies ExportedHandler<Env>;


// Durable Object: manages WebSocket connections with hibernation

export class ChatRoom extends DurableObject {

  constructor(ctx: DurableObjectState, env: Env) {

    super(ctx, env);

    // Auto ping/pong without waking the object

    this.ctx.setWebSocketAutoResponse(

      new WebSocketRequestResponsePair("ping", "pong"),

    );

  }


  async fetch(request: Request): Promise<Response> {

    const pair = new WebSocketPair();

    const [client, server] = Object.values(pair);


    // ✅ Good: acceptWebSocket enables hibernation

    this.ctx.acceptWebSocket(server);


    return new Response(null, { status: 101, webSocket: client });

  }


  // Called when a message arrives — the object wakes from hibernation if needed

  async webSocketMessage(ws: WebSocket, message: string | ArrayBuffer) {

    for (const conn of this.ctx.getWebSockets()) {

      conn.send(message); // send() accepts both string and ArrayBuffer frames

    }

  }


  async webSocketClose(

    ws: WebSocket,

    code: number,

    reason: string,

    wasClean: boolean,

  ) {

    // With web_socket_auto_reply_to_close (compat date >= 2026-04-07), the runtime

    // auto-replies to Close frames. Calling close() is safe but no longer required.

    ws.close(code, reason);

  }

}


```

For more information, refer to [Durable Objects WebSocket best practices](https://developers.cloudflare.com/durable-objects/best-practices/websockets/).
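
The example above assumes a `CHAT_ROOM` binding backed by the `ChatRoom` class. A minimal sketch of the corresponding Wrangler configuration (names are illustrative) looks like:

```jsonc
{
  "name": "chat-worker",
  "main": "src/index.js",
  "compatibility_date": "2026-05-08",

  "durable_objects": {
    "bindings": [
      // The binding name must match env.CHAT_ROOM in the Worker
      { "name": "CHAT_ROOM", "class_name": "ChatRoom" }
    ]
  },

  // New Durable Object classes must be declared in a migration
  "migrations": [
    { "tag": "v1", "new_sqlite_classes": ["ChatRoom"] }
  ]
}
```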

### Use Workers Static Assets for new projects

[Workers Static Assets](https://developers.cloudflare.com/workers/static-assets/) is the recommended way to deploy static sites, single-page applications, and full-stack apps on Cloudflare. If you are starting a new project, use Workers instead of Pages. Pages continues to work, but new features and optimizations are focused on Workers.

For a purely static site, point `assets.directory` at your build output. No Worker script is needed. For a full-stack app, add a `main` entry point and an `ASSETS` binding to serve static files alongside your API.

* [  wrangler.jsonc ](#tab-panel-9011)
* [  wrangler.toml ](#tab-panel-9012)

JSONC

```

{

  // Static site — no Worker script needed

  "name": "my-static-site",

  // Set this to today's date

  "compatibility_date": "2026-05-08",

  "compatibility_flags": ["nodejs_compat"],


  "assets": {

    "directory": "./dist",

  },

}


```

TOML

```

name = "my-static-site"

# Set this to today's date

compatibility_date = "2026-05-08"

compatibility_flags = [ "nodejs_compat" ]


[assets]

directory = "./dist"


```

For more information, refer to [Workers Static Assets](https://developers.cloudflare.com/workers/static-assets/).
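
For the full-stack case described above, a sketch of the configuration (name and paths are illustrative) adds a `main` entry point and an `ASSETS` binding alongside the assets directory:

```jsonc
{
  "name": "my-fullstack-app",
  "main": "src/index.ts",
  "compatibility_date": "2026-05-08",
  "compatibility_flags": ["nodejs_compat"],

  "assets": {
    "directory": "./dist",
    // Exposes the static assets to the Worker as env.ASSETS
    "binding": "ASSETS"
  }
}
```

In the Worker, routes that do not match your API can be forwarded to static assets with `env.ASSETS.fetch(request)`.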

## Observability

### Enable Workers Logs and Traces

Production Workers without observability are a black box. Enable logs and traces before you deploy to production. When an intermittent error appears, you need data already being collected to diagnose it.

Enable them in your Wrangler configuration and use `head_sampling_rate` to control volume and manage costs. A sampling rate of `1` captures everything; lower it for high-traffic Workers.

Use structured JSON logging with `console.log` so logs are searchable and filterable. Use `console.error` for errors and `console.warn` for warnings. These appear at the correct severity level in the Workers Observability dashboard.

* [  wrangler.jsonc ](#tab-panel-9019)
* [  wrangler.toml ](#tab-panel-9020)

JSONC

```

{

  "name": "my-worker",

  "main": "src/index.ts",

  // Set this to today's date

  "compatibility_date": "2026-05-08",

  "compatibility_flags": ["nodejs_compat"],


  "observability": {

    "enabled": true,

    "logs": {

      // Capture 100% of logs — lower this for high-traffic Workers

      "head_sampling_rate": 1,

    },

    "traces": {

      "enabled": true,

      "head_sampling_rate": 0.01, // Sample 1% of traces

    },

  },

}


```

TOML

```

name = "my-worker"

main = "src/index.ts"

# Set this to today's date

compatibility_date = "2026-05-08"

compatibility_flags = [ "nodejs_compat" ]


[observability]

enabled = true


  [observability.logs]

  head_sampling_rate = 1


  [observability.traces]

  enabled = true

  head_sampling_rate = 0.01


```

* [  JavaScript ](#tab-panel-9045)
* [  TypeScript ](#tab-panel-9046)

src/index.js

```

export default {

  async fetch(request, env) {

    const url = new URL(request.url);


    try {

      // ✅ Good: structured JSON — searchable and filterable in the dashboard

      console.log(

        JSON.stringify({

          message: "incoming request",

          method: request.method,

          path: url.pathname,

        }),

      );


      const result = await env.MY_KV.get(url.pathname);

      return new Response(result ?? "Not found", {

        status: result ? 200 : 404,

      });

    } catch (e) {

      // ✅ Good: console.error appears as "error" severity in Workers Observability

      console.error(

        JSON.stringify({

          message: "request failed",

          error: e instanceof Error ? e.message : String(e),

          path: url.pathname,

        }),

      );

      return Response.json({ error: "Internal server error" }, { status: 500 });

    }

  },

};


// 🔴 Bad: unstructured string logs are hard to query

const badHandler = {

  async fetch(request, env) {

    const url = new URL(request.url);

    console.log("Got a request to " + url.pathname);

    return new Response("OK");

  },

};


```

src/index.ts

```

export default {

  async fetch(request: Request, env: Env): Promise<Response> {

    const url = new URL(request.url);


    try {

      // ✅ Good: structured JSON — searchable and filterable in the dashboard

      console.log(

        JSON.stringify({

          message: "incoming request",

          method: request.method,

          path: url.pathname,

        }),

      );


      const result = await env.MY_KV.get(url.pathname);

      return new Response(result ?? "Not found", {

        status: result ? 200 : 404,

      });

    } catch (e) {

      // ✅ Good: console.error appears as "error" severity in Workers Observability

      console.error(

        JSON.stringify({

          message: "request failed",

          error: e instanceof Error ? e.message : String(e),

          path: url.pathname,

        }),

      );

      return Response.json({ error: "Internal server error" }, { status: 500 });

    }

  },

} satisfies ExportedHandler<Env>;


// 🔴 Bad: unstructured string logs are hard to query

const badHandler = {

  async fetch(request: Request, env: Env): Promise<Response> {

    const url = new URL(request.url);

    console.log("Got a request to " + url.pathname);

    return new Response("OK");

  },

} satisfies ExportedHandler<Env>;


```

For more information, refer to [Workers Logs](https://developers.cloudflare.com/workers/observability/logs/workers-logs/) and [Traces](https://developers.cloudflare.com/workers/observability/traces/).

For more information on all available observability tools, refer to [Workers Observability](https://developers.cloudflare.com/workers/observability/).

## Code patterns

### Do not store request-scoped state in global scope

Workers reuse isolates across requests. A variable set during one request is still present during the next. This causes cross-request data leaks, stale state, and "Cannot perform I/O on behalf of a different request" errors.

Pass state through function arguments or store it on `env` bindings, never in module-level variables.

* [  JavaScript ](#tab-panel-9041)
* [  TypeScript ](#tab-panel-9042)

src/index.js

```

// 🔴 Bad: global mutable state leaks between requests

let currentUser = null;


const badHandler = {

  async fetch(request, env, ctx) {

    // Storing request-scoped data globally means the next request sees stale data

    currentUser = request.headers.get("X-User-Id");

    const result = await handleRequest(currentUser, env);

    return Response.json(result);

  },

};


// ✅ Good: pass request-scoped data through function arguments

export default {

  async fetch(request, env, ctx) {

    const userId = request.headers.get("X-User-Id");

    const result = await handleRequest(userId, env);


    return Response.json(result);

  },

};


async function handleRequest(userId, env) {

  return { userId };

}


```

src/index.ts

```

// 🔴 Bad: global mutable state leaks between requests

let currentUser: string | null = null;


const badHandler = {

  async fetch(

    request: Request,

    env: Env,

    ctx: ExecutionContext,

  ): Promise<Response> {

    // Storing request-scoped data globally means the next request sees stale data

    currentUser = request.headers.get("X-User-Id");

    const result = await handleRequest(currentUser, env);

    return Response.json(result);

  },

} satisfies ExportedHandler<Env>;


// ✅ Good: pass request-scoped data through function arguments

export default {

  async fetch(

    request: Request,

    env: Env,

    ctx: ExecutionContext,

  ): Promise<Response> {

    const userId = request.headers.get("X-User-Id");

    const result = await handleRequest(userId, env);


    return Response.json(result);

  },

} satisfies ExportedHandler<Env>;


async function handleRequest(userId: string | null, env: Env): Promise<object> {

  return { userId };

}


```

For more information, refer to [Workers errors](https://developers.cloudflare.com/workers/observability/errors/#cannot-perform-io-on-behalf-of-a-different-request).
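
When threading arguments through deep call stacks is impractical, `AsyncLocalStorage` from `node:async_hooks` (available with `nodejs_compat`) carries request-scoped state without module-level globals. A sketch with illustrative names:

```javascript
import { AsyncLocalStorage } from "node:async_hooks";

// One store per isolate is safe: each request enters its own context via run()
const requestContext = new AsyncLocalStorage();

async function deeplyNestedHelper() {
  // Reads the state for the *current* request, not whichever request ran last
  return requestContext.getStore().userId;
}

async function handleRequest(userId) {
  return requestContext.run({ userId }, () => deeplyNestedHelper());
}
```

Inside a `fetch` handler you would call `handleRequest(request.headers.get("X-User-Id"))`; concurrent requests each see their own store.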

### Always await or waitUntil your Promises

A `Promise` that is not `await`ed, `return`ed, or passed to `ctx.waitUntil()` is a floating promise. Floating promises cause silent bugs: dropped results, swallowed errors, and unfinished work. The Workers runtime may terminate your isolate before a floating promise completes.

Enable the `no-floating-promises` lint rule to catch these at development time. If you use ESLint, enable [@typescript-eslint/no-floating-promises ↗](https://typescript-eslint.io/rules/no-floating-promises/). If you use oxlint, enable [typescript/no-floating-promises ↗](https://oxc.rs/docs/guide/usage/linter/rules/typescript/no-floating-promises.html).

Terminal window

```

# ESLint (typescript-eslint)

npx eslint --rule '{"@typescript-eslint/no-floating-promises": "error"}' src/


# oxlint

npx oxlint --deny typescript/no-floating-promises src/


```

* [  JavaScript ](#tab-panel-9043)
* [  TypeScript ](#tab-panel-9044)

src/index.js

```

export default {

  async fetch(request, env, ctx) {

    const data = await request.json();


    // 🔴 Bad: floating promise — result is dropped, errors are swallowed

    fetch("https://api.example.com/webhook", {

      method: "POST",

      body: JSON.stringify(data),

    });


    // ✅ Good: await if you need the result before responding

    const response = await fetch("https://api.example.com/process", {

      method: "POST",

      body: JSON.stringify(data),

    });


    // ✅ Good: waitUntil if you do not need the result before responding

    ctx.waitUntil(

      fetch("https://api.example.com/webhook", {

        method: "POST",

        body: JSON.stringify(data),

      }),

    );


    return new Response("OK");

  },

};


```

src/index.ts

```

export default {

  async fetch(

    request: Request,

    env: Env,

    ctx: ExecutionContext,

  ): Promise<Response> {

    const data = await request.json();


    // 🔴 Bad: floating promise — result is dropped, errors are swallowed

    fetch("https://api.example.com/webhook", {

      method: "POST",

      body: JSON.stringify(data),

    });


    // ✅ Good: await if you need the result before responding

    const response = await fetch("https://api.example.com/process", {

      method: "POST",

      body: JSON.stringify(data),

    });


    // ✅ Good: waitUntil if you do not need the result before responding

    ctx.waitUntil(

      fetch("https://api.example.com/webhook", {

        method: "POST",

        body: JSON.stringify(data),

      }),

    );


    return new Response("OK");

  },

} satisfies ExportedHandler<Env>;


```
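
A rejection inside a `waitUntil` task is otherwise dropped along with the promise. One pattern (the `logged` helper below is a hypothetical name, not a platform API) is to attach a catch that records the failure before handing the promise over:

```javascript
// Hypothetical helper: background work passed to ctx.waitUntil() should never
// fail silently, so log rejections instead of letting them vanish
function logged(promise, label) {
  return promise.catch((e) => {
    console.error(
      JSON.stringify({
        message: "background task failed",
        task: label,
        error: e instanceof Error ? e.message : String(e),
      }),
    );
  });
}

// Inside a handler: ctx.waitUntil(logged(fetch("https://api.example.com/webhook"), "webhook"));
```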

## Security

### Use Web Crypto for secure token generation

The Workers runtime provides the [Web Crypto API](https://developers.cloudflare.com/workers/runtime-apis/web-crypto/) for cryptographic operations. Use `crypto.randomUUID()` for unique identifiers and `crypto.getRandomValues()` for random bytes. Never use `Math.random()` for anything security-sensitive. It is not cryptographically secure.

Node.js [node:crypto](https://developers.cloudflare.com/workers/runtime-apis/nodejs/crypto/) is also fully supported when `nodejs_compat` is enabled, so you can use whichever API you or your libraries prefer.
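
As a sketch of the `node:crypto` equivalents of the Web Crypto calls shown in this section — the two APIs are interchangeable once `nodejs_compat` is enabled:

```javascript
import { randomBytes, randomUUID } from "node:crypto";

// Cryptographically secure UUID, same guarantees as crypto.randomUUID()
const sessionId = randomUUID();

// 32 cryptographically secure random bytes, hex-encoded in one call
// instead of a manual map/join over a Uint8Array
const token = randomBytes(32).toString("hex");
```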

* [  JavaScript ](#tab-panel-9029)
* [  TypeScript ](#tab-panel-9030)

src/index.js

```

export default {

  async fetch(request, env) {

    // 🔴 Bad: Math.random() is predictable and not suitable for security

    const badToken = Math.random().toString(36).substring(2);


    // ✅ Good: cryptographically secure random UUID

    const sessionId = crypto.randomUUID();


    // ✅ Good: cryptographically secure random bytes for tokens

    const tokenBytes = new Uint8Array(32);

    crypto.getRandomValues(tokenBytes);

    const token = Array.from(tokenBytes)

      .map((b) => b.toString(16).padStart(2, "0"))

      .join("");


    return Response.json({ sessionId, token });

  },

};


```

src/index.ts

```

export default {

  async fetch(request: Request, env: Env): Promise<Response> {

    // 🔴 Bad: Math.random() is predictable and not suitable for security

    const badToken = Math.random().toString(36).substring(2);


    // ✅ Good: cryptographically secure random UUID

    const sessionId = crypto.randomUUID();


    // ✅ Good: cryptographically secure random bytes for tokens

    const tokenBytes = new Uint8Array(32);

    crypto.getRandomValues(tokenBytes);

    const token = Array.from(tokenBytes)

      .map((b) => b.toString(16).padStart(2, "0"))

      .join("");


    return Response.json({ sessionId, token });

  },

} satisfies ExportedHandler<Env>;


```

When comparing secret values (API keys, tokens, HMAC signatures), use `crypto.subtle.timingSafeEqual()` to prevent timing side-channel attacks. Because it requires equal-length inputs, hash both values to a fixed size first rather than short-circuiting on a length mismatch, which would leak the expected value's length.

* [  JavaScript ](#tab-panel-9035)
* [  TypeScript ](#tab-panel-9036)

src/verify.js

```

async function verifyToken(provided, expected) {

  const encoder = new TextEncoder();


  // ✅ Good: hash both values to a fixed size, then compare in constant time

  // This avoids leaking the length of the expected value

  const [providedHash, expectedHash] = await Promise.all([

    crypto.subtle.digest("SHA-256", encoder.encode(provided)),

    crypto.subtle.digest("SHA-256", encoder.encode(expected)),

  ]);


  return crypto.subtle.timingSafeEqual(providedHash, expectedHash);

}


// 🔴 Bad: direct string comparison leaks timing information

function verifyTokenInsecure(provided, expected) {

  return provided === expected;

}


```

src/verify.ts

```

async function verifyToken(

  provided: string,

  expected: string,

): Promise<boolean> {

  const encoder = new TextEncoder();


  // ✅ Good: hash both values to a fixed size, then compare in constant time

  // This avoids leaking the length of the expected value

  const [providedHash, expectedHash] = await Promise.all([

    crypto.subtle.digest("SHA-256", encoder.encode(provided)),

    crypto.subtle.digest("SHA-256", encoder.encode(expected)),

  ]);


  return crypto.subtle.timingSafeEqual(providedHash, expectedHash);

}


// 🔴 Bad: direct string comparison leaks timing information

function verifyTokenInsecure(provided: string, expected: string): boolean {

  return provided === expected;

}


```

### Do not use passThroughOnException as error handling

`passThroughOnException()` is a fail-open mechanism that sends requests to your origin when your Worker throws an unhandled exception. While it can be useful during migration from an origin server, it hides bugs and makes debugging difficult. Use explicit `try...catch` blocks with structured error responses instead.

* [  JavaScript ](#tab-panel-9049)
* [  TypeScript ](#tab-panel-9050)

src/index.js

```

// 🔴 Bad: hides errors by falling through to origin

const badHandler = {

  async fetch(request, env, ctx) {

    ctx.passThroughOnException();

    const result = await handleRequest(request, env);

    return Response.json(result);

  },

};


// ✅ Good: explicit error handling with structured responses

export default {

  async fetch(request, env, ctx) {

    try {

      const result = await handleRequest(request, env);

      return Response.json(result);

    } catch (error) {

      const message = error instanceof Error ? error.message : "Unknown error";


      console.error(

        JSON.stringify({

          message: "unhandled error",

          error: message,

          path: new URL(request.url).pathname,

        }),

      );


      return Response.json({ error: "Internal server error" }, { status: 500 });

    }

  },

};


async function handleRequest(request, env) {

  return { status: "ok" };

}


```

src/index.ts

```

// 🔴 Bad: hides errors by falling through to origin

const badHandler = {

  async fetch(

    request: Request,

    env: Env,

    ctx: ExecutionContext,

  ): Promise<Response> {

    ctx.passThroughOnException();

    const result = await handleRequest(request, env);

    return Response.json(result);

  },

} satisfies ExportedHandler<Env>;


// ✅ Good: explicit error handling with structured responses

export default {

  async fetch(

    request: Request,

    env: Env,

    ctx: ExecutionContext,

  ): Promise<Response> {

    try {

      const result = await handleRequest(request, env);

      return Response.json(result);

    } catch (error) {

      const message = error instanceof Error ? error.message : "Unknown error";


      console.error(

        JSON.stringify({

          message: "unhandled error",

          error: message,

          path: new URL(request.url).pathname,

        }),

      );


      return Response.json({ error: "Internal server error" }, { status: 500 });

    }

  },

} satisfies ExportedHandler<Env>;


async function handleRequest(request: Request, env: Env): Promise<object> {

  return { status: "ok" };

}


```

## Development and testing

### Test with @cloudflare/vitest-pool-workers

The [@cloudflare/vitest-pool-workers](https://developers.cloudflare.com/workers/testing/vitest-integration/) package runs your tests inside the Workers runtime, giving you access to real bindings (KV, R2, D1, Durable Objects) during tests. This catches issues that Node.js-based tests miss, like unsupported APIs or missing compatibility flags.

One known pitfall: the Vitest pool automatically injects `nodejs_compat`, so tests pass even if your Wrangler configuration does not have the flag. Always confirm your `wrangler.jsonc` includes `nodejs_compat` if your code depends on Node.js built-in modules.
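
A minimal configuration sketch pointing the pool at your Wrangler file (the `configPath` value is an assumption about your project layout):

```typescript
// vitest.config.ts
import { defineWorkersConfig } from "@cloudflare/vitest-pool-workers/config";

export default defineWorkersConfig({
  test: {
    poolOptions: {
      workers: {
        // Bindings (KV, R2, D1, Durable Objects) are read from this file
        wrangler: { configPath: "./wrangler.jsonc" },
      },
    },
  },
});
```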

* [  JavaScript ](#tab-panel-9037)
* [  TypeScript ](#tab-panel-9038)

test/index.test.js

```

import { describe, it, expect } from "vitest";

import { env } from "cloudflare:workers";


describe("KV operations", () => {

  it("should store and retrieve a value", async () => {

    await env.MY_KV.put("key", "value");

    const result = await env.MY_KV.get("key");

    expect(result).toBe("value");

  });


  it("should return null for missing keys", async () => {

    const result = await env.MY_KV.get("nonexistent");

    // ✅ Good: test the null case explicitly

    expect(result).toBeNull();

  });

});


```

test/index.test.ts

```

import { describe, it, expect } from "vitest";

import { env } from "cloudflare:workers";


describe("KV operations", () => {

  it("should store and retrieve a value", async () => {

    await env.MY_KV.put("key", "value");

    const result = await env.MY_KV.get("key");

    expect(result).toBe("value");

  });


  it("should return null for missing keys", async () => {

    const result = await env.MY_KV.get("nonexistent");

    // ✅ Good: test the null case explicitly

    expect(result).toBeNull();

  });

});


```

For more information, refer to [Testing with Vitest](https://developers.cloudflare.com/workers/testing/vitest-integration/).

## Related resources

* [Rules of Durable Objects](https://developers.cloudflare.com/durable-objects/best-practices/rules-of-durable-objects/): best practices for stateful, coordinated applications.
* [Rules of Workflows](https://developers.cloudflare.com/workflows/build/rules-of-workflows/): best practices for durable, multi-step Workflows.
* [Platform limits](https://developers.cloudflare.com/workers/platform/limits/): CPU time, memory, subrequest, and other limits.
* [Workers errors](https://developers.cloudflare.com/workers/observability/errors/): error codes and debugging guidance.

