---
title: Think
description: Opinionated chat agent framework with built-in tools, persistent memory, lifecycle hooks, streaming, and sub-agent RPC.
image: https://developers.cloudflare.com/dev-products-preview.png
---



# Think

`@cloudflare/think` is an opinionated chat agent base class for Cloudflare Workers. It handles the full chat lifecycle — agentic loop, message persistence, streaming, tool execution, client tools, stream resumption, and extensions — all backed by Durable Object SQLite.

Think works as both a **top-level agent** (WebSocket chat to browser clients via `useAgentChat`) and a **sub-agent** (RPC streaming from a parent agent via `chat()`).

## Quick start

### Install

Terminal window

```sh
npm install @cloudflare/think @cloudflare/ai-chat agents ai @cloudflare/shell zod workers-ai-provider
```

### Server

JavaScript

```js
import { Think } from "@cloudflare/think";
import { createWorkersAI } from "workers-ai-provider";
import { routeAgentRequest } from "agents";

export class MyAgent extends Think {
  getModel() {
    return createWorkersAI({ binding: this.env.AI })(
      "@cf/moonshotai/kimi-k2.6",
    );
  }
}

export default {
  async fetch(request, env) {
    return (
      (await routeAgentRequest(request, env)) ||
      new Response("Not found", { status: 404 })
    );
  },
};
```

TypeScript

```ts
import { Think } from "@cloudflare/think";
import { createWorkersAI } from "workers-ai-provider";
import { routeAgentRequest } from "agents";

export class MyAgent extends Think<Env> {
  getModel() {
    return createWorkersAI({ binding: this.env.AI })(
      "@cf/moonshotai/kimi-k2.6",
    );
  }
}

export default {
  async fetch(request: Request, env: Env) {
    return (
      (await routeAgentRequest(request, env)) ||
      new Response("Not found", { status: 404 })
    );
  },
} satisfies ExportedHandler<Env>;
```

That is it. Think handles the WebSocket chat protocol, message persistence, the agentic loop, message sanitization, stream resumption, client tool support, and workspace file tools.

### Client

JavaScript

```jsx
import { useAgent } from "agents/react";
import { useAgentChat } from "@cloudflare/ai-chat/react";

function Chat() {
  const agent = useAgent({ agent: "MyAgent" });
  const { messages, sendMessage, status } = useAgentChat({ agent });

  return (
    <div>
      {messages.map((msg) => (
        <div key={msg.id}>
          <strong>{msg.role}:</strong>
          {msg.parts.map((part, i) =>
            part.type === "text" ? <span key={i}>{part.text}</span> : null,
          )}
        </div>
      ))}

      <form
        onSubmit={(e) => {
          e.preventDefault();
          const input = e.currentTarget.elements.namedItem("input");
          sendMessage({ text: input.value });
          input.value = "";
        }}
      >
        <input name="input" placeholder="Send a message..." />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}
```

TypeScript

```tsx
import { useAgent } from "agents/react";
import { useAgentChat } from "@cloudflare/ai-chat/react";

function Chat() {
  const agent = useAgent({ agent: "MyAgent" });
  const { messages, sendMessage, status } = useAgentChat({ agent });

  return (
    <div>
      {messages.map((msg) => (
        <div key={msg.id}>
          <strong>{msg.role}:</strong>
          {msg.parts.map((part, i) =>
            part.type === "text" ? <span key={i}>{part.text}</span> : null,
          )}
        </div>
      ))}

      <form
        onSubmit={(e) => {
          e.preventDefault();
          const input = e.currentTarget.elements.namedItem(
            "input",
          ) as HTMLInputElement;
          sendMessage({ text: input.value });
          input.value = "";
        }}
      >
        <input name="input" placeholder="Send a message..." />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}
```

### Configuration

JSONC

```jsonc
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  // Set this to today's date
  "compatibility_date": "2026-05-08",
  "compatibility_flags": ["nodejs_compat"],
  "ai": {
    "binding": "AI"
  },
  "durable_objects": {
    "bindings": [
      {
        "class_name": "MyAgent",
        "name": "MyAgent"
      }
    ]
  },
  "migrations": [
    {
      "new_sqlite_classes": ["MyAgent"],
      "tag": "v1"
    }
  ]
}
```

TOML

```toml
# Set this to today's date
compatibility_date = "2026-05-08"
compatibility_flags = ["nodejs_compat"]

[ai]
binding = "AI"

[[durable_objects.bindings]]
class_name = "MyAgent"
name = "MyAgent"

[[migrations]]
new_sqlite_classes = ["MyAgent"]
tag = "v1"
```

## Think vs AIChatAgent

Both Think and [AIChatAgent](https://developers.cloudflare.com/agents/api-reference/chat-agents/) extend `Agent` and speak the same `cf_agent_chat_*` WebSocket protocol. They serve different goals.

**AIChatAgent** is a protocol adapter. You override `onChatMessage` and are responsible for calling `streamText`, wiring tools, converting messages, and returning a `Response`. AIChatAgent handles the plumbing — message persistence, streaming, abort, resume — but the LLM call is entirely your concern.

**Think** is an opinionated framework. It makes decisions for you: `getModel()` returns the model, `getSystemPrompt()` or `configureSession()` sets the prompt, `getTools()` returns tools. The default `onChatMessage` runs the complete agentic loop. You override individual pieces, not the whole pipeline.

| Concern                | AIChatAgent                                                    | Think                                                               |
| ---------------------- | -------------------------------------------------------------- | ------------------------------------------------------------------- |
| **Minimal subclass**   | ~15 lines (wire streamText + tools + system prompt + response) | 3 lines (getModel() only)                                           |
| **Storage**            | Flat SQL table                                                 | Session: tree-structured messages, context blocks, compaction, FTS5 |
| **Regeneration**       | Destructive (old response deleted)                             | Non-destructive branching (old responses preserved)                 |
| **Context management** | Manual                                                         | Context blocks with LLM-writable persistent memory                  |
| **Sub-agent RPC**      | Not built in                                                   | chat() with StreamCallback                                          |
| **Programmatic turns** | saveMessages()                                                 | saveMessages() + continueLastTurn()                                 |
| **Compaction**         | maxPersistedMessages (deletes oldest)                          | Non-destructive summaries via overlays                              |
| **Search**             | Not available                                                  | FTS5 full-text search per-session and cross-session                 |

### When to use AIChatAgent

* You need full control over the LLM call (RAG, multi-model, custom streaming)
* You want the `Response` return type for HTTP middleware or testing
* You are building a simple chatbot with no memory requirements

### When to use Think

* You want to ship fast (3-line subclass with everything wired)
* You need persistent memory (context blocks the model can read and write)
* You need long conversations (non-destructive compaction)
* You need conversation search (FTS5)
* You are building a sub-agent system (parent-child RPC with streaming)
* You need proactive agents (programmatic turns from scheduled tasks or webhooks)

## Configuration overrides

| Method / Property     | Default                        | Description                                                                                                                            |
| --------------------- | ------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------- |
| getModel()            | throws                         | Return the LanguageModel to use                                                                                                        |
| getSystemPrompt()     | "You are a helpful assistant." | System prompt (fallback when no context blocks)                                                                                        |
| getTools()            | {}                             | AI SDK ToolSet for the agentic loop                                                                                                    |
| maxSteps              | 10                             | Max tool-call rounds per turn                                                                                                          |
| sendReasoning         | true                           | Send reasoning chunks to chat clients                                                                                                  |
| configureSession()    | identity                       | Add context blocks, compaction, search, skills — refer to [Sessions](https://developers.cloudflare.com/agents/api-reference/sessions/) |
| messageConcurrency    | "queue"                        | How overlapping submits behave — refer to [Message concurrency](#message-concurrency)                                                  |
| waitForMcpConnections | false                          | Wait for MCP servers before inference                                                                                                  |
| chatRecovery          | true                           | Wrap WebSocket, programmatic, and continuation turns in runFiber for durable execution                                                 |
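
These overrides are ordinary class members on the subclass. A minimal sketch using only names from the table; the step cap, prompt text, and model choice are illustrative, and `getSystemPrompt()` is assumed to return a plain string:

```typescript
import { Think } from "@cloudflare/think";
import { createWorkersAI } from "workers-ai-provider";

export class MyAgent extends Think<Env> {
  maxSteps = 5;          // illustrative: cap tool-call rounds per turn
  sendReasoning = false; // keep reasoning chunks off the wire

  getModel() {
    return createWorkersAI({ binding: this.env.AI })("@cf/moonshotai/kimi-k2.6");
  }

  getSystemPrompt() {
    // Fallback prompt, used when configureSession() adds no context blocks
    return "You are a concise, friendly assistant.";
  }
}
```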

## Dynamic configuration

Think's class generics match `Agent<Env, State, Props>`. Persisted runtime configuration is typed at the `configure<T>()` and `getConfig<T>()` call sites, stored in SQLite, and survives hibernation and restarts.

JavaScript

```js
export class MyAgent extends Think {
  getModel() {
    const tier = this.getConfig()?.modelTier ?? "fast";
    const models = {
      fast: "@cf/moonshotai/kimi-k2.6",
      capable: "@cf/meta/llama-4-scout-17b-16e-instruct",
    };
    return createWorkersAI({ binding: this.env.AI })(models[tier]);
  }
}
```

TypeScript

```ts
type MyConfig = { modelTier: "fast" | "capable"; theme: string };

export class MyAgent extends Think<Env> {
  getModel() {
    const tier = this.getConfig<MyConfig>()?.modelTier ?? "fast";
    const models = {
      fast: "@cf/moonshotai/kimi-k2.6",
      capable: "@cf/meta/llama-4-scout-17b-16e-instruct",
    };
    return createWorkersAI({ binding: this.env.AI })(models[tier]);
  }
}
```

| Method                    | Description                                                   |
| ------------------------- | ------------------------------------------------------------- |
| configure<T>(config: T)   | Persist a typed configuration object                          |
| getConfig<T>(): T \| null | Read the persisted configuration, or null if never configured |

Expose configuration to the client via `@callable`:

JavaScript

```js
import { callable } from "agents";

export class MyAgent extends Think {
  getModel() {
    /* ... */
  }

  @callable()
  updateConfig(config) {
    this.configure(config);
  }
}
```

TypeScript

```ts
import { callable } from "agents";

export class MyAgent extends Think<Env> {
  getModel() {
    /* ... */
  }

  @callable()
  updateConfig(config: MyConfig) {
    this.configure<MyConfig>(config);
  }
}
```

## Session integration

Think uses [Session](https://developers.cloudflare.com/agents/api-reference/sessions/) for conversation storage. Override `configureSession` to add persistent memory, compaction, search, and skills:

JavaScript

```js
import { Think } from "@cloudflare/think";

export class MyAgent extends Think {
  getModel() {
    /* ... */
  }

  configureSession(session) {
    return session
      .withContext("soul", {
        provider: { get: async () => "You are a helpful coding assistant." },
      })
      .withContext("memory", {
        description: "Important facts learned during conversation.",
        maxTokens: 2000,
      })
      .withCachedPrompt();
  }
}
```

TypeScript

```ts
import { Think, Session } from "@cloudflare/think";

export class MyAgent extends Think<Env> {
  getModel() {
    /* ... */
  }

  configureSession(session: Session) {
    return session
      .withContext("soul", {
        provider: { get: async () => "You are a helpful coding assistant." },
      })
      .withContext("memory", {
        description: "Important facts learned during conversation.",
        maxTokens: 2000,
      })
      .withCachedPrompt();
  }
}
```

When `configureSession` adds context blocks, Think builds the system prompt from those blocks instead of using `getSystemPrompt()`. Think's `this.messages` getter reads directly from Session's tree-structured storage.

For the full Session API — context blocks, compaction, search, skills, and multi-session support — refer to the [Sessions documentation](https://developers.cloudflare.com/agents/api-reference/sessions/).

## Tools

Think provides built-in workspace file tools on every turn, plus integration points for custom tools, code execution, and dynamic extensions.

### Tool merge order

On every turn, Think merges tools from multiple sources. Later sources override earlier ones if names collide:

1. **Workspace tools** — `read`, `write`, `edit`, `list`, `find`, `grep`, `delete` (built-in)
2. **`getTools()`** — your custom server-side tools
3. **Extension tools** — tools from loaded extensions (prefixed by extension name)
4. **Session tools** — `set_context`, `load_context`, `search_context` (from `configureSession`)
5. **MCP tools** — from connected MCP servers
6. **Client tools** — from the browser (refer to [Client tools](#client-tools))
7. **Caller tools** — from `chat()` options when used as a sub-agent
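
The precedence rule is plain object-spread semantics: later sources overwrite earlier ones key by key. A minimal sketch, with tool sets reduced to plain objects for illustration:

```typescript
// Sources listed in merge order; later spreads win on name collision.
const workspaceTools = { read: "workspace", write: "workspace", grep: "workspace" };
const customTools = { write: "custom", getWeather: "custom" }; // from getTools()
const clientTools = { getWeather: "client" };                  // from the browser

const merged = { ...workspaceTools, ...customTools, ...clientTools };

console.log(merged.write);      // "custom" — getTools() overrides the workspace tool
console.log(merged.getWeather); // "client" — client tools override server tools
console.log(merged.read);       // "workspace" — no collision, built-in survives
```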

### Built-in workspace tools

Every Think agent gets `this.workspace` — a virtual filesystem backed by Durable Object SQLite. Workspace tools are automatically available to the model with no configuration.

| Tool   | Description                                                                 |
| ------ | --------------------------------------------------------------------------- |
| read   | Read a file's content                                                       |
| write  | Write content to a file (creates parent directories)                        |
| edit   | Apply a find-and-replace edit to an existing file (supports fuzzy matching) |
| list   | List files and directories in a path                                        |
| find   | Find files matching a glob pattern                                          |
| grep   | Search file contents by regex or fixed string                               |
| delete | Delete a file or directory                                                  |

#### R2 spillover

By default, the workspace stores everything in SQLite. For large files, override `workspace` to add R2 spillover:

JavaScript

```js
import { Think } from "@cloudflare/think";
import { Workspace } from "@cloudflare/shell";

export class MyAgent extends Think {
  workspace = new Workspace({
    sql: this.ctx.storage.sql,
    r2: this.env.R2,
    name: () => this.name,
  });

  getModel() {
    /* ... */
  }
}
```

TypeScript

```ts
import { Think } from "@cloudflare/think";
import { Workspace } from "@cloudflare/shell";

export class MyAgent extends Think<Env> {
  override workspace = new Workspace({
    sql: this.ctx.storage.sql,
    r2: this.env.R2,
    name: () => this.name,
  });

  getModel() {
    /* ... */
  }
}
```

This requires an R2 bucket binding:

JSONC

```jsonc
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "r2_buckets": [
    {
      "binding": "R2",
      "bucket_name": "agent-files"
    }
  ]
}
```

TOML

```toml
[[r2_buckets]]
binding = "R2"
bucket_name = "agent-files"
```

### Custom tools

Override `getTools()` to add your own tools. These are standard AI SDK `tool()` definitions with Zod schemas:

JavaScript

```js
import { Think } from "@cloudflare/think";
import { tool } from "ai";
import { z } from "zod";

export class MyAgent extends Think {
  getModel() {
    /* ... */
  }

  getTools() {
    return {
      getWeather: tool({
        description: "Get the current weather for a city",
        inputSchema: z.object({
          city: z.string().describe("City name"),
        }),
        execute: async ({ city }) => {
          const res = await fetch(
            `https://api.weather.com/v1/current?q=${city}&key=${this.env.WEATHER_KEY}`,
          );
          return res.json();
        },
      }),
    };
  }
}
```

TypeScript

```ts
import { Think } from "@cloudflare/think";
import { tool } from "ai";
import type { ToolSet } from "ai";
import { z } from "zod";

export class MyAgent extends Think<Env> {
  getModel() {
    /* ... */
  }

  getTools(): ToolSet {
    return {
      getWeather: tool({
        description: "Get the current weather for a city",
        inputSchema: z.object({
          city: z.string().describe("City name"),
        }),
        execute: async ({ city }) => {
          const res = await fetch(
            `https://api.weather.com/v1/current?q=${city}&key=${this.env.WEATHER_KEY}`,
          );
          return res.json();
        },
      }),
    };
  }
}
```

Custom tools are merged with workspace tools automatically. If a custom tool has the same name as a workspace tool, the custom tool wins.

### Tool approval

Tools can require user approval before execution using the `needsApproval` option:

TypeScript

```ts
getTools(): ToolSet {
  return {
    deleteFile: tool({
      description: "Delete a file from the system",
      inputSchema: z.object({ path: z.string() }),
      needsApproval: async ({ path }) => path.startsWith("/important/"),
      execute: async ({ path }) => {
        await this.workspace.rm(path);
        return { deleted: path };
      },
    }),
  };
}
```

When `needsApproval` returns `true`, the tool call is sent to the client for approval. The conversation pauses until the client responds with `CF_AGENT_TOOL_APPROVAL`.

### Per-turn tool overrides

The `beforeTurn` hook can restrict or add tools for a specific turn:

TypeScript

```ts
beforeTurn(ctx: TurnContext) {
  return {
    activeTools: ["read", "write", "getWeather"],
    tools: { emergencyTool: this.createEmergencyTool() },
  };
}
```

`activeTools` limits which tools the model can call. `tools` adds extra tools for this turn only (merged on top of existing tools).

### MCP tools

Think inherits MCP client support from the `Agent` base class. MCP tools from connected servers are automatically merged into every turn.

Set `waitForMcpConnections` to ensure MCP servers are connected before inference runs:

JavaScript

```js
export class MyAgent extends Think {
  waitForMcpConnections = true; // default 10s timeout
  // or: waitForMcpConnections = { timeout: 5000 };

  getModel() {
    /* ... */
  }
}
```

TypeScript

```ts
export class MyAgent extends Think<Env> {
  waitForMcpConnections = true; // default 10s timeout
  // or: waitForMcpConnections = { timeout: 5000 };

  getModel() {
    /* ... */
  }
}
```

Add MCP servers programmatically or via `@callable` methods:

JavaScript

```js
import { callable } from "agents";

export class MyAgent extends Think {
  getModel() {
    /* ... */
  }

  @callable()
  async addServer(name, url) {
    return await this.addMcpServer(name, url);
  }

  @callable()
  async removeServer(serverId) {
    await this.removeMcpServer(serverId);
  }
}
```

TypeScript

```ts
import { callable } from "agents";

export class MyAgent extends Think<Env> {
  getModel() {
    /* ... */
  }

  @callable()
  async addServer(name: string, url: string) {
    return await this.addMcpServer(name, url);
  }

  @callable()
  async removeServer(serverId: string) {
    await this.removeMcpServer(serverId);
  }
}
```

### Code execution tool

Let the LLM write and run JavaScript in a sandboxed Worker. Requires `@cloudflare/codemode` and a `worker_loaders` binding.

Terminal window

```sh
npm install @cloudflare/codemode
```

JavaScript

```js
import { Think } from "@cloudflare/think";
import { createExecuteTool } from "@cloudflare/think/tools/execute";
import { createWorkspaceTools } from "@cloudflare/think/tools/workspace";

export class MyAgent extends Think {
  getModel() {
    /* ... */
  }

  getTools() {
    return {
      execute: createExecuteTool({
        tools: createWorkspaceTools(this.workspace),
        loader: this.env.LOADER,
      }),
    };
  }
}
```

TypeScript

```ts
import { Think } from "@cloudflare/think";
import { createExecuteTool } from "@cloudflare/think/tools/execute";
import { createWorkspaceTools } from "@cloudflare/think/tools/workspace";

export class MyAgent extends Think<Env> {
  getModel() {
    /* ... */
  }

  getTools() {
    return {
      execute: createExecuteTool({
        tools: createWorkspaceTools(this.workspace),
        loader: this.env.LOADER,
      }),
    };
  }
}
```

JSONC

```jsonc
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "worker_loaders": [
    {
      "binding": "LOADER"
    }
  ]
}
```

TOML

```toml
[[worker_loaders]]
binding = "LOADER"
```

For richer filesystem access, pass a `state` backend:

JavaScript

```js
import { createWorkspaceStateBackend } from "@cloudflare/shell";

createExecuteTool({
  tools: myDomainTools,
  state: createWorkspaceStateBackend(this.workspace),
  loader: this.env.LOADER,
});
```

TypeScript

```ts
import { createWorkspaceStateBackend } from "@cloudflare/shell";

createExecuteTool({
  tools: myDomainTools,
  state: createWorkspaceStateBackend(this.workspace),
  loader: this.env.LOADER,
});
```

### Browser tools

Give your agent access to the Chrome DevTools Protocol (CDP) for web page inspection, scraping, screenshots, and debugging. Requires `@cloudflare/codemode` and a Browser Rendering binding.

JavaScript

```js
import { Think } from "@cloudflare/think";
import { createBrowserTools } from "@cloudflare/think/tools/browser";

export class MyAgent extends Think {
  getModel() {
    /* ... */
  }

  getTools() {
    return {
      ...createBrowserTools({
        browser: this.env.BROWSER,
        loader: this.env.LOADER,
      }),
    };
  }
}
```

TypeScript

```ts
import { Think } from "@cloudflare/think";
import { createBrowserTools } from "@cloudflare/think/tools/browser";

export class MyAgent extends Think<Env> {
  getModel() {
    /* ... */
  }

  getTools() {
    return {
      ...createBrowserTools({
        browser: this.env.BROWSER,
        loader: this.env.LOADER,
      }),
    };
  }
}
```

JSONC

```jsonc
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "browser": {
    "binding": "BROWSER"
  },
  "worker_loaders": [
    {
      "binding": "LOADER"
    }
  ]
}
```

TOML

```toml
[browser]
binding = "BROWSER"

[[worker_loaders]]
binding = "LOADER"
```

This adds two tools:

| Tool            | Description                                                                             |
| --------------- | --------------------------------------------------------------------------------------- |
| browser_search  | Query the CDP protocol spec to discover commands, events, and types                      |
| browser_execute | Run CDP commands against a live browser session (screenshots, DOM reads, JS evaluation) |

For a custom Chrome endpoint, pass `cdpUrl` instead of `browser`:

JavaScript

```js
createBrowserTools({
  cdpUrl: "http://localhost:9222",
  loader: this.env.LOADER,
});
```

TypeScript

```ts
createBrowserTools({
  cdpUrl: "http://localhost:9222",
  loader: this.env.LOADER,
});
```

For the full CDP helper API, refer to [Browse the web](https://developers.cloudflare.com/agents/api-reference/browse-the-web/).

### Extensions

Extensions are dynamically loaded sandboxed Workers that add tools at runtime. The LLM can write extension source code, load it, and use the new tools on the next turn.

Extensions require a `worker_loaders` binding:

JavaScript

```js
import { Think } from "@cloudflare/think";

export class MyAgent extends Think {
  extensionLoader = this.env.LOADER;

  getModel() {
    /* ... */
  }
}
```

TypeScript

```ts
import { Think } from "@cloudflare/think";

export class MyAgent extends Think<Env> {
  extensionLoader = this.env.LOADER;

  getModel() {
    /* ... */
  }
}
```

#### Static extensions

Define extensions that load at startup:

JavaScript

```js
export class MyAgent extends Think {
  extensionLoader = this.env.LOADER;

  getModel() {
    /* ... */
  }

  getExtensions() {
    return [
      {
        manifest: {
          name: "math",
          version: "1.0.0",
          permissions: { network: false },
        },
        source: `({
          tools: {
            add: {
              description: "Add two numbers",
              parameters: { a: { type: "number" }, b: { type: "number" } },
              execute: async ({ a, b }) => ({ result: a + b })
            }
          }
        })`,
      },
    ];
  }
}
```

TypeScript

```ts
export class MyAgent extends Think<Env> {
  extensionLoader = this.env.LOADER;

  getModel() {
    /* ... */
  }

  getExtensions() {
    return [
      {
        manifest: {
          name: "math",
          version: "1.0.0",
          permissions: { network: false },
        },
        source: `({
          tools: {
            add: {
              description: "Add two numbers",
              parameters: { a: { type: "number" }, b: { type: "number" } },
              execute: async ({ a, b }) => ({ result: a + b })
            }
          }
        })`,
      },
    ];
  }
}
```

Extension tools are namespaced — a `math` extension with an `add` tool becomes `math_add` in the model's tool set.

#### LLM-driven extensions

Give the model `createExtensionTools` so it can load extensions dynamically:

JavaScript

```js
import { createExtensionTools } from "@cloudflare/think/tools/extensions";

export class MyAgent extends Think {
  extensionLoader = this.env.LOADER;

  getModel() {
    /* ... */
  }

  getTools() {
    return {
      ...createExtensionTools({ manager: this.extensionManager }),
      ...this.extensionManager.getTools(),
    };
  }
}
```

TypeScript

```ts
import { createExtensionTools } from "@cloudflare/think/tools/extensions";

export class MyAgent extends Think<Env> {
  extensionLoader = this.env.LOADER;

  getModel() {
    /* ... */
  }

  getTools() {
    return {
      ...createExtensionTools({ manager: this.extensionManager! }),
      ...this.extensionManager!.getTools(),
    };
  }
}
```

This gives the model two tools:

* `load_extension` — load a new extension from JavaScript source
* `list_extensions` — list currently loaded extensions

#### Extension context blocks

Extensions can declare context blocks in their manifest. These are automatically registered with the Session:

TypeScript

```ts
getExtensions() {
  return [{
    manifest: {
      name: "notes",
      version: "1.0.0",
      permissions: { network: false },
      context: [
        { label: "scratchpad", description: "Extension scratch space", maxTokens: 500 },
      ],
    },
    source: `({ tools: { /* ... */ } })`,
  }];
}
```

The context block is registered as `notes_scratchpad` (namespaced by extension name).

### Custom workspace backends

The individual tool factories are exported for use with custom storage backends:

JavaScript

```js
import {
  createReadTool,
  createWriteTool,
  createEditTool,
  createListTool,
  createFindTool,
  createGrepTool,
  createDeleteTool,
  createWorkspaceTools,
} from "@cloudflare/think/tools/workspace";
```

TypeScript

```ts
import {
  createReadTool,
  createWriteTool,
  createEditTool,
  createListTool,
  createFindTool,
  createGrepTool,
  createDeleteTool,
  createWorkspaceTools,
} from "@cloudflare/think/tools/workspace";
```

Implement the operations interface for your storage backend:

JavaScript

```js
const myReadOps = {
  readFile: async (path) => fetchFromMyStorage(path),
  stat: async (path) => getFileInfo(path),
};

const readTool = createReadTool({ ops: myReadOps });
```

TypeScript

```ts
import type { ReadOperations } from "@cloudflare/think/tools/workspace";

const myReadOps: ReadOperations = {
  readFile: async (path) => fetchFromMyStorage(path),
  stat: async (path) => getFileInfo(path),
};

const readTool = createReadTool({ ops: myReadOps });
```

## Lifecycle hooks

Think owns the `streamText` call and provides hooks at each stage of the chat turn. Hooks fire on every turn regardless of entry path — WebSocket chat, sub-agent `chat()`, `saveMessages`, and auto-continuation after tool results.

### Hook summary

| Hook                      | When it fires                                 | Return                   | Async |
| ------------------------- | --------------------------------------------- | ------------------------ | ----- |
| configureSession(session) | Once during onStart                           | Session                  | yes   |
| beforeTurn(ctx)           | Before streamText                             | TurnConfig or void       | yes   |
| beforeStep(ctx)           | Before each model step                        | StepConfig or void       | yes   |
| beforeToolCall(ctx)       | Before a server-side tool executes            | ToolCallDecision or void | yes   |
| afterToolCall(ctx)        | After a tool outcome is known                 | void                     | yes   |
| onStepFinish(ctx)         | After each step completes                     | void                     | yes   |
| onChunk(ctx)              | Per streaming chunk                           | void                     | yes   |
| onChatResponse(result)    | After turn completes and message is persisted | void                     | yes   |
| onChatError(error)        | On error during a turn                        | error to propagate       | no    |

### Execution order

For a turn with two tool calls:

```

configureSession()          ← once at startup, not per-turn

      │

beforeTurn()                ← inspect assembled context, override model/tools/prompt

      │

  ┌── streamText ───────────────────────────────────┐

  │   beforeStep()                                  │

  │       │                                         │

  │   onChunk()  onChunk()  onChunk()  ...          │

  │       │                                         │

  │   beforeToolCall()  →  tool executes            │

  │                        afterToolCall()          │

  │       │                                         │

  │   onStepFinish()                                │

  │       │                                         │

  │   beforeStep()                                  │

  │       │                                         │

  │   onChunk()  onChunk()  ...                     │

  │       │                                         │

  │   beforeToolCall()  →  tool executes            │

  │                        afterToolCall()          │

  │       │                                         │

  │   onStepFinish()                                │

  └─────────────────────────────────────────────────┘

      │

onChatResponse()            ← message persisted, turn lock released


```

### beforeTurn

Called before `streamText`. Receives the fully assembled context — system prompt, converted messages, merged tools, and model. Return a `TurnConfig` to override any part, or void to accept defaults.

TypeScript

```

beforeTurn(ctx: TurnContext): TurnConfig | void | Promise<TurnConfig | void>


```

#### TurnContext

| Field        | Type                    | Description                                                                           |
| ------------ | ----------------------- | ------------------------------------------------------------------------------------- |
| system       | string                  | Assembled system prompt (from context blocks or getSystemPrompt())                    |
| messages     | ModelMessage\[\]        | Assembled model messages (truncated, pruned)                                          |
| tools        | ToolSet                 | Merged tool set (workspace + getTools + extensions + session + MCP + client + caller) |
| model        | LanguageModel           | The model from getModel()                                                             |
| continuation | boolean                 | Whether this is a continuation turn (auto-continue after tool result)                 |
| body         | Record<string, unknown> | Custom body fields from the client request                                            |

#### TurnConfig

All fields are optional. Return only what you want to change.

| Field                   | Type                    | Description                             |
| ----------------------- | ----------------------- | --------------------------------------- |
| model                   | LanguageModel           | Override the model for this turn        |
| system                  | string                  | Override the system prompt              |
| messages                | ModelMessage\[\]        | Override the assembled messages         |
| tools                   | ToolSet                 | Extra tools to merge (additive)         |
| activeTools             | string\[\]              | Limit which tools the model can call    |
| toolChoice              | ToolChoice              | Force a specific tool call              |
| maxSteps                | number                  | Override maxSteps for this turn         |
| sendReasoning           | boolean                 | Send reasoning chunks for this turn     |
| output                  | Output                  | Request structured output for this turn |
| providerOptions         | Record<string, unknown> | Provider-specific options               |
| experimental\_telemetry | object                  | AI SDK telemetry settings for this turn |

#### Examples

Switch to a cheaper model for continuation turns:

TypeScript

```

beforeTurn(ctx: TurnContext) {

  if (ctx.continuation) {

    return { model: this.cheapModel };

  }

}


```

Restrict which tools the model can call:

TypeScript

```

beforeTurn(ctx: TurnContext) {

  return { activeTools: ["read", "write", "getWeather"] };

}


```

Add per-turn context from the client body:

TypeScript

```

beforeTurn(ctx: TurnContext) {

  if (ctx.body?.selectedFile) {

    return {

      system: ctx.system + `\n\nUser is editing: ${ctx.body.selectedFile}`,

    };

  }

}


```

Hide reasoning for internal continuation turns:

TypeScript

```

beforeTurn(ctx: TurnContext) {

  if (ctx.continuation) {

    return { sendReasoning: false };

  }

}


```

Force structured output for a turn:

TypeScript

```

import { Output } from "ai";

import { z } from "zod";


const ResultSchema = z.object({ severity: z.enum(["low", "high"]) });


beforeTurn(ctx: TurnContext) {

  if (ctx.body?.mode === "structured-answer") {

    return {

      output: Output.object({ schema: ResultSchema }),

      activeTools: [],

    };

  }

}


```

`output` is a turn-level setting only. The AI SDK's `prepareStep` does not accept an `output` override, so `beforeStep` cannot toggle structured output on a single step.

### beforeStep

Called before each AI SDK step in the agentic loop. Think forwards this hook to `streamText` as `prepareStep`, so it receives the AI SDK's full prepare-step context and can return per-step overrides. Use `beforeTurn` for turn-wide assembly and `beforeStep` when the decision depends on the step number or previous step results.

TypeScript

```

beforeStep(ctx: PrepareStepContext): StepConfig | void {

  if (ctx.stepNumber > 0) {

    return { activeTools: [] };

  }

}


```

### beforeToolCall

Called before a server-side tool's `execute` function runs. Think wraps each server-side tool so the hook can allow, modify, block, or substitute the call before the model receives the tool result.

TypeScript

```

beforeToolCall(ctx: ToolCallContext): ToolCallDecision | void {

  if (ctx.toolName === "delete" && this.isReadOnlyMode) {

    return { action: "block", reason: "delete is disabled in read-only mode" };

  }


  if (ctx.toolName === "weather") {

    const cached = this.weatherCache.get(JSON.stringify(ctx.input));

    if (cached) return { action: "substitute", output: cached };

  }

}


```

| Field       | Type                     | Description                                |
| ----------- | ------------------------ | ------------------------------------------ |
| toolName    | string                   | Name of the tool being called              |
| input       | unknown                  | Input the model provided                   |
| toolCallId  | string                   | ID for this tool call                      |
| messages    | ModelMessage\[\]         | Messages visible at tool execution time    |
| abortSignal | AbortSignal \| undefined | Signal that aborts if the turn is canceled |

Return a `ToolCallDecision` to control execution:

| Decision                         | Behavior                                                    |
| -------------------------------- | ----------------------------------------------------------- |
| void or { action: "allow" }      | Run the original tool with the original input               |
| { action: "allow", input }       | Run the original tool with modified input                   |
| { action: "block", reason }      | Skip the original tool and return reason as the tool result |
| { action: "substitute", output } | Skip the original tool and return output as the tool result |

If a wrapped tool returns an `AsyncIterable` for preliminary tool results, Think collapses the iterable to its final yielded value after `beforeToolCall` runs. If you need true preliminary streaming from that tool, avoid intercepting it with `beforeToolCall`.
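The `{ action: "allow", input }` form is useful for sanitizing or clamping inputs before the tool runs. A sketch, written as a standalone helper you might call from `beforeToolCall` (the `search` tool and its `limit` field are hypothetical):

```typescript
// Decision shape matching the ToolCallDecision table above.
type ToolCallDecision =
  | { action: "allow"; input?: unknown }
  | { action: "block"; reason: string }
  | { action: "substitute"; output: unknown }
  | undefined;

// Clamp a hypothetical "search" tool's limit before it executes; let
// every other tool run with its original input.
function clampToolInput(toolName: string, input: unknown): ToolCallDecision {
  if (toolName !== "search") return undefined;
  const params = input as { query: string; limit?: number };
  return {
    action: "allow",
    input: { ...params, limit: Math.min(params.limit ?? 10, 25) },
  };
}
```

Inside the hook: `return clampToolInput(ctx.toolName, ctx.input);`.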

### afterToolCall

Called after a tool outcome is known. This includes real executions, blocked calls, substituted calls, and thrown tool errors.

TypeScript

```

afterToolCall(ctx: ToolCallResultContext) {

  if (!ctx.success) return;


  this.env.ANALYTICS.writeDataPoint({

    blobs: [ctx.toolName],

    doubles: [JSON.stringify(ctx.output).length],

  });

}


```

| Field      | Type             | Description                                          |
| ---------- | ---------------- | ---------------------------------------------------- |
| toolName   | string           | Name of the tool that was called                     |
| input      | unknown          | Input the model provided                             |
| toolCallId | string           | ID for this tool call                                |
| messages   | ModelMessage\[\] | Messages visible at tool execution time              |
| durationMs | number           | Tool execution duration in milliseconds              |
| success    | boolean          | Whether the model received a successful tool outcome |
| output     | unknown          | Present when success is true                         |
| error      | unknown          | Present when success is false                        |

For blocked and substituted tool calls, `success` is `true` because the model receives a valid tool result. Only thrown errors from the original tool execution surface as `success: false`.

### onStepFinish

Called after each step completes in the agentic loop. `StepContext` is the AI SDK's step-finish event, so it includes the full step record: generated text, reasoning, files, sources, typed tool calls and results, usage, warnings, request and response metadata, and provider metadata.

TypeScript

```

onStepFinish(ctx: StepContext) {

  console.log(

    `Step ${ctx.stepNumber} (${ctx.finishReason}): ` +

      `${ctx.usage.inputTokens}in/${ctx.usage.outputTokens}out`,

  );

}


```

| Field            | Description                                       |
| ---------------- | ------------------------------------------------- |
| stepNumber       | Zero-based index of the step                      |
| text             | Text generated in this step                       |
| reasoning        | Reasoning parts emitted by the model              |
| files            | Files generated during the step                   |
| sources          | Citations or sources used by the model            |
| toolCalls        | Typed tool calls made in this step                |
| toolResults      | Typed tool results received in this step          |
| finishReason     | Why the step ended                                |
| usage            | Token usage, including cache and reasoning tokens |
| providerMetadata | Provider-specific metadata                        |

### onChunk

Called for each streaming chunk. This hook is high-frequency — it can fire once per token — so keep the body cheap. Use it for streaming analytics, progress indicators, or token counting. Observational only.
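For example, a small counter an `onChunk` hook could feed. The `type`/`text` fields follow common AI SDK chunk shapes and are assumptions; only the counting logic matters here:

```typescript
// Framework-agnostic chunk accounting that an onChunk hook could update.
class ChunkStats {
  total = 0;
  textChars = 0;

  record(chunk: { type: string; text?: string }): void {
    this.total += 1;
    if (chunk.type === "text-delta" && chunk.text) {
      this.textChars += chunk.text.length;
    }
  }
}
```

An agent might keep a `ChunkStats` instance per turn and flush it to analytics from `onChatResponse`.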

### onChatResponse

Called after a chat turn produces and persists an assistant message. The turn lock is released before this hook runs, so it is safe to call `saveMessages` or other methods from inside.

Fires for all turn paths that persist an assistant message: WebSocket, sub-agent RPC, `saveMessages`, and auto-continuation. If a turn fails before producing any assistant parts, `onChatError` handles the error instead.

TypeScript

```

onChatResponse(result: ChatResponseResult) {

  if (result.status === "completed") {

    console.log(`Turn ${result.requestId}: ${result.message.parts.length} parts`);

  }

}


```

| Field        | Type                                 | Description                            |
| ------------ | ------------------------------------ | -------------------------------------- |
| message      | UIMessage                            | The persisted assistant message        |
| requestId    | string                               | Unique ID for this turn                |
| continuation | boolean                              | Whether this was a continuation turn   |
| status       | "completed" \| "error" \| "aborted"  | How the turn ended                     |
| error        | string?                              | Error message (when status is "error") |

### onChatError

Called when an error occurs during a chat turn. Return the error to propagate it, or return a different error. The partial assistant message (if any) is persisted before this hook fires.

TypeScript

```

onChatError(error: unknown) {

  console.error("Chat turn failed:", error);

  return new Error("Something went wrong. Please try again.");

}


```

## Client tools

Think supports tools that execute in the browser. The client sends serializable tool schemas in the chat request body, Think merges them with server tools, and when the LLM calls a client tool, the call is routed to the client for execution.

### Defining client tools

For dynamic client-side tools, pass `tools` to `useAgentChat`. Tools with an `execute` function are registered with the server as client-executed tools:


JavaScript

```

const { messages, sendMessage } = useAgentChat({

  agent,

  tools: {

    getUserTimezone: {

      description: "Get the user's timezone from their browser",

      parameters: {},

      execute: async () => {

        return Intl.DateTimeFormat().resolvedOptions().timeZone;

      },

    },

    getClipboard: {

      description: "Read text from the user's clipboard",

      parameters: {},

      execute: async () => {

        return navigator.clipboard.readText();

      },

    },

  },

});


```

TypeScript

```

const { messages, sendMessage } = useAgentChat({

  agent,

  tools: {

    getUserTimezone: {

      description: "Get the user's timezone from their browser",

      parameters: {},

      execute: async () => {

        return Intl.DateTimeFormat().resolvedOptions().timeZone;

      },

    },

    getClipboard: {

      description: "Read text from the user's clipboard",

      parameters: {},

      execute: async () => {

        return navigator.clipboard.readText();

      },

    },

  },

});


```

Client tools are tools without an `execute` function on the server — they only have a schema. When the LLM produces a tool call for one, Think routes it to the client.

For most apps, prefer defining tools on the server and using `onToolCall` for browser-only execution. The `tools` option is most useful for SDKs or platforms where the browser decides the available tool surface at runtime.

### Approval flow

Handle browser-side tool execution on the client with `onToolCall`:


JavaScript

```

useAgentChat({

  agent,

  onToolCall: async ({ toolCall, addToolOutput }) => {

    if (toolCall.toolName === "read") {

      const result = await readFromBrowser(toolCall.input);

      addToolOutput({

        toolCallId: toolCall.toolCallId,

        output: result,

      });

    }

  },

});


```

TypeScript

```

useAgentChat({

  agent,

  onToolCall: async ({ toolCall, addToolOutput }) => {

    if (toolCall.toolName === "read") {

      const result = await readFromBrowser(toolCall.input);

      addToolOutput({

        toolCallId: toolCall.toolCallId,

        output: result,

      });

    }

  },

});


```

### Auto-continuation

After a client tool result is received, Think automatically continues the conversation without a new user message. The continuation turn has `continuation: true` in the `TurnContext`, which you can use in `beforeTurn` to adjust model or tool selection.

### Message concurrency

The `messageConcurrency` property controls how overlapping user submits behave when a chat turn is already active.

| Strategy                                      | Behavior                                                                                                                             |
| --------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------ |
| "queue"                                       | Queue every submit and process them in order. Default.                                                                               |
| "latest"                                      | Keep only the latest overlapping submission; superseded submissions still persist their user messages but do not start a model turn. |
| "merge"                                       | Queue overlapping submissions, then collapse their trailing user messages into one combined turn before the latest queued turn runs. |
| "drop"                                        | Ignore overlapping submits entirely. Messages are not persisted.                                                                     |
| { strategy: "debounce", debounceMs?: number } | Like "latest", but waits out a quiet window (default 750 ms) before starting the turn.                                               |


JavaScript

```

import { Think } from "@cloudflare/think";


export class SearchAgent extends Think {

  messageConcurrency = "latest";

  getModel() {

    /* ... */

  }

}


```

TypeScript

```

import { Think } from "@cloudflare/think";

import type { MessageConcurrency } from "@cloudflare/think";


export class SearchAgent extends Think<Env> {

  override messageConcurrency: MessageConcurrency = "latest";

  getModel() {

    /* ... */

  }

}


```

### Multi-tab broadcast

Think broadcasts streaming responses to all connected WebSocket clients. When multiple browser tabs are connected to the same agent, all tabs see the streamed response in real time. Tool call states (pending, result, approval) are broadcast to all tabs.

## Agent tools

Use [Agent tools](https://developers.cloudflare.com/agents/api-reference/agent-tools/) when a parent chat agent should run a Think sub-agent as a retained, streaming tool. The parent uses `agentTool()` or `runAgentTool()` from `agents/agent-tools`; the child Think instance keeps its own messages, tools, resumable stream, and storage.

## Sub-agent RPC and programmatic turns

Think works as both a top-level agent and a sub-agent. When used as a sub-agent, the `chat()` method runs a full turn and streams events via a callback.

### chat

TypeScript

```

async chat(

  userMessage: string | UIMessage,

  callback: StreamCallback,

  options?: ChatOptions,

): Promise<void>


```

#### StreamCallback

| Method           | When it fires                                                   |
| ---------------- | --------------------------------------------------------------- |
| onEvent(json)    | For each streaming chunk (JSON-serialized UIMessageChunk)       |
| onDone()         | After the turn completes and the assistant message is persisted |
| onError(message) | On error during the turn (if not provided, the error is thrown) |

#### ChatOptions

| Field  | Description                                                 |
| ------ | ----------------------------------------------------------- |
| signal | AbortSignal to cancel the turn mid-stream                   |
| tools  | Extra tools to merge for this turn (highest merge priority) |

#### Example: parent calling a child


JavaScript

```

import { Think } from "@cloudflare/think";


export class ParentAgent extends Think {

  getModel() {

    /* ... */

  }


  async delegateToChild(task) {

    const child = await this.subAgent(ChildAgent, "child-1");


    const chunks = [];

    await child.chat(task, {

      onEvent: (json) => {

        chunks.push(json);

      },

      onDone: () => {

        console.log("Child completed");

      },

      onError: (error) => {

        console.error("Child failed:", error);

      },

    });


    return chunks;

  }

}


export class ChildAgent extends Think {

  getModel() {

    /* ... */

  }


  getSystemPrompt() {

    return "You are a research assistant. Analyze data and report findings.";

  }

}


```

TypeScript

```

import { Think } from "@cloudflare/think";


export class ParentAgent extends Think<Env> {

  getModel() {

    /* ... */

  }


  async delegateToChild(task: string) {

    const child = await this.subAgent(ChildAgent, "child-1");


    const chunks: string[] = [];

    await child.chat(task, {

      onEvent: (json) => {

        chunks.push(json);

      },

      onDone: () => {

        console.log("Child completed");

      },

      onError: (error) => {

        console.error("Child failed:", error);

      },

    });


    return chunks;

  }

}


export class ChildAgent extends Think<Env> {

  getModel() {

    /* ... */

  }


  getSystemPrompt() {

    return "You are a research assistant. Analyze data and report findings.";

  }

}


```

#### Passing extra tools

The `tools` option adds tools for this turn only, with the highest merge priority:


JavaScript

```

import { tool } from "ai";

import { z } from "zod";


await child.chat("Summarize the report", callback, {

  tools: {

    fetchReport: tool({

      description: "Fetch the report data",

      inputSchema: z.object({}),

      execute: async () => this.getReportData(),

    }),

  },

});


```

TypeScript

```

import { tool } from "ai";

import { z } from "zod";


await child.chat("Summarize the report", callback, {

  tools: {

    fetchReport: tool({

      description: "Fetch the report data",

      inputSchema: z.object({}),

      execute: async () => this.getReportData(),

    }),

  },

});


```

#### Aborting a sub-agent turn

Pass an `AbortSignal` to cancel mid-stream. When aborted, the partial assistant message is still persisted.


JavaScript

```

const controller = new AbortController();

setTimeout(() => controller.abort(), 30_000);


await child.chat("Long analysis task", callback, {

  signal: controller.signal,

});


```

TypeScript

```

const controller = new AbortController();

setTimeout(() => controller.abort(), 30_000);


await child.chat("Long analysis task", callback, {

  signal: controller.signal,

});


```

### saveMessages

Inject messages and trigger a model turn without a WebSocket connection. Use for scheduled responses, webhook-triggered turns, proactive agents, or chaining from `onChatResponse`.

TypeScript

```

async saveMessages(

  messages:

    | UIMessage[]

    | ((current: UIMessage[]) => UIMessage[] | Promise<UIMessage[]>),

  options?: SaveMessagesOptions,

): Promise<SaveMessagesResult>


```

Returns `{ requestId, status }` where `status` is `"completed"`, `"skipped"`, or `"aborted"`.

Pass `options.signal` to cancel a programmatic turn from the Durable Object that starts it. `AbortSignal` cannot cross Durable Object RPC boundaries, and the signal is not persisted across hibernation.

#### Static messages


JavaScript

```

await this.saveMessages([

  {

    id: crypto.randomUUID(),

    role: "user",

    parts: [{ type: "text", text: "Time for your daily summary." }],

  },

]);


```

TypeScript

```

await this.saveMessages([

  {

    id: crypto.randomUUID(),

    role: "user",

    parts: [{ type: "text", text: "Time for your daily summary." }],

  },

]);


```

#### Function form

When multiple `saveMessages` calls queue up, the function form runs with the latest messages when the turn actually starts:


JavaScript

```

await this.saveMessages((current) => [

  ...current,

  {

    id: crypto.randomUUID(),

    role: "user",

    parts: [{ type: "text", text: "Continue your analysis." }],

  },

]);


```

TypeScript

```

await this.saveMessages((current) => [

  ...current,

  {

    id: crypto.randomUUID(),

    role: "user",

    parts: [{ type: "text", text: "Continue your analysis." }],

  },

]);


```

#### Scheduled responses

Trigger a turn from a cron schedule:


JavaScript

```

export class MyAgent extends Think {

  getModel() {

    /* ... */

  }


  async onScheduled() {

    await this.saveMessages([

      {

        id: crypto.randomUUID(),

        role: "user",

        parts: [{ type: "text", text: "Generate the daily report." }],

      },

    ]);

  }

}


```

TypeScript

```

export class MyAgent extends Think<Env> {

  getModel() {

    /* ... */

  }


  async onScheduled() {

    await this.saveMessages([

      {

        id: crypto.randomUUID(),

        role: "user",

        parts: [{ type: "text", text: "Generate the daily report." }],

      },

    ]);

  }

}


```

#### Chaining from onChatResponse

Start a follow-up turn after the current one completes:

TypeScript

```

async onChatResponse(result: ChatResponseResult) {

  if (result.status === "completed" && this.needsFollowUp(result.message)) {

    await this.saveMessages([{

      id: crypto.randomUUID(),

      role: "user",

      parts: [{ type: "text", text: "Now summarize what you found." }],

    }]);

  }

}


```

### continueLastTurn

Run another model call after the latest assistant message without injecting a new user message. Think persists the result as a new assistant message with `continuation: true`; it does not append chunks to the existing assistant message.

TypeScript

```

protected async continueLastTurn(

  body?: Record<string, unknown>,

  options?: SaveMessagesOptions,

): Promise<SaveMessagesResult>


```

Returns `{ requestId, status: "skipped" }` if the last message is not an assistant message. The optional `body` parameter overrides the stored body for this continuation. Pass `options.signal` to cancel the continuation while it is running.
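The skip rule can be captured in a small guard (an illustrative helper, not part of the API):

```typescript
// continueLastTurn is skipped unless the newest message is an assistant
// message; this mirrors that documented rule.
function canContinueLastTurn(messages: Array<{ role: string }>): boolean {
  return messages.at(-1)?.role === "assistant";
}
```

A subclass could run this check against its current history before issuing a continuation, though calling `continueLastTurn()` and inspecting the returned `status` works just as well.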

### abortRequest and abortAllRequests

Cancel in-flight chat turns from inside the Durable Object:

TypeScript

```

protected abortRequest(requestId: string, reason?: unknown): void

protected abortAllRequests(): void


```

Use `abortRequest()` when you know the request ID. Use `abortAllRequests()` for single-purpose helpers that should cancel whatever turn is currently running. Prefer `SaveMessagesOptions.signal` for programmatic turns when you can pass a signal at the call site.
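Conceptually, this is a per-request `AbortController` registry. A self-contained sketch of the pattern (not Think's internal code):

```typescript
// One AbortController per in-flight request; aborting fans out through
// the stored signals, mirroring abortRequest / abortAllRequests.
class RequestRegistry {
  private controllers = new Map<string, AbortController>();

  start(requestId: string): AbortSignal {
    const controller = new AbortController();
    this.controllers.set(requestId, controller);
    return controller.signal;
  }

  abort(requestId: string, reason?: unknown): void {
    this.controllers.get(requestId)?.abort(reason);
    this.controllers.delete(requestId);
  }

  abortAll(): void {
    for (const id of [...this.controllers.keys()]) this.abort(id);
  }
}
```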

### Chat recovery

Think can wrap chat turns in Durable Object fibers for durable execution. When a DO is evicted mid-turn, the turn can be recovered on restart.


JavaScript

```

export class MyAgent extends Think {

  chatRecovery = true;


  getModel() {

    /* ... */

  }

}


```

TypeScript

```

export class MyAgent extends Think<Env> {

  chatRecovery = true;


  getModel() {

    /* ... */

  }

}


```

When `chatRecovery` is `true`, all four turn paths (WebSocket, auto-continuation, `saveMessages`, `continueLastTurn`) are wrapped in `runFiber`.

#### onChatRecovery

When an interrupted chat fiber is detected after DO restart, Think calls `onChatRecovery`:


JavaScript

```

export class MyAgent extends Think {

  chatRecovery = true;


  getModel() {

    /* ... */

  }


  onChatRecovery(ctx) {

    console.log(

      `Recovering turn ${ctx.requestId}, partial: ${ctx.partialText.length} chars`,

    );

    return {

      persist: true,

      continue: true,

    };

  }

}


```

TypeScript

```

export class MyAgent extends Think<Env> {

  chatRecovery = true;


  getModel() {

    /* ... */

  }


  onChatRecovery(ctx: ChatRecoveryContext) {

    console.log(

      `Recovering turn ${ctx.requestId}, partial: ${ctx.partialText.length} chars`,

    );

    return {

      persist: true,

      continue: true,

    };

  }

}


```

#### ChatRecoveryContext

| Field           | Type                     | Description                               |
| --------------- | ------------------------ | ----------------------------------------- |
| streamId        | string                   | The stream ID of the interrupted turn     |
| requestId       | string                   | The request ID of the interrupted turn    |
| partialText     | string                   | Text generated before the interruption    |
| partialParts    | MessagePart\[\]          | Parts accumulated before the interruption |
| recoveryData    | unknown \| null          | Data from this.stash() during the turn    |
| messages        | UIMessage\[\]            | Current conversation history              |
| lastBody        | Record<string, unknown>? | Body from the interrupted turn            |
| lastClientTools | ClientToolSchema\[\]?    | Client tools from the interrupted turn    |
| createdAt       | number                   | Epoch milliseconds when the turn started  |

#### ChatRecoveryOptions

| Field    | Type     | Description                                      |
| -------- | -------- | ------------------------------------------------ |
| persist  | boolean? | Whether to persist the partial assistant message |
| continue | boolean? | Whether to auto-continue with a new turn         |

With `persist: true`, the partial message is saved. With `continue: true`, Think calls `continueLastTurn()` after the agent reaches a stable state.

Use `ctx.createdAt` to skip stale recoveries. For example, if the interrupted turn is older than a few minutes, return `{ continue: false }` so the partial response is preserved without starting an old continuation.
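That staleness check is easy to factor into a helper; the five-minute window here is an arbitrary choice for the sketch:

```typescript
// Persist the partial message, but only auto-continue recent turns.
const MAX_RECOVERY_AGE_MS = 5 * 60 * 1000;

function recoveryOptions(
  createdAt: number,
  now: number = Date.now(),
): { persist: boolean; continue: boolean } {
  return { persist: true, continue: now - createdAt <= MAX_RECOVERY_AGE_MS };
}
```

In the hook: `onChatRecovery(ctx) { return recoveryOptions(ctx.createdAt); }`.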

### Stability detection

Think provides methods to check if the agent is in a stable state — no pending tool results, no pending approvals, no active turns.

#### hasPendingInteraction

Returns `true` if any assistant message has pending tool calls (tools without results or pending approvals).

TypeScript

```

protected hasPendingInteraction(): boolean


```

#### waitUntilStable

Returns a promise that resolves to `true` when the agent reaches a stable state, or `false` if the timeout is exceeded.


JavaScript

```

const stable = await this.waitUntilStable({ timeout: 30_000 });

if (stable) {

  await this.saveMessages([

    {

      id: crypto.randomUUID(),

      role: "user",

      parts: [{ type: "text", text: "Now that you are done, summarize." }],

    },

  ]);

}


```

TypeScript

```

const stable = await this.waitUntilStable({ timeout: 30_000 });

if (stable) {

  await this.saveMessages([

    {

      id: crypto.randomUUID(),

      role: "user",

      parts: [{ type: "text", text: "Now that you are done, summarize." }],

    },

  ]);

}


```

## Package exports

| Export                             | Description                                                 |
| ---------------------------------- | ----------------------------------------------------------- |
| @cloudflare/think                  | Think, Session, Workspace — main class and re-exports       |
| @cloudflare/think/tools/workspace  | createWorkspaceTools() — for custom storage backends        |
| @cloudflare/think/tools/execute    | createExecuteTool() — sandboxed code execution via codemode |
| @cloudflare/think/tools/browser    | createBrowserTools() — Chrome DevTools Protocol tools       |
| @cloudflare/think/tools/extensions | createExtensionTools() — LLM-driven extension loading       |
| @cloudflare/think/tools/sandbox    | createSandboxTools() — placeholder export; returns no tools |
| @cloudflare/think/extensions       | ExtensionManager, HostBridgeLoopback — extension runtime    |

## Peer dependencies

| Package              | Required | Notes                  |
| -------------------- | -------- | ---------------------- |
| agents               | yes      | Cloudflare Agents SDK  |
| ai                   | yes      | AI SDK v6              |
| zod                  | yes      | Schema validation (v4) |
| @cloudflare/shell    | yes      | Workspace filesystem   |
| @cloudflare/codemode | optional | For createExecuteTool  |

## Acknowledgments

Think's design is inspired by [Pi ↗](https://pi.dev).

## Example

[ Assistant example ](https://github.com/cloudflare/agents/tree/main/examples/assistant) Explore a multi-session Think assistant with sub-agent routing, shared workspace, MCP, chat recovery, and GitHub OAuth. 

## Related

* [Sessions](https://developers.cloudflare.com/agents/api-reference/sessions/) — context blocks, compaction, search, multi-session (the storage layer Think builds on)
* [Sub-agents](https://developers.cloudflare.com/agents/api-reference/sub-agents/) — `subAgent()`, `abortSubAgent()`, `deleteSubAgent()` (the base Agent methods for spawning children)
* [Chat agents](https://developers.cloudflare.com/agents/api-reference/chat-agents/) — `AIChatAgent` for when you need full control over the LLM call
* [Long-running agents](https://developers.cloudflare.com/agents/concepts/long-running-agents/) — sub-agent delegation patterns for multi-week agent lifetimes
* [Durable execution](https://developers.cloudflare.com/agents/api-reference/durable-execution/) — `runFiber()` and crash recovery (used by `chatRecovery`)
* [Browse the web](https://developers.cloudflare.com/agents/api-reference/browse-the-web/) — full CDP helper API reference

