---
title: AI
description: Run AI models on Cloudflare's global network using Workers AI, AI Gateway, and other integrated AI products.
image: https://developers.cloudflare.com/dev-products-preview.png
---

> Documentation Index  
> Fetch the complete documentation index at: https://developers.cloudflare.com/ai/llms.txt  
> Use this file to discover all available pages before exploring further.


# AI

Run AI models on Cloudflare's global network.

 Available on all plans 

Cloudflare AI provides a unified platform for running AI models, whether hosted on Cloudflare infrastructure (Workers AI) or proxied through AI Gateway to external providers.
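The two paths differ mainly in which endpoint you call: hosted models are invoked directly against the Cloudflare API, while proxied models go through your AI Gateway. As a rough sketch (the path shapes below follow Cloudflare's documented REST patterns, but treat `account_id`, `gateway_id`, and the exact routes as illustrative rather than authoritative):

```python
# Illustrative only: URL construction for the two ways to reach a model.
# Verify the exact routes against the Workers AI and AI Gateway API references.

def workers_ai_url(account_id: str, model: str) -> str:
    """Direct inference against a model hosted on Workers AI."""
    return (
        "https://api.cloudflare.com/client/v4"
        f"/accounts/{account_id}/ai/run/{model}"
    )

def ai_gateway_url(account_id: str, gateway_id: str, provider: str, endpoint: str) -> str:
    """Proxy a request to an external provider through AI Gateway."""
    return (
        "https://gateway.ai.cloudflare.com/v1"
        f"/{account_id}/{gateway_id}/{provider}/{endpoint}"
    )

print(workers_ai_url("abc123", "@cf/openai/gpt-oss-120b"))
print(ai_gateway_url("abc123", "my-gateway", "openai", "chat/completions"))
```

Either URL is then called with your API token in an `Authorization: Bearer` header; the request body depends on the model's task type.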

## Get started

### Models

Explore all AI models available through Cloudflare, including hosted models on Workers AI and external providers through AI Gateway.

[ Browse models ](https://developers.cloudflare.com/ai/models/) 

## Related products

**[Workers AI](https://developers.cloudflare.com/workers-ai/)** 

Run machine learning models, powered by serverless GPUs, on Cloudflare's global network.

**[AI Gateway](https://developers.cloudflare.com/ai-gateway/)** 

Observe and control your AI applications with caching, rate limiting, request retries, model fallback, and more.

**[Vectorize](https://developers.cloudflare.com/vectorize/)** 

Build full-stack AI applications with Vectorize, Cloudflare's vector database.

**[Agents](https://developers.cloudflare.com/agents/)** 

Build AI-powered agents to perform tasks, persist state, and interact with external services.

**[AI Search](https://developers.cloudflare.com/ai-search/)** 

Create fully managed RAG pipelines for your AI applications.

**[AI Crawl Control](https://developers.cloudflare.com/ai-crawl-control/)** 

Analyze and control third-party AI crawlers on your website.

**[Browser Rendering](https://developers.cloudflare.com/browser-run/)** 

Control and interact with headless browser instances for AI data extraction.

**[Cloudflare Agent](https://developers.cloudflare.com/cloudflare-agent/)** 

An AI-powered assistant that helps you navigate, configure, and manage Cloudflare.

**[Dynamic Workers](https://developers.cloudflare.com/dynamic-workers/)** 

Spin up isolated Workers on demand to execute code.

**[Sandbox SDK](https://developers.cloudflare.com/sandbox-sdk/)** 

Build secure, isolated code execution environments.


---

---
title: Models
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Models

Can't find what you're looking for? View all models available through AI Gateway, including third-party providers like Anthropic, OpenAI, and more. [Browse supported models for the REST API](https://developers.cloudflare.com/ai-gateway/supported-models/).


There are currently 136 models available.
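Each entry below is either **Hosted** (runs on Workers AI) or **Proxied** (reached through AI Gateway). Judging from the model URLs in this catalog, the two follow a naming convention: hosted model IDs carry a `@cf/` prefix, while proxied models are addressed as `provider/model`. A small sketch of that convention (the rule is inferred from this listing, not a documented API guarantee):

```python
# Illustrative only: classify a catalog model ID by its prefix,
# following the convention visible in this catalog's model URLs.

def classify(model_id: str) -> str:
    """Return 'Hosted' for Workers AI models, 'Proxied' for AI Gateway models."""
    return "Hosted" if model_id.startswith("@cf/") else "Proxied"

print(classify("@cf/openai/gpt-oss-120b"))  # a Workers AI-hosted model
print(classify("openai/gpt-5.5"))           # an external model via AI Gateway
```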

- **[kimi-k2.6](https://developers.cloudflare.com/ai/models/@cf/moonshotai/kimi-k2.6/)** • Text Generation • Moonshot AI • Hosted. Kimi K2.6 is a frontier-scale open-source 1T parameter model with a 262.1k context window, multi-turn tool calling, vision inputs, and structured outputs for agentic workloads. *Function calling, Reasoning, Vision*
- **[glm-4.7-flash](https://developers.cloudflare.com/ai/models/@cf/zai-org/glm-4.7-flash/)** • Text Generation • Zhipu AI • Hosted. GLM-4.7-Flash is a fast and efficient multilingual text generation model with a 131,072 token context window. Optimized for dialogue, instruction-following, and multi-turn tool calling across 100+ languages. *Function calling, Reasoning*
- **[gpt-oss-120b](https://developers.cloudflare.com/ai/models/@cf/openai/gpt-oss-120b/)** • Text Generation • OpenAI • Hosted. OpenAI's open-weight models designed for powerful reasoning, agentic tasks, and versatile developer use cases – gpt-oss-120b is for production, general purpose, high reasoning use-cases. *Function calling, Reasoning*
- **[llama-4-scout-17b-16e-instruct](https://developers.cloudflare.com/ai/models/@cf/meta/llama-4-scout-17b-16e-instruct/)** • Text Generation • Meta • Hosted. Meta's Llama 4 Scout is a 17 billion parameter model with 16 experts that is natively multimodal. These models leverage a mixture-of-experts architecture to offer industry-leading performance in text and image understanding. *Batch, Function calling, Vision*
- **[tts-2](https://developers.cloudflare.com/ai/models/inworld/tts-2/)** • Text-to-Speech • Inworld • Proxied. Inworld's most powerful and expressive text-to-speech model. Builds on TTS 1.5 with rich expressive speech, real-time latency, natural language steering (e.g. \[whisper\], \[say excitedly\]), and stronger multilingual support across 15 production languages plus 90+ experimental languages.
- **[hh1-t2v](https://developers.cloudflare.com/ai/models/alibaba/hh1-t2v/)** • Text-to-Video • Alibaba • Proxied. Alibaba's HappyHorse 1.0 text-to-video model. Generates videos from a text prompt with configurable resolution, aspect ratio, and duration (3-15s).
- **[hh1-i2v](https://developers.cloudflare.com/ai/models/alibaba/hh1-i2v/)** • Image-to-Video • Alibaba • Proxied. Alibaba's HappyHorse 1.0 image-to-video model. Animates a reference image with an optional text prompt. Supports 720P and 1080P output with durations from 3 to 15 seconds.
- **[gpt-5.4-pro](https://developers.cloudflare.com/ai/models/openai/gpt-5.4-pro/)** • Text Generation • OpenAI • Proxied. GPT-5.4 Pro uses OpenAI's Responses API with built-in tools, improved reasoning, and stateful context management.
- **[gpt-5.5](https://developers.cloudflare.com/ai/models/openai/gpt-5.5/)** • Text Generation • OpenAI • Proxied. GPT-5.5 is OpenAI's flagship model with strong coding, reasoning, and multimodal capabilities.
- **[gpt-image-2](https://developers.cloudflare.com/ai/models/openai/gpt-image-2/)** • Text-to-Image • OpenAI • Proxied. OpenAI's next-generation image model that creates and edits images from text prompts, with support for multiple quality levels, sizes, and output formats. Note: transparent backgrounds are not supported; use openai/gpt-image-1.5 for transparent PNGs.
- **[claude-opus-4.7](https://developers.cloudflare.com/ai/models/anthropic/claude-opus-4.7/)** • Text Generation • Anthropic • Proxied. Claude Opus 4.7 is Anthropic's most capable generally available model, with a step-change improvement in agentic coding over Claude Opus 4.6. It uses adaptive thinking to calibrate reasoning per task and supports a one million token context window at standard pricing.
- **[qwen3.5-397b-a17b](https://developers.cloudflare.com/ai/models/alibaba/qwen3.5-397b-a17b/)** • Text Generation • Alibaba • Proxied. Alibaba's Qwen 3.5 is a 397B-parameter mixture-of-experts model with 17B active parameters, offering strong reasoning capabilities with efficient inference.
- **[qwen3-max](https://developers.cloudflare.com/ai/models/alibaba/qwen3-max/)** • Text Generation • Alibaba • Proxied. Alibaba's Qwen 3 Max is a large language model with strong coding, reasoning, and multilingual capabilities, served via DashScope's OpenAI-compatible endpoint.
- **[v6](https://developers.cloudflare.com/ai/models/pixverse/v6/)** • Text-to-Video • PixVerse • Proxied. Pixverse v6 is the latest Pixverse video model with support for up to 15-second videos, customizable duration from 1 to 15 seconds, and audio generation.
- **[v5.6](https://developers.cloudflare.com/ai/models/pixverse/v5.6/)** • Text-to-Video • PixVerse • Proxied. Pixverse v5.6 is a video generation model supporting text-to-video and image-to-video with audio generation, customizable aspect ratios, and up to 1080p output.
- **[q3-turbo](https://developers.cloudflare.com/ai/models/vidu/q3-turbo/)** • Text-to-Video • Vidu • Proxied. Vidu Q3 Turbo is a faster version of Vidu Q3 optimized for lower latency video generation while maintaining audio support and up to 16-second clips.
- **[q3-pro](https://developers.cloudflare.com/ai/models/vidu/q3-pro/)** • Text-to-Video • Vidu • Proxied. Vidu Q3 Pro is a high-quality video generation model supporting text-to-video, image-to-video, and start/end-frame-to-video workflows with audio and up to 16-second clips.
- **[wan-2.6-image](https://developers.cloudflare.com/ai/models/alibaba/wan-2.6-image/)** • Text-to-Image • Alibaba • Proxied. Alibaba's Wan 2.6 text-to-image model generating images from text prompts with optional negative prompts and customizable dimensions.
- **[gen-4.5](https://developers.cloudflare.com/ai/models/runwayml/gen-4.5/)** • Text-to-Video • RunwayML • Proxied. RunwayML's video generation model supporting both text-to-video and image-to-video with customizable duration, aspect ratio, and content moderation controls.
- **[music-2.6](https://developers.cloudflare.com/ai/models/minimax/music-2.6/)** • Music Generation • MiniMax • Proxied. MiniMax's music generation model that creates full-length songs with vocals from text prompts and lyrics, or instrumental tracks. Supports BPM/key control and auto-generated lyrics.
- **[gpt-image-1.5](https://developers.cloudflare.com/ai/models/openai/gpt-image-1.5/)** • Text-to-Image • OpenAI • Proxied. OpenAI's image generation model that creates and edits images from text prompts, supporting multiple quality levels and output sizes.
- **[imagen-4](https://developers.cloudflare.com/ai/models/google/imagen-4/)** • Text-to-Image • Google • Proxied. Google's latest image generation model producing high-quality, photorealistic images from text prompts with support for multiple aspect ratios.
- **[universal-3-pro](https://developers.cloudflare.com/ai/models/assemblyai/universal-3-pro/)** • Automatic Speech Recognition • AssemblyAI • Proxied. AssemblyAI's Universal 3 Pro speech recognition model for high-accuracy transcription.
- **[tts-1.5-mini](https://developers.cloudflare.com/ai/models/inworld/tts-1.5-mini/)** • Text-to-Speech • Inworld • Proxied. Ultra-fast, cost-efficient text-to-speech with approximately 120ms latency and 15-language support.
- **[tts-1.5-max](https://developers.cloudflare.com/ai/models/inworld/tts-1.5-max/)** • Text-to-Speech • Inworld • Proxied. Highest-quality text-to-speech with under 200ms latency, emotion control, and 15-language support.
- **[speech-2.8-turbo](https://developers.cloudflare.com/ai/models/minimax/speech-2.8-turbo/)** • Text-to-Speech • MiniMax • Proxied. MiniMax Speech 2.8 Turbo turns text into natural, expressive speech with voice cloning, emotion control, and 40+ language support at faster speeds.
- **[m2.7](https://developers.cloudflare.com/ai/models/minimax/m2.7/)** • Text Generation • MiniMax • Proxied. MiniMax's M2.7 language model with multilingual capabilities.
- **[speech-2.8-hd](https://developers.cloudflare.com/ai/models/minimax/speech-2.8-hd/)** • Text-to-Speech • MiniMax • Proxied. MiniMax Speech 2.8 HD focuses on studio-grade audio generation with emotion control, multilingual support (40+ languages), and voice cloning.
- **[hailuo-2.3-fast](https://developers.cloudflare.com/ai/models/minimax/hailuo-2.3-fast/)** • Text-to-Video • MiniMax • Proxied. A lower-latency version of Hailuo 2.3 that preserves core motion quality, visual consistency, and stylization while enabling faster iteration.
- **[hailuo-2.3](https://developers.cloudflare.com/ai/models/minimax/hailuo-2.3/)** • Text-to-Video • MiniMax • Proxied. A high-fidelity video generation model optimized for realistic human motion, cinematic VFX, expressive characters, and strong prompt and style adherence across text-to-video and image-to-video workflows.
- **[recraftv4-pro-vector](https://developers.cloudflare.com/ai/models/recraft/recraftv4-pro-vector/)** • Text-to-Image • Recraft • Proxied. Generate detailed, production-ready SVG vector graphics from text prompts with fine geometry, scalable to any size for print and design work.
- **[recraftv4-vector](https://developers.cloudflare.com/ai/models/recraft/recraftv4-vector/)** • Text-to-Image • Recraft • Proxied. Generate production-ready SVG vector graphics from text prompts with clean geometry, structured layers, and editable paths.
- **[recraftv4](https://developers.cloudflare.com/ai/models/recraft/recraftv4/)** • Text-to-Image • Recraft • Proxied. Recraft V4 generates art-directed images with strong composition, accurate text rendering, and design taste built in. Fast and cost-efficient at standard resolution.
- **[recraftv4-pro](https://developers.cloudflare.com/ai/models/recraft/recraftv4-pro/)** • Text-to-Image • Recraft • Proxied. Recraft V4 Pro generates high-resolution, art-directed images at 2048px+ with strong composition, text rendering, and design taste. Built for print and production work.
- **[gemini-3-flash](https://developers.cloudflare.com/ai/models/google/gemini-3-flash/)** • Text Generation • Google • Proxied. Gemini 3 Flash is Google's fast multimodal model with frontier intelligence, superior search, and grounding capabilities.
- **[gemini-3.1-flash-lite](https://developers.cloudflare.com/ai/models/google/gemini-3.1-flash-lite/)** • Text Generation • Google • Proxied. Google's lightest and most cost-efficient Gemini model for high-throughput tasks.
- **[gemini-3.1-pro](https://developers.cloudflare.com/ai/models/google/gemini-3.1-pro/)** • Text Generation • Google • Proxied. Google's most intelligent Gemini model with improved reasoning, a medium thinking level, and a 1M token context window.
- **[tts-1](https://developers.cloudflare.com/ai/models/openai/tts-1/)** • Text-to-Speech • OpenAI • Proxied. OpenAI's text-to-speech model optimized for real-time use with low latency.
- **[tts-1-hd](https://developers.cloudflare.com/ai/models/openai/tts-1-hd/)** • Text-to-Speech • OpenAI • Proxied. OpenAI's high-definition text-to-speech model producing higher quality audio output.
- **[gpt-4o-transcribe](https://developers.cloudflare.com/ai/models/openai/gpt-4o-transcribe/)** • Automatic Speech Recognition • OpenAI • Proxied. A speech-to-text model that uses GPT-4o to transcribe audio with improved word error rate and better language recognition compared to original Whisper models.
- **[o4-mini](https://developers.cloudflare.com/ai/models/openai/o4-mini/)** • Text Generation • OpenAI • Proxied. OpenAI's fast, lightweight reasoning model optimized for multi-step problem solving at lower cost.
- **[gpt-4.1](https://developers.cloudflare.com/ai/models/openai/gpt-4.1/)** • Text Generation • OpenAI • Proxied. OpenAI's flagship GPT model for complex tasks with a million-token context window.
- **[gpt-4.1-mini](https://developers.cloudflare.com/ai/models/openai/gpt-4.1-mini/)** • Text Generation • OpenAI • Proxied. Fast, affordable version of GPT-4.1 with a million-token context window.
- **[gpt-5](https://developers.cloudflare.com/ai/models/openai/gpt-5/)** • Text Generation • OpenAI • Proxied. OpenAI's model excelling at coding, writing, and reasoning.
- **[gpt-5.4-nano](https://developers.cloudflare.com/ai/models/openai/gpt-5.4-nano/)** • Text Generation • OpenAI • Proxied. GPT-5.4 Nano is OpenAI's smallest and fastest model, optimized for edge and low-latency use cases.
- **[gpt-5.4-mini](https://developers.cloudflare.com/ai/models/openai/gpt-5.4-mini/)** • Text Generation • OpenAI • Proxied. GPT-5.4 Mini is a smaller, faster, and more cost-efficient version of GPT-5.4 for lightweight tasks.
- **[claude-haiku-4.5](https://developers.cloudflare.com/ai/models/anthropic/claude-haiku-4.5/)** • Text Generation • Anthropic • Proxied. Claude Haiku 4.5 delivers similar levels of coding performance at one-third the cost and more than twice the speed of larger models.
- **[claude-sonnet-4](https://developers.cloudflare.com/ai/models/anthropic/claude-sonnet-4/)** • Text Generation • Anthropic • Proxied. Claude Sonnet 4 delivers superior coding and reasoning while responding more precisely to instructions, a significant upgrade over previous versions.
- **[claude-sonnet-4.5](https://developers.cloudflare.com/ai/models/anthropic/claude-sonnet-4.5/)** • Text Generation • Anthropic • Proxied. Claude Sonnet 4.5 is the best coding model to date, with significant improvements across the entire development lifecycle.
- **[claude-sonnet-4.6](https://developers.cloudflare.com/ai/models/anthropic/claude-sonnet-4.6/)** • Text Generation • Anthropic • Proxied. Claude Sonnet 4.6 is Anthropic's latest balanced model offering strong coding, reasoning, and agentic capabilities with improved instruction following.
- **[claude-opus-4.6](https://developers.cloudflare.com/ai/models/anthropic/claude-opus-4.6/)** • Text Generation • Anthropic • Proxied. Claude Opus 4.6 is Anthropic's flagship language model built for complex, multi-step work in coding, financial analysis, and legal reasoning. It uses extended thinking to work through complex problems carefully and features a one million token context window.
- **[seedream-5-lite](https://developers.cloudflare.com/ai/models/bytedance/seedream-5-lite/)** • Text-to-Image • ByteDance • Proxied. Seedream 5 Lite is a lighter, faster version of the Seedream 5 family with multi-reference and batch generation support.
- **[seedream-4.5](https://developers.cloudflare.com/ai/models/bytedance/seedream-4.5/)** • Text-to-Image • ByteDance • Proxied. Seedream 4.5 builds on 4.0 with multi-reference image support, batch generation, and sequential image generation.
- **[seedream-4.0](https://developers.cloudflare.com/ai/models/bytedance/seedream-4.0/)** • Text-to-Image • ByteDance • Proxied. Seedream 4.0 is ByteDance's image creation model that combines text-to-image generation and image editing into a single architecture, offering fast, high-resolution output up to 4K.
- **[nano-banana-2](https://developers.cloudflare.com/ai/models/google/nano-banana-2/)** • Text-to-Image • Google • Proxied. Google's second-generation image generation model with improved quality and speed.
- **[nano-banana-pro](https://developers.cloudflare.com/ai/models/google/nano-banana-pro/)** • Text-to-Image • Google • Proxied. Google's higher-quality image generation model with improved detail and prompt adherence.
- **[nano-banana](https://developers.cloudflare.com/ai/models/google/nano-banana/)** • Text-to-Image • Google • Proxied. Google's fast image generation model producing high-quality images from text prompts.
- **[veo-3.1-fast](https://developers.cloudflare.com/ai/models/google/veo-3.1-fast/)** • Text-to-Video • Google • Proxied. A faster version of Veo 3.1 optimized for lower latency while maintaining high-quality video and audio output.
- **[veo-3-fast](https://developers.cloudflare.com/ai/models/google/veo-3-fast/)** • Text-to-Video • Google • Proxied. A faster version of Veo 3 optimized for lower latency video generation with audio support.
- **[veo-3.1](https://developers.cloudflare.com/ai/models/google/veo-3.1/)** • Text-to-Video • Google • Proxied. Google's latest video generation model with improved quality, motion, and audio generation.
- **[veo-3](https://developers.cloudflare.com/ai/models/google/veo-3/)** • Text-to-Video • Google • Proxied. Google's video generation model capable of producing high-quality videos with optional audio from text prompts.
- **[gpt-5.4](https://developers.cloudflare.com/ai/models/openai/gpt-5.4/)** • Text Generation • OpenAI • Proxied. GPT-5.4 is OpenAI's flagship model with strong coding, reasoning, and multimodal capabilities.
- **[gemma-4-26b-a4b-it](https://developers.cloudflare.com/ai/models/@cf/google/gemma-4-26b-a4b-it/)** • Text Generation • Google • Hosted. Gemma 4 is Google's most intelligent family of open models, built from Gemini 3 research to maximize intelligence-per-parameter. *Function calling, Reasoning, Vision*
- **[nemotron-3-120b-a12b](https://developers.cloudflare.com/ai/models/@cf/nvidia/nemotron-3-120b-a12b/)** • Text Generation • NVIDIA • Hosted. NVIDIA Nemotron 3 Super is a hybrid MoE model with leading accuracy for multi-agent applications and specialized agentic AI systems. *Function calling, Reasoning*
- **[kimi-k2.5](https://developers.cloudflare.com/ai/models/@cf/moonshotai/kimi-k2.5/)** • Text Generation • Moonshot AI • Hosted. Kimi K2.5 is a frontier-scale open-source model with a 256k context window, multi-turn tool calling, vision inputs, and structured outputs for agentic workloads. *Function calling, Planned deprecation, Reasoning, Vision*
- **[flux-2-klein-9b](https://developers.cloudflare.com/ai/models/@cf/black-forest-labs/flux-2-klein-9b/)** • Text-to-Image • Black Forest Labs • Hosted. FLUX.2 \[klein\] 9B is an ultra-fast, distilled image model with enhanced quality. It unifies image generation and editing in a single model, delivering state-of-the-art quality enabling interactive workflows, real-time previews, and latency-critical applications. *Partner*
- **[flux-2-klein-4b](https://developers.cloudflare.com/ai/models/@cf/black-forest-labs/flux-2-klein-4b/)** • Text-to-Image • Black Forest Labs • Hosted. FLUX.2 \[klein\] is an ultra-fast, distilled image model. It unifies image generation and editing in a single model, delivering state-of-the-art quality enabling interactive workflows, real-time previews, and latency-critical applications. *Partner*
- **[flux-2-dev](https://developers.cloudflare.com/ai/models/@cf/black-forest-labs/flux-2-dev/)** • Text-to-Image • Black Forest Labs • Hosted. FLUX.2 \[dev\] is an image model from Black Forest Labs for generating highly realistic and detailed images, with multi-reference support. *Partner*
- **[aura-2-es](https://developers.cloudflare.com/ai/models/@cf/deepgram/aura-2-es/)** • Text-to-Speech • Deepgram • Hosted. Aura-2 is a context-aware text-to-speech (TTS) model that applies natural pacing, expressiveness, and fillers based on the context of the provided text. The quality of your text input directly impacts the naturalness of the audio output. *Batch, Partner, Real-time*
- **[aura-2-en](https://developers.cloudflare.com/ai/models/@cf/deepgram/aura-2-en/)** • Text-to-Speech • Deepgram • Hosted. Aura-2 is a context-aware text-to-speech (TTS) model that applies natural pacing, expressiveness, and fillers based on the context of the provided text. The quality of your text input directly impacts the naturalness of the audio output. *Batch, Partner, Real-time*
- **[granite-4.0-h-micro](https://developers.cloudflare.com/ai/models/@cf/ibm-granite/granite-4.0-h-micro/)** • Text Generation • IBM • Hosted. Granite 4.0 instruct models deliver strong performance across benchmarks, achieving industry-leading results in key agentic tasks like instruction following and function calling. These efficiencies make the models well-suited for a wide range of use cases like retrieval-augmented generation (RAG), multi-agent workflows, and edge deployments. *Function calling*
- **[flux](https://developers.cloudflare.com/ai/models/@cf/deepgram/flux/)** • Automatic Speech Recognition • Deepgram • Hosted. Flux is the first conversational speech recognition model built specifically for voice agents. *Partner, Real-time*
- **[plamo-embedding-1b](https://developers.cloudflare.com/ai/models/@cf/pfnet/plamo-embedding-1b/)** • Text Embeddings • pfnet • Hosted. PLaMo-Embedding-1B is a Japanese text embedding model developed by Preferred Networks, Inc. It can convert Japanese text input into numerical vectors and can be used for a wide range of applications, including information retrieval, text classification, and clustering.
- **[gemma-sea-lion-v4-27b-it](https://developers.cloudflare.com/ai/models/@cf/aisingapore/gemma-sea-lion-v4-27b-it/)** • Text Generation • aisingapore • Hosted. SEA-LION stands for Southeast Asian Languages In One Network, a collection of Large Language Models (LLMs) which have been pretrained and instruct-tuned for the Southeast Asia (SEA) region.
- **[indictrans2-en-indic-1B](https://developers.cloudflare.com/ai/models/@cf/ai4bharat/indictrans2-en-indic-1B/)** • Translation • ai4bharat • Hosted. IndicTrans2 is the first open-source transformer-based multilingual NMT model that supports high-quality translations across all 22 scheduled Indic languages.
- **[embeddinggemma-300m](https://developers.cloudflare.com/ai/models/@cf/google/embeddinggemma-300m/)** • Text Embeddings • Google • Hosted. EmbeddingGemma is a 300M parameter, state-of-the-art for its size, open embedding model from Google, built from Gemma 3 (with T5Gemma initialization) and the same research and technology used to create Gemini models. EmbeddingGemma produces vector representations of text, making it well-suited for search and retrieval tasks, including classification, clustering, and semantic similarity search. This model was trained with data in 100+ spoken languages.
- **[aura-1](https://developers.cloudflare.com/ai/models/@cf/deepgram/aura-1/)** • Text-to-Speech • Deepgram • Hosted. Aura is a context-aware text-to-speech (TTS) model that applies natural pacing, expressiveness, and fillers based on the context of the provided text. The quality of your text input directly impacts the naturalness of the audio output. *Batch, Partner, Real-time*
- **[lucid-origin](https://developers.cloudflare.com/ai/models/@cf/leonardo/lucid-origin/)** • Text-to-Image • Leonardo • Hosted. Lucid Origin from Leonardo.Ai is their most adaptable and prompt-responsive model to date. Whether you're generating images with sharp graphic design, stunning full-HD renders, or highly specific creative direction, it adheres closely to your prompts, renders text with accuracy, and supports a wide array of visual styles and aesthetics – from stylized concept art to crisp product mockups. *Partner*
- **[phoenix-1.0](https://developers.cloudflare.com/ai/models/@cf/leonardo/phoenix-1.0/)** • Text-to-Image • Leonardo • Hosted. Phoenix 1.0 is a model by Leonardo.Ai that generates images with exceptional prompt adherence and coherent text. *Partner*
- **[gpt-oss-20b](https://developers.cloudflare.com/ai/models/@cf/openai/gpt-oss-20b/)** • Text Generation • OpenAI • Hosted. OpenAI's open-weight models designed for powerful reasoning, agentic tasks, and versatile developer use cases – gpt-oss-20b is for lower latency, and local or specialized use-cases. *Function calling, Reasoning*
- **[smart-turn-v2](https://developers.cloudflare.com/ai/models/@cf/pipecat-ai/smart-turn-v2/)** • Voice Activity Detection • Pipecat • Hosted. The second version of an open source, community-driven, native audio turn detection model. *Batch, Real-time*
- **[qwen3-embedding-0.6b](https://developers.cloudflare.com/ai/models/@cf/qwen/qwen3-embedding-0.6b/)** • Text Embeddings • Qwen • Hosted. The Qwen3 Embedding model series is the latest proprietary model of the Qwen family, specifically designed for text embedding and ranking tasks.
- **[nova-3](https://developers.cloudflare.com/ai/models/@cf/deepgram/nova-3/)** • Automatic Speech Recognition • Deepgram • Hosted. Transcribe audio using Deepgram's speech-to-text model. *Batch, Partner, Real-time*
- **[qwen3-30b-a3b-fp8](https://developers.cloudflare.com/ai/models/@cf/qwen/qwen3-30b-a3b-fp8/)** • Text Generation • Qwen • Hosted. Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support. *Batch, Function calling, Reasoning*
- **[gemma-3-12b-it](https://developers.cloudflare.com/ai/models/@cf/google/gemma-3-12b-it/)** • Text Generation • Google • Hosted. Gemma 3 models are well-suited for a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning. Gemma 3 models are multimodal, handling text and image input and generating text output, with a large 128K context window, multilingual support in over 140 languages, and availability in more sizes than previous versions. *LoRA, Planned deprecation*
- **[mistral-small-3.1-24b-instruct](https://developers.cloudflare.com/ai/models/@cf/mistralai/mistral-small-3.1-24b-instruct/)** • Text Generation • MistralAI • Hosted. Building upon Mistral Small 3 (2501), Mistral Small 3.1 (2503) adds state-of-the-art vision understanding and enhances long context capabilities up to 128k tokens without compromising text performance. With 24 billion parameters, this model achieves top-tier capabilities in both text and vision tasks. *Function calling*
- **[qwq-32b](https://developers.cloudflare.com/ai/models/@cf/qwen/qwq-32b/)** • Text Generation • Qwen • Hosted. QwQ is the reasoning model of the Qwen series. Compared with conventional instruction-tuned models, QwQ, which is capable of thinking and reasoning, can achieve significantly enhanced performance in downstream tasks, especially hard problems. QwQ-32B is the medium-sized reasoning model, which is capable of achieving competitive performance against state-of-the-art reasoning models, e.g., DeepSeek-R1, o1-mini. *LoRA, Reasoning*
- **[qwen2.5-coder-32b-instruct](https://developers.cloudflare.com/ai/models/@cf/qwen/qwen2.5-coder-32b-instruct/)** • Text Generation • Qwen • Hosted. Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers, and brings several improvements over CodeQwen1.5. *LoRA*
- **[bge-reranker-base](https://developers.cloudflare.com/ai/models/@cf/baai/bge-reranker-base/)** • Text Classification • BAAI • Hosted. Unlike an embedding model, a reranker takes a question and document as input and directly outputs a similarity score instead of an embedding. You can get a relevance score by inputting a query and passage to the reranker, and the score can be mapped to a float value in \[0,1\] by a sigmoid function.
- **[llama-guard-3-8b](https://developers.cloudflare.com/ai/models/@cf/meta/llama-guard-3-8b/)** • Text Generation • Meta • Hosted. Llama Guard 3 is a Llama-3.1-8B pretrained model, fine-tuned for content safety classification. Similar to previous versions, it can be used to classify content in both LLM inputs (prompt classification) and in LLM responses (response classification). It acts as an LLM – it generates text in its output that indicates whether a given prompt or response is safe or unsafe, and if unsafe, it also lists the content categories violated. *LoRA*
- **deepseek-r1-distill-qwen-32b** • Text Generation • DeepSeek • Hosted. DeepSeek-R1-Distill-Qwen-32B is a model distilled from DeepSeek-R1 based on Qwen2.5.
It outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.Reasoning](https://developers.cloudflare.com/ai/models/@cf/deepseek-ai/deepseek-r1-distill-qwen-32b/)[![Meta logo](https://developers.cloudflare.com/_astro/meta.BR4nfp35.svg)llama-3.3-70b-instruct-fp8-fastText Generation • Meta • HostedLlama 3.3 70B quantized to fp8 precision, optimized to be faster.BatchFunction calling](https://developers.cloudflare.com/ai/models/@cf/meta/llama-3.3-70b-instruct-fp8-fast/)[![Meta logo](https://developers.cloudflare.com/_astro/meta.BR4nfp35.svg)llama-3.2-1b-instructText Generation • Meta • HostedThe Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks.](https://developers.cloudflare.com/ai/models/@cf/meta/llama-3.2-1b-instruct/)[![Meta logo](https://developers.cloudflare.com/_astro/meta.BR4nfp35.svg)llama-3.2-3b-instructText Generation • Meta • HostedThe Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks.](https://developers.cloudflare.com/ai/models/@cf/meta/llama-3.2-3b-instruct/)[![Meta logo](https://developers.cloudflare.com/_astro/meta.BR4nfp35.svg)llama-3.2-11b-vision-instructText Generation • Meta • Hosted The Llama 3.2-Vision instruction-tuned models are optimized for visual recognition, image reasoning, captioning, and answering general questions about an image.LoRAVision](https://developers.cloudflare.com/ai/models/@cf/meta/llama-3.2-11b-vision-instruct/)[![Black Forest Labs logo](https://developers.cloudflare.com/_astro/blackforestlabs.Ccs-Y4-D.svg)flux-1-schnellText-to-Image • Black Forest Labs • HostedFLUX.1 \[schnell\] is a 12 billion parameter rectified flow transformer capable of generating images from text descriptions. 
](https://developers.cloudflare.com/ai/models/@cf/black-forest-labs/flux-1-schnell/)[![Meta logo](https://developers.cloudflare.com/_astro/meta.BR4nfp35.svg)llama-3.1-8b-instruct-awqText Generation • Meta • HostedQuantized (int4) generative text model with 8 billion parameters from Meta.Planned deprecation](https://developers.cloudflare.com/ai/models/@cf/meta/llama-3.1-8b-instruct-awq/)[![Meta logo](https://developers.cloudflare.com/_astro/meta.BR4nfp35.svg)llama-3.1-8b-instruct-fp8Text Generation • Meta • HostedLlama 3.1 8B quantized to FP8 precision](https://developers.cloudflare.com/ai/models/@cf/meta/llama-3.1-8b-instruct-fp8/)[![MyShell logo](https://developers.cloudflare.com/_astro/myshell.BpTDMxd2.svg)melottsText-to-Speech • MyShell • HostedMeloTTS is a high-quality multi-lingual text-to-speech library by MyShell.ai.](https://developers.cloudflare.com/ai/models/@cf/myshell-ai/melotts/)[![Meta logo](https://developers.cloudflare.com/_astro/meta.BR4nfp35.svg)llama-3.1-8b-instructText Generation • Meta • HostedThe Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction tuned generative models. 
The Llama 3.1 instruction tuned text only models are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks.Planned deprecation](https://developers.cloudflare.com/ai/models/@cf/meta/llama-3.1-8b-instruct/)[![BAAI logo](https://developers.cloudflare.com/_astro/baai.mOtdbKlV.svg)bge-m3Text Embeddings • BAAI • HostedMulti-Functionality, Multi-Linguality, and Multi-Granularity embeddings model.](https://developers.cloudflare.com/ai/models/@cf/baai/bge-m3/)[![Meta logo](https://developers.cloudflare.com/_astro/meta.BR4nfp35.svg)meta-llama-3-8b-instructText Generation • Meta • HostedGeneration over generation, Meta Llama 3 demonstrates state-of-the-art performance on a wide range of industry benchmarks and offers new capabilities, including improved reasoning. Planned deprecation](https://developers.cloudflare.com/ai/models/@hf/meta-llama/meta-llama-3-8b-instruct/)[![OpenAI logo](https://developers.cloudflare.com/_astro/openai.BI8PEEzI.svg)whisper-large-v3-turboAutomatic Speech Recognition • OpenAI • HostedWhisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Batch](https://developers.cloudflare.com/ai/models/@cf/openai/whisper-large-v3-turbo/)[![Meta logo](https://developers.cloudflare.com/_astro/meta.BR4nfp35.svg)llama-3-8b-instruct-awqText Generation • Meta • HostedQuantized (int4) generative text model with 8 billion parameters from Meta.Planned deprecation](https://developers.cloudflare.com/ai/models/@cf/meta/llama-3-8b-instruct-awq/)[lllava-1.5-7b-hfBetaImage-to-Text • llava-hf • HostedLLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data. 
It is an auto-regressive language model, based on the transformer architecture.](https://developers.cloudflare.com/ai/models/@cf/llava-hf/llava-1.5-7b-hf/)[![OpenAI logo](https://developers.cloudflare.com/_astro/openai.BI8PEEzI.svg)whisper-tiny-enBetaAutomatic Speech Recognition • OpenAI • HostedWhisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours of labelled data, Whisper models demonstrate a strong ability to generalize to many datasets and domains without the need for fine-tuning. This is the English-only version of the Whisper Tiny model which was trained on the task of speech recognition.](https://developers.cloudflare.com/ai/models/@cf/openai/whisper-tiny-en/)[![Meta logo](https://developers.cloudflare.com/_astro/meta.BR4nfp35.svg)llama-3-8b-instructText Generation • Meta • HostedGeneration over generation, Meta Llama 3 demonstrates state-of-the-art performance on a wide range of industry benchmarks and offers new capabilities, including improved reasoning.Planned deprecation](https://developers.cloudflare.com/ai/models/@cf/meta/llama-3-8b-instruct/)[![MistralAI logo](https://developers.cloudflare.com/_astro/mistralai.Bn9UMUMu.svg)mistral-7b-instruct-v0.2BetaText Generation • MistralAI • HostedThe Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.2\. Mistral-7B-v0.2 has the following changes compared to Mistral-7B-v0.1: 32k context window (vs 8k context in v0.1), rope-theta = 1e6, and no Sliding-Window Attention.LoRAPlanned deprecation](https://developers.cloudflare.com/ai/models/@hf/mistral/mistral-7b-instruct-v0.2/)[![Google logo](https://developers.cloudflare.com/_astro/google.DyXKPTPP.svg)gemma-7b-it-loraBetaText Generation • Google • Hosted This is a Gemma-7B base model that Cloudflare dedicates for inference with LoRA adapters. 
Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models.LoRA](https://developers.cloudflare.com/ai/models/@cf/google/gemma-7b-it-lora/)[![Google logo](https://developers.cloudflare.com/_astro/google.DyXKPTPP.svg)gemma-2b-it-loraBetaText Generation • Google • HostedThis is a Gemma-2B base model that Cloudflare dedicates for inference with LoRA adapters. Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models.LoRA](https://developers.cloudflare.com/ai/models/@cf/google/gemma-2b-it-lora/)[![Meta logo](https://developers.cloudflare.com/_astro/meta.BR4nfp35.svg)llama-2-7b-chat-hf-loraBetaText Generation • Meta • HostedThis is a Llama2 base model that Cloudflare dedicated for inference with LoRA adapters. Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. LoRA](https://developers.cloudflare.com/ai/models/@cf/meta-llama/llama-2-7b-chat-hf-lora/)[![Google logo](https://developers.cloudflare.com/_astro/google.DyXKPTPP.svg)gemma-7b-itBetaText Generation • Google • HostedGemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants.LoRAPlanned deprecation](https://developers.cloudflare.com/ai/models/@hf/google/gemma-7b-it/)[nhermes-2-pro-mistral-7bBetaText Generation • nousresearch • HostedHermes 2 Pro on Mistral 7B is the new flagship 7B Hermes! 
Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house.Function callingPlanned deprecation](https://developers.cloudflare.com/ai/models/@hf/nousresearch/hermes-2-pro-mistral-7b/)[![MistralAI logo](https://developers.cloudflare.com/_astro/mistralai.Bn9UMUMu.svg)mistral-7b-instruct-v0.2-loraBetaText Generation • MistralAI • HostedThe Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.2.LoRA](https://developers.cloudflare.com/ai/models/@cf/mistral/mistral-7b-instruct-v0.2-lora/)[![Unum logo](https://developers.cloudflare.com/_astro/unum.Cjjoj0_o.svg)uform-gen2-qwen-500mBetaImage-to-Text • Unum • HostedUForm-Gen is a small generative vision-language model primarily designed for Image Captioning and Visual Question Answering. The model was pre-trained on the internal image captioning dataset and fine-tuned on public instructions datasets: SVIT, LVIS, VQAs datasets.Planned deprecation](https://developers.cloudflare.com/ai/models/@cf/unum/uform-gen2-qwen-500m/)[![Meta logo](https://developers.cloudflare.com/_astro/meta.BR4nfp35.svg)bart-large-cnnBetaSummarization • Meta • HostedBART is a transformer encoder-encoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. 
You can use this model for text summarization.Planned deprecation](https://developers.cloudflare.com/ai/models/@cf/facebook/bart-large-cnn/)[![Microsoft logo](https://developers.cloudflare.com/_astro/microsoft.LujcDJ--.svg)phi-2BetaText Generation • Microsoft • HostedPhi-2 is a Transformer-based model with a next-word prediction objective, trained on 1.4T tokens from multiple passes on a mixture of Synthetic and Web datasets for NLP and coding.Planned deprecation](https://developers.cloudflare.com/ai/models/@cf/microsoft/phi-2/)[![Defog logo](https://developers.cloudflare.com/_astro/defog.BeLrxE1p.svg)sqlcoder-7b-2BetaText Generation • Defog • HostedThis model is intended to be used by non-technical users to understand data inside their SQL databases. Planned deprecation](https://developers.cloudflare.com/ai/models/@cf/defog/sqlcoder-7b-2/)[![Meta logo](https://developers.cloudflare.com/_astro/meta.BR4nfp35.svg)detr-resnet-50BetaObject Detection • Meta • HostedDEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images).](https://developers.cloudflare.com/ai/models/@cf/facebook/detr-resnet-50/)[![ByteDance logo](https://developers.cloudflare.com/_astro/bytedance.T1uiROQ6.svg)stable-diffusion-xl-lightningBetaText-to-Image • ByteDance • HostedSDXL-Lightning is a lightning-fast text-to-image generation model. 
It can generate high-quality 1024px images in a few steps.](https://developers.cloudflare.com/ai/models/@cf/bytedance/stable-diffusion-xl-lightning/)[ldreamshaper-8-lcmText-to-Image • lykon • HostedStable Diffusion model that has been fine-tuned to be better at photorealism without sacrificing range.](https://developers.cloudflare.com/ai/models/@cf/lykon/dreamshaper-8-lcm/)[![RunwayML logo](https://developers.cloudflare.com/_astro/runway.Cq8Cjov4.svg)stable-diffusion-v1-5-img2imgBetaText-to-Image • RunwayML • HostedStable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images. Img2img generate a new image from an input image with Stable Diffusion. ](https://developers.cloudflare.com/ai/models/@cf/runwayml/stable-diffusion-v1-5-img2img/)[![RunwayML logo](https://developers.cloudflare.com/_astro/runway.Cq8Cjov4.svg)stable-diffusion-v1-5-inpaintingBetaText-to-Image • RunwayML • HostedStable Diffusion Inpainting is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input, with the extra capability of inpainting the pictures by using a mask.](https://developers.cloudflare.com/ai/models/@cf/runwayml/stable-diffusion-v1-5-inpainting/)[![Stability.ai logo](https://developers.cloudflare.com/_astro/stabilityai.CmlmNdqR.svg)stable-diffusion-xl-base-1.0BetaText-to-Image • Stability.ai • HostedDiffusion-based text-to-image generative model by Stability AI. 
Generates and modify images based on text prompts.](https://developers.cloudflare.com/ai/models/@cf/stabilityai/stable-diffusion-xl-base-1.0/)[![BAAI logo](https://developers.cloudflare.com/_astro/baai.mOtdbKlV.svg)bge-large-en-v1.5Text Embeddings • BAAI • HostedBAAI general embedding (Large) model that transforms any given text into a 1024-dimensional vectorBatch](https://developers.cloudflare.com/ai/models/@cf/baai/bge-large-en-v1.5/)[![BAAI logo](https://developers.cloudflare.com/_astro/baai.mOtdbKlV.svg)bge-small-en-v1.5Text Embeddings • BAAI • HostedBAAI general embedding (Small) model that transforms any given text into a 384-dimensional vectorBatch](https://developers.cloudflare.com/ai/models/@cf/baai/bge-small-en-v1.5/)[![Meta logo](https://developers.cloudflare.com/_astro/meta.BR4nfp35.svg)llama-2-7b-chat-fp16Text Generation • Meta • HostedFull precision (fp16) generative text model with 7 billion parameters from MetaPlanned deprecation](https://developers.cloudflare.com/ai/models/@cf/meta/llama-2-7b-chat-fp16/)[![MistralAI logo](https://developers.cloudflare.com/_astro/mistralai.Bn9UMUMu.svg)mistral-7b-instruct-v0.1Text Generation • MistralAI • HostedInstruct fine-tuned version of the Mistral-7b generative text model with 7 billion parametersLoRAPlanned deprecation](https://developers.cloudflare.com/ai/models/@cf/mistral/mistral-7b-instruct-v0.1/)[![BAAI logo](https://developers.cloudflare.com/_astro/baai.mOtdbKlV.svg)bge-base-en-v1.5Text Embeddings • BAAI • HostedBAAI general embedding (Base) model that transforms any given text into a 768-dimensional vectorBatch](https://developers.cloudflare.com/ai/models/@cf/baai/bge-base-en-v1.5/)[![HuggingFace logo](https://developers.cloudflare.com/_astro/huggingface.ngjt5u2J.svg)distilbert-sst-2-int8Text Classification • HuggingFace • HostedDistilled BERT model that was finetuned on SST-2 for sentiment classification](https://developers.cloudflare.com/ai/models/@cf/huggingface/distilbert-sst-2-int8/)[![Meta 
logo](https://developers.cloudflare.com/_astro/meta.BR4nfp35.svg)llama-2-7b-chat-int8Text Generation • Meta • HostedQuantized (int8) generative text model with 7 billion parameters from MetaPlanned deprecation](https://developers.cloudflare.com/ai/models/@cf/meta/llama-2-7b-chat-int8/)[![Meta logo](https://developers.cloudflare.com/_astro/meta.BR4nfp35.svg)m2m100-1.2bTranslation • Meta • HostedMultilingual encoder-decoder (seq-to-seq) model trained for Many-to-Many multilingual translationBatch](https://developers.cloudflare.com/ai/models/@cf/meta/m2m100-1.2b/)[![Microsoft logo](https://developers.cloudflare.com/_astro/microsoft.LujcDJ--.svg)resnet-50Image Classification • Microsoft • Hosted50 layers deep image classification CNN trained on more than 1M images from ImageNet](https://developers.cloudflare.com/ai/models/@cf/microsoft/resnet-50/)[![OpenAI logo](https://developers.cloudflare.com/_astro/openai.BI8PEEzI.svg)whisperAutomatic Speech Recognition • OpenAI • HostedWhisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multitasking model that can perform multilingual speech recognition, speech translation, and language identification.](https://developers.cloudflare.com/ai/models/@cf/openai/whisper/)[![Meta logo](https://developers.cloudflare.com/_astro/meta.BR4nfp35.svg)llama-3.1-70b-instructText Generation • Meta • HostedThe Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction tuned generative models. 
The Llama 3.1 instruction tuned text only models are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks.Planned deprecation](https://developers.cloudflare.com/ai/models/@cf/meta/llama-3.1-70b-instruct/)[![Meta logo](https://developers.cloudflare.com/_astro/meta.BR4nfp35.svg)llama-3.1-8b-instruct-fastText Generation • Meta • Hosted\[Fast version\] The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction tuned generative models. The Llama 3.1 instruction tuned text only models are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks.](https://developers.cloudflare.com/ai/models/@cf/meta/llama-3.1-8b-instruct-fast/)
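Any of the hosted models above can be invoked over the Workers AI REST API, which is shaped as `POST /client/v4/accounts/{account_id}/ai/run/{model}`. The sketch below builds such a request; the helper name is illustrative and the account ID and token are placeholders:

```typescript
// Build a request against the Workers AI REST API.
// Endpoint shape: POST /client/v4/accounts/{account_id}/ai/run/{model}
function buildAiRunRequest(
  accountId: string,
  model: string,
  apiToken: string,
  input: Record<string, unknown>,
): { url: string; init: { method: string; headers: Record<string, string>; body: string } } {
  return {
    url: `https://api.cloudflare.com/client/v4/accounts/${accountId}/ai/run/${model}`,
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(input),
    },
  };
}

// Example: a chat request against a hosted text-generation model.
const { url, init } = buildAiRunRequest(
  "YOUR_ACCOUNT_ID",
  "@cf/meta/llama-3.3-70b-instruct-fp8-fast",
  "YOUR_API_TOKEN",
  { messages: [{ role: "user", content: "Hello!" }] },
);
// `await fetch(url, init)` would send the request.
```

Inside a Worker, the same call is simpler: bind Workers AI to your Worker and use `env.AI.run(model, input)` instead of the REST API.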


---

---
title: Related products
description: Explore Cloudflare products that complement AI, including Workers AI, AI Gateway, Vectorize, and more.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Related products

**[Workers AI](https://developers.cloudflare.com/workers-ai/)** 

Run machine learning models on Cloudflare's GPU-powered infrastructure with serverless inference.

**[AI Gateway](https://developers.cloudflare.com/ai-gateway/)** 

Observe and control your AI applications with caching, rate limiting, and analytics.

**[Agents](https://developers.cloudflare.com/agents/)** 

Build AI-powered agents to perform tasks, persist state, and interact with external services.

**[AI Search](https://developers.cloudflare.com/ai-search/)** 

Create fully managed RAG pipelines for your AI applications.

**[Vectorize](https://developers.cloudflare.com/vectorize/)** 

Store, query, and manage high-dimensional vector embeddings for AI applications.

**[AI Crawl Control](https://developers.cloudflare.com/ai-crawl-control/)** 

Analyze and control third-party AI crawlers on your website.

**[Browser Rendering](https://developers.cloudflare.com/browser-run/)** 

Control and interact with headless browser instances for AI data extraction.

**[Cloudflare Agent](https://developers.cloudflare.com/cloudflare-agent/)** 

An AI-powered assistant that helps you navigate, configure, and manage Cloudflare.

**[Dynamic Workers](https://developers.cloudflare.com/dynamic-workers/)** 

Spin up isolated Workers on demand to execute code.

**[Sandbox SDK](https://developers.cloudflare.com/sandbox-sdk/)** 

Build secure, isolated code execution environments.


---

---
title: Build Agents on Cloudflare
description: Create stateful AI agents with persistent memory, real-time WebSocket connections, and scheduled tasks using the Cloudflare Agents SDK.
image: https://developers.cloudflare.com/dev-products-preview.png
---

> Documentation Index  
> Fetch the complete documentation index at: https://developers.cloudflare.com/agents/llms.txt  
> Use this file to discover all available pages before exploring further.


# Build Agents on Cloudflare

Most AI applications today are stateless — they process a request, return a response, and forget everything. Real agents need more. They need to remember conversations, act on schedules, call tools, coordinate with other agents, and stay connected to users in real-time. The Agents SDK gives you all of this as a TypeScript class.

Each agent runs on a [Durable Object](https://developers.cloudflare.com/durable-objects/) — a stateful micro-server with its own SQL database, WebSocket connections, and scheduling. Deploy once and Cloudflare runs your agents across its global network, scaling to tens of millions of instances. No infrastructure to manage, no sessions to reconstruct, no state to externalize.

The mental model is simple: define a TypeScript class, give each real-world thing a stable name, and route requests or WebSocket connections to that named instance. The instance wakes when something happens, reads its durable state, does work, and hibernates when idle.
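That naming scheme can be sketched in plain TypeScript. This is a hypothetical illustration of the mental model, not the Agents SDK API: a registry hands out one instance per stable name, so every request for the same name reaches the same state (in the real system, each instance is backed by a Durable Object rather than an in-memory Map).

```typescript
// Illustrative only: the "stable name -> single instance" model.
class CounterInstance {
  count = 0; // stands in for durable state

  increment(): number {
    return ++this.count;
  }
}

class InstanceRegistry {
  private instances = new Map<string, CounterInstance>();

  // The same name always routes to the same instance,
  // creating ("waking") it on first use.
  get(name: string): CounterInstance {
    let inst = this.instances.get(name);
    if (!inst) {
      inst = new CounterInstance();
      this.instances.set(name, inst);
    }
    return inst;
  }
}

const registry = new InstanceRegistry();
registry.get("user-123").increment(); // creates "user-123"
registry.get("user-123").increment(); // same instance: count is now 2
registry.get("user-456").increment(); // a different name gets its own state
```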

### Get started

Three commands to a running agent. No API keys required — the starter uses [Workers AI](https://developers.cloudflare.com/workers-ai/) by default.

Terminal window

```
npx create-cloudflare@latest --template cloudflare/agents-starter
cd agents-starter && npm install
npm run dev
```

The starter includes streaming AI chat, server-side and client-side tools, human-in-the-loop approval, and task scheduling — a foundation you can build on or tear apart. You can also swap in [OpenAI, Anthropic, Google Gemini, or any other provider](https://developers.cloudflare.com/agents/api-reference/using-ai-models/).

[ Build a chat agent ](https://developers.cloudflare.com/agents/getting-started/build-a-chat-agent/) Step-by-step tutorial that walks through the starter and shows how to customize it. 

[ Add to an existing project ](https://developers.cloudflare.com/agents/getting-started/add-to-existing-project/) Install the agents package into a Workers project and wire up routing. 

### What agents can do

* **Remember everything** — Every agent has a built-in [SQL database](https://developers.cloudflare.com/agents/api-reference/store-and-sync-state/) and key-value state that syncs to connected clients in real-time. State survives restarts, deploys, and hibernation.
* **Build AI chat** — [AIChatAgent](https://developers.cloudflare.com/agents/api-reference/chat-agents/) gives you streaming AI chat with automatic message persistence, resumable streams, and tool support. Pair it with the [useAgentChat](https://developers.cloudflare.com/agents/api-reference/chat-agents/) React hook to build chat UIs in minutes.
* **Think with any model** — Call [any AI model](https://developers.cloudflare.com/agents/api-reference/using-ai-models/) — Workers AI, OpenAI, Anthropic, Gemini — and stream responses over [WebSockets](https://developers.cloudflare.com/agents/api-reference/websockets/) or [Server-Sent Events](https://developers.cloudflare.com/agents/api-reference/http-sse/). Long-running reasoning models that take minutes to respond work out of the box.
* **Use and serve tools** — Define server-side tools, client-side tools that run in the browser, and [human-in-the-loop](https://developers.cloudflare.com/agents/concepts/human-in-the-loop/) approval flows. Expose your agent's tools to other agents and LLMs via [MCP](https://developers.cloudflare.com/agents/api-reference/mcp-agent-api/).
* **Act on their own** — [Schedule tasks](https://developers.cloudflare.com/agents/api-reference/schedule-tasks/) on a delay, at a specific time, or on a cron. Agents can wake themselves up, do work, and go back to sleep — without a user present.
* **Browse the web** — Give your agents [browser tools](https://developers.cloudflare.com/agents/api-reference/browse-the-web/) powered by the Chrome DevTools Protocol to scrape, screenshot, debug, and interact with web pages.
* **Talk to users** — Build real-time [voice agents](https://developers.cloudflare.com/agents/api-reference/voice/) with speech-to-text, text-to-speech, and conversation persistence — audio streams over WebSocket.
* **Orchestrate work** — Run multi-step [workflows](https://developers.cloudflare.com/agents/api-reference/run-workflows/) with automatic retries, coordinate across [sub-agents](https://developers.cloudflare.com/agents/api-reference/sub-agents/), or run chat-capable [agent tools](https://developers.cloudflare.com/agents/api-reference/agent-tools/) with retained streaming timelines.
* **React to events** — Handle [inbound email](https://developers.cloudflare.com/agents/api-reference/email/) (see the [email agent example ↗](https://github.com/cloudflare/agents/tree/main/examples/email-agent)), HTTP requests, WebSocket messages, and state changes — all from the same class.
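The state-sync behavior in the list above (`setState` on the agent, `onStateUpdate` on connected clients) can be sketched as a minimal observer pattern. The names and types here are illustrative, not the SDK's internals:

```typescript
// Minimal sketch of state that pushes updates to subscribers,
// mirroring the setState -> onStateUpdate flow (illustrative only).
type Listener<S> = (state: S) => void;

class SyncedState<S> {
  private listeners: Listener<S>[] = [];

  constructor(private state: S) {}

  setState(next: S): void {
    this.state = next;
    // In the SDK, this is where the update would fan out over WebSockets.
    for (const l of this.listeners) l(this.state);
  }

  subscribe(l: Listener<S>): void {
    l(this.state); // new clients receive the current state immediately
    this.listeners.push(l);
  }
}

const counter = new SyncedState({ count: 0 });
counter.subscribe((s) => console.log("client sees", s.count));
counter.setState({ count: 1 }); // every subscriber is notified
```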

### How it works

An agent is a TypeScript class. Methods marked with `@callable()` become typed RPC methods that clients can call directly over WebSocket.


JavaScript

```
import { Agent, callable } from "agents";

export class CounterAgent extends Agent {
  initialState = { count: 0 };

  @callable()
  increment() {
    this.setState({ count: this.state.count + 1 });
    return this.state.count;
  }
}
```

TypeScript

```
import { Agent, callable } from "agents";

export class CounterAgent extends Agent<Env, { count: number }> {
  initialState = { count: 0 };

  @callable()
  increment() {
    this.setState({ count: this.state.count + 1 });
    return this.state.count;
  }
}
```

On the client, the `useAgent` React hook connects to a named agent instance and receives state updates:

```
import { useState } from "react";
import { useAgent } from "agents/react";

function Counter() {
  const [count, setCount] = useState(0);
  const agent = useAgent({
    agent: "CounterAgent",
    onStateUpdate: (state) => setCount(state.count),
  });

  return <button onClick={() => agent.stub.increment()}>{count}</button>;
}
```

For AI chat, extend `AIChatAgent` instead. Messages are persisted automatically, streams resume on disconnect, and the React hook handles the UI.


JavaScript

```
import { AIChatAgent } from "@cloudflare/ai-chat";
import { createWorkersAI } from "workers-ai-provider";
import { streamText, convertToModelMessages } from "ai";

export class ChatAgent extends AIChatAgent {
  async onChatMessage() {
    const workersai = createWorkersAI({ binding: this.env.AI });
    const result = streamText({
      model: workersai("@cf/zai-org/glm-4.7-flash"),
      messages: await convertToModelMessages(this.messages),
    });
    return result.toUIMessageStreamResponse();
  }
}
```

TypeScript

```
import { AIChatAgent } from "@cloudflare/ai-chat";
import { createWorkersAI } from "workers-ai-provider";
import { streamText, convertToModelMessages } from "ai";

export class ChatAgent extends AIChatAgent {
  async onChatMessage() {
    const workersai = createWorkersAI({ binding: this.env.AI });
    const result = streamText({
      model: workersai("@cf/zai-org/glm-4.7-flash"),
      messages: await convertToModelMessages(this.messages),
    });
    return result.toUIMessageStreamResponse();
  }
}
```

Refer to the [quick start](https://developers.cloudflare.com/agents/getting-started/quick-start/) for a full walkthrough, the [chat agents guide](https://developers.cloudflare.com/agents/api-reference/chat-agents/) for the full chat API, or the [Agents API reference](https://developers.cloudflare.com/agents/api-reference/agents-api/) for the complete SDK.

---

### Build on the Cloudflare Platform

**[Workers AI](https://developers.cloudflare.com/workers-ai/)** 

Run machine learning models, powered by serverless GPUs, on Cloudflare's global network. No API keys required.

**[Workers](https://developers.cloudflare.com/workers/)** 

Build serverless applications and deploy instantly across the globe for exceptional performance, reliability, and scale.

**[AI Gateway](https://developers.cloudflare.com/ai-gateway/)** 

Observe and control your AI applications with caching, rate limiting, request retries, model fallback, and more.

**[Vectorize](https://developers.cloudflare.com/vectorize/)** 

Build full-stack AI applications with Vectorize, Cloudflare's vector database for semantic search, recommendations, and providing context to LLMs.

**[Workflows](https://developers.cloudflare.com/workflows/)** 

Build stateful agents with guaranteed execution, including automatic retries and persistent state that runs for minutes, hours, days, or weeks.


---

---
title: AI Crawl Control
description: Monitor and control how AI services access your website content.
image: https://developers.cloudflare.com/core-services-preview.png
---

> Documentation Index  
> Fetch the complete documentation index at: https://developers.cloudflare.com/ai-crawl-control/llms.txt  
> Use this file to discover all available pages before exploring further.


# AI Crawl Control

 Available on all plans 

Monitor and control how AI services access your website content.

AI companies use web content to train their models and power AI applications. AI Crawl Control (formerly AI Audit) gives you visibility into which AI services are accessing your content, and provides tools to manage access according to your preferences.

With AI Crawl Control, you can:

* **See which AI services access your content**: Monitor the dashboard to see crawler activity and request patterns
* **Control access with granular policies**: Set allow or block rules for individual crawlers
* **Monitor robots.txt compliance**: Track which crawlers follow your directives and create enforcement rules
* **Explore monetization options**: Set up pay per crawl pricing for content access [(private beta)](https://developers.cloudflare.com/ai-crawl-control/features/pay-per-crawl/what-is-pay-per-crawl/)
* **Deploy with zero configuration**: Works automatically on all Cloudflare plans
[ Get started ](https://developers.cloudflare.com/ai-crawl-control/get-started/) 
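The directives that robots.txt compliance tracking checks against are the standard `robots.txt` rules you already publish. A minimal illustration (GPTBot and CCBot are real AI crawler user agents; the allow/block choices here are examples, not recommendations):

```txt
# Example robots.txt: block two AI crawlers, allow everything else
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
```

AI Crawl Control compares observed crawler requests against these directives and flags crawlers that ignore them.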

---

## Features

###  Manage AI crawlers 

Control how AI crawlers interact with your domain.

[ Manage AI crawlers ](https://developers.cloudflare.com/ai-crawl-control/features/manage-ai-crawlers/) 

###  Analyze AI traffic 

Gain insight into how AI crawlers are interacting with your pages.

[ Analyze AI traffic ](https://developers.cloudflare.com/ai-crawl-control/features/analyze-ai-traffic/) 

###  Track robots.txt 

Track the health of `robots.txt` files and identify which crawlers are violating your directives.

[ Track robots.txt ](https://developers.cloudflare.com/ai-crawl-control/features/track-robots-txt/) 

###  Pay Per Crawl 

Allow AI crawlers to access content by paying per crawl.

[ Pay per crawl ](https://developers.cloudflare.com/ai-crawl-control/features/pay-per-crawl/what-is-pay-per-crawl/) 

---

## Use cases

Publishers and content creators 

Publishers and content creators can monitor which AI crawlers are accessing their articles and educational content. Set policies to allow beneficial crawlers while blocking others.

E-commerce and business sites 

E-commerce and business sites can identify AI crawler activity on product pages and business information. Control access to sensitive data like pricing and inventory.

Documentation sites 

Documentation sites can track how AI crawlers are accessing their technical documentation. Gain insight into how AI crawlers are engaging with your site.

---

## Related Products

**[Bots](https://developers.cloudflare.com/bots/)** 

Identify and mitigate automated traffic to protect your domain from bad bots.

**[Web Application Firewall](https://developers.cloudflare.com/waf/)** 

Get automatic protection from vulnerabilities and the flexibility to create custom rules.

**[Analytics](https://developers.cloudflare.com/analytics/)** 

View and analyze traffic on your domain.


---

---
title: Cloudflare AI Gateway
description: Observe and control your AI applications with analytics, caching, rate limiting, and model fallback through AI Gateway.
image: https://developers.cloudflare.com/dev-products-preview.png
---

> Documentation Index  
> Fetch the complete documentation index at: https://developers.cloudflare.com/ai-gateway/llms.txt  
> Use this file to discover all available pages before exploring further.


# Cloudflare AI Gateway

Observe and control your AI applications.

 Available on all plans 

Cloudflare's AI Gateway gives you visibility and control over your AI apps. By connecting your apps to AI Gateway, you can gather insights on how people use your application with analytics and logging, and control how your application scales with features such as caching, rate limiting, request retries, and model fallback. Getting started takes only one line of code.

Check out the [Get started guide](https://developers.cloudflare.com/ai-gateway/get-started/) to learn how to configure your applications with AI Gateway.
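That "one line" is typically a base-URL swap: instead of calling a provider directly, you call it through your gateway endpoint. A minimal sketch, where the account ID and gateway name are placeholders (the `gateway.ai.cloudflare.com/v1/...` URL shape is assumed from AI Gateway's provider-endpoint pattern):

```typescript
// Build an AI Gateway endpoint for a given upstream provider.
// accountId and gatewayId are placeholders, not real values.
function gatewayBaseURL(
  accountId: string,
  gatewayId: string,
  provider: string,
): string {
  return `https://gateway.ai.cloudflare.com/v1/${accountId}/${gatewayId}/${provider}`;
}

// Usage with an OpenAI-style SDK: point baseURL at the gateway instead
// of the provider, and every request flows through analytics, caching,
// and rate limiting.
//
// const openai = new OpenAI({
//   apiKey: process.env.OPENAI_API_KEY,
//   baseURL: gatewayBaseURL("ACCOUNT_ID", "my-gateway", "openai"),
// });
```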

## Features

###  Models 

Explore all AI models available through AI Gateway, including OpenAI, Anthropic, Google, and more.

[ Browse models ](https://developers.cloudflare.com/ai/models/) 

###  Analytics 

View metrics such as the number of requests, tokens, and the cost it takes to run your application.

[ View Analytics ](https://developers.cloudflare.com/ai-gateway/observability/analytics/) 

###  Logging 

Gain insight on requests and errors.

[ View Logging ](https://developers.cloudflare.com/ai-gateway/observability/logging/) 

###  Caching 

Serve requests directly from Cloudflare's cache instead of the original model provider for faster requests and cost savings.

[ Use Caching ](https://developers.cloudflare.com/ai-gateway/features/caching/) 

###  Rate limiting 

Control how your application scales by limiting the number of requests your application receives.

[ Use Rate limiting ](https://developers.cloudflare.com/ai-gateway/features/rate-limiting/) 

###  Request retry and fallback 

Improve resilience by defining request retry and model fallbacks in case of an error.

[ Use Request retry and fallback ](https://developers.cloudflare.com/ai-gateway/features/dynamic-routing/) 

###  Your favorite providers 

Workers AI, Anthropic, Google Gemini, OpenAI, Replicate, and more work with AI Gateway.

[ Use Your favorite providers ](https://developers.cloudflare.com/ai-gateway/usage/providers/) 

---

## Related products

**[Workers AI](https://developers.cloudflare.com/workers-ai/)** 

Run machine learning models, powered by serverless GPUs, on Cloudflare’s global network.

**[Vectorize](https://developers.cloudflare.com/vectorize/)** 

Build full-stack AI applications with Vectorize, Cloudflare's vector database. Vectorize enables tasks such as semantic search, recommendations, and anomaly detection, and can provide context and memory to an LLM.

## More resources

[Developer Discord](https://discord.cloudflare.com) 

Connect with the Workers community on Discord to ask questions, show what you are building, and discuss the platform with other developers.

[Use cases](https://developers.cloudflare.com/use-cases/ai/) 

Learn how you can build and deploy ambitious AI applications to Cloudflare's global network.

[@CloudflareDev](https://x.com/cloudflaredev) 

Follow @CloudflareDev on Twitter to learn about product announcements, and what is new in Cloudflare Workers.


---

---
title: Overview
description: Cloudflare AI Search is a managed search service. Index your content and query it with natural language from a Workers binding, REST API, or MCP server.
image: https://developers.cloudflare.com/dev-products-preview.png
---

> Documentation Index  
> Fetch the complete documentation index at: https://developers.cloudflare.com/ai-search/llms.txt  
> Use this file to discover all available pages before exploring further.


# Overview

The search primitive for your applications and agents.

 Available on all plans 

AI Search lets you add search to any application or agent without having to build an entire retrieval infrastructure. Create an instance, give it your data, and search it with natural language.

You can use AI Search for:

* Documentation and knowledge base search
* AI agent tool use and memory
* Per-tenant or per-agent file search

[ Get started ](https://developers.cloudflare.com/ai-search/get-started/)[ Watch AI Search demo ](https://www.youtube.com/watch?v=JUFdbkiDN2U)

Latest update

New AI Search instances created after April 16, 2026 include [managed storage](https://developers.cloudflare.com/ai-search/configuration/data-source/built-in-storage/), vector index, and web crawling. [View limits and pricing](https://developers.cloudflare.com/ai-search/platform/limits-pricing/).

---

## Features

###  Automated indexing 

Automatically and continuously index your data source, keeping your content fresh without manual reprocessing.

[ View indexing ](https://developers.cloudflare.com/ai-search/configuration/indexing/syncing/) 

###  Metadata filtering 

Define custom metadata fields and filter search results by category, version, language, or any attribute you define.

[ Add filters ](https://developers.cloudflare.com/ai-search/configuration/indexing/metadata/) 

###  Hybrid search 

Combine semantic and keyword matching in the same query for more accurate results.

[ Configure hybrid search ](https://developers.cloudflare.com/ai-search/configuration/indexing/hybrid-search/) 

###  MCP and UI snippets 

Every instance includes a built-in MCP endpoint for AI agents and embeddable search components for your website.

[ Connect agents ](https://developers.cloudflare.com/ai-search/api/search/mcp/) 

---

## Related products

**[Workers AI](https://developers.cloudflare.com/workers-ai/)** 

Run machine learning models, powered by serverless GPUs, on Cloudflare's global network.

**[AI Gateway](https://developers.cloudflare.com/ai-gateway/)** 

Observe and control your AI applications with caching, rate limiting, request retries, model fallback, and more.

**[Vectorize](https://developers.cloudflare.com/vectorize/)** 

Build full-stack AI applications with Vectorize, Cloudflare's vector database.

**[Workers](https://developers.cloudflare.com/workers/)** 

Build serverless applications and deploy instantly across the globe for exceptional performance, reliability, and scale.

**[R2](https://developers.cloudflare.com/r2/)** 

Store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services.

---

## More resources

[Get started](https://developers.cloudflare.com/ai-search/get-started/) 

Create your first AI Search instance and run your first query.

[Developer Discord](https://discord.cloudflare.com) 

Connect with the Workers community on Discord to ask questions, share what you are building, and discuss the platform with other developers.

[@CloudflareDev](https://x.com/cloudflaredev) 

Follow @CloudflareDev on Twitter to learn about product announcements, and what is new in Cloudflare Workers.


---

---
title: Browser Run
description: Control headless browsers with Cloudflare's Workers Browser Run API. Automate tasks, take screenshots, convert pages to PDFs, and test web apps.
image: https://developers.cloudflare.com/dev-products-preview.png
---

> Documentation Index  
> Fetch the complete documentation index at: https://developers.cloudflare.com/browser-run/llms.txt  
> Use this file to discover all available pages before exploring further.


# Browser Run

Run headless Chrome on [Cloudflare's global network](https://developers.cloudflare.com/workers/) for browser automation, web scraping, testing, and content generation.

 Available on Free and Paid plans 

Browser Run, formerly known as Browser Rendering, enables developers to programmatically control and interact with headless browser instances running on Cloudflare’s global network.

## Use cases

Programmatically load and fully render dynamic webpages or raw HTML and capture specific outputs such as:

* [Markdown](https://developers.cloudflare.com/browser-run/quick-actions/markdown-endpoint/)
* [Screenshots](https://developers.cloudflare.com/browser-run/quick-actions/screenshot-endpoint/)
* [PDFs](https://developers.cloudflare.com/browser-run/quick-actions/pdf-endpoint/)
* [Snapshots](https://developers.cloudflare.com/browser-run/quick-actions/snapshot/)
* [Links](https://developers.cloudflare.com/browser-run/quick-actions/links-endpoint/)
* [HTML elements](https://developers.cloudflare.com/browser-run/quick-actions/scrape-endpoint/)
* [Structured data](https://developers.cloudflare.com/browser-run/quick-actions/json-endpoint/)
* [Crawled web content](https://developers.cloudflare.com/browser-run/quick-actions/crawl-endpoint/)

## Integration methods

Browser Run offers two categories of integration methods:

* **[Quick Actions](https://developers.cloudflare.com/browser-run/quick-actions/)**: Simple, stateless browser tasks like screenshots, PDFs, and scraping. No code deployment needed.
* **Browser Sessions**: Direct browser control via [Puppeteer](https://developers.cloudflare.com/browser-run/puppeteer/), [Playwright](https://developers.cloudflare.com/browser-run/playwright/), [CDP](https://developers.cloudflare.com/browser-run/cdp/), or [Stagehand](https://developers.cloudflare.com/browser-run/stagehand/). Deploy within Cloudflare Workers or connect from any environment via CDP.
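As a sketch of the Quick Actions model, a rendered artifact comes back from a single HTTP request, with no Worker to deploy. The endpoint path below is an assumption based on the Cloudflare REST API convention, and the account ID and API token are placeholders:

```typescript
// Build the REST endpoint for a Quick Action (path shape assumed from
// the Cloudflare REST API convention; accountId is a placeholder).
function quickActionEndpoint(accountId: string, action: string): string {
  return `https://api.cloudflare.com/client/v4/accounts/${accountId}/browser-rendering/${action}`;
}

// Request a screenshot of a page with one POST.
async function screenshot(
  accountId: string,
  apiToken: string,
  targetUrl: string,
): Promise<ArrayBuffer> {
  const res = await fetch(quickActionEndpoint(accountId, "screenshot"), {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ url: targetUrl }),
  });
  if (!res.ok) throw new Error(`Quick Action failed: ${res.status}`);
  return res.arrayBuffer(); // image bytes
}
```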

| Use case                                    | Recommended                                                                                                                                                                                                  | Why                                                              |
| ------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ---------------------------------------------------------------- |
| Simple screenshot, PDF, or scrape           | [Quick Actions](https://developers.cloudflare.com/browser-run/quick-actions/)                                                                                                                                | No code deployment; single HTTP request                          |
| Browser automation                          | [Playwright](https://developers.cloudflare.com/browser-run/playwright/), [Puppeteer](https://developers.cloudflare.com/browser-run/puppeteer/), or [CDP](https://developers.cloudflare.com/browser-run/cdp/) | Full browser control with scripting                              |
| Porting existing scripts                    | [Puppeteer](https://developers.cloudflare.com/browser-run/puppeteer/), [Playwright](https://developers.cloudflare.com/browser-run/playwright/), or [CDP](https://developers.cloudflare.com/browser-run/cdp/) | Minimal code changes from standard libraries                     |
| AI-powered data extraction                  | [JSON endpoint](https://developers.cloudflare.com/browser-run/quick-actions/json-endpoint/)                                                                                                                  | Structured data via natural language prompts                     |
| Site-wide crawling                          | [Crawl endpoint](https://developers.cloudflare.com/browser-run/quick-actions/crawl-endpoint/)                                                                                                                | Multi-page content extraction with async results                 |
| AI agent browsing                           | [Playwright MCP](https://developers.cloudflare.com/browser-run/playwright/playwright-mcp/) or [CDP with MCP clients](https://developers.cloudflare.com/browser-run/cdp/mcp-clients/)                         | LLMs control browsers via MCP                                    |
| Resilient scraping                          | [Stagehand](https://developers.cloudflare.com/browser-run/stagehand/)                                                                                                                                        | AI finds elements by intent, not selectors                       |
| Direct browser control from any environment | [CDP](https://developers.cloudflare.com/browser-run/cdp/)                                                                                                                                                    | WebSocket access from local machines, CI/CD, or external servers |

## Key features

* **Scale to thousands of browsers**: Instant access to a global pool of browsers with low cold-start time, ideal for high-volume screenshot generation, data extraction, or automation at scale
* **Global by default**: Browser sessions run on Cloudflare's edge network, opening close to your users for better speed and availability worldwide
* **Easy to integrate**: [Quick Actions](https://developers.cloudflare.com/browser-run/quick-actions/) for common tasks, [Puppeteer](https://developers.cloudflare.com/browser-run/puppeteer/) and [Playwright](https://developers.cloudflare.com/browser-run/playwright/) for complex workflows, and [CDP](https://developers.cloudflare.com/browser-run/cdp/) for direct browser control from any environment
* **Session management**: [Reuse browser sessions](https://developers.cloudflare.com/browser-run/features/reuse-sessions/) across requests to improve performance and reduce cold-start overhead
* **Flexible pricing**: Pay only for browser time used, with a generous free tier ([view pricing](https://developers.cloudflare.com/browser-run/pricing/))

## Related products

**[Workers](https://developers.cloudflare.com/workers/)** 

Build serverless applications and deploy instantly across the globe for exceptional performance, reliability, and scale.

**[Durable Objects](https://developers.cloudflare.com/durable-objects/)** 

A globally distributed coordination API with strongly consistent storage. Using Durable Objects to [persist browser sessions](https://developers.cloudflare.com/browser-run/how-to/browser-run-with-do/) improves performance by eliminating the time that it takes to spin up a new browser session.

**[Agents](https://developers.cloudflare.com/agents/)** 

Build AI-powered agents that autonomously navigate websites and perform tasks using [Playwright MCP](https://developers.cloudflare.com/browser-run/playwright/playwright-mcp/) or [Stagehand](https://developers.cloudflare.com/browser-run/stagehand/).

## More resources

[Get started](https://developers.cloudflare.com/browser-run/get-started/) 

Choose an integration method and deploy your first project.

[Limits](https://developers.cloudflare.com/browser-run/limits/) 

Learn about Browser Run limits.

[Pricing](https://developers.cloudflare.com/browser-run/pricing/) 

Learn about Browser Run pricing.

[Playwright API](https://developers.cloudflare.com/browser-run/playwright/) 

Use Cloudflare's fork of Playwright for testing and automation.

[Developer Discord](https://discord.cloudflare.com) 

Connect with the Workers community on Discord to ask questions, show what you are building, and discuss the platform with other developers.

[@CloudflareDev](https://x.com/cloudflaredev) 

Follow @CloudflareDev on Twitter to learn about product announcements, and what is new in Cloudflare Workers.


---

---
title: Agent Lee
description: Ask questions, run diagnostics, and take actions across your Cloudflare account using an AI-powered dashboard assistant.
image: https://developers.cloudflare.com/dev-products-preview.png
---

> Documentation Index  
> Fetch the complete documentation index at: https://developers.cloudflare.com/agent-lee/llms.txt  
> Use this file to discover all available pages before exploring further.


# Agent Lee

An AI co-pilot built into the Cloudflare dashboard. Ask questions about your account, take actions, and run diagnostics, all in plain language.

Beta

Agent Lee is currently in beta and only available to accounts on a Free plan. Features and behaviors may change.

With Agent Lee, you can:

* Ask questions about your account configuration and get answers based on your actual data.
* Make changes to DNS records, zone settings, and security rules, with your approval required before anything executes.
* Run network diagnostics like DNS lookups and certificate checks.
* Generate inline charts and visualizations from your account analytics.

To get started, log in to the [Cloudflare dashboard ↗](https://dash.cloudflare.com) and select **Ask AI** in the upper-right corner of any dashboard page.

---

## Capabilities

### Account-aware answers

Agent Lee answers based on your actual account data, not just documentation. When you ask a question, it fetches your zone configuration, DNS records, and security settings before responding.

### Write operations

You can ask Agent Lee to create, update, or delete resources across your account using natural language. Every write operation requires your explicit approval before it executes: Agent Lee shows you exactly what it plans to do and waits for confirmation.

Example requests:

* "Add an A record for blog.example.com pointing to 192.0.2.10."
* "Enable Always Use HTTPS on my zone."
* "Set the SSL mode for example.com to Full (strict)."

### Network diagnostics

Run diagnostic commands to troubleshoot connectivity and configuration issues:

* **DNS lookups**: Query DNS records for any domain
* **Certificate checks**: Inspect TLS/SSL certificates
* **Domain information**: Look up WHOIS and RDAP registration data

### Generative UI

Agent Lee renders inline charts and data visualizations directly in the chat panel based on your account analytics. Example requests:

* "Show me a chart of my traffic over the last 7 days."
* "What does my error rate look like for the past 24 hours?"

---

## Data access and privacy

### What Agent Lee can access

* Zone settings, DNS records, firewall and WAF rules
* Workers scripts, routes, and bindings
* R2 bucket names, Cloudflare Tunnel configuration, cache rules
* Registrar domain data, account plan and usage metadata

Agent Lee fetches this data on demand when your question requires it.

### What Agent Lee cannot access

* Payment methods, billing history, or invoice details
* Account passwords, login credentials, or API tokens
* Raw log data or Logpush datasets
* Data from other Cloudflare accounts

### Conversation storage

Conversations are stored per user using [Durable Objects](https://developers.cloudflare.com/durable-objects/), isolated to your account. Conversation data is retained for one year in accordance with Cloudflare's data retention policy. Agent Lee does not currently reference previous conversation context when responding.

### Data usage

Agent Lee does not currently use your conversations, prompts, or account data to train AI models, nor do we share your data with other Cloudflare customers. Should these practices change in the future, we will provide advance notice to keep you informed. For Cloudflare's authoritative data handling commitments, refer to the [Cloudflare Privacy Policy ↗](https://www.cloudflare.com/privacypolicy/).

---

## Limitations

Agent Lee cannot:

* Write Workers scripts or generate application code
* Replace [Cloudflare Support ↗](https://support.cloudflare.com) for billing issues, account recovery, or outages
* Access payment methods, billing history, or API tokens
* Operate across multiple accounts: sessions are scoped to your authenticated account
* Remember previous conversations: each session starts fresh
* Query raw log data or Logpush datasets
* Execute write operations without your explicit approval

Agent Lee is entirely optional. If you do not open the Ask AI panel, none of your data is sent to or processed by it.

---

## Built on Cloudflare

Agent Lee is built on Cloudflare's own developer platform using the same primitives available to any Cloudflare developer.

| Component                                                                                      | Role                                                  |
| ---------------------------------------------------------------------------------------------- | ----------------------------------------------------- |
| [Agents SDK](https://developers.cloudflare.com/agents/)                                        | Agent lifecycle, state management, and scheduling     |
| [Durable Objects](https://developers.cloudflare.com/durable-objects/)                          | Per-user conversation storage and write approval gate |
| [Workers AI](https://developers.cloudflare.com/workers-ai/)                                    | LLM inference                                         |
| [Cloudflare MCP server](https://developers.cloudflare.com/agents/api-reference/mcp-agent-api/) | Tool definitions for Cloudflare API operations        |

---

## Related resources

* [Agents SDK](https://developers.cloudflare.com/agents/)
* [Human in the Loop](https://developers.cloudflare.com/agents/concepts/human-in-the-loop/)
* [Workers AI](https://developers.cloudflare.com/workers-ai/)
* [Blog post: Introducing Agent Lee ↗](https://blog.cloudflare.com/introducing-agent-lee)


---

---
title: Dynamic Workers
description: Spin up isolated Workers on demand to execute code.
image: https://developers.cloudflare.com/dev-products-preview.png
---

> Documentation Index  
> Fetch the complete documentation index at: https://developers.cloudflare.com/dynamic-workers/llms.txt  
> Use this file to discover all available pages before exploring further.


# Dynamic Workers

Spin up Workers at runtime to execute code on-demand in a secure, sandboxed environment.

Dynamic Workers let you spin up an unlimited number of Workers to execute arbitrary code specified at runtime. Dynamic Workers can be used as a lightweight alternative to containers for securely sandboxing code you don't trust.

Dynamic Workers are the lowest-level primitive for spinning up a Worker, giving you full control over defining how the Worker is composed, which bindings it receives, whether it can reach the network, and more.
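To make the composition step concrete, here is a hypothetical sketch of a parent Worker loading untrusted code at runtime. The binding name (`LOADER`), the loader callback shape, and the module layout are all assumptions for illustration; consult the Dynamic Workers docs for the actual API:

```typescript
// Hypothetical sketch of composing a dynamic Worker at runtime.
interface Env {
  LOADER: any; // Worker Loader binding (placeholder type)
}

// Template untrusted user code into a complete Worker module.
function makeWorkerModule(userCode: string): string {
  return `export default { async fetch(request) { ${userCode} } }`;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // The parent decides what the dynamic Worker is made of:
    // its modules, its bindings (none here), and its network access.
    const worker = env.LOADER.get("sandbox-1", async () => ({
      mainModule: "main.js",
      modules: {
        "main.js": makeWorkerModule(`return new Response("hello from sandbox");`),
      },
    }));
    return worker.getEntrypoint().fetch(request);
  },
};
```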

### Get started

Deploy the [Dynamic Workers Playground ↗](https://github.com/cloudflare/agents/tree/main/examples/dynamic-workers-playground) to create and run Workers dynamically from code you write or import from GitHub, with real-time logs and observability.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/dinasaur404/dynamic-workers-playground)

## Use Dynamic Workers for

Use this pattern when code needs to run quickly in a secure, isolated environment.

* **AI Agent "Code Mode"**: LLMs are trained to write code. Instead of supplying an agent with tool calls to perform tasks, give it an API and let it write and execute code. Save up to 80% in inference tokens and cost by allowing the agent to programmatically process data instead of sending it all through the LLM.
* **AI-generated applications / "Vibe Code"**: Run generated code for prototypes, projects, and automations in a secure, isolated sandboxed environment.
* **Fast development and previews**: Load prototypes, previews, and playgrounds in milliseconds.
* **Custom automations**: Create custom tools on the fly that execute a task, call an integration, or automate a workflow.
* **Platforms**: Run applications uploaded by your users.

## Features

Because you compose the Worker that runs the code at runtime, you control how that Worker is configured and what it can access.

* **[Bindings](https://developers.cloudflare.com/dynamic-workers/usage/bindings/)**: Decide which bindings and structured data the dynamic Worker receives.
* **[Observability](https://developers.cloudflare.com/dynamic-workers/usage/observability/)**: Attach Tail Workers and capture logs for each run.
* **[Network access](https://developers.cloudflare.com/dynamic-workers/usage/egress-control/)**: Intercept or block Internet access for outbound requests.
* **[Limits](https://developers.cloudflare.com/dynamic-workers/usage/limits/)**: Enforce custom limits on the dynamic Worker's resource usage.
* **[Durable Object Facets](https://developers.cloudflare.com/dynamic-workers/usage/durable-object-facets/)**: Run dynamically-loaded code as a Durable Object with its own isolated SQLite storage.


---

---
title: Cloudflare Vectorize
description: Build full-stack AI applications with Vectorize, Cloudflare's vector database.
image: https://developers.cloudflare.com/dev-products-preview.png
---

> Documentation Index  
> Fetch the complete documentation index at: https://developers.cloudflare.com/vectorize/llms.txt  
> Use this file to discover all available pages before exploring further.


# Cloudflare Vectorize

Build full-stack AI applications with Vectorize, Cloudflare's powerful vector database.

Vectorize is a globally distributed vector database that enables you to build full-stack, AI-powered applications with [Cloudflare Workers](https://developers.cloudflare.com/workers/). Vectorize makes querying embeddings — representations of values or objects like text, images, audio that are designed to be consumed by machine learning models and semantic search algorithms — faster, easier and more affordable.

Vectorize is now Generally Available

To report bugs or give feedback, go to the [#vectorize Discord channel ↗](https://discord.cloudflare.com). If you are having issues with Wrangler, report issues in the [Wrangler GitHub repository ↗](https://github.com/cloudflare/workers-sdk/issues/new/choose).

For example, by storing the embeddings (vectors) generated by a machine learning model, including those built into [Workers AI](https://developers.cloudflare.com/workers-ai/) or brought from external platforms such as OpenAI, you can build applications with powerful search, similarity, recommendation, classification, and anomaly detection capabilities based on your own data.

The vectors returned can reference images stored in Cloudflare R2, documents in KV, or user profiles stored in D1 — enabling you to go from a vector search result to a concrete object entirely within the Workers platform, without standing up additional infrastructure.
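Under the hood, a vector query ranks stored embeddings by similarity to the query vector. As a concept sketch only (not the Vectorize API itself), here is a minimal cosine-similarity ranking over a toy in-memory index in plain JavaScript; the index contents and dimensions are illustrative:

```javascript
// Cosine similarity: dot(a, b) / (|a| * |b|)
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank stored vectors by similarity to a query vector and return the
// topK best matches — the core operation a vector database performs at scale.
function query(index, queryVector, topK = 3) {
  return index
    .map(({ id, values }) => ({ id, score: cosineSimilarity(queryVector, values) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, topK);
}

// Toy index: each entry has an id and an embedding.
const index = [
  { id: 'doc-a', values: [1, 0, 0] },
  { id: 'doc-b', values: [0, 1, 0] },
  { id: 'doc-c', values: [0.9, 0.1, 0] },
];

const matches = query(index, [1, 0, 0], 2);
console.log(matches.map((m) => m.id)); // → [ 'doc-a', 'doc-c' ]
```

A production index uses approximate nearest-neighbor search rather than this exhaustive scan, but the ranking contract is the same: closest vectors first.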

---

## Features

###  Vector database 

Learn how to create your first Vectorize database, upload vector embeddings, and query those embeddings from [Cloudflare Workers](https://developers.cloudflare.com/workers/).

[ Create your Vector database ](https://developers.cloudflare.com/vectorize/get-started/intro/) 

###  Vector embeddings using Workers AI 

Learn how to use Vectorize to generate vector embeddings using Workers AI.

[ Create vector embeddings using Workers AI ](https://developers.cloudflare.com/vectorize/get-started/embeddings/) 
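The flow is typically: generate an embedding with a Workers AI model (for example `@cf/baai/bge-base-en-v1.5`), then upsert the resulting vector into a Vectorize index. Below is a hedged sketch of that shape using stub bindings in place of the real `env.AI` and `env.VECTORIZE` so it runs locally; the response fields follow the documented pattern, but verify them against the current API reference:

```javascript
// Stub bindings standing in for a Worker's `env`; in a real Worker these
// are provided by the runtime and configured in wrangler.toml.
const env = {
  AI: {
    // Embedding models return { data: [[...numbers]] } for a batch of texts;
    // this stub returns a fake 3-dimensional embedding per input.
    async run(model, { text }) {
      return { data: text.map(() => [0.1, 0.2, 0.3]) };
    },
  },
  VECTORIZE: {
    async upsert(vectors) {
      return { count: vectors.length };
    },
  },
};

// Embed a document with Workers AI, then store the vector keyed by id.
async function embedAndStore(env, id, text) {
  const { data } = await env.AI.run('@cf/baai/bge-base-en-v1.5', { text: [text] });
  const result = await env.VECTORIZE.upsert([{ id, values: data[0] }]);
  return result.count;
}

embedAndStore(env, 'doc-1', 'Hello, Vectorize!').then((count) =>
  console.log(`stored ${count} vector(s)`)
);
```

At query time the same embedding model converts the user's search text into a vector, which is then matched against the stored vectors.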

###  Search using Vectorize and AI Search 

Learn how to automatically index your data and store it in Vectorize, then query it to generate context-aware responses using AI Search.

[ Build a RAG with Vectorize ](https://developers.cloudflare.com/ai-search/) 

---

## Related products

**[Workers AI](https://developers.cloudflare.com/workers-ai/)** 

Run machine learning models, powered by serverless GPUs, on Cloudflare’s global network.

**[R2 Storage](https://developers.cloudflare.com/r2/)** 

Store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services.

---

## More resources

[Limits](https://developers.cloudflare.com/vectorize/platform/limits/) 

Learn about Vectorize limits and how to work within them.

[Use cases](https://developers.cloudflare.com/use-cases/ai/) 

Learn how you can build and deploy ambitious AI applications to Cloudflare's global network.

[Storage options](https://developers.cloudflare.com/workers/platform/storage-options/) 

Learn more about the storage and database options you can build on with Workers.

[Developer Discord](https://discord.cloudflare.com) 

Connect with the Workers community on Discord to ask questions, join the `#vectorize` channel to show what you are building, and discuss the platform with other developers.

[@CloudflareDev](https://x.com/cloudflaredev) 

Follow @CloudflareDev on Twitter to learn about product announcements and what is new in the Cloudflare Developer Platform.


---

---
title: Cloudflare Workers AI
description: Run machine learning models, powered by serverless GPUs, on Cloudflare's global network.
image: https://developers.cloudflare.com/dev-products-preview.png
---

> Documentation Index  
> Fetch the complete documentation index at: https://developers.cloudflare.com/workers-ai/llms.txt  
> Use this file to discover all available pages before exploring further.


# Cloudflare Workers AI

Run machine learning models, powered by serverless GPUs, on Cloudflare's global network.

 Available on Free and Paid plans 

Workers AI allows you to run AI models in a serverless way, without having to worry about scaling, maintaining, or paying for unused infrastructure. You can invoke models running on GPUs on Cloudflare's network from your own code — from [Workers](https://developers.cloudflare.com/workers/), [Pages](https://developers.cloudflare.com/pages/), or anywhere via [the Cloudflare API](https://developers.cloudflare.com/api/resources/ai/methods/run/).
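From inside a Worker, models are invoked through the `AI` binding's `run()` method. The sketch below shows the calling pattern with a stub binding in place of the runtime-provided `env` so it runs locally; the binding name and model id are illustrative (configure the binding in your Wrangler config and pick any model from the catalog):

```javascript
// Sketch of invoking a Workers AI model through the `env.AI` binding.
// Text-generation models accept { prompt } (or { messages }) and return
// an object whose `response` field holds the generated text.
async function askModel(env, prompt) {
  const { response } = await env.AI.run('@cf/meta/llama-3.1-8b-instruct', { prompt });
  return response;
}

// Stub binding that echoes the prompt, standing in for the real runtime env.
const env = {
  AI: {
    async run(model, { prompt }) {
      return { response: `[${model}] ${prompt}` };
    },
  },
};

askModel(env, 'Hello, Workers AI').then((text) => console.log(text));
```

In a deployed Worker you would call `askModel(env, ...)` from your `fetch` handler and return the text in a `Response`; outside of Workers, the same models are reachable over the REST API.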

Workers AI gives you access to:

* **50+ [open-source models](https://developers.cloudflare.com/workers-ai/models/)**, available as part of our model catalog
* Serverless, **pay-for-what-you-use** [pricing model](https://developers.cloudflare.com/workers-ai/platform/pricing/)
* All as part of a **fully-featured developer platform**, including [AI Gateway](https://developers.cloudflare.com/ai-gateway/), [Vectorize](https://developers.cloudflare.com/vectorize/), [Workers](https://developers.cloudflare.com/workers/), and more

[ Get started ](https://developers.cloudflare.com/workers-ai/get-started)[ Watch a Workers AI demo ](https://youtu.be/cK%5FleoJsBWY?si=4u6BIy%5FuBOZf9Ve8)

Custom requirements

If you have custom requirements like private custom models or higher limits, complete the [Custom Requirements Form ↗](https://forms.gle/axnnpGDb6xrmR31T6). Cloudflare will contact you with next steps.

Workers AI is now Generally Available

To report bugs or give feedback, go to the [#workers-ai Discord channel ↗](https://discord.cloudflare.com). If you are having issues with Wrangler, report issues in the [Wrangler GitHub repository ↗](https://github.com/cloudflare/workers-sdk/issues/new/choose).

---

## Features

###  Models 

Workers AI comes with a curated set of popular open-source models that enable you to perform tasks such as image classification, text generation, object detection, and more.

[ Browse models ](https://developers.cloudflare.com/workers-ai/models/) 

---

## Related products

**[AI Gateway](https://developers.cloudflare.com/ai-gateway/)** 

Observe and control your AI applications with caching, rate limiting, request retries, model fallback, and more.

**[Vectorize](https://developers.cloudflare.com/vectorize/)** 

Build full-stack AI applications with Vectorize, Cloudflare's vector database. Adding Vectorize enables tasks such as semantic search, recommendations, and anomaly detection, and can provide context and memory to an LLM.

**[Workers](https://developers.cloudflare.com/workers/)** 

Build serverless applications and deploy instantly across the globe for exceptional performance, reliability, and scale.

**[Pages](https://developers.cloudflare.com/pages/)** 

Create full-stack applications that are instantly deployed to the Cloudflare global network.

**[R2](https://developers.cloudflare.com/r2/)** 

Store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services.

**[D1](https://developers.cloudflare.com/d1/)** 

Create new serverless SQL databases to query from your Workers and Pages projects.

**[Durable Objects](https://developers.cloudflare.com/durable-objects/)** 

A globally distributed coordination API with strongly consistent storage.

**[KV](https://developers.cloudflare.com/kv/)** 

Create global, low-latency key-value data storage.

---

## More resources

[Get started](https://developers.cloudflare.com/workers-ai/get-started/workers-wrangler/) 

Build and deploy your first Workers AI application.

[Plans](https://developers.cloudflare.com/workers-ai/platform/pricing/) 

Learn about Free and Paid plans.

[Limits](https://developers.cloudflare.com/workers-ai/platform/limits/) 

Learn about Workers AI limits.

[Use cases](https://developers.cloudflare.com/use-cases/ai/) 

Learn how you can build and deploy ambitious AI applications to Cloudflare's global network.

[Storage options](https://developers.cloudflare.com/workers/platform/storage-options/) 

Learn which storage option is best for your project.

[Developer Discord](https://discord.cloudflare.com) 

Connect with the Workers community on Discord to ask questions, share what you are building, and discuss the platform with other developers.

[@CloudflareDev](https://x.com/cloudflaredev) 

Follow @CloudflareDev on Twitter to learn about product announcements and what is new in Cloudflare Workers.

