Cloudflare Docs

Mount buckets

Mount S3-compatible object storage buckets as local filesystem paths, then access them with standard file operations.

When to mount buckets

Mount S3-compatible buckets when you need:

  • Persistent data - Data survives sandbox destruction
  • Large datasets - Process data without downloading
  • Shared storage - Multiple sandboxes access the same data
  • Cost-effective persistence - Cheaper than keeping sandboxes alive

Mount an R2 bucket

JavaScript
import { getSandbox } from "@cloudflare/sandbox";

const sandbox = getSandbox(env.Sandbox, "data-processor");

// Mount R2 bucket
await sandbox.mountBucket("my-r2-bucket", "/data", {
  endpoint: "https://YOUR_ACCOUNT_ID.r2.cloudflarestorage.com",
});

// Access bucket with standard filesystem operations
await sandbox.exec("ls", { args: ["/data"] });
await sandbox.writeFile("/data/results.json", JSON.stringify(results));

// Use from Python
await sandbox.exec("python", {
  args: [
    "-c",
    `
import pandas as pd

df = pd.read_csv('/data/input.csv')
df.describe().to_csv('/data/summary.csv')
`,
  ],
});

Credentials

Automatic detection

Set credentials as Worker secrets and the SDK automatically detects them:

Terminal window
npx wrangler secret put AWS_ACCESS_KEY_ID
npx wrangler secret put AWS_SECRET_ACCESS_KEY
JavaScript
// Credentials are automatically detected from the environment
await sandbox.mountBucket("my-r2-bucket", "/data", {
  endpoint: "https://YOUR_ACCOUNT_ID.r2.cloudflarestorage.com",
});

Explicit credentials

Pass credentials directly when needed:

JavaScript
await sandbox.mountBucket("my-r2-bucket", "/data", {
  endpoint: "https://YOUR_ACCOUNT_ID.r2.cloudflarestorage.com",
  credentials: {
    accessKeyId: env.R2_ACCESS_KEY_ID,
    secretAccessKey: env.R2_SECRET_ACCESS_KEY,
  },
});

Mount bucket subdirectories

Mount a specific subdirectory within a bucket using the prefix option. Only contents under the prefix are visible at the mount point:

JavaScript
// Mount only the /uploads/images/ subdirectory
await sandbox.mountBucket("my-bucket", "/images", {
  endpoint: "https://YOUR_ACCOUNT_ID.r2.cloudflarestorage.com",
  prefix: "/uploads/images/",
});

// Files appear at the mount point without the prefix
// Bucket key:   my-bucket/uploads/images/photo.jpg
// Mounted path: /images/photo.jpg
await sandbox.exec("ls", { args: ["/images"] });

// Writes go to the subdirectory
await sandbox.writeFile("/images/photo.jpg", imageData);
// Creates my-bucket/uploads/images/photo.jpg

// Mount different prefixes to different paths
await sandbox.mountBucket("datasets", "/training-data", {
  endpoint: "https://YOUR_ACCOUNT_ID.r2.cloudflarestorage.com",
  prefix: "/ml/training/",
});
await sandbox.mountBucket("datasets", "/test-data", {
  endpoint: "https://YOUR_ACCOUNT_ID.r2.cloudflarestorage.com",
  prefix: "/ml/testing/",
});
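The key-to-path mapping above can be sketched as a small helper. This is purely illustrative for reasoning about which keys appear where; it is not part of the SDK, which performs this translation inside the mount itself:

```javascript
// Maps a bucket key to its visible path under a prefixed mount,
// mirroring the mapping described above. Keys outside the prefix
// are not visible at the mount point and return null.
// Hypothetical helper, not an SDK API.
function mountedPath(key, prefix, mountPoint) {
  const trimmedPrefix = prefix.replace(/^\/+|\/+$/g, "");
  const trimmedKey = key.replace(/^\/+/, "");
  if (!trimmedKey.startsWith(trimmedPrefix + "/")) {
    return null; // key is outside the mounted prefix
  }
  return `${mountPoint}/${trimmedKey.slice(trimmedPrefix.length + 1)}`;
}

console.log(mountedPath("uploads/images/photo.jpg", "/uploads/images/", "/images"));
// → "/images/photo.jpg"
console.log(mountedPath("other/file.txt", "/uploads/images/", "/images"));
// → null
```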

Read-only mounts

Protect data by mounting buckets in read-only mode:

JavaScript
await sandbox.mountBucket("dataset-bucket", "/data", {
  endpoint: "https://YOUR_ACCOUNT_ID.r2.cloudflarestorage.com",
  readOnly: true,
});

// Reads work
await sandbox.exec("cat", { args: ["/data/dataset.csv"] });

// Writes fail
await sandbox.writeFile("/data/new-file.txt", "data"); // Error: Read-only filesystem

Local development

You can mount R2 buckets during local development with wrangler dev by passing the localBucket option. This uses the R2 binding from your Worker environment directly, so no S3-compatible endpoint or credentials are required.

Configure R2 bindings

Add an R2 bucket binding to your Wrangler configuration:

{
  "r2_buckets": [
    {
      "binding": "MY_BUCKET",
      "bucket_name": "my-test-bucket"
    }
  ]
}

Mount with localBucket

Pass localBucket: true in the options to mount the bucket locally:

JavaScript
await sandbox.mountBucket("MY_BUCKET", "/data", {
  localBucket: true,
});

// Access files using standard operations
await sandbox.exec("ls", { args: ["/data"] });
await sandbox.writeFile("/data/results.json", JSON.stringify(results));

The readOnly and prefix options work the same way in local mode:

JavaScript
// Read-only local mount
await sandbox.mountBucket("MY_BUCKET", "/data", {
  localBucket: true,
  readOnly: true,
});

// Mount a subdirectory
await sandbox.mountBucket("MY_BUCKET", "/images", {
  localBucket: true,
  prefix: "/uploads/images/",
});

Local development considerations

During local development, files are synchronized between R2 and the container using a periodic sync process rather than a direct filesystem mount. Keep the following in mind:

  • Synchronization window - A brief delay exists between when a file is written and when it appears on the other side. For example, if you upload a file to R2 and then immediately read it from the mounted path in the container, the file may not yet be available. Allow a short window for synchronization to complete before reading recently written data.
  • High-frequency writes - Rapid successive writes to the same file path may take slightly longer to fully propagate. For best results, avoid writing to the same file from both R2 and the container at the same time.
  • Bidirectional sync - Changes made in the container are synced to R2, and changes made in R2 are synced to the container. Both directions follow the same periodic sync model.
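A simple way to handle the synchronization window is to retry a read until it succeeds or a timeout elapses. Below is a minimal sketch; the `sandbox.readFile` call in the usage comment is illustrative, so adapt it to whichever operation you need to wait on:

```javascript
// Retries an async check until it resolves or the timeout elapses.
// Useful for waiting out the local-dev sync window before reading
// recently written data from the mounted path.
async function waitFor(check, { timeoutMs = 10_000, intervalMs = 500 } = {}) {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    try {
      return await check();
    } catch (error) {
      if (Date.now() >= deadline) throw error;
      await new Promise((resolve) => setTimeout(resolve, intervalMs));
    }
  }
}

// Usage: wait for a file uploaded to R2 to appear in the container
// const data = await waitFor(() => sandbox.readFile("/data/input.csv"));
```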

Unmount buckets

JavaScript
// Mount for processing
await sandbox.mountBucket("my-bucket", "/data", { endpoint: "..." });

// Do work
await sandbox.exec("python", { args: ["process_data.py"] });

// Clean up
await sandbox.unmountBucket("/data");

Other providers

The SDK supports any S3-compatible object storage. Here are examples for common providers:

Amazon S3

JavaScript
await sandbox.mountBucket("my-s3-bucket", "/data", {
  endpoint: "https://s3.us-west-2.amazonaws.com", // Regional endpoint
  credentials: {
    accessKeyId: env.AWS_ACCESS_KEY_ID,
    secretAccessKey: env.AWS_SECRET_ACCESS_KEY,
  },
});

Google Cloud Storage

JavaScript
await sandbox.mountBucket("my-gcs-bucket", "/data", {
  endpoint: "https://storage.googleapis.com",
  credentials: {
    accessKeyId: env.GCS_ACCESS_KEY_ID, // HMAC key
    secretAccessKey: env.GCS_SECRET_ACCESS_KEY,
  },
});

Other S3-compatible providers

For providers like Backblaze B2, MinIO, Wasabi, or others, use the standard mount pattern:

JavaScript
await sandbox.mountBucket("my-bucket", "/data", {
  endpoint: "https://s3.us-west-000.backblazeb2.com", // Provider-specific endpoint
  credentials: {
    accessKeyId: env.ACCESS_KEY_ID,
    secretAccessKey: env.SECRET_ACCESS_KEY,
  },
});

For provider-specific configuration, see the s3fs-fuse wiki, which documents supported providers and their recommended flags.

Troubleshooting

Missing credentials error

Error: MissingCredentialsError: No credentials found

Solution: Set credentials as Worker secrets:

Terminal window
npx wrangler secret put AWS_ACCESS_KEY_ID
npx wrangler secret put AWS_SECRET_ACCESS_KEY

Mount failed error

Error: S3FSMountError: mount failed

Common causes:

  • Incorrect endpoint URL
  • Invalid credentials
  • Bucket doesn't exist
  • Network connectivity issues

Verify your endpoint format and credentials:

JavaScript
try {
  await sandbox.mountBucket("my-bucket", "/data", {
    endpoint: "https://YOUR_ACCOUNT_ID.r2.cloudflarestorage.com",
  });
} catch (error) {
  console.error("Mount failed:", error.message);
  // Check endpoint format, credentials, and bucket existence
}

Path already mounted error

Error: InvalidMountConfigError: Mount path already in use

Solution: Unmount first or use a different path:

JavaScript
// Unmount existing
await sandbox.unmountBucket("/data");
// Or use different path
await sandbox.mountBucket("bucket2", "/storage", { endpoint: "..." });

Slow file access

File operations on mounted buckets are slower than operations on the local filesystem because each access goes over the network.

Solution: Copy frequently accessed files locally:

JavaScript
// Copy to local filesystem
await sandbox.exec("cp", {
args: ["/data/large-dataset.csv", "/workspace/dataset.csv"],
});
// Work with local copy (faster)
await sandbox.exec("python", {
args: ["process.py", "/workspace/dataset.csv"],
});
// Save results back to bucket
await sandbox.exec("cp", {
args: ["/workspace/results.json", "/data/results/output.json"],
});

Best practices

  • Mount early - Mount buckets at sandbox initialization
  • Use R2 for Cloudflare - Zero egress fees and optimized configuration
  • Secure credentials - Always use Worker secrets, never hardcode
  • Read-only when possible - Protect data with read-only mounts
  • Use prefixes for isolation - Mount subdirectories when working with specific datasets
  • Mount paths - Use /data, /storage, or /mnt/* (avoid /workspace, /tmp)
  • Handle errors - Wrap mount operations in try/catch blocks
  • Optimize access - Copy frequently accessed files locally
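The mount-path guidance above can be expressed as a quick check. This is a hypothetical helper for your own code; the SDK does not enforce this list:

```javascript
// Returns true for the recommended mount paths described above:
// /data, /storage, or anything under /mnt/. Paths like /workspace
// and /tmp are used by the sandbox itself and should be avoided.
// Hypothetical helper, not an SDK API.
function isRecommendedMountPath(path) {
  if (path === "/data" || path === "/storage") return true;
  if (path.startsWith("/mnt/") && path.length > "/mnt/".length) return true;
  return false;
}

console.log(isRecommendedMountPath("/mnt/datasets")); // → true
console.log(isRecommendedMountPath("/workspace")); // → false
```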