Mount buckets
Mount S3-compatible object storage buckets as local filesystem paths. Access object storage using standard file operations.
Mount S3-compatible buckets when you need:
- Persistent data - Data survives sandbox destruction
- Large datasets - Process data without downloading
- Shared storage - Multiple sandboxes access the same data
- Cost-effective persistence - Cheaper than keeping sandboxes alive
```ts
import { getSandbox } from "@cloudflare/sandbox";

const sandbox = getSandbox(env.Sandbox, "data-processor");

// Mount R2 bucket
await sandbox.mountBucket("my-r2-bucket", "/data", {
  endpoint: "https://YOUR_ACCOUNT_ID.r2.cloudflarestorage.com",
});

// Access bucket with standard filesystem operations
await sandbox.exec("ls", { args: ["/data"] });
await sandbox.writeFile("/data/results.json", JSON.stringify(results));

// Use from Python
await sandbox.exec("python", {
  args: [
    "-c",
    `import pandas as pd
df = pd.read_csv('/data/input.csv')
df.describe().to_csv('/data/summary.csv')`,
  ],
});
```

Set credentials as Worker secrets and the SDK automatically detects them:
```sh
npx wrangler secret put AWS_ACCESS_KEY_ID
npx wrangler secret put AWS_SECRET_ACCESS_KEY
```

```ts
// Credentials automatically detected from environment
await sandbox.mountBucket("my-r2-bucket", "/data", {
  endpoint: "https://YOUR_ACCOUNT_ID.r2.cloudflarestorage.com",
});
```

Pass credentials directly when needed:
```ts
await sandbox.mountBucket("my-r2-bucket", "/data", {
  endpoint: "https://YOUR_ACCOUNT_ID.r2.cloudflarestorage.com",
  credentials: {
    accessKeyId: env.R2_ACCESS_KEY_ID,
    secretAccessKey: env.R2_SECRET_ACCESS_KEY,
  },
});
```

Mount a specific subdirectory within a bucket using the `prefix` option. Only contents under the prefix are visible at the mount point:
```ts
// Mount only the /uploads/images/ subdirectory
await sandbox.mountBucket("my-bucket", "/images", {
  endpoint: "https://YOUR_ACCOUNT_ID.r2.cloudflarestorage.com",
  prefix: "/uploads/images/",
});

// Files appear at mount point without the prefix
// Bucket: my-bucket/uploads/images/photo.jpg
// Mounted path: /images/photo.jpg
await sandbox.exec("ls", { args: ["/images"] });

// Write to subdirectory
await sandbox.writeFile("/images/photo.jpg", imageData);
// Creates my-bucket:/uploads/images/photo.jpg

// Mount different prefixes to different paths
await sandbox.mountBucket("datasets", "/training-data", {
  endpoint: "https://YOUR_ACCOUNT_ID.r2.cloudflarestorage.com",
  prefix: "/ml/training/",
});

await sandbox.mountBucket("datasets", "/test-data", {
  endpoint: "https://YOUR_ACCOUNT_ID.r2.cloudflarestorage.com",
  prefix: "/ml/testing/",
});
```

Protect data by mounting buckets in read-only mode:
```ts
await sandbox.mountBucket("dataset-bucket", "/data", {
  endpoint: "https://YOUR_ACCOUNT_ID.r2.cloudflarestorage.com",
  readOnly: true,
});

// Reads work
await sandbox.exec("cat", { args: ["/data/dataset.csv"] });

// Writes fail
await sandbox.writeFile("/data/new-file.txt", "data"); // Error: Read-only filesystem
```

You can mount R2 buckets during local development with `wrangler dev` by passing the `localBucket` option. This uses the R2 binding from your Worker environment directly, so no S3-compatible endpoint or credentials are required.
Add an R2 bucket binding to your Wrangler configuration:
```json
{
  "r2_buckets": [
    {
      "binding": "MY_BUCKET",
      "bucket_name": "my-test-bucket"
    }
  ]
}
```

```toml
[[r2_buckets]]
binding = "MY_BUCKET"
bucket_name = "my-test-bucket"
```

Pass `localBucket: true` in the options to mount the bucket locally:
```ts
await sandbox.mountBucket("MY_BUCKET", "/data", {
  localBucket: true,
});

// Access files using standard operations
await sandbox.exec("ls", { args: ["/data"] });
await sandbox.writeFile("/data/results.json", JSON.stringify(results));
```

The `readOnly` and `prefix` options work the same way in local mode:
```ts
// Read-only local mount
await sandbox.mountBucket("MY_BUCKET", "/data", {
  localBucket: true,
  readOnly: true,
});

// Mount a subdirectory
await sandbox.mountBucket("MY_BUCKET", "/images", {
  localBucket: true,
  prefix: "/uploads/images/",
});
```

During local development, files are synchronized between R2 and the container using a periodic sync process rather than a direct filesystem mount. Keep the following in mind:
- Synchronization window - A brief delay exists between when a file is written and when it appears on the other side. For example, if you upload a file to R2 and then immediately read it from the mounted path in the container, the file may not yet be available. Allow a short window for synchronization to complete before reading recently written data.
- High-frequency writes - Rapid successive writes to the same file path may take slightly longer to fully propagate. For best results, avoid writing to the same file from both R2 and the container at the same time.
- Bidirectional sync - Changes made in the container are synced to R2, and changes made in R2 are synced to the container. Both directions follow the same periodic sync model.
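One way to live with the synchronization window is to retry reads with a short delay until the sync catches up. A minimal sketch, where `readWithRetry` is a hypothetical helper (not part of the SDK):

```typescript
// Hypothetical helper (not part of the SDK): retry a read until the
// periodic sync has propagated the file, up to a bounded number of attempts.
async function readWithRetry<T>(
  read: () => Promise<T>,
  { attempts = 5, delayMs = 500 }: { attempts?: number; delayMs?: number } = {},
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await read();
    } catch (error) {
      lastError = error;
      // Give the sync process time to catch up before the next attempt
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}

// Usage against a mounted path (assumes the read rejects until the file syncs):
// const data = await readWithRetry(() => sandbox.readFile("/data/input.csv"));
```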
```ts
// Mount for processing
await sandbox.mountBucket("my-bucket", "/data", { endpoint: "..." });

// Do work
await sandbox.exec("python process_data.py");

// Clean up
await sandbox.unmountBucket("/data");
```

The SDK supports any S3-compatible object storage. Here are examples for common providers:
```ts
// AWS S3
await sandbox.mountBucket("my-s3-bucket", "/data", {
  endpoint: "https://s3.us-west-2.amazonaws.com", // Regional endpoint
  credentials: {
    accessKeyId: env.AWS_ACCESS_KEY_ID,
    secretAccessKey: env.AWS_SECRET_ACCESS_KEY,
  },
});
```

```ts
// Google Cloud Storage
await sandbox.mountBucket("my-gcs-bucket", "/data", {
  endpoint: "https://storage.googleapis.com",
  credentials: {
    accessKeyId: env.GCS_ACCESS_KEY_ID, // HMAC key
    secretAccessKey: env.GCS_SECRET_ACCESS_KEY,
  },
});
```

For providers like Backblaze B2, MinIO, Wasabi, or others, use the standard mount pattern:
```ts
await sandbox.mountBucket("my-bucket", "/data", {
  endpoint: "https://s3.us-west-000.backblazeb2.com", // Provider-specific endpoint
  credentials: {
    accessKeyId: env.ACCESS_KEY_ID,
    secretAccessKey: env.SECRET_ACCESS_KEY,
  },
});
```

For provider-specific configuration, see the s3fs-fuse wiki ↗, which documents supported providers and their recommended flags.
Error: MissingCredentialsError: No credentials found
Solution: Set credentials as Worker secrets:
```sh
npx wrangler secret put AWS_ACCESS_KEY_ID
npx wrangler secret put AWS_SECRET_ACCESS_KEY
```

Error: S3FSMountError: mount failed
Common causes:
- Incorrect endpoint URL
- Invalid credentials
- Bucket doesn't exist
- Network connectivity issues
Verify your endpoint format and credentials:
```ts
try {
  await sandbox.mountBucket("my-bucket", "/data", {
    endpoint: "https://YOUR_ACCOUNT_ID.r2.cloudflarestorage.com",
  });
} catch (error) {
  console.error("Mount failed:", error.message);
  // Check endpoint format, credentials, bucket existence
}
```

Error: InvalidMountConfigError: Mount path already in use
Solution: Unmount first or use a different path:
```ts
// Unmount existing
await sandbox.unmountBucket("/data");

// Or use different path
await sandbox.mountBucket("bucket2", "/storage", { endpoint: "..." });
```

File operations on mounted buckets are slower than the local filesystem due to network latency.
Solution: Copy frequently accessed files locally:
```ts
// Copy to local filesystem
await sandbox.exec("cp", {
  args: ["/data/large-dataset.csv", "/workspace/dataset.csv"],
});

// Work with local copy (faster)
await sandbox.exec("python", {
  args: ["process.py", "/workspace/dataset.csv"],
});

// Save results back to bucket
await sandbox.exec("cp", {
  args: ["/workspace/results.json", "/data/results/output.json"],
});
```

- Mount early - Mount buckets at sandbox initialization
- Use R2 for Cloudflare - Zero egress fees and optimized configuration
- Secure credentials - Always use Worker secrets, never hardcode
- Read-only when possible - Protect data with read-only mounts
- Use prefixes for isolation - Mount subdirectories when working with specific datasets
- Mount paths - Use `/data`, `/storage`, or `/mnt/*` (avoid `/workspace`, `/tmp`)
- Handle errors - Wrap mount operations in try/catch blocks
- Optimize access - Copy frequently accessed files locally
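Several of these practices (handle errors, clean up mounts) can be combined in a small wrapper. A sketch under the assumption that you want cleanup to run even when processing fails; `withMount` is a hypothetical helper, not part of the SDK:

```typescript
// Hypothetical wrapper (not part of the SDK): mount, run the work, and
// guarantee the unmount runs even when the work throws.
async function withMount<T>(
  mount: () => Promise<void>,
  unmount: () => Promise<void>,
  work: () => Promise<T>,
): Promise<T> {
  await mount();
  try {
    return await work();
  } finally {
    // Runs on success and on failure, so the mount path is always released
    await unmount();
  }
}

// Usage with the SDK methods shown above:
// await withMount(
//   () => sandbox.mountBucket("my-bucket", "/data", { endpoint: "..." }),
//   () => sandbox.unmountBucket("/data"),
//   () => sandbox.exec("python process_data.py"),
// );
```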
- Persistent storage tutorial - Complete R2 example
- Storage API reference - Full method documentation
- Environment variables - Credential configuration
- R2 documentation - Learn about Cloudflare R2