Backup and restore
Create point-in-time snapshots of sandbox directories and restore them using copy-on-write overlays. Backups are stored in an R2 bucket and use squashfs compression.
Create an R2 bucket for storing backups:
```sh
npx wrangler r2 bucket create my-backup-bucket
```
Add the `BACKUP_BUCKET` R2 binding and presigned URL credentials to your Wrangler configuration:

```jsonc
{
  "name": "my-sandbox-worker",
  "main": "src/index.ts",
  // Set this to today's date
  "compatibility_date": "2026-03-11",
  "compatibility_flags": ["nodejs_compat"],
  "containers": [
    {
      "class_name": "Sandbox",
      "image": "./Dockerfile"
    }
  ],
  "durable_objects": {
    "bindings": [
      {
        "class_name": "Sandbox",
        "name": "Sandbox"
      }
    ]
  },
  "migrations": [
    {
      "new_sqlite_classes": ["Sandbox"],
      "tag": "v1"
    }
  ],
  "vars": {
    "BACKUP_BUCKET_NAME": "my-backup-bucket",
    "CLOUDFLARE_ACCOUNT_ID": "<YOUR_ACCOUNT_ID>"
  },
  "r2_buckets": [
    {
      "binding": "BACKUP_BUCKET",
      "bucket_name": "my-backup-bucket"
    }
  ]
}
```
Set your R2 API credentials as secrets:

```sh
npx wrangler secret put R2_ACCESS_KEY_ID
npx wrangler secret put R2_SECRET_ACCESS_KEY
```

You can create R2 API tokens in the Cloudflare dashboard under R2 > Overview > Manage R2 API Tokens. The token needs Object Read & Write permissions for your backup bucket.
Use `createBackup()` to snapshot a directory and upload it to R2:

```ts
import { getSandbox } from "@cloudflare/sandbox";

const sandbox = getSandbox(env.Sandbox, "my-sandbox");

// Create a backup of /workspace
const backup = await sandbox.createBackup({ dir: "/workspace" });
console.log(`Backup created: ${backup.id}`);
```

The SDK creates a compressed squashfs archive of the directory and uploads it directly to your R2 bucket using a presigned URL.
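Because the archive lands at a predictable key (see the storage layout described later on this page), you can confirm the upload from your Worker through the `BACKUP_BUCKET` binding. A minimal sketch using the standard R2 `head()` call:

```ts
// Confirm the archive exists in R2 and check its compressed size
const head = await env.BACKUP_BUCKET.head(`backups/${backup.id}/data.sqsh`);
if (head) {
  console.log(`Archive uploaded: ${head.size} bytes`);
}
```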
Use `restoreBackup()` to restore a directory from a backup:

```ts
import { getSandbox } from "@cloudflare/sandbox";

const sandbox = getSandbox(env.Sandbox, "my-sandbox");

// Create a backup
const backup = await sandbox.createBackup({ dir: "/workspace" });

// Restore the backup
const result = await sandbox.restoreBackup(backup);
console.log(`Restored: ${result.success}`);
```

When backing up a directory inside a git repository, set `useGitignore: true` to exclude files matching `.gitignore` rules. This is useful for skipping large generated directories like `node_modules/`, `dist/`, or `build/` that can be recreated.
```ts
import { getSandbox } from "@cloudflare/sandbox";

const sandbox = getSandbox(env.Sandbox, "my-sandbox");

// Back up only tracked and untracked non-ignored files
const backup = await sandbox.createBackup({
  dir: "/workspace",
  useGitignore: true,
});
```

The SDK uses `git ls-files` to resolve which files are ignored. Both root-level and nested `.gitignore` files are respected.

By default, `useGitignore` is `false` and all files in the directory are included in the backup.
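To preview which files a gitignore-aware backup would skip, you can run `git ls-files` inside the sandbox yourself. A sketch using standard git flags; it assumes `exec()` resolves to a result with a `stdout` field, and the SDK's own resolution remains authoritative:

```ts
// List ignored files under /workspace without creating a backup
const result = await sandbox.exec(
  "git -C /workspace ls-files --others --ignored --exclude-standard",
);
console.log("Would be excluded:", result.stdout);
```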
Save state before risky operations and restore if something fails:

```ts
const sandbox = getSandbox(env.Sandbox, "my-sandbox");

// Save checkpoint before risky operation
const checkpoint = await sandbox.createBackup({ dir: "/workspace" });

try {
  await sandbox.exec("npm install some-experimental-package");
  await sandbox.exec("npm run build");
} catch (error) {
  // Restore to checkpoint if something goes wrong
  await sandbox.restoreBackup(checkpoint);
  console.log("Rolled back to checkpoint");
}
```

The `DirectoryBackup` handle is serializable. Persist it to KV, D1, or Durable Object storage for later use:
```ts
const sandbox = getSandbox(env.Sandbox, "my-sandbox");

// Create a backup and store the handle in KV
const backup = await sandbox.createBackup({
  dir: "/workspace",
  name: "deploy-v2",
  ttl: 604800, // 7 days
});

await env.KV.put(`backup:${userId}`, JSON.stringify(backup));

// Later, retrieve and restore
const stored = await env.KV.get(`backup:${userId}`);
if (stored) {
  const backupHandle = JSON.parse(stored);
  await sandbox.restoreBackup(backupHandle);
}
```

Add a `name` option to identify backups. Names can be up to 256 characters:
```ts
const sandbox = getSandbox(env.Sandbox, "my-sandbox");

const backup = await sandbox.createBackup({
  dir: "/workspace",
  name: "before-migration",
});

console.log(`Backup ID: ${backup.id}`);
```

Set a custom time-to-live for backups. The default TTL is 3 days (259200 seconds). The `ttl` value must be a positive number of seconds:
```ts
const sandbox = getSandbox(env.Sandbox, "my-sandbox");

// Short-lived backup for a quick operation
const shortBackup = await sandbox.createBackup({
  dir: "/workspace",
  ttl: 600, // 10 minutes
});

// Long-lived backup for extended workflows
const longBackup = await sandbox.createBackup({
  dir: "/workspace",
  name: "daily-snapshot",
  ttl: 604800, // 7 days
});
```

The TTL is enforced at restore time, not at creation time. When you call `restoreBackup()`, the SDK reads the backup metadata from R2 and compares the creation timestamp plus TTL against the current time (with a 60-second buffer to prevent race conditions). If the TTL has elapsed, the restore is rejected with a `BACKUP_EXPIRED` error.
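If you persist handles externally, it can help to record the creation time alongside the handle so you can skip restore attempts that are guaranteed to fail with `BACKUP_EXPIRED`. A minimal sketch that does its own bookkeeping rather than relying on the serialized handle's internal fields:

```ts
const TTL_SECONDS = 604800; // must match the ttl passed to createBackup()

// Store the handle together with its creation time (our own bookkeeping)
const backup = await sandbox.createBackup({
  dir: "/workspace",
  ttl: TTL_SECONDS,
});
await env.KV.put(
  `backup:${userId}`,
  JSON.stringify({ handle: backup, createdAt: Date.now() }),
);

// Before restoring, check expiry locally to avoid a doomed restore call
const stored = await env.KV.get(`backup:${userId}`);
if (stored) {
  const { handle, createdAt } = JSON.parse(stored);
  if (Date.now() - createdAt < TTL_SECONDS * 1000) {
    await sandbox.restoreBackup(handle);
  } else {
    console.log("Backup expired; create a fresh one instead");
  }
}
```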
The TTL does not automatically delete backup objects from R2. Expired backups remain in your bucket and continue to consume storage until you explicitly delete them or configure an automatic cleanup rule.
To automatically remove expired backup objects from R2, set up an R2 object lifecycle rule on your backup bucket. This is the recommended way to prevent expired backups from accumulating indefinitely.
For example, if your longest TTL is 7 days, configure a lifecycle rule to delete objects older than 7 days from the backups/ prefix. This ensures R2 storage does not grow unbounded while giving you a buffer to restore any non-expired backup.
Backup archives are stored in your R2 bucket under the `backups/` prefix with the structure `backups/{backupId}/data.sqsh` and `backups/{backupId}/meta.json`. You can use the `BACKUP_BUCKET` R2 binding to manage these objects directly.
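For example, you can report how much storage your backups currently use by listing that prefix and summing object sizes. A sketch using the standard R2 binding API, paginating because `list()` returns at most 1,000 objects per call:

```ts
// Sum the size of every object under the backups/ prefix
let total = 0;
let cursor: string | undefined;
do {
  const listed = await env.BACKUP_BUCKET.list({ prefix: "backups/", cursor });
  for (const object of listed.objects) {
    total += object.size;
  }
  cursor = listed.truncated ? listed.cursor : undefined;
} while (cursor);
console.log(`Backup storage used: ${(total / 1024 / 1024).toFixed(1)} MiB`);
```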
If you only need the most recent backup, delete the previous one before creating a new one:

```ts
import { getSandbox } from "@cloudflare/sandbox";

const sandbox = getSandbox(env.Sandbox, "my-sandbox");

// Delete the previous backup's R2 objects before creating a new one
if (previousBackup) {
  await env.BACKUP_BUCKET.delete(`backups/${previousBackup.id}/data.sqsh`);
  await env.BACKUP_BUCKET.delete(`backups/${previousBackup.id}/meta.json`);
}

// Create a fresh backup
const backup = await sandbox.createBackup({
  dir: "/workspace",
  name: "latest",
});

// Store the handle so you can delete it next time
await env.KV.put("latest-backup", JSON.stringify(backup));
```

To clean up multiple old backups, list objects under the `backups/` prefix and delete them by key:
```ts
// List all backup objects in the bucket
// Note: list() returns up to 1,000 objects per call; paginate with the
// returned cursor if your bucket holds more
const listed = await env.BACKUP_BUCKET.list({ prefix: "backups/" });

for (const object of listed.objects) {
  // Parse the backup ID from the key (backups/{id}/data.sqsh or backups/{id}/meta.json)
  const backupId = object.key.split("/")[1];

  // Delete objects older than 7 days
  const ageMs = Date.now() - object.uploaded.getTime();
  const sevenDaysMs = 7 * 24 * 60 * 60 * 1000;
  if (ageMs > sevenDaysMs) {
    await env.BACKUP_BUCKET.delete(object.key);
    console.log(`Deleted expired object for backup ${backupId}: ${object.key}`);
  }
}
```

If you have the backup ID, delete both its archive and metadata directly:
```ts
const backupId = backup.id;

await env.BACKUP_BUCKET.delete(`backups/${backupId}/data.sqsh`);
await env.BACKUP_BUCKET.delete(`backups/${backupId}/meta.json`);
```

Restore uses FUSE overlayfs to mount the backup as a read-only lower layer. New writes go to a writable upper layer and do not affect the original backup:
```ts
const sandbox = getSandbox(env.Sandbox, "my-sandbox");

// Create a backup
const backup = await sandbox.createBackup({ dir: "/workspace" });

// Restore the backup
await sandbox.restoreBackup(backup);

// New writes go to the upper layer — the backup is unchanged
await sandbox.writeFile(
  "/workspace/new-file.txt",
  "This does not modify the backup",
);

// Restore the same backup again to discard changes
await sandbox.restoreBackup(backup);
```

Backup and restore operations can throw specific errors. Wrap calls in `try...catch` blocks:
```ts
import { getSandbox } from "@cloudflare/sandbox";

const sandbox = getSandbox(env.Sandbox, "my-sandbox");

// Handle backup errors
let backup;
try {
  backup = await sandbox.createBackup({ dir: "/workspace" });
} catch (error) {
  if (error.code === "INVALID_BACKUP_CONFIG") {
    // Missing BACKUP_BUCKET binding or invalid directory path
    console.error("Configuration error:", error.message);
  } else if (error.code === "BACKUP_CREATE_FAILED") {
    // Archive creation or upload to R2 failed
    console.error("Backup failed:", error.message);
  }
}

// Handle restore errors
if (backup) {
  try {
    await sandbox.restoreBackup(backup);
  } catch (error) {
    if (error.code === "BACKUP_NOT_FOUND") {
      console.error("Backup not found in R2:", error.message);
    } else if (error.code === "BACKUP_EXPIRED") {
      console.error("Backup TTL has elapsed:", error.message);
    } else if (error.code === "BACKUP_RESTORE_FAILED") {
      console.error("Restore failed:", error.message);
    }
  }
}
```

The `createBackup()` method uses `mksquashfs` to create a compressed archive of the target directory. This process must be able to read every file and subdirectory within the path you are backing up. If any file or directory has restrictive permissions that prevent the archiver from reading it, the backup fails with a `BackupCreateError` and a "Permission denied" message.
Common causes include:

- **Directories owned by other users**: If the target directory contains subdirectories created by a different user or process (for example, `/home/sandbox/.claude`), the archiver may not have read access.
- **Restrictive file modes**: Files with modes like `0600`, or directories with `0700`, that belong to a different user than the one running the backup process.
- **Runtime-generated config directories**: Tools and applications often create configuration directories (such as `.cache`, `.config`, or tool-specific dotfiles) with restrictive permissions.
The recommended approach is to set permissions in your Dockerfile so that every container starts with the correct access. This avoids running `chmod` at runtime before every backup:

```dockerfile
# Ensure the backup target directory is readable
RUN mkdir -p /home/sandbox && chmod -R a+rX /home/sandbox
```

The `a+rX` flag grants read access to all files and execute (traverse) access to all directories, without changing write permissions.
If the restrictive permissions come from files created at runtime (for example, a tool that generates config files with `0600` mode), fix them before calling `createBackup()`:

```ts
await sandbox.exec("chmod -R a+rX /home/sandbox/.claude");
const backup = await sandbox.createBackup({ dir: "/home/sandbox" });
```

If the backup encounters a permission issue, you will see an error like:
```
BackupCreateError: mksquashfs failed: Could not create destination file: Permission denied
```

This means `mksquashfs` could not read one or more files inside the directory you passed to `createBackup()`. Check the permissions of all files and subdirectories within that path.
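To locate the offending files, you can search for unreadable paths from inside the sandbox before retrying. A sketch using standard GNU `find` predicates; it assumes `find` is available in your container image and that `exec()` resolves to a result with a `stdout` field:

```ts
try {
  await sandbox.createBackup({ dir: "/home/sandbox" });
} catch (error) {
  if (error.code === "BACKUP_CREATE_FAILED") {
    // List files the user cannot read, and directories it cannot traverse
    const result = await sandbox.exec(
      "find /home/sandbox ! -readable -o -type d ! -executable",
    );
    console.error("Unreadable paths:", result.stdout);
  }
}
```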
Keep these practices in mind when working with backups:

- **Stop writes before restoring**: Stop processes writing to the target directory before calling `restoreBackup()`.
- **Use checkpoints**: Create backups before risky operations like package installations or migrations.
- **Exclude gitignored files**: Set `useGitignore: true` when backing up git repositories to skip generated files like `node_modules/` and reduce backup size.
- **Set appropriate TTLs**: Use short TTLs for temporary checkpoints and longer TTLs for persistent snapshots.
- **Store handles externally**: Persist `DirectoryBackup` handles to KV, D1, or Durable Object storage for cross-request access.
- **Configure R2 lifecycle rules**: Set up object lifecycle rules to automatically delete expired backups from R2, since the TTL is only enforced at restore time.
- **Clean up old backups**: Delete previous backup objects from R2 when you no longer need them, or use the delete-then-write pattern for rolling backups.
- **Handle errors**: Wrap backup and restore calls in `try`/`catch` blocks.
- **Re-restore after restart**: The FUSE mount is ephemeral, so re-restore from the backup handle after container restarts (see the sketch after this list).
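For the last point, a Worker serving long-lived sessions might restore from the persisted handle at the start of each session rather than assuming a previous mount survived a container restart. A sketch, assuming a KV namespace bound as `env.KV` and the per-user key used earlier on this page:

```ts
import { getSandbox } from "@cloudflare/sandbox";

// Re-restore on each session start; the FUSE mount does not survive restarts
async function ensureWorkspace(env, userId) {
  const sandbox = getSandbox(env.Sandbox, "my-sandbox");
  const stored = await env.KV.get(`backup:${userId}`);
  if (stored) {
    // Restoring the same backup again is safe; it remounts the read-only lower layer
    await sandbox.restoreBackup(JSON.parse(stored));
  }
  return sandbox;
}
```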
- Backups API reference - Full method documentation
- Storage API reference - Mount S3-compatible buckets
- R2 documentation - Learn about Cloudflare R2
- R2 lifecycle rules - Configure automatic object cleanup