---
title: Backup and restore
description: Create point-in-time backups and restore sandbox directories.
image: https://developers.cloudflare.com/dev-products-preview.png
---

# Backup and restore

Create point-in-time snapshots of sandbox directories and restore them using copy-on-write overlays. Backups are stored in an R2 bucket and use squashfs compression.

## Prerequisites

1. Create an R2 bucket for storing backups:  
Terminal window  
```  
npx wrangler r2 bucket create my-backup-bucket  
```
2. Add the `BACKUP_BUCKET` R2 binding and presigned URL credentials to your Wrangler configuration:  
JSONC  
```  
{  
  "name": "my-sandbox-worker",  
  "main": "src/index.ts",  
  // Set this to today's date  
  "compatibility_date": "2026-05-08",  
  "compatibility_flags": ["nodejs_compat"],  
  "containers": [  
    {  
      "class_name": "Sandbox",  
      "image": "./Dockerfile",  
    },  
  ],  
  "durable_objects": {  
    "bindings": [  
      {  
        "class_name": "Sandbox",  
        "name": "Sandbox",  
      },  
    ],  
  },  
  "migrations": [  
    {  
      "new_sqlite_classes": ["Sandbox"],  
      "tag": "v1",  
    },  
  ],  
  "vars": {  
    "BACKUP_BUCKET_NAME": "my-backup-bucket",  
    "CLOUDFLARE_ACCOUNT_ID": "<YOUR_ACCOUNT_ID>",  
  },  
  "r2_buckets": [  
    {  
      "binding": "BACKUP_BUCKET",  
      "bucket_name": "my-backup-bucket",  
    },  
  ],  
}  
```  
TOML  
```  
name = "my-sandbox-worker"  
main = "src/index.ts"  
# Set this to today's date  
compatibility_date = "2026-05-08"  
compatibility_flags = [ "nodejs_compat" ]  
[[containers]]  
class_name = "Sandbox"  
image = "./Dockerfile"  
[[durable_objects.bindings]]  
class_name = "Sandbox"  
name = "Sandbox"  
[[migrations]]  
new_sqlite_classes = [ "Sandbox" ]  
tag = "v1"  
[vars]  
BACKUP_BUCKET_NAME = "my-backup-bucket"  
CLOUDFLARE_ACCOUNT_ID = "<YOUR_ACCOUNT_ID>"  
[[r2_buckets]]  
binding = "BACKUP_BUCKET"  
bucket_name = "my-backup-bucket"  
```
3. Set your R2 API credentials as secrets:  
Terminal window  
```  
npx wrangler secret put R2_ACCESS_KEY_ID  
npx wrangler secret put R2_SECRET_ACCESS_KEY  
```  
You can create R2 API tokens in the [Cloudflare dashboard ↗](https://dash.cloudflare.com/) under **R2** > **Overview** > **Manage R2 API Tokens**. The token needs **Object Read & Write** permissions for your backup bucket.

Note

The `vars` and API secrets in steps 2 and 3 are only required for production. For local development with `wrangler dev`, only the `BACKUP_BUCKET` R2 binding is required. Refer to [Local development](#local-development) for details.

## Create a backup

Use `createBackup()` to snapshot a directory and upload it to R2:

TypeScript

```
import { getSandbox } from "@cloudflare/sandbox";

const sandbox = getSandbox(env.Sandbox, "my-sandbox");

// Create a backup of /workspace
const backup = await sandbox.createBackup({ dir: "/workspace" });
console.log(`Backup created: ${backup.id}`);
```

The SDK creates a compressed squashfs archive of the directory and uploads it directly to your R2 bucket using a presigned URL.

## Restore a backup

Use `restoreBackup()` to restore a directory from a backup:

TypeScript

```
import { getSandbox } from "@cloudflare/sandbox";

const sandbox = getSandbox(env.Sandbox, "my-sandbox");

// Create a backup
const backup = await sandbox.createBackup({ dir: "/workspace" });

// Restore the backup
const result = await sandbox.restoreBackup(backup);
console.log(`Restored: ${result.success}`);
```

Ephemeral mount

In production, the FUSE mount is lost when the sandbox sleeps or the container restarts. Re-restore from the backup handle to recover. This does not apply to local development.

## Exclude gitignored files

When backing up a directory inside a git repository, set `useGitignore: true` to exclude files matching `.gitignore` rules. This is useful for skipping large generated directories like `node_modules/`, `dist/`, or `build/` that can be recreated.

TypeScript

```
import { getSandbox } from "@cloudflare/sandbox";

const sandbox = getSandbox(env.Sandbox, "my-sandbox");

// Back up only tracked and untracked non-ignored files
const backup = await sandbox.createBackup({
  dir: "/workspace",
  useGitignore: true,
});
```

The SDK uses `git ls-files` to resolve which files are ignored. Both root-level and nested `.gitignore` files are respected.

By default, `useGitignore` is `false` and all files in the directory are included in the backup.

Requirements

`useGitignore` requires `git` to be installed in the container. If `useGitignore` is `true` and `git` is not available, `createBackup()` throws a `BackupCreateError`. If the backup directory is not inside a git repository, the option has no effect and all files are included.
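To see which files the `git ls-files` resolution would exclude from a backup, you can reproduce it manually. A minimal sketch, assuming `git` is installed and using a throwaway repository:

```shell
# Build a throwaway repository with one ignored directory
mkdir -p /tmp/gitignore-demo/node_modules
cd /tmp/gitignore-demo
git init -q
echo "node_modules/" > .gitignore
touch app.js node_modules/dep.js

# Untracked files that match .gitignore rules — the set excluded
# when useGitignore is true
git ls-files --others --ignored --exclude-standard
```

Here the command lists `node_modules/dep.js`, while `app.js` and `.gitignore` itself remain eligible for backup.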

## Checkpoint and rollback

Save state before risky operations and restore if something fails:

TypeScript

```
const sandbox = getSandbox(env.Sandbox, "my-sandbox");

// Save checkpoint before risky operation
const checkpoint = await sandbox.createBackup({ dir: "/workspace" });

try {
  await sandbox.exec("npm install some-experimental-package");
  await sandbox.exec("npm run build");
} catch (error) {
  // Restore to checkpoint if something goes wrong
  await sandbox.restoreBackup(checkpoint);
  console.log("Rolled back to checkpoint");
}
```

## Store backup handles

The `DirectoryBackup` handle is serializable. Persist it to KV, D1, or Durable Object storage for later use:

TypeScript

```
const sandbox = getSandbox(env.Sandbox, "my-sandbox");

// Create a backup and store the handle in KV
const backup = await sandbox.createBackup({
  dir: "/workspace",
  name: "deploy-v2",
  ttl: 604800, // 7 days
});

await env.KV.put(`backup:${userId}`, JSON.stringify(backup));

// Later, retrieve and restore
const stored = await env.KV.get(`backup:${userId}`);
if (stored) {
  const backupHandle = JSON.parse(stored);
  await sandbox.restoreBackup(backupHandle);
}
```

## Use named backups

Add a `name` option to identify backups. Names can be up to 256 characters:

TypeScript

```
const sandbox = getSandbox(env.Sandbox, "my-sandbox");

const backup = await sandbox.createBackup({
  dir: "/workspace",
  name: "before-migration",
});

console.log(`Backup ID: ${backup.id}`);
```

## Configure TTL

Set a custom time-to-live for backups. The default TTL is 3 days (259200 seconds). The `ttl` value must be a positive number of seconds:

TypeScript

```
const sandbox = getSandbox(env.Sandbox, "my-sandbox");

// Short-lived backup for a quick operation
const shortBackup = await sandbox.createBackup({
  dir: "/workspace",
  ttl: 600, // 10 minutes
});

// Long-lived backup for extended workflows
const longBackup = await sandbox.createBackup({
  dir: "/workspace",
  name: "daily-snapshot",
  ttl: 604800, // 7 days
});
```

### How TTL is enforced

The TTL is enforced at **restore time**, not at creation time. When you call `restoreBackup()`, the SDK reads the backup metadata from R2 and compares the creation timestamp plus TTL against the current time (with a 60-second buffer to prevent race conditions). If the TTL has elapsed, the restore is rejected with a `BACKUP_EXPIRED` error.

The TTL does **not** automatically delete backup objects from R2. Expired backups remain in your bucket and continue to consume storage until you explicitly delete them or configure an automatic cleanup rule.
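The check described above reduces to plain arithmetic. This sketch mirrors the documented behavior; it is not the SDK's actual implementation, and the function and parameter names are illustrative:

```typescript
// A backup is expired when creation time + TTL, plus a 60-second buffer,
// is earlier than the current time.
const BUFFER_MS = 60_000;

function isBackupExpired(
  createdAtMs: number,
  ttlSeconds: number,
  nowMs: number,
): boolean {
  return nowMs > createdAtMs + ttlSeconds * 1000 + BUFFER_MS;
}

const created = Date.parse("2026-05-08T12:00:00Z");
// A 10-minute TTL checked 12 minutes after creation: expired
console.log(isBackupExpired(created, 600, created + 12 * 60_000)); // true
// Checked 5 minutes after creation: still restorable
console.log(isBackupExpired(created, 600, created + 5 * 60_000)); // false
```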

### Configure R2 lifecycle rules for automatic cleanup

To automatically remove expired backup objects from R2, set up an [R2 object lifecycle rule](https://developers.cloudflare.com/r2/buckets/object-lifecycles/) on your backup bucket. This is the recommended way to prevent expired backups from accumulating indefinitely.

For example, if your longest TTL is 7 days, configure a lifecycle rule to delete objects older than 7 days from the `backups/` prefix. This ensures R2 storage does not grow unbounded while giving you a buffer to restore any non-expired backup.
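If you prefer the CLI to the dashboard, recent Wrangler versions can manage lifecycle rules directly. Flag names may differ across versions, so verify with `npx wrangler r2 bucket lifecycle --help` before relying on this sketch:

```shell
# Delete objects under backups/ seven days after upload
npx wrangler r2 bucket lifecycle add my-backup-bucket \
  --prefix backups/ \
  --expire-days 7
```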

## Local development

You can use backup and restore during local development with `wrangler dev` by passing the `localBucket: true` option to `createBackup()`. This uses the `BACKUP_BUCKET` R2 binding from your Worker environment directly, so no presigned URL credentials are required.

### Configure R2 binding

Add a `BACKUP_BUCKET` R2 binding to your Wrangler configuration:

JSONC

```
{
  "r2_buckets": [
    {
      "binding": "BACKUP_BUCKET",
      "bucket_name": "my-backup-bucket"
    }
  ]
}
```

TOML

```
[[r2_buckets]]
binding = "BACKUP_BUCKET"
bucket_name = "my-backup-bucket"
```

### Back up and restore with `localBucket`

Pass `localBucket: true` to `createBackup()` to back up and restore using the R2 binding directly:

TypeScript

```
const sandbox = getSandbox(env.Sandbox, "my-sandbox");

// Create a local backup
const backup = await sandbox.createBackup({
  dir: "/workspace",
  localBucket: true,
});

// Restore the backup
const result = await sandbox.restoreBackup(backup);
console.log(`Restored: ${result.success}`);
```

Note

You can use an environment variable to toggle `localBucket` between local development and production. Set an environment variable such as `LOCAL_DEV` in your Wrangler configuration using `vars` for local development, then reference it in your code:

TypeScript

```
const backup = await sandbox.createBackup({
  dir: "/workspace",
  localBucket: Boolean(env.LOCAL_DEV),
});
```

When `localBucket` is `true`, presigned URL credentials are not required and the SDK uses the R2 binding directly. For more information on setting environment variables, refer to [Environment variables in Wrangler configuration](https://developers.cloudflare.com/workers/configuration/environment-variables/).

### Local development considerations

* **No presigned URLs** - The SDK reads and writes backup archives directly through the R2 binding, so no S3-compatible credentials are needed.
* **No FUSE required** - Local restore extracts the archive using `unsquashfs` instead of mounting with FUSE overlayfs. The backup is not applied as a copy-on-write overlay; the directory is replaced on restore.
* **Same archive format** - Local backups use the same squashfs archive format as production backups.

Note

These considerations apply to local development with `wrangler dev` only. In production, backup and restore use presigned URLs and FUSE overlayfs.

## Clean up backup objects in R2

Backup archives are stored in your R2 bucket under the `backups/` prefix with the structure `backups/{backupId}/data.sqsh` and `backups/{backupId}/meta.json`. You can use the `BACKUP_BUCKET` R2 binding to manage these objects directly.

### Replace the latest backup (delete-then-write)

If you only need the most recent backup, delete the previous one before creating a new one:

TypeScript

```
import { getSandbox } from "@cloudflare/sandbox";

const sandbox = getSandbox(env.Sandbox, "my-sandbox");

// Look up the handle stored on the previous run
const stored = await env.KV.get("latest-backup");
const previousBackup = stored ? JSON.parse(stored) : null;

// Delete the previous backup's R2 objects before creating a new one
if (previousBackup) {
  await env.BACKUP_BUCKET.delete(`backups/${previousBackup.id}/data.sqsh`);
  await env.BACKUP_BUCKET.delete(`backups/${previousBackup.id}/meta.json`);
}

// Create a fresh backup
const backup = await sandbox.createBackup({
  dir: "/workspace",
  name: "latest",
});

// Store the handle so you can delete it next time
await env.KV.put("latest-backup", JSON.stringify(backup));
```

### List and delete old backups by prefix

To clean up multiple old backups, list objects under the `backups/` prefix and delete them by key:

TypeScript

```
// List backup objects. list() returns up to 1,000 objects per call;
// pass the returned cursor to a follow-up call to paginate larger buckets.
const listed = await env.BACKUP_BUCKET.list({ prefix: "backups/" });

const sevenDaysMs = 7 * 24 * 60 * 60 * 1000;

for (const object of listed.objects) {
  // Keys have the form backups/{id}/data.sqsh or backups/{id}/meta.json.
  // Delete objects older than 7 days.
  const ageMs = Date.now() - object.uploaded.getTime();
  if (ageMs > sevenDaysMs) {
    await env.BACKUP_BUCKET.delete(object.key);
    console.log(`Deleted expired object: ${object.key}`);
  }
}
```

### Delete a specific backup by ID

If you have the backup ID, delete both its archive and metadata directly:

TypeScript

```
const backupId = backup.id;

await env.BACKUP_BUCKET.delete(`backups/${backupId}/data.sqsh`);
await env.BACKUP_BUCKET.delete(`backups/${backupId}/meta.json`);
```

## Copy-on-write behavior

In production, restore uses FUSE overlayfs to mount the backup as a read-only lower layer. New writes go to a writable upper layer and do not affect the original backup:

TypeScript

```
const sandbox = getSandbox(env.Sandbox, "my-sandbox");

// Create a backup
const backup = await sandbox.createBackup({ dir: "/workspace" });

// Restore the backup
await sandbox.restoreBackup(backup);

// New writes go to the upper layer — the backup is unchanged
await sandbox.writeFile(
  "/workspace/new-file.txt",
  "This does not modify the backup",
);

// Restore the same backup again to discard changes
await sandbox.restoreBackup(backup);
```

## Handle errors

Backup and restore operations can throw specific errors. Wrap calls in [try...catch ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/try...catch) blocks:

TypeScript

```
import { getSandbox } from "@cloudflare/sandbox";

const sandbox = getSandbox(env.Sandbox, "my-sandbox");

// Declare outside the try block so the restore step below can use it
let backup;

// Handle backup errors
try {
  backup = await sandbox.createBackup({ dir: "/workspace" });
} catch (error) {
  if (error.code === "INVALID_BACKUP_CONFIG") {
    // Missing BACKUP_BUCKET binding or invalid directory path
    console.error("Configuration error:", error.message);
  } else if (error.code === "BACKUP_CREATE_FAILED") {
    // Archive creation or upload to R2 failed
    console.error("Backup failed:", error.message);
  }
}

// Handle restore errors
try {
  await sandbox.restoreBackup(backup);
} catch (error) {
  if (error.code === "BACKUP_NOT_FOUND") {
    console.error("Backup not found in R2:", error.message);
  } else if (error.code === "BACKUP_EXPIRED") {
    console.error("Backup TTL has elapsed:", error.message);
  } else if (error.code === "BACKUP_RESTORE_FAILED") {
    console.error("Restore failed:", error.message);
  }
}
```

## Path permissions

The `createBackup()` method uses `mksquashfs` to create a compressed archive of the target directory. This process must be able to read every file and subdirectory within the path you are backing up. If any file or directory has restrictive permissions that prevent the archiver from reading it, the backup fails with a `BackupCreateError` and a "Permission denied" message.

### Common causes

* **Directories owned by other users** — If the target directory contains subdirectories created by a different user or process (for example, `/home/sandbox/.claude`), the archiver may not have read access.
* **Restrictive file modes** — Files with modes like `0600` or directories with `0700` that belong to a different user than the one running the backup process.
* **Runtime-generated config directories** — Tools and applications often create configuration directories (such as `.cache`, `.config`, or tool-specific dotfiles) with restrictive permissions.

### Fix permissions at build time

The recommended approach is to set permissions in your Dockerfile so that every container starts with the correct access. This avoids running `chmod` at runtime before every backup:

```
# Ensure the backup target directory is readable
RUN mkdir -p /home/sandbox && chmod -R a+rX /home/sandbox
```

The `a+rX` flag grants read access to all files and execute (traverse) access to all directories, without changing write permissions.
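A quick demonstration of the `X` semantics on any Linux shell: a `0700` directory gains world read and traverse bits, while a `0600` file gains only read, because `X` adds execute only to directories and to files that already carry an execute bit:

```shell
mkdir -p /tmp/perm-demo/private
touch /tmp/perm-demo/private/key.txt
chmod 700 /tmp/perm-demo/private
chmod 600 /tmp/perm-demo/private/key.txt

chmod -R a+rX /tmp/perm-demo

stat -c '%a' /tmp/perm-demo/private         # 755: traverse added
stat -c '%a' /tmp/perm-demo/private/key.txt # 644: read only, no execute
```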

### Fix permissions at runtime

If the restrictive permissions come from files created at runtime (for example, a tool that generates config files with `0600` mode), fix them before calling `createBackup()`:

TypeScript

```
await sandbox.exec("chmod -R a+rX /home/sandbox/.claude");
const backup = await sandbox.createBackup({ dir: "/home/sandbox" });
```

### Example error

If the backup encounters a permission issue, you will see an error like:

```
BackupCreateError: mksquashfs failed: Could not create destination file: Permission denied
```

This means `mksquashfs` could not read one or more files inside the directory you passed to `createBackup()`. Check the permissions of all files and subdirectories within that path.
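One way to locate the offending files is GNU `find`'s `-readable` test, run as the same user that performs the backup (the path here assumes `/home/sandbox` is the backup target). An empty result means `mksquashfs` should be able to read the whole tree:

```shell
# List everything under the backup target that the current user cannot read
find /home/sandbox -not -readable 2>/dev/null
```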

## Best practices

* **Stop writes before restoring** - Stop processes writing to the target directory before calling `restoreBackup()`
* **Use checkpoints** - Create backups before risky operations like package installations or migrations
* **Exclude gitignored files** - Set `useGitignore: true` when backing up git repositories to skip generated files like `node_modules/` and reduce backup size
* **Set appropriate TTLs** - Use short TTLs for temporary checkpoints and longer TTLs for persistent snapshots
* **Store handles externally** - Persist `DirectoryBackup` handles to KV, D1, or Durable Object storage for cross-request access
* **Configure R2 lifecycle rules** - Set up [object lifecycle rules](https://developers.cloudflare.com/r2/buckets/object-lifecycles/) to automatically delete expired backups from R2, since TTL is only enforced at restore time
* **Clean up old backups** - Delete previous backup objects from R2 when you no longer need them, or use the delete-then-write pattern for rolling backups
* **Handle errors** - Wrap backup and restore calls in `try...catch` blocks
* **Re-restore after restart** - In production, the FUSE mount is ephemeral, so re-restore from the backup handle after container restarts

## Related resources

* [Backups API reference](https://developers.cloudflare.com/sandbox/api/backups/) - Full method documentation
* [Storage API reference](https://developers.cloudflare.com/sandbox/api/storage/) - Mount S3-compatible buckets
* [R2 documentation](https://developers.cloudflare.com/r2/) - Learn about Cloudflare R2
* [R2 lifecycle rules](https://developers.cloudflare.com/r2/buckets/object-lifecycles/) - Configure automatic object cleanup

