---
title: Transform user-uploaded images before uploading to R2
description: Set up bindings to connect Images, R2, and Assets to your Worker
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Transform user-uploaded images before uploading to R2

**Last reviewed:** about 1 year ago

In this guide, you will build an app that accepts image uploads, overlays the image with a visual watermark, then stores the transformed image in your R2 bucket.

---

With Images, you have the flexibility to choose where your original images are stored. You can transform images that are stored outside of the Images product, like in [R2](https://developers.cloudflare.com/r2/).

When you store user-uploaded media in R2, you may want to optimize or manipulate images before they are uploaded to your R2 bucket.

You will learn how to connect Developer Platform services to your Worker through bindings, as well as use various optimization features in the Images API.

## Prerequisites

Before you begin, you will need to do the following:

* Add an [Images Paid](https://developers.cloudflare.com/images/pricing/#images-paid) subscription to your account. This allows you to bind the Images API to your Worker.
* Create an [R2 bucket](https://developers.cloudflare.com/r2/get-started/), where the transformed images will be uploaded.
* Create a new Worker project.

If you are new, review how to [create your first Worker](https://developers.cloudflare.com/workers/get-started/guide/).
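If you prefer the command line, the bucket and Worker project can be created with Wrangler. A sketch, assuming npm is installed and Wrangler is authenticated against your Cloudflare account (`my-image-worker` is a placeholder project name):

```sh
npm create cloudflare@latest my-image-worker   # scaffold a new Worker project
npx wrangler r2 bucket create <BUCKET>         # create the R2 bucket for transformed images
```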

## 1: Set up your Worker project

To start, you will need to set up your project to use the following resources on the Developer Platform:

* [Images](https://developers.cloudflare.com/images/optimization/transformations/bindings/) to transform, resize, and encode images directly from your Worker.
* [R2](https://developers.cloudflare.com/r2/api/workers/workers-api-usage/) to connect the bucket for storing transformed images.
* [Assets](https://developers.cloudflare.com/workers/static-assets/binding/) to access a static image that will be used as the visual watermark.

### Add the bindings to your Wrangler configuration

Configure your Wrangler configuration file to add the Images, R2, and Assets bindings:

wrangler.jsonc

```jsonc
{
  "images": {
    "binding": "IMAGES"
  },
  "r2_buckets": [
    {
      "binding": "R2",
      "bucket_name": "<BUCKET>"
    }
  ],
  "assets": {
    "directory": "./<DIRECTORY>",
    "binding": "ASSETS"
  }
}
```

wrangler.toml

```toml
[images]
binding = "IMAGES"

[[r2_buckets]]
binding = "R2"
bucket_name = "<BUCKET>"

[assets]
directory = "./<DIRECTORY>"
binding = "ASSETS"
```

Replace `<BUCKET>` with the name of the R2 bucket where you will upload the images after they are transformed. In your Worker code, you will be able to refer to this bucket using `env.R2`.

Replace `./<DIRECTORY>` with the name of the project's directory where the overlay image will be stored. In your Worker code, you will be able to refer to these assets using `env.ASSETS`.

### Set up your assets directory

Because the app applies a visual watermark to every uploaded image, you need a place to store the overlay image.

The assets directory of your project lets you upload static assets as part of your Worker. When you deploy your project, these uploaded files, along with your Worker code, are deployed to Cloudflare's infrastructure in a single operation.

After you configure your Wrangler file, upload the overlay image to the specified directory. In our example app, the directory `./assets` contains the overlay image.
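Following the example's `./assets` directory, the resulting project might look like this (file names other than `watermark.png` are illustrative):

```
my-image-worker/
├── assets/
│   └── watermark.png   # overlay image, served through the ASSETS binding
├── src/
│   └── index.js        # Worker code
└── wrangler.jsonc
```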

## 2: Build your frontend

You will need to build the interface for the app that lets users upload images.

In this example, the frontend is rendered directly from the Worker script.

To do this, create an `html` variable that contains a `form` element for accepting uploads. In `fetch`, construct a new `Response` with a `Content-Type: text/html` header to serve the static HTML to the client:

JavaScript

```js
const html = `
<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8">
    <title>Upload Image</title>
  </head>
  <body>
    <h1>Upload an image</h1>
    <form method="POST" enctype="multipart/form-data">
      <input type="file" name="image" accept="image/*" required />
      <button type="submit">Upload</button>
    </form>
  </body>
</html>
`;

export default {
  async fetch(request, env) {
    if (request.method === "GET") {
      return new Response(html, { headers: { "Content-Type": "text/html" } });
    }
    if (request.method === "POST") {
      // This is called when the user submits the form
    }
  },
};
```

TypeScript

```ts
const html = `
<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8">
    <title>Upload Image</title>
  </head>
  <body>
    <h1>Upload an image</h1>
    <form method="POST" enctype="multipart/form-data">
      <input type="file" name="image" accept="image/*" required />
      <button type="submit">Upload</button>
    </form>
  </body>
</html>
`;

interface Env {
  IMAGES: ImagesBinding;
  R2: R2Bucket;
  ASSETS: Fetcher;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    if (request.method === "GET") {
      return new Response(html, { headers: { "Content-Type": "text/html" } });
    }
    if (request.method === "POST") {
      // This is called when the user submits the form
    }
  },
} satisfies ExportedHandler<Env>;
```

## 3: Read the uploaded image

After you have a `form`, you need to make sure you can transform the uploaded images.

Because the `form` lets users upload directly from their disk, you cannot use `fetch()` to get an image from a URL. Instead, you will operate on the body of the image as a stream of bytes.

To do this, parse the uploaded file from the `form` and get its stream:

JavaScript

```js
export default {
  async fetch(request, env) {
    if (request.method === "GET") {
      return new Response(html, { headers: { "Content-Type": "text/html" } });
    }
    if (request.method === "POST") {
      try {
        // Parse form data
        const formData = await request.formData();
        const file = formData.get("image");
        if (!file || typeof file.stream !== "function") {
          return new Response("No image file provided", { status: 400 });
        }

        // Get uploaded image as a readable stream
        const fileStream = file.stream();
      } catch (err) {
        console.log(err.message);
      }
    }
  },
};
```

TypeScript

```ts
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    if (request.method === "GET") {
      return new Response(html, { headers: { "Content-Type": "text/html" } });
    }
    if (request.method === "POST") {
      try {
        // Parse form data
        const formData = await request.formData();
        const file = formData.get("image");
        if (!file || typeof file.stream !== "function") {
          return new Response("No image file provided", { status: 400 });
        }

        // Get uploaded image as a readable stream
        const fileStream = file.stream();
      } catch (err) {
        console.log((err as Error).message);
      }
    }
  },
} satisfies ExportedHandler<Env>;
```

**Prevent potential errors when accessing `request.body`**

The body of a [Request ↗](https://developer.mozilla.org/en-US/docs/Web/API/Request) can only be accessed once. If you previously used `request.formData()` in the same request, you may encounter a TypeError when attempting to access `request.body`.

To avoid errors, create a clone of the Request object with `request.clone()` for each subsequent attempt to access a Request's body. Keep in mind that Workers have a [memory limit of 128 MB per Worker](https://developers.cloudflare.com/workers/platform/limits/#memory) and loading particularly large files into a Worker's memory multiple times may reach this limit. To ensure memory usage does not reach this limit, consider using [Streams](https://developers.cloudflare.com/workers/runtime-apis/streams/).
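As a minimal sketch of that pattern, `readBodyTwice` below is a hypothetical helper (not part of the tutorial code) that clones the request before the first read, so a second read does not throw:

```typescript
// Hypothetical helper: read one request body twice.
// A Request body is a one-shot stream, so clone() must happen
// before the first read consumes the original body.
async function readBodyTwice(request: Request): Promise<[string, string]> {
  const copy = request.clone(); // clone before any body access
  const first = await request.text(); // consumes the original body
  const second = await copy.text(); // the clone still has its own body
  return [first, second];
}
```

Calling `request.text()` (or `request.formData()`) twice on the same object would instead reject with a `TypeError`, because the body is already used.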

## 4: Transform the image

For every uploaded image, you want to perform the following actions:

* Overlay the visual watermark that we added to our assets directory.
* Transcode the image — with its watermark — to `AVIF`. This compresses the image and reduces its file size.
* Upload the transformed image to R2.

### Set up the overlay image

To fetch the overlay image from the assets directory, create an `assetUrl` helper function, then use `env.ASSETS` to retrieve the `watermark.png` image:

JavaScript

```js
function assetUrl(request, path) {
  const url = new URL(request.url);
  url.pathname = path;
  return url;
}

export default {
  async fetch(request, env) {
    if (request.method === "GET") {
      return new Response(html, { headers: { "Content-Type": "text/html" } });
    }
    if (request.method === "POST") {
      try {
        // Parse form data
        const formData = await request.formData();
        const file = formData.get("image");
        if (!file || typeof file.stream !== "function") {
          return new Response("No image file provided", { status: 400 });
        }

        // Get uploaded image as a readable stream
        const fileStream = file.stream();

        // Fetch image as watermark
        const watermarkResponse = await env.ASSETS.fetch(
          assetUrl(request, "watermark.png"),
        );
        const watermarkStream = watermarkResponse.body;
      } catch (err) {
        console.log(err.message);
      }
    }
  },
};
```

TypeScript

```ts
function assetUrl(request: Request, path: string): URL {
  const url = new URL(request.url);
  url.pathname = path;
  return url;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    if (request.method === "GET") {
      return new Response(html, { headers: { "Content-Type": "text/html" } });
    }
    if (request.method === "POST") {
      try {
        // Parse form data
        const formData = await request.formData();
        const file = formData.get("image");
        if (!file || typeof file.stream !== "function") {
          return new Response("No image file provided", { status: 400 });
        }

        // Get uploaded image as a readable stream
        const fileStream = file.stream();

        // Fetch image as watermark
        const watermarkResponse = await env.ASSETS.fetch(
          assetUrl(request, "watermark.png"),
        );
        const watermarkStream = watermarkResponse.body;
      } catch (err) {
        console.log((err as Error).message);
      }
    }
  },
} satisfies ExportedHandler<Env>;
```

### Watermark and transcode the image

You can interact with the Images binding through `env.IMAGES`.

This is where you will put all of the optimization operations you want to perform on the image. Here, you will use the `.draw()` function to apply a visual watermark over the uploaded image, then use `.output()` to encode the image as AVIF:

JavaScript

```js
function assetUrl(request, path) {
  const url = new URL(request.url);
  url.pathname = path;
  return url;
}

export default {
  async fetch(request, env) {
    if (request.method === "GET") {
      return new Response(html, { headers: { "Content-Type": "text/html" } });
    }
    if (request.method === "POST") {
      try {
        // Parse form data
        const formData = await request.formData();
        const file = formData.get("image");
        if (!file || typeof file.stream !== "function") {
          return new Response("No image file provided", { status: 400 });
        }

        // Get uploaded image as a readable stream
        const fileStream = file.stream();

        // Fetch image as watermark
        const watermarkResponse = await env.ASSETS.fetch(
          assetUrl(request, "watermark.png"),
        );
        const watermarkStream = watermarkResponse.body;
        if (!watermarkStream) {
          return new Response("Failed to fetch watermark", { status: 500 });
        }

        // Apply watermark and convert to AVIF
        const imageResponse = (
          await env.IMAGES.input(fileStream)
            // Draw the watermark on top of the image
            .draw(
              env.IMAGES.input(watermarkStream).transform({
                width: 100,
                height: 100,
              }),
              { bottom: 10, right: 10, opacity: 0.75 },
            )
            // Output the final image as AVIF
            .output({ format: "image/avif" })
        ).response();
      } catch (err) {
        console.log(err.message);
      }
    }
  },
};
```

TypeScript

```ts
function assetUrl(request: Request, path: string): URL {
  const url = new URL(request.url);
  url.pathname = path;
  return url;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    if (request.method === "GET") {
      return new Response(html, { headers: { "Content-Type": "text/html" } });
    }
    if (request.method === "POST") {
      try {
        // Parse form data
        const formData = await request.formData();
        const file = formData.get("image");
        if (!file || typeof file.stream !== "function") {
          return new Response("No image file provided", { status: 400 });
        }

        // Get uploaded image as a readable stream
        const fileStream = file.stream();

        // Fetch image as watermark
        const watermarkResponse = await env.ASSETS.fetch(
          assetUrl(request, "watermark.png"),
        );
        const watermarkStream = watermarkResponse.body;
        if (!watermarkStream) {
          return new Response("Failed to fetch watermark", { status: 500 });
        }

        // Apply watermark and convert to AVIF
        const imageResponse = (
          await env.IMAGES.input(fileStream)
            // Draw the watermark on top of the image
            .draw(
              env.IMAGES.input(watermarkStream).transform({
                width: 100,
                height: 100,
              }),
              { bottom: 10, right: 10, opacity: 0.75 },
            )
            // Output the final image as AVIF
            .output({ format: "image/avif" })
        ).response();
      } catch (err) {
        console.log((err as Error).message);
      }
    }
  },
} satisfies ExportedHandler<Env>;
```

## 5: Upload to R2

Upload the transformed image to R2.

By creating a `fileName` variable, you can specify the name of the transformed image. In this example, you generate a file name from the current timestamp before uploading to R2.

Here is the full code for the example:

JavaScript

```js
const html = `
<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8">
    <title>Upload Image</title>
  </head>
  <body>
    <h1>Upload an image</h1>
    <form method="POST" enctype="multipart/form-data">
      <input type="file" name="image" accept="image/*" required />
      <button type="submit">Upload</button>
    </form>
  </body>
</html>
`;

function assetUrl(request, path) {
  const url = new URL(request.url);
  url.pathname = path;
  return url;
}

export default {
  async fetch(request, env) {
    if (request.method === "GET") {
      return new Response(html, { headers: { "Content-Type": "text/html" } });
    }
    if (request.method === "POST") {
      try {
        // Parse form data
        const formData = await request.formData();
        const file = formData.get("image");
        if (!file || typeof file.stream !== "function") {
          return new Response("No image file provided", { status: 400 });
        }

        // Get uploaded image as a readable stream
        const fileStream = file.stream();

        // Fetch image as watermark
        const watermarkResponse = await env.ASSETS.fetch(
          assetUrl(request, "watermark.png"),
        );
        const watermarkStream = watermarkResponse.body;
        if (!watermarkStream) {
          return new Response("Failed to fetch watermark", { status: 500 });
        }

        // Apply watermark and convert to AVIF
        const imageResponse = (
          await env.IMAGES.input(fileStream)
            // Draw the watermark on top of the image
            .draw(
              env.IMAGES.input(watermarkStream).transform({
                width: 100,
                height: 100,
              }),
              { bottom: 10, right: 10, opacity: 0.75 },
            )
            // Output the final image as AVIF
            .output({ format: "image/avif" })
        ).response();

        // Add timestamp to file name
        const fileName = `image-${Date.now()}.avif`;

        // Upload to R2
        await env.R2.put(fileName, imageResponse.body);

        return new Response(`Image uploaded successfully as ${fileName}`, {
          status: 200,
        });
      } catch (err) {
        console.log(err.message);
        return new Response("Internal error", { status: 500 });
      }
    }
    return new Response("Method not allowed", { status: 405 });
  },
};
```

TypeScript

```ts
interface Env {
  IMAGES: ImagesBinding;
  R2: R2Bucket;
  ASSETS: Fetcher;
}

const html = `
<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8">
    <title>Upload Image</title>
  </head>
  <body>
    <h1>Upload an image</h1>
    <form method="POST" enctype="multipart/form-data">
      <input type="file" name="image" accept="image/*" required />
      <button type="submit">Upload</button>
    </form>
  </body>
</html>
`;

function assetUrl(request: Request, path: string): URL {
  const url = new URL(request.url);
  url.pathname = path;
  return url;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    if (request.method === "GET") {
      return new Response(html, { headers: { "Content-Type": "text/html" } });
    }
    if (request.method === "POST") {
      try {
        // Parse form data
        const formData = await request.formData();
        const file = formData.get("image");
        if (!file || typeof file.stream !== "function") {
          return new Response("No image file provided", { status: 400 });
        }

        // Get uploaded image as a readable stream
        const fileStream = file.stream();

        // Fetch image as watermark
        const watermarkResponse = await env.ASSETS.fetch(
          assetUrl(request, "watermark.png"),
        );
        const watermarkStream = watermarkResponse.body;
        if (!watermarkStream) {
          return new Response("Failed to fetch watermark", { status: 500 });
        }

        // Apply watermark and convert to AVIF
        const imageResponse = (
          await env.IMAGES.input(fileStream)
            // Draw the watermark on top of the image
            .draw(
              env.IMAGES.input(watermarkStream).transform({
                width: 100,
                height: 100,
              }),
              { bottom: 10, right: 10, opacity: 0.75 },
            )
            // Output the final image as AVIF
            .output({ format: "image/avif" })
        ).response();

        // Add timestamp to file name
        const fileName = `image-${Date.now()}.avif`;

        // Upload to R2
        await env.R2.put(fileName, imageResponse.body);

        return new Response(`Image uploaded successfully as ${fileName}`, {
          status: 200,
        });
      } catch (err) {
        console.log((err as Error).message);
        return new Response("Internal error", { status: 500 });
      }
    }
    return new Response("Method not allowed", { status: 405 });
  },
} satisfies ExportedHandler<Env>;
```
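One way to exercise the finished Worker is to run it locally and post a file with curl. A sketch, assuming a local image named `photo.jpg` and Wrangler's default dev port:

```sh
npx wrangler dev                                   # run the Worker locally on port 8787
curl -F "image=@photo.jpg" http://localhost:8787/  # submit a multipart upload
```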

## Next steps

In this tutorial, you learned how to connect your Worker to various resources on the Developer Platform to build an app that accepts image uploads, transforms them, and uploads the output to R2.

Next, you can [set up a transformation URL](https://developers.cloudflare.com/images/optimization/features/#url-interface) to dynamically optimize images that are stored in R2.
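Transformation URLs follow the shape `https://<ZONE>/cdn-cgi/image/<OPTIONS>/<SOURCE-IMAGE>`. For example, assuming transformations are enabled on your zone and the bucket is exposed at an illustrative `/uploads/` path, a URL like the following would resize a stored image on the fly:

```
https://example.com/cdn-cgi/image/width=800,format=auto/uploads/image-1700000000000.avif
```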

