Cloudflare Integration

Cloudflare R2 is a cloud object storage service for data-intensive applications.

Cloudflare R2 exposes an S3-compatible API, so it works with any S3 client or integration.

Windmill S3 integration

You can link a Windmill workspace to an S3 bucket and use it as source and/or target of your processing steps seamlessly, without any boilerplate.


See Object Storage for Large Data for more details.

Add an S3 Resource

To integrate Cloudflare R2 with Windmill, save the following elements as a resource.

S3 resource type

Here is the information on where to find the required details:

| Property | Type | Description | Default | Required | Where to find | Additional details |
|---|---|---|---|---|---|---|
| bucket | string | S3 bucket name | | true | R2 dashboard | Name of the S3 bucket to access |
| region | string | Region for the S3 bucket | | true | R2 documentation | The region is specific to R2 and is set when creating the bucket |
| useSSL | boolean | Use SSL for connections | true | false | R2 documentation | SSL/TLS is required for Cloudflare R2 |
| endPoint | string | S3 endpoint | | true | R2 documentation | Endpoint URL is in the format `https://<account-id>.r2.cloudflarestorage.com` |
| accessKey | string | Access key ID | | false | R2 dashboard | Generated when creating an R2 API token |
| pathStyle | boolean | Use path-style addressing | false | false | R2 documentation | R2 supports both path-style and virtual-hosted-style requests |
| secretKey | string | Secret access key | | false | R2 dashboard | Generated when creating an R2 API token |


Your resource can be passed as a parameter or fetched directly within scripts, flows and apps.
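As an illustration, such a resource maps directly onto a standard S3 client configuration. Below is a minimal sketch using boto3; all values are placeholders (not real credentials), and the helper function and `auto` region default are assumptions for this example, not part of the Windmill schema.

```python
# Sketch: turn a Windmill-style S3 resource dict into boto3 client kwargs.
# All values below are placeholders; fill in your own R2 account details.

def client_kwargs_from_resource(resource: dict) -> dict:
    """Translate the resource fields into boto3 S3 client arguments."""
    scheme = "https" if resource.get("useSSL", True) else "http"
    kwargs = {
        "endpoint_url": f"{scheme}://{resource['endPoint']}",
        "region_name": resource.get("region", "auto"),
    }
    if resource.get("accessKey"):
        kwargs["aws_access_key_id"] = resource["accessKey"]
        kwargs["aws_secret_access_key"] = resource["secretKey"]
    return kwargs

resource = {
    "bucket": "my-bucket",
    "region": "auto",
    "endPoint": "<account-id>.r2.cloudflarestorage.com",  # placeholder
    "useSSL": True,
    "accessKey": "<access-key-id>",      # from an R2 API token
    "secretKey": "<secret-access-key>",  # from an R2 API token
}

kwargs = client_kwargs_from_resource(resource)

# With real credentials, the client would be created like this:
# import boto3
# s3 = boto3.client("s3", **kwargs)
# s3.list_objects_v2(Bucket=resource["bucket"])
```

Inside a Windmill script, the same dict is what you receive when the resource is passed as a parameter, so no values need to be hardcoded.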


tip

Find some pre-set interactions with S3 on the Hub.

Feel free to create your own S3 scripts on Windmill.

Connect your Windmill workspace to your S3 bucket or your Azure Blob storage

Once you've created an S3 or Azure Blob resource in Windmill, you can take advantage of Windmill's native object-storage integration, which makes it the recommended storage for large objects like files and binary data.
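As a sketch of what such a processing step can look like, the script below reads a file from the workspace's linked bucket and writes a result back. The `wmill` helper names (`load_s3_file`, `write_s3_file`) and the `{"s3": ...}` object shape are assumptions based on the Windmill Python SDK, and the module is only available inside a Windmill worker, so the import is guarded here.

```python
# Sketch of a Windmill Python step using the workspace's linked bucket.
# `wmill` exists only inside Windmill workers, so the import is guarded
# to allow reading this example locally.
try:
    import wmill  # assumed Windmill Python SDK (not a verified API)
except ImportError:
    wmill = None

def s3_object(path: str) -> dict:
    """An S3Object-style reference: a dict pointing at a key in the
    workspace bucket (shape assumed from Windmill's S3 integration)."""
    return {"s3": path}

def main(input_file: dict = s3_object("incoming/data.csv")):
    if wmill is None:
        raise RuntimeError("This step must run inside a Windmill worker")
    raw = wmill.load_s3_file(input_file)   # bytes read from the bucket
    processed = raw.upper()                # placeholder transform
    wmill.write_s3_file(s3_object("outgoing/data.csv"), processed)
    return s3_object("outgoing/data.csv")  # reference for the next step
```

Returning the object reference (rather than the data itself) is what lets large files flow between steps without being copied through the job payload.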

S3 Integration Infographic

S3/Azure for Python Cache & Large Logs

To store (and display) large logs and to share a cache across distributed Python jobs, you can connect your instance to a bucket.