Transfer DigitalOcean Spaces Buckets Between Regions Using Rclone

Validated on 4 Feb 2026 • Last edited on 10 Mar 2026

Spaces Object Storage is an S3-compatible service for storing and serving large amounts of data. The built-in Spaces CDN minimizes page load times, improves performance, and reduces bandwidth and infrastructure costs.

DigitalOcean Spaces buckets are created in a single region and cannot be moved directly to another region. To transfer data between regions, you must copy the contents of a source bucket into a new bucket created in the destination region using a third-party tool.

We recommend using Rclone, an open-source file management utility that supports Amazon S3–compatible object storage, including Spaces. Rclone can copy or synchronize objects between buckets in different regions, verify transfer integrity, and handle large datasets efficiently.

Rclone can also be used for additional object storage tasks such as uploading or downloading files, synchronizing directories, mounting buckets locally, and managing multiple storage providers. For more information, see the Rclone documentation or run rclone help to explore available commands.

Get a Spaces Access Key and Secret Key

Before transferring data between regions, you need a Spaces access key and secret key. Save the access key ID and secret key securely. These credentials are required to authenticate Rclone.

Identify Your Spaces Endpoints

Each Spaces bucket uses a region-specific S3-compatible endpoint in the format:

<region>.digitaloceanspaces.com

To find a bucket's origin endpoint, open the control panel and click Spaces Object Storage in the left menu. On the Buckets tab, click the bucket you want the endpoint for; the origin endpoint appears in the top-right corner of the bucket's overview page.

Note

Rclone may transfer objects using either server-side copies (CopyObject) or a download-and-upload workflow (GetObject followed by PutObject).

When buckets are in different regions, are configured as different remotes, or use different storage tiers (Standard Storage versus Cold Storage), Rclone downloads each object from the source bucket and uploads it to the destination bucket.

Install Rclone

Rclone can be installed on your local machine or on a Droplet. If you are transferring a large amount of data or have limited local bandwidth, installing Rclone on a Droplet in either the source or destination region may improve performance.

Download the appropriate binary for your operating system from the Rclone downloads page and follow the installation instructions for your platform.
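On Linux and macOS, one common approach is the install script published by the Rclone project, which detects your platform automatically. This is a sketch; verify the script URL against the Rclone downloads page before running it:

```shell
# Install the latest stable Rclone release (Linux/macOS).
curl https://rclone.org/install.sh | sudo bash

# Confirm the installation succeeded and check the version.
rclone version
```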

Configure Rclone for Spaces

Rclone accesses storage providers through named remotes. Each Spaces region must be configured as a separate remote.

In the Rclone configuration file (~/.config/rclone/rclone.conf), define a remote for each region:

[spaces-sfo2]
type = s3
provider = DigitalOcean
env_auth = false
access_key_id = your_spaces_access_key
secret_access_key = your_spaces_secret_key
endpoint = sfo2.digitaloceanspaces.com
acl = private

Create a second remote for the destination region by duplicating the configuration and updating the remote name and endpoint:

[spaces-nyc3]
type = s3
provider = DigitalOcean
env_auth = false
access_key_id = your_spaces_access_key
secret_access_key = your_spaces_secret_key
endpoint = nyc3.digitaloceanspaces.com
acl = private

On macOS and Linux, restrict access to the configuration file because it contains credentials:

chmod 600 ~/.config/rclone/rclone.conf

Transfer Data Between Regions

After configuring both regions, you can use Rclone to list buckets, inspect contents, and transfer data.

First, verify that both remotes are available for the transfer:

rclone listremotes

Then, list buckets in a region:

rclone lsd spaces-sfo2:
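Before starting the transfer, it can help to record the size and object count of the source bucket so you have a baseline to compare against afterwards. Assuming a source bucket named source-bucket:

```shell
# Summarize the total size and number of objects in the source bucket.
rclone size spaces-sfo2:source-bucket

# Optionally list individual objects with their sizes.
rclone ls spaces-sfo2:source-bucket
```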

To copy or synchronize objects between buckets, use rclone sync or rclone copy:

rclone sync spaces-sfo2:source-bucket spaces-nyc3:destination-bucket

If the destination bucket does not already exist, Rclone attempts to create it. This succeeds only if the bucket name is available and meets Spaces naming requirements.
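For large transfers, it is often worth previewing the operation and tuning concurrency first. The flags below are standard Rclone options; adjust the values to suit your bandwidth and dataset:

```shell
# Preview what would be transferred without copying anything.
rclone sync --dry-run spaces-sfo2:source-bucket spaces-nyc3:destination-bucket

# Run the real transfer with live progress and more parallel streams.
# --transfers sets concurrent file transfers (default 4);
# --checkers sets concurrent file checks (default 8).
rclone sync --progress --transfers 16 --checkers 32 \
    spaces-sfo2:source-bucket spaces-nyc3:destination-bucket
```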

When transferring data to or from Cold Storage buckets, Rclone can’t use server-side copy operations. In this case, add the --disable copy flag to force download-and-upload behavior:

rclone sync --disable copy spaces-sfo2:source-bucket spaces-nyc3:destination-bucket

Verify Transferred Data

After the transfer completes, you can verify object integrity using the check command:

rclone check spaces-sfo2:source-bucket spaces-nyc3:destination-bucket

A successful run reports that Rclone found no differences between the source and destination buckets and that all files match:

2026/02/04 11:32:18 NOTICE: S3 bucket destination-bucket: 0 differences found
2026/02/04 11:32:18 NOTICE: S3 bucket destination-bucket: 128 matching files

This compares object hashes between buckets. If hashes cannot be compared, you can rerun the command using the --size-only or --download flags.

  • When using the --size-only flag, Rclone considers a file successfully transferred if the file size matches between the source and destination buckets.
  • When using the --download flag, Rclone downloads objects from both buckets and compares them locally. If no differences are reported, the transfer is considered successful. This method is more thorough but slower and uses additional bandwidth.
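Using the same example bucket names as above, the two fallback comparisons look like this:

```shell
# Compare only file sizes between source and destination.
rclone check --size-only spaces-sfo2:source-bucket spaces-nyc3:destination-bucket

# Download objects from both buckets and compare their contents locally.
# Thorough, but slower and bandwidth-intensive.
rclone check --download spaces-sfo2:source-bucket spaces-nyc3:destination-bucket
```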
