Spaces Performance Best Practices

Validated on 10 Mar 2026 • Last edited on 26 Mar 2026

Spaces Object Storage is an S3-compatible service for storing and serving large amounts of data. The built-in Spaces CDN minimizes page load times, improves performance, and reduces bandwidth and infrastructure costs.

These best practices improve performance when storing and serving data with DigitalOcean Spaces. They help reduce latency, improve throughput, and prevent throttling in common application architectures.

Use a Content Delivery Network (CDN)

A content delivery network (CDN) caches your objects in geographically distributed edge locations so users download content from a nearby server instead of directly from Spaces, reducing latency and improving download performance.

When to Use This

Use a CDN if your application primarily serves public content over GET requests, especially small or frequently accessed files.

Typical use cases include:

  • Static assets for websites
  • Media files and downloads
  • Public content served to global users

How to Implement

You can enable the built-in Spaces CDN, which is included with Spaces at no additional cost.

You can also integrate third-party CDNs.

Some CDNs require additional configuration. For example, Cloudflare requires either:

  • A Cloudflare Worker
  • A plan that supports host header rewrites

Optimize Object Naming for Large Buckets

Distributing object keys across multiple prefixes improves the performance of ListObjects requests in Spaces that contain large numbers of objects.

When to Use This

Consider optimizing object naming if your Space contains a large number of objects and you frequently call the ListObjects API.

Large workloads typically range from tens of thousands to millions of objects, depending on request frequency.

How to Implement

Prefix object names with 6 to 8 random or structured characters. For example, a random hexadecimal prefix or a date-based prefix:

f3a81c-file.jpg
2026-03-10-logfile.txt

Then, use the delimiter parameter when calling ListObjects to organize results by prefix.
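As an illustration, the prefixing scheme and delimiter-based grouping can be sketched in Python. The `prefixed_key` helper and the `/` delimiter are illustrative choices for this sketch, not part of the Spaces API; `group_by_prefix` only simulates locally how a delimiter-based ListObjects call groups results.

```python
import secrets
from collections import defaultdict

def prefixed_key(filename: str, prefix_len: int = 6) -> str:
    """Prepend a short random hex prefix so keys spread across prefixes."""
    return f"{secrets.token_hex(prefix_len // 2)}/{filename}"

def group_by_prefix(keys, delimiter="/"):
    """Group keys by their first segment, the way a delimiter-based
    ListObjects request groups results into common prefixes."""
    groups = defaultdict(list)
    for key in keys:
        prefix, _, rest = key.partition(delimiter)
        groups[prefix + delimiter if rest else ""].append(key)
    return dict(groups)
```

In practice you would pass the delimiter to the ListObjects call itself; the local grouping above just shows the effect.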

Warning
Do not include personally identifiable or sensitive information in bucket names, object names, metadata, or tags. This information is not encrypted with server-side encryption and may be exposed in request metadata.

See the Spaces API reference for the full list of parameters.

Avoid Many Small Files and Use Multi-Part Uploads for Large Files

Spaces performs best when reading and writing moderate-to-large objects. Combining very small files into larger objects and using multi-part uploads for large files reduces request overhead and improves transfer reliability.

When to Use This

Optimize file size if you:

  • Store many files smaller than 1 MB.
  • Upload files larger than 500 MB.

Typical optimal file sizes range from 20 MB to 200 MB.

How to Implement

For large uploads, use multi-part uploads. For small files, combine multiple small files into larger files when possible.

For example, combine daily log files into a monthly archive.
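The part-size arithmetic behind multi-part uploads can be sketched as follows. This is a minimal sketch: the 5 MiB minimum part size and 10,000-part maximum are standard S3-compatible limits, the 100 MiB target is an arbitrary choice within the 20 MB to 200 MB range above, and most SDKs pick part sizes for you automatically.

```python
import math

MIN_PART = 5 * 1024 * 1024   # S3-compatible minimum part size (5 MiB)
MAX_PARTS = 10_000           # S3-compatible maximum part count

def choose_part_size(object_size: int, target_part: int = 100 * 1024 * 1024) -> int:
    """Pick a part size near the target that respects the minimum
    part size and the maximum part count."""
    part = max(target_part, MIN_PART)
    # Grow the part size if the object would otherwise need too many parts.
    if math.ceil(object_size / part) > MAX_PARTS:
        part = math.ceil(object_size / MAX_PARTS)
    return part
```

For example, a 500 MiB file with a 100 MiB target uploads as 5 parts.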

Choose the Right Datacenter for Your Resources

Placing your Spaces buckets and related resources in the appropriate datacenter reduces latency and improves connectivity, especially when resources communicate frequently or serve users in specific regions.

When to Use This

Choose regions based on where your application and users access Spaces:

  • If Droplets access Spaces frequently, place them in the same datacenter.
  • If users access Spaces directly from the internet, use a CDN.

How to Implement

Select a region when creating your Space. For existing infrastructure, you can migrate related resources, such as Droplets, using snapshots and redeployment in the desired region.

Handle 5xx Errors with Retries

Transient server errors, such as 503 Slow Down, can occur during heavy workloads. Implementing retry logic ensures uploads succeed and prevents incomplete datasets.

When to Use This

Always implement retry logic when uploading files to Spaces.

How to Implement

Use an S3-compatible client or SDK that already supports retry logic, such as s3cmd.

If implementing uploads manually, add retry logic with exponential backoff to handle temporary errors.
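A minimal sketch of exponential backoff with jitter, assuming `upload` is any callable that raises on a transient error such as 503 Slow Down. Production code should catch the specific throttling exception raised by your SDK rather than all exceptions.

```python
import random
import time

def upload_with_retry(upload, max_attempts=5, base_delay=0.5):
    """Call upload() and retry transient failures with exponential
    backoff plus jitter. Re-raises after the final attempt fails."""
    for attempt in range(max_attempts):
        try:
            return upload()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Sleep 0.5s, 1s, 2s, ... plus up to 100 ms of random jitter.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
```

Jitter spreads retries out so that many clients throttled at the same moment do not all retry at the same moment.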

Manage Your Request Rate

Spaces applies rate limiting at the Space level to ensure fair resource usage across users, so optimizing how frequently your application sends requests helps prevent throttling and maintain consistent throughput.

When to Use This

Optimize request patterns if:

  • Your application sends more than about 150 requests per second.
  • Requests are being throttled.
  • You receive frequent 503 Slow Down responses.

How to Implement

You can reduce request rates by:

  • Combining many small files into larger objects.
  • Implementing retry logic with backoff.
  • Reducing unnecessary API calls.
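One way to keep a client under a target request rate is a token bucket. This is a minimal single-threaded sketch, not a Spaces feature; the rate you pass in would come from your own budget under the roughly 150 requests per second noted above.

```python
import time

class TokenBucket:
    """Client-side limiter: allow at most `rate` requests per second,
    refilling tokens continuously up to `capacity`."""
    def __init__(self, rate, capacity=None):
        self.rate = rate
        self.capacity = capacity if capacity is not None else rate
        self.tokens = self.capacity
        self.last = time.monotonic()

    def acquire(self):
        """Block until one request token is available, then consume it."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            time.sleep((1 - self.tokens) / self.rate)
```

Call `acquire()` before each Spaces request; the call returns immediately while you are under the rate and blocks briefly once you exceed it.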

If you expect sustained workloads above 150 requests per second, open a support ticket so we can help prepare your environment.

Use Block or Local Storage for Low-Latency Workloads

Spaces is optimized for static, unstructured object storage, so workloads that require low-latency access or filesystem semantics perform better on block or local storage.

When to Use This

Spaces is best suited for static, unstructured object storage.

Use block or local storage instead if your workload requires:

  • Low-latency databases
  • Frequent small writes
  • POSIX-style filesystem access

Filesystem-on-S3 tools such as S3FS or S3QL are not recommended for Spaces.

How to Implement

For low-latency or filesystem workloads, attach a block storage volume to your Droplet or use the Droplet's local disk.

For database workloads, use dedicated database systems such as Redis or Cassandra.
