Spaces Limits

Validated on 1 Dec 2025 • Last edited on 13 Jan 2026

Spaces Object Storage is an S3-compatible service for storing and serving large amounts of data. The built-in Spaces CDN minimizes page load times, improves performance, and reduces bandwidth and infrastructure costs.

Spaces Standard Storage Limits

  • You can create up to 100 Spaces buckets and 200 access keys per account. If you need to raise this limit, contact support.

  • You cannot transfer Spaces buckets directly between regions or teams. To migrate data, create a new bucket and transfer files using Rclone or another compatible tool.

  • Spaces does not include built-in backups. To back up data, copy files to another bucket or to a local machine using tools such as Rclone, s3cmd, or SnapShooter.

  • You can share access to all buckets within an account or team, but not to individual buckets.

  • Spaces does not support DigitalOcean tags or bucket tags.

  • During the one-week period when a bucket is pending destruction, you cannot reuse its name. To recover a bucket pending destruction, cancel the scheduled deletion.

  • You cannot secure a CDN subdomain using a custom wildcard SSL certificate already in use elsewhere in your account. Add a new custom certificate during custom subdomain setup.

  • Wildcard SSL certificates do not match bucket names containing periods (.). For browser-based access, avoid using periods in bucket names. Buckets support browser and API access with path-style requests, and API access with virtual host–style requests.

  • Spaces automatically deletes incomplete multipart uploads older than 30 days.

  • You cannot use Cloudflare Origin CA certificates for custom subdomains.

  • Files transferred with presigned URLs are not cached by the Spaces CDN. Requesting presigned URLs through the CDN endpoint may double your bandwidth charges without providing the CDN's performance benefit.

  • You cannot use multiple CDNs from different vendors with a single Spaces bucket (for example, the built-in Spaces CDN and an external CDN service).
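The wildcard certificate restriction above comes from how TLS wildcard matching works: a wildcard covers exactly one DNS label, so a bucket name containing periods adds extra labels that fall outside `*.<region>.digitaloceanspaces.com`. The sketch below illustrates the two request styles and the matching rule; the helper functions are illustrative, not part of any SDK.

```python
def path_style_url(bucket, key, region="nyc3"):
    # Path-style: the bucket goes in the path, so the host is the bare
    # regional endpoint and is always covered by its certificate.
    return f"https://{region}.digitaloceanspaces.com/{bucket}/{key}"

def virtual_host_url(bucket, key, region="nyc3"):
    # Virtual host-style: the bucket becomes a subdomain. The wildcard
    # certificate *.<region>.digitaloceanspaces.com covers this host only
    # when the bucket name is a single DNS label (no periods).
    return f"https://{bucket}.{region}.digitaloceanspaces.com/{key}"

def wildcard_covers(bucket):
    # A TLS wildcard matches exactly one label, so any period in the
    # bucket name pushes the host outside the wildcard's scope.
    return "." not in bucket
```

For example, `virtual_host_url("my.assets", "logo.png")` produces a host with two extra labels, which the regional wildcard certificate cannot match; path-style access to the same bucket still works in a browser.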

Rate Limits

All new Spaces buckets are limited to 800 total operations per second. During periods of very high load, LIST requests may be rate-limited further.

Buckets created before regional infrastructure upgrades use older, lower limits:

  • 1,500 requests (any operation) per IP address per second across all buckets on an account.
  • 10 concurrent PUT or COPY requests to any individual object.
  • 500 total operations per second.
  • 300 combined PUT, POST, COPY, DELETE, and LIST operations per second.

  Datacenter   Upgrade date (buckets created before this date use the old limits)
  AMS3         16 December 2020
  FRA1         04 November 2020
  NYC3         03 December 2020
  SFO2         15 November 2022
  SGP1         06 February 2021

If you need higher limits, contact support. Applications should retry with exponential backoff when receiving 503 Slow Down errors.
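A minimal retry sketch for handling 503 Slow Down responses, using exponential backoff with full jitter. The `SlowDown` exception is a hypothetical stand-in for whatever error your client library raises on a 503; the backoff schedule itself is standard practice, not a Spaces-specific requirement.

```python
import random
import time

class SlowDown(Exception):
    """Stand-in for a client library's 503 Slow Down error (hypothetical)."""

def backoff_delays(max_retries=5, base=0.5, cap=30.0):
    """Yield one delay per retry: exponential growth with full jitter."""
    for attempt in range(max_retries):
        yield random.uniform(0, min(cap, base * 2 ** attempt))

def with_backoff(do_request, max_retries=5, base=0.5, cap=30.0):
    """Call do_request(), retrying after a delay whenever it raises SlowDown."""
    for delay in backoff_delays(max_retries, base, cap):
        try:
            return do_request()
        except SlowDown:
            time.sleep(delay)
    return do_request()  # final attempt: let any error propagate
```

Full jitter (a uniform draw up to the exponential cap) spreads retries from many clients over time, which matters when the rate limit itself caused the failures.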

Object Size and Upload Limits

In general, several small parallel connections perform better than a single long-running connection. For more than 400 read requests per second, we recommend using the Spaces CDN.

  • PUT requests can be up to 5 GB.

  • Multipart upload parts can be up to 5 GB and must be at least 5 MiB (except the final part).

  • A multipart upload can include up to 10,000 parts with a maximum total size of 5 TB.

  • Multipart uploads and PUT requests sent to the CDN using presigned URLs have a maximum payload of 8100 KiB (7.91 MiB).

  • Using the control panel, you can delete up to 9999 files at once. For 10,000 or more files, use multiple requests or the API.

  • You can set permissions for all files in a folder, but recursive changes require a third-party client.

  • Large buckets may not display all objects in the control panel. You can view all objects using the Spaces API or S3-compatible tools like s3cmd or AWS S3.

  • Billable object size is rounded up to a 4 KiB minimum and includes both data and metadata.

  • If a client disconnects early, bandwidth is still billed for the bytes egressed before the disconnect, which can be up to the full object size.
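The multipart limits above (5 MiB minimum part size except the final part, 5 GB maximum part size, 10,000 parts, 5 TB total) can be turned into a simple part-size planner. This is an illustrative sketch of the arithmetic, not an SDK function; note the documentation mixes binary (MiB) and decimal (GB/TB) units, which the constants below preserve.

```python
MIB = 1024 ** 2
MIN_PART = 5 * MIB        # minimum part size (the final part may be smaller)
MAX_PART = 5 * 10 ** 9    # maximum part size: 5 GB
MAX_PARTS = 10_000        # maximum parts per multipart upload
MAX_OBJECT = 5 * 10 ** 12 # maximum total size: 5 TB

def plan_parts(object_size):
    """Return (part_size, part_count) satisfying the documented limits."""
    if object_size > MAX_OBJECT:
        raise ValueError("object exceeds the 5 TB multipart limit")
    # Smallest allowed part size that keeps the count within 10,000 parts.
    part_size = max(MIN_PART, -(-object_size // MAX_PARTS))  # ceiling division
    if part_size > MAX_PART:
        raise ValueError("no valid part size")  # unreachable below 5 TB
    part_count = -(-object_size // part_size)
    return part_size, part_count
```

For instance, a 100 MiB object splits into twenty 5 MiB parts, while a 5 TB object forces parts of at least 500 MB to stay within the 10,000-part ceiling.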

Bucket and Object Count Limits

  • You can only enable object versioning using the DigitalOcean API.

  • Buckets created in July 2021 or later support up to 100 million unversioned objects or 50 million versioned objects.

  • If a bucket exceeds these limits, distribute workloads across multiple buckets to maintain consistent performance.

  • Buckets created before July 2021 can request an upgrade to the higher limits.

Access Key Limits

  • You can create and edit access keys only through the control panel. CLI and API creation are not supported.

  • You cannot convert full access keys into limited access keys or vice versa.

  • Per-bucket access keys are incompatible with S3-compatible bucket policies. You cannot create a “Read” or “Read/Write/Delete” access key on a bucket that uses PutBucketPolicy, and vice versa.

Access Log Limits

  • Access logs can only be configured using the S3-compatible API or Terraform.

  • Configuration via the control panel, the DigitalOcean API, or doctl is not supported.

  • Spaces stores CDN and origin logs in the same target folder. To separate them, disable the CDN.

  • Spaces does not support specifying the same source and target bucket when calling PutBucketLogging.

Spaces Cold Storage Limits

  • Cold Storage buckets do not support static website hosting or website configuration features such as PutBucketWebsite.

  • Cold Storage buckets cannot be used as the destination bucket for access logs.

  • You cannot create Spaces Cold Storage buckets using the DigitalOcean API or CLI. To create Cold Storage buckets, use the DigitalOcean Control Panel.

  • CopyObject does not work between Cold Storage buckets in different regions.

  • CopyObject does not work between Standard Storage and Cold Storage buckets. To move objects between these tiers, use a tool that performs a GetObject to PutObject transfer (for example, Rclone when configured appropriately).

  • Cold Storage buckets do not support:

    • CDN integration or custom CDN endpoints
    • Bucket policies
    • Intelligent tiering
    • CORS configuration
  • Cold Storage buckets can only be accessed using signed S3 requests with valid access keys.

  • Spaces Cold Storage is available in all regions except BLR1. For details, see Spaces Availability.

  • Each Cold Storage bucket supports up to 450 write, 250 read, and 25 list requests per second.

  • Objects smaller than 128 KiB are billed as 128 KiB.

  • Retrievals have a 128 KiB minimum charge per read. Partial or small reads are billed as 128 KiB.

  • Each object has a 30-day minimum storage charge. Early deletions beyond the first 250 GiB each month incur proportional charges.

  • Overwriting an object is billed as a delete followed by a create.
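The Cold Storage billing minimums above can be modeled with two small helpers. These are illustrative calculations based only on the rules stated here (128 KiB minimum billed size for storage and retrievals, 30-day minimum storage duration); they are not an official pricing formula.

```python
KIB = 1024

def billed_size(size_bytes, minimum=128 * KIB):
    """Cold Storage bills objects and retrievals at a 128 KiB minimum."""
    return max(size_bytes, minimum)

def billed_storage_days(days_stored, minimum_days=30):
    """Objects deleted early are still billed for the 30-day minimum."""
    return max(days_stored, minimum_days)
```

For example, storing ten thousand 4 KiB objects is billed as if each were 128 KiB, so small-object workloads are usually a poor fit for the Cold Storage tier.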

Known Issues

  • Delete actions in Spaces may not display the correct IP address in the account’s security history.

  • Uploading hundreds or thousands of files via the control panel may fail. For large uploads, use s3cmd or another third-party tool.

  • The Spaces API does not currently support list-objects-v2 pagination.

  • CDN subdomain certificates may fail to upload during renewal, preventing SSL delivery once the original certificate expires. As a workaround, update the certificate via the API or contact support.

  • Accounts with many buckets and objects (for example, more than 500 buckets with 10,000+ objects each) may not see bucket statistics on the Spaces landing page.
