Spaces is an S3-compatible object storage service that lets you store and serve large amounts of data. Each Space is a bucket for you to store and serve files. The free, built-in Spaces CDN minimizes page load times, improves performance, and reduces bandwidth and infrastructure costs.
You can create up to 100 Spaces per account. If you need to raise this limit, contact support.
You can share access to all of the Spaces on an account or team, but not to specific Spaces.
While a Space is pending destruction, a period lasting at least one week, you cannot reuse that Space's name. If you need to recover a Space that is pending destruction, you can cancel its scheduled deletion in the control panel.
You cannot secure a CDN's subdomain with a custom wildcard SSL certificate that is already being used elsewhere in your account. Instead, you need to add a new custom certificate during the custom subdomain setup for your CDN.
SSL wildcard certificates will not match Spaces if the Space name contains a period (.). If you need browser-based access to a Space, we recommend against using periods in the Space name. Spaces supports browser and API access with path-style requests and API access with vhost-style requests.
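The two request styles differ only in where the Space name appears in the URL, which is why a period in a vhost-style subdomain breaks wildcard certificate matching. A minimal sketch using a hypothetical Space name and region:

```python
# Illustrates path-style vs. vhost-style request URLs for Spaces.
# "my-space" and "nyc3" are example values, not real resources.
def path_style_url(region: str, space: str, key: str) -> str:
    # Path-style: the Space name appears in the URL path.
    return f"https://{region}.digitaloceanspaces.com/{space}/{key}"

def vhost_style_url(region: str, space: str, key: str) -> str:
    # Vhost-style: the Space name becomes a subdomain. A period in the
    # Space name adds an extra DNS label, which a wildcard certificate
    # (matching only one label) cannot cover.
    return f"https://{space}.{region}.digitaloceanspaces.com/{key}"

print(path_style_url("nyc3", "my-space", "img/logo.png"))
# https://nyc3.digitaloceanspaces.com/my-space/img/logo.png
print(vhost_style_url("nyc3", "my-space", "img/logo.png"))
# https://my-space.nyc3.digitaloceanspaces.com/img/logo.png
```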
Spaces have the following request rate limits:
750 requests (any operation) per IP address per second to all Spaces on an account.
240 total operations per second to any individual Space.
LIST operations per second to any individual Space. We may further limit LIST operations if necessary under periods of high load.
COPY requests per 5 minutes to any individual object in a Space.
Applications should retry with exponential backoff on 503 Slow Down errors. Significantly exceeding these limits without exponential backoff may result in temporary suspension of access to particular objects or Spaces.
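A minimal sketch of that retry pattern, using full-jitter exponential backoff. The `request` callable here is a stand-in for whatever Spaces call your application makes; the base delay and cap are example values, not service-mandated:

```python
import random
import time

def backoff_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    """Full-jitter backoff: a random delay in [0, min(cap, base * 2**attempt)]."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))

def with_backoff(request, max_attempts: int = 6):
    """Retry `request` (a callable returning (status, body)) on 503 responses.

    `request` is a placeholder for your actual Spaces call, such as an
    HTTP PUT issued through an S3-compatible client.
    """
    for attempt in range(max_attempts):
        status, body = request()
        if status != 503:
            return status, body
        time.sleep(backoff_delay(attempt))
    return status, body  # give up after max_attempts; caller handles the 503
```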
In general, using a small number of parallel connections gives better performance than a single connection. If you plan to push more than 200 requests per second to Spaces, we recommend using the Spaces CDN or creating more Spaces.
DigitalOcean’s internal DNS infrastructure also has rate limits in place to limit the impact of abusive actors. If you are making a large number of requests, we recommend implementing recursive DNS caching.
Spaces have the following file size limits:
PUT requests can be at most 5 GB.
Each part of a multi-part upload can be at most 5 GB.
Each part of a multi-part upload must be at least 5 MiB, except for the final part.
Multi-part uploads can have at most 10,000 parts.
The maximum supported total size of a multi-part upload is 5 TB.
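Taken together, these limits constrain the part size you can choose for a given object. One way to pick a compliant size is sketched below; the binary (MiB/GiB) constants and the decimal 5 TB cap are this example's assumptions about how the limits are accounted:

```python
import math

MIN_PART = 5 * 1024 ** 2    # 5 MiB minimum per part (except the final part)
MAX_PART = 5 * 1024 ** 3    # 5 GB per-part ceiling (GiB assumed here)
MAX_PARTS = 10_000          # maximum number of parts per upload
MAX_TOTAL = 5 * 1000 ** 4   # 5 TB total upload cap (decimal TB assumed)

def choose_part_size(total_size: int) -> int:
    """Pick a part size keeping a multi-part upload within all the limits."""
    if total_size > MAX_TOTAL:
        raise ValueError("object exceeds the 5 TB multi-part upload maximum")
    # Smallest part size that stays under MAX_PARTS, floored at MIN_PART.
    part = max(MIN_PART, math.ceil(total_size / MAX_PARTS))
    if part > MAX_PART:
        raise ValueError("no compliant part size exists for this object")
    return part
```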
Using the control panel, you can delete up to 9,999 files from a Space at once. To delete 10,000 or more files, use multiple requests in the control panel or use the API to batch deletes more quickly.
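When batching deletes through the API, the S3 DeleteObjects operation accepts at most 1,000 keys per request, so a large delete is usually split into chunks. A sketch, with `delete_batch` standing in for your S3-compatible client's actual delete call:

```python
from itertools import islice
from typing import Callable, Iterable, Iterator, List

def batches(keys: Iterable[str], size: int = 1000) -> Iterator[List[str]]:
    """Yield successive chunks of at most `size` keys (DeleteObjects limit)."""
    it = iter(keys)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            return
        yield chunk

def delete_all(keys: Iterable[str], delete_batch: Callable[[List[str]], None]) -> int:
    """Delete every key, one batched request at a time; returns keys deleted.

    `delete_batch` is a placeholder for the real API call, e.g. a
    DeleteObjects request through an S3-compatible client.
    """
    deleted = 0
    for chunk in batches(keys):
        delete_batch(chunk)
        deleted += len(chunk)
    return deleted
```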
You can set permissions for all the files in a folder, but the control panel does not currently do this recursively; to set permissions recursively, use a third-party client.
If you have a large number of objects or multi-part uploads, you may not be able to view all your objects in the control panel. You can still view all objects using the Spaces API and S3-compatible tools such as s3cmd or AWS S3.
The minimum billable object size is 4 KiB. Storage is consumed by both the data and metadata of objects, and billed in multiples of 4 KiB, rounded up.
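As a worked example of that rounding, a hypothetical helper that computes the billable size of an object from its combined data-plus-metadata size:

```python
BLOCK = 4 * 1024  # 4 KiB billing granularity (also the minimum billable size)

def billable_bytes(object_bytes: int) -> int:
    """Round an object's total size (data plus metadata) up to the next
    4 KiB multiple, with a 4 KiB floor. Illustrative only; actual
    metering is defined by the service, not this sketch."""
    if object_bytes <= 0:
        return BLOCK
    # Ceiling division without floats: -(-a // b) == ceil(a / b)
    return -(-object_bytes // BLOCK) * BLOCK
```

So a 1-byte object bills as 4 KiB, and a 4,097-byte object bills as 8 KiB.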
Early client disconnects may be metered up to the original size of the requested object. Metering reflects the total number of bytes egressed from Spaces before the disconnect was received.
Individual Spaces with more than 3 million unversioned objects or 1.5 million objects with versioning may require intermittent maintenance periods to ensure consistent performance. Each version of a versioned object counts towards these limits. If you have Spaces that reach these limits, we recommend that you distribute your workload across multiple Spaces instead.
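One simple way to distribute a workload across multiple Spaces is to choose the Space by a stable hash of each object key, so every key maps to the same Space on every request. A sketch, with hypothetical Space names:

```python
import hashlib

def space_for_key(key: str, spaces: list) -> str:
    """Map an object key to one Space deterministically via a stable hash.

    Using sha256 (rather than Python's built-in hash()) keeps the mapping
    identical across processes and restarts.
    """
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return spaces[int.from_bytes(digest[:8], "big") % len(spaces)]

# Example with made-up Space names:
shards = ["assets-00", "assets-01", "assets-02"]
print(space_for_key("img/logo.png", shards))
```

Note that simple modulo sharding remaps most keys if you change the number of Spaces; consistent hashing avoids that, at the cost of more code.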
In an account's security history, Space delete actions do not record the correct IP address of the actor that conducted the action.
Uploading hundreds or thousands of files via cloud.digitalocean.com may not complete reliably. For this use case, use s3cmd or other third-party tools.
The Spaces API does not support
File metadata headers, like Content-Encoding, are not passed through the CDN. Metadata headers are correctly set when fetching content directly from the origin.
CDN subdomain certificates can silently fail to upload to the CDN on renewal. This causes the CDN to stop serving assets with SSL once the original certificate expires. Additionally, when this happens, you cannot change the invalid certificate in the control panel.
Our engineers are working on a fix for the certificate renewal uploads and a fix in the control panel to support uploading or selecting a different certificate when a renewed certificate upload fails.