Spaces Object Storage is an S3-compatible object storage service. Spaces buckets let you store and serve large amounts of data, and the built-in CDN minimizes page load times and improves performance.
You can create up to 100 Spaces buckets and 200 access keys per account. If you need to raise these limits, contact support.
You can share access to all of the buckets on an account or team, but not to specific buckets.
Spaces currently does not support DigitalOcean tags or bucket tags.
A destroyed bucket remains pending destruction for at least one week, and you cannot reuse its name during that period. If you need to recover a bucket that is pending destruction, you can cancel its scheduled deletion in the control panel.
You cannot secure a CDN’s subdomain with a custom wildcard SSL certificate that is already being used elsewhere in your account. Instead, you’ll need to add a new custom certificate during the custom subdomain setup for your CDN.
SSL wildcard certificates will not match buckets if the bucket’s name contains a period (`.`). If you need browser-based access to a bucket, we recommend against using periods in the bucket’s name. Buckets support browser and API access with path-style requests and API access with vhost-style requests.
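If you do need programmatic access to a bucket whose name contains a period, one option is to force path-style addressing in an S3-compatible client so requests go to the regional endpoint rather than a bucket subdomain. The following is a minimal sketch in Python with boto3; the `nyc3` endpoint, bucket name, and credential placeholders are assumptions:

```python
import boto3
from botocore.client import Config

# Credentials and region endpoint are placeholders; boto3 can also read
# credentials from the environment or a shared config file.
client = boto3.client(
    "s3",
    endpoint_url="https://nyc3.digitaloceanspaces.com",
    aws_access_key_id="YOUR_SPACES_KEY",
    aws_secret_access_key="YOUR_SPACES_SECRET",
    # Path-style requests address the bucket as
    # https://nyc3.digitaloceanspaces.com/my.bucket/... instead of the
    # vhost-style my.bucket.nyc3.digitaloceanspaces.com, which a wildcard
    # certificate does not cover when the bucket name contains a period.
    config=Config(s3={"addressing_style": "path"}),
)

# List the first page of objects in the hypothetical bucket.
for obj in client.list_objects(Bucket="my.bucket").get("Contents", []):
    print(obj["Key"])
```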
Spaces automatically deletes any incomplete multipart uploads older than 90 days to prevent unnecessary billing and to free up storage.
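If you want to clean up sooner than the 90-day automatic deletion, you can list and abort incomplete multipart uploads yourself. A minimal sketch with boto3; the bucket name and the seven-day cutoff are assumptions:

```python
from datetime import datetime, timedelta, timezone

import boto3

# Credentials are read from the environment; the endpoint is a placeholder.
client = boto3.client("s3", endpoint_url="https://nyc3.digitaloceanspaces.com")

# Abort incomplete multipart uploads that started more than 7 days ago.
cutoff = datetime.now(timezone.utc) - timedelta(days=7)

response = client.list_multipart_uploads(Bucket="my-bucket")
for upload in response.get("Uploads", []):
    if upload["Initiated"] < cutoff:
        client.abort_multipart_upload(
            Bucket="my-bucket",
            Key=upload["Key"],
            UploadId=upload["UploadId"],
        )
```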
You cannot use Cloudflare Origin CA certificates for your custom subdomains.
All Spaces buckets have the following request rate limits:
Buckets created prior to the dates listed in the chart below, based on their datacenter, have the following per-bucket request rate limits:
- A combined limit on `PUT`, `POST`, `COPY`, `DELETE`, and `LIST` operations per second
| Datacenter | Date |
|---|---|
| AMS3 | 2020-12-16 |
| FRA1 | 2020-11-04 |
| NYC3 | 2020-12-03 |
| SFO2 | 2022-11-15 |
| SGP1 | 2021-02-06 |
All other Spaces buckets have the following per-bucket request rate limits:
During periods of high load, we may further limit `LIST` requests, if necessary.
Applications should retry with exponential backoff on `503 Slow Down` errors. Significantly exceeding these limits without exponential backoff may result in temporary suspension of access to particular objects or buckets.
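As one way to implement this, the sketch below wraps a GET in a retry loop with exponential backoff and jitter. It uses Python with boto3; the endpoint, bucket, and attempt count are assumptions, and boto3’s built-in retry configuration is an alternative.

```python
import random
import time

import boto3
from botocore.exceptions import ClientError

# Credentials are read from the environment; the endpoint is a placeholder.
client = boto3.client("s3", endpoint_url="https://nyc3.digitaloceanspaces.com")

def get_object_with_backoff(bucket, key, max_attempts=6):
    """Fetch an object, retrying 503 Slow Down responses with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return client.get_object(Bucket=bucket, Key=key)
        except ClientError as err:
            code = err.response.get("Error", {}).get("Code")
            status = err.response.get("ResponseMetadata", {}).get("HTTPStatusCode")
            if code == "SlowDown" or status == 503:
                # Sleep 1s, 2s, 4s, ... plus jitter before the next attempt.
                time.sleep(2 ** attempt + random.random())
                continue
            raise
    raise RuntimeError(f"Gave up fetching {key} after {max_attempts} attempts")
```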
In general, using a small number of parallel connections gives better performance than a single connection. If you plan to push more than 400 requests per second to Spaces, we recommend using the Spaces CDN or creating more buckets.
DigitalOcean’s internal DNS infrastructure also has rate limits in place to limit the impact of abusive actors. If you are making a large number of requests, we recommend implementing recursive DNS caching.
Buckets have the following file size limits:
- `PUT` requests can be at most 5 GB.
- Each part of a multipart upload can be at most 5 GB.
- Each part of a multipart upload must be at least 5 MiB, except for the final part.
- Multipart uploads can have at most 10,000 parts.
- The maximum supported total size of a multipart upload is 5 TB.
- `PUT` requests and individual parts of multipart uploads sent to the Spaces CDN using presigned URLs have a maximum payload of 8,100 KiB, or 7.91 MiB.
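As a minimal sketch of staying within the limits above, the following uses boto3’s transfer manager to upload a large file as a multipart upload. The endpoint, bucket, file name, and 64 MiB part size are assumptions, not required values:

```python
import boto3
from boto3.s3.transfer import TransferConfig

# Credentials are read from the environment; the endpoint and names are placeholders.
client = boto3.client("s3", endpoint_url="https://nyc3.digitaloceanspaces.com")

# Switch to multipart uploads above 64 MiB and use 64 MiB parts. Each part
# satisfies the 5 MiB minimum and 5 GB maximum; for objects approaching the
# 5 TB maximum, choose a larger part size so the upload needs at most
# 10,000 parts.
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,
    multipart_chunksize=64 * 1024 * 1024,
)

client.upload_file("backup.tar.gz", "my-bucket", "backups/backup.tar.gz", Config=config)
```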
Using the control panel, you can delete up to 9,999 files from a bucket at once. To delete 10,000 or more files, use multiple requests in the control panel or use the API to batch deletes more quickly.
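For example, the S3-compatible `DeleteObjects` operation accepts up to 1,000 keys per request, so deletes can be batched. A minimal sketch with boto3, using a hypothetical bucket and prefix; note that on a versioned bucket this adds delete markers rather than removing versions:

```python
import boto3

# Credentials are read from the environment; the endpoint is a placeholder.
client = boto3.client("s3", endpoint_url="https://nyc3.digitaloceanspaces.com")

def delete_prefix(bucket, prefix):
    """Delete every object under a prefix in batches of up to 1,000 keys."""
    while True:
        # ListObjects returns at most 1,000 keys per call by default.
        listing = client.list_objects(Bucket=bucket, Prefix=prefix)
        contents = listing.get("Contents", [])
        if not contents:
            break
        client.delete_objects(
            Bucket=bucket,
            Delete={"Objects": [{"Key": obj["Key"]} for obj in contents]},
        )

delete_prefix("my-bucket", "logs/2023/")
```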
You can set permissions for all of the files in a folder, but you currently need to use a third-party client to set permissions recursively.
If you have a large number of objects or multipart uploads, you may not be able to view all your objects in the control panel. You can still view all the objects using the Spaces API and S3-compatible tools such as s3cmd or the AWS CLI.
The minimum billable object size is 4 KiB. Storage is consumed by both the data and metadata of objects, and billed in multiples of 4 KiB, rounded up.
Early client disconnects may be metered up to the original size of the requested object. Metering reflects the total number of bytes egressed from Spaces prior to receipt of the disconnect.
Spaces buckets created in or after July 2021 support up to 100 million unversioned objects or 50 million versioned objects, instead of the previous limits of 3 million unversioned or 1.5 million versioned objects. If your bucket exceeds its object limits, we recommend distributing its workload across multiple buckets; otherwise, the bucket may require intermittent maintenance periods to ensure consistent performance. If your bucket was created before July 2021, you can request an upgrade to the new limits via a support ticket.
You can only create and edit access keys with per-bucket permissions through the control panel, not via the CLI or API.
You cannot convert “All Permissions” access keys into “Read” or “Read/Write/Delete” access keys, or vice versa.
Currently, per-bucket access keys are incompatible with S3-compatible bucket policies. In other words, you cannot currently create a “Read” or “Read/Write/Delete” access key on a bucket that is configured with a `PutBucketPolicy`-based bucket policy, and you cannot use the `PutBucketPolicy` S3 API on any bucket that a “Read” or “Read/Write/Delete” access key has access to.
In an account’s security history, delete actions on Spaces buckets do not record the correct IP address of the client that performed the action.
Uploading hundreds or thousands of files via cloud.digitalocean.com may not complete reliably. For this use case, use s3cmd or other third-party tools.
The Spaces API does not support `list-objects-v2` pagination.
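You can still page through large buckets using the original marker-based `ListObjects` pagination. A minimal sketch with boto3; the bucket name is a placeholder:

```python
import boto3

# Credentials are read from the environment; the endpoint is a placeholder.
client = boto3.client("s3", endpoint_url="https://nyc3.digitaloceanspaces.com")

def iter_objects(bucket):
    """Yield every object using marker-based (v1) ListObjects pagination."""
    marker = ""
    while True:
        page = client.list_objects(Bucket=bucket, Marker=marker)
        contents = page.get("Contents", [])
        yield from contents
        if not page.get("IsTruncated") or not contents:
            break
        marker = contents[-1]["Key"]  # resume after the last key returned

for obj in iter_objects("my-bucket"):
    print(obj["Key"], obj["Size"])
```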
CDN subdomain certificates can silently fail to upload to the CDN on renewal. This causes the CDN to stop serving assets with SSL once the original certificate expires. Additionally, when this happens, you cannot change the invalid certificate in the control panel.
Our engineers are working on a fix for the certificate renewal uploads and a fix in the control panel to support uploading or selecting a different certificate when a renewed certificate upload fails.
As a workaround, you can use the API to update the CDN certificate. You can also open a support ticket for help.
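A rough sketch of that workaround using the DigitalOcean API’s CDN endpoint update call; the endpoint ID and certificate ID are placeholders, and the exact request fields should be checked against the current API reference:

```python
import os

import requests

# Hypothetical IDs; look them up first (for example, via GET /v2/cdn/endpoints
# and GET /v2/certificates).
DO_TOKEN = os.environ["DIGITALOCEAN_TOKEN"]
CDN_ENDPOINT_ID = "your-cdn-endpoint-id"
NEW_CERTIFICATE_ID = "your-certificate-id"

response = requests.put(
    f"https://api.digitalocean.com/v2/cdn/endpoints/{CDN_ENDPOINT_ID}",
    headers={"Authorization": f"Bearer {DO_TOKEN}"},
    json={"certificate_id": NEW_CERTIFICATE_ID},
)
response.raise_for_status()
print(response.json())
```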
If you have a considerable number of Spaces buckets and objects within them, such as over 500 buckets with around 10,000 objects each, bucket statistics may become unavailable on the Spaces landing page.