Spaces Object Storage is an S3-compatible object storage service that lets you store and serve large amounts of data. Each Space is a bucket for you to store and serve files. The built-in Spaces CDN minimizes page load times and improves performance.
The Spaces API is interoperable with the AWS S3 API, meaning you can use existing S3 tools and libraries with Spaces. A common use case is managing Spaces buckets programmatically with AWS’ S3 Software Development Kits (SDKs). The S3 SDKs are available in a variety of languages, and most are compatible with the Spaces API.
After you set up and configure an SDK, you can follow the examples below to see how to perform common Spaces operations in JavaScript, Go, PHP, Python 3, and Ruby.
When using S3-focused tools, an S3 “key” is the name of a file in a bucket.
Install the AWS SDK using the package manager for your language of choice.
npm install @aws-sdk/client-s3
go get -u github.com/aws/aws-sdk-go
php composer.phar require aws/aws-sdk-php
pip install boto3
gem install aws-sdk-s3
To use the Spaces API, you need to create an access key and secret key for your bucket from the API page in the control panel.
The examples below rely on environment variables to access these keys. Export SPACES_KEY and SPACES_SECRET to your environment (e.g. export SPACES_KEY=EXAMPLE7UQOTHDTF3GK4) to make them available to your code.
To use Spaces with tools or libraries designed for the S3 API, you must configure the “endpoint” setting to point to your buckets. The value should be ${REGION}.digitaloceanspaces.com, where ${REGION} is the DigitalOcean datacenter region (e.g. nyc3) where your bucket is located.
Except in Python 3, these examples configure region to be us-east-1, an AWS region name; see the note on creating buckets below for why. The DigitalOcean datacenter region is based on the endpoint value, which is nyc3 in the examples below.

import { S3 } from "@aws-sdk/client-s3";
const s3Client = new S3({
forcePathStyle: false, // Configures to use subdomain/virtual calling format.
endpoint: "https://nyc3.digitaloceanspaces.com",
region: "us-east-1",
credentials: {
accessKeyId: process.env.SPACES_KEY,
secretAccessKey: process.env.SPACES_SECRET
}
});
export { s3Client };
package main
import (
"os"
// Additional imports needed for examples below
"fmt"
"io"
"strings"
"time"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/aws/aws-sdk-go/aws/session"
"github.com/aws/aws-sdk-go/service/s3"
)
func main() {
key := os.Getenv("SPACES_KEY")
secret := os.Getenv("SPACES_SECRET")
s3Config := &aws.Config{
Credentials: credentials.NewStaticCredentials(key, secret, ""),
Endpoint: aws.String("https://nyc3.digitaloceanspaces.com"),
Region: aws.String("us-east-1"),
S3ForcePathStyle: aws.Bool(false), // Configures to use subdomain/virtual calling format. In aws-sdk-go-v2, use o.UsePathStyle = false instead.
}
newSession, err := session.NewSession(s3Config)
if err != nil {
fmt.Println(err.Error())
return
}
s3Client := s3.New(newSession)
// ...
<?php
// Included aws/aws-sdk-php via Composer's autoloader
require 'vendor/autoload.php';
use Aws\S3\S3Client;
$client = new Aws\S3\S3Client([
'version' => 'latest',
'region' => 'us-east-1',
'endpoint' => 'https://nyc3.digitaloceanspaces.com',
'use_path_style_endpoint' => false, // Configures to use subdomain/virtual calling format.
'credentials' => [
'key' => getenv('SPACES_KEY'),
'secret' => getenv('SPACES_SECRET'),
],
]);
import os
import boto3
import botocore.config

session = boto3.session.Session()
client = session.client('s3',
config=botocore.config.Config(s3={'addressing_style': 'virtual'}),  # Configures to use subdomain/virtual calling format.
region_name='nyc3',
endpoint_url='https://nyc3.digitaloceanspaces.com',
aws_access_key_id=os.getenv('SPACES_KEY'),
aws_secret_access_key=os.getenv('SPACES_SECRET'))
require 'aws-sdk-s3'
client = Aws::S3::Client.new(
access_key_id: ENV['SPACES_KEY'],
secret_access_key: ENV['SPACES_SECRET'],
endpoint: 'https://nyc3.digitaloceanspaces.com',
force_path_style: false, # Configures to use subdomain/virtual calling format.
region: 'us-east-1'
)
Due to an AWS-specific behavior in all versions of the SDK except Python 3, you must specify an AWS region, such as us-east-1, in your configuration to successfully create a new bucket. This is because the SDK sends a different payload when creating a bucket if a custom region is specified, which results in an error. The region setting does not affect any other examples on this page.
Specifying us-east-1 does not result in slower performance, regardless of your bucket’s location. The SDK checks the region for verification purposes but never sends the payload there; instead, it sends the payload to the specified custom endpoint.
These examples create a new bucket in the region configured above.
Bucket names must be globally unique. Attempting to create a bucket with a name that is in use will fail with a BucketAlreadyExists error and return a 409 status code.
// Imports your configured client and any necessary S3 commands.
import { CreateBucketCommand } from "@aws-sdk/client-s3";
import { s3Client } from "./s3Client.js";
// Specifies the new Space's name.
const bucketParams = { Bucket: "example-bucket-name" };
// Creates the new Space.
const run = async () => {
try {
const data = await s3Client.send(new CreateBucketCommand(bucketParams));
console.log("Success", data.Location);
return data;
} catch (err) {
console.log("Error", err);
}
};
run();
params := &s3.CreateBucketInput{
Bucket: aws.String("example-bucket-name"),
}
_, err := s3Client.CreateBucket(params)
if err != nil {
fmt.Println(err.Error())
}
$client->createBucket([
'Bucket' => 'example-bucket-name',
]);
client.create_bucket(Bucket='example-bucket-name')
client.create_bucket({
bucket: "example-bucket-name",
})
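If a bucket name might already be taken, you can catch the failure instead of letting it crash your script. Here is a minimal sketch using the boto3 client configured above (the helper name is hypothetical):

import botocore.exceptions

def create_bucket_safely(client, name):
    # Hypothetical helper: attempts creation and reports name collisions.
    try:
        client.create_bucket(Bucket=name)
        print('Created', name)
    except botocore.exceptions.ClientError as err:
        # A taken name returns a 409 with the code BucketAlreadyExists.
        if err.response['Error']['Code'] == 'BucketAlreadyExists':
            print(name, 'is already in use; pick a different name')
        else:
            raise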
These examples list all of your account’s buckets in your client’s endpoint region by retrieving the list of buckets from the API and looping through them to print their names.
// Imports your configured client and any necessary S3 commands.
import { ListBucketsCommand } from "@aws-sdk/client-s3";
import { s3Client } from "./path/to/s3Client.js";
// Returns a list of Spaces in your region.
export const run = async () => {
try {
const data = await s3Client.send(new ListBucketsCommand({}));
console.log("Success", data.Buckets);
return data; // For unit tests.
} catch (err) {
console.log("Error", err);
}
};
run();
spaces, err := s3Client.ListBuckets(nil)
if err != nil {
fmt.Println(err.Error())
return
}
for _, b := range spaces.Buckets {
fmt.Println(aws.StringValue(b.Name))
}
$spaces = $client->listBuckets();
foreach ($spaces['Buckets'] as $space){
echo $space['Name']."\n";
}
response = client.list_buckets()
for space in response['Buckets']:
print(space['Name'])
spaces = client.list_buckets()
spaces.buckets.each do |space|
puts "#{space.name}"
end
These examples upload a file to a bucket using the private canned ACL so the uploaded file is not publicly accessible. The examples take the file contents as the Body argument. You can use the SourceFile argument to use the path to the file instead, but not all SDKs support this.
In the S3 API, “canned ACLs” are pre-defined sets of permissions that can be used to manage access to buckets and objects. Spaces only supports the private and public-read canned ACLs.
// Imports your configured client and any necessary S3 commands.
import { PutObjectCommand } from "@aws-sdk/client-s3";
import { s3Client } from "./path/to/s3Client.js";
// Specifies a path within your bucket and the file to upload.
export const bucketParams = {
Bucket: "example-bucket-name",
Key: "example.txt",
Body: "content"
};
// Uploads the specified file to the chosen path.
export const run = async () => {
try {
const data = await s3Client.send(new PutObjectCommand(bucketParams));
console.log(
"Successfully uploaded object: " +
bucketParams.Bucket +
"/" +
bucketParams.Key
);
return data;
} catch (err) {
console.log("Error", err);
}
};
run();
object := s3.PutObjectInput{
Bucket: aws.String("example-bucket-name"),
Key: aws.String("file.ext"),
Body: strings.NewReader("The contents of the file."),
ACL: aws.String("private"),
Metadata: map[string]*string{
"x-amz-meta-my-key": aws.String("your-value"),
},
}
_, err := s3Client.PutObject(&object)
if err != nil {
fmt.Println(err.Error())
}
$client->putObject([
'Bucket' => 'example-bucket-name',
'Key' => 'file.ext',
'Body' => 'The contents of the file.',
'ACL' => 'private',
'Metadata' => array(
'x-amz-meta-my-key' => 'your-value'
)
]);
client.put_object(Bucket='example-bucket-name',
Key='file.ext',
Body=b'The contents of the file.',
ACL='private',
Metadata={
'x-amz-meta-my-key': 'your-value'
}
)
client.put_object({
bucket: "example-bucket-name",
key: "file.ext",
body: "The contents of the file.",
acl: "private",
metadata: {
"x-amz-meta-my-key" => "your-value"
}
})
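To upload a file from disk rather than from an in-memory string, boto3 also offers upload_file, which streams the file for you. A minimal sketch, assuming the client configured above and a hypothetical local path:

# Uploads /tmp/file.ext (hypothetical path) to the bucket with a private ACL.
client.upload_file('/tmp/file.ext',
                   'example-bucket-name',
                   'file.ext',
                   ExtraArgs={'ACL': 'private'})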
These examples list all of the files stored in a specific bucket by retrieving the list of files from the API and looping through them to print their names.
// Imports your configured client and any necessary S3 commands.
import { ListObjectsCommand } from "@aws-sdk/client-s3";
import { s3Client } from "./path/to/s3Client.js";
// Specifies the bucket containing the objects to list.
export const bucketParams = { Bucket: "example-bucket-name" };
// Returns a list of objects in your specified path.
export const run = async () => {
try {
const data = await s3Client.send(new ListObjectsCommand(bucketParams));
console.log("Success", data);
return data;
} catch (err) {
console.log("Error", err);
}
};
run();
input := &s3.ListObjectsInput{
Bucket: aws.String("example-bucket-name"),
}
objects, err := s3Client.ListObjects(input)
if err != nil {
fmt.Println(err.Error())
}
for _, obj := range objects.Contents {
fmt.Println(aws.StringValue(obj.Key))
}
$objects = $client->listObjects([
'Bucket' => 'example-bucket-name',
]);
foreach ($objects['Contents'] as $obj){
echo $obj['Key']."\n";
}
response = client.list_objects(Bucket='example-bucket-name')
for obj in response['Contents']:
print(obj['Key'])
objects = client.list_objects({bucket: "example-bucket-name"})
objects.contents.each do |obj|
puts "#{obj.key}"
end
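Note that a single ListObjects response returns at most 1,000 keys. For buckets with more objects than that, a paginator loops over every page for you; a minimal sketch with the boto3 client configured above:

# Iterates over all objects in the bucket, one page at a time.
paginator = client.get_paginator('list_objects')
for page in paginator.paginate(Bucket='example-bucket-name'):
    for obj in page.get('Contents', []):
        print(obj['Key'])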
These examples make an authenticated request to download a file from a specific bucket. They will download a file stored in Spaces (file.ext) to /tmp/local-file.ext on the local file system.
// Imports your configured client and any necessary S3 commands.
import { GetObjectCommand } from "@aws-sdk/client-s3";
import { writeFileSync } from "fs";
import { s3Client } from "./path/to/s3Client.js";
// Specifies a path within your bucket and the file to download.
const bucketParams = {
Bucket: "example-bucket-name",
Key: "file.ext"
};
// Function to turn the file's body into a string.
const streamToString = (stream) => {
const chunks = [];
return new Promise((resolve, reject) => {
stream.on('data', (chunk) => chunks.push(Buffer.from(chunk)));
stream.on('error', (err) => reject(err));
stream.on('end', () => resolve(Buffer.concat(chunks).toString('utf8')));
});
};
// Downloads your file and saves its contents to /tmp/local-file.ext.
const run = async () => {
try {
const response = await s3Client.send(new GetObjectCommand(bucketParams));
const data = await streamToString(response.Body);
writeFileSync("/tmp/local-file.ext", data);
console.log("Success", data);
return data;
} catch (err) {
console.log("Error", err);
}
};
run();
input := &s3.GetObjectInput{
Bucket: aws.String("example-bucket-name"),
Key: aws.String("file.ext"),
}
result, err := s3Client.GetObject(input)
if err != nil {
fmt.Println(err.Error())
}
out, err := os.Create("/tmp/local-file.ext")
if err != nil {
fmt.Println(err.Error())
return
}
defer out.Close()
_, err = io.Copy(out, result.Body)
if err != nil {
fmt.Println(err.Error())
}
$result = $client->getObject([
'Bucket' => 'example-bucket-name',
'Key' => 'file.ext',
]);
file_put_contents('/tmp/local-file.ext', $result['Body']);
client.download_file('example-bucket-name',
'file.ext',
'/tmp/local-file.ext')
client.get_object(
bucket: 'example-bucket-name',
key: 'file.ext',
response_target: '/tmp/local-file.ext'
)
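Downloading a key that does not exist raises an error rather than returning empty content. A minimal sketch with the boto3 client configured above that handles the missing-key case:

try:
    client.download_file('example-bucket-name', 'file.ext', '/tmp/local-file.ext')
except client.exceptions.ClientError as err:
    # download_file surfaces a missing key as a 404 error code.
    if err.response['Error']['Code'] == '404':
        print('file.ext does not exist in this bucket')
    else:
        raise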
Using pre-signed URLs, you can share private files for a limited period of time with people who have the link. In the control panel, these are called Quick Share links.
These examples generate pre-signed URLs for a file (file.ext) in a bucket that last for five minutes.
You can use pre-signed URLs with the Spaces CDN. To do so, configure your SDK or S3 tool to use the non-CDN endpoint, generate a pre-signed URL for a GetObject request, then modify the hostname in the URL to be the CDN hostname (<space-name>.<region>.cdn.digitaloceanspaces.com, unless the bucket uses a custom hostname).
// Imports your configured client and any necessary S3 commands.
import { GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import { s3Client } from "./path/to/s3Client.js";
// Specifies a path within your Space and the file to download.
export const bucketParams = {
Bucket: "example-bucket-name",
Key: "file.ext"
};
// Generates the URL.
export const run = async () => {
try {
const url = await getSignedUrl(s3Client, new GetObjectCommand(bucketParams), { expiresIn: 5 * 60 }); // 5-minute expiration, adjustable.
console.log("URL:", url);
return url;
} catch (err) {
console.log("Error", err);
}
};
run();
req, _ := s3Client.GetObjectRequest(&s3.GetObjectInput{
Bucket: aws.String("example-bucket-name"),
Key: aws.String("file.ext"),
})
urlStr, err := req.Presign(5 * time.Minute)
if err != nil {
fmt.Println(err.Error())
}
fmt.Println(urlStr)
$cmd = $client->getCommand('GetObject', [
'Bucket' => 'example-bucket-name',
'Key' => 'file.ext'
]);
$request = $client->createPresignedRequest($cmd, '+5 minutes');
$presignedUrl = (string) $request->getUri();
echo $presignedUrl."\n";
url = client.generate_presigned_url(ClientMethod='get_object',
Params={'Bucket': 'example-bucket-name',
'Key': 'file.ext'},
ExpiresIn=300)
print(url)
signer = Aws::S3::Presigner.new(client: client)
url = signer.presigned_url(
:get_object,
bucket: "example-bucket-name",
key: "file.ext",
expires_in: 300
)
puts url
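To pair a pre-signed URL with the Spaces CDN as described above, generate it against the non-CDN endpoint and then swap in the CDN hostname. A minimal sketch in Python; the hostname shown follows the default <space-name>.<region>.cdn.digitaloceanspaces.com pattern, so substitute your own bucket and region:

from urllib.parse import urlparse, urlunparse

# Sign against the regular endpoint first (client configured as above).
url = client.generate_presigned_url(ClientMethod='get_object',
                                    Params={'Bucket': 'example-bucket-name',
                                            'Key': 'file.ext'},
                                    ExpiresIn=300)

# Replace only the hostname; the path, query string, and signature stay intact.
parts = urlparse(url)
cdn_url = urlunparse(parts._replace(netloc='example-bucket-name.nyc3.cdn.digitaloceanspaces.com'))
print(cdn_url)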
You can also use pre-signed URLs to grant permission to upload a specific file using a PUT request. These URLs are only valid for a limited time period. These examples generate pre-signed URLs that last for five minutes.
To create the pre-signed URL, you must specify the filename and its expected content type, like text or application/json.
// Imports your configured client and any necessary S3 commands.
import { PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import { s3Client } from "./path/to/s3Client.js";
// Specifies path, file, and content type.
export const bucketParams = {
Bucket: "example-bucket-name",
Key: "file.ext",
ContentType: "text"
};
// Generates the URL.
export const run = async () => {
try {
const url = await getSignedUrl(s3Client, new PutObjectCommand(bucketParams), { expiresIn: 5 * 60 }); // 5-minute expiration, adjustable.
console.log("URL:", url);
return url;
} catch (err) {
console.log("Error", err);
}
};
run();
req, _ := s3Client.PutObjectRequest(&s3.PutObjectInput{
Bucket: aws.String("example-bucket-name"),
Key: aws.String("new-file.ext"),
})
urlStr, err := req.Presign(5 * time.Minute)
if err != nil {
fmt.Println(err.Error())
}
fmt.Println(urlStr)
$cmd = $client->getCommand('PutObject', [
'Bucket' => 'example-bucket-name',
'Key' => 'new-file.ext'
]);
$request = $client->createPresignedRequest($cmd, '+5 minutes');
$presignedUrl = (string) $request->getUri();
echo $presignedUrl."\n";
url = client.generate_presigned_url(ClientMethod='put_object',
Params={'Bucket': 'example-bucket-name',
'Key': 'new-file.ext'},
ExpiresIn=300)
print(url)
signer = Aws::S3::Presigner.new(client: client)
url = signer.presigned_url(
:put_object,
bucket: "example-bucket-name",
key: "new-file.ext",
expires_in: 300
)
puts url
You can use the resulting URL to upload the file using standard HTTP requests without needing access to the bucket’s secret key. The content type and file name used in the upload must match the ones used when generating the URL. For example:
curl -X PUT \
-H "Content-Type: text" \
-d "The contents of the file." \
"https://example-bucket-name.nyc3.digitaloceanspaces.com/new-file.ext?AWSAccessKeyId=EXAMPLE7UQOTHDTF3GK4&Content-Type=text&Expires=1580419378&Signature=YIXPlynk4BALXE6fH7vqbnwjSEw%3D"
Note that Spaces does not support pre-signed POST uploads, such as those generated by CreatePresignedPost().
These examples delete a file (example-file-to-delete.ext) from a specific bucket.
) from a specific bucket.
// Imports your configured client and any necessary S3 commands.
import { DeleteObjectCommand } from "@aws-sdk/client-s3";
import { s3Client } from "./path/to/s3Client.js";
// Specifies a path within your bucket and the file to delete.
export const bucketParams = { Bucket: "example-bucket-name", Key: "example-file-to-delete.ext" };
// Deletes the specified file.
export const run = async () => {
try {
const data = await s3Client.send(new DeleteObjectCommand(bucketParams));
console.log("Success", data);
return data;
} catch (err) {
console.log("Error", err);
}
};
run();
input := &s3.DeleteObjectInput{
Bucket: aws.String("example-bucket-name"),
Key: aws.String("example-file-to-delete.ext"),
}
_, err := s3Client.DeleteObject(input)
if err != nil {
fmt.Println(err.Error())
}
$client->deleteObject([
'Bucket' => 'example-bucket-name',
'Key' => 'example-file-to-delete.ext',
]);
client.delete_object(Bucket='example-bucket-name',
Key='example-file-to-delete.ext')
client.delete_object({
bucket: 'example-bucket-name',
key: 'example-file-to-delete.ext'
})
These examples delete a bucket. To do so, you must first delete all files in the bucket. Attempting to delete a bucket that still contains files will fail with a BucketNotEmpty error and return a 409 status code.
// Imports your configured client and any necessary S3 commands.
import { DeleteBucketCommand } from "@aws-sdk/client-s3";
import { s3Client } from "./path/to/s3Client.js";
// Specifies the name of the bucket to delete.
export const bucketParams = { Bucket: "example-bucket-name" };
// Deletes specified bucket.
export const run = async () => {
try {
const data = await s3Client.send(new DeleteBucketCommand(bucketParams));
console.log("Success - bucket deleted");
return data;
} catch (err) {
console.log("Error", err);
}
};
run();
input := &s3.DeleteBucketInput{
Bucket: aws.String("example-bucket-name"),
}
_, err := s3Client.DeleteBucket(input)
if err != nil {
fmt.Println(err.Error())
}
$client->deleteBucket([
'Bucket' => 'example-bucket-name',
]);
client.delete_bucket(Bucket='example-bucket-name')
client.delete_bucket({bucket: 'example-bucket-name'})
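Because the bucket must be empty first, a teardown script typically deletes every object and then the bucket. A minimal sketch with the boto3 client configured above (for more than 1,000 objects, combine this with the paginator shown earlier):

# Empties the bucket, then deletes it.
response = client.list_objects(Bucket='example-bucket-name')
for obj in response.get('Contents', []):
    client.delete_object(Bucket='example-bucket-name', Key=obj['Key'])
client.delete_bucket(Bucket='example-bucket-name')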
For more details on compatibility with the S3 API, see the Spaces API documentation.
The full reference documentation for each of the SDKs used above is available on the AWS documentation site.