Using DigitalOcean Spaces with AWS S3 SDKs

Spaces Object Storage is an S3-compatible object storage service that lets you store and serve large amounts of data. Each Space is a bucket for you to store and serve files. The built-in Spaces CDN minimizes page load times and improves performance.

The Spaces API is interoperable with the AWS S3 API, meaning you can use existing S3 tools and libraries with Spaces. A common use case is managing Spaces buckets programmatically with AWS’ S3 Software Development Kits (SDKs). The S3 SDKs are available in a variety of languages, and most are compatible with the Spaces API.

After you set up and configure an SDK, you can follow the examples below to see how to perform common Spaces operations in JavaScript, Go, PHP, Python 3, and Ruby.

When using S3-focused tools, an S3 “key” is the name of a file in a bucket.

Setup and Configuration

Install the SDK

Install the AWS SDK using the package manager for your language of choice.

npm install @aws-sdk/client-s3
go get -u github.com/aws/aws-sdk-go
php composer.phar require aws/aws-sdk-php
pip install boto3
gem install aws-sdk-s3

Create Access Keys

To use the Spaces API, you need to create an access key and secret key for your bucket from the API page in the control panel.

The examples below rely on environment variables to access these keys. Export SPACES_KEY and SPACES_SECRET to your environment (e.g. export SPACES_KEY=EXAMPLE7UQOTHDTF3GK4) to make them available to your code.
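
For example, in a Bash-compatible shell (the secret value here is a placeholder):

export SPACES_KEY=EXAMPLE7UQOTHDTF3GK4
export SPACES_SECRET=your_secret_key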

Configure a Client

To use Spaces with tools or libraries designed for the S3 API, you must configure the “endpoint” setting to point to Spaces. The value should be ${REGION}.digitaloceanspaces.com, where ${REGION} is the DigitalOcean datacenter region (e.g. nyc3) where your bucket is located.

Note
To successfully create a new bucket, the SDKs below (with the exception of Python 3’s boto3) require the region to be us-east-1, an AWS region name. The DigitalOcean datacenter region is determined by the endpoint value, which is nyc3 in the examples below.
import { S3 } from "@aws-sdk/client-s3";

const s3Client = new S3({
    forcePathStyle: false, // Configures to use subdomain/virtual calling format.
    endpoint: "https://nyc3.digitaloceanspaces.com",
    region: "us-east-1",
    credentials: {
      accessKeyId: process.env.SPACES_KEY,
      secretAccessKey: process.env.SPACES_SECRET
    }
});

export { s3Client };
package main

import (
    "os"
    // Additional imports needed for examples below
    "fmt"
    "io"
    "strings"
    "time"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

func main() {
    key := os.Getenv("SPACES_KEY")
    secret := os.Getenv("SPACES_SECRET")

    s3Config := &aws.Config{
        Credentials: credentials.NewStaticCredentials(key, secret, ""),
        Endpoint:    aws.String("https://nyc3.digitaloceanspaces.com"),
        Region:      aws.String("us-east-1"),
        S3ForcePathStyle: aws.Bool(false), // Configures to use subdomain/virtual calling format.
    }

    newSession, err := session.NewSession(s3Config)
    if err != nil {
        fmt.Println(err.Error())
        return
    }
    s3Client := s3.New(newSession)

    // ...
<?php

// Included aws/aws-sdk-php via Composer's autoloader
require 'vendor/autoload.php';
use Aws\S3\S3Client;

$client = new Aws\S3\S3Client([
        'version' => 'latest',
        'region'  => 'us-east-1',
        'endpoint' => 'https://nyc3.digitaloceanspaces.com',
        'use_path_style_endpoint' => false, // Configures to use subdomain/virtual calling format.
        'credentials' => [
                'key'    => getenv('SPACES_KEY'),
                'secret' => getenv('SPACES_SECRET'),
            ],
]);
import os
import boto3
import botocore.config

session = boto3.session.Session()
client = session.client('s3',
                        # Configures to use subdomain/virtual calling format.
                        config=botocore.config.Config(s3={'addressing_style': 'virtual'}),
                        region_name='nyc3',
                        endpoint_url='https://nyc3.digitaloceanspaces.com',
                        aws_access_key_id=os.getenv('SPACES_KEY'),
                        aws_secret_access_key=os.getenv('SPACES_SECRET'))
require 'aws-sdk-s3'

client = Aws::S3::Client.new(
  access_key_id: ENV['SPACES_KEY'],
  secret_access_key: ENV['SPACES_SECRET'],
  endpoint: 'https://nyc3.digitaloceanspaces.com',
  force_path_style: false, # Configures to use subdomain/virtual calling format.
  region: 'us-east-1'
)

Due to an AWS-specific behavior in every SDK above except Python 3’s boto3, you must specify an AWS region name, such as us-east-1, in your configuration to successfully create a new bucket. This is because, when creating a bucket, the SDK sends an entirely different payload if a custom region is specified, which results in an error. The region setting does not affect any other examples on this page.

Note that specifying us-east-1 does not result in slower performance, regardless of your bucket’s location. The SDK checks the region for verification purposes but never sends the payload there. Instead, it sends the payload to the specified custom endpoint.

Usage Examples

Create a New Bucket

These examples create a new bucket in the region configured above.

Bucket names must be globally unique. Attempting to create a bucket with a name that is in use will fail with a BucketAlreadyExists error and return a 409 status code.
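
If you want to handle that failure in code rather than let it surface, here is a minimal Python 3 sketch (using the boto3 client configured above) that catches the error code named above; any retry-or-rename logic is left to you:

# Attempts to create the bucket and handles a global name collision.
import botocore.exceptions

try:
    client.create_bucket(Bucket='example-bucket-name')
except botocore.exceptions.ClientError as err:
    if err.response['Error']['Code'] == 'BucketAlreadyExists':
        print('Bucket name already in use; choose a globally unique name.')
    else:
        raise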

// Imports your configured client and any necessary S3 commands.
import { CreateBucketCommand } from "@aws-sdk/client-s3";
import { s3Client } from "./s3Client.js";

// Specifies the new bucket's name.
const bucketParams = { Bucket: "example-bucket-name" };

// Creates the new Space.
const run = async () => {
  try {
    const data = await s3Client.send(new CreateBucketCommand(bucketParams));
    console.log("Success", data.Location);
    return data;
  } catch (err) {
    console.log("Error", err);
  }
};

run();
    params := &s3.CreateBucketInput{
        Bucket: aws.String("example-bucket-name"),
    }

    _, err := s3Client.CreateBucket(params)
    if err != nil {
        fmt.Println(err.Error())
    }
$client->createBucket([
    'Bucket' => 'example-bucket-name',
]);
client.create_bucket(Bucket='example-bucket-name')
client.create_bucket({
  bucket: "example-bucket-name",
})

List All Buckets in a Region

These examples list all of your account’s buckets in your client’s endpoint region by retrieving the list of buckets from the API and looping through them to print their names.

// Imports your configured client and any necessary S3 commands.
import { ListBucketsCommand } from "@aws-sdk/client-s3";
import { s3Client } from "./path/to/s3Client.js";

// Returns a list of Spaces in your region.
export const run = async () => {
  try {
    const data = await s3Client.send(new ListBucketsCommand({}));
    console.log("Success", data.Buckets);
    return data; // For unit tests.
  } catch (err) {
    console.log("Error", err);
  }
};

run();
    spaces, err := s3Client.ListBuckets(nil)
    if err != nil {
        fmt.Println(err.Error())
        return
    }

    for _, b := range spaces.Buckets {
        fmt.Println(aws.StringValue(b.Name))
    }
$spaces = $client->listBuckets();
foreach ($spaces['Buckets'] as $space){
    echo $space['Name']."\n";
}
response = client.list_buckets()
for space in response['Buckets']:
    print(space['Name'])
spaces = client.list_buckets()
spaces.buckets.each do |space|
  puts "#{space.name}"
end

Upload a File to a Bucket

These examples upload a file to a bucket using the private canned ACL so the uploaded file is not publicly accessible. These examples take the file contents as the Body argument. You can use the SourceFile argument to use the path to the file instead, but not all SDKs support this.
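
Boto3, for example, does not support a SourceFile argument on put_object; a sketch of the path-based equivalent in Python 3, using its upload_file helper with the client configured above (the local path is a placeholder):

# Uploads a local file by path instead of passing its contents as Body.
# upload_file streams the file, handling large files via multipart upload.
client.upload_file('/path/to/file.ext', 'example-bucket-name', 'file.ext',
                   ExtraArgs={'ACL': 'private'})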

In the S3 API, “canned ACLs” are pre-defined sets of permissions that can be used to manage access to buckets and objects. Spaces buckets support only the private and public-read canned ACLs.
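
For example, to make an already-uploaded file public later, you can assign it a new canned ACL; a minimal Python 3 sketch (boto3, client configured above):

# Sets an existing object's ACL to public-read, making it publicly accessible.
client.put_object_acl(Bucket='example-bucket-name',
                      Key='file.ext',
                      ACL='public-read')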

// Imports your configured client and any necessary S3 commands.
import { PutObjectCommand } from "@aws-sdk/client-s3";
import { s3Client } from "./path/to/s3Client.js";

// Specifies a path within your bucket and the file to upload.
export const bucketParams = {
  Bucket: "example-bucket-name",
  Key: "example.txt",
  Body: "content"
};

// Uploads the specified file to the chosen path.
export const run = async () => {
  try {
    const data = await s3Client.send(new PutObjectCommand(bucketParams));
    console.log(
      "Successfully uploaded object: " +
        bucketParams.Bucket +
        "/" +
        bucketParams.Key
    );
    return data;
  } catch (err) {
    console.log("Error", err);
  }
};

run();
    object := s3.PutObjectInput{
        Bucket: aws.String("example-bucket-name"),
        Key:    aws.String("file.ext"),
        Body:   strings.NewReader("The contents of the file."),
        ACL:    aws.String("private"),
        Metadata: map[string]*string{
                                 "x-amz-meta-my-key": aws.String("your-value"),
                         },
    }
    _, err := s3Client.PutObject(&object)
    if err != nil {
        fmt.Println(err.Error())
    }
$client->putObject([
     'Bucket' => 'example-bucket-name',
     'Key'    => 'file.ext',
     'Body'   => 'The contents of the file.',
     'ACL'    => 'private',
     'Metadata'   => array(
         'x-amz-meta-my-key' => 'your-value'
     )
]);
client.put_object(Bucket='example-bucket-name',
                  Key='file.ext',
                  Body=b'The contents of the file.',
                  ACL='private',
                  Metadata={
                      'x-amz-meta-my-key': 'your-value'
                  }
                )
client.put_object({
  bucket: "example-bucket-name",
  key: "file.ext",
  body: "The contents of the file.",
  acl: "private",
  metadata: {
    "x-amz-meta-my-key" => "your-value"
  }
})

List All Files in a Bucket

These examples list all of the files stored in a specific bucket by retrieving the list of files from the API and looping through them to print their names.

// Imports your configured client and any necessary S3 commands.
import { ListObjectsCommand } from "@aws-sdk/client-s3";
import { s3Client } from "./path/to/s3Client.js";

// Specifies the bucket containing the objects to list.
export const bucketParams = { Bucket: "example-bucket-name" };

// Returns a list of objects in your specified path.
export const run = async () => {
  try {
    const data = await s3Client.send(new ListObjectsCommand(bucketParams));
    console.log("Success", data);
    return data;
  } catch (err) {
    console.log("Error", err);
  }
};

run();
    input := &s3.ListObjectsInput{
      Bucket:  aws.String("example-bucket-name"),
    }

    objects, err := s3Client.ListObjects(input)
    if err != nil {
      fmt.Println(err.Error())
    }

    for _, obj := range objects.Contents {
      fmt.Println(aws.StringValue(obj.Key))
    }
$objects = $client->listObjects([
    'Bucket' => 'example-bucket-name',
]);

foreach ($objects['Contents'] as $obj){
    echo $obj['Key']."\n";
}
response = client.list_objects(Bucket='example-bucket-name')
for obj in response['Contents']:
    print(obj['Key'])
objects = client.list_objects({bucket: "example-bucket-name"})
objects.contents.each do |obj|
  puts "#{obj.key}"
end

Download a File from a Bucket

These examples make an authenticated request to download a file from a specific bucket. They download a file stored in Spaces (file.ext) to /tmp/local-file.ext on the local file system.

// Imports your configured client and any necessary S3 commands.
import { GetObjectCommand } from "@aws-sdk/client-s3";
import { writeFileSync } from "fs";
import { s3Client } from "./path/to/s3Client.js";

// Specifies a path within your bucket and the file to download.
const bucketParams = {
  Bucket: "example-bucket-name",
  Key: "file.ext"
};

// Function to turn the file's body into a string.
const streamToString = (stream) => {
  const chunks = [];
  return new Promise((resolve, reject) => {
    stream.on('data', (chunk) => chunks.push(Buffer.from(chunk)));
    stream.on('error', (err) => reject(err));
    stream.on('end', () => resolve(Buffer.concat(chunks).toString('utf8')));
  });
};

// Downloads your file and saves its contents to /tmp/local-file.ext.
const run = async () => {
  try {
    const response = await s3Client.send(new GetObjectCommand(bucketParams));
    const data = await streamToString(response.Body);
    writeFileSync("/tmp/local-file.ext", data);
    console.log("Success", data);
    return data;
  } catch (err) {
    console.log("Error", err);
  }
};

run();
    input := &s3.GetObjectInput{
        Bucket: aws.String("example-bucket-name"),
        Key:    aws.String("file.ext"),
    }

    result, err := s3Client.GetObject(input)
    if err != nil {
        fmt.Println(err.Error())
        return
    }
    defer result.Body.Close()

    out, err := os.Create("/tmp/local-file.ext")
    if err != nil {
        fmt.Println(err.Error())
        return
    }
    defer out.Close()

    _, err = io.Copy(out, result.Body)
    if err != nil {
        fmt.Println(err.Error())
    }
$result = $client->getObject([
    'Bucket' => 'example-bucket-name',
    'Key' => 'file.ext',
]);

file_put_contents('/tmp/local-file.ext', $result['Body']);
client.download_file('example-bucket-name',
                     'file.ext',
                     '/tmp/local-file.ext')
client.get_object(
  bucket: 'example-bucket-name',
  key: 'file.ext',
  response_target: '/tmp/local-file.ext'
)

Generate a Pre-Signed URL to Download a Private File

Using pre-signed URLs, you can share private files for a limited period of time with anyone who has the link. In the control panel, these are called Quick Share links.

These examples generate pre-signed URLs, valid for five minutes, for a file (file.ext) in a bucket.

You can use pre-signed URLs with the Spaces CDN. To do so, configure your SDK or S3 tool to use the non-CDN endpoint, generate a pre-signed URL for a GetObject request, then modify the hostname in the URL to be the CDN hostname (<space-name>.<region>.cdn.digitaloceanspaces.com, unless the bucket uses a custom hostname).
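
A sketch of that hostname swap in Python 3 (assuming the bucket example-bucket-name in nyc3 with no custom hostname, and the boto3 client configured above):

from urllib.parse import urlparse, urlunparse

# Generates a pre-signed URL against the non-CDN endpoint, then rewrites
# its hostname to the bucket's CDN hostname.
url = client.generate_presigned_url(ClientMethod='get_object',
                                    Params={'Bucket': 'example-bucket-name',
                                            'Key': 'file.ext'},
                                    ExpiresIn=300)
parts = urlparse(url)
cdn_host = 'example-bucket-name.nyc3.cdn.digitaloceanspaces.com'
cdn_url = urlunparse(parts._replace(netloc=cdn_host))
print(cdn_url)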

// Imports your configured client and any necessary S3 commands.
import { GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import { s3Client } from "./path/to/s3Client.js";

// Specifies a path within your bucket and the file to download.
export const bucketParams = {
  Bucket: "example-bucket-name",
  Key: "file.ext"
};

// Generates the URL.
export const run = async () => {
  try {
    const url = await getSignedUrl(s3Client, new GetObjectCommand(bucketParams), { expiresIn: 5 * 60 }); // Adjustable expiration.
    console.log("URL:", url);
    return url;
  } catch (err) {
    console.log("Error", err);
  }
};

run();
    req, _ := s3Client.GetObjectRequest(&s3.GetObjectInput{
        Bucket: aws.String("example-bucket-name"),
        Key:    aws.String("file.ext"),
    })

    urlStr, err := req.Presign(5 * time.Minute)
    if err != nil {
        fmt.Println(err.Error())
    }

    fmt.Println(urlStr)
$cmd = $client->getCommand('GetObject', [
    'Bucket' => 'example-bucket-name',
    'Key'    => 'file.ext'
]);

$request = $client->createPresignedRequest($cmd, '+5 minutes');
$presignedUrl = (string) $request->getUri();

echo $presignedUrl."\n";
url = client.generate_presigned_url(ClientMethod='get_object',
                                    Params={'Bucket': 'example-bucket-name',
                                            'Key': 'file.ext'},
                                    ExpiresIn=300)

print(url)
signer = Aws::S3::Presigner.new(client: client)
url = signer.presigned_url(
  :get_object,
  bucket: "example-bucket-name",
  key: "file.ext",
  expires_in: 300
)

puts url

Generate a Pre-Signed URL to Upload a File

You can also use pre-signed URLs to grant permission to upload a specific file using a PUT request. These URLs are only valid for a limited time period. These examples generate pre-signed URLs that will last for five minutes.

To create the pre-signed URL, you must specify the filename and its expected content type, like text or application/json.

// Imports your configured client and any necessary S3 commands.
import { PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import { s3Client } from "./path/to/s3Client.js";

// Specifies path, file, and content type.
export const bucketParams = {
  Bucket: "example-bucket-name",
  Key: "file.ext",
  ContentType: "text"
};

// Generates the URL.
export const run = async () => {
  try {
    const url = await getSignedUrl(s3Client, new PutObjectCommand(bucketParams), { expiresIn: 5 * 60 }); // Adjustable expiration.
    console.log("URL:", url);
    return url;
  } catch (err) {
    console.log("Error", err);
  }
};

run();
    req, _ := s3Client.PutObjectRequest(&s3.PutObjectInput{
        Bucket: aws.String("example-bucket-name"),
        Key:    aws.String("new-file.ext"),
    })
    urlStr, err := req.Presign(5 * time.Minute)
    if err != nil {
        fmt.Println(err.Error())
    }

    fmt.Println(urlStr)
$cmd = $client->getCommand('PutObject', [
    'Bucket' => 'example-bucket-name',
    'Key'    => 'new-file.ext'
]);

$request = $client->createPresignedRequest($cmd, '+5 minutes');
$presignedUrl = (string) $request->getUri();

echo $presignedUrl."\n";
url = client.generate_presigned_url(ClientMethod='put_object',
                                    Params={'Bucket': 'example-bucket-name',
                                            'Key': 'new-file.ext'},
                                    ExpiresIn=300)

print(url)
signer = Aws::S3::Presigner.new(client: client)
url = signer.presigned_url(
  :put_object,
  bucket: "example-bucket-name",
  key: "new-file.ext",
  expires_in: 300
)

puts url

You can use the resulting URL to upload the file using standard HTTP requests without needing access to the bucket’s secret key. The content type and file name used in the upload must match the ones used when generating the URL. For example:

curl -X PUT \
  -H "Content-Type: text" \
  -d "The contents of the file." \
  "https://example-bucket-name.nyc3.digitaloceanspaces.com/new-file.ext?AWSAccessKeyId=EXAMPLE7UQOTHDTF3GK4&Content-Type=text&Expires=1580419378&Signature=YIXPlynk4BALXE6fH7vqbnwjSEw%3D"
How do I set custom file metadata with a pre-signed URL?

To set custom file metadata via a pre-signed URL, you can use this Python demo script as a template for your code. If the AWS signature calculation algorithm requires that your chosen fields be part of the signature, you must include them in both the pre-signed URL computation (such as generate_presigned_url) and the final request, like so:

# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
# Original URL: https://github.com/awsdocs/aws-doc-sdk-examples/blob/main/python/example_code/s3/s3_basics/presigned_url.py
# DigitalOcean: This has modifications to show how to use DigitalOcean spaces, as well as include additional headers on either Spaces or AWS S3

"""
Purpose

Shows how to use the AWS SDK for Python (Boto3) with Amazon Simple Storage Service
(Amazon S3) to generate a presigned URL that can perform an action for a limited
time with your credentials. Also shows how to use the Requests package
to make a request with the URL.
"""

# snippet-start:[python.example_code.s3.Scenario_GeneratePresignedUrl]
import argparse
import logging
import boto3
import botocore
from botocore.exceptions import ClientError
import requests

logger = logging.getLogger(__name__)


def generate_presigned_url(s3_client, client_method, method_parameters, expires_in):
    """
    Generate a presigned Amazon S3 URL that can be used to perform an action.

    :param s3_client: A Boto3 Amazon S3 client.
    :param client_method: The name of the client method that the URL performs.
    :param method_parameters: The parameters of the specified client method.
    :param expires_in: The number of seconds the presigned URL is valid for.
    :return: The presigned URL.
    """
    try:
        url = s3_client.generate_presigned_url(
            ClientMethod=client_method,
            Params=method_parameters,
            ExpiresIn=expires_in
        )
        logger.info("Got presigned URL: %s", url)
    except ClientError:
        logger.exception(
            "Couldn't get a presigned URL for client method '%s'.", client_method)
        raise
    return url


def usage_demo():
    logging.basicConfig(level=logging.INFO, format='%(levelname)s: %(message)s')

    print('-'*88)
    print("Welcome to the Amazon S3 presigned URL demo.")
    print('-'*88)

    parser = argparse.ArgumentParser()
    parser.add_argument('bucket', help="The name of the bucket.")
    parser.add_argument(
        'key', help="For a GET operation, the key of the object in Amazon S3. For a "
                    "PUT operation, the name of a file to upload.")
    parser.add_argument(
        'action', choices=('get', 'put'), help="The action to perform.")
    args = parser.parse_args()

    ##s3_client = boto3.client('s3')
    session = boto3.session.Session()
    # This fetches credentials from AWS config in the env or default paths
    s3_client = session.client('s3',
                        region_name='sfo3',
                        endpoint_url='https://sfo3.digitaloceanspaces.com',
                        config=botocore.client.Config(signature_version='s3v4'),
                        )

    client_action = 'get_object' if args.action == 'get' else 'put_object'
    # The following lines are changed for DigitalOcean PoC demo purposes!
    # Changes part 1:
    # 1. Move params to a named variable
    # 2. Add metadata to the PutObject request signature computation
    request_headers = {}
    params = {'Bucket': args.bucket, 'Key': args.key}
    transform_data = lambda x: x
    if args.action == 'put':
        # For the purposes of this demo, we are applying a Content-Encoding Brotli
        request_headers['Content-Encoding'] = 'br'
        params['ContentEncoding'] = 'br'
        import brotli
        transform_data = lambda x: brotli.compress(x.encode('ascii'))
        # And we will add custom metadata
        # Add it to BOTH the signature computation AND the request headers
        params['Metadata'] = {
                'author': 'robbat2',
        }
        for k,v in params['Metadata'].items():
            # TODO: this might need special encoding to handle quoting etc.
            request_headers['x-amz-meta-'+k.lower()] = v

    # generate the URL now, only change is to pass the params block.
    url = generate_presigned_url(
            s3_client, client_action, params, 1000)
    # End of changes part 1

    print("Using the Requests package to send a request to the URL.")
    response = None
    if args.action == 'get':
        response = requests.get(url)
    elif args.action == 'put':
        print("Putting data to the URL.")
        try:
            with open(args.key, 'r') as object_file:
                object_text = object_file.read()
            # Changes part 2:
            # Transform the data
            # Add headers
            response = requests.put(url, data=transform_data(object_text), headers=request_headers)
            # End of Changes part 2
        except FileNotFoundError:
            print(f"Couldn't find {args.key}. For a PUT operation, the key must be the "
                  f"name of a file that exists on your computer.")

    if response is not None:
        print("Got response:")
        print(f"Status: {response.status_code}")
        print(response.text)

    print('-'*88)


if __name__ == '__main__':
    usage_demo()
# snippet-end:[python.example_code.s3.Scenario_GeneratePresignedUrl]

Warning
Pre-signed URLs do not support certain operation parameters, such as SSECustomerKey, ACL, Expires, ContentLength, and Tagging. If you are using a pre-signed URL to upload from a browser and need to use these fields, you must either provide them as headers when sending a request or call a method such as CreatePresignedPost().
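
In Python 3, boto3’s analogue of CreatePresignedPost() is generate_presigned_post(); a minimal sketch that includes an ACL, which plain pre-signed PUT URLs cannot carry:

# Generates a pre-signed POST policy; the returned fields must be sent as
# multipart/form-data alongside the file in a POST to the returned URL.
post = client.generate_presigned_post(Bucket='example-bucket-name',
                                      Key='file.ext',
                                      Fields={'acl': 'private'},
                                      Conditions=[{'acl': 'private'}],
                                      ExpiresIn=300)
print(post['url'])     # The POST target URL.
print(post['fields'])  # The form fields to include with the upload.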

Delete a File from a Bucket

These examples delete a file (example-file-to-delete.ext) from a specific bucket.

// Imports your configured client and any necessary S3 commands.
import { DeleteObjectCommand } from "@aws-sdk/client-s3";
import { s3Client } from "./path/to/s3Client.js";

// Specifies a path within your bucket and the file to delete.
export const bucketParams = { Bucket: "example-bucket-name", Key: "example-file-to-delete.ext" };

// Deletes the specified file from the chosen bucket.
export const run = async () => {
  try {
    const data = await s3Client.send(new DeleteObjectCommand(bucketParams));
    console.log("Success", data);
    return data;
  } catch (err) {
    console.log("Error", err);
  }
};

run();
    input := &s3.DeleteObjectInput{
        Bucket: aws.String("example-bucket-name"),
        Key:    aws.String("example-file-to-delete.ext"),
    }

    _, err := s3Client.DeleteObject(input)
    if err != nil {
        fmt.Println(err.Error())
    }
$client->deleteObject([
    'Bucket' => 'example-bucket-name',
    'Key' => 'example-file-to-delete.ext',
]);
client.delete_object(Bucket='example-bucket-name',
                     Key='example-file-to-delete.ext')
client.delete_object({
  bucket: 'example-bucket-name',
  key: 'example-file-to-delete.ext'
})

Delete a Bucket

These examples delete a bucket. To do so, you must first delete all files in the bucket. Attempting to delete a bucket that still contains files will fail with a BucketNotEmpty error and return a 409 status code.
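
If you need to empty the bucket programmatically first, here is a minimal Python 3 sketch (boto3, client configured above) that deletes every object and then the bucket:

# Pages through all objects (each list call returns at most 1,000 keys)
# and deletes them, then deletes the now-empty bucket.
paginator = client.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket='example-bucket-name'):
    for obj in page.get('Contents', []):
        client.delete_object(Bucket='example-bucket-name', Key=obj['Key'])
client.delete_bucket(Bucket='example-bucket-name')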

// Imports your configured client and any necessary S3 commands.
import { DeleteBucketCommand } from "@aws-sdk/client-s3";
import { s3Client } from "./path/to/s3Client.js";

// Specifies the name of the bucket to delete.
export const bucketParams = { Bucket: "example-bucket-name" };

// Deletes specified bucket.
export const run = async () => {
  try {
    const data = await s3Client.send(new DeleteBucketCommand(bucketParams));
    console.log("Success - bucket deleted");
    return data;
  } catch (err) {
    console.log("Error", err);
  }
};

run();
    input := &s3.DeleteBucketInput{
        Bucket: aws.String("example-bucket-name"),
    }

    _, err := s3Client.DeleteBucket(input)
    if err != nil {
        fmt.Println(err.Error())
    }
$client->deleteBucket([
    'Bucket' => 'example-bucket-name',
]);
client.delete_bucket(Bucket='example-bucket-name')
client.delete_bucket({bucket: 'example-bucket-name'})

Additional Resources

For more details on compatibility with the S3 API, see the Spaces API documentation.

The full reference documentation for the SDKs used above is available in each SDK's official documentation.