How to Import Your Own Models (BYOM)
Validated on 20 Apr 2026 • Last edited on 28 Apr 2026
Inference provides a single control plane for managing inference workflows. It includes a Model Catalog where you can view available foundation models, including both DigitalOcean-hosted and third-party commercial models, compare model capabilities and pricing, use routing to match inference requests to the best-fit model, and run inference using serverless or dedicated deployments.
Importing models into Model Catalog from Hugging Face or a Spaces bucket or folder allows you to Bring Your Own Models (BYOM). After importing a model, you can review, edit, or delete it from the My Models tab.
To browse, view, import, and manage imported models, go to the DigitalOcean Control Panel. In the left menu, click INFERENCE, click Model Catalog, and then click the My Models tab.
Import a Model
You can import one model at a time in each import workflow, but you don’t need to wait for one import to finish before starting another.
BYOM imports support only Safetensors files and dedicated inference-compatible architectures, including Qwen2ForCausalLM and Qwen3ForCausalLM.
In the top-right, click Import Model to open the Import Model page.
Import From Hugging Face
Under the Choose model source section, click Import from Hugging Face to open the Provide model details window.
In the Hugging Face URL or Repo ID field, enter the Hugging Face repository URL or repository ID for the model you want to import. For example, https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507 or qwen/qwen3-4b-instruct-2507.
In the Model name field, enter a unique, descriptive name for your imported model, such as Qwen3-4B-Instruct-2507. Use a name that helps identify the model, such as its family, size, or version. Model names can contain letters, numbers, spaces, hyphens (-), and forward slashes (/). After the model is imported, you can’t rename it.
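The naming rules above can be expressed as a simple character-set check. The pattern below is an illustration of those rules, assuming ASCII letters; it doesn't account for any undocumented length limits.

```python
import re

# Allowed per the rules above: letters, numbers, spaces,
# hyphens (-), and forward slashes (/).
MODEL_NAME_PATTERN = re.compile(r"^[A-Za-z0-9 /-]+$")

def is_valid_model_name(name: str) -> bool:
    """Check a candidate model name against the documented character set."""
    return bool(MODEL_NAME_PATTERN.fullmatch(name))
```

For example, `is_valid_model_name("Qwen3-4B-Instruct-2507")` returns `True`, while a name containing `!` or `_` does not match.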
In the Description field, add a short description of the model, such as its intended use case, supported tasks, or any important limitations.
In the Model tags field, add tags to help organize and sort your imported models. To add multiple tags, press SPACEBAR or ENTER after each term. Tags can include letters, numbers, colons, dashes, and underscores. To remove all existing tags, click clear all in the top-right of the Model tags section.
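Since tags are entered as space-separated terms with their own character rules, a small helper can pre-validate a tag list before you type it in. This is a sketch of the rules stated above, not an official validator.

```python
import re

# Allowed per the rules above: letters, numbers, colons,
# dashes, and underscores.
TAG_PATTERN = re.compile(r"^[A-Za-z0-9:_-]+$")

def parse_tags(raw: str) -> list[str]:
    """Split space-separated input into tags, rejecting invalid ones."""
    tags = raw.split()
    bad = [t for t in tags if not TAG_PATTERN.fullmatch(t)]
    if bad:
        raise ValueError(f"invalid tags: {bad}")
    return tags
```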
In the Choose preferred GPU region dropdown list, select the DigitalOcean region where you want to import the model. Select the region closest to the workloads or users that use the model. After the model is uploaded, you can’t change its region.
If you deploy the model in a different region than the one where it’s hosted, the deployment may incur additional latency.
In the Hugging Face access token field, enter your Hugging Face access token if your model is gated. This means access to the model files is restricted and may require you to be signed in, request access, or use an authorized token to download them. If you don’t provide an access token for gated models, the import fails. To create one, see the Hugging Face security token guide. The Hugging Face token is stored with the model and used for maintenance and model access retries.
Click the Terms and Conditions checkbox to confirm that you agree to the model’s terms of use, and then click Import model.
A validation check is performed to ensure that the architecture and licensing are supported.
If the model import fails, make sure the Hugging Face URL or repository ID is correct, the model is accessible, and you entered a valid Hugging Face access token for gated or private repositories. After verifying those details, try the import again.
Import From a Spaces Bucket or Folder
Under the Choose model source section, click Import from a Spaces bucket or folder to open the Select model window.
In the Select model section, click the + next to the bucket with the model you want to import, and then select the folder with the model files. Estimated size shows you the size of the model you’re importing in GB.
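For reference, a Hugging Face-style model folder in a Spaces bucket typically looks like the following. File names vary by model; this layout is an illustration, not an exhaustive requirement.

```
my-bucket/
└── qwen3-4b-instruct/
    ├── config.json
    ├── generation_config.json
    ├── tokenizer.json
    ├── tokenizer_config.json
    ├── model-00001-of-00002.safetensors
    ├── model-00002-of-00002.safetensors
    └── model.safetensors.index.json
```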
Then, at the bottom, add details for your model. In the Model name field, enter a unique, descriptive name for your imported model, such as Qwen3-4B-Instruct-2507. Use a name that helps identify the model, such as its family, size, or version. Model names can contain letters, numbers, spaces, hyphens (-), and forward slashes (/). After the model is imported, you can’t rename it.
In the Description field, add a short description of the model, such as its intended use case, supported tasks, or any important limitations.
In the Model tags field, add tags to help organize and sort your imported models. To add multiple tags, press SPACEBAR or ENTER after each term. Tags can include letters, numbers, colons, dashes, and underscores. To remove all existing tags, click clear all in the top-right of the Model tags section.
In the Choose preferred GPU region dropdown list, select the DigitalOcean region where you want to import the model. Select the region closest to the workloads or users that use the model. After the model is uploaded, you can’t change its region.
If you deploy the model in a different region than the one where it’s hosted, the deployment may incur additional latency.
Click the Terms and Conditions checkbox to confirm that you agree to the model’s terms of use, and then click Import model.
If the model import fails, make sure the files in your Spaces bucket or folder are intact and not corrupted. Imports can also fail if required files are missing, incomplete, or placed incorrectly. After verifying the bucket or folder contents and permissions, try the import again.
Review Models to Import
In the Summary section, review the model details, estimated total token cost, and estimated monthly Spaces storage cost based on storage used in GB. Imported model weights are stored in a service-managed Spaces location. Because this storage location isn’t directly accessible or manageable, review the model size, storage usage, and expected monthly cost carefully before importing, especially for large models or multiple imported versions. For pricing details, see pricing.
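As a rough sketch of the storage math, the monthly Spaces cost scales with the model's size in GB. The rate used below is a placeholder, not DigitalOcean's actual price; check the pricing page for real figures.

```python
def estimated_monthly_storage_cost(model_size_gb: float,
                                   rate_per_gb_month: float) -> float:
    """Estimate monthly storage cost for imported model weights.

    rate_per_gb_month is a placeholder rate; look up the real rate
    on the pricing page.
    """
    return model_size_gb * rate_per_gb_month

# e.g. an 8 GB model at a hypothetical $0.02/GB-month:
# estimated_monthly_storage_cost(8, 0.02) -> 0.16
```

Remember that each imported version is stored separately, so keeping multiple versions of a large model multiplies the storage cost.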
In the Model to import section, review the model you’re importing, including its size in GB and import status. The following model statuses indicate whether an imported model is ready to use or still in progress:
- Ready means the model is available to use after import.
- Importing means setup is still in progress.
- Failed means the import didn’t complete successfully because the model or its source is missing required files, uses an unsupported format or architecture, has an invalid structure or configuration, or is otherwise inaccessible or incompatible. Verify the source details and model files, and then try the import again.
To remove a model from the import, under the Model to import section, click x on the right of it.
Afterwards, click Import Model.
Browse Models
Browse your imported models by searching in the Search by name textbox or by scrolling the model list. The model list displays the following information for each model:
- Name: The model name, provider, and model ID used for API requests.
- Status: The current state of the model. The following model statuses indicate whether an imported model is ready to use or still in progress:
  - Ready means the model is available to use after import.
  - Importing means setup is still in progress.
  - Failed means the import didn’t complete successfully because the model or its source is missing required files, uses an unsupported format or architecture, has an invalid structure or configuration, or is otherwise inaccessible or incompatible. Verify the source details and model files, and then try the import again.
- Supported Modalities: The input and output types the model supports, such as text input and text output.
- Deployments: The number of Active and Inactive dedicated inference deployments currently using the model.
- Tags: Any tags added to help organize and filter the model.
- Created: When the model was imported or created.
Click the Name header to sort the list in ascending or descending order.
If needed, you can edit any of your models’ details.
Filter Models
To filter the model list by certain attributes, in the top-right, click Filter, and then select the checkboxes for the attributes you want to filter by. You can filter by the following:
- Status: The model’s current import or availability status. The following model statuses indicate whether an imported model is ready to use or still in progress:
  - Ready means the model is available to use after import.
  - Importing means setup is still in progress.
  - Failed means the import didn’t complete successfully because the model or its source is missing required files, uses an unsupported format or architecture, has an invalid structure or configuration, or is otherwise inaccessible or incompatible. Verify the source details and model files, and then try the import again.
- Architecture: The type of model you imported, such as a Qwen model.
- Source: Where the model was imported from, such as Hugging Face.
- Tags: Custom tags applied to the model.
Click Reset to defaults to clear the filters you’ve chosen, or click a model in the model list to open its model card.
View Model Card
Each model card includes a Model Details tab with information about the model and a DI Deployments tab where you can view the model’s current dedicated inference deployments.
View Model Details
To see a specific model’s details, click the model to view its Model Details tab. The Model Details tab includes:
- Description: The model’s description, including its intended use cases and key characteristics, if available.
- Governing terms: The terms that govern use of the model, such as AI Model Terms for BYOM models, Service Terms, or Terms of Service Agreements.
- Architecture: The model architecture, such as a Qwen-based architecture.
- Creation Method: How the model was imported, such as from Hugging Face or Spaces.
  - Hugging Face URL: If you imported a Hugging Face model, the repository URL or repo ID used for the import.
- DigitalOcean Region: The DigitalOcean region where the imported model is stored.
- Created: When the model was imported.
- Status: The model’s current import status:
  - Ready means the model is available to use after import.
  - Importing means setup is still in progress.
  - Failed means the import didn’t complete successfully because the model or its source is missing required files, uses an unsupported format or architecture, has an invalid structure or configuration, or is otherwise inaccessible or incompatible. Verify the source details and model files, and then try the import again.
- Content Length: The maximum supported context window or input token limit.
- Parameters: The approximate number of model parameters, indicating model size, if available.
- Input Capabilities: Supported input modalities, such as text, documents, image, audio, or video.
- Output Capabilities: Types of outputs the model can generate, such as text, image, video, audio, or embeddings.
- Tags: Tags associated with the model that help identify or organize it.
Use this information to compare models based on cost, feature support, and deployment options.
Under the Availability section, see how you can deploy the model:
- Dedicated inference: To deploy the model with dedicated inference, on the right, click Deploy endpoint, and then follow the steps in set up dedicated inference to configure and deploy the endpoint.
To run model evaluations, click Run Evaluation. This lets you evaluate your deployed model’s performance after deployment. You can run evaluations only for models deployed using dedicated inference.
View DI Deployments
To view current dedicated inference deployments for your model, click the DI Deployments tab. The tab displays the following information for each deployment:
- Name: The name of the dedicated inference deployment.
- Status: The current deployment status.
  - Provisioning: Deployment is still being created and isn’t yet ready to serve requests.
  - Active: Deployment is ready and available to serve requests.
- Public Endpoint: The public URL you can use to access the deployment endpoint.
Edit a Model
After you import a model, you can edit it to update its description and tags.
On the right of the model you want to edit, click …, and then click Edit model to open the Edit model window.
You can edit the following model attributes:
- In the Model description textbox, enter or update a short description of the model, such as its intended use case, supported tasks, or any important limitations.
- In the Model tags section, click x next to a tag to remove it, or use the Add model tags for sorting field to add a new tag. To add multiple tags, press SPACEBAR or ENTER after each term. Tags can include letters, numbers, colons, dashes, and underscores. To remove all existing tags, click clear all in the top-right of the Model tags section.
After updating your model attributes, click Update.
Delete a Model
Delete an imported model to remove it from your models list when you no longer need it. If the model is actively deployed on a dedicated endpoint, you must first destroy the dedicated inference deployment before you can delete the model.
On the right of the model you want to delete, click …, and then click Delete to open the Delete model window.
In the Enter model name field, enter the name of your model to confirm deletion, and then click Delete model.