pydo.dedicated_inferences

Generated on 13 Apr 2026 from pydo version v0.30.0

Dedicated Inference delivers scalable, production-grade LLM hosting on DigitalOcean.

Create, list, get, update, and delete Dedicated Inference instances; manage accelerators, the CA certificate, sizes, the GPU model config, and access tokens.
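A minimal usage sketch. Only the method names below come from this page; the `Client(token=...)` constructor follows the pattern used elsewhere in the pydo docs, and the response shape is an assumption:

```python
import os


def list_dedicated_inferences(token: str):
    """Return the account's Dedicated Inference instances.

    Sketch only: the response shape is not documented on this page.
    """
    # pydo is imported lazily so the sketch stays importable without it.
    from pydo import Client

    client = Client(token=token)
    return client.dedicated_inferences.list()


if __name__ == "__main__" and os.environ.get("DIGITALOCEAN_TOKEN"):
    # Runs only when a real API token is present in the environment.
    print(list_dedicated_inferences(os.environ["DIGITALOCEAN_TOKEN"]))
```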

pydo.dedicated_inferences.create_tokens()

Create a Dedicated Inference Token

pydo.dedicated_inferences.create()

Create a Dedicated Inference
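A hedged sketch of `create()`. The request fields and the `body=` keyword are hypothetical, modeled on other generated pydo methods; consult the full API reference for the real schema:

```python
def create_dedicated_inference(token: str, body: dict):
    """Create a Dedicated Inference instance.

    The `body` fields and the `body=` keyword are assumptions, not
    documented on this page.
    """
    from pydo import Client

    client = Client(token=token)
    return client.dedicated_inferences.create(body=body)
```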

pydo.dedicated_inferences.delete_tokens()

Revoke a Dedicated Inference Token

pydo.dedicated_inferences.delete()

Delete a Dedicated Inference

pydo.dedicated_inferences.get_accelerator()

Get a Dedicated Inference Accelerator

pydo.dedicated_inferences.get_ca()

Get Dedicated Inference CA Certificate

pydo.dedicated_inferences.get_gpu_model_config()

Get Dedicated Inference GPU Model Config

pydo.dedicated_inferences.get()

Get a Dedicated Inference

pydo.dedicated_inferences.list_accelerators()

List Dedicated Inference Accelerators

pydo.dedicated_inferences.list_sizes()

List Dedicated Inference Sizes

pydo.dedicated_inferences.list_tokens()

List Dedicated Inference Tokens
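The three token methods above (`create_tokens()`, `delete_tokens()`, `list_tokens()`) support a rotation workflow. A sketch, in which the path parameters and their order are hypothetical:

```python
def rotate_inference_token(token: str, inference_id: str, old_token_id: str):
    """Issue a fresh access token, then revoke the old one.

    Only the method names come from this page; the positional
    parameters (inference_id, old_token_id) are assumptions.
    """
    from pydo import Client

    client = Client(token=token)
    new_token = client.dedicated_inferences.create_tokens(inference_id)
    client.dedicated_inferences.delete_tokens(inference_id, old_token_id)
    return new_token
```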

pydo.dedicated_inferences.list()

List Dedicated Inferences

pydo.dedicated_inferences.patch()

Update a Dedicated Inference
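A sketch of a partial update via `patch()`. The identifier argument and the `body=` keyword are assumptions based on other pydo methods:

```python
def update_dedicated_inference(token: str, inference_id: str, changes: dict):
    """Apply a partial update to a Dedicated Inference instance.

    `inference_id` and the `body=` keyword are hypothetical; the page
    documents only the method name.
    """
    from pydo import Client

    client = Client(token=token)
    return client.dedicated_inferences.patch(inference_id, body=changes)
```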
