doctl dedicated-inference get-gpu-model-config
Generated on 1 Apr 2026 from doctl version v1.154.0
Usage
doctl dedicated-inference get-gpu-model-config [flags]
Aliases
ggmc
Description
Returns the supported GPU model configurations for dedicated inference endpoints, including model slugs, names, compatible GPU slugs, and whether models are gated.
Example
The following example lists GPU model configurations:
doctl dedicated-inference get-gpu-model-config
Flags
| Option | Description |
|---|---|
| --format | Columns for output in a comma-separated list. Possible values: ModelSlug, ModelName, IsModelGated, GPUSlugs. |
| --help, -h | Help for this command |
| --no-header | Return raw data with no headers. Default: false |
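For example, to return only the model slugs and their compatible GPU slugs without a header row, combine the --format and --no-header flags documented above (the column selection shown is one possible choice):
doctl dedicated-inference get-gpu-model-config --format ModelSlug,GPUSlugs --no-header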
Related Commands
| Command | Description |
|---|---|
| doctl dedicated-inference | Display commands for managing dedicated inference endpoints |
Global Flags
| Option | Description |
|---|---|
| --access-token, -t | API V2 access token |
| --api-url, -u | Override default API endpoint |
| --config, -c | Specify a custom config file |
| --context | Specify a custom authentication context name |
| --http-retry-max | Set maximum number of retries for requests that fail with a 429 or 500-level error. Default: 5 |
| --http-retry-wait-max | Set the maximum number of seconds to wait before retrying a failed request. Default: 30 |
| --http-retry-wait-min | Set the minimum number of seconds to wait before retrying a failed request. Default: 1 |
| --interactive | Enable interactive behavior. Defaults to true if the terminal supports it. Default: false |
| --output, -o | Desired output format [text\|json]. Default: text |
| --trace | Show a log of network activity while performing a command. Default: false |
| --verbose, -v | Enable verbose output. Default: false |
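For scripting, the global --output flag switches the command from the default text table to JSON, which is easier to parse programmatically (the JSON field names mirror the underlying API response and are not listed on this page):
doctl dedicated-inference get-gpu-model-config --output json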