# OpenAI GPT OSS (ROCm 7)

Generated on 6 Oct 2025 from [the OpenAI GPT OSS (ROCm 7) catalog page](https://marketplace.digitalocean.com/apps/openai-gpt-oss-rocm-7)

OpenAI's open-weight models, designed for powerful reasoning, agentic tasks, and versatile developer use cases. Now on ROCm 7.

## Software Included

| Package | Version | License |
|---|---|---|
| ROCm (host system) | 7.0 | [LICENSE](https://rocm.docs.amd.com/en/latest/about/license.html) |
| ROCm (in Docker image) | 7.0 | [LICENSE](https://rocm.docs.amd.com/en/latest/about/license.html) |
| vLLM (in Docker image) | 0.9.2 | [LICENSE](https://github.com/vllm-project/vllm/blob/main/LICENSE) |
| OpenAI GPT-OSS | 0.0.8 | [Apache License 2.0](https://github.com/openai/gpt-oss/blob/main/LICENSE) |

## Deploying this Offering using the Control Panel

Click the **Deploy to DigitalOcean** button to deploy this offering. If you aren't logged in, this link will prompt you to log in with your DigitalOcean account.

[![Deploy OpenAI GPT OSS (ROCm 7) to DO](https://www.deploytodo.com/do-btn-blue.svg)](https://cloud.digitalocean.com/droplets/new?image=amd-openaigptossrocm)

## Getting Started After Deploying OpenAI GPT OSS (ROCm 7)

This 1-Click exposes three interfaces for the GPT OSS model:

- You can open a WebUI interface in your browser by visiting `http://your_droplet_public_ipv4`. To authenticate in the WebUI, use the email and password provided when you SSH into your droplet.
- You can access the vLLM inferencing API directly at `http://your_droplet_public_ipv4:8000`. A bearer token for the GPT OSS inferencing API is provided when you SSH into your droplet.
- Finally, from within the Droplet itself:

  ```
  curl -X POST "http://localhost:8001/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
      "model": "openai/gpt-oss-120b",
      "messages": [
        {
          "role": "user",
          "content": "What is the capital of France?"
        }
      ]
    }'
  ```
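The curl call above maps directly onto the OpenAI-compatible chat-completions API that vLLM serves. Below is a minimal Python sketch of the same request against the public `:8000` endpoint, using only the standard library; `DROPLET_IP` and `BEARER_TOKEN` are placeholders you must replace with your droplet's public IP and the token shown when you SSH into the droplet, and the response-parsing path assumes the standard OpenAI chat-completions schema:

```python
import json
import urllib.request

# Placeholders: substitute your droplet's public IP and the bearer token
# shown when you SSH into the droplet.
DROPLET_IP = "your_droplet_public_ipv4"
BEARER_TOKEN = "your_bearer_token"

def build_chat_request(prompt, model="openai/gpt-oss-120b"):
    """Assemble the URL, headers, and JSON body for a chat-completions call."""
    url = f"http://{DROPLET_IP}:8000/v1/chat/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {BEARER_TOKEN}",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return url, headers, body

def send_chat(prompt):
    """POST the request and return the assistant's reply text."""
    url, headers, body = build_chat_request(prompt)
    req = urllib.request.Request(url, data=body, headers=headers, method="POST")
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    return reply["choices"][0]["message"]["content"]

# Inspect the assembled request without sending it:
url, headers, body = build_chat_request("What is the capital of France?")
print(url)
print(json.loads(body)["model"])
```

Calling `send_chat(...)` from a machine that can reach the droplet sends the request and returns the model's reply; because the API is OpenAI-compatible, official OpenAI client libraries pointed at the same base URL should also work.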