# NVIDIA H100 Reference

Machines are Linux and Windows virtual machines with persistent storage, GPU options, and free unlimited bandwidth. They're designed for high-performance computing (HPC) workloads.

NVIDIA H100 is a GPU built on [NVIDIA's Hopper GPU architecture](https://www.nvidia.com/en-us/data-center/h100/). It's designed for large-scale artificial intelligence and high-performance computing workloads. H100 machines feature 4x100 GigE connections for private traffic. For more information about the hardware and performance specs, see the [machine types specifications page](https://docs.digitalocean.com/products/paperspace/machines/details/machine-types/index.html.md), which provides detailed comparisons of the various machine configurations.

Paperspace offers the NVIDIA H100 in two configurations: a single chip (NVIDIA H100x1) and eight chips (NVIDIA H100x8). NVIDIA H100x8 supports NVLink, a high-speed interconnect for multiple GPUs, while NVIDIA H100x1 does not.

NVIDIA H100 is available only in the `NY2` region. To use NVIDIA H100s, you need to [send a request to Paperspace support](https://docs.digitalocean.com/products/paperspace/machines/support/index.html.md). NVIDIA H100 is offered on-demand, meaning it's available only if capacity allows and Paperspace approves your request.

**Note**: Ubuntu 20.04 works on all GPUs except H100s. A100-80G works with both Ubuntu 20.04 and 22.04, while H100s only work with Ubuntu 22.04.

To get started with a pre-configured environment, consider using the [ML-in-a-Box 22.04 template](https://docs.digitalocean.com/products/paperspace/machines/how-to/create/index.html.md), which is designed for machine learning workloads on NVIDIA H100.
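If you select machine templates programmatically, the GPU-to-Ubuntu compatibility rules in the note above can be expressed as a small lookup. This is a minimal sketch: the table, function name, and defaulting behavior are illustrative, not part of any Paperspace API.

```python
# Supported Ubuntu versions per GPU machine type, per the compatibility
# note: H100s require Ubuntu 22.04; A100-80G supports 20.04 and 22.04;
# Ubuntu 20.04 works on all GPUs except H100s. Names other than these
# are assumptions for illustration only.
SUPPORTED_UBUNTU = {
    "H100": {"22.04"},
    "A100-80G": {"20.04", "22.04"},
}


def supported_ubuntu(gpu: str) -> set:
    """Return the Ubuntu versions usable with a given GPU machine type.

    GPUs not listed explicitly default to both versions, since only
    H100s exclude Ubuntu 20.04.
    """
    return SUPPORTED_UBUNTU.get(gpu, {"20.04", "22.04"})


if __name__ == "__main__":
    print(supported_ubuntu("H100"))      # H100 machines: Ubuntu 22.04 only
    print(supported_ubuntu("A100-80G"))  # either version works
```

A check like this can fail fast before a machine-create request is sent with an OS image the GPU does not support.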