Llama-3.2-11B-Vision-Instruct is a multimodal large language model optimized for visual recognition, image reasoning, captioning, and answering questions about images. It has 11 billion parameters, was trained on 6 billion image-text pairs, and is intended for commercial and research use in English.
meta-llama/Llama-3.2-11B-Vision-Instruct
| GPU Model | Number of Accelerators | Max Input Tokens | Max New Tokens |
|---|---|---|---|
| NVIDIA H100 | 1 | 99,658 | 99,690 |
| NVIDIA H100 | 2 | 74,840 | 74,872 |
| NVIDIA H100 | 4 | 90,582 | 90,614 |
| NVIDIA H100 | 8 | 90,582 | 90,614 |
| Package | Version | License |
|---|---|---|
| Meta Llama 3.2 | 3.2-11B-Vision-Instruct | LLAMA 3.2 COMMUNITY LICENSE |
Click the Deploy to DigitalOcean button to create a Droplet based on this 1-Click App. If you aren’t logged in, this link will prompt you to log in with your DigitalOcean account.
In addition to creating a Droplet from the Llama 3.2 11B Vision Instruct - Single GPU 1-Click App using the control panel, you can also use the DigitalOcean API. As an example, to create a 4GB Llama 3.2 11B Vision Instruct - Single GPU Droplet in the SFO2 region, you can use the following curl command. You need to either save your API access token to an environment variable or substitute it into the command below.
```bash
curl -X POST -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" -d \
  '{"name":"choose_a_name","region":"sfo2","size":"s-2vcpu-4gb","image":"digitaloceanai-llama3211bvision"}' \
  "https://api.digitalocean.com/v2/droplets"
```
Access the Droplet Console: Log in as the root user using the password you set during Droplet creation. Alternatively, SSH into the Droplet as root:

```bash
ssh root@your_droplet_public_IP
```

+ Ensure your SSH key is added to the SSH agent, or specify the key file directly:

```bash
ssh -i /path/to/your/private_key root@your_droplet_public_IP
```

+ Once connected, you will be logged in as the root user without needing a password.
Check the Message of the Day (MOTD) for the Access Token: on login, the MOTD displays the bearer token you will need to authorize API requests to the model. You can also verify that the Caddy web server is running:

```bash
sudo systemctl status caddy
```
You can make an API call to the droplet using the following cURL command:
```bash
curl --location 'http://<your_droplet_ip>/v1/chat/completions' \
--header 'accept: application/json' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer <your_token_here>' \
--data '{
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "image_url",
          "image_url": {
            "url": "test-image.jpg"
          }
        },
        {
          "type": "text",
          "text": "Describe this image in detail"
        }
      ]
    }
  ],
  "max_tokens": 600,
  "stream": false
}'
```
The endpoint is OpenAI-compatible, so you can also call it from any OpenAI client library, including the JavaScript and Python SDKs.
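
For example, here is a minimal sketch using the official Python SDK (`openai`) pointed at the Droplet. The base URL, bearer token, image URL, and the `model` identifier below are placeholders or assumptions; substitute the values for your own deployment (the served model name may differ).

```python
from openai import OpenAI

# Point the OpenAI client at the Droplet instead of api.openai.com.
# Replace the placeholders with your Droplet IP and the bearer token from the MOTD.
client = OpenAI(
    base_url="http://<your_droplet_ip>/v1",
    api_key="<your_token_here>",
)

response = client.chat.completions.create(
    # Assumed model identifier; check your deployment for the exact name it serves.
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
    messages=[
        {
            "role": "user",
            "content": [
                # The image must be reachable by the server: a public URL or a base64 data URI.
                {"type": "image_url", "image_url": {"url": "https://example.com/test-image.jpg"}},
                {"type": "text", "text": "Describe this image in detail"},
            ],
        }
    ],
    max_tokens=600,
    stream=False,
)

print(response.choices[0].message.content)
```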