Ollama with Open WebUI provides a fast and easy way to deploy and interact with Large Language Models (LLMs). Integration with the Ollama models library offers a variety of models for tasks like natural language processing, chatbots, and content generation. Ideal for developers, data scientists, and AI enthusiasts, this application provides a simple yet powerful platform to explore and experiment with foundational models.
Package | Version | License |
---|---|---|
Ollama | 0.3.6 | MIT License |
Open WebUI | 0.3.13 | MIT License |
Anaconda | 2024.06-1 | Non-Commercial Use Only |
Click the Deploy to DigitalOcean button to create a Droplet based on this 1-Click App. If you aren’t logged in, this link will prompt you to log in with your DigitalOcean account.
In addition to creating a Droplet from the Ollama with Open WebUI 1-Click App using the control panel, you can also use the DigitalOcean API. As an example, to create a 4GB Ollama with Open WebUI Droplet in the SFO2 region, you can use the following curl command. You need to either save your API access token to an environment variable or substitute it into the command below.
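For example, to store the token in an environment variable before running the command (the token value below is a placeholder):
# replace the placeholder with your actual API token
export TOKEN=your_digitalocean_api_token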
curl -X POST -H "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" -d \
'{"name":"choose_a_name","region":"sfo2","size":"s-2vcpu-4gb","image":"sharklabs-ollamawithopenwe"}' \
"https://api.digitalocean.com/v2/droplets"
Welcome to DigitalOcean’s Ollama Droplet 1-Click! This guide will walk you through the initial setup, accessing applications, managing services, using Conda environments, and configuring TLS.
Ollama is a powerful tool designed to simplify the interaction with Large Language Models (LLMs). It allows you to easily download, manage, and deploy various LLMs for tasks like natural language processing, chatbots, and content generation.
Open WebUI is a user-friendly web interface that provides an intuitive way to interact with LLMs. It allows you to input queries, manage models, and view outputs in real time. Open WebUI is integrated with Ollama to enhance your workflow and make it easier to experiment with different models.
To start using the Open WebUI, open your web browser and navigate to:
http://your_droplet_public_ipv4
Note: The first account created on Open WebUI gains Administrator privileges, which allows control over user management and system settings. Make sure to set up this account immediately to secure your environment.
Once logged into Open WebUI, you will find an intuitive dashboard for interacting with LLMs.
Note: Once users sign up to your Open WebUI Droplet, they must be approved from the admin panel before they can log in and access the UI.
For advanced settings and more detailed usage, refer to the Open WebUI documentation.
Ollama and Open WebUI are configured to run as systemd services for easy management. You can manage these services using the following commands:
systemctl status open-webui
systemctl status ollama
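The same systemd tooling covers starting, stopping, and log inspection. For example:
# restart a service after changing its configuration
sudo systemctl restart open-webui
sudo systemctl restart ollama
# follow the live logs for a service
journalctl -u ollama -f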
All applications are installed under the user digitalocean. The Droplet includes a pre-installed Anaconda distribution, which is configured for the Open WebUI.
Conda is already set up, and Open WebUI is available within the ‘ui’ virtual environment. To access and manage this environment, follow these steps:
source /home/digitalocean/anaconda3/etc/profile.d/conda.sh
conda activate ui
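To confirm that the environment is active and see which interpreter it resolves to, you can run:
# the active environment is marked with an asterisk
conda env list
which python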
To enhance the functionality of your setup, you can download models from the Ollama repository. Use the following command to download a specific model:
ollama pull <model_name>
Replace <model_name> with the name of the desired model.
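For example, to download a model, list the installed models, and run a quick prompt (llama3 is an illustrative model name; check the Ollama models library for what is currently available):
ollama pull llama3
ollama list
ollama run llama3 "Hello, what can you do?"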
To manage your Conda environments and packages:
su - digitalocean
If the conda command is not working, initialize Conda with:
/home/digitalocean/anaconda3/bin/conda init
After running this command, you may need to source the ~/.bashrc file or log out and back in using su - digitalocean.
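For example, to reload your shell configuration in place:
source ~/.bashrc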
Use Conda to create isolated environments tailored to your specific projects:
conda create --name myenv
conda activate myenv
Install necessary packages and libraries within this environment using Conda or pip.
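For example, to add a few common packages (the package names are purely illustrative):
conda install numpy pandas
pip install requests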
For more detailed instructions on using Conda, refer to the Conda documentation.
To secure your Open WebUI with HTTPS, you can configure TLS using Certbot and Caddy.
sudo apt-get update
sudo apt-get install certbot
Run the following command to obtain a free SSL certificate from Let’s Encrypt:
sudo certbot certonly --standalone -d <your_domain>
Replace <your_domain> with your actual domain name.
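Note that the standalone plugin starts its own temporary web server on port 80, so if Caddy is already listening there you may need to stop it while the certificate is issued:
sudo systemctl stop caddy
sudo certbot certonly --standalone -d <your_domain>
sudo systemctl start caddy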
Edit the Caddy configuration file located at /etc/caddy/Caddyfile. Update it to include the following settings:
:443 {
    tls /etc/letsencrypt/live/<your_domain>/fullchain.pem /etc/letsencrypt/live/<your_domain>/privkey.pem
    reverse_proxy localhost:8080
    log {
        output file /var/log/caddy/access.log
    }
}
Replace <your_domain> with your actual domain name.
After making changes to the Caddyfile, restart the Caddy service to apply the new configuration:
sudo systemctl restart caddy
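You can also ask Caddy to check the configuration for syntax errors and confirm the service came back up (assuming the standard /etc/caddy/Caddyfile path):
caddy validate --config /etc/caddy/Caddyfile
sudo systemctl status caddy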
Fail2Ban is configured to provide additional security by monitoring login attempts and banning IP addresses that show malicious signs. The rules for Open WebUI are defined as follows:
The Fail2Ban configuration for Open WebUI is located at /etc/fail2ban/jail.d/open-webui.conf.
Adjusting Fail2Ban Rules: You can customize or add rules as needed to increase security based on your specific requirements.
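To inspect the current state of the jails from the command line (the jail name open-webui is an assumption based on the configuration filename above):
sudo fail2ban-client status
sudo fail2ban-client status open-webui
# unban an address if needed (the IP below is a placeholder)
sudo fail2ban-client set open-webui unbanip 203.0.113.1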
Note: To run larger foundational models, it is recommended to increase your Droplet size. Ensure you have sufficient CPU, RAM, and storage to handle the demands of larger models; the additional resources can significantly improve performance and stability.
This guide provides all the essentials to get you started with Ollama and Open WebUI on your DigitalOcean Droplet, offering a robust and scalable environment for working with Large Language Models.