Getting Ollama and Open WebUI Working on Kubuntu 23.10

I have two specific use cases for running AI locally instead of using ChatGPT (to which I have a subscription, as I use it for a variety of tasks, as does my wife). I needed to analyze interviews but couldn’t upload them to the web for security reasons, and I wanted to work on a book project without handing some of my data to ChatGPT (again, for confidentiality reasons). So I wanted a local instance of generative AI on my computer. The easiest approach for this is Ollama with Open WebUI, but actually getting it working on Kubuntu 23.10 turned out to be quite the ordeal. Since it was so challenging, I figured I’d document the process here in case anyone else runs into the same issues.

First, installing Ollama is pretty straightforward:

curl -fsSL https://ollama.com/install.sh | sh
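
If you want to make sure the install worked before moving on, you can check the CLI and pull a model. (llama3 here is just an example; any model from the Ollama library works.)

# Confirm the CLI is installed and the background service responds:
ollama --version

# Download and chat with a model (this pulls several GB the first time):
ollama run llama3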

But installing Open WebUI and getting the two to talk turned out to be a nightmare. Open WebUI recommends using their Docker container. I’m not a huge fan of Docker, as it takes up a ton of resources, but if that is what it takes to get a local AI instance, I was willing to give it a whirl.

Docker’s website has good instructions for installing Docker. But DO NOT install it via Snap. That causes all sorts of issues with your GPU. Use this approach:

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

# and install docker
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
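
Before going further, it’s worth a quick sanity check that Docker itself works and that you’re running the apt version rather than a Snap:

# The classic smoke test; pulls a tiny image and prints a hello message:
sudo docker run hello-world

# The apt package installs to /usr/bin/docker; a path under /snap means
# the Snap version is still lurking:
which docker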

You might then want to use Open WebUI’s command for installing their Docker container, but don’t do that yet or you’re likely to get an error. First, you need to install NVIDIA’s Container Toolkit for Docker (assuming you have an NVIDIA GPU). Here’s how you do that (thanks to Stack Overflow, of course):

# Configure the repository:
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
&& curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list \
&& sudo apt-get update

# install the NVIDIA Container Toolkit package:
sudo apt-get install -y nvidia-container-toolkit

# configure the container runtime for Docker:
sudo nvidia-ctk runtime configure --runtime=docker

# restart Docker:
sudo systemctl restart docker
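
At this point you can confirm that containers actually see the GPU; this is the standard check from NVIDIA’s own docs:

# Run nvidia-smi inside a throwaway container. If the toolkit is wired up
# correctly, this prints the same GPU table you'd see on the host:
sudo docker run --rm --gpus all ubuntu nvidia-smi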

Now that you’ve done that, hopefully everything is in place to install Open WebUI’s Docker container per their instructions:

sudo docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama

It was trying to run that last command both (a) without installing Docker first and (b) without installing the NVIDIA Container Toolkit that gave me tons of problems. Installing everything in the right order should save you the hours it took me to get this working.
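
Once the container is up, you can confirm it’s running and watch the startup logs (the image is large, so the first pull takes a while):

# Should show the open-webui container with status "Up":
sudo docker ps --filter name=open-webui

# Follow the startup logs; Ctrl+C stops following without stopping the container:
sudo docker logs -f open-webui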

Assuming you don’t get any errors, open your browser and enter the URL for Open WebUI (port 3000 on the host side of the -p 3000:8080 mapping above):

http://127.0.0.1:3000/

That ultimately got me a functioning instance of Open WebUI that was connected to Ollama.

