This page describes how to run an AI model locally, with a functional chat web interface, on Windows using WSL. The Linux distro used is Ubuntu 20.04.6 LTS, and the tools used are Ollama, Open WebUI, and Docker. The AI model used is deepseek-r1:1.5b.
Author: angeljsd
Date: 02/01/2025
Install Ollama and download an AI model (the basic DeepSeek model in this case)
In WSL run:
curl -fsSL https://ollama.com/install.sh | sh
ollama -v
sudo service ollama start
ollama run deepseek-r1:1.5b
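Once the service is running, you can smoke-test it without the interactive prompt by calling Ollama's REST API on its default port, 11434. A minimal sketch (the prompt text is just an example):

```shell
# Ask the local Ollama server a one-off question via its REST API.
# Assumes the service started above is listening on the default port 11434.
PAYLOAD='{"model": "deepseek-r1:1.5b", "prompt": "Hello", "stream": false}'
curl -s http://localhost:11434/api/generate -d "$PAYLOAD"
```

If this returns a JSON response with a `response` field, the model is being served correctly and Open WebUI will be able to reach it later.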
Install Docker in WSL (if needed)
sudo apt update
sudo apt install docker.io
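WSL does not always start services automatically, so after installing you may need to start the Docker daemon by hand and confirm the CLI works. A quick check (version output will vary with your install):

```shell
# Start the Docker daemon (needed on WSL setups without systemd)
sudo service docker start
# Confirm the client is installed and can report its version
docker --version
```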
Find your WSL IP
In WSL run:
ip addr show eth0 | grep 'inet ' | awk '{print $2}' | cut -d'/' -f1
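To see what each stage of that pipeline does, here it is run against a canned line of `ip addr show eth0` output (the 172.29.x address below is a made-up example; your WSL IP will differ):

```shell
# grep keeps the IPv4 line, awk takes the "addr/prefix" field,
# cut drops the "/20" prefix length, leaving just the address.
sample='    inet 172.29.105.4/20 brd 172.29.111.255 scope global eth0'
echo "$sample" | grep 'inet ' | awk '{print $2}' | cut -d'/' -f1
# prints 172.29.105.4
```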
To interact with the model through a chat-like web interface, we'll use Open WebUI.
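A sketch of running Open WebUI as a Docker container, based on the project's documented quick-start. The host port (3000), volume name, and the `OLLAMA_BASE_URL` wiring are assumptions you may need to adapt; replace `<WSL_IP>` with the address found in the previous step:

```shell
# Hedged sketch: run Open WebUI in Docker, pointed at the Ollama server.
# <WSL_IP> is a placeholder for the address printed by the ip/grep pipeline.
sudo docker run -d \
  -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://<WSL_IP>:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
# Then open http://localhost:3000 in a Windows browser
```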