Run DeepSeek-R1 locally with Ollama on Docker

Link: https://www.linkedin.com/pulse/run-deepseek-r1-locally-ollama-docker-adrian-escutia-wtikc/

Adrian Escutia

Simplifying Solutions in Airgap and Enterprise-Restricted Environments

Would you like to run LLMs locally? Here is how you can do it with Ollama and the new DeepSeek-R1 model that is pushing the boundaries of AI. 🚀

For those of us passionate about pushing the boundaries of AI, this is a game changer. 💡

Being able to run powerful language models locally, with the flexibility to fine-tune and experiment in a more personalized and secure environment, opens up so many possibilities.

Steps to run DeepSeek-R1 locally with Ollama on Docker:

# Install Ollama
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
# Pull the DeepSeek-R1 model
docker exec -it ollama ollama run deepseek-r1:7b
# Start chatting with DeepSeek-R1 - Web UI
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
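
Optionally, you can verify that the Ollama container is serving DeepSeek-R1 before (or instead of) using the Web UI. Here is a minimal sketch against Ollama's REST API on port 11434; the prompt is just an example:

# Ask the model a quick question over the local REST API
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:7b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'

If you get a JSON reply with a "response" field, the model is up and answering.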

Open http://localhost:3000 in your browser, and you are done!
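
If your machine has an NVIDIA GPU, the Ollama container can also be started with GPU acceleration. This is a sketch assuming the NVIDIA Container Toolkit is already installed on the host:

# Install Ollama with GPU support (requires the NVIDIA Container Toolkit)
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama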

Let's make AI work for us: locally, efficiently, and creatively.