Ollama: How to Install It on Linux



First, update your package lists and upgrade any installed packages:

sudo apt update && sudo apt upgrade -y

Then install Ollama with the official install script:

curl -fsSL https://ollama.com/install.sh | sh

If your system doesn’t have an NVIDIA or AMD GPU, Ollama falls back to running models on the CPU.

To download a model, use the pull command:

ollama pull <model-name>
ollama pull llama2

To start an interactive session with a model (pulling it first if needed):

ollama run <model-name>
ollama run llama2
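Besides the interactive CLI, Ollama exposes a REST API on port 11434 that you can call from your own programs. Below is a minimal sketch in Python using only the standard library; it assumes Ollama is running at its default address and that the llama2 model has already been pulled.

```python
import json
import urllib.request

# Default address the Ollama server listens on after installation.
OLLAMA_URL = "http://127.0.0.1:11434"

def build_generate_request(prompt, model="llama2"):
    """Build the JSON body for Ollama's /api/generate endpoint.

    stream=False asks for a single JSON response instead of a
    stream of partial chunks.
    """
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

def generate(prompt, model="llama2"):
    """Send one non-streaming prompt and return the model's reply text."""
    req = urllib.request.Request(
        OLLAMA_URL + "/api/generate",
        data=build_generate_request(prompt, model).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With the server running, `generate("Why is the sky blue?")` returns the model's answer as a string.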

For a ChatGPT-style web interface, you can run Open WebUI in Docker and point it at the local Ollama server:

docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
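Before (or after) starting Open WebUI, it can help to confirm that Ollama is reachable at the address passed in OLLAMA_BASE_URL. A small sketch, assuming Ollama's documented /api/tags endpoint, which lists the models you have pulled:

```python
import json
import urllib.request
from urllib.error import URLError

def list_models(base_url="http://127.0.0.1:11434"):
    """Return names of locally pulled models, or None if Ollama is unreachable."""
    try:
        with urllib.request.urlopen(base_url + "/api/tags", timeout=5) as resp:
            data = json.loads(resp.read())
        return [m["name"] for m in data.get("models", [])]
    except (URLError, OSError):
        # Server not running, wrong port, or connection refused.
        return None
```

If this returns None, check that the Ollama service is running before troubleshooting the Open WebUI container itself.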
