Download and install Ollama from https://ollama.com/
ollama list
ollama run mistral:7b-instruct # Downloads Mistral on first run, then opens an interactive chat: <https://ollama.com/library/mistral:instruct>
My installed models (as of Dec 28, 2024)
❯ ollama list
NAME                                        ID              SIZE     MODIFIED
deepseek-coder-v2:16b-lite-instruct-q8_0    ef033cab4dae    16 GB    2 days ago
qwen2.5-coder:7b-instruct                   2b0496514337    4.7 GB   2 days ago
mistral:7b-instruct                         f974a74358d6    4.1 GB   3 days ago
qwen2.5:72b-instruct-q8_0                   23f2cb48bb9a    77 GB    3 days ago
mistral-large:123b-instruct-2411-q4_K_M     bbcf36dc47ad    73 GB    3 days ago
qwq:32b-preview-q8_0                        9c62a2e770b7    34 GB    3 days ago
gemma2:9b-instruct-q8_0                     54faa8324fdf    9.8 GB   3 days ago
qwen2.5-coder:32b-instruct-q8_0             f37bbf27ec01    34 GB    3 days ago
llama3.2-vision:90b-instruct-q8_0           e65e1af5e383    95 GB    3 days ago
llama3.3:70b-instruct-q8_0                  d5b5e1b84868    74 GB    3 days ago
mixtral:8x22b-instruct                      e8479ee1cb51    79 GB    3 days ago
For multiline input, you can wrap text with """:
>>> """Hello,
... world!
... """
I'm a basic program that prints the famous "Hello, world!" message to the console.
$ ollama run llava "What's in this image? /Users/hlb/Desktop/smile.png"
The image features a yellow smiley face, which is likely the central focus of the picture.
$ ollama run llama3.2 "Summarize this file: $(cat README.md)"
Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.
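Beyond the interactive REPL, the server Ollama runs in the background also exposes an HTTP API on port 11434 (the default). A minimal sketch of a one-shot generation request, assuming `mistral:7b-instruct` is already pulled:

```shell
# POST a prompt to the local Ollama server (default port 11434)
curl http://localhost:11434/api/generate -d '{
  "model": "mistral:7b-instruct",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

With `"stream": false` the response is a single JSON object whose `response` field holds the answer; leave streaming on (the default) to get newline-delimited JSON chunks instead.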
Use Homebrew to install Python 3.12
brew install [email protected] # Don't install 3.13
# brew install python-tk@3.12 # If you need Tcl/Tk GUI
Install pipx
<aside> ⚠️
Don’t use Homebrew to install pipx; it will pull in Python 3.13 as a dependency.
</aside>
python3.12 -m pip install pipx --break-system-packages
python3.12 -m pipx ensurepath
Understand how to use pipx: https://github.com/pypa/pipx
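As a quick orientation, a sketch of the pipx commands you'll use most (the tool names `httpie` and `cowsay` are just example packages):

```shell
pipx install httpie        # install a Python CLI tool, isolated in its own venv
pipx list                  # show every tool pipx manages
pipx run cowsay -t hello   # run a tool once without installing it
pipx upgrade-all           # upgrade all managed tools
```

The point of pipx over plain pip is isolation: each tool gets its own virtual environment, so their dependencies never conflict with each other or with your projects.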
(optional) Install uv
pipx install uv
Understand how to use uv: https://github.com/astral-sh/uv
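A minimal uv workflow, as a sketch: it replaces `python -m venv` and `pip` with much faster equivalents (the pinned `requests` version here is just an example):

```shell
uv venv .venv                          # create a virtual environment
source .venv/bin/activate              # activate it
printf 'requests==2.32.3\n' > requirements.txt
uv pip install -r requirements.txt     # drop-in pip replacement, much faster
```

`uv pip` accepts the same requirement syntax as pip, so existing requirements files work unchanged.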