6. Install Ollama and set-up Large Language Models

Warning

Requires Ollama to run. This particular setup was tested with Ollama version 0.1.27.

Ollama is used for the unstructured generative component of Privacy Fingerprint. It provides a simple interface to download quantised models and run inference locally.

6.1 Ollama Installation

Install Ollama by downloading and running the official install script with curl:

curl https://ollama.ai/install.sh | sh

6.2 Start Ollama

Either open the desktop application, or open a terminal and enter ollama serve.
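Once started, Ollama listens on http://localhost:11434 by default. As a quick sanity check, a short script can probe that address to confirm the server is up; this is a sketch assuming the default host and port have not been changed.

```python
import urllib.request

def ollama_is_running(host: str = "http://localhost:11434") -> bool:
    """Return True if an Ollama server responds at `host`, False otherwise."""
    try:
        with urllib.request.urlopen(host, timeout=2) as resp:
            # The Ollama root endpoint answers a plain GET with HTTP 200.
            return resp.status == 200
    except OSError:
        # Covers connection refused, DNS failure, timeouts, etc.
        return False

print(ollama_is_running())
```

If this prints False, make sure `ollama serve` (or the desktop application) is running before continuing.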

6.3 Ollama models

To download a model, open a terminal and enter ollama pull <model_name>. The example notebooks in this repository currently use llama2:latest (digest fe938a131f40).

See the Ollama model library for all available models.
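Once a model has been pulled, it can be queried from Python over Ollama's local REST API. The sketch below targets the /api/generate endpoint with the default host and port; the helper names are our own, not part of any library.

```python
import json
import urllib.request

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint.

    "stream": False requests a single JSON response instead of
    a stream of partial chunks.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = "llama2:latest",
             url: str = "http://localhost:11434/api/generate") -> str:
    """Send a prompt to a locally running Ollama server and return the reply."""
    payload = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Calling `generate("Why is the sky blue?")` requires the Ollama server to be running and llama2:latest to have been pulled.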

6.4 Other models

It is also possible to use your own models that are not listed in the Ollama model library. Ollama supports the .gguf format, and many quantised and non-quantised models can be found on the Hugging Face Hub.

To quantise a model yourself, see the resources on setting up open-source LLMs with llama.cpp and the introductory reading on quantisation.
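As a sketch of how a local .gguf file can be wired in: Ollama reads a Modelfile that points at the weights. The file name my-model.gguf and model name my-model below are hypothetical placeholders.

```
# Modelfile -- tells Ollama where to find the local GGUF weights
FROM ./my-model.gguf
```

The model can then be registered with ollama create my-model -f Modelfile and used like any other model, e.g. ollama run my-model or ollama pull-style references in the notebooks.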