Zack Saadioui
8/27/2024
To install Ollama on Linux, run the official install script:

```bash
curl -fsSL https://ollama.com/install.sh | sh
```

Alternatively, Ollama ships as a Docker image:

```bash
docker pull ollama/ollama
```
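After pulling the image, you need to start the server container. A typical invocation, based on Ollama's Docker documentation (the volume name and container name here are just conventions, not requirements):

```bash
# Run the server in the background, persisting downloaded models in a named volume
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```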
Once installed, everything goes through the `ollama` CLI. To chat with Llama 3.1:

```bash
ollama run llama3.1
```

Ollama's model library covers a wide range of sizes. A few examples:

| Model | Parameters | Size | Command |
|---|---|---|---|
| Llama 3.1 | 8B | 4.7GB | `ollama run llama3.1` |
| Llama 3.1 | 70B | 40GB | `ollama run llama3.1:70b` |
| Llama 3.1 | 405B | 231GB | `ollama run llama3.1:405b` |
| Phi 3 Mini | 3.8B | 2.3GB | `ollama run phi3` |
| Mistral | 7B | 4.1GB | `ollama run mistral` |
| Gemma 2 | 9B | 5.5GB | `ollama run gemma2` |
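Besides the interactive session, `ollama run` also accepts a prompt directly, which is handy for one-shot, scriptable use. A quick sketch:

```bash
# Ask a single question without entering the interactive REPL
ollama run llama3.1 "Summarize why the sky is blue in one sentence."
```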
Ollama can also import models distributed as GGUF files. Create a file named `Modelfile` with a `FROM` instruction pointing at the local weights:

```
FROM ./vicuna-33b.Q4_0.gguf
```

Create the model in Ollama:

```bash
ollama create example -f Modelfile
```

Then run it:

```bash
ollama run example
```
Models can also be customized with a prompt of your own. First, pull the model you want to build on, such as `llama3.1`:

```bash
ollama pull llama3.1
```

Then write a `Modelfile` that builds on it.
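A minimal sketch of such a `Modelfile`, using the standard `FROM`, `PARAMETER`, and `SYSTEM` instructions (the temperature value and persona here are illustrative, not prescribed):

```
# Build on the pulled base model
FROM llama3.1

# Illustrative: higher temperature makes output more creative, lower more focused
PARAMETER temperature 1

# Illustrative system prompt defining the assistant's persona
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""
```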
Next, create and run the customized model:
```bash
ollama create mymodel -f ./Modelfile
ollama run mymodel
```
The CLI also covers day-to-day model management. To download a model without running it:

```bash
ollama pull llama3.1
```

To remove a model:

```bash
ollama rm llama3.1
```

To copy a model:

```bash
ollama cp llama3.1 my-model
```

To list the models installed on your machine:

```bash
ollama list
```
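Because these are ordinary shell commands, they compose cleanly in scripts. A small sketch, using model names taken from the table above:

```bash
# Pull several models in sequence
for model in llama3.1 phi3 mistral; do
  ollama pull "$model"
done
```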
Ollama also exposes a REST API on port 11434, which you can exercise with `curl`. To generate a completion:

```bash
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Why is the sky blue?"
}'
```
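By default the API streams the answer back as a series of JSON objects. A minimal sketch of getting a single JSON reply instead, assuming `jq` is installed (`"stream": false` is part of the documented API):

```bash
# Request a non-streaming completion and print only the generated text
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Why is the sky blue?",
  "stream": false
}' | jq -r '.response'
```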
To chat with a model instead:

```bash
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ]
}'
```
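The chat endpoint is stateless, so a multi-turn conversation resends the earlier messages on each request. A sketch of a second turn (the assistant message here is an illustrative placeholder, not real model output; non-streaming, assuming `jq`):

```bash
# Second turn: include the prior exchange in "messages" to keep context
curl -s http://localhost:11434/api/chat -d '{
  "model": "llama3.1",
  "stream": false,
  "messages": [
    { "role": "user", "content": "why is the sky blue?" },
    { "role": "assistant", "content": "Mostly Rayleigh scattering: air molecules scatter blue light more than red." },
    { "role": "user", "content": "Is that also why sunsets look red?" }
  ]
}' | jq -r '.message.content'
```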