Llama 3.2
Meta's Llama 3.2 goes small with 1B and 3B models.
llama3.2-1b
1B parameters
Base
Pull this model
Use the following command with the HoML CLI:
homl pull llama3.2:1b
Resource Requirements
| Quantization | Disk Space | GPU Memory |
| --- | --- | --- |
| BF16 | 2 GB | 2 GB |
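The BF16 figures in these tables follow the usual two-bytes-per-parameter rule of thumb for weight storage. The sketch below just writes that rule out; it is not a HoML command, and it ignores runtime overhead such as the KV cache and activations, so treat the results as lower bounds and leave some GPU headroom.

```python
# Rough rule of thumb, not a HoML feature: BF16 stores each weight in
# 2 bytes, so weight storage scales at roughly 2 GB per billion parameters.
def bf16_weight_footprint_gb(num_params: float) -> float:
    bytes_per_param = 2  # BF16 = 16 bits per weight
    return num_params * bytes_per_param / 1e9

print(bf16_weight_footprint_gb(1e9))  # ~2 GB, matching the 1B tables
print(bf16_weight_footprint_gb(3e9))  # ~6 GB, matching the 3B tables
```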
llama3.2-1b-instruct
1B parameters
Instruction-Tuned
Pull this model
Use the following command with the HoML CLI:
homl pull llama3.2:1b-instruct
Resource Requirements
| Quantization | Disk Space | GPU Memory |
| --- | --- | --- |
| BF16 | 2 GB | 2 GB |
llama3.2-3b
3B parameters
Base
Pull this model
Use the following command with the HoML CLI:
homl pull llama3.2:3b
Resource Requirements
| Quantization | Disk Space | GPU Memory |
| --- | --- | --- |
| BF16 | 6 GB | 6 GB |
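The base variants (llama3.2:1b and llama3.2:3b) are plain next-token predictors rather than chat models, so they are best queried as raw text completions. A minimal sketch, assuming HoML exposes an OpenAI-compatible server on localhost; the base URL, port, and API key handling below are placeholders to verify against your HoML setup.

```python
# Sketch only: assumes HoML serves an OpenAI-compatible API locally.
# The base URL/port below are placeholders -- confirm them against
# your HoML installation before running.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # placeholder; use your HoML server's port
    api_key="not-needed-for-local-use",   # local servers typically ignore the key
)

# Base models continue the prompt; they are not tuned to follow instructions.
completion = client.completions.create(
    model="llama3.2:3b",
    prompt="The three laws of thermodynamics are",
    max_tokens=128,
)
print(completion.choices[0].text)
```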
llama3.2-3b-instruct
3B parameters
Instruction-Tuned
Pull this model
Use the following command with the HoML CLI:
homl pull llama3.2:3b-instruct
Resource Requirements
| Quantization | Disk Space | GPU Memory |
| --- | --- | --- |
| BF16 | 6 GB | 6 GB |
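Once an instruction-tuned variant is pulled, it can be queried as a chat model. A minimal sketch, again assuming HoML serves an OpenAI-compatible endpoint locally; the base URL and port are placeholders to confirm against your installation.

```python
# Sketch only: assumes an OpenAI-compatible chat endpoint served by HoML.
# Base URL/port and model identifier are placeholders to verify locally.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # placeholder; use your HoML server's port
    api_key="not-needed-for-local-use",
)

response = client.chat.completions.create(
    model="llama3.2:3b-instruct",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize what BF16 precision means in one sentence."},
    ],
    max_tokens=128,
)
print(response.choices[0].message.content)
```

The same call works for llama3.2:1b-instruct by swapping the model name.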