Llama 3.1
Llama 3.1 is a state-of-the-art model family from Meta, available in 8B, 70B, and 405B parameter sizes.
llama3.1-8b
8B parameters
Base
Pull this model
Use the following command with the HoML CLI:
homl pull llama3.1:8b
Resource Requirements
| Quantization | Disk Space | GPU Memory |
|---|---|---|
| BF16 | 16 GB | 16 GB |
llama3.1-8b-instruct
8B parameters
Instruction-Tuned
Pull this model
Use the following command with the HoML CLI:
homl pull llama3.1:8b-instruct
Resource Requirements
| Quantization | Disk Space | GPU Memory |
|---|---|---|
| BF16 | 16 GB | 16 GB |
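
Once an instruction-tuned model is pulled, you can talk to it through HoML's OpenAI-compatible server. The sketch below is a minimal example, not the definitive workflow: the base URL, port, and the assumption that no real API key is required are placeholders, so substitute the address your local HoML server actually listens on.

```python
# Minimal sketch: chat with the pulled instruct model via an
# OpenAI-compatible endpoint. base_url and api_key are assumptions;
# replace them with your local HoML server's actual settings.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:7456/v1",  # assumed local HoML address
    api_key="not-needed",                 # local servers often ignore the key
)

response = client.chat.completions.create(
    model="llama3.1:8b-instruct",
    messages=[
        {"role": "user", "content": "Summarize Llama 3.1 in one sentence."}
    ],
)
print(response.choices[0].message.content)
```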
llama3.1-70b
70B parameters
Base
Pull this model
Use the following command with the HoML CLI:
homl pull llama3.1:70b
Resource Requirements
| Quantization | Disk Space | GPU Memory |
|---|---|---|
| BF16 | 140 GB | 140 GB |
llama3.1-70b-instruct
70B parameters
Instruction-Tuned
Pull this model
Use the following command with the HoML CLI:
homl pull llama3.1:70b-instruct
Resource Requirements
| Quantization | Disk Space | GPU Memory |
|---|---|---|
| BF16 | 140 GB | 140 GB |
llama3.1-405b
405B parameters
Base
Pull this model
Use the following command with the HoML CLI:
homl pull llama3.1:405b
Resource Requirements
| Quantization | Disk Space | GPU Memory |
|---|---|---|
| BF16 | 810 GB | 810 GB |
llama3.1-405b-instruct
405B parameters
Instruction-Tuned
Pull this model
Use the following command with the HoML CLI:
homl pull llama3.1:405b-instruct
Resource Requirements
| Quantization | Disk Space | GPU Memory |
|---|---|---|
| BF16 | 810 GB | 810 GB |