Mixtral

Mixtral models are a series of Sparse Mixture of Experts (SMoE) models from Mistral AI. In each layer, a router sends every token to a small subset of expert feed-forward networks, so only a fraction of the total parameters (roughly 13B of the 47B) is active per token. This keeps inference noticeably faster than a dense model of comparable total size.
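
For intuition, the sketch below shows top-2 expert routing in PyTorch. It is purely illustrative: the class name, the plain feed-forward experts, and the dimensions are simplifications assumed for this example, not Mixtral's or HoML's actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoELayer(nn.Module):
    """Illustrative sparse MoE layer: each token is routed to 2 of 8 experts."""

    def __init__(self, hidden_size=4096, ffn_size=14336, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # The router scores each token against every expert.
        self.router = nn.Linear(hidden_size, num_experts, bias=False)
        # Each expert is shown here as a plain feed-forward block for brevity.
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(hidden_size, ffn_size),
                nn.SiLU(),
                nn.Linear(ffn_size, hidden_size),
            )
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (num_tokens, hidden_size)
        logits = self.router(x)                           # (num_tokens, num_experts)
        weights, indices = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)              # renormalize over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e              # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

Only the two selected experts run for each token, which is why the active parameter count per token stays far below the 47B total even though all expert weights must be resident in memory.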

mixtral-8x7b-v0.1

47B parameters · Base

Pull this model

Use the following command with the HoML CLI:

homl pull mixtral:8x7b-v0.1

Resource Requirements

Quantization    Disk Space      GPU Memory
BF16            96.8 GB         90 GB

mixtral-8x7b-instruct-v0.1

47B parameters · Instruction-Tuned

Pull this model

Use the following command with the HoML CLI:

homl pull mixtral:8x7b-instruct-v0.1

Resource Requirements

Quantization    Disk Space      GPU Memory
BF16            96.8 GB         90 GB

mixtral-8x7b-instruct-v0.1-4bit

47B parameters · Instruction-Tuned

Pull this model

Use the following command with the HoML CLI:

homl pull mixtral:8x7b-instruct-v0.1-4bit

Resource Requirements

Quantization    Disk Space      GPU Memory
4-bit           23-27 GB        23-27 GB

mixtral-8x7b-instruct-v0.1-2bit

47B parameters · Instruction-Tuned

Pull this model

Use the following command with the HoML CLI:

homl pull mixtral:8x7b-instruct-v0.1-2bit

Resource Requirements

Quantization    Disk Space      GPU Memory
2-bit           12.6-18.2 GB    12-16 GB
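
As a rough sanity check on the tables above, the footprint of the weights alone can be estimated from the parameter count and the bit width. The short Python sketch below is an approximation assumed for illustration, not HoML's sizing logic; real GPU usage also includes the KV cache and runtime overhead.

# Back-of-the-envelope estimate of weight memory at each bit width.
# Ignores activations, KV cache, and runtime overhead, so actual usage is higher.
TOTAL_PARAMS = 47e9  # the "47B parameters" figure listed above

for label, bits in [("BF16", 16), ("4-bit", 4), ("2-bit", 2)]:
    weight_gb = TOTAL_PARAMS * bits / 8 / 1e9
    print(f"{label:>5}: ~{weight_gb:.1f} GB of weights")

# BF16 : ~94.0 GB  -> close to the 96.8 GB / 90 GB row
# 4-bit: ~23.5 GB  -> within the 23-27 GB range
# 2-bit: ~11.8 GB  -> within the 12.6-18.2 GB range once quantization
#                     metadata and overhead are included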