LLM Deep-dive: Mixtral 8x7b

By Matous Eibich, Marcus Zethraeus & Stefan Wendin
December 15, 2023

Introduction

In the rapidly evolving world of Artificial Intelligence, Large Language Models (LLMs) have become a cornerstone for numerous applications, ranging from generative text creation to programming assistants. Each new model aims to surpass its predecessors in both scope and performance. Among these advancements, Mistral AI's recent introduction of the Mixtral 8x7b model marks a significant leap forward, setting a new state of the art among open-source LLMs. Unique in its architecture, Mixtral 8x7b is composed of multiple "experts," a design that takes a novel approach to language processing. In this article, we'll delve into the intricacies of Mixtral 8x7b, exploring its innovative architecture, impressive capabilities, and the challenges it brings to the table.

The Architecture of Mixtral 8x7b

The most interesting part of this model (besides its impressive performance) is its Mixture of Experts (MoE) architecture. This design consists of two key elements: a router (gating) network that decides, for each token at each layer, which experts should process it, and the expert networks themselves, the feed-forward blocks that do the actual processing. In Mixtral 8x7b, every layer contains eight experts, and the router sends each token to just two of them, combining their outputs. Because only a fraction of the model's parameters is active for any given token, inference is far cheaper than for a dense model of comparable total size. A minimal sketch of such a layer follows the diagram below.

Architecture diagram of the MoE layer from the "Outrageously Large Neural Networks" paper
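To make the routing idea concrete, here is a minimal, self-contained PyTorch sketch of a sparse MoE layer with top-2 routing in the spirit of Mixtral's design. It is an illustrative toy, not Mixtral's actual implementation: the class names, dimensions, and the simple softmax-over-top-k gating are assumptions chosen for clarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Expert(nn.Module):
    """One feed-forward block; Mixtral has eight of these per transformer layer."""
    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_ff),
            nn.SiLU(),
            nn.Linear(d_ff, d_model),
        )

    def forward(self, x):
        return self.net(x)


class SparseMoELayer(nn.Module):
    """Router + experts: each token is processed by only top_k of the experts."""
    def __init__(self, d_model: int, d_ff: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)  # gating network
        self.experts = nn.ModuleList([Expert(d_model, d_ff) for _ in range(num_experts)])
        self.top_k = top_k

    def forward(self, x):
        # x: (num_tokens, d_model)
        logits = self.router(x)                               # (num_tokens, num_experts)
        weights, indices = torch.topk(logits, self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)                  # normalize over the chosen experts

        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e                  # tokens sent to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out


# Toy usage: route four random "token" vectors through the layer.
layer = SparseMoELayer(d_model=64, d_ff=256)
tokens = torch.randn(4, 64)
print(layer(tokens).shape)  # torch.Size([4, 64])
```

Even in this toy version, the core efficiency argument is visible: the layer holds the parameters of all eight experts, but each token only pays the compute cost of two of them.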

Performance

Mixtral 8x7b's performance is a benchmark of interest for those seeking the leading edge in open-source language models. It outperforms Llama 2 70B, previously the best available open-source LLM, in several key areas, showcasing its advanced understanding and language-processing abilities. This achievement positions Mixtral 8x7b at the forefront of open-source LLM options, making it a prime choice for developers and businesses prioritizing accessibility and transparency.

Mixtral 8x7B performance. Source: https://mistral.ai/news/mixtral-of-experts/

While Mixtral 8x7b demonstrates a strong standing within the open-source domain, it's important to note that it doesn't quite match the performance of closed-source frontrunners like GPT-4 and Gemini Ultra. The decision between using an open-source model like Mixtral 8x7b and opting for a closed-source alternative is a nuanced one, often based on factors such as price, the need for customization, and specific use-case requirements. For further guidance on making this choice, our previous article 'How to Choose the Right LLM for Your Use Case' offers valuable insights and considerations.

Limitations

Running Mixtral 8x7b at full 16-bit precision requires over 90 GB of VRAM, which surpasses the capacity of standard home computers and necessitates high-end GPUs of the kind typically found in cloud computing environments or specialized AI research labs. Quantization reduces the memory footprint considerably, but even quantized, the VRAM requirements remain substantial.

Source: https://huggingface.co/blog/mixtral
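For readers who want to try the model despite these constraints, the sketch below shows one way to load Mixtral 8x7b with 4-bit quantization via the Hugging Face transformers and bitsandbytes libraries, along the lines of the Hugging Face post linked above. The model ID and the memory figures in the comments are indicative assumptions; actual requirements depend on your GPUs, drivers, and library versions.

```python
# Sketch: loading Mixtral 8x7b with 4-bit (NF4) quantization to cut VRAM needs
# from ~90+ GB in 16-bit down to roughly the 25-30 GB range (approximate figures).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"  # instruct-tuned variant on the Hugging Face Hub

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # spread layers across available GPUs (and CPU if needed)
)

prompt = "Explain the Mixture of Experts architecture in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```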

Conclusion

Mixtral 8x7b marks an important step for open-source LLMs, standing as one of the best, and arguably the best, open-source models to date. The model leverages a Mixture of Experts (MoE) architecture that brings substantial gains in efficiency by activating only a fraction of its parameters per token. Its VRAM demands still call for considerable computing power, aligning it more with research and enterprise-level applications than home use. As AI continues to advance, models like Mixtral 8x7b will likely become more accessible and continue to push the boundaries of what open-source AI can accomplish.

Explore Mixtral 8x7b on Hugging Face Chat.
