NVIDIA, in collaboration with Mistral, has unveiled Mistral NeMo 12B, a groundbreaking language model that promises leading performance across a variety of benchmarks. According to the NVIDIA Technical Blog, this advanced model is optimized to run on a single GPU, making it a cost-effective and efficient solution for text generation applications.
Mistral NeMo 12B
The Mistral NeMo 12B model is a dense transformer with 12 billion parameters that uses a large multilingual vocabulary of roughly 131,000 tokens. It excels at a wide range of tasks, including common-sense reasoning, coding, mathematics, and multilingual chat. Its results on benchmarks such as HellaSwag, Winogrande, and TriviaQA highlight its capabilities relative to comparable models like Gemma 2 9B and Llama 3 8B.
| Model | Context window | HellaSwag (0-shot) | Winogrande (0-shot) | NaturalQuestions (5-shot) | TriviaQA (5-shot) | MMLU (5-shot) | OpenBookQA (0-shot) | CommonSenseQA (0-shot) | TruthfulQA (0-shot) | MBPP (pass@1, 3-shot) |
|---|---|---|---|---|---|---|---|---|---|---|
| Mistral NeMo 12B | 128k | 83.5% | 76.8% | 31.2% | 73.8% | 68.0% | 60.6% | 70.4% | 50.3% | 61.8% |
| Gemma 2 9B | 8k | 80.1% | 74.0% | 29.8% | 71.3% | 71.5% | 50.8% | 60.8% | 46.6% | 56.0% |
| Llama 3 8B | 8k | 80.6% | 73.5% | 28.2% | 61.0% | 62.3% | 56.4% | 66.7% | 43.0% | 57.2% |
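To make the benchmark discussion concrete, the sketch below loads the instruction-tuned checkpoint with the Hugging Face transformers library and runs a short chat completion. The model ID, prompt, and generation settings are illustrative assumptions, not from the announcement; in bf16 the 12B weights need roughly 24 GB, so a single large-memory GPU is assumed.

```python
# Minimal sketch: run Mistral NeMo 12B with Hugging Face transformers.
# Model ID and settings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-Nemo-Instruct-2407"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~24 GB of weights; fits on one large GPU
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain FP8 inference in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Deterministic decoding for a short answer.
outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```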
Mistral NeMo can process vast and complex information within its 128K-token context window, producing coherent and contextually relevant output. The model is trained on Mistral’s proprietary dataset, which contains a significant amount of multilingual and code data, improving feature learning and reducing bias.
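Because the 128K window is a hard budget for prompt plus generated tokens, it is worth counting tokens before submitting a long document. A minimal sketch, assuming the same tokenizer as above and a hypothetical input file:

```python
# Sketch: check that a long prompt fits in the 128K context window.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-Nemo-Instruct-2407")
print(len(tokenizer))  # ~131,000-token multilingual vocabulary

CONTEXT_WINDOW = 128_000     # tokens, per the table above
RESERVED_FOR_OUTPUT = 1_024  # leave room for the model's answer

with open("long_report.txt") as f:  # hypothetical input document
    document = f.read()

n_tokens = len(tokenizer.encode(document))
print(f"Prompt uses {n_tokens} of {CONTEXT_WINDOW} tokens")
assert n_tokens + RESERVED_FOR_OUTPUT <= CONTEXT_WINDOW, "document must be chunked"
```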
Optimized training and inference
Mistral NeMo training is powered by NVIDIA Megatron-LM, a PyTorch-based library that provides GPU-optimized techniques and system-level innovations. The library includes key components such as attention mechanisms, transformer blocks, and distributed checkpointing to facilitate large-scale model training.
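The announcement does not include Megatron-LM code, but the unit it parallelizes and fuses is the standard pre-norm transformer block. Below is a generic PyTorch sketch of such a block; the dimensions are illustrative and this is not Mistral NeMo's exact architecture (which differs in details like normalization and attention variants), nor the Megatron-LM implementation itself.

```python
# Sketch of a pre-norm dense transformer block, the kind of unit
# Megatron-LM optimizes. Sizes are illustrative, not the real config.
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    def __init__(self, d_model: int = 1024, n_heads: int = 16):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Self-attention sublayer with residual connection
        # (causal masking omitted for brevity).
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        x = x + attn_out
        # Feed-forward sublayer with residual connection.
        return x + self.mlp(self.norm2(x))

block = TransformerBlock()
print(block(torch.randn(2, 16, 1024)).shape)  # torch.Size([2, 16, 1024])
```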
For inference, Mistral NeMo leverages the TensorRT-LLM engine, which compiles the model's layers into optimized CUDA kernels. The resulting engines maximize inference performance through techniques such as pattern matching and kernel fusion. The model also supports inference in FP8 precision via NVIDIA TensorRT Model Optimizer, yielding a smaller model with a lower memory footprint without sacrificing accuracy.
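The announcement does not show the quantization step, but TensorRT Model Optimizer exposes a post-training quantization API in Python. A minimal sketch, assuming the modelopt package's documented mtq.quantize entry point; the checkpoint name and calibration data are placeholders, and the current modelopt docs should be checked before relying on this.

```python
# Sketch: FP8 post-training quantization with TensorRT Model Optimizer.
# Checkpoint name and calibration data are placeholders.
import torch
import modelopt.torch.quantization as mtq
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-Nemo-Instruct-2407"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16).cuda()

calib_texts = ["A few representative prompts...", "...drawn from your workload."]

def forward_loop(m):
    # Run calibration batches so activation ranges can be measured for FP8.
    with torch.no_grad():
        for text in calib_texts:
            ids = tokenizer(text, return_tensors="pt").input_ids.cuda()
            m(ids)

model = mtq.quantize(model, mtq.FP8_DEFAULT_CFG, forward_loop)
# The quantized model can then be exported and compiled into a TensorRT-LLM engine.
```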
The ability to run Mistral NeMo models on a single GPU improves compute efficiency, reduces costs, and enhances security and privacy. This makes it suitable for a variety of commercial applications, including document summarization, classification, multi-turn conversations, language translation, and code generation.
Deployment using NVIDIA NIM
Mistral NeMo models are available as NVIDIA NIM inference microservices, designed to simplify the deployment of generative AI models across NVIDIA’s accelerated infrastructure. NIM supports a wide range of generative AI models and provides high-throughput AI inference that scales on demand; for businesses serving these models, higher token throughput translates directly into serving more requests on the same hardware.
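NIM endpoints expose an OpenAI-compatible API, so a deployed Mistral NeMo microservice can be called with the standard openai client. The base URL and model name below are assumptions for illustration; check the NVIDIA API catalog or your local NIM deployment for the exact values.

```python
# Sketch: call a Mistral NeMo NIM endpoint via its OpenAI-compatible API.
# Base URL and model name are assumptions; verify against your deployment.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # or http://localhost:8000/v1 for a local NIM
    api_key=os.environ["NVIDIA_API_KEY"],
)

completion = client.chat.completions.create(
    model="mistralai/mistral-nemo-12b-instruct",  # assumed catalog name
    messages=[{"role": "user", "content": "Draft a two-sentence summary of FP8 inference."}],
    temperature=0.2,
    max_tokens=128,
)
print(completion.choices[0].message.content)
```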
Use cases and customizations
The Mistral NeMo model is particularly effective as a coding copilot, providing AI-based code suggestions, documentation, unit tests, and bug fixes. The model can be fine-tuned with domain-specific data for greater accuracy, and NVIDIA provides tools to tailor the model to specific use cases.
Mistral NeMo’s instruction-tuned variant has shown strong performance across multiple benchmarks and can be customized using NVIDIA NeMo, an end-to-end platform for developing custom generative AI. NeMo supports a variety of fine-tuning techniques, including parameter-efficient fine-tuning (PEFT), supervised fine-tuning (SFT), and reinforcement learning from human feedback (RLHF).
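As a lighter-weight illustration of the PEFT idea itself, the sketch below attaches LoRA adapters with the Hugging Face peft library, a deliberate substitution rather than the NeMo workflow the announcement describes. The target module names follow common Mistral attention-projection naming and are assumptions.

```python
# Sketch: parameter-efficient fine-tuning (LoRA) with Hugging Face peft,
# shown in place of the NVIDIA NeMo workflow described in the article.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-Nemo-Instruct-2407",  # assumed checkpoint name
    torch_dtype=torch.bfloat16,
)

lora_config = LoraConfig(
    r=16,           # low-rank update dimension
    lora_alpha=32,  # scaling factor for the adapters
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed names
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of 12B parameters
```

Only the small adapter matrices are trained while the base weights stay frozen, which is what makes PEFT practical on a single GPU.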
Get started
To learn more about the capabilities of the Mistral NeMo model, visit the AI Solutions page. NVIDIA also offers free cloud credits to test the model at scale and build proofs of concept by connecting to NVIDIA-hosted API endpoints.