Mistral AI has announced the launch of Codestral, a generative AI model designed specifically for code generation tasks. The model aims to empower developers by simplifying and accelerating the coding process. According to Mistral AI, Codestral is an open-weight model that helps developers write and interact with code through instruct and completion API endpoints.
Model fluent in over 80 programming languages
Codestral is trained on a variety of datasets covering more than 80 programming languages, including popular languages such as Python, Java, C, C++, JavaScript, and Bash, as well as niche languages such as Swift and Fortran. This broad language base allows Codestral to support developers in a variety of coding environments and projects.
The model significantly reduces the time and effort required for coding by completing functions, writing tests, and filling in partial code through a fill-in-the-middle mechanism. This automation not only improves developer productivity but also reduces the risk of errors and bugs, making it a valuable tool for software development.
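As a rough illustration of the fill-in-the-middle idea, the sketch below shows the kind of prefix/suffix pair a developer might supply and one plausible middle a code model could generate; the function and its contents are hypothetical and not taken from Mistral's materials.

```python
# Hypothetical fill-in-the-middle (FIM) illustration: the model receives the
# code before and after a gap and is asked to generate only the missing middle.

prefix = (
    "def is_palindrome(text: str) -> bool:\n"
    '    """Return True if text reads the same forwards and backwards."""\n'
)
suffix = "    return cleaned == cleaned[::-1]\n"

# One plausible middle a code model could produce for this gap:
plausible_middle = "    cleaned = ''.join(ch.lower() for ch in text if ch.isalnum())\n"

print(prefix + plausible_middle + suffix)
```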
Setting standards for code generation performance
Codestral, a 22-billion-parameter model, sets a new benchmark in the performance/latency space for code generation. It offers a larger context window of 32k tokens, compared with the 4k, 8k, or 16k windows of competing models. This makes Codestral particularly effective for long-range code generation tasks, as evidenced by its strong performance on the RepoBench evaluation.
Codestral’s capabilities have been rigorously evaluated across multiple benchmarks:
- Python: HumanEval pass@1, MBPP sanitised pass@1, CruxEval, and RepoBench EM.
- SQL: the Spider benchmark.
- Multiple languages: HumanEval pass@1 in C++, Bash, Java, PHP, TypeScript, and C#.
- Fill-in-the-middle (FIM): HumanEval pass@1 in Python, JavaScript, and Java, compared against DeepSeek Coder 33B.
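For context, the pass@1 figures in these evaluations typically use the unbiased pass@k estimator introduced with HumanEval (Chen et al., 2021), evaluated at k=1. A minimal sketch of that estimator, independent of Mistral's own evaluation code, is below.

```python
# Unbiased pass@k estimator (Chen et al., 2021): given n generated samples for
# a task, of which c pass the unit tests, estimate the probability that at
# least one of k randomly drawn samples passes.
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Return the unbiased pass@k estimate for a single task."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: 20 samples, 9 of them correct -> pass@1 is simply c / n = 0.45.
print(round(pass_at_k(20, 9, 1), 2))
```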
Getting started with Codestral
Codestral is available for research and testing under the new Mistral AI Non-Production License, and the model weights can be downloaded from HuggingFace. Additionally, a dedicated endpoint (codestral.mistral.ai) offers free use during an 8-week beta period, managed via a waitlist to ensure a high quality of service.
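A minimal sketch of a fill-in-the-middle request against the dedicated beta endpoint is shown below. The URL path, payload fields, and the model name `codestral-latest` follow Mistral AI's public documentation at the time of writing and should be treated as assumptions that may change.

```python
# Sketch: FIM completion request to the dedicated Codestral beta endpoint.
import os
import requests

API_KEY = os.environ["CODESTRAL_API_KEY"]  # key issued for the codestral.mistral.ai beta

payload = {
    "model": "codestral-latest",
    "prompt": "def remove_duplicates(items):\n    ",  # code before the gap
    "suffix": "\n    return result",                   # code after the gap
    "max_tokens": 64,
    "temperature": 0,
}

response = requests.post(
    "https://codestral.mistral.ai/v1/fim/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json())  # the generated middle is returned in the response body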
For broader applications, Codestral is available through the general API endpoint (api.mistral.ai), where queries are billed per token. Developers can start building applications with Codestral by creating an account on La Plateforme.
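For the instruct-style use case, a request against the general, per-token billed API might look like the sketch below. The endpoint path and the OpenAI-style response schema follow Mistral AI's public documentation and are assumptions here, not an official example.

```python
# Sketch: instruct (chat) request to the general Mistral API.
import os
import requests

API_KEY = os.environ["MISTRAL_API_KEY"]  # key created on La Plateforme

payload = {
    "model": "codestral-latest",
    "messages": [
        {
            "role": "user",
            "content": "Write a Python function that checks whether a string is a palindrome.",
        }
    ],
    "temperature": 0.2,
}

response = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```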
The model also integrates with popular tools such as Continue.dev and Tabnine in VS Code and JetBrains environments to improve developer productivity. More information about these integrations is available in the Mistral AI documentation.
Community and expert feedback
Industry experts praised Codestral’s performance:
- “A public autocompletion model with this combination of speed and quality has never existed before and will be a phase change for all developers.” – Nate Sesti, CTO and co-founder of Continue.dev
- “We are very excited about the capabilities Mistral is releasing, especially since many of them concern code and development assistance, an area we are deeply invested in.” – Vladislav Tankov, Head of AI at JetBrains
- “We used Codestral to run tests on the Kotlin-HumanEval benchmark and were impressed with the results. For example, at a temperature of T=0.2, Codestral achieved a pass rate of 73.75, outperforming GPT-4-Turbo’s 72.05 and GPT-3.5-Turbo’s 54.66.” – Mikhail Evtikhiev, Researcher at JetBrains
Such endorsements highlight Codestral’s potential to transform code generation, making it a powerful tool for developers around the world.