Mistral launches new code embedding model that outperforms OpenAI and Cohere in real-world retrieval tasks


With demand for enterprise retrieval augmented generation (RAG) on the rise, the opportunity is ripe for model providers to offer their take on embedding models. 

French AI company Mistral threw its hat into the ring with Codestral Embed, its first embedding model, which it said outperforms existing embedding models on benchmarks like SWE-Bench.

The model specializes in code and, according to Mistral, “performs especially well for retrieval use cases on real-world code data.” It is available to developers for $0.15 per million tokens. 


The company said Codestral Embed “significantly outperforms leading code embedders” like Voyage Code 3, Cohere Embed v4.0 and OpenAI’s Text Embedding 3 Large. 

Codestral Embed, part of Mistral’s Codestral family of coding models, transforms code and data into numerical vector representations that can power RAG pipelines. 
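For illustration, here is a minimal sketch of what generating embeddings could look like through Mistral’s official `mistralai` Python SDK. The `codestral-embed` model name and exact call shape are assumptions based on Mistral’s existing embeddings endpoint, so check the current documentation before relying on them:

```python
# Minimal sketch (not Mistral's official example): embed code snippets
# via the `mistralai` Python SDK. The "codestral-embed" model name is
# an assumption based on Mistral's announcement.
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

snippets = [
    "def binary_search(arr, target): ...",
    "SELECT id, total FROM orders WHERE total > 100;",
]

# One embedding vector is returned per input string.
response = client.embeddings.create(model="codestral-embed", inputs=snippets)
vectors = [item.embedding for item in response.data]
print(len(vectors), len(vectors[0]))  # number of snippets, embedding dimension
```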

“Codestral Embed can output embeddings with different dimensions and precisions, and the figure below illustrates the trade-offs between retrieval quality and storage costs,” Mistral said in a blog post. “Codestral Embed with dimension 256 and int8 precision still performs better than any model from our competitors. The dimensions of our embeddings are ordered by relevance. For any integer target dimension n, you can choose to keep the first n dimensions for a smooth trade-off between quality and cost.”
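Because the dimensions are ordered by relevance, trimming a vector to its first n components (and optionally quantizing it) is a simple post-processing step. A minimal numpy sketch of the idea follows, with the caveat that the int8 scaling shown is a generic symmetric scheme, not necessarily the one Mistral uses:

```python
import numpy as np

def truncate_and_quantize(vec: np.ndarray, n: int = 256) -> np.ndarray:
    """Keep the first n (most relevant) dimensions, then quantize to int8."""
    trimmed = vec[:n]
    # Renormalize so cosine similarity stays meaningful after truncation.
    trimmed = trimmed / np.linalg.norm(trimmed)
    # Generic symmetric int8 quantization; Mistral's exact scheme may differ.
    scale = max(np.abs(trimmed).max(), 1e-12) / 127.0
    return np.round(trimmed / scale).astype(np.int8)
```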

Mistral tested the model on several benchmarks, including SWE-Bench and Text2Code from GitHub. In both cases, the company said Codestral Embed outperformed leading embedding models. 

(Benchmark charts: SWE-Bench and Text2Code results)

Use cases

Mistral said Codestral Embed is optimized for “high-performance code retrieval” and semantic understanding. The company said the model works best for at least four kinds of use cases: RAG, semantic code search, similarity search and code analytics. 

Embedding models generally target RAG use cases, as they can facilitate faster information retrieval for tasks or agentic processes. Therefore, it’s not surprising that Codestral Embed would focus on that. 
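At retrieval time, the RAG step boils down to a nearest-neighbor search over embedded chunks. A bare-bones numpy sketch of that lookup follows; a production system would typically delegate this to a vector database:

```python
import numpy as np

def top_k(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k document vectors most similar to the query (cosine)."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                      # cosine similarity per document
    return np.argsort(scores)[::-1][:k]  # highest-scoring indices first
```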

The model can also perform semantic code search, allowing developers to find code snippets using natural language. This use case works well for developer tool platforms, documentation systems and coding copilots. Codestral Embed can also help developers identify duplicated code segments or similar code strings, which can be helpful for enterprises with policies regarding reused code. 
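Duplicate detection, for example, can be framed as thresholded pairwise similarity over snippet embeddings. A rough sketch, where the 0.95 cutoff is an arbitrary illustration rather than a value Mistral recommends:

```python
import numpy as np
from itertools import combinations

def find_near_duplicates(vecs: np.ndarray, threshold: float = 0.95):
    """Yield (i, j, similarity) for snippet pairs above the cosine threshold."""
    normed = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    for i, j in combinations(range(len(normed)), 2):
        sim = float(normed[i] @ normed[j])
        if sim >= threshold:  # 0.95 is an illustrative cutoff, tune per codebase
            yield i, j, sim
```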

The model supports semantic clustering, which involves grouping code based on its functionality or structure. This use case can help with analyzing repositories, categorizing code and finding patterns in code architecture. 
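Concretely, clustering embeddings is a standard unsupervised step. A sketch using scikit-learn’s KMeans, with random vectors standing in for real Codestral Embed outputs and an arbitrary cluster count:

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative only: group code-snippet embeddings into functional clusters.
# Random vectors stand in for real Codestral Embed outputs; the cluster
# count is arbitrary and would be tuned per repository in practice.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(200, 256))  # 200 snippets, 256 dimensions

kmeans = KMeans(n_clusters=8, n_init=10, random_state=0)
labels = kmeans.fit_predict(embeddings)
print(np.bincount(labels))  # snippets per cluster
```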

Competition is increasing in the embedding space

Mistral has been on a roll releasing new models and agentic tools. It released Mistral Medium 3, a mid-sized version of its flagship large language model (LLM), which currently powers its enterprise-focused platform Le Chat Enterprise. 

It also announced the Agents API, which allows developers to access tools for creating agents that perform real-world tasks and orchestrate multiple agents. 

Mistral’s moves to offer more model options to developers have not gone unnoticed in developer spaces. Some on X noted that Codestral Embed’s release is “coming on the heels of increased competition.”

However, Mistral must prove that Codestral Embed performs well beyond benchmark testing. It competes not only with more closed models, such as those from OpenAI and Cohere, but also with open-source options from Qodo, including Qodo-Embed-1-1.5B.

VentureBeat reached out to Mistral about Codestral Embed’s licensing options. 




