What is the Mistral Platform?
The Mistral platform is an early-access generative AI platform developed by Mistral AI, a Paris-based European provider of artificial intelligence models and solutions. The platform serves open and optimized models for generation and embeddings, with a focus on making AI models compute-efficient, helpful, and trustworthy.
One of the key offerings of the Mistral platform is the Mixtral model. Mixtral is a powerful and fast model adaptable to many use cases. It matches or outperforms Llama 2 70B on all benchmarks, speaks many languages, and has natural coding abilities. It can handle a context length of 32k tokens. Users can access it through the Mistral API, or deploy it themselves, as it is licensed under Apache 2.0.
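As a rough illustration of the API route, the sketch below assembles a request against the Mistral chat-completions endpoint, which follows the familiar OpenAI-style JSON schema. The model identifier and `max_tokens` value are illustrative assumptions, and the request is only sent if an API key is present in the environment.

```python
# Sketch: querying a Mixtral model through the Mistral API.
# The model name and payload defaults below are illustrative, not canonical.
import json
import os
import urllib.request

API_URL = "https://api.mistral.ai/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "open-mixtral-8x7b") -> dict:
    """Assemble the JSON body for a single-turn chat completion."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }

def send(payload: dict, api_key: str) -> dict:
    """POST the payload to the API; requires a real key."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_chat_request("Summarize mixture-of-experts routing in one sentence.")
print(payload["model"])

# Only perform the network call when a key is actually configured.
if os.environ.get("MISTRAL_API_KEY"):
    print(send(payload, os.environ["MISTRAL_API_KEY"]))
```

Because the model is Apache 2.0 licensed, the same prompt could instead be served from a self-hosted deployment by pointing `API_URL` at that server.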
Mistral AI has also partnered with Google Cloud to distribute and commercialize its large language models (LLMs), supporting the development of larger and more sophisticated open-source AI models across multiple sectors. The company's 7B open LLM has been integrated into Google's Vertex AI Model Garden and AWS' SageMaker JumpStart, enabling developers and businesses to deploy their own AI applications and services using Mistral AI's models.
Mistral AI's models are also available on platforms such as Together, Anyscale, Replicate, and Perplexity. The company plans to monetize its foundation models by allowing other companies to pay to use them via APIs.
However, it's worth noting that the AI sector is highly competitive, and cost-effectiveness is critical. For instance, Together AI has been able to offer a cheaper and faster version of Mistral's open-sourced Mixture of Experts model due to their current infrastructure and the Together Inference Engine.
What is the difference between Mistral AI and Together AI?
Mistral AI and Together AI are key contributors to the large language model (LLM) space, with distinct focuses. Choosing between them hinges on the user's specific needs and use cases.
Mistral AI, headquartered in Europe, offers the open-source Mistral-7B and Mixtral models, notable for their rapid inference and extended sequence handling capabilities. These models employ grouped-query and sliding-window attention mechanisms to achieve low latency and high throughput, making them suitable for large-scale applications. In collaboration with Google Cloud, Mistral AI has expanded the distribution and commercialization of its LLMs. Additionally, its models are accessible on platforms such as Together, Anyscale, Replicate, and Perplexity.
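To make the sliding-window idea concrete, the minimal sketch below builds the attention mask such a mechanism implies: each position attends causally, but only to the last `window` positions rather than the full prefix, which is what bounds per-token attention cost on long sequences. This is a simplified illustration of the general technique, not Mistral's implementation.

```python
# Sketch: the boolean mask implied by causal sliding-window attention.
# Position i may attend to position j only when i - window < j <= i.
import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    """True where attention is allowed under a causal sliding window."""
    i = np.arange(seq_len)[:, None]  # query positions (rows)
    j = np.arange(seq_len)[None, :]  # key positions (columns)
    return (j <= i) & (j > i - window)

mask = sliding_window_mask(6, 3)
print(mask.astype(int))  # each row has at most 3 ones
```

With the full causal mask, row `i` has `i + 1` allowed positions; under the window it saturates at `window`, so attention work per token stays constant as the sequence grows.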
On the other hand, Together AI offers a range of generative AI models and tools that are both fast and cost-efficient. One of their key offerings is serving the Mixtral-8x7B LLM, a pretrained generative sparse Mixture of Experts. Together AI also offers StripedHyena-Nous, an LLM with a hybrid architecture composed of multi-head, grouped-query attention and gated convolutions arranged in Hyena blocks. They have also announced the Together Embeddings endpoint, which they claim provides higher quality than OpenAI or Cohere on the MTEB benchmark.
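The "sparse Mixture of Experts" design mentioned above can be sketched in a few lines: a router scores every expert per token, but only the top two experts are activated, with their gate weights renormalized over just those two scores. This is a generic top-2 routing illustration under assumed shapes (3 tokens, 8 experts), not Mixtral's actual router code.

```python
# Sketch: top-2 expert routing as used in sparse Mixture-of-Experts layers.
import numpy as np

def top2_route(logits: np.ndarray):
    """Pick the two highest-scoring experts per token and softmax
    the gate weights over just those two scores."""
    top2 = np.argsort(logits, axis=-1)[:, -2:]          # indices of best 2 experts
    picked = np.take_along_axis(logits, top2, axis=-1)  # their raw router scores
    e = np.exp(picked - picked.max(axis=-1, keepdims=True))
    weights = e / e.sum(axis=-1, keepdims=True)         # renormalized gates
    return top2, weights

# 3 tokens routed over 8 experts with random router scores
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 8))
experts, weights = top2_route(logits)
print(experts.shape)                # (3, 2): two experts per token
print(weights.sum(axis=-1))         # gate weights sum to 1 per token
```

Only the selected experts run their feed-forward computation, which is why a model like Mixtral-8x7B has far lower inference cost than its total parameter count suggests.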
A key difference between the two companies is their approach to cost and performance. Together AI has been able to offer a cheaper and faster version of Mistral's open-sourced Mixture of Experts model due to their existing infrastructure and the Together Inference Engine. This raises questions about how Mistral AI will maintain its commitment to commercially permissive open-source models as it grapples with the financial burden of compute costs.
Who is the team behind the Mistral Platform?
Mistral champions open science and community-driven development. The company is recognized for its generative AI platform that offers efficient and powerful AI models for generation and embeddings. Notably, its Mixtral model and other deployment tools are released under the Apache 2.0 license, reflecting its commitment to free software and open-source principles.
The company's leadership team includes co-founders with strong academic and professional backgrounds:
- Arthur Mensch: Co-founder and CEO. Recognized for his role in advancing LLMs, he aims to create a European champion in generative AI with a global vocation, focusing on an open, responsible, and decentralized approach to technology.
- Guillaume Lample: Co-founder and Scientific Director. A Polytechnique alumnus, he has contributed to the development of language-processing models and combines French academic excellence in mathematics and computer science.
- Timothée Lacroix: Co-founder and Chief Technology Officer. He is part of the executive team and brings his engineering expertise to the company's technological development.
Achievements and Partnerships
Mistral AI has made significant strides in the AI industry, including raising substantial funding and partnering with major companies:
- The company raised a Series A round of €385 million to accelerate its development and delivery of AI solutions.
- Mistral AI has partnered with Google Cloud to distribute and commercialize its large language models, integrating its 7B open LLM into Google's Vertex AI Model Garden.
Mistral AI's approach is characterized by a commitment to open-source principles, aiming to make generative AI more accessible and trustworthy. The company's products are designed to be transparent and fully customizable, without requiring users to hand over their data.
With its focus on open models and a small, creative team with high scientific standards, Mistral AI is crafting a new path in the AI industry, challenging established players and contributing to the advancement of LLMs.