The Model Context Protocol (MCP) is an open standard designed to streamline the integration of AI models with various data sources and tools. By providing a standardized interface, MCP enables secure connections that let AI systems access and use contextual information efficiently, which simplifies development and makes it easier to build robust, interconnected AI applications.
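For illustration, here is a minimal MCP server sketch using the MCP project's reference Python SDK (the `mcp` package and its `FastMCP` helper); the tool itself is made up for this example and is not part of Mistral's documentation:

```python
# Minimal MCP server sketch (assumes the `mcp` Python package; the tool is illustrative).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather")  # name under which this server exposes itself to MCP clients

@mcp.tool()
def get_forecast(city: str) -> str:
    """Return a canned forecast for the given city."""
    return f"It is sunny in {city} today."

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so an MCP-compatible client or agent can call it
```

Any MCP-compatible client can then discover and call `get_forecast` through the standardized interface instead of a bespoke integration.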
When creating and using Agents, often with access to specific tools, there are moments where you want to call other Agents mid-action. This ability to give Agents tasks or hand a conversation over to other agents is called Handoffs, and it lets you engineer workflows for the diverse tasks you may want automated.
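As a rough sketch of how a handoff could be wired up with the Python SDK (this assumes the beta Agents API, namely `client.beta.agents.create`/`update` and a `handoffs` field listing agent ids; check the Agents documentation for the exact parameters):

```python
from mistralai import Mistral

client = Mistral(api_key="YOUR_API_KEY")

# Two agents: one gathers information, the other writes the final answer.
research_agent = client.beta.agents.create(
    model="mistral-medium-latest",
    name="research-agent",
    instructions="Search the web and collect sources on the user's question.",
    tools=[{"type": "web_search"}],
)
writer_agent = client.beta.agents.create(
    model="mistral-medium-latest",
    name="writer-agent",
    instructions="Write a concise answer from the research you are handed.",
)

# Assumption: an agent declares which agents it may hand the conversation to.
client.beta.agents.update(agent_id=research_agent.id, handoffs=[writer_agent.id])
```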
Websearch is the capability to browse the web in search of information. This tool not only works around the knowledge cutoff imposed by a model's training data, but also allows it to retrieve recent information or access specific websites.
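A minimal sketch, assuming the websearch connector is enabled by listing it in an agent's `tools` (the agent name, instructions, and prompt below are illustrative):

```python
from mistralai import Mistral

client = Mistral(api_key="YOUR_API_KEY")

# Assumption: connectors are enabled per agent via the tools list.
websearch_agent = client.beta.agents.create(
    model="mistral-medium-latest",
    name="websearch-agent",
    instructions="Use web search to answer questions with up-to-date information.",
    tools=[{"type": "web_search"}],
)

response = client.beta.conversations.start(
    agent_id=websearch_agent.id,
    inputs="What did Mistral AI announce this month?",
)
print(response.outputs)
```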
Image Generation is a built-in connector tool that enables agents to generate images of all kinds.
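Enabling it likely follows the same pattern as the other connectors; the tool type string `image_generation` is an assumption here:

```python
from mistralai import Mistral

client = Mistral(api_key="YOUR_API_KEY")

# Assumption: the image generation connector is declared via the tools list.
image_agent = client.beta.agents.create(
    model="mistral-medium-latest",
    name="image-agent",
    instructions="Generate an image matching the user's description.",
    tools=[{"type": "image_generation"}],
)
```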
Document Library is a built-in connector tool that enables agents to access documents from Mistral Cloud.
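A sketch under the assumption that the connector is pointed at one or more libraries by id (the field name `library_ids` and the id value are assumptions):

```python
from mistralai import Mistral

client = Mistral(api_key="YOUR_API_KEY")

# Assumption: the connector takes the ids of the libraries the agent may read from.
library_agent = client.beta.agents.create(
    model="mistral-medium-latest",
    name="library-agent",
    instructions="Answer questions using the documents in the connected library.",
    tools=[{"type": "document_library", "library_ids": ["<your_library_id>"]}],
)
```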
Code Interpreter adds the capability to safely execute code in an isolated container. This built-in connector tool allows Agents to run code on demand at any point, which is practical for drawing graphs, data analysis, mathematical operations, code validation, and much more.
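A minimal sketch, assuming the connector is enabled with the tool type `code_interpreter` and that the agent decides when to execute code during a conversation:

```python
from mistralai import Mistral

client = Mistral(api_key="YOUR_API_KEY")

# Assumption: code execution is enabled per agent via the tools list.
coder_agent = client.beta.agents.create(
    model="mistral-medium-latest",
    name="coding-agent",
    instructions="Run code whenever a computation or plot is needed.",
    tools=[{"type": "code_interpreter"}],
)

response = client.beta.conversations.start(
    agent_id=coder_agent.id,
    inputs="Compute the first 20 Fibonacci numbers and report the last one.",
)
print(response.outputs)
```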
Embeddings are vector representations of text that capture the semantic meaning of paragraphs through their position in a high-dimensional vector space. The Mistral AI Embeddings API offers cutting-edge, state-of-the-art embeddings for text, which can be used for many NLP tasks. In this guide, we will cover the fundamentals of the embeddings API, including how to measure the distance between text embeddings, and explore its main use cases: clustering and classification.
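For example, a short sketch that embeds two sentences with the Python SDK and compares them with cosine similarity (the similarity computation is written here for illustration):

```python
import numpy as np
from mistralai import Mistral

client = Mistral(api_key="YOUR_API_KEY")

resp = client.embeddings.create(
    model="mistral-embed",
    inputs=["I love programming in Python.", "Coding in Python is great fun."],
)
a, b = (np.array(item.embedding) for item in resp.data)

# Cosine similarity: values closer to 1.0 mean the two texts are semantically closer.
similarity = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(similarity)
```

The same distance can back both use cases: nearest-centroid assignment for clustering and nearest-neighbour voting for classification.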
Tokenization is a fundamental step in LLMs: the process of breaking text down into smaller subword units, known as tokens. We recently open-sourced our tokenizer at Mistral AI. This guide will walk you through the fundamentals of tokenization, details about our open-source tokenizers, and how to use them in Python.
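A minimal sketch, assuming the open-source `mistral-common` package and the import paths it exposes (check the tokenization guide for the tokenizer version that matches your model):

```python
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer

# Assumption: v3 is appropriate for the model you target; other versions exist.
tokenizer = MistralTokenizer.v3()

tokenized = tokenizer.encode_chat_completion(
    ChatCompletionRequest(messages=[UserMessage(content="How many tokens is this sentence?")])
)
print(tokenized.tokens)       # token ids
print(len(tokenized.tokens))  # token count
```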
The specification of our Chat Completion and Embeddings APIs. Create your account on [La Plateforme](https://console.mistral.ai) to get access and read the [docs](https://docs.mistral.ai) to learn how to use them.
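As a quick illustration, a minimal chat completion call with the Python SDK (the model name and prompt are placeholders):

```python
from mistralai import Mistral

client = Mistral(api_key="YOUR_API_KEY")

resp = client.chat.complete(
    model="mistral-large-latest",
    messages=[{"role": "user", "content": "Give me one fun fact about tokenizers."}],
)
print(resp.choices[0].message.content)
```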