mistral_common.protocol.instruct.request
ChatCompletionRequest(**data)
Bases: BaseCompletionRequest, Generic[ChatMessageType]
Request for a chat completion.
Attributes:
| Name | Type | Description |
|---|---|---|
| `model` | `str \| None` | The model to use for the chat completion. |
| `messages` | `list[ChatMessageType]` | The messages to use for the chat completion. |
| `response_format` | `ResponseFormat` | The format of the response. |
| `tools` | `list[Tool] \| None` | The tools to use for the chat completion. |
| `tool_choice` | `ToolChoice` | The tool choice to use for the chat completion. |
| `truncate_for_context_length` | `bool` | Whether to truncate the messages for the context length. |
| `continue_final_message` | `bool` | Whether to continue the final message. |
| `reasoning_effort` | `ReasoningEffort \| None` | Controls how much reasoning effort the model should apply. |
Examples:
>>> from mistral_common.protocol.instruct.messages import UserMessage, AssistantMessage
>>> from mistral_common.protocol.instruct.tool_calls import Tool, ToolTypes, Function
>>> from mistral_common.protocol.instruct.request import ResponseFormat, ResponseFormats
>>> request = ChatCompletionRequest(
...     messages=[
...         UserMessage(content="Hello!"),
...         AssistantMessage(content="Hi! How can I help you?"),
...     ],
...     response_format=ResponseFormat(type=ResponseFormats.text),
...     tools=[Tool(type=ToolTypes.function, function=Function(name="get_weather", parameters={}))],
...     tool_choice="auto",
... truncate_for_context_length=True,
... )
Source code in .venv/lib/python3.14/site-packages/pydantic/main.py
from_openai(messages, tools=None, continue_final_message=False, **kwargs)
classmethod
Create a chat completion request from the OpenAI format.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `messages` | `list[dict[str, str \| list[dict[str, str \| dict[str, Any]]]]]` | The messages in the OpenAI format. | *required* |
| `tools` | `list[dict[str, Any]] \| None` | The tools in the OpenAI format. | `None` |
| `continue_final_message` | `bool` | Whether to continue the final message. | `False` |
| `**kwargs` | `Any` | Additional keyword arguments to pass to the constructor. These should be the same as the fields of the request class or the OpenAI API equivalent. | `{}` |
Returns:
| Type | Description |
|---|---|
| `ChatCompletionRequest` | The chat completion request. |
Source code in src/mistral_common/protocol/instruct/request.py
to_openai(**kwargs)
Convert the request messages and tools into the OpenAI format.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `kwargs` | `Any` | Additional parameters to be added to the request. | `{}` |
Returns:
| Type | Description |
|---|---|
| `dict[str, Any]` | The request in the OpenAI format. |
Examples:
>>> from mistral_common.protocol.instruct.messages import UserMessage
>>> from mistral_common.protocol.instruct.tool_calls import Tool, Function
>>> request = ChatCompletionRequest(messages=[UserMessage(content="Hello, how are you?")], temperature=0.15)
>>> request.to_openai(stream=True)
{'temperature': 0.15, 'top_p': 1.0, 'response_format': {'type': 'text'}, 'continue_final_message': False, 'messages': [{'role': 'user', 'content': 'Hello, how are you?'}], 'tool_choice': 'auto', 'stream': True}
>>> request = ChatCompletionRequest(messages=[UserMessage(content="Hello, how are you?")], tools=[
... Tool(function=Function(
... name="get_current_weather",
... description="Get the current weather in a given location",
... parameters={
... "type": "object",
... "properties": {
... "location": {
... "type": "string",
... "description": "The city and state, e.g. San Francisco, CA",
... },
... "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
... },
... "required": ["location"],
... },
... ),
... )])
>>> request.to_openai()
{'temperature': 0.7, 'top_p': 1.0, 'response_format': {'type': 'text'}, 'continue_final_message': False, 'messages': [{'role': 'user', 'content': 'Hello, how are you?'}], 'tools': [{'type': 'function', 'function': {'name': 'get_current_weather', 'description': 'Get the current weather in a given location', 'parameters': {'type': 'object', 'properties': {'location': {'type': 'string', 'description': 'The city and state, e.g. San Francisco, CA'}, 'unit': {'type': 'string', 'enum': ['celsius', 'fahrenheit']}}, 'required': ['location']}, 'strict': False}}], 'tool_choice': 'auto'}
Source code in src/mistral_common/protocol/instruct/request.py
InstructRequest(**data)
Bases: MistralBase, Generic[ChatMessageType, ToolType]
A valid Instruct request to be tokenized.
Attributes:
| Name | Type | Description |
|---|---|---|
| `messages` | `list[ChatMessageType]` | The history of the conversation. |
| `system_prompt` | `str \| None` | The system prompt to be used for the conversation. |
| `available_tools` | `list[ToolType] \| None` | The tools available to the assistant. |
| `truncate_at_max_tokens` | `int \| None` | The maximum number of tokens to truncate the conversation at. |
| `continue_final_message` | `bool` | Whether to continue the final message. |
| `settings` | `ModelSettings` | Model configuration settings for the request. |
Examples:
>>> from mistral_common.protocol.instruct.messages import UserMessage, SystemMessage
>>> request = InstructRequest(
... messages=[UserMessage(content="Hello, how are you?")], system_prompt="You are a helpful assistant."
... )
Source code in .venv/lib/python3.14/site-packages/pydantic/main.py
from_openai(messages, tools=None, continue_final_message=False, **kwargs)
classmethod
Create an instruct request from the OpenAI format.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `messages` | `list[dict[str, str \| list[dict[str, str \| dict[str, Any]]]]]` | The messages in the OpenAI format. | *required* |
| `tools` | `list[dict[str, Any]] \| None` | The tools in the OpenAI format. | `None` |
| `continue_final_message` | `bool` | Whether to continue the final message. | `False` |
| `**kwargs` | `Any` | Additional keyword arguments to pass to the constructor. These should be the same as the fields of the request class or the OpenAI API equivalent. | `{}` |
Returns:
| Type | Description |
|---|---|
| `InstructRequest` | The instruct request. |
Source code in src/mistral_common/protocol/instruct/request.py
to_openai(**kwargs)
Convert the request messages and tools into the OpenAI format.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `kwargs` | `Any` | Additional parameters to be added to the request. | `{}` |
Returns:
| Type | Description |
|---|---|
| `dict[str, list[dict[str, Any]]]` | The request in the OpenAI format. |
Examples:
>>> from mistral_common.protocol.instruct.messages import UserMessage
>>> from mistral_common.protocol.instruct.tool_calls import Tool, Function
>>> request = InstructRequest(messages=[UserMessage(content="Hello, how are you?")])
>>> request.to_openai(temperature=0.15, stream=True)
{'continue_final_message': False, 'messages': [{'role': 'user', 'content': 'Hello, how are you?'}], 'temperature': 0.15, 'stream': True}
>>> request = InstructRequest(
... messages=[UserMessage(content="Hello, how are you?")],
... available_tools=[
... Tool(function=Function(
... name="get_current_weather",
... description="Get the current weather in a given location",
... parameters={
... "type": "object",
... "properties": {
... "location": {
... "type": "string",
... "description": "The city and state, e.g. San Francisco, CA",
... },
... "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
... },
... "required": ["location"],
... },
... ),
... )])
>>> request.to_openai()
{'continue_final_message': False, 'messages': [{'role': 'user', 'content': 'Hello, how are you?'}], 'tools': [{'type': 'function', 'function': {'name': 'get_current_weather', 'description': 'Get the current weather in a given location', 'parameters': {'type': 'object', 'properties': {'location': {'type': 'string', 'description': 'The city and state, e.g. San Francisco, CA'}, 'unit': {'type': 'string', 'enum': ['celsius', 'fahrenheit']}}, 'required': ['location']}, 'strict': False}}]}
Source code in src/mistral_common/protocol/instruct/request.py
ModelSettings(**data)
Bases: MistralBase
Model configuration settings for instruct requests.
This class encapsulates various model configuration options that can be passed to the model during inference. Currently supports reasoning effort configuration, but can be extended with additional settings in the future.
Attributes:
| Name | Type | Description |
|---|---|---|
| `reasoning_effort` | `ReasoningEffort \| None` | Controls how much reasoning effort the model should apply when generating responses. Supported for tokenizer >= v15 and not supported for earlier versions. |
Source code in .venv/lib/python3.14/site-packages/pydantic/main.py
from_openai(**kwargs)
classmethod
Create a ModelSettings instance from keyword arguments in the OpenAI API format.
none()
staticmethod
to_openai()
ReasoningEffort
ResponseFormat(**data)
Bases: MistralBase
The format of the response.
Attributes:
| Name | Type | Description |
|---|---|---|
| `type` | `ResponseFormats` | The type of the response. |
Examples: