mistral_common.tokens.instruct.request
FIMRequest(**data)
Bases: MistralBase
A valid Fill-in-the-Middle (FIM) completion request to be tokenized.
Attributes:

Name | Type | Description |
---|---|---|
`prompt` | `str` | The prompt to be completed. |
`suffix` | `Optional[str]` | The suffix of the prompt. If provided, the model will generate text between the prompt and the suffix. |
Examples:
InstructRequest(**data)
Bases: MistralBase
, Generic[ChatMessageType, ToolType]
A valid Instruct request to be tokenized.
Attributes:

Name | Type | Description |
---|---|---|
`messages` | `List[ChatMessageType]` | The history of the conversation. |
`system_prompt` | `Optional[str]` | The system prompt to be used for the conversation. |
`available_tools` | `Optional[List[ToolType]]` | The tools available to the assistant. |
`truncate_at_max_tokens` | `Optional[int]` | The maximum number of tokens to truncate the conversation at. |
Examples:
>>> from mistral_common.protocol.instruct.messages import UserMessage
>>> request = InstructRequest(
... messages=[UserMessage(content="Hello, how are you?")], system_prompt="You are a helpful assistant."
... )