mistral_common.experimental.app.routers
detokenize_to_assistant_message(settings, tokens=Body(default_factory=list))
async
Detokenize a list of tokens to an assistant message.
Parse tool calls from the tokens and extract content before the first tool call.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
tokens | list[int] | The tokens to detokenize. | Body(default_factory=list) |
Returns:
Type | Description |
---|---|
AssistantMessage | The detokenized assistant message. |
Source code in src/mistral_common/experimental/app/routers.py
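A minimal usage sketch for calling this route over HTTP, assuming the experimental app is already running locally on port 8000 and that the route is mounted at `/v1/detokenize`; the base URL, path, and token ids below are assumptions, so confirm them against the interactive `/docs` page served by the app.

```python
import requests

# Hypothetical token ids, e.g. produced by an earlier tokenize call.
tokens = [1, 3, 1032, 4, 29473, 2]

# Assumed base URL and route; the endpoint takes the token list as the JSON body.
response = requests.post(
    "http://localhost:8000/v1/detokenize",
    json=tokens,
)
response.raise_for_status()

# The response body is the detokenized assistant message
# (role, content, and any tool calls parsed from the tokens).
print(response.json())
```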
detokenize_to_string(settings, tokens=Body(default_factory=list), special_token_policy=Body(default=SpecialTokenPolicy.IGNORE))
async
Detokenize a list of tokens to a string.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
tokens | list[int] | The tokens to detokenize. | Body(default_factory=list) |
special_token_policy | SpecialTokenPolicy | The policy to use for special tokens. | Body(default=SpecialTokenPolicy.IGNORE) |
Returns:
Type | Description |
---|---|
str | The detokenized string. |
Source code in src/mistral_common/experimental/app/routers.py
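A minimal sketch for this route, assuming a locally running server and a `/v1/detokenize/string` path; the path, the exact JSON body shape (both parameters are declared as body fields, so they are shown embedded under keys), and the serialized form of `SpecialTokenPolicy` are assumptions to verify against the served OpenAPI docs.

```python
import requests

payload = {
    "tokens": [1, 3, 1032, 4, 29473, 2],  # hypothetical token ids
    # Assumed serialization of SpecialTokenPolicy.KEEP; the enum may serialize
    # as an int or a string depending on the model definition.
    "special_token_policy": 1,
}

response = requests.post(
    "http://localhost:8000/v1/detokenize/string",  # assumed base URL and route
    json=payload,
)
response.raise_for_status()

# The response body is the plain detokenized string.
print(response.json())
```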
generate(request, settings)
async
Generate a chat completion.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
request | Union[ChatCompletionRequest, OpenAIChatCompletionRequest] | The chat completion request. | required |
settings | Annotated[Settings, Depends(get_settings)] | The settings for the Mistral-common API. | required |
Returns:
Type | Description |
---|---|
AssistantMessage | The generated chat completion. |
Source code in src/mistral_common/experimental/app/routers.py
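A minimal sketch for calling the generation route, assuming a running instance and an OpenAI-style `/v1/chat/completions` path; the path and request fields shown are assumptions, so check the served OpenAPI docs for the actual route and schema.

```python
import requests

# OpenAI-style chat completion request body (assumed minimal fields).
request_body = {
    "messages": [
        {"role": "user", "content": "Say hello in one word."}
    ],
    "temperature": 0.0,
}

response = requests.post(
    "http://localhost:8000/v1/chat/completions",  # assumed base URL and route
    json=request_body,
)
response.raise_for_status()

# The response body is the generated assistant message.
print(response.json())
```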
redirect_to_docs()
async
tokenize_request(request, settings)
async
Tokenize a chat completion request.