providers
Attribute KNOWN_PROVIDER_IDS
Type: get_args(KnownProviderId)
Attribute AnthropicModelId
Type: TypeAlias
The Anthropic model IDs registered with Mirascope.
Class AnthropicProvider
The provider for Anthropic LLM models.
Bases: BaseProvider[Anthropic]
Attributes
| Name | Type | Description |
|---|---|---|
| id | 'anthropic' | - |
| default_scope | 'anthropic/' | - |
| client | Anthropic(api_key=api_key, base_url=base_url) | - |
| async_client | AsyncAnthropic(api_key=api_key, base_url=base_url) | - |
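As a quick orientation, the sketch below obtains a configured Anthropic provider via `load_provider` (documented later on this page) rather than the class constructor, whose exact signature is not shown here; the `from mirascope import llm` import path is an assumption.

```python
from mirascope import llm

# If api_key/base_url are omitted, provider-specific environment variables are
# used (see load_provider below). The printed values mirror the attribute table above.
provider = llm.providers.load_provider("anthropic")
print(provider.id)             # "anthropic"
print(provider.default_scope)  # "anthropic/"
```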
Class BaseProvider
Base abstract provider for LLM interactions.
This class defines explicit methods for each type of call, eliminating the need for complex overloads in provider implementations.
Bases: Generic[ProviderClientT], ABC
Attributes
| Name | Type | Description |
|---|---|---|
| id | ProviderId | Provider identifier (e.g., "anthropic", "openai"). |
| default_scope | str | list[str] | Default scope(s) for this provider when explicitly registered. Can be a single scope string (e.g., "anthropic/") or a list of scopes (e.g., ["anthropic/", "openai/"], as for AWS Bedrock). |
| client | ProviderClientT | - |
Function call
Generate an llm.Response by synchronously calling this client's LLM provider.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| model_id | str | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[Tool] | Toolkit | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| Response | Response[FormattableT] | An `llm.Response` object containing the LLM-generated content. |
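A minimal sketch of a synchronous call, assuming `llm.providers.load` (the alias documented below) and an `llm.messages.user` helper for building a user message; the message helper and import path are assumptions, while the `call` signature follows the tables above.

```python
from mirascope import llm

provider = llm.providers.load("anthropic")
response = provider.call(
    model_id="anthropic/claude-4-5-sonnet",  # example model ID used elsewhere on this page
    messages=[llm.messages.user("What is the capital of France?")],  # message helper is an assumption
)
print(response)  # an llm.Response containing the generated content
```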
Function context_call
Generate an llm.ContextResponse by synchronously calling this client's LLM provider.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | str | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[Tool | ContextTool[DepsT]] | ContextToolkit[DepsT] | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| ContextResponse[DepsT, None] | ContextResponse[DepsT, FormattableT] | An `llm.ContextResponse` object containing the LLM-generated content. |
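A hedged sketch of a context-aware call. The `llm.Context(deps=...)` constructor and the `llm.messages.user` helper are assumptions; the `context_call` parameters follow the table above.

```python
from dataclasses import dataclass

from mirascope import llm


@dataclass
class Deps:
    user_name: str


provider = llm.providers.load("anthropic")
ctx = llm.Context(deps=Deps(user_name="Ada"))  # assumed constructor

response = provider.context_call(
    ctx=ctx,
    model_id="anthropic/claude-4-5-sonnet",
    messages=[llm.messages.user("Greet the current user by name.")],
    # ContextTools passed via `tools=` would receive the context's dependencies when invoked.
)
```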
Function call_async
Generate an llm.AsyncResponse by asynchronously calling this client's LLM provider.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| model_id | str | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[AsyncTool] | AsyncToolkit | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| AsyncResponse | AsyncResponse[FormattableT] | An `llm.AsyncResponse` object containing the LLM-generated content. |
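The asynchronous variant follows the same shape; this sketch assumes the same `llm.providers.load` alias and `llm.messages.user` helper as above.

```python
import asyncio

from mirascope import llm


async def main() -> None:
    provider = llm.providers.load("anthropic")
    response = await provider.call_async(
        model_id="anthropic/claude-4-5-sonnet",
        messages=[llm.messages.user("Summarize Hamlet in one sentence.")],
    )
    print(response)  # an llm.AsyncResponse


asyncio.run(main())
```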
Function context_call_async
Generate an llm.AsyncContextResponse by asynchronously calling this client's LLM provider.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | str | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[AsyncTool | AsyncContextTool[DepsT]] | AsyncContextToolkit[DepsT] | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| AsyncContextResponse[DepsT, None] | AsyncContextResponse[DepsT, FormattableT] | An `llm.AsyncContextResponse` object containing the LLM-generated content. |
Function stream
Generate an llm.StreamResponse by synchronously streaming from this client's LLM provider.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| model_id | str | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[Tool] | Toolkit | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| StreamResponse | StreamResponse[FormattableT] | An `llm.StreamResponse` object for iterating over the LLM-generated content. |
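A streaming sketch under the same assumptions; what each iteration of the `StreamResponse` yields is not documented in this section, so the chunk handling below is illustrative only.

```python
from mirascope import llm

provider = llm.providers.load("anthropic")
stream = provider.stream(
    model_id="anthropic/claude-4-5-sonnet",
    messages=[llm.messages.user("Write a haiku about autumn.")],
)
# Assumption: iterating the StreamResponse yields incremental chunks of content.
for chunk in stream:
    print(chunk, end="", flush=True)
```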
Function context_stream
Generate an llm.ContextStreamResponse by synchronously streaming from this client's LLM provider.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | str | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[Tool | ContextTool[DepsT]] | ContextToolkit[DepsT] | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| ContextStreamResponse[DepsT, None] | ContextStreamResponse[DepsT, FormattableT] | An `llm.ContextStreamResponse` object for iterating over the LLM-generated content. |
Function stream_async
Generate an llm.AsyncStreamResponse by asynchronously streaming from this client's LLM provider.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| model_id | str | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[AsyncTool] | AsyncToolkit | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| AsyncStreamResponse | AsyncStreamResponse[FormattableT] | An `llm.AsyncStreamResponse` object for asynchronously iterating over the LLM-generated content. |
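The async streaming variant combines the two patterns above. Whether `stream_async` itself must be awaited before iteration is not documented here; the sketch assumes it returns the `AsyncStreamResponse` directly, and the chunk type is again an assumption.

```python
import asyncio

from mirascope import llm


async def main() -> None:
    provider = llm.providers.load("anthropic")
    stream = provider.stream_async(
        model_id="anthropic/claude-4-5-sonnet",
        messages=[llm.messages.user("List three uses for a paperclip.")],
    )
    async for chunk in stream:  # assumption: AsyncStreamResponse supports `async for`
        print(chunk, end="", flush=True)


asyncio.run(main())
```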
Function context_stream_async
Generate an llm.AsyncContextStreamResponse by asynchronously streaming from this client's LLM provider.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | str | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[AsyncTool | AsyncContextTool[DepsT]] | AsyncContextToolkit[DepsT] | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| AsyncContextStreamResponse[DepsT, None] | AsyncContextStreamResponse[DepsT, FormattableT] | An `llm.AsyncContextStreamResponse` object for asynchronously iterating over the LLM-generated content. |
Function resume
Generate a new llm.Response by extending another response's messages with additional user content.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| model_id | str | Model identifier to use. |
| response | Response | Response[FormattableT] | Previous response to extend. |
| content | UserContent | Additional user content to append. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| Response | Response[FormattableT] | A new `llm.Response` object containing the extended conversation. |
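A sketch of extending a prior response with `resume`, building on the synchronous `call` example above; only the `response` and `content` parameters differ from a fresh call.

```python
from mirascope import llm

provider = llm.providers.load("anthropic")
first = provider.call(
    model_id="anthropic/claude-4-5-sonnet",
    messages=[llm.messages.user("Recommend a science-fiction novel.")],
)
# resume() appends the new user content to the prior response's messages and re-calls the model.
followup = provider.resume(
    model_id="anthropic/claude-4-5-sonnet",
    response=first,
    content="Why did you pick that one?",
)
```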
Function resume_async
Generate a new llm.AsyncResponse by extending another response's messages with additional user content.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| model_id | str | Model identifier to use. |
| response | AsyncResponse | AsyncResponse[FormattableT] | Previous async response to extend. |
| content | UserContent | Additional user content to append. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| AsyncResponse | AsyncResponse[FormattableT] | A new `llm.AsyncResponse` object containing the extended conversation. |
Function context_resume
Generate a new llm.ContextResponse by extending another response's messages with additional user content.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | str | Model identifier to use. |
| response | ContextResponse[DepsT, None] | ContextResponse[DepsT, FormattableT] | Previous context response to extend. |
| content | UserContent | Additional user content to append. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| ContextResponse[DepsT, None] | ContextResponse[DepsT, FormattableT] | A new `llm.ContextResponse` object containing the extended conversation. |
Function context_resume_async
Generate a new llm.AsyncContextResponse by extending another response's messages with additional user content.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | str | Model identifier to use. |
| response | AsyncContextResponse[DepsT, None] | AsyncContextResponse[DepsT, FormattableT] | Previous async context response to extend. |
| content | UserContent | Additional user content to append. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| AsyncContextResponse[DepsT, None] | AsyncContextResponse[DepsT, FormattableT] | A new `llm.AsyncContextResponse` object containing the extended conversation. |
Function resume_stream
Generate a new llm.StreamResponse by extending another response's messages with additional user content.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| model_id | str | Model identifier to use. |
| response | StreamResponse | StreamResponse[FormattableT] | Previous stream response to extend. |
| content | UserContent | Additional user content to append. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| StreamResponse | StreamResponse[FormattableT] | A new `llm.StreamResponse` object for streaming the extended conversation. |
Function resume_stream_async
Generate a new llm.AsyncStreamResponse by extending another response's messages with additional user content.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| model_id | str | Model identifier to use. |
| response | AsyncStreamResponse | AsyncStreamResponse[FormattableT] | Previous async stream response to extend. |
| content | UserContent | Additional user content to append. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| AsyncStreamResponse | AsyncStreamResponse[FormattableT] | A new `llm.AsyncStreamResponse` object for asynchronously streaming the extended conversation. |
Function context_resume_stream
Generate a new llm.ContextStreamResponse by extending another response's messages with additional user content.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | str | Model identifier to use. |
| response | ContextStreamResponse[DepsT, None] | ContextStreamResponse[DepsT, FormattableT] | Previous context stream response to extend. |
| content | UserContent | Additional user content to append. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| ContextStreamResponse[DepsT, None] | ContextStreamResponse[DepsT, FormattableT] | A new `llm.ContextStreamResponse` object for streaming the extended conversation. |
Function context_resume_stream_async
Generate a new llm.AsyncContextStreamResponse by extending another response's messages with additional user content.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | str | Model identifier to use. |
| response | AsyncContextStreamResponse[DepsT, None] | AsyncContextStreamResponse[DepsT, FormattableT] | Previous async context stream response to extend. |
| content | UserContent | Additional user content to append. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| AsyncContextStreamResponse[DepsT, None] | AsyncContextStreamResponse[DepsT, FormattableT] | A new `llm.AsyncContextStreamResponse` object for asynchronously streaming the extended conversation. |
Attribute GoogleModelId
Type: TypeAlias
The Google model IDs registered with Mirascope.
Class GoogleProvider
The provider for Google LLM models.
Bases: BaseProvider[Client]
Attributes
| Name | Type | Description |
|---|---|---|
| id | 'google' | - |
| default_scope | 'google/' | - |
| client | Client(api_key=api_key, http_options=http_options) | - |
Attribute MLXModelId
Type: TypeAlias
The identifier of the MLX model to be loaded by the MLX client.
An MLX model identifier may be a local path to a model, or a Hugging Face repository ID such as:
- "mlx-community/Qwen3-8B-4bit-DWQ-053125"
- "mlx-community/gpt-oss-20b-MXFP4-Q8"
For more details, see:
- https://github.com/ml-explore/mlx-lm/?tab=readme-ov-file#supported-models
- https://huggingface.co/mlx-community
Class MLXProvider
Provider for interacting with MLX language models.
This provider supports generating responses from MLX models through synchronous and asynchronous calls, as well as streaming.
Bases: BaseProvider[None]
Attributes
| Name | Type | Description |
|---|---|---|
| id | 'mlx' | - |
| default_scope | 'mlx-community/' | - |
Attribute ModelId
Type: TypeAlias
Attribute OpenAIModelId
Type: OpenAIKnownModels | str
Valid OpenAI model IDs including API-specific variants.
Class OpenAIProvider
Unified provider for OpenAI that routes to the Completions or Responses API based on the model_id.
Bases: BaseProvider[OpenAI]
Attributes
| Name | Type | Description |
|---|---|---|
| id | 'openai' | - |
| default_scope | 'openai/' | - |
| client | self._completions_provider.client | - |
Class Params
Common parameters shared across LLM providers.
Note: Each provider may handle these parameters differently or not support them at all. Please check provider-specific documentation for parameter support and behavior.
Bases: TypedDict
Attributes
| Name | Type | Description |
|---|---|---|
| temperature | float | Controls randomness in the output (0.0 to 1.0). Lower temperatures are good for prompts that require a less open-ended or creative response, while higher temperatures can lead to more diverse or creative results. |
| max_tokens | int | Maximum number of tokens to generate. |
| top_p | float | Nucleus sampling parameter (0.0 to 1.0). Tokens are selected from the most to least probable until the sum of their probabilities equals this value. Use a lower value for less random responses and a higher value for more random responses. |
| top_k | int | Limits token selection to the k most probable tokens (typically 1 to 100). For each token selection step, the ``top_k`` tokens with the highest probabilities are sampled. Then tokens are further filtered based on ``top_p`` with the final token selected using temperature sampling. Use a lower number for less random responses and a higher number for more random responses. |
| seed | int | Random seed for reproducibility. When ``seed`` is fixed to a specific number, the model makes a best effort to provide the same response for repeated requests. Not supported by all providers, and does not guarantee strict reproducibility. |
| stop_sequences | list[str] | Stop sequences to end generation. The model will stop generating text if one of these strings is encountered in the response. |
| thinking | bool | Configures whether the model should use thinking. Thinking is a process where the model spends additional tokens thinking about the prompt before generating a response. You may configure thinking either by passing a bool to enable or disable it. If `params.thinking` is `True`, then thinking and thought summaries will be enabled (if supported by the model/provider), with a default budget for thinking tokens. If `params.thinking` is `False`, then thinking will be wholly disabled, assuming the model allows this (some models, e.g. `google:gemini-2.5-pro`, do not allow disabling thinking). If `params.thinking` is unset (or `None`), then we will use provider-specific default behavior for the chosen model. |
| encode_thoughts_as_text | bool | Configures whether `Thought` content should be re-encoded as text for model consumption. If `True`, then when an `AssistantMessage` contains `Thoughts` and is being passed back to an LLM, those `Thoughts` will be encoded as `Text`, so that the assistant can read those thoughts. That ensures the assistant has access to (at least the summarized output of) its reasoning process, and contrasts with provider default behaviors which may ignore prior thoughts, particularly if tool calls are not involved. When `True`, we will always re-encode Mirascope messages being passed to the provider, rather than reusing raw provider response content. This may disable provider-specific behavior like cached reasoning tokens. If `False`, then `Thoughts` will not be encoded as text, and whether reasoning context is available to the model depends entirely on the provider's behavior. Defaults to `False` if unset. |
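Since every call method accepts `Unpack[Params]`, these parameters are passed as plain keyword arguments; below is a sketch under the same assumptions as the earlier examples (the `llm.messages.user` helper and import path are assumptions).

```python
from mirascope import llm

provider = llm.providers.load("anthropic")
response = provider.call(
    model_id="anthropic/claude-4-5-sonnet",
    messages=[llm.messages.user("Brainstorm three names for a coffee shop.")],
    # Params keys from the table above; support varies by provider.
    temperature=0.9,
    max_tokens=256,
    stop_sequences=["\n\n"],
)
```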
Attribute Provider
Type: TypeAlias
Type alias for BaseProvider with any client type.
Attribute ProviderId
Type: KnownProviderId | str
Function get_provider_for_model
Get the provider for a model_id based on the registry.
Uses longest prefix matching to find the most specific provider for the model. If no explicit registration is found, checks for auto-registration defaults and automatically registers the provider on first use.
Parameters
| Name | Type | Description |
|---|---|---|
| model_id | str | The full model ID (e.g., "anthropic/claude-4-5-sonnet"). |
Returns
| Type | Description |
|---|---|
| Provider | The provider instance registered for this model. |
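A brief sketch, assuming the function is exposed under `llm.providers` as this page suggests:

```python
from mirascope import llm

# Longest-prefix matching over registered scopes; if nothing is registered for
# the prefix, a default provider is auto-registered on first use.
provider = llm.providers.get_provider_for_model("anthropic/claude-4-5-sonnet")
print(provider.id)  # "anthropic"
```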
Attribute load
Type: load_provider
A convenience alias, available as llm.providers.load.
Function load_provider
Create a cached provider instance for the specified provider id.
Parameters
| Name | Type | Description |
|---|---|---|
| provider_id | ProviderId | The provider name ("openai", "anthropic", or "google"). |
| api_key= None | str | None | API key for authentication. If None, uses provider-specific env var. |
| base_url= None | str | None | Base URL for the API. If None, uses provider-specific env var. |
Returns
| Type | Description |
|---|---|
| Provider | A cached provider instance for the specified provider with the given parameters. |
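A sketch of loading a provider explicitly, e.g. to point it at a self-hosted endpoint; the example URL is hypothetical, and the caching assertion follows the description above.

```python
from mirascope import llm

provider = llm.providers.load_provider("openai", base_url="http://localhost:8000/v1")
same = llm.providers.load("openai", base_url="http://localhost:8000/v1")  # alias documented above
# Assumption: the cache keys on (provider_id, api_key, base_url), so identical
# arguments return the same instance.
assert provider is same
```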
Function register_provider
Register a provider with scope(s) in the global registry.
Scopes use prefix matching on model IDs:
- "anthropic/" matches "anthropic/*"
- "anthropic/claude-4-5" matches "anthropic/claude-4-5*"
- "anthropic/claude-4-5-sonnet" matches exactly "anthropic/claude-4-5-sonnet"
When multiple scopes match a model_id, the longest match wins.
Parameters
| Name | Type | Description |
|---|---|---|
| provider | ProviderId | Provider | Either a provider ID string or a provider instance. |
| scope= None | str | list[str] | None | Scope string or list of scopes for prefix matching on model IDs. If None, uses the provider's default_scope attribute. Can be a single string or a list of strings. |
| api_key= None | str | None | API key for authentication (only used if provider is a string). |
| base_url= None | str | None | Base URL for the API (only used if provider is a string). |
Returns
| Type | Description |
|---|---|
| Provider | - |
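A closing sketch of explicit registration; the custom scope prefix `"claude/"` and the proxy URL are hypothetical.

```python
from mirascope import llm

# Register the Anthropic provider under an extra scope so that model IDs
# beginning with "claude/" also route to it (scopes are prefixes; longest match wins).
llm.providers.register_provider("anthropic", scope=["anthropic/", "claude/"])

# Or register a preconfigured instance; api_key/base_url are ignored when an
# instance (rather than an ID string) is passed, and the provider's
# default_scope ("anthropic/") is used when scope is None.
custom = llm.providers.load_provider("anthropic", base_url="https://proxy.example.com")
llm.providers.register_provider(custom)
```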