models
Class Model
The unified LLM interface that delegates to provider-specific clients.
This class provides a consistent interface for interacting with language models from various providers. It handles common operations such as generating responses, streaming, and their async variants by delegating to the appropriate client methods.
Usage Note: In most cases, you should use llm.use_model() instead of instantiating
Model directly. This preserves the ability to override the model at runtime using
the llm.model() context manager. Only instantiate Model directly if you want to
hardcode a specific model and prevent it from being overridden by context.
Example (recommended - allows override):

```python
from mirascope import llm

def recommend_book(genre: str) -> llm.Response:
    # Uses context model if available, otherwise creates default
    model = llm.use_model("openai/gpt-5-mini")
    message = llm.messages.user(f"Please recommend a book in {genre}.")
    return model.call(messages=[message])

# Uses default model
response = recommend_book("fantasy")

# Override with different model
with llm.model(provider="anthropic", model_id="anthropic/claude-sonnet-4-5"):
    response = recommend_book("fantasy")  # Uses Claude
```

Example (direct instantiation - prevents override):

```python
from mirascope import llm

def recommend_book(genre: str) -> llm.Response:
    # Hardcoded model, cannot be overridden by context
    model = llm.Model("openai/gpt-5-mini")
    message = llm.messages.user(f"Please recommend a book in {genre}.")
    return model.call(messages=[message])
```

Attributes
| Name | Type | Description |
|---|---|---|
| model_id | ModelId | The model being used (e.g. `"openai/gpt-4o-mini"`). |
| params | Params | The default parameters for the model (temperature, max_tokens, etc.). |
| provider | Provider | The provider being used (e.g. an `OpenAIProvider`). This property dynamically looks up the provider from the registry based on the current model_id. This allows provider overrides via `llm.register_provider()` to take effect even after the model instance is created. |
| provider_id | ProviderId | The string id of the provider being used (e.g. `"openai"`). This property returns the `id` field of the dynamically resolved provider. |
Function call
Generate an llm.Response by synchronously calling this model's LLM provider.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools=None | Sequence[Tool] \| Toolkit \| None | Optional tools that the model may invoke. |
| format=None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
Returns
| Type | Description |
|---|---|
| Response \| Response[FormattableT] | An `llm.Response` object containing the LLM-generated content. |
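For instance, a minimal sketch of a call with a response format. The `Book` model is illustrative, and treating a plain pydantic `BaseModel` subclass as a valid `FormattableT` is an assumption here, not something this reference confirms:

```python
from pydantic import BaseModel

from mirascope import llm

class Book(BaseModel):
    title: str
    author: str

model = llm.Model("openai/gpt-5-mini")
message = llm.messages.user("Please recommend a fantasy book.")
# Passing format= narrows the return type toward Response[Book].
response = model.call(messages=[message], format=Book)
```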
Function call_async
Generate an llm.AsyncResponse by asynchronously calling this model's LLM provider.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools=None | Sequence[AsyncTool] \| AsyncToolkit \| None | Optional tools that the model may invoke. |
| format=None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
Returns
| Type | Description |
|---|---|
| AsyncResponse \| AsyncResponse[FormattableT] | An `llm.AsyncResponse` object containing the LLM-generated content. |
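A minimal async sketch; it assumes `call_async` is awaited directly and that the resulting response can be printed:

```python
import asyncio

from mirascope import llm

async def main() -> None:
    model = llm.Model("openai/gpt-5-mini")
    message = llm.messages.user("Please recommend a fantasy book.")
    # Awaiting call_async yields an llm.AsyncResponse.
    response = await model.call_async(messages=[message])
    print(response)

asyncio.run(main())
```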
Function stream
Generate an llm.StreamResponse by synchronously streaming from this model's LLM provider.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools=None | Sequence[Tool] \| Toolkit \| None | Optional tools that the model may invoke. |
| format=None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
Returns
| Type | Description |
|---|---|
| StreamResponse \| StreamResponse[FormattableT] | An `llm.StreamResponse` object for iterating over the LLM-generated content. |
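A minimal sketch, assuming the returned `StreamResponse` can be iterated directly; the exact chunk type is not specified here, so printing each item is illustrative:

```python
from mirascope import llm

model = llm.Model("openai/gpt-5-mini")
message = llm.messages.user("Please recommend a fantasy book.")
stream = model.stream(messages=[message])
# Consume the stream as content arrives.
for chunk in stream:
    print(chunk, end="", flush=True)
```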
Function stream_async
Generate an llm.AsyncStreamResponse by asynchronously streaming from this model's LLM provider.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools=None | Sequence[AsyncTool] \| AsyncToolkit \| None | Optional tools that the model may invoke. |
| format=None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
Returns
| Type | Description |
|---|---|
| AsyncStreamResponse \| AsyncStreamResponse[FormattableT] | An `llm.AsyncStreamResponse` object for asynchronously iterating over the LLM-generated content. |
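A minimal async sketch; that `stream_async` is awaited to obtain the stream, which is then consumed with `async for`, is an assumption here:

```python
import asyncio

from mirascope import llm

async def main() -> None:
    model = llm.Model("openai/gpt-5-mini")
    message = llm.messages.user("Please recommend a fantasy book.")
    # Assumption: await the call, then async-iterate the stream.
    stream = await model.stream_async(messages=[message])
    async for chunk in stream:
        print(chunk, end="", flush=True)

asyncio.run(main())
```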
Function context_call
Generate an llm.ContextResponse by synchronously calling this model's LLM provider.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools=None | Sequence[Tool \| ContextTool[DepsT]] \| ContextToolkit[DepsT] \| None | Optional tools that the model may invoke. |
| format=None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
Returns
| Type | Description |
|---|---|
| ContextResponse[DepsT, None] \| ContextResponse[DepsT, FormattableT] | An `llm.ContextResponse` object containing the LLM-generated content. |
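A minimal sketch of a context call. The `LibraryDeps` type and the way the context is constructed (`llm.Context(deps=...)`) are assumptions for illustration; consult the Context documentation for the actual constructor:

```python
from dataclasses import dataclass

from mirascope import llm

@dataclass
class LibraryDeps:
    favorites: list[str]

# Assumed constructor: how a Context is actually created may differ.
ctx = llm.Context(deps=LibraryDeps(favorites=["Mistborn"]))

model = llm.Model("openai/gpt-5-mini")
message = llm.messages.user("Please recommend a fantasy book.")
# Context-aware tools would typically be passed via tools=; omitted here.
response = model.context_call(ctx, messages=[message])
```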
Function context_call_async
Generate an llm.AsyncContextResponse by asynchronously calling this model's LLM provider.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools=None | Sequence[AsyncTool \| AsyncContextTool[DepsT]] \| AsyncContextToolkit[DepsT] \| None | Optional tools that the model may invoke. |
| format=None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
Returns
| Type | Description |
|---|---|
| AsyncContextResponse[DepsT, None] \| AsyncContextResponse[DepsT, FormattableT] | An `llm.AsyncContextResponse` object containing the LLM-generated content. |
Function context_stream
Generate an llm.ContextStreamResponse by synchronously streaming from this model's LLM provider.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools=None | Sequence[Tool \| ContextTool[DepsT]] \| ContextToolkit[DepsT] \| None | Optional tools that the model may invoke. |
| format=None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
Returns
| Type | Description |
|---|---|
| ContextStreamResponse[DepsT, None] \| ContextStreamResponse[DepsT, FormattableT] | An `llm.ContextStreamResponse` object for iterating over the LLM-generated content. |
Function context_stream_async
Generate an llm.AsyncContextStreamResponse by asynchronously streaming from this model's LLM provider.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools=None | Sequence[AsyncTool \| AsyncContextTool[DepsT]] \| AsyncContextToolkit[DepsT] \| None | Optional tools that the model may invoke. |
| format=None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
Returns
| Type | Description |
|---|---|
| AsyncContextStreamResponse[DepsT, None] \| AsyncContextStreamResponse[DepsT, FormattableT] | An `llm.AsyncContextStreamResponse` object for asynchronously iterating over the LLM-generated content. |
Function resume
Generate a new llm.Response by extending another response's messages with additional user content.
Uses the previous response's tools and output format, and this model's params.
Depending on the client, this may be a wrapper around using client call methods with the response's messages and the new content, or it may use a provider-specific API for resuming an existing interaction.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| response | Response \| Response[FormattableT] | Previous response to extend. |
| content | UserContent | Additional user content to append. |
Returns
| Type | Description |
|---|---|
| Response \| Response[FormattableT] | A new `llm.Response` object containing the extended conversation. |
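A minimal sketch, assuming a plain string is acceptable `UserContent`:

```python
from mirascope import llm

model = llm.Model("openai/gpt-5-mini")
first = model.call(messages=[llm.messages.user("Please recommend a fantasy book.")])
# Extend the same conversation; the previous response's tools and format
# carry over, while this model's params are used.
follow_up = model.resume(first, "Why did you pick that one?")
```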
Function resume_async
Generate a new llm.AsyncResponse by extending another response's messages with additional user content.
Uses the previous response's tools and output format, and this model's params.
Depending on the client, this may be a wrapper around using client call methods with the response's messages and the new content, or it may use a provider-specific API for resuming an existing interaction.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| response | AsyncResponse \| AsyncResponse[FormattableT] | Previous async response to extend. |
| content | UserContent | Additional user content to append. |
Returns
| Type | Description |
|---|---|
| AsyncResponse \| AsyncResponse[FormattableT] | A new `llm.AsyncResponse` object containing the extended conversation. |
Function context_resume
Generate a new llm.ContextResponse by extending another response's messages with additional user content.
Uses the previous response's tools and output format, and this model's params.
Depending on the client, this may be a wrapper around using client call methods with the response's messages and the new content, or it may use a provider-specific API for resuming an existing interaction.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| response | ContextResponse[DepsT, None] \| ContextResponse[DepsT, FormattableT] | Previous context response to extend. |
| content | UserContent | Additional user content to append. |
Returns
| Type | Description |
|---|---|
| ContextResponse[DepsT, None] \| ContextResponse[DepsT, FormattableT] | A new `llm.ContextResponse` object containing the extended conversation. |
Function context_resume_async
Generate a new llm.AsyncContextResponse by extending another response's messages with additional user content.
Uses the previous response's tools and output format, and this model's params.
Depending on the client, this may be a wrapper around using client call methods with the response's messages and the new content, or it may use a provider-specific API for resuming an existing interaction.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| response | AsyncContextResponse[DepsT, None] \| AsyncContextResponse[DepsT, FormattableT] | Previous async context response to extend. |
| content | UserContent | Additional user content to append. |
Returns
| Type | Description |
|---|---|
| AsyncContextResponse[DepsT, None] \| AsyncContextResponse[DepsT, FormattableT] | A new `llm.AsyncContextResponse` object containing the extended conversation. |
Function resume_stream
Generate a new llm.StreamResponse by extending another response's messages with additional user content.
Uses the previous response's tools and output format, and this model's params.
Depending on the client, this may be a wrapper around using client call methods with the response's messages and the new content, or it may use a provider-specific API for resuming an existing interaction.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| response | StreamResponse \| StreamResponse[FormattableT] | Previous stream response to extend. |
| content | UserContent | Additional user content to append. |
Returns
| Type | Description |
|---|---|
| StreamResponse \| StreamResponse[FormattableT] | A new `llm.StreamResponse` object for streaming the extended conversation. |
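A minimal sketch, assuming streams are consumed by direct iteration and that a plain string is acceptable `UserContent`:

```python
from mirascope import llm

model = llm.Model("openai/gpt-5-mini")
first = model.stream(messages=[llm.messages.user("Please recommend a fantasy book.")])
for chunk in first:
    pass  # consume the first stream before resuming from it (assumed prerequisite)

# Stream the follow-up turn of the same conversation.
stream = model.resume_stream(first, "Summarize that in one sentence.")
for chunk in stream:
    print(chunk, end="", flush=True)
```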
Function resume_stream_async
Generate a new llm.AsyncStreamResponse by extending another response's messages with additional user content.
Uses the previous response's tools and output format, and this model's params.
Depending on the client, this may be a wrapper around using client call methods with the response's messages and the new content, or it may use a provider-specific API for resuming an existing interaction.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| response | AsyncStreamResponse \| AsyncStreamResponse[FormattableT] | Previous async stream response to extend. |
| content | UserContent | Additional user content to append. |
Returns
| Type | Description |
|---|---|
| AsyncStreamResponse \| AsyncStreamResponse[FormattableT] | A new `llm.AsyncStreamResponse` object for asynchronously streaming the extended conversation. |
Function context_resume_stream
Generate a new llm.ContextStreamResponse by extending another response's messages with additional user content.
Uses the previous response's tools and output format, and this model's params.
Depending on the client, this may be a wrapper around using client call methods with the response's messages and the new content, or it may use a provider-specific API for resuming an existing interaction.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| response | ContextStreamResponse[DepsT, None] \| ContextStreamResponse[DepsT, FormattableT] | Previous context stream response to extend. |
| content | UserContent | Additional user content to append. |
Returns
| Type | Description |
|---|---|
| ContextStreamResponse[DepsT, None] \| ContextStreamResponse[DepsT, FormattableT] | A new `llm.ContextStreamResponse` object for streaming the extended conversation. |
Function context_resume_stream_async
Generate a new llm.AsyncContextStreamResponse by extending another response's messages with additional user content.
Uses the previous response's tools and output format, and this model's params.
Depending on the client, this may be a wrapper around using client call methods with the response's messages and the new content, or it may use a provider-specific API for resuming an existing interaction.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| response | AsyncContextStreamResponse[DepsT, None] \| AsyncContextStreamResponse[DepsT, FormattableT] | Previous async context stream response to extend. |
| content | UserContent | Additional user content to append. |
Returns
| Type | Description |
|---|---|
| AsyncContextStreamResponse[DepsT, None] \| AsyncContextStreamResponse[DepsT, FormattableT] | A new `llm.AsyncContextStreamResponse` object for asynchronously streaming the extended conversation. |
Function model
Helper for creating a Model instance (which may be used as a context manager).
This is just an alias for the Model constructor, added for convenience.
This function returns a Model instance that implements the context manager protocol.
When used with a with statement, the model will be set in context and used by both
llm.use_model() and llm.call() within that context. This allows you to override
the default model at runtime without modifying function definitions.
The returned Model instance can also be stored and reused:
```python
m = llm.model("openai/gpt-4o")

# Use directly
response = m.call(messages=[...])

# Or use as context manager
with m:
    response = recommend_book("fantasy")
```

When a model is set in context, it completely overrides any model ID or parameters specified in llm.use_model() or llm.call(). The context model's parameters take precedence, and any unset parameters use default values.
Parameters
Returns
| Type | Description |
|---|---|
| Model | A Model instance that can be used as a context manager. |
Function model_from_context
Get the Model currently set via context, if any.
Function use_model
Get the model from context if available, otherwise create a new Model.
This function checks if a model has been set in the context (via llm.model()
context manager). If a model is found in the context, it returns that model,
ignoring any model ID or parameters passed to this function. Otherwise, it creates
and returns a new llm.Model instance with the provided arguments.
This allows you to write functions that work with a default model but can be
overridden at runtime using the llm.model() context manager.
Example:

```python
import mirascope.llm as llm

def recommend_book(genre: str) -> llm.Response:
    model = llm.use_model("openai/gpt-5-mini")
    message = llm.messages.user(f"Please recommend a book in {genre}.")
    return model.call(messages=[message])

# Uses the default model (gpt-5-mini)
response = recommend_book("fantasy")

# Override with a different model
with llm.model(provider="anthropic", model_id="anthropic/claude-sonnet-4-5"):
    response = recommend_book("fantasy")  # Uses Claude instead
```

Parameters
Returns
| Type | Description |
|---|---|
| Model | An `llm.Model` instance from context (if set) or a new instance with the specified settings. |