
llm

Class APIError

Base class for API-related errors.
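
Example (an illustrative sketch of handling API errors; it assumes the error classes are exported on the llm module and reuses the model id from the Model examples below):

from mirascope import llm

model = llm.use_model("openai/gpt-5-mini")

try:
    response = model.call(messages=[llm.messages.user("Hello!")])
except llm.RateLimitError:
    ...  # rate limited (429): back off and retry
except llm.APIError as error:
    # Base class for API-related errors; status_code may be None
    print(error.status_code, error.original_exception)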

Bases:

MirascopeLLMError

Attributes

status_code (int | None)

Attribute AssistantContent

Type: TypeAlias

Type alias for content that can fit into an AssistantMessage.

Attribute AssistantContentChunk

Type: TypeAlias

Chunks of assistant content that may be streamed as generated by the LLM.

Attribute AssistantContentPart

Type: TypeAlias

Content parts that can be included in an AssistantMessage.

Class AssistantMessage

An assistant message containing the model's response.

Attributes

role (Literal['assistant']): The role of this message. Always "assistant".
content (Sequence[AssistantContentPart]): The content of the assistant message.
name (str | None): A name identifying the creator of this message.
provider_id (ProviderId | None): The LLM provider that generated this assistant message, if available.
model_id (ModelId | None): The model identifier of the LLM that generated this assistant message, if available.
provider_model_name (str | None): The provider-specific model identifier (e.g. "gpt-5:responses"), if available.
raw_message (Jsonable | None): The provider-specific raw representation of this assistant message, if available. If raw_message is truthy, it may be used for provider-specific behavior when resuming an LLM interaction that included this assistant message. For example, we can reuse the provider-specific raw encoding rather than re-encoding the message from its Mirascope content representation. This may also take advantage of server-side provider context, e.g. identifiers of reasoning context tokens that the provider generated. If present, the content should be encoded as JSON-serializable data, in the format the provider expects for representing the Mirascope data. This may involve e.g. converting Pydantic `BaseModel`s into plain dicts via `model_dump`. A raw message is not required, as the Mirascope content can also be used to generate a valid input to the provider (potentially without taking advantage of provider-specific reasoning caches, etc.). In that case, raw_message should be left empty.

Class AsyncCall

An async call that directly generates LLM responses without requiring a model argument.

Created by decorating an async MessageTemplate with llm.call. The decorated async function becomes directly callable to generate responses asynchronously, with the Model bundled in.

An AsyncCall is essentially: async MessageTemplate + tools + format + Model. It can be invoked directly: await call(*args, **kwargs) (no model argument needed).

The model can be overridden at runtime using the llm.model(...) context manager.
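
Example (a minimal sketch; the llm.call decorator arguments and returning a plain string as message content are assumptions based on CallDecorator below):

import asyncio

from mirascope import llm

@llm.call(model=llm.Model("openai/gpt-5-mini"))
async def recommend_book(genre: str):
    return f"Please recommend a book in {genre}."

async def main() -> None:
    response = await recommend_book("fantasy")  # no model argument needed
    print(response)
    # Override the bundled model at runtime via the context manager
    with llm.model(provider="anthropic", model_id="anthropic/claude-sonnet-4-5"):
        print(await recommend_book("fantasy"))

asyncio.run(main())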

Bases: BaseCall, Generic[P, FormattableT]

Attributes

prompt (AsyncPrompt[P, FormattableT]): The underlying AsyncPrompt instance that generates messages with tools and format.

Function call

Generates a response using the LLM asynchronously.

Parameters

NameTypeDescription
selfAny-
args= ()P.args-
kwargs= {}P.kwargs-

Returns

Function stream

Generates a streaming response using the LLM asynchronously.

Parameters

NameTypeDescription
selfAny-
args= ()P.args-
kwargs= {}P.kwargs-

Attribute AsyncChunkIterator

Type: TypeAlias

Asynchronous iterator yielding chunks with raw data.

Class AsyncContextCall

An async context-aware call that directly generates LLM responses without requiring a model argument.

Created by decorating an async ContextMessageTemplate with llm.call. The decorated async function (with first parameter 'ctx' of type Context[DepsT]) becomes directly callable to generate responses asynchronously with context dependencies, with the Model bundled in.

An AsyncContextCall is essentially: async ContextMessageTemplate + tools + format + Model. It can be invoked directly: await call(ctx, *args, **kwargs) (no model argument needed).

The model can be overridden at runtime using the llm.model(...) context manager.

Bases: BaseCall, Generic[P, DepsT, FormattableT]

Attributes

prompt (AsyncContextPrompt[P, DepsT, FormattableT]): The underlying AsyncContextPrompt instance that generates messages with tools and format.

Function call

Generates a response using the LLM asynchronously.

Parameters

NameTypeDescription
selfAny-
ctxContext[DepsT]-
args= ()P.args-
kwargs= {}P.kwargs-

Returns

Function stream

Generates a streaming response using the LLM asynchronously.

Parameters

NameTypeDescription
selfAny-
ctxContext[DepsT]-
args= ()P.args-
kwargs= {}P.kwargs-

Class AsyncContextPrompt

An async context-aware prompt that can be called with a model to generate a response.

Created by decorating an async ContextMessageTemplate with llm.prompt. The decorated async function (with first parameter 'ctx' of type Context[DepsT]) becomes callable with a Model to generate LLM responses asynchronously with context dependencies.

An AsyncContextPrompt is essentially: async ContextMessageTemplate + tools + format. It can be invoked with a model: await prompt(model, ctx, *args, **kwargs).

Bases:

Generic[P, DepsT, FormattableT]

Attributes

fn (AsyncContextMessageTemplate[P, DepsT]): The underlying async context-aware prompt function that generates message content.
toolkit (AsyncContextToolkit[DepsT]): The toolkit containing this prompt's async context-aware tools.
format (type[FormattableT] | Format[FormattableT] | None): The response format for the generated response.

Function call

Generates a response using the provided model asynchronously.

Parameters

NameTypeDescription
selfAny-
modelModel-
ctxContext[DepsT]-
args= ()P.args-
kwargs= {}P.kwargs-

Returns

Function stream

Generates a streaming response using the provided model asynchronously.

Parameters

NameTypeDescription
selfAny-
modelModel-
ctxContext[DepsT]-
args= ()P.args-
kwargs= {}P.kwargs-

Class AsyncContextResponse

The response generated by an LLM from an async context call.

Bases: BaseResponse[AsyncContextToolkit[DepsT], FormattableT], Generic[DepsT, FormattableT]

Function execute_tools

Execute and return all of the tool calls in the response.

Parameters

NameTypeDescription
selfAny-
ctxContext[DepsT]A `Context` with the required deps type.

Returns

TypeDescription
Sequence[ToolOutput[Jsonable]]A sequence containing a `ToolOutput` for every tool call in the order they appeared.

Function resume

Generate a new AsyncContextResponse using this response's messages with additional user content.

Uses this response's tools and format type. Also uses this response's provider, model, client, and params, unless the model context manager is being used to provide a new LLM as an override.

Parameters

NameTypeDescription
selfAny-
ctxContext[DepsT]A Context with the required deps type.
contentUserContentThe new user message content to append to the message history.

Returns

TypeDescription
AsyncContextResponse[DepsT] | AsyncContextResponse[DepsT, FormattableT]A new `AsyncContextResponse` instance generated from the extended message history.

Class AsyncContextStreamResponse

An AsyncContextStreamResponse wraps response content from the LLM with a streaming interface.

This class supports iteration to process chunks as they arrive from the model.

Content can be streamed in one of four ways:

  • Via .streams(), which provides an iterator of streams, where each stream contains chunks of streamed data. The chunks contain deltas (new content in that particular chunk), and the stream itself accumulates the collected state of all the chunks processed thus far.
  • Via .chunk_stream(), which allows iterating over Mirascope's provider-agnostic chunk representation.
  • Via .pretty_stream(), a helper method which provides all response content as str deltas. Iterating through pretty_stream will yield text content and optionally placeholder representations for other content types, but it will still consume the full stream.
  • Via .structured_stream(), a helper method which provides partial structured outputs from a response (useful when FormatT is set). Iterating through structured_stream will only yield structured partials, but it will still consume the full stream.

As chunks are consumed, they are collected in-memory on the AsyncContextStreamResponse, and they become available in .content, .messages, .tool_calls, etc. All of the stream iterators can be restarted after the stream has been consumed, in which case they will yield chunks from memory in the original sequence that came from the LLM. If the stream is only partially consumed, a fresh iterator will first iterate through in-memory content, and then will continue consuming fresh chunks from the LLM.

In the specific case of text chunks, they are included in the response content as soon as they become available, via an llm.Text part that updates as more deltas come in. This enables the behavior where resuming a partially-streamed response will include as much text as the model generated.

Other chunks, like Thought or ToolCall, are only added to the response content once the corresponding part has fully streamed. This avoids adding incomplete tool calls, or thinking blocks missing their signatures, to the response.

Fully iterating through any of these iterators will consume the whole LLM stream. You can pause stream execution midway by breaking out of the iterator, and you can safely resume consumption from the same iterator if desired.

Bases: BaseAsyncStreamResponse[AsyncContextToolkit[DepsT], FormattableT], Generic[DepsT, FormattableT]

Function execute_tools

Execute and return all of the tool calls in the response.

Parameters

NameTypeDescription
selfAny-
ctxContext[DepsT]A `Context` with the required deps type.

Returns

TypeDescription
Sequence[ToolOutput[Jsonable]]A sequence containing a `ToolOutput` for every tool call in the order they appeared.

Function resume

Generate a new AsyncContextStreamResponse using this response's messages with additional user content.

Uses this response's tools and format type. Also uses this response's provider, model, client, and params, unless the model context manager is being used to provide a new LLM as an override.

Parameters

NameTypeDescription
selfAny-
ctxContext[DepsT]A Context with the required deps type.
contentUserContentThe new user message content to append to the message history.

Returns

TypeDescription
AsyncContextStreamResponse[DepsT] | AsyncContextStreamResponse[DepsT, FormattableT]A new `AsyncContextStreamResponse` instance generated from the extended message history.

Class AsyncContextTool

Protocol defining an async tool that can be used by LLMs with context.

An AsyncContextTool represents an async function that can be called by an LLM during a call. It includes metadata like name, description, and parameter schema.

This class is not instantiated directly but created by the @tool() decorator.

Bases: ToolSchema[AsyncContextToolFn[DepsT, AnyP, JsonableCovariantT]], Generic[DepsT, JsonableCovariantT, AnyP]

Function execute

Execute the async context tool using an LLM-provided ToolCall.

Parameters

NameTypeDescription
selfAny-
ctxContext[DepsT]-
tool_callToolCall-

Returns

Class AsyncContextToolkit

A collection of AsyncContextTools, with helpers for getting and executing specific tools.

Bases: BaseToolkit[AsyncTool | AsyncContextTool[DepsT]], Generic[DepsT]

Function execute

Execute an AsyncContextTool using the provided tool call.

Parameters

NameTypeDescription
selfAny-
ctxContext[DepsT]The context containing dependencies that match the tool.
tool_callToolCallThe tool call to execute.

Returns

TypeDescription
ToolOutput[Jsonable]The output from executing the `AsyncContextTool`.

Class AsyncPrompt

An async prompt that can be called with a model to generate a response.

Created by decorating an async MessageTemplate with llm.prompt. The decorated async function becomes callable with a Model to generate LLM responses asynchronously.

An AsyncPrompt is essentially: async MessageTemplate + tools + format. It can be invoked with a model: await prompt(model, *args, **kwargs).

Bases:

Generic[P, FormattableT]

Attributes

fn (AsyncMessageTemplate[P]): The underlying async prompt function that generates message content.
toolkit (AsyncToolkit): The toolkit containing this prompt's async tools.
format (type[FormattableT] | Format[FormattableT] | None): The response format for the generated response.

Function call

Generates a response using the provided model asynchronously.

Parameters

NameTypeDescription
selfAny-
modelModel-
args= ()P.args-
kwargs= {}P.kwargs-

Returns

Function stream

Generates a streaming response using the provided model asynchronously.

Parameters

NameTypeDescription
selfAny-
modelModel-
args= ()P.args-
kwargs= {}P.kwargs-

Class AsyncResponse

The response generated by an LLM in async mode.

Bases:

BaseResponse[AsyncToolkit, FormattableT]

Function execute_tools

Execute and return all of the tool calls in the response.

Parameters

NameTypeDescription
selfAny-

Returns

TypeDescription
Sequence[ToolOutput[Jsonable]]A sequence containing a `ToolOutput` for every tool call in the order they appeared.

Function resume

Generate a new AsyncResponse using this response's messages with additional user content.

Uses this response's tools and format type. Also uses this response's provider, model, client, and params, unless the model context manager is being used to provide a new LLM as an override.

Parameters

NameTypeDescription
selfAny-
contentUserContentThe new user message content to append to the message history.

Returns

TypeDescription
AsyncResponse | AsyncResponse[FormattableT]A new `AsyncResponse` instance generated from the extended message history.

Attribute AsyncStream

Type: TypeAlias

An asynchronous assistant content stream.

Class AsyncStreamResponse

An AsyncStreamResponse wraps response content from the LLM with a streaming interface.

This class supports iteration to process chunks as they arrive from the model.

Content can be streamed in one of four ways:

  • Via .streams(), which provides an iterator of streams, where each stream contains chunks of streamed data. The chunks contain deltas (new content in that particular chunk), and the stream itself accumulates the collected state of all the chunks processed thus far.
  • Via .chunk_stream(), which allows iterating over Mirascope's provider-agnostic chunk representation.
  • Via .pretty_stream(), a helper method which provides all response content as str deltas. Iterating through pretty_stream will yield text content and optionally placeholder representations for other content types, but it will still consume the full stream.
  • Via .structured_stream(), a helper method which provides partial structured outputs from a response (useful when FormatT is set). Iterating through structured_stream will only yield structured partials, but it will still consume the full stream.

As chunks are consumed, they are collected in-memory on the AsyncStreamResponse, and they become available in .content, .messages, .tool_calls, etc. All of the stream iterators can be restarted after the stream has been consumed, in which case they will yield chunks from memory in the original sequence that came from the LLM. If the stream is only partially consumed, a fresh iterator will first iterate through in-memory content, and then will continue consuming fresh chunks from the LLM.

In the specific case of text chunks, they are included in the response content as soon as they become available, via an llm.Text part that updates as more deltas come in. This enables the behavior where resuming a partially-streamed response will include as much text as the model generated.

Other chunks, like Thought or ToolCall, are only added to the response content once the corresponding part has fully streamed. This avoids adding incomplete tool calls, or thinking blocks missing their signatures, to the response.

Fully iterating through any of these iterators will consume the whole LLM stream. You can pause stream execution midway by breaking out of the iterator, and you can safely resume consumption from the same iterator if desired.
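
Example (an illustrative sketch of consuming an async stream via pretty_stream; whether stream_async must be awaited is an assumption):

import asyncio

from mirascope import llm

async def main() -> None:
    model = llm.use_model("openai/gpt-5-mini")
    response = await model.stream_async(
        messages=[llm.messages.user("Tell me a very short story.")]
    )
    # pretty_stream yields str deltas and consumes the full LLM stream
    async for delta in response.pretty_stream():
        print(delta, end="", flush=True)
    # Once consumed, the accumulated content is available on the response
    print(response.content)

asyncio.run(main())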

Bases:

BaseAsyncStreamResponse[AsyncToolkit, FormattableT]

Function execute_tools

Execute and return all of the tool calls in the response.

Parameters

NameTypeDescription
selfAny-

Returns

TypeDescription
Sequence[ToolOutput[Jsonable]]A sequence containing a `ToolOutput` for every tool call in the order they appeared.

Function resume

Generate a new AsyncStreamResponse using this response's messages with additional user content.

Uses this response's tools and format type. Also uses this response's provider, model, client, and params, unless the model context manager is being used to provide a new LLM as an override.

Parameters

NameTypeDescription
selfAny-
contentUserContentThe new user message content to append to the message history.

Returns

TypeDescription
AsyncStreamResponse | AsyncStreamResponse[FormattableT]A new `AsyncStreamResponse` instance generated from the extended message history.

Class AsyncTextStream

Asynchronous text stream implementation.

Bases:

BaseAsyncStream[Text, str]

Attributes

type (Literal['async_text_stream'])
content_type (Literal['text']): The type of content stored in this stream.
partial_text (str): The accumulated text content as chunks are received.

Function collect

Asynchronously collect all chunks and return the final Text content.

Parameters

NameTypeDescription
selfAny-

Returns

TypeDescription
TextThe complete text content after consuming all chunks.

Class AsyncThoughtStream

Asynchronous thought stream implementation.

Bases:

BaseAsyncStream[Thought, str]

Attributes

type (Literal['async_thought_stream'])
content_type (Literal['thought']): The type of content stored in this stream.
partial_thought (str): The accumulated thought content as chunks are received.

Function collect

Asynchronously collect all chunks and return the final Thought content.

Parameters

NameTypeDescription
selfAny-

Returns

TypeDescription
ThoughtThe complete thought content after consuming all chunks.

Class AsyncTool

An async tool that can be used by LLMs.

An AsyncTool represents an async function that can be called by an LLM during a call. It includes metadata like name, description, and parameter schema.

This class is not instantiated directly but created by the @tool() decorator.
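
Example (a minimal sketch; exposing the decorator as llm.tool is an assumption based on the @tool() reference above):

from mirascope import llm

@llm.tool()
async def search_catalog(title: str) -> str:
    """Look up a book by title in the catalog."""
    return f"Found 3 editions of {title}."

# Async tools may then be passed to async call methods, e.g.:
# response = await model.call_async(messages=[...], tools=[search_catalog])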

Bases: ToolSchema[AsyncToolFn[AnyP, JsonableCovariantT]], Generic[AnyP, JsonableCovariantT]

Function execute

Execute the async tool using an LLM-provided ToolCall.

Parameters

NameTypeDescription
selfAny-
tool_callToolCall-

Returns

Class AsyncToolCallStream

Asynchronous tool call stream implementation.

Bases:

BaseAsyncStream[ToolCall, str]

Attributes

type (Literal['async_tool_call_stream'])
content_type (Literal['tool_call']): The type of content stored in this stream.
tool_id (str): A unique identifier for this tool call.
tool_name (str): The name of the tool being called.
partial_args (str): The accumulated tool arguments as chunks are received.

Function collect

Asynchronously collect all chunks and return the final ToolCall content.

Parameters

NameTypeDescription
selfAny-

Returns

TypeDescription
ToolCallThe complete tool call after consuming all chunks.

Class AsyncToolkit

A collection of AsyncTools, with helpers for getting and executing specific tools.

Bases:

BaseToolkit[AsyncTool]

Function execute

Execute an AsyncTool using the provided tool call.

Parameters

NameTypeDescription
selfAny-
tool_callToolCallThe tool call to execute.

Returns

TypeDescription
ToolOutput[Jsonable]The output from executing the `AsyncTool`.

Class Audio

Audio content for a message.

Audio can be included in messages for voice or sound-based interactions.

Attributes

type (Literal['audio'])
source (Base64AudioSource)

Function download

Download and encode an audio file from a URL.

Parameters

NameTypeDescription
clsAny-
urlstrThe URL of the audio file to download
max_size= MAX_AUDIO_SIZEintMaximum allowed audio size in bytes (default: 25MB)

Returns

TypeDescription
AudioAn `Audio` with a `Base64AudioSource`

Function download_async

Asynchronously download and encode an audio file from a URL.

Parameters

NameTypeDescription
clsAny-
urlstrThe URL of the audio file to download
max_size= MAX_AUDIO_SIZEintMaximum allowed audio size in bytes (default: 25MB)

Returns

TypeDescription
AudioAn `Audio` with a `Base64AudioSource`

Function from_file

Create an Audio from a file path.

Parameters

NameTypeDescription
clsAny-
file_pathstrPath to the audio file
max_size= MAX_AUDIO_SIZEintMaximum allowed audio size in bytes (default: 25MB)

Returns

TypeDescription
Audio-

Function from_bytes

Create an Audio from raw bytes.

Parameters

NameTypeDescription
clsAny-
databytesRaw audio bytes
max_size= MAX_AUDIO_SIZEintMaximum allowed audio size in bytes (default: 25MB)

Returns

TypeDescription
Audio-

Class AuthenticationError

Raised for authentication failures (401, invalid API keys).

Bases:

APIError

Class BadRequestError

Raised for malformed requests (400, 422).

Bases:

APIError

Class Base64AudioSource

Audio data represented as a base64 encoded string.

Attributes

type (Literal['base64_audio_source'])
data (str): The audio data, as a base64 encoded string.
mime_type (AudioMimeType): The mime type of the audio (e.g. audio/mp3).

Class Base64ImageSource

Image data represented as a base64 encoded string.

Attributes

type (Literal['base64_image_source'])
data (str): The image data, as a base64 encoded string.
mime_type (ImageMimeType): The mime type of the image (e.g. image/png).

Class Call

A call that directly generates LLM responses without requiring a model argument.

Created by decorating a MessageTemplate with llm.call. The decorated function becomes directly callable to generate responses, with the Model bundled in.

A Call is essentially: MessageTemplate + tools + format + Model. It can be invoked directly: call(*args, **kwargs) (no model argument needed).

The model can be overridden at runtime using the llm.model(...) context manager.

Bases: BaseCall, Generic[P, FormattableT]

Attributes

prompt (Prompt[P, FormattableT]): The underlying Prompt instance that generates messages with tools and format.

Function call

Generates a response using the LLM.

Parameters

NameTypeDescription
selfAny-
args= ()P.args-
kwargs= {}P.kwargs-

Returns

TypeDescription
Response | Response[FormattableT]-

Function stream

Generates a streaming response using the LLM.

Parameters

NameTypeDescription
selfAny-
args= ()P.args-
kwargs= {}P.kwargs-

Returns

Class CallDecorator

Decorator for converting a MessageTemplate into a Call.

Takes a raw prompt function that returns message content and wraps it with tools, format, and a model to create a Call that can be invoked directly without needing to pass a model argument.

The decorator automatically detects whether the function is async or context-aware and creates the appropriate Call variant (Call, AsyncCall, ContextCall, or AsyncContextCall).

Conceptually: CallDecorator = PromptDecorator + Model. Result: Call = MessageTemplate + tools + format + Model.

Bases:

Generic[ToolT, FormattableT]

Attributes

model (Model): The default model to use with this call. May be overridden.
tools (Sequence[ToolT] | None): The tools that are included in the prompt, if any.
format (type[FormattableT] | Format[FormattableT] | None): The structured output format of the prompt, if any.

Attribute ChunkIterator

Type: TypeAlias

Synchronous iterator yielding chunks with raw data.

Class ConnectionError

Raised when unable to connect to the API (network issues, timeouts).

Bases:

MirascopeLLMError

Class Context

Context for LLM calls.

This class provides a context for LLM calls, including the model, parameters, and any dependencies needed for the call.
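
Example (an illustrative sketch; constructing Context directly with deps and passing it positionally to context_call are assumptions):

from dataclasses import dataclass

from mirascope import llm

@dataclass
class Deps:
    user_name: str

ctx = llm.Context(deps=Deps(user_name="Ada"))
model = llm.use_model("openai/gpt-5-mini")
response = model.context_call(
    ctx,
    messages=[llm.messages.user("Greet the user by name.")],
)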

Bases:

Generic[DepsT]

Attributes

deps (DepsT): The dependencies needed for a call.

Class ContextCall

A context-aware call that directly generates LLM responses without requiring a model argument.

Created by decorating a ContextMessageTemplate with llm.call. The decorated function (with first parameter 'ctx' of type Context[DepsT]) becomes directly callable to generate responses with context dependencies, with the Model bundled in.

A ContextCall is essentially: ContextMessageTemplate + tools + format + Model. It can be invoked directly: call(ctx, *args, **kwargs) (no model argument needed).

The model can be overridden at runtime using the llm.model(...) context manager.

Bases: BaseCall, Generic[P, DepsT, FormattableT]

Attributes

prompt (ContextPrompt[P, DepsT, FormattableT]): The underlying ContextPrompt instance that generates messages with tools and format.

Function call

Generates a response using the LLM.

Parameters

NameTypeDescription
selfAny-
ctxContext[DepsT]-
args= ()P.args-
kwargs= {}P.kwargs-

Returns

TypeDescription
ContextResponse[DepsT, None] | ContextResponse[DepsT, FormattableT]-

Function stream

Generates a streaming response using the LLM.

Parameters

NameTypeDescription
selfAny-
ctxContext[DepsT]-
args= ()P.args-
kwargs= {}P.kwargs-

Returns

Class ContextPrompt

A context-aware prompt that can be called with a model to generate a response.

Created by decorating a ContextMessageTemplate with llm.prompt. The decorated function (with first parameter 'ctx' of type Context[DepsT]) becomes callable with a Model to generate LLM responses with context dependencies.

A ContextPrompt is essentially: ContextMessageTemplate + tools + format. It can be invoked with a model: prompt(model, ctx, *args, **kwargs).

Bases:

Generic[P, DepsT, FormattableT]

Attributes

fn (ContextMessageTemplate[P, DepsT]): The underlying context-aware prompt function that generates message content.
toolkit (ContextToolkit[DepsT]): The toolkit containing this prompt's context-aware tools.
format (type[FormattableT] | Format[FormattableT] | None): The response format for the generated response.

Function call

Generates a response using the provided model.

Parameters

NameTypeDescription
selfAny-
modelModel-
ctxContext[DepsT]-
args= ()P.args-
kwargs= {}P.kwargs-

Returns

TypeDescription
ContextResponse[DepsT, None] | ContextResponse[DepsT, FormattableT]-

Function stream

Generates a streaming response using the provided model.

Parameters

NameTypeDescription
selfAny-
modelModel-
ctxContext[DepsT]-
args= ()P.args-
kwargs= {}P.kwargs-

Returns

Class ContextResponse

The response generated by an LLM from a context call.

Bases: BaseResponse[ContextToolkit[DepsT], FormattableT], Generic[DepsT, FormattableT]

Function execute_tools

Execute and return all of the tool calls in the response.

Parameters

NameTypeDescription
selfAny-
ctxContext[DepsT]A `Context` with the required deps type.

Returns

TypeDescription
Sequence[ToolOutput[Jsonable]]A sequence containing a `ToolOutput` for every tool call.

Function resume

Generate a new ContextResponse using this response's messages with additional user content.

Uses this response's tools and format type. Also uses this response's provider, model, client, and params, unless the model context manager is being used to provide a new LLM as an override.

Parameters

NameTypeDescription
selfAny-
ctxContext[DepsT]A `Context` with the required deps type.
contentUserContentThe new user message content to append to the message history.

Returns

TypeDescription
ContextResponse[DepsT] | ContextResponse[DepsT, FormattableT]A new `ContextResponse` instance generated from the extended message history.

Class ContextStreamResponse

A ContextStreamResponse wraps response content from the LLM with a streaming interface.

This class supports iteration to process chunks as they arrive from the model.

Content can be streamed in one of four ways:

  • Via .streams(), which provides an iterator of streams, where each stream contains chunks of streamed data. The chunks contain deltas (new content in that particular chunk), and the stream itself accumulates the collected state of all the chunks processed thus far.
  • Via .chunk_stream(), which allows iterating over Mirascope's provider-agnostic chunk representation.
  • Via .pretty_stream(), a helper method which provides all response content as str deltas. Iterating through pretty_stream will yield text content and optionally placeholder representations for other content types, but it will still consume the full stream.
  • Via .structured_stream(), a helper method which provides partial structured outputs from a response (useful when FormatT is set). Iterating through structured_stream will only yield structured partials, but it will still consume the full stream.

As chunks are consumed, they are collected in-memory on the ContextStreamResponse, and they become available in .content, .messages, .tool_calls, etc. All of the stream iterators can be restarted after the stream has been consumed, in which case they will yield chunks from memory in the original sequence that came from the LLM. If the stream is only partially consumed, a fresh iterator will first iterate through in-memory content, and then will continue consuming fresh chunks from the LLM.

In the specific case of text chunks, they are included in the response content as soon as they become available, via an llm.Text part that updates as more deltas come in. This enables the behavior where resuming a partially-streamed response will include as much text as the model generated.

Other chunks, like Thought or ToolCall, are only added to the response content once the corresponding part has fully streamed. This avoids adding incomplete tool calls, or thinking blocks missing their signatures, to the response.

Fully iterating through any of these iterators will consume the whole LLM stream. You can pause stream execution midway by breaking out of the iterator, and you can safely resume consumption from the same iterator if desired.

Bases: BaseSyncStreamResponse[ContextToolkit[DepsT], FormattableT], Generic[DepsT, FormattableT]

Function execute_tools

Execute and return all of the tool calls in the response.

Parameters

NameTypeDescription
selfAny-
ctxContext[DepsT]A `Context` with the required deps type.

Returns

TypeDescription
Sequence[ToolOutput[Jsonable]]A sequence containing a `ToolOutput` for every tool call.

Function resume

Generate a new ContextStreamResponse using this response's messages with additional user content.

Uses this response's tools and format type. Also uses this response's provider, model, client, and params, unless the model context manager is being used to provide a new LLM as an override.

Parameters

NameTypeDescription
selfAny-
ctxContext[DepsT]A Context with the required deps type.
contentUserContentThe new user message content to append to the message history.

Returns

TypeDescription
ContextStreamResponse[DepsT] | ContextStreamResponse[DepsT, FormattableT]A new `ContextStreamResponse` instance generated from the extended message history.

Class ContextTool

Protocol defining a tool that can be used by LLMs.

A ContextTool represents a function that can be called by an LLM during a call. It includes metadata like name, description, and parameter schema.

This class is not instantiated directly but created by the @tool() decorator.
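
Example (a minimal sketch; the llm.tool decorator name and the ctx-first tool signature follow the description above, but the exact API is an assumption):

from mirascope import llm

@llm.tool()
def lookup_preference(ctx: llm.Context[dict], key: str) -> str:
    """Read a value from the call's dependencies."""
    return str(ctx.deps.get(key, "unknown"))

# A ContextTool executes only with a matching Context, e.g.:
# response = model.context_call(ctx, messages=[...], tools=[lookup_preference])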

Bases: ToolSchema[ContextToolFn[DepsT, AnyP, JsonableCovariantT]], Generic[DepsT, JsonableCovariantT, AnyP]

Function execute

Execute the context tool using an LLM-provided ToolCall.

Parameters

NameTypeDescription
selfAny-
ctxContext[DepsT]-
tool_callToolCall-

Returns

Class ContextToolkit

A collection of ContextTools, with helpers for getting and executing specific tools.

Bases: BaseToolkit[Tool | ContextTool[DepsT]], Generic[DepsT]

Function execute

Execute a ContextTool using the provided tool call.

Parameters

NameTypeDescription
selfAny-
ctxContext[DepsT]The context containing dependencies that match the tool.
tool_callToolCallThe tool call to execute.

Returns

TypeDescription
ToolOutput[Jsonable]The output from executing the `ContextTool`.

Class Document

Document content for a message.

Documents (like PDFs) can be included for the model to analyze or reference.

Attributes

type (Literal['document'])
source (Base64DocumentSource | TextDocumentSource | URLDocumentSource)

Function from_url

Create a Document from a URL.

Parameters

NameTypeDescription
clsAny-
urlstr-
download= Falsebool-

Returns

TypeDescription
Document-

Function from_file

Create a Document from a file path.

Parameters

NameTypeDescription
clsAny-
file_pathstr-
mime_typeDocumentTextMimeType | DocumentBase64MimeType | None-

Returns

TypeDescription
Document-

Function from_bytes

Create a Document from raw bytes.

Parameters

NameTypeDescription
clsAny-
databytes-
mime_typeDocumentTextMimeType | DocumentBase64MimeType | None-

Returns

TypeDescription
Document-

Class FeatureNotSupportedError

Raised if a Mirascope feature is unsupported by the chosen provider.

If compatibility is model-specific, then model_id should be specified. If the feature is not supported by the provider at all, then it may be None.

Bases:

MirascopeLLMError

Attributes

provider_id (ProviderId)
model_id (ModelId | None)
feature (str)

Class FinishReason

The reason why the LLM finished generating a response.

FinishReason is only set when the response did not have a normal finish (e.g. it ran out of tokens). When a response finishes generating normally, no finish reason is set.

Bases: str, Enum

Attributes

MAX_TOKENS ('max_tokens')
REFUSAL ('refusal')

Class Format

Class representing a structured output format for LLM responses.

A Format contains metadata needed to describe a structured output type to the LLM, including the expected schema. This class is not instantiated directly, but is created by calling llm.format, or is automatically generated by LLM providers when a Formattable is passed to a call method.

Example:

from mirascope import llm

class Book:
    title: str
    author: str

print(llm.format(Book, mode="tool"))

Bases:

Generic[FormattableT]

Attributes

name (str): The name of the response format.
description (str | None): A description of the response format, if available.
schema (dict[str, object]): JSON schema representation of the structured output format.
mode (FormattingMode): The decorator-provided mode of the response format. Determines how the LLM call may be modified in order to extract the expected format.
formatting_instructions (str | None): The formatting instructions that will be added to the LLM system prompt. If the format type has a `formatting_instructions` class method, the output of that call will be used for instructions. Otherwise, instructions may be auto-generated based on the formatting mode.
formattable (type[FormattableT]): The `Formattable` type that this `Format` describes. While the `FormattableT` typevar allows for `None`, a `Format` will never be constructed when the `FormattableT` is `None`, so you may treat this as a `RequiredFormattableT` in practice.

Attribute FormattingMode

Type: Literal['strict', 'json', 'tool']

Available modes for response format generation.

  • "strict": Use strict mode for structured outputs, asking the LLM to strictly adhere to a given JSON schema. Not all providers or models support it, and may not be compatible with tool calling. When making a call using this mode, an llm.FormattingModeNotSupportedError error may be raised (if "strict" mode is wholly unsupported), or an llm.FeatureNotSupportedError may be raised (if trying to use strict along with tools and that is unsupported).

  • "json": Use JSON mode for structured outputs. In contrast to strict mode, we ask the LLM to output JSON as text, though without guarantees that the model will output the expected format schema. If the provider has explicit JSON mode, it will be used; otherwise, Mirascope will modify the system prompt to request JSON output. May raise an llm.FeatureNotSupportedError if tools are present and the model does not support tool calling when using JSON mode.

  • "tool": Use forced tool calling to structure outputs. Mirascope will construct an ad-hoc tool with the required json schema as tool args. When the LLM chooses that tool, it will automatically be converted from a ToolCall into regular response content (abstracting over the tool call). If other tools are present, they will be handled as regular tool calls.

Note: When llm.format is not used, the provider will automatically choose a mode at call time.
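
Example (a sketch reusing the Book class from the Format example above; passing format= to Model.call is documented, while how the parsed output is accessed is not shown in this reference):

from mirascope import llm

class Book:
    title: str
    author: str

model = llm.use_model("openai/gpt-5-mini")
response = model.call(
    messages=[llm.messages.user("Recommend a fantasy book.")],
    format=llm.format(Book, mode="json"),  # or "strict" / "tool"
)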

Class FormattingModeNotSupportedError

Raised when trying to use a formatting mode that is not supported by the chosen model.

Bases:

FeatureNotSupportedError

Attributes

formatting_mode (FormattingMode)

Class Image

Image content for a message.

Images can be included in messages to provide visual context. This can be used for both input (e.g., user uploading an image) and output (e.g., model generating an image).
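
Example (an illustrative sketch; passing a sequence of content parts to llm.messages.user is an assumption based on the Message description below):

from mirascope import llm

image = llm.Image.from_url("https://example.com/cat.png")  # URL reference, not downloaded
message = llm.messages.user(["What animal is in this picture?", image])

model = llm.use_model("openai/gpt-5-mini")
response = model.call(messages=[message])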

Attributes

type (Literal['image'])
source (Base64ImageSource | URLImageSource)

Function from_url

Create an Image reference from a URL, without downloading it.

Parameters

NameTypeDescription
clsAny-
urlstrThe URL of the image

Returns

TypeDescription
ImageAn `Image` with a `URLImageSource`

Function download

Download and encode an image from a URL.

Parameters

NameTypeDescription
clsAny-
urlstrThe URL of the image to download
max_size= MAX_IMAGE_SIZEintMaximum allowed image size in bytes (default: 20MB)

Returns

TypeDescription
ImageAn `Image` with a `Base64ImageSource`

Function download_async

Asynchronously download and encode an image from a URL.

Parameters

NameTypeDescription
clsAny-
urlstrThe URL of the image to download
max_size= MAX_IMAGE_SIZEintMaximum allowed image size in bytes (default: 20MB)

Returns

TypeDescription
ImageAn `Image` with a `Base64ImageSource`

Function from_file

Create an Image from a file path.

Parameters

NameTypeDescription
clsAny-
file_pathstrPath to the image file
max_size= MAX_IMAGE_SIZEintMaximum allowed image size in bytes (default: 20MB)

Returns

TypeDescription
Image-

Function from_bytes

Create an Image from raw bytes.

Parameters

NameTypeDescription
clsAny-
databytesRaw image bytes
max_size= MAX_IMAGE_SIZEintMaximum allowed image size in bytes (default: 20MB)

Returns

TypeDescription
Image-

Attribute Message

Type: TypeAlias

A message in an LLM interaction.

Messages have a role (system, user, or assistant) and content that is a sequence of content parts. The content can include text, images, audio, documents, and tool interactions.

For most use cases, prefer the convenience functions system(), user(), and assistant() instead of directly creating Message objects.

Example:

from mirascope import llm

messages = [
    llm.messages.system("You are a helpful assistant."),
    llm.messages.user("Hello, how are you?"),
]

Class MirascopeLLMError

Base exception for all Mirascope LLM errors.

Bases:

Exception

Attributes

original_exception (Exception | None)

Class Model

The unified LLM interface that delegates to provider-specific clients.

This class provides a consistent interface for interacting with language models from various providers. It handles the common operations like generating responses, streaming, and async variants by delegating to the appropriate client methods.

Usage Note: In most cases, you should use llm.use_model() instead of instantiating Model directly. This preserves the ability to override the model at runtime using the llm.model() context manager. Only instantiate Model directly if you want to hardcode a specific model and prevent it from being overridden by context.

Example (recommended - allows override):

from mirascope import llm

def recommend_book(genre: str) -> llm.Response:
    # Uses context model if available, otherwise creates default
    model = llm.use_model("openai/gpt-5-mini")
    message = llm.messages.user(f"Please recommend a book in {genre}.")
    return model.call(messages=[message])

# Uses default model
response = recommend_book("fantasy")

# Override with different model
with llm.model(provider="anthropic", model_id="anthropic/claude-sonnet-4-5"):
    response = recommend_book("fantasy")  # Uses Claude

Example (direct instantiation - prevents override):

from mirascope import llm

def recommend_book(genre: str) -> llm.Response:
    # Hardcoded model, cannot be overridden by context
    model = llm.Model("openai/gpt-5-mini")
    message = llm.messages.user(f"Please recommend a book in {genre}.")
    return model.call(messages=[message])

Attributes

model_id (ModelId): The model being used (e.g. `"openai/gpt-4o-mini"`).
params (Params): The default parameters for the model (temperature, max_tokens, etc.).
provider (Provider): The provider being used (e.g. an `OpenAIProvider`). This property dynamically looks up the provider from the registry based on the current model_id. This allows provider overrides via `llm.register_provider()` to take effect even after the model instance is created.
provider_id (ProviderId): The string id of the provider being used (e.g. `"openai"`). This property returns the `id` field of the dynamically resolved provider.

Function call

Generate an llm.Response by synchronously calling this model's LLM provider.

Parameters

NameTypeDescription
selfAny-
messagesSequence[Message]Messages to send to the LLM.
tools= NoneSequence[Tool] | Toolkit | NoneOptional tools that the model may invoke.
format= Nonetype[FormattableT] | Format[FormattableT] | NoneOptional response format specifier.

Returns

TypeDescription
Response | Response[FormattableT]An `llm.Response` object containing the LLM-generated content.

Function call_async

Generate an llm.AsyncResponse by asynchronously calling this model's LLM provider.

Parameters

NameTypeDescription
selfAny-
messagesSequence[Message]Messages to send to the LLM.
tools= NoneSequence[AsyncTool] | AsyncToolkit | NoneOptional tools that the model may invoke.
format= Nonetype[FormattableT] | Format[FormattableT] | NoneOptional response format specifier.

Returns

TypeDescription
AsyncResponse | AsyncResponse[FormattableT]An `llm.AsyncResponse` object containing the LLM-generated content.

Function stream

Generate an llm.StreamResponse by synchronously streaming from this model's LLM provider.

Parameters

NameTypeDescription
selfAny-
messagesSequence[Message]Messages to send to the LLM.
tools= NoneSequence[Tool] | Toolkit | NoneOptional tools that the model may invoke.
format= Nonetype[FormattableT] | Format[FormattableT] | NoneOptional response format specifier.

Returns

TypeDescription
StreamResponse | StreamResponse[FormattableT]An `llm.StreamResponse` object for iterating over the LLM-generated content.

Function stream_async

Generate an llm.AsyncStreamResponse by asynchronously streaming from this model's LLM provider.

Parameters

NameTypeDescription
selfAny-
messagesSequence[Message]Messages to send to the LLM.
tools= NoneSequence[AsyncTool] | AsyncToolkit | NoneOptional tools that the model may invoke.
format= Nonetype[FormattableT] | Format[FormattableT] | NoneOptional response format specifier.

Returns

TypeDescription
AsyncStreamResponse | AsyncStreamResponse[FormattableT]An `llm.AsyncStreamResponse` object for asynchronously iterating over the LLM-generated content.

Function context_call

Generate an llm.ContextResponse by synchronously calling this model's LLM provider.

Parameters

NameTypeDescription
selfAny-
ctxContext[DepsT]Context object with dependencies for tools.
messagesSequence[Message]Messages to send to the LLM.
tools= NoneSequence[Tool | ContextTool[DepsT]] | ContextToolkit[DepsT] | NoneOptional tools that the model may invoke.
format= Nonetype[FormattableT] | Format[FormattableT] | NoneOptional response format specifier.

Returns

TypeDescription
ContextResponse[DepsT, None] | ContextResponse[DepsT, FormattableT]An `llm.ContextResponse` object containing the LLM-generated content.

Function context_call_async

Generate an llm.AsyncContextResponse by asynchronously calling this model's LLM provider.

Parameters

NameTypeDescription
selfAny-
ctxContext[DepsT]Context object with dependencies for tools.
messagesSequence[Message]Messages to send to the LLM.
tools= NoneSequence[AsyncTool | AsyncContextTool[DepsT]] | AsyncContextToolkit[DepsT] | NoneOptional tools that the model may invoke.
format= Nonetype[FormattableT] | Format[FormattableT] | NoneOptional response format specifier.

Returns

TypeDescription
AsyncContextResponse[DepsT, None] | AsyncContextResponse[DepsT, FormattableT]An `llm.AsyncContextResponse` object containing the LLM-generated content.

Function context_stream

Generate an llm.ContextStreamResponse by synchronously streaming from this model's LLM provider.

Parameters

NameTypeDescription
selfAny-
ctxContext[DepsT]Context object with dependencies for tools.
messagesSequence[Message]Messages to send to the LLM.
tools= NoneSequence[Tool | ContextTool[DepsT]] | ContextToolkit[DepsT] | NoneOptional tools that the model may invoke.
format= Nonetype[FormattableT] | Format[FormattableT] | NoneOptional response format specifier.

Returns

TypeDescription
ContextStreamResponse[DepsT, None] | ContextStreamResponse[DepsT, FormattableT]An `llm.ContextStreamResponse` object for iterating over the LLM-generated content.

Function context_stream_async

Generate an llm.AsyncContextStreamResponse by asynchronously streaming from this model's LLM provider.

Parameters

NameTypeDescription
selfAny-
ctxContext[DepsT]Context object with dependencies for tools.
messagesSequence[Message]Messages to send to the LLM.
tools= NoneSequence[AsyncTool | AsyncContextTool[DepsT]] | AsyncContextToolkit[DepsT] | NoneOptional tools that the model may invoke.
format= Nonetype[FormattableT] | Format[FormattableT] | NoneOptional response format specifier.

Returns

TypeDescription
AsyncContextStreamResponse[DepsT, None] | AsyncContextStreamResponse[DepsT, FormattableT]An `llm.AsyncContextStreamResponse` object for asynchronously iterating over the LLM-generated content.

Function resume

Generate a new llm.Response by extending another response's messages with additional user content.

Uses the previous response's tools and output format, and this model's params.

Depending on the client, this may be a wrapper around using client call methods with the response's messages and the new content, or it may use a provider-specific API for resuming an existing interaction.

Parameters

NameTypeDescription
selfAny-
responseResponse | Response[FormattableT]Previous response to extend.
contentUserContentAdditional user content to append.

Returns

TypeDescription
Response | Response[FormattableT]A new `llm.Response` object containing the extended conversation.

Function resume_async

Generate a new llm.AsyncResponse by extending another response's messages with additional user content.

Uses the previous response's tools and output format, and this model's params.

Depending on the client, this may be a wrapper around using client call methods with the response's messages and the new content, or it may use a provider-specific API for resuming an existing interaction.

Parameters

NameTypeDescription
selfAny-
responseAsyncResponse | AsyncResponse[FormattableT]Previous async response to extend.
contentUserContentAdditional user content to append.

Returns

TypeDescription
AsyncResponse | AsyncResponse[FormattableT]A new `llm.AsyncResponse` object containing the extended conversation.

Function context_resume

Generate a new llm.ContextResponse by extending another response's messages with additional user content.

Uses the previous response's tools and output format, and this model's params.

Depending on the client, this may be a wrapper around using client call methods with the response's messages and the new content, or it may use a provider-specific API for resuming an existing interaction.

Parameters

NameTypeDescription
selfAny-
ctxContext[DepsT]Context object with dependencies for tools.
responseContextResponse[DepsT, None] | ContextResponse[DepsT, FormattableT]Previous context response to extend.
contentUserContentAdditional user content to append.

Returns

TypeDescription
ContextResponse[DepsT, None] | ContextResponse[DepsT, FormattableT]A new `llm.ContextResponse` object containing the extended conversation.

Function context_resume_async

Generate a new llm.AsyncContextResponse by extending another response's messages with additional user content.

Uses the previous response's tools and output format, and this model's params.

Depending on the client, this may be a wrapper around using client call methods with the response's messages and the new content, or it may use a provider-specific API for resuming an existing interaction.

Parameters

NameTypeDescription
selfAny-
ctxContext[DepsT]Context object with dependencies for tools.
responseAsyncContextResponse[DepsT, None] | AsyncContextResponse[DepsT, FormattableT]Previous async context response to extend.
contentUserContentAdditional user content to append.

Returns

TypeDescription
AsyncContextResponse[DepsT, None] | AsyncContextResponse[DepsT, FormattableT]A new `llm.AsyncContextResponse` object containing the extended conversation.

Function resume_stream

Generate a new llm.StreamResponse by extending another response's messages with additional user content.

Uses the previous response's tools and output format, and this model's params.

Depending on the client, this may be a wrapper around using client call methods with the response's messages and the new content, or it may use a provider-specific API for resuming an existing interaction.

Parameters

NameTypeDescription
selfAny-
responseStreamResponse | StreamResponse[FormattableT]Previous stream response to extend.
contentUserContentAdditional user content to append.

Returns

TypeDescription
StreamResponse | StreamResponse[FormattableT]A new `llm.StreamResponse` object for streaming the extended conversation.

Function resume_stream_async

Generate a new llm.AsyncStreamResponse by extending another response's messages with additional user content.

Uses the previous response's tools and output format, and this model's params.

Depending on the client, this may be a wrapper around using client call methods with the response's messages and the new content, or it may use a provider-specific API for resuming an existing interaction.

Parameters

NameTypeDescription
selfAny-
responseAsyncStreamResponse | AsyncStreamResponse[FormattableT]Previous async stream response to extend.
contentUserContentAdditional user content to append.

Returns

TypeDescription
AsyncStreamResponse | AsyncStreamResponse[FormattableT]A new `llm.AsyncStreamResponse` object for asynchronously streaming the extended conversation.

Function context_resume_stream

Generate a new llm.ContextStreamResponse by extending another response's messages with additional user content.

Uses the previous response's tools and output format, and this model's params.

Depending on the client, this may be a wrapper around using client call methods with the response's messages and the new content, or it may use a provider-specific API for resuming an existing interaction.

Parameters

NameTypeDescription
selfAny-
ctxContext[DepsT]Context object with dependencies for tools.
responseContextStreamResponse[DepsT, None] | ContextStreamResponse[DepsT, FormattableT]Previous context stream response to extend.
contentUserContentAdditional user content to append.

Returns

TypeDescription
ContextStreamResponse[DepsT, None] | ContextStreamResponse[DepsT, FormattableT]A new `llm.ContextStreamResponse` object for streaming the extended conversation.

Function context_resume_stream_async

Generate a new llm.AsyncContextStreamResponse by extending another response's messages with additional user content.

Uses the previous response's tools and output format, and this model's params.

Depending on the client, this may be a wrapper around using client call methods with the response's messages and the new content, or it may use a provider-specific API for resuming an existing interaction.

Parameters

NameTypeDescription
selfAny-
ctxContext[DepsT]Context object with dependencies for tools.
responseAsyncContextStreamResponse[DepsT, None] | AsyncContextStreamResponse[DepsT, FormattableT]Previous async context stream response to extend.
contentUserContentAdditional user content to append.

Returns

TypeDescription
AsyncContextStreamResponse[DepsT, None] | AsyncContextStreamResponse[DepsT, FormattableT]A new `llm.AsyncContextStreamResponse` object for asynchronously streaming the extended conversation.

Attribute ModelId

Type: TypeAlias

Class NoRegisteredProviderError

Raised when no provider is registered for a given model_id.

Bases:

MirascopeLLMError

Attributes

model_id (str)

Class NotFoundError

Raised when requested resource is not found (404).

Bases:

APIError

Class Params

Common parameters shared across LLM providers.

Note: Each provider may handle these parameters differently or not support them at all. Please check provider-specific documentation for parameter support and behavior.
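
Example (a minimal sketch; Params is a TypedDict, so it can be written as a plain dict, while passing it to Model via a params argument is an assumption based on the Model.params attribute):

from mirascope import llm

params: llm.Params = {
    "temperature": 0.3,
    "max_tokens": 512,
    "thinking": False,  # disable thinking where the model allows it
}

model = llm.Model("openai/gpt-5-mini", params=params)  # params kwarg assumed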

Bases:

TypedDict

Attributes

temperature (float): Controls randomness in the output (0.0 to 1.0). Lower temperatures are good for prompts that require a less open-ended or creative response, while higher temperatures can lead to more diverse or creative results.
max_tokens (int): Maximum number of tokens to generate.
top_p (float): Nucleus sampling parameter (0.0 to 1.0). Tokens are selected from the most to least probable until the sum of their probabilities equals this value. Use a lower value for less random responses and a higher value for more random responses.
top_k (int): Limits token selection to the k most probable tokens (typically 1 to 100). For each token selection step, the ``top_k`` tokens with the highest probabilities are sampled. Then tokens are further filtered based on ``top_p`` with the final token selected using temperature sampling. Use a lower number for less random responses and a higher number for more random responses.
seed (int): Random seed for reproducibility. When ``seed`` is fixed to a specific number, the model makes a best effort to provide the same response for repeated requests. Not supported by all providers, and does not guarantee strict reproducibility.
stop_sequences (list[str]): Stop sequences to end generation. The model will stop generating text if one of these strings is encountered in the response.
thinking (bool): Configures whether the model should use thinking. Thinking is a process where the model spends additional tokens thinking about the prompt before generating a response. You may configure thinking by passing a bool to enable or disable it. If `params.thinking` is `True`, then thinking and thought summaries will be enabled (if supported by the model/provider), with a default budget for thinking tokens. If `params.thinking` is `False`, then thinking will be wholly disabled, assuming the model allows this (some models, e.g. `google:gemini-2.5-pro`, do not allow disabling thinking). If `params.thinking` is unset (or `None`), then we will use provider-specific default behavior for the chosen model.
encode_thoughts_as_text (bool): Configures whether `Thought` content should be re-encoded as text for model consumption. If `True`, then when an `AssistantMessage` contains `Thoughts` and is being passed back to an LLM, those `Thoughts` will be encoded as `Text`, so that the assistant can read those thoughts. That ensures the assistant has access to (at least the summarized output of) its reasoning process, and contrasts with provider default behaviors which may ignore prior thoughts, particularly if tool calls are not involved. When `True`, we will always re-encode Mirascope messages being passed to the provider, rather than reusing raw provider response content. This may disable provider-specific behavior like cached reasoning tokens. If `False`, then `Thoughts` will not be encoded as text, and whether reasoning context is available to the model depends entirely on the provider's behavior. Defaults to `False` if unset.

Class Partial

Generate a new class with all attributes made optional.

Bases:

Generic[FormattableT]

Class PermissionError

Raised for permission/authorization failures (403).

Bases:

APIError

Class Prompt

A prompt that can be called with a model to generate a response.

Created by decorating a MessageTemplate with llm.prompt. The decorated function becomes callable with a Model to generate LLM responses.

A Prompt is essentially: MessageTemplate + tools + format. It can be invoked with a model: prompt(model, *args, **kwargs).
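
Example (hedged sketch):

A minimal sketch of the Prompt flow, assuming the @llm.prompt decorator and llm.model helper documented later in this reference; the model id is illustrative.

from mirascope import llm

@llm.prompt()
def recommend_book(genre: str):
    return f"Please recommend a book in {genre}."

# A Prompt is invoked with a Model: prompt(model, *args, **kwargs).
model = llm.model("openai/gpt-4o")  # illustrative model id
response = recommend_book(model, "fantasy")
print(response.pretty())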

Bases:

Generic[P, FormattableT]

Attributes

NameTypeDescription
fnMessageTemplate[P]The underlying prompt function that generates message content.
toolkitToolkitThe toolkit containing this prompt's tools.
formattype[FormattableT] | Format[FormattableT] | NoneThe response format for the generated response.

Function call

Generates a response using the provided model.

Parameters

NameTypeDescription
selfAny-
modelModel-
args= ()P.args-
kwargs= {}P.kwargs-

Returns

TypeDescription
Response | Response[FormattableT]-

Function stream

Generates a streaming response using the provided model.

Parameters

NameTypeDescription
selfAny-
modelModel-
args= ()P.args-
kwargs= {}P.kwargs-

Returns

Class PromptDecorator

Decorator for converting a MessageTemplate into a Prompt.

Takes a raw prompt function that returns message content and wraps it with tools and format support, creating a Prompt that can be called with a model.

The decorator automatically detects whether the function is async or context-aware and creates the appropriate Prompt variant (Prompt, AsyncPrompt, ContextPrompt, or AsyncContextPrompt).

Bases:

Generic[ToolT, FormattableT]

Attributes

NameTypeDescription
toolsSequence[ToolT] | NoneThe tools that are included in the prompt, if any.
formattype[FormattableT] | Format[FormattableT] | NoneThe structured output format of the prompt, if any.

Attribute Provider

Type: TypeAlias

Type alias for BaseProvider with any client type.

Attribute ProviderId

Type: KnownProviderId | str

Class RateLimitError

Raised when rate limits are exceeded (429).

Bases:

APIError

Class RawMessageChunk

A chunk containing provider-specific raw message content that will be added to the AssistantMessage.

This chunk contains a provider-specific representation of a piece of content that will be added to the AssistantMessage reconstructed by the containing stream. This content should be a Jsonable Python object for serialization purposes.

The intention is that this content may be passed as-is back to the provider when the generated AssistantMessage is being reused in conversation.

Attributes

NameTypeDescription
typeLiteral['raw_message_chunk']-
raw_messageJsonableThe provider-specific raw content. Should be a Jsonable object.

Class Response

The response generated by an LLM.

Bases:

BaseResponse[Toolkit, FormattableT]

Function execute_tools

Execute and return all of the tool calls in the response.

Parameters

NameTypeDescription
selfAny-

Returns

TypeDescription
Sequence[ToolOutput[Jsonable]]A sequence containing a `ToolOutput` for every tool call in the order they appeared.

Function resume

Generate a new Response using this response's messages with additional user content.

Uses this response's tools and format type. Also uses this response's provider, model, client, and params, unless the model context manager is being used to provide a new LLM as an override.

Parameters

NameTypeDescription
selfAny-
contentUserContentThe new user message content to append to the message history.

Returns

TypeDescription
Response | Response[FormattableT]A new `Response` instance generated from the extended message history.
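
Example (hedged sketch):

A sketch of the execute_tools/resume flow. It assumes Response exposes a .tool_calls attribute (as StreamResponse does) and that a sequence of ToolOutput parts qualifies as UserContent for resume(), since ToolOutput is documented as user-message content; the call definition and model id are illustrative.

from mirascope import llm

@llm.call("openai/gpt-4o")  # illustrative model id
def answer_question(question: str):
    return question

response = answer_question("What books are available?")

# Assumption: tool outputs can be passed back as the new user content.
if response.tool_calls:
    outputs = response.execute_tools()
    response = response.resume(outputs)

print(response.pretty())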

Class ServerError

Raised for server-side errors (500+).

Bases:

APIError

Attribute Stream

Type: TypeAlias

A synchronous assistant content stream.

Class StreamResponse

A StreamResponse wraps response content from the LLM with a streaming interface.

This class supports iteration to process chunks as they arrive from the model.

Content can be streamed in one of four ways:

  • Via .streams(), which provides an iterator of streams, where each stream contains chunks of streamed data. The chunks contain deltas (new content in that particular chunk), and the stream itself accumulates the collected state of all the chunks processed thus far.
  • Via .chunk_stream(), which allows iterating over Mirascope's provider-agnostic chunk representation.
  • Via .pretty_stream(), a helper method which provides all response content as str deltas. Iterating through pretty_stream will yield text content and optionally placeholder representations for other content types, but it will still consume the full stream.
  • Via .structured_stream(), a helper method which provides partial structured outputs from a response (useful when FormatT is set). Iterating through structured_stream will only yield structured partials, but it will still consume the full stream.

As chunks are consumed, they are collected in-memory on the StreamResponse, and they become available in .content, .messages, .tool_calls, etc. All of the stream iterators can be restarted after the stream has been consumed, in which case they will yield chunks from memory in the original sequence that came from the LLM. If the stream is only partially consumed, a fresh iterator will first iterate through in-memory content, and then will continue consuming fresh chunks from the LLM.

Text chunks are a special case: they are included in the response content as soon as they become available, via an llm.Text part that updates as more deltas come in. This means that resuming a partially-streamed response includes as much text as the model has generated.

Other chunks, such as Thought or ToolCall chunks, are only added to the response content once the corresponding part has fully streamed. This avoids issues like adding incomplete tool calls, or thinking blocks missing signatures, to the response.

For each iterator, fully iterating through the iterator will consume the whole LLM stream. You can pause stream execution midway by breaking out of the iterator, and you can safely resume execution from the same iterator if desired.

Bases:

BaseSyncStreamResponse[Toolkit, FormattableT]

Function execute_tools

Execute and return all of the tool calls in the response.

Parameters

NameTypeDescription
selfAny-

Returns

TypeDescription
Sequence[ToolOutput[Jsonable]]A sequence containing a `ToolOutput` for every tool call in the order they appeared.

Function resume

Generate a new StreamResponse using this response's messages with additional user content.

Uses this response's tools and format type. Also uses this response's provider, model, client, and params, unless the model context manager is being used to provide a new LLM as an override.

Parameters

NameTypeDescription
selfAny-
contentUserContentThe new user message content to append to the message history.

Returns

TypeDescription
StreamResponse | StreamResponse[FormattableT]A new `StreamResponse` instance generated from the extended message history.
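
Example (hedged sketch):

A sketch of consuming a StreamResponse via pretty_stream(). It assumes Call exposes a synchronous .stream(...) mirroring AsyncCall's stream function; the call definition and model id are illustrative.

from mirascope import llm

@llm.call("openai/gpt-4o")  # illustrative model id
def recommend_book(genre: str):
    return f"Please recommend a book in {genre}."

stream = recommend_book.stream("fantasy")

# pretty_stream() yields str deltas; fully iterating it consumes the whole LLM stream.
for delta in stream.pretty_stream():
    print(delta, end="", flush=True)

# After consumption, the accumulated content is available in memory.
print(stream.content)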

Attribute StreamResponseChunk

Type: TypeAlias

Attribute SystemContent

Type: TypeAlias

Type alias for content that can fit into a SystemMessage.

Class SystemMessage

A system message that sets context and instructions for the conversation.

Attributes

NameTypeDescription
roleLiteral['system']The role of this message. Always "system".
contentTextThe content of this `SystemMessage`.

Class Text

Text content for a message.

Attributes

NameTypeDescription
typeLiteral['text']-
textstrThe text content.

Class TextChunk

Represents an incremental text chunk in a stream.

Attributes

NameTypeDescription
typeLiteral['text_chunk']-
content_typeLiteral['text']The type of content reconstructed by this chunk.
deltastrThe incremental text added in this chunk.

Class TextEndChunk

Represents the end of a text chunk stream.

Attributes

NameTypeDescription
typeLiteral['text_end_chunk']-
content_typeLiteral['text']The type of content reconstructed by this chunk.

Class TextStartChunk

Represents the start of a text chunk stream.

Attributes

NameTypeDescription
typeLiteral['text_start_chunk']-
content_typeLiteral['text']The type of content reconstructed by this chunk.

Class TextStream

Synchronous text stream implementation.

Bases:

BaseStream[Text, str]

Attributes

NameTypeDescription
typeLiteral['text_stream']-
content_typeLiteral['text']The type of content stored in this stream.
partial_textstrThe accumulated text content as chunks are received.

Function collect

Collect all chunks and return the final Text content.

Parameters

NameTypeDescription
selfAny-

Returns

TypeDescription
TextThe complete text content after consuming all chunks.

Class Thought

Thinking content for a message.

Represents the thinking or thought process of the assistant. These are generally summaries of the model's reasoning process, rather than the direct reasoning tokens, although this behavior is model- and provider-specific.

Attributes

NameTypeDescription
typeLiteral['thought']-
thoughtstrThe thoughts or reasoning of the assistant.

Class ThoughtChunk

Represents an incremental thought chunk in a stream.

Attributes

NameTypeDescription
typeLiteral['thought_chunk']-
content_typeLiteral['thought']The type of content reconstructed by this chunk.
deltastrThe incremental thoughts added in this chunk.

Class ThoughtEndChunk

Represents the end of a thought chunk stream.

Attributes

NameTypeDescription
typeLiteral['thought_end_chunk']-
content_typeLiteral['thought']The type of content reconstructed by this chunk.

Class ThoughtStartChunk

Represents the start of a thought chunk stream.

Attributes

NameTypeDescription
typeLiteral['thought_start_chunk']-
content_typeLiteral['thought']The type of content reconstructed by this chunk.

Class ThoughtStream

Synchronous thought stream implementation.

Bases:

BaseStream[Thought, str]

Attributes

NameTypeDescription
typeLiteral['thought_stream']-
content_typeLiteral['thought']The type of content stored in this stream.
partial_thoughtstrThe accumulated thought content as chunks are received.

Function collect

Collect all chunks and return the final Thought content.

Parameters

NameTypeDescription
selfAny-

Returns

TypeDescription
ThoughtThe complete thought content after consuming all chunks.

Class TimeoutError

Raised when a request times out or a deadline is exceeded.

Bases:

MirascopeLLMError

Class Tool

A tool that can be used by LLMs.

A Tool represents a function that can be called by an LLM during a call. It includes metadata like name, description, and parameter schema.

This class is not instantiated directly but created by the @tool() decorator.

Bases: ToolSchema[ToolFn[AnyP, JsonableCovariantT]], Generic[AnyP, JsonableCovariantT]

Function execute

Execute the tool using an LLM-provided ToolCall.

Parameters

NameTypeDescription
selfAny-
tool_callToolCall-

Returns

Class ToolCall

Tool call content for a message.

Represents a request from the assistant to call a tool. This is part of an assistant message's content.

Attributes

NameTypeDescription
typeLiteral['tool_call']-
idstrA unique identifier for this tool call.
namestrThe name of the tool to call.
argsstrThe arguments to pass to the tool, stored as stringified JSON.

Class ToolCallChunk

Represents an incremental tool call chunk in a stream.

Attributes

NameTypeDescription
typeLiteral['tool_call_chunk']-
content_typeLiteral['tool_call']The type of content reconstructed by this chunk.
deltastrThe incremental JSON args added in this chunk.

Class ToolCallEndChunk

Represents the end of a tool call chunk stream.

Attributes

NameTypeDescription
typeLiteral['tool_call_end_chunk']-
content_typeLiteral['tool_call']The type of content reconstructed by this chunk.

Class ToolCallStartChunk

Represents the start of a tool call chunk stream.

Attributes

NameTypeDescription
typeLiteral['tool_call_start_chunk']-
content_typeLiteral['tool_call']The type of content reconstructed by this chunk.
idstrA unique identifier for this tool call.
namestrThe name of the tool to call.

Class ToolCallStream

Synchronous tool call stream implementation.

Bases:

BaseStream[ToolCall, str]

Attributes

NameTypeDescription
typeLiteral['tool_call_stream']-
content_typeLiteral['tool_call']The type of content stored in this stream.
tool_idstrA unique identifier for this tool call.
tool_namestrThe name of the tool being called.
partial_argsstrThe accumulated tool arguments as chunks are received.

Function collect

Collect all chunks and return the final ToolCall content.

Parameters

NameTypeDescription
selfAny-

Returns

TypeDescription
ToolCallThe complete tool call after consuming all chunks.
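
Example (hedged sketch):

A sketch dispatching on the per-content streams yielded by StreamResponse.streams() and using each stream's collect() method. Which concrete stream types appear depends on the model and prompt; the call definition and model id are illustrative.

from mirascope import llm

@llm.call("openai/gpt-4o")  # illustrative model id
def recommend_book(genre: str):
    return f"Please recommend a book in {genre}."

stream = recommend_book.stream("fantasy")

for part_stream in stream.streams():
    if part_stream.type == "text_stream":
        text = part_stream.collect()       # llm.Text
        print(text.text)
    elif part_stream.type == "thought_stream":
        thought = part_stream.collect()    # llm.Thought
        print(f"[thinking] {thought.thought}")
    elif part_stream.type == "tool_call_stream":
        tool_call = part_stream.collect()  # llm.ToolCall
        print(f"[tool call] {tool_call.name}({tool_call.args})")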

Class ToolNotFoundError

Raised if a tool_call cannot be converted to any corresponding tool.

Bases:

MirascopeLLMError

Class ToolOutput

Tool output content for a message.

Represents the output from a tool call. This is part of a user message's content, typically following a tool call from the assistant.

Bases:

Generic[JsonableT]

Attributes

NameTypeDescription
typeLiteral['tool_output']-
idstrThe ID of the tool call that this output is for.
namestrThe name of the tool that created this output.
valueJsonableTThe output value from the tool call.

Class Toolkit

A collection of Tools, with helpers for getting and executing specific tools.

Bases:

BaseToolkit[Tool]

Function execute

Execute a Tool using the provided tool call.

Parameters

NameTypeDescription
selfAny-
tool_callToolCallThe tool call to execute.

Returns

TypeDescription
ToolOutput[Jsonable]The output from executing the `Tool`.

Class URLImageSource

Image data referenced via external URL.

Attributes

NameTypeDescription
typeLiteral['url_image_source']-
urlstrThe url of the image (e.g. https://example.com/sazed.png).

Attribute UserContent

Type: TypeAlias

Type alias for content that can fit into a UserMessage.

Attribute UserContentPart

Type: TypeAlias

Content parts that can be included in a UserMessage.

Class UserMessage

A user message containing input from the user.

Attributes

NameTypeDescription
roleLiteral['user']The role of this message. Always "user".
contentSequence[UserContentPart]The content of the user message.
namestr | NoneA name identifying the creator of this message.

Function call

Decorates a MessageTemplate to create a Call that can be invoked directly.

The llm.call decorator is the most convenient way to use Mirascope. It transforms a raw prompt function (that returns message content) into a Call object that bundles the function with tools, format, and a model. The resulting Call can be invoked directly to generate LLM responses without needing to pass a model argument.

The decorator automatically detects the function type:

  • If the first parameter is named 'ctx' with type llm.Context[T] (or a subclass thereof), creates a ContextCall
  • If the function is async, creates an AsyncCall or AsyncContextCall
  • Otherwise, creates a regular Call

The model specified in the decorator can be overridden at runtime using the llm.model() context manager. When overridden, the context model completely replaces the decorated model, including all parameters.

Conceptual flow:

  • MessageTemplate: raw function returning content
  • @llm.prompt: MessageTemplate → Prompt. Includes tools and format, if applicable. Can be called by providing a Model.
  • @llm.call: MessageTemplate → Call. Includes a model, tools, and format. The model may be created on the fly from a model identifier and optional params, or provided outright.

Example:

Regular call:

from mirascope import llm

@llm.call("openai/gpt-4")
def recommend_book(genre: str):
    return f"Please recommend a book in {genre}."

response: llm.Response = recommend_book("fantasy")
print(response.pretty())

Example:

Context call:

from dataclasses import dataclass
from mirascope import llm

@dataclass
class User:
    name: str
    age: int

@llm.call("openai/gpt-4")
def recommend_book(ctx: llm.Context[User], genre: str):
    return f"Recommend a {genre} book for {ctx.deps.name}, age {ctx.deps.age}."

ctx = llm.Context(deps=User(name="Alice", age=15))
response = recommend_book(ctx, "fantasy")
print(response.pretty())

Parameters

NameTypeDescription
modelModelId | ModelA model ID string (e.g., "openai/gpt-4") or a `Model` instance
tools= NoneSequence[ToolT] | NoneOptional `Sequence` of tools to make available to the LLM
format= Nonetype[FormattableT] | Format[FormattableT] | NoneOptional response format class (`BaseModel`) or Format instance
params= {}Unpack[Params]-

Returns

TypeDescription
CallDecorator[ToolT, FormattableT]A `CallDecorator` that converts prompt functions into `Call` variants

Module calls

The llm.calls module.

Module content

The llm.messages.content module.

Module exceptions

Mirascope llm exception hierarchy for unified error handling across providers.

Function format

Returns a Format that describes structured output for a Formattable type.

This function converts a Formattable type (e.g. Pydantic BaseModel) into a Format object that describes how the object should be formatted. Calling llm.format is optional, as all the APIs that expect a Format can also take the Formattable type directly. However, calling llm.format is necessary in order to specify the formatting mode that will be used.

The Formattable type may provide custom formatting instructions via a formatting_instructions(cls) classmethod. If that method is present, it will be called, and the resulting instructions will automatically be appended to the system prompt.

If no formatting instructions are present, then Mirascope may auto-generate instructions based on the active format mode. To disable this behavior and all prompt modification, you can add the formatting_instructions classmethod and have it return None.

Parameters

NameTypeDescription
formattabletype[FormattableT] | None-
modeFormattingModeThe format mode to use, one of the following: - "strict": Use model strict structured outputs, or fail if unavailable. - "tool": Use forced tool calling with a special tool that represents a formatted response. - "json": Use provider json mode if available, or modify prompt to request json if not.

Returns

TypeDescription
Format[FormattableT] | NoneA `Format` object describing the Formattable type.
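
Example (hedged sketch):

A sketch describing a structured output format with llm.format and passing it to llm.call. The Book model, its formatting_instructions, and the model id are illustrative assumptions; how the structured result is read back from the response is not shown here.

from pydantic import BaseModel
from mirascope import llm

class Book(BaseModel):
    """A structured book recommendation."""
    title: str
    author: str

    @classmethod
    def formatting_instructions(cls) -> str | None:
        # Optional: custom instructions appended to the system prompt.
        return "Return the book's title and author."

book_format = llm.format(Book, mode="tool")  # force structured output via tool calling

@llm.call("openai/gpt-4o", format=book_format)  # illustrative model id
def extract_book(text: str):
    return f"Extract the book from: {text}"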

Module formatting

Response formatting interfaces for structuring LLM outputs.

This module provides a way to define structured output formats for LLM responses. The @format decorator can be applied to classes to specify how LLM outputs should be structured and parsed.

Function load_provider

Create a cached provider instance for the specified provider id.

Parameters

NameTypeDescription
provider_idProviderIdThe provider name ("openai", "anthropic", or "google").
api_key= Nonestr | NoneAPI key for authentication. If None, uses provider-specific env var.
base_url= Nonestr | NoneBase URL for the API. If None, uses provider-specific env var.

Returns

TypeDescription
ProviderA cached provider instance for the specified provider with the given parameters.
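
Example (hedged sketch):

A minimal sketch, assuming the function is exposed as llm.load_provider; with api_key left unset, the provider-specific environment variable is used.

from mirascope import llm

# Uses the provider-specific environment variable for the API key when api_key is None.
provider = llm.load_provider("openai")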

Module mcp

MCP compatibility module.

Module messages

The messages module for LLM interactions.

This module defines the message types used in LLM interactions. Messages are represented as a unified Message class with different roles (system, user, assistant) and flexible content arrays that can include text, images, audio, documents, and tool interactions.

Function model

Helper for creating a Model instance (which may be used as a context manager).

This is just an alias for the Model constructor, added for convenience.

This function returns a Model instance that implements the context manager protocol. When used with a with statement, the model will be set in context and used by both llm.use_model() and llm.call() within that context. This allows you to override the default model at runtime without modifying function definitions.

The returned Model instance can also be stored and reused:

m = llm.model("openai/gpt-4o")
# Use directly
response = m.call(messages=[...])
# Or use as context manager
with m:
    response = recommend_book("fantasy")

When a model is set in context, it completely overrides any model ID or parameters specified in llm.use_model() or llm.call(). The context model's parameters take precedence, and any unset parameters use default values.

Parameters

NameTypeDescription
model_idModelIdA model ID string (e.g., "openai/gpt-4").
params= {}Unpack[Params]-

Returns

TypeDescription
ModelA Model instance that can be used as a context manager.

Function model_from_context

Get the LLM currently set via context, if any.

Returns

TypeDescription
Model | None-

Module models

The llm.models module for implementing the Model interface and utilities.

This module provides a unified interface for interacting with different LLM models through the Model class. The llm.model() context manager allows you to override the model at runtime, and llm.use_model() retrieves the model from context or creates a default one.

Function prompt

Decorates a MessageTemplate to create a Prompt callable with a model.

This decorator transforms a raw prompt function (that returns message content) into a Prompt object that can be invoked with a model to generate LLM responses.

The decorator automatically detects the function type:

  • If the first parameter is named 'ctx' with type llm.Context[T], creates a ContextPrompt
  • If the function is async, creates an AsyncPrompt or AsyncContextPrompt
  • Otherwise, creates a regular Prompt

Parameters

NameTypeDescription
__fn= NoneAsyncContextMessageTemplate[P, DepsT] | ContextMessageTemplate[P, DepsT] | AsyncMessageTemplate[P] | MessageTemplate[P] | NoneThe prompt function to decorate (optional, for decorator syntax without parens)
tools= NoneSequence[ToolT] | NoneOptional `Sequence` of tools to make available to the LLM
format= Nonetype[FormattableT] | Format[FormattableT] | NoneOptional response format class (`BaseModel`) or Format instance

Returns

TypeDescription
AsyncContextPrompt[P, DepsT, FormattableT] | ContextPrompt[P, DepsT, FormattableT] | AsyncPrompt[P, FormattableT] | Prompt[P, FormattableT] | PromptDecorator[ToolT, FormattableT]A `Prompt` variant (Prompt, AsyncPrompt, ContextPrompt, or AsyncContextPrompt)

Module prompts

The prompt templates module for LLM interactions.

This module defines the prompt templates used in LLM interactions, which are written as python functions.

Module providers

Interfaces for LLM providers.

Function register_provider

Register a provider with scope(s) in the global registry.

Scopes use prefix matching on model IDs:

  • "anthropic/" matches "anthropic/*"
  • "anthropic/claude-4-5" matches "anthropic/claude-4-5*"
  • "anthropic/claude-4-5-sonnet" matches exactly "anthropic/claude-4-5-sonnet"

When multiple scopes match a model_id, the longest match wins.

Parameters

NameTypeDescription
providerProviderId | ProviderEither a provider ID string or a provider instance.
scope= Nonestr | list[str] | NoneScope string or list of scopes for prefix matching on model IDs. If None, uses the provider's default_scope attribute. Can be a single string or a list of strings.
api_key= Nonestr | NoneAPI key for authentication (only used if provider is a string).
base_url= Nonestr | NoneBase URL for the API (only used if provider is a string).

Returns

TypeDescription
Provider-
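
Example (hedged sketch):

A sketch of registering a provider with explicit scopes, assuming the function is exposed as llm.register_provider; the scope strings follow the prefix-matching rules described above.

from mirascope import llm

# Register by provider id, scoped to all Anthropic model ids ("anthropic/*").
llm.register_provider("anthropic", scope="anthropic/")

# Register a narrower scope; when multiple scopes match a model_id, the longest match wins.
llm.register_provider("anthropic", scope="anthropic/claude-4-5")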

Module responses

The Responses module for LLM responses.

Function tool

Decorator that turns a function into a tool definition.

This decorator creates a Tool or ContextTool that can be used with llm.call. The function's name, docstring, and type hints are used to generate the tool's metadata.

If the first parameter is named 'ctx' or typed as llm.Context[T], it creates a ContextTool. Otherwise, it creates a regular Tool.

Examples:

Regular tool:

from mirascope import llm

@llm.tool
def available_books() -> list[str]:
    """Returns the list of available books."""
    return ["The Name of the Wind"]

Context tool:

from dataclasses import dataclass

from mirascope import llm


@dataclass
class Library:
    books: list[str]


library = Library(books=["Mistborn", "Gödel, Escher, Bach", "Dune"])

@llm.tool
def available_books(ctx: llm.Context[Library]) -> list[str]:
    """Returns the list of available books."""
    return ctx.deps.books

Parameters

NameTypeDescription
__fn= NoneContextToolFn[DepsT, P, JsonableCovariantT] | AsyncContextToolFn[DepsT, P, JsonableCovariantT] | ToolFn[P, JsonableCovariantT] | AsyncToolFn[P, JsonableCovariantT] | None-
strict= FalseboolWhether the tool should use strict mode when supported by the model.

Returns

TypeDescription
ContextTool[DepsT, JsonableCovariantT, P] | AsyncContextTool[DepsT, JsonableCovariantT, P] | Tool[P, JsonableCovariantT] | AsyncTool[P, JsonableCovariantT] | ToolDecoratorA decorator function that converts the function into a Tool or ContextTool.

Module tools

The Tools module for LLMs.

Module types

Types for the LLM module.

Function use_model

Get the model from context if available, otherwise create a new Model.

This function checks if a model has been set in the context (via llm.model() context manager). If a model is found in the context, it returns that model, ignoring any model ID or parameters passed to this function. Otherwise, it creates and returns a new llm.Model instance with the provided arguments.

This allows you to write functions that work with a default model but can be overridden at runtime using the llm.model() context manager.

Example:

import mirascope.llm as llm

def recommend_book(genre: str) -> llm.Response:
    model = llm.use_model("openai/gpt-5-mini")
    message = llm.messages.user(f"Please recommend a book in {genre}.")
    return model.call(messages=[message])

# Uses the default model (gpt-5-mini)
response = recommend_book("fantasy")

# Override with a different model
with llm.model("anthropic/claude-sonnet-4-5"):
    response = recommend_book("fantasy")  # Uses Claude instead

Parameters

NameTypeDescription
modelModel | ModelIdA model ID string (e.g., "openai/gpt-4") or a Model instance
params= {}Unpack[Params]-

Returns

TypeDescription
ModelAn `llm.Model` instance from context (if set) or a new instance with the specified settings.