calls
Class AsyncCall
An async call that directly generates LLM responses without requiring a model argument.
Created by decorating an async `MessageTemplate` with `llm.call`. The decorated async function becomes directly callable to generate responses asynchronously, with the `Model` bundled in.
An `AsyncCall` is essentially: async `MessageTemplate` + tools + format + `Model`.
It can be invoked directly: `await call(*args, **kwargs)` (no model argument needed).
The model can be overridden at runtime using the `with llm.model(...)` context manager.
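For illustration, a minimal sketch of defining and awaiting an `AsyncCall`; the model ID and prompt follow the examples under the `call` function below:

```python
import asyncio

from mirascope import llm


# Decorating an async MessageTemplate produces an AsyncCall.
@llm.call("openai/gpt-4")
async def recommend_book(genre: str):
    return f"Please recommend a book in {genre}."


async def main() -> None:
    # Invoked directly with await; no model argument needed.
    response = await recommend_book("fantasy")
    print(response.pretty())


asyncio.run(main())
```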
Bases: BaseCall, Generic[P, FormattableT]
Attributes
| Name | Type | Description |
|---|---|---|
| prompt | AsyncPrompt[P, FormattableT] | The underlying AsyncPrompt instance that generates messages with tools and format. |
Function call
Generates a response using the LLM asynchronously.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| args= () | P.args | - |
| kwargs= {} | P.kwargs | - |
Returns
| Type | Description |
|---|---|
| AsyncResponse \| AsyncResponse[FormattableT] | - |
Function stream
Generates a streaming response using the LLM asynchronously.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| args= () | P.args | - |
| kwargs= {} | P.kwargs | - |
Returns
| Type | Description |
|---|---|
| AsyncStreamResponse \| AsyncStreamResponse[FormattableT] | - |
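
As a sketch, consuming the streaming variant; per the signature above, `stream` is called with the same arguments as the call itself. That the returned `AsyncStreamResponse` supports `async for` (and the shape of each chunk) is an assumption not confirmed by this page:

```python
import asyncio

from mirascope import llm


@llm.call("openai/gpt-4")
async def recommend_book(genre: str):
    return f"Please recommend a book in {genre}."


async def main() -> None:
    # Per the documented return type, stream() yields an
    # AsyncStreamResponse rather than a full response.
    stream = recommend_book.stream("fantasy")
    # Assumption: the stream response is an async iterable of chunks.
    async for chunk in stream:
        print(chunk, end="")


asyncio.run(main())
```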
Class AsyncContextCall
An async context-aware call that directly generates LLM responses without requiring a model argument.
Created by decorating an async `ContextMessageTemplate` with `llm.call`. The decorated async function (with first parameter `ctx` of type `Context[DepsT]`) becomes directly callable to generate responses asynchronously with context dependencies, with the `Model` bundled in.
An `AsyncContextCall` is essentially: async `ContextMessageTemplate` + tools + format + `Model`.
It can be invoked directly: `await call(ctx, *args, **kwargs)` (no model argument needed).
The model can be overridden at runtime using the `with llm.model(...)` context manager.
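A minimal async variant of the context example shown under the `call` function below; the deps class and prompt are adapted from that example:

```python
import asyncio
from dataclasses import dataclass

from mirascope import llm


@dataclass
class User:
    name: str


@llm.call("openai/gpt-4")
async def recommend_book(ctx: llm.Context[User], genre: str):
    return f"Recommend a {genre} book for {ctx.deps.name}."


async def main() -> None:
    ctx = llm.Context(deps=User(name="Alice"))
    # The context is passed first; no model argument needed.
    response = await recommend_book(ctx, "fantasy")
    print(response.pretty())


asyncio.run(main())
```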
Bases: BaseCall, Generic[P, DepsT, FormattableT]
Attributes
| Name | Type | Description |
|---|---|---|
| prompt | AsyncContextPrompt[P, DepsT, FormattableT] | The underlying AsyncContextPrompt instance that generates messages with tools and format. |
Function call
Generates a response using the LLM asynchronously.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | - |
| args= () | P.args | - |
| kwargs= {} | P.kwargs | - |
Returns
| Type | Description |
|---|---|
| AsyncContextResponse[DepsT, None] \| AsyncContextResponse[DepsT, FormattableT] | - |
Function stream
Generates a streaming response using the LLM asynchronously.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | - |
| args= () | P.args | - |
| kwargs= {} | P.kwargs | - |
Returns
| Type | Description |
|---|---|
| AsyncContextStreamResponse[DepsT, None] \| AsyncContextStreamResponse[DepsT, FormattableT] | - |
Class Call
A call that directly generates LLM responses without requiring a model argument.
Created by decorating a `MessageTemplate` with `llm.call`. The decorated function becomes directly callable to generate responses, with the `Model` bundled in.
A `Call` is essentially: `MessageTemplate` + tools + format + `Model`.
It can be invoked directly: `call(*args, **kwargs)` (no model argument needed).
The model can be overridden at runtime using the `with llm.model(...)` context manager.
Bases: BaseCall, Generic[P, FormattableT]
Attributes
| Name | Type | Description |
|---|---|---|
| prompt | Prompt[P, FormattableT] | The underlying Prompt instance that generates messages with tools and format. |
Function call
Generates a response using the LLM.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| args= () | P.args | - |
| kwargs= {} | P.kwargs | - |
Returns
| Type | Description |
|---|---|
| Response \| Response[FormattableT] | - |
Function stream
Generates a streaming response using the LLM.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| args= () | P.args | - |
| kwargs= {} | P.kwargs | - |
Returns
| Type | Description |
|---|---|
| StreamResponse \| StreamResponse[FormattableT] | - |
Class CallDecorator
Decorator for converting a `MessageTemplate` into a `Call`.
Takes a raw prompt function that returns message content and wraps it with tools, format, and a model to create a `Call` that can be invoked directly without needing to pass a model argument.
The decorator automatically detects whether the function is async or context-aware and creates the appropriate `Call` variant (`Call`, `AsyncCall`, `ContextCall`, or `AsyncContextCall`).
Conceptually: `CallDecorator` = `PromptDecorator` + `Model`
Result: `Call` = `MessageTemplate` + tools + format + `Model`
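Because `llm.call(...)` evaluates to a `CallDecorator`, it can be stored and applied like any decorator. A sketch (the model ID comes from the examples below; reusing one decorator across several prompt functions is an assumption about ordinary decorator semantics):

```python
from mirascope import llm

# The decorator bundles the model (plus optional tools/format)...
as_gpt4_call = llm.call("openai/gpt-4")


# ...and applying it turns a MessageTemplate into a Call.
@as_gpt4_call
def recommend_book(genre: str):
    return f"Please recommend a book in {genre}."


# An async function is detected automatically and yields an AsyncCall.
@as_gpt4_call
async def recommend_movie(genre: str):
    return f"Please recommend a movie in {genre}."
```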
Bases: Generic[ToolT, FormattableT]
Attributes
| Name | Type | Description |
|---|---|---|
| model | Model | The default model to use with this call. May be overridden. |
| tools | Sequence[ToolT] \| None | The tools that are included in the prompt, if any. |
| format | type[FormattableT] \| Format[FormattableT] \| None | The structured output format of the prompt, if any. |
Class ContextCall
A context-aware call that directly generates LLM responses without requiring a model argument.
Created by decorating a `ContextMessageTemplate` with `llm.call`. The decorated function (with first parameter `ctx` of type `Context[DepsT]`) becomes directly callable to generate responses with context dependencies, with the `Model` bundled in.
A `ContextCall` is essentially: `ContextMessageTemplate` + tools + format + `Model`.
It can be invoked directly: `call(ctx, *args, **kwargs)` (no model argument needed).
The model can be overridden at runtime using the `with llm.model(...)` context manager.
Bases: BaseCall, Generic[P, DepsT, FormattableT]
Attributes
| Name | Type | Description |
|---|---|---|
| prompt | ContextPrompt[P, DepsT, FormattableT] | The underlying ContextPrompt instance that generates messages with tools and format. |
Function call
Generates a response using the LLM.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | - |
| args= () | P.args | - |
| kwargs= {} | P.kwargs | - |
Returns
| Type | Description |
|---|---|
| ContextResponse[DepsT, None] \| ContextResponse[DepsT, FormattableT] | - |
Function stream
Generates a streaming response using the LLM.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | - |
| args= () | P.args | - |
| kwargs= {} | P.kwargs | - |
Returns
| Type | Description |
|---|---|
| ContextStreamResponse[DepsT, None] \| ContextStreamResponse[DepsT, FormattableT] | - |
Function call
Decorates a MessageTemplate to create a Call that can be invoked directly.
The `llm.call` decorator is the most convenient way to use Mirascope. It transforms a raw prompt function (that returns message content) into a `Call` object that bundles the function with tools, format, and a model. The resulting `Call` can be invoked directly to generate LLM responses without needing to pass a model argument.
The decorator automatically detects the function type:
- If the first parameter is named `ctx` with type `llm.Context[T]` (or a subclass thereof), creates a `ContextCall`
- If the function is async, creates an `AsyncCall` or `AsyncContextCall`
- Otherwise, creates a regular `Call`
The model specified in the decorator can be overridden at runtime using the `llm.model()` context manager. When overridden, the context model completely replaces the decorated model, including all parameters.
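A sketch of such an override; that `llm.model()` accepts a model ID string the way `llm.call` does is an assumption, and the override ID is illustrative:

```python
from mirascope import llm


@llm.call("openai/gpt-4")
def recommend_book(genre: str):
    return f"Please recommend a book in {genre}."


# Uses the decorated model ("openai/gpt-4").
response = recommend_book("fantasy")

# Assumption: llm.model(...) takes a model ID, as llm.call does.
# Inside the block, the context model fully replaces the decorated
# model, including all parameters.
with llm.model("anthropic/claude-3-5-sonnet"):
    overridden = recommend_book("fantasy")
```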
Conceptual flow:
- `MessageTemplate`: raw function returning content
- `@llm.prompt`: `MessageTemplate` → `Prompt`. Includes tools and format, if applicable. Can be called by providing a `Model`.
- `@llm.call`: `MessageTemplate` → `Call`. Includes a model, tools, and format. The model may be created on the fly from a model identifier and optional params, or provided outright.
Example:
Regular call:

```python
from mirascope import llm


@llm.call("openai/gpt-4")
def recommend_book(genre: str):
    return f"Please recommend a book in {genre}."


response: llm.Response = recommend_book("fantasy")
print(response.pretty())
```

Example:
Context call:

```python
from dataclasses import dataclass

from mirascope import llm


@dataclass
class User:
    name: str
    age: int


@llm.call("openai/gpt-4")
def recommend_book(ctx: llm.Context[User], genre: str):
    return f"Recommend a {genre} book for {ctx.deps.name}, age {ctx.deps.age}."


ctx = llm.Context(deps=User(name="Alice", age=15))
response = recommend_book(ctx, "fantasy")
print(response.pretty())
```

Parameters
| Name | Type | Description |
|---|---|---|
| model | ModelId \| Model | A model ID string (e.g., "openai/gpt-4") or a `Model` instance |
| tools= None | Sequence[ToolT] \| None | Optional `Sequence` of tools to make available to the LLM |
| format= None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format class (`BaseModel`) or `Format` instance |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| CallDecorator[ToolT, FormattableT] | A `CallDecorator` that converts prompt functions into `Call` variants |
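
To illustrate the `format` parameter, a sketch using a pydantic `BaseModel` as the response format; how the parsed object is accessed on the response is an assumption (shown as a hypothetical `response.parse()`), not documented on this page:

```python
from pydantic import BaseModel

from mirascope import llm


class Book(BaseModel):
    title: str
    author: str


# With format set, the resulting Call is parameterized over Book.
@llm.call("openai/gpt-4", format=Book)
def recommend_book(genre: str):
    return f"Please recommend a book in {genre}."


response = recommend_book("fantasy")
# Hypothetical accessor for the parsed Book; consult the Response
# docs for the actual API.
book = response.parse()
print(book.title, book.author)
```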