
calls

Class AsyncCall

An async call that directly generates LLM responses without requiring a model argument.

Created by decorating an async MessageTemplate with llm.call. The decorated async function becomes directly callable to generate responses asynchronously, with the Model bundled in.

An AsyncCall is essentially: async MessageTemplate + tools + format + Model. It can be invoked directly: await call(*args, **kwargs) (no model argument needed).

The model can be overridden at runtime using the llm.model(...) context manager.
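
Example:

A minimal usage sketch, mirroring the synchronous example later on this page with async/await added (model ID and prompt are illustrative):

```python
import asyncio

from mirascope import llm

@llm.call("openai/gpt-4")
async def recommend_book(genre: str):
    return f"Please recommend a book in {genre}."

async def main():
    # The decorated function is an AsyncCall, so no model argument is needed.
    response = await recommend_book("fantasy")
    print(response.pretty())

asyncio.run(main())
```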

Bases: BaseCall, Generic[P, FormattableT]

Attributes

| Name | Type | Description |
| --- | --- | --- |
| prompt | AsyncPrompt[P, FormattableT] | The underlying AsyncPrompt instance that generates messages with tools and format. |

Function call

Generates a response using the LLM asynchronously.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| args = () | P.args | - |
| kwargs = {} | P.kwargs | - |

Returns

Function stream

Generates a streaming response using the LLM asynchronously.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| args = () | P.args | - |
| kwargs = {} | P.kwargs | - |

Class AsyncContextCall

An async context-aware call that directly generates LLM responses without requiring a model argument.

Created by decorating an async ContextMessageTemplate with llm.call. The decorated async function (with first parameter 'ctx' of type Context[DepsT]) becomes directly callable to generate responses asynchronously with context dependencies, with the Model bundled in.

An AsyncContextCall is essentially: async ContextMessageTemplate + tools + format + Model. It can be invoked directly: await call(ctx, *args, **kwargs) (no model argument needed).

The model can be overridden at runtime using the llm.model(...) context manager.
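
Example:

A minimal sketch combining the async and context patterns documented on this page (the User deps class is illustrative):

```python
import asyncio
from dataclasses import dataclass

from mirascope import llm

@dataclass
class User:
    name: str

@llm.call("openai/gpt-4")
async def recommend_book(ctx: llm.Context[User], genre: str):
    return f"Recommend a {genre} book for {ctx.deps.name}."

async def main():
    ctx = llm.Context(deps=User(name="Alice"))
    # The context is passed as the first argument; the Model is bundled in.
    response = await recommend_book(ctx, "fantasy")
    print(response.pretty())

asyncio.run(main())
```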

Bases: BaseCall, Generic[P, DepsT, FormattableT]

Attributes

| Name | Type | Description |
| --- | --- | --- |
| prompt | AsyncContextPrompt[P, DepsT, FormattableT] | The underlying AsyncContextPrompt instance that generates messages with tools and format. |

Function call

Generates a response using the LLM asynchronously.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| ctx | Context[DepsT] | - |
| args = () | P.args | - |
| kwargs = {} | P.kwargs | - |

Returns

Function stream

Generates a streaming response using the LLM asynchronously.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| ctx | Context[DepsT] | - |
| args = () | P.args | - |
| kwargs = {} | P.kwargs | - |

Class Call

A call that directly generates LLM responses without requiring a model argument.

Created by decorating a MessageTemplate with llm.call. The decorated function becomes directly callable to generate responses, with the Model bundled in.

A Call is essentially: MessageTemplate + tools + format + Model. It can be invoked directly: call(*args, **kwargs) (no model argument needed).

The model can be overridden at runtime using the llm.model(...) context manager.

Bases: BaseCall, Generic[P, FormattableT]

Attributes

| Name | Type | Description |
| --- | --- | --- |
| prompt | Prompt[P, FormattableT] | The underlying Prompt instance that generates messages with tools and format. |

Function call

Generates a response using the LLM.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| args = () | P.args | - |
| kwargs = {} | P.kwargs | - |

Returns

| Type | Description |
| --- | --- |
| Response \| Response[FormattableT] | - |

Function stream

Generates a streaming response using the LLM.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| args = () | P.args | - |
| kwargs = {} | P.kwargs | - |

Returns
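
Example:

A hypothetical streaming sketch. This page documents stream(*args, **kwargs) but not the chunk type, so iterating over the result and printing each chunk is an assumption:

```python
from mirascope import llm

@llm.call("openai/gpt-4")
def recommend_book(genre: str):
    return f"Please recommend a book in {genre}."

# Assumption: stream(...) returns an iterable of chunks; how to extract
# text from each chunk depends on the actual chunk type.
for chunk in recommend_book.stream("fantasy"):
    print(chunk, end="", flush=True)
```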

Class CallDecorator

Decorator for converting a MessageTemplate into a Call.

Takes a raw prompt function that returns message content and wraps it with tools, format, and a model to create a Call that can be invoked directly without needing to pass a model argument.

The decorator automatically detects whether the function is async or context-aware and creates the appropriate Call variant (Call, AsyncCall, ContextCall, or AsyncContextCall).

Conceptually: CallDecorator = PromptDecorator + Model Result: Call = MessageTemplate + tools + format + Model
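
Since llm.call(...) returns a CallDecorator, it can also be applied without the @ syntax; a sketch (the template function is illustrative):

```python
from mirascope import llm

def recommend_book(genre: str):
    return f"Please recommend a book in {genre}."

# llm.call(...) builds a CallDecorator; applying it to a template yields a Call.
decorator = llm.call("openai/gpt-4")
recommend_book_call = decorator(recommend_book)

response = recommend_book_call("fantasy")
print(response.pretty())
```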

Bases: Generic[ToolT, FormattableT]

Attributes

| Name | Type | Description |
| --- | --- | --- |
| model | Model | The default model to use with this call. May be overridden. |
| tools | Sequence[ToolT] \| None | The tools that are included in the prompt, if any. |
| format | type[FormattableT] \| Format[FormattableT] \| None | The structured output format of the prompt, if any. |

Class ContextCall

A context-aware call that directly generates LLM responses without requiring a model argument.

Created by decorating a ContextMessageTemplate with llm.call. The decorated function (with first parameter 'ctx' of type Context[DepsT]) becomes directly callable to generate responses with context dependencies, with the Model bundled in.

A ContextCall is essentially: ContextMessageTemplate + tools + format + Model. It can be invoked directly: call(ctx, *args, **kwargs) (no model argument needed).

The model can be overridden at runtime using the llm.model(...) context manager.

Bases: BaseCall, Generic[P, DepsT, FormattableT]

Attributes

| Name | Type | Description |
| --- | --- | --- |
| prompt | ContextPrompt[P, DepsT, FormattableT] | The underlying ContextPrompt instance that generates messages with tools and format. |

Function call

Generates a response using the LLM.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| ctx | Context[DepsT] | - |
| args = () | P.args | - |
| kwargs = {} | P.kwargs | - |

Returns

| Type | Description |
| --- | --- |
| ContextResponse[DepsT, None] \| ContextResponse[DepsT, FormattableT] | - |

Function stream

Generates a streaming response using the LLM.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| ctx | Context[DepsT] | - |
| args = () | P.args | - |
| kwargs = {} | P.kwargs | - |

Returns

Function call

Decorates a MessageTemplate to create a Call that can be invoked directly.

The llm.call decorator is the most convenient way to use Mirascope. It transforms a raw prompt function (that returns message content) into a Call object that bundles the function with tools, format, and a model. The resulting Call can be invoked directly to generate LLM responses without needing to pass a model argument.

The decorator automatically detects the function type:

  • If the first parameter is named 'ctx' with type llm.Context[T] (or a subclass thereof), creates a ContextCall
  • If the function is async, creates an AsyncCall or AsyncContextCall
  • Otherwise, creates a regular Call

The model specified in the decorator can be overridden at runtime using the llm.model() context manager. When overridden, the context model completely replaces the decorated model, including all parameters.
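
For instance, a sketch of a runtime override (the override model ID is illustrative):

```python
from mirascope import llm

@llm.call("openai/gpt-4")
def recommend_book(genre: str):
    return f"Please recommend a book in {genre}."

# Uses the decorated model ("openai/gpt-4").
response = recommend_book("fantasy")

# Inside the context manager, the context model replaces the decorated
# model entirely, including all parameters.
with llm.model("anthropic/claude-3-5-sonnet"):
    overridden = recommend_book("fantasy")
```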

Conceptual flow:

  • MessageTemplate: raw function returning content
  • @llm.prompt: MessageTemplate → Prompt. Includes tools and format, if applicable. Can be called by providing a Model.
  • @llm.call: MessageTemplate → Call. Includes a model, tools, and format. The model may be created on the fly from a model identifier and optional params, or provided outright.

Example:

Regular call:

```python
from mirascope import llm

@llm.call("openai/gpt-4")
def recommend_book(genre: str):
    return f"Please recommend a book in {genre}."

response: llm.Response = recommend_book("fantasy")
print(response.pretty())
```

Example:

Context call:

```python
from dataclasses import dataclass

from mirascope import llm

@dataclass
class User:
    name: str
    age: int

@llm.call("openai/gpt-4")
def recommend_book(ctx: llm.Context[User], genre: str):
    return f"Recommend a {genre} book for {ctx.deps.name}, age {ctx.deps.age}."

ctx = llm.Context(deps=User(name="Alice", age=15))
response = recommend_book(ctx, "fantasy")
print(response.pretty())
```

Parameters

| Name | Type | Description |
| --- | --- | --- |
| model | ModelId \| Model | A model ID string (e.g., "openai/gpt-4") or a `Model` instance |
| tools = None | Sequence[ToolT] \| None | Optional `Sequence` of tools to make available to the LLM |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format class (`BaseModel`) or Format instance |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| CallDecorator[ToolT, FormattableT] | A `CallDecorator` that converts prompt functions into `Call` variants |