Known model names that can be used with the model parameter of Agent.
KnownModelName is provided as a concise way to specify a model.
ModelRequestParameters (dataclass)
Configuration for an agent's request to a model, specifically related to tools and output handling.
Source code in pydantic_ai_slim/pydantic_ai/models/__init__.py
@dataclass
class ModelRequestParameters:
    """Configuration for an agent's request to a model, specifically related to tools and output handling."""

    function_tools: list[ToolDefinition]
    allow_text_output: bool
    output_tools: list[ToolDefinition]
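To illustrate the shape of this dataclass, here is a minimal stdlib-only sketch; the `ToolDefinition` stand-in below is hypothetical (a simplification of pydantic_ai's real class), so treat this as an illustration of the fields rather than the actual API:

```python
from dataclasses import dataclass


@dataclass
class ToolDefinition:
    # Hypothetical stand-in for pydantic_ai's ToolDefinition
    name: str
    description: str
    parameters_json_schema: dict


@dataclass
class ModelRequestParameters:
    """Configuration for an agent's request to a model."""

    function_tools: list[ToolDefinition]
    allow_text_output: bool
    output_tools: list[ToolDefinition]


params = ModelRequestParameters(
    function_tools=[ToolDefinition('get_weather', 'Fetch the weather', {'type': 'object'})],
    allow_text_output=True,
    output_tools=[],
)
print(params.allow_text_output)  # True
```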
class Model(ABC):
    """Abstract class for a model."""

    @abstractmethod
    async def request(
        self,
        messages: list[ModelMessage],
        model_settings: ModelSettings | None,
        model_request_parameters: ModelRequestParameters,
    ) -> tuple[ModelResponse, Usage]:
        """Make a request to the model."""
        raise NotImplementedError()

    @asynccontextmanager
    async def request_stream(
        self,
        messages: list[ModelMessage],
        model_settings: ModelSettings | None,
        model_request_parameters: ModelRequestParameters,
    ) -> AsyncIterator[StreamedResponse]:
        """Make a request to the model and return a streaming response."""
        # This method is not required, but you need to implement it if you want to support streamed responses
        raise NotImplementedError(f'Streamed requests not supported by this {self.__class__.__name__}')
        # yield is required to make this a generator for type checking
        # noinspection PyUnreachableCode
        yield  # pragma: no cover

    def customize_request_parameters(self, model_request_parameters: ModelRequestParameters) -> ModelRequestParameters:
        """Customize the request parameters for the model.

        This method can be overridden by subclasses to modify the request parameters before sending them to the model.
        In particular, this method can be used to make modifications to the generated tool JSON schemas if necessary
        for vendor/model-specific reasons.
        """
        return model_request_parameters

    @property
    @abstractmethod
    def model_name(self) -> str:
        """The model name."""
        raise NotImplementedError()

    @property
    @abstractmethod
    def system(self) -> str:
        """The system / model provider, ex: openai.

        Use to populate the `gen_ai.system` OpenTelemetry semantic convention attribute,
        so should use well-known values listed in
        https://opentelemetry.io/docs/specs/semconv/attributes-registry/gen-ai/#gen-ai-system
        when applicable.
        """
        raise NotImplementedError()

    @property
    def base_url(self) -> str | None:
        """The base URL for the provider API, if available."""
        return None

    def _get_instructions(self, messages: list[ModelMessage]) -> str | None:
        """Get instructions from the first ModelRequest found when iterating messages in reverse."""
        for message in reversed(messages):
            if isinstance(message, ModelRequest):
                return message.instructions
Source code in pydantic_ai_slim/pydantic_ai/models/__init__.py
@abstractmethod
async def request(
    self,
    messages: list[ModelMessage],
    model_settings: ModelSettings | None,
    model_request_parameters: ModelRequestParameters,
) -> tuple[ModelResponse, Usage]:
    """Make a request to the model."""
    raise NotImplementedError()
Make a request to the model and return a streaming response.
Source code in pydantic_ai_slim/pydantic_ai/models/__init__.py
@asynccontextmanager
async def request_stream(
    self,
    messages: list[ModelMessage],
    model_settings: ModelSettings | None,
    model_request_parameters: ModelRequestParameters,
) -> AsyncIterator[StreamedResponse]:
    """Make a request to the model and return a streaming response."""
    # This method is not required, but you need to implement it if you want to support streamed responses
    raise NotImplementedError(f'Streamed requests not supported by this {self.__class__.__name__}')
    # yield is required to make this a generator for type checking
    # noinspection PyUnreachableCode
    yield  # pragma: no cover
This method can be overridden by subclasses to modify the request parameters before sending them to the model.
In particular, this method can be used to make modifications to the generated tool JSON schemas if necessary
for vendor/model-specific reasons.
Source code in pydantic_ai_slim/pydantic_ai/models/__init__.py
def customize_request_parameters(self, model_request_parameters: ModelRequestParameters) -> ModelRequestParameters:
    """Customize the request parameters for the model.

    This method can be overridden by subclasses to modify the request parameters before sending them to the model.
    In particular, this method can be used to make modifications to the generated tool JSON schemas if necessary
    for vendor/model-specific reasons.
    """
    return model_request_parameters
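As a concrete example of the kind of vendor-specific schema tweak this hook is meant for, here is a stdlib-only sketch; the `RequestParameters` dataclass and the "vendor requires `additionalProperties: false`" rule are both hypothetical assumptions, stand-ins for the real `ModelRequestParameters` and whatever quirk an actual provider imposes:

```python
from dataclasses import dataclass, replace


@dataclass
class RequestParameters:
    # Hypothetical simplified stand-in for ModelRequestParameters
    tool_schemas: list[dict]


class BaseModel:
    def customize_request_parameters(self, params: RequestParameters) -> RequestParameters:
        # Default: pass the parameters through unchanged
        return params


class StrictVendorModel(BaseModel):
    """Sketch of a vendor that requires 'additionalProperties': false on every tool schema."""

    def customize_request_parameters(self, params: RequestParameters) -> RequestParameters:
        # Patch each generated tool JSON schema before it is sent to the model
        patched = [{**schema, 'additionalProperties': False} for schema in params.tool_schemas]
        return replace(params, tool_schemas=patched)


params = RequestParameters(tool_schemas=[{'type': 'object', 'properties': {}}])
customized = StrictVendorModel().customize_request_parameters(params)
print(customized.tool_schemas[0]['additionalProperties'])  # False
```

Returning a modified copy (rather than mutating in place) keeps the original parameters intact for any other consumer.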
Use to populate the gen_ai.system OpenTelemetry semantic convention attribute,
so should use well-known values listed in
https://opentelemetry.io/docs/specs/semconv/attributes-registry/gen-ai/#gen-ai-system
when applicable.
@dataclass
class StreamedResponse(ABC):
    """Streamed response from an LLM when calling a tool."""

    _parts_manager: ModelResponsePartsManager = field(default_factory=ModelResponsePartsManager, init=False)
    _event_iterator: AsyncIterator[ModelResponseStreamEvent] | None = field(default=None, init=False)
    _usage: Usage = field(default_factory=Usage, init=False)

    def __aiter__(self) -> AsyncIterator[ModelResponseStreamEvent]:
        """Stream the response as an async iterable of [`ModelResponseStreamEvent`][pydantic_ai.messages.ModelResponseStreamEvent]s."""
        if self._event_iterator is None:
            self._event_iterator = self._get_event_iterator()
        return self._event_iterator

    @abstractmethod
    async def _get_event_iterator(self) -> AsyncIterator[ModelResponseStreamEvent]:
        """Return an async iterator of [`ModelResponseStreamEvent`][pydantic_ai.messages.ModelResponseStreamEvent]s.

        This method should be implemented by subclasses to translate the vendor-specific stream of events into
        pydantic_ai-format events.

        It should use the `_parts_manager` to handle deltas, and should update the `_usage` attributes as it goes.
        """
        raise NotImplementedError()
        # noinspection PyUnreachableCode
        yield

    def get(self) -> ModelResponse:
        """Build a [`ModelResponse`][pydantic_ai.messages.ModelResponse] from the data received from the stream so far."""
        return ModelResponse(parts=self._parts_manager.get_parts(), model_name=self.model_name, timestamp=self.timestamp)

    def usage(self) -> Usage:
        """Get the usage of the response so far. This will not be the final usage until the stream is exhausted."""
        return self._usage

    @property
    @abstractmethod
    def model_name(self) -> str:
        """Get the model name of the response."""
        raise NotImplementedError()

    @property
    @abstractmethod
    def timestamp(self) -> datetime:
        """Get the timestamp of the response."""
        raise NotImplementedError()
Source code in pydantic_ai_slim/pydantic_ai/models/__init__.py
def __aiter__(self) -> AsyncIterator[ModelResponseStreamEvent]:
    """Stream the response as an async iterable of [`ModelResponseStreamEvent`][pydantic_ai.messages.ModelResponseStreamEvent]s."""
    if self._event_iterator is None:
        self._event_iterator = self._get_event_iterator()
    return self._event_iterator
Build a ModelResponse from the data received from the stream so far.
Source code in pydantic_ai_slim/pydantic_ai/models/__init__.py
def get(self) -> ModelResponse:
    """Build a [`ModelResponse`][pydantic_ai.messages.ModelResponse] from the data received from the stream so far."""
    return ModelResponse(parts=self._parts_manager.get_parts(), model_name=self.model_name, timestamp=self.timestamp)
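The interplay between iterating the stream and calling `get()` can be sketched with a stdlib-only analogue; `FakeStreamedResponse` below is a hypothetical stand-in that accumulates string chunks the way the real class accumulates response parts via its parts manager:

```python
import asyncio


class FakeStreamedResponse:
    # Hypothetical minimal analogue of StreamedResponse:
    # iterate events, then build a snapshot of what arrived so far
    def __init__(self, chunks):
        self._chunks = chunks
        self._parts = []

    def __aiter__(self):
        return self._iterate()

    async def _iterate(self):
        for chunk in self._chunks:
            self._parts.append(chunk)  # like the parts manager handling a delta
            yield chunk

    def get(self):
        # Like StreamedResponse.get(): a snapshot of everything received so far
        return ''.join(self._parts)


async def main():
    stream = FakeStreamedResponse(['Hel', 'lo'])
    async for _event in stream:
        pass  # a consumer could update a UI with each event here
    return stream.get()


print(asyncio.run(main()))  # Hello
```

Note that, as with the real class, `get()` before the stream is exhausted would return only the parts received so far.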
This global setting allows you to disable requests to most models, e.g. to make sure you don't accidentally
make costly requests to a model during tests.
If you're defining your own models that have costs or latency associated with their use, you should call this in
Model.request and Model.request_stream.
Source code in pydantic_ai_slim/pydantic_ai/models/__init__.py
def check_allow_model_requests() -> None:
    """Check if model requests are allowed.

    If you're defining your own models that have costs or latency associated with their use, you should call this in
    [`Model.request`][pydantic_ai.models.Model.request] and [`Model.request_stream`][pydantic_ai.models.Model.request_stream].

    Raises:
        RuntimeError: If model requests are not allowed.
    """
    if not ALLOW_MODEL_REQUESTS:
        raise RuntimeError('Model requests are not allowed, since ALLOW_MODEL_REQUESTS is False')
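Wiring this guard into a custom model looks like the following stdlib-only sketch; the flag and function are redefined locally here (so the example runs standalone), whereas real code would import both from pydantic_ai.models:

```python
ALLOW_MODEL_REQUESTS = False  # locally redefined; the real flag lives in pydantic_ai.models


def check_allow_model_requests() -> None:
    # Same guard pattern as check_allow_model_requests above
    if not ALLOW_MODEL_REQUESTS:
        raise RuntimeError('Model requests are not allowed, since ALLOW_MODEL_REQUESTS is False')


class CostlyModel:
    async def request(self, messages, model_settings, model_request_parameters):
        check_allow_model_requests()  # called first, before any paid API call
        ...  # the actual provider request would go here


try:
    check_allow_model_requests()
except RuntimeError as exc:
    print(exc)  # Model requests are not allowed, since ALLOW_MODEL_REQUESTS is False
```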
allow_model_requests (bool, required): Whether to allow model requests within the context.
Source code in pydantic_ai_slim/pydantic_ai/models/__init__.py
@contextmanager
def override_allow_model_requests(allow_model_requests: bool) -> Iterator[None]:
    """Context manager to temporarily override [`ALLOW_MODEL_REQUESTS`][pydantic_ai.models.ALLOW_MODEL_REQUESTS].

    Args:
        allow_model_requests: Whether to allow model requests within the context.
    """
    global ALLOW_MODEL_REQUESTS
    old_value = ALLOW_MODEL_REQUESTS
    ALLOW_MODEL_REQUESTS = allow_model_requests  # pyright: ignore[reportConstantRedefinition]
    try:
        yield
    finally:
        ALLOW_MODEL_REQUESTS = old_value  # pyright: ignore[reportConstantRedefinition]