Releases: jackmpcollins/magentic
v0.32.0
What's Changed
Add support for the OpenAI "strict" setting for structured outputs. This guarantees that the generated JSON matches the schema supplied by the user. In magentic, this is set via an extension of pydantic's `ConfigDict`, and works for pydantic models as well as functions. See the docs for more info: https://magentic.dev/structured-outputs/#configdict
For a BaseModel
```python
from magentic import prompt, ConfigDict
from pydantic import BaseModel


class Superhero(BaseModel):
    model_config = ConfigDict(openai_strict=True)

    name: str
    age: int
    power: str
    enemies: list[str]


@prompt("Create a Superhero named {name}.")
def create_superhero(name: str) -> Superhero: ...


create_superhero("Garden Man")
```
For a function
```python
from typing import Annotated, Literal

from magentic import ConfigDict, FunctionCall, prompt, with_config
from pydantic import Field


@with_config(ConfigDict(openai_strict=True))
def activate_oven(
    temperature: Annotated[int, Field(description="Temp in Fahrenheit", lt=500)],
    mode: Literal["broil", "bake", "roast"],
) -> str:
    """Turn the oven on with the provided settings."""
    return f"Preheating to {temperature} F with mode {mode}"


@prompt(
    "Do some cooking",
    functions=[
        activate_oven,
        # ...
    ],
)
def do_cooking() -> FunctionCall[str]: ...  # hypothetical stub; the original example is truncated here
```
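For context, `openai_strict=True` corresponds to OpenAI's structured-outputs request shape, in which the JSON schema is sent with `"strict": true`, every property marked required, and `additionalProperties` set to false. A stdlib-only sketch of that payload (illustrative: the schema below hand-mirrors the Superhero model and the field names mirror the OpenAI API, not magentic internals):

```python
import json

# Illustrative OpenAI structured-outputs request payload with strict mode.
# magentic generates the schema from the pydantic model for you.
payload = {
    "model": "gpt-4o",
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "Superhero",
            "strict": True,  # output is guaranteed to conform exactly to the schema
            "schema": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "age": {"type": "integer"},
                    "power": {"type": "string"},
                    "enemies": {"type": "array", "items": {"type": "string"}},
                },
                "required": ["name", "age", "power", "enemies"],  # strict mode requires all keys
                "additionalProperties": False,  # strict mode forbids extra keys
            },
        },
    },
}

print(json.dumps(payload["response_format"]["json_schema"]["schema"], indent=2))
```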
PRs
- Add support for OpenAI structured outputs by @jackmpcollins in #305
Full Changelog: v0.31.0...v0.32.0
v0.31.0
v0.30.0
What's Changed
Warning
Breaking change: `StructuredOutputError` has been replaced by the more specific exceptions `StringNotAllowedError` and `ToolSchemaParseError` in PR #288
🤖 ♻️ LLM-assisted retries have been added. When enabled, this sends incorrectly formatted output back to the LLM along with the error message so the LLM can fix its mistakes. This can be used to enforce more complex validation on output schemas using pydantic validators.
For example, placing an arbitrary constraint on a string field
```python
from typing import Annotated

from magentic import prompt
from pydantic import AfterValidator, BaseModel


def assert_is_ireland(v: str) -> str:
    if v != "Ireland":
        raise ValueError("Country must be Ireland")
    return v


class Country(BaseModel):
    name: Annotated[str, AfterValidator(assert_is_ireland)]
    capital: str


@prompt(
    "Return a country",
    max_retries=3,
)
def get_country() -> Country: ...


get_country()
# 05:13:55.607 Calling prompt-function get_country
# 05:13:55.622 LLM-assisted retries enabled. Max 3
# 05:13:55.627 Chat Completion with 'gpt-4o' [LLM]
# 05:13:56.309 streaming response from 'gpt-4o' took 0.11s [LLM]
# 05:13:56.310 Retrying Chat Completion. Attempt 1.
# 05:13:56.322 Chat Completion with 'gpt-4o' [LLM]
# 05:13:57.456 streaming response from 'gpt-4o' took 0.00s [LLM]
#
# Country(name='Ireland', capital='Dublin')
```
See the new docs page on Retrying for more info.
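Conceptually, the retry loop works as in the stdlib-only sketch below: validate the model's output, and on failure append the bad output plus the error message to the conversation and ask again. All names here (`mock_llm`, `validate`, `call_with_retries`) are hypothetical stand-ins, not magentic's actual API.

```python
import json

# Stdlib-only sketch of LLM-assisted retries with a canned "LLM".
responses = iter(['{"name": "France"}', '{"name": "Ireland"}'])

def mock_llm(messages: list[str]) -> str:
    """Pretend LLM: returns the next canned response, ignoring its input."""
    return next(responses)

def validate(output: str) -> dict:
    """Parse the output and enforce the pydantic-style constraint."""
    data = json.loads(output)
    if data["name"] != "Ireland":
        raise ValueError("Country must be Ireland")
    return data

def call_with_retries(prompt_text: str, max_retries: int = 3) -> dict:
    messages = [prompt_text]
    output = mock_llm(messages)
    for _attempt in range(max_retries):
        try:
            return validate(output)
        except ValueError as e:
            # Feed the bad output and the error message back to the model
            messages += [output, f"Validation failed: {e}. Please fix your answer."]
            output = mock_llm(messages)
    return validate(output)

result = call_with_retries("Return a country")
print(result)  # {'name': 'Ireland'}
```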
PRs
- Bump aiohttp from 3.9.5 to 3.10.2 by @dependabot in #297
- Add LLM-assisted retries by @jackmpcollins in #288
- Set logfire OTEL scope to magentic by @jackmpcollins in #298
Full Changelog: v0.29.0...v0.30.0
v0.29.0
What's Changed
- Make Message a pydantic model / serializable by @jackmpcollins in #294
This means `Message` objects can be used anywhere pydantic models can, including in prompt-functions. The new `AnyMessage` type simplifies this. For example
```python
from magentic import AnyMessage, prompt


@prompt("Create an example of few-shot prompting for a chatbot")
def make_few_shot_prompt() -> list[AnyMessage]: ...


make_few_shot_prompt()
# [SystemMessage('You are a helpful and knowledgeable assistant.'),
#  UserMessage('What’s the weather like today?'),
#  AssistantMessage[Any]('The weather today is sunny with a high of 75°F (24°C).'),
#  UserMessage('Can you explain the theory of relativity in simple terms?'),
#  AssistantMessage[Any]('Sure! The theory of relativity, developed by Albert Einstein, ...]
```
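The practical benefit of messages being pydantic models is that they can be dumped to plain data and loaded back. A stdlib-only sketch of the idea, using a hypothetical `ChatMessage` dataclass in place of magentic's richer pydantic classes:

```python
import json
from dataclasses import asdict, dataclass

# Hypothetical stand-in for magentic's pydantic-based Message classes.
@dataclass
class ChatMessage:
    role: str  # "system", "user", or "assistant"
    content: str

few_shot = [
    ChatMessage("system", "You are a helpful assistant."),
    ChatMessage("user", "What's the weather like today?"),
    ChatMessage("assistant", "Sunny with a high of 75°F."),
]

# Round-trip through JSON, as pydantic's model_dump_json / model_validate allow.
dumped = json.dumps([asdict(m) for m in few_shot])
restored = [ChatMessage(**d) for d in json.loads(dumped)]
assert restored == few_shot
```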
Dependabot
- Bump logfire-api from 0.46.1 to 0.49.0 by @dependabot in #292
- Bump logfire from 0.46.1 to 0.49.0 by @dependabot in #293
- Bump pytest from 8.2.2 to 8.3.2 by @dependabot in #286
- Bump openai from 1.35.13 to 1.38.0 by @dependabot in #290
- Bump mypy from 1.10.1 to 1.11.1 by @dependabot in #291
Full Changelog: v0.28.1...v0.29.0
v0.28.1
What's Changed
- Bump ruff from 0.4.10 to 0.5.2 by @dependabot in #269
- Limit tool call id to 9 chars for Mistral by @jackmpcollins in #279
Full Changelog: v0.28.0...v0.28.1
v0.28.0
What's Changed
🪵 🔥 Logfire / OpenTelemetry now supported!
This makes it much easier to follow what tool calls are being made by the LLM both as printed output locally and in Logfire or another monitoring service. It also lets you see the raw requests being sent to OpenAI/Anthropic so you can more easily debug issues.
All it takes to get set up is

```shell
pip install logfire
```

```python
import logfire

logfire.configure(send_to_logfire=False)  # Or True to use the Logfire service
logfire.instrument_openai()  # optional, to trace OpenAI API calls
# logfire.instrument_anthropic()  # optional, to trace Anthropic API calls
```
Check out the new docs page: https://magentic.dev/logging-and-tracing/
PRs
- Add basic logging and MAGENTIC_VERBOSE env var by @jackmpcollins in #263
- Update dependencies by @jackmpcollins in #264
- Instrument for Logfire / OpenTelemetry by @jackmpcollins in #265
- Do not set stream_options when using AzureOpenAI by @jackmpcollins in #262
- Use new `parallel_tool_calls` arg with OpenAI API by @jackmpcollins in #267
- Fix LitellmChatModel tool_choice parameter to force Anthropic tool use by @jackmpcollins in #268
Full Changelog: v0.27.0...v0.28.0
v0.27.0
What's Changed
- Add peek, apeek, adropwhile functions by @jackmpcollins in #229
- Update anthropic_chat_model.py to conform with latest anthropic package by @myousefi in #239
- Bump requests from 2.31.0 to 2.32.0 by @dependabot in #218
- Bump jinja2 from 3.1.3 to 3.1.4 by @dependabot in #203
- Bump urllib3 from 2.2.1 to 2.2.2 by @dependabot in #238
- Bump tornado from 6.4 to 6.4.1 by @dependabot in #233
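The `peek` helper added in #229 can be sketched in plain Python: look at the first item of an iterator without losing it, by returning the item together with a reconstructed iterator. A hypothetical re-implementation for illustration, not magentic's actual code:

```python
from itertools import chain
from typing import Iterator, TypeVar

T = TypeVar("T")

def peek(it: Iterator[T]) -> tuple[T, Iterator[T]]:
    """Return the first item and an iterator that still yields every item."""
    first = next(it)
    return first, chain([first], it)  # re-attach the consumed item

first, numbers = peek(iter([1, 2, 3]))
print(first)          # 1
print(list(numbers))  # [1, 2, 3]
```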
New Contributors
- @myousefi made their first contribution in #239
Full Changelog: v0.26.0...v0.27.0
v0.26.0
What's Changed
- Return usage stats on AssistantMessage by @jackmpcollins in #214
Example of non-streamed response with usage immediately available
```python
from magentic import OpenaiChatModel, UserMessage

chat_model = OpenaiChatModel("gpt-3.5-turbo", seed=42)
message = chat_model.complete(messages=[UserMessage("Say hello!")])
print(message.usage)
# > Usage(input_tokens=10, output_tokens=9)
```

Example of streamed response where usage only becomes available after the stream has been processed

```python
from magentic import OpenaiChatModel, UserMessage
from magentic.streaming import StreamedStr

chat_model = OpenaiChatModel("gpt-3.5-turbo", seed=42)
message = chat_model.complete(messages=[UserMessage("Say hello!")], output_types=[StreamedStr])
print(message.usage)
# > None, because the stream has not been processed yet

# Process the stream (convert StreamedStr to str)
str(message.content)
print(message.usage)
# > Usage(input_tokens=10, output_tokens=9)
```
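The deferred availability of `usage` on streamed responses can be sketched with stdlib Python: a streamed-string wrapper only knows its counts once the underlying stream is exhausted. The `StreamedText` class and its word-count "usage" are hypothetical, for illustration only:

```python
from typing import Iterator, Optional

class StreamedText:
    """Hypothetical streamed-string wrapper: usage is unknown until consumed."""

    def __init__(self, chunks: Iterator[str]):
        self._chunks = chunks
        self.usage: Optional[dict] = None  # stays None until the stream ends

    def __str__(self) -> str:
        text = "".join(self._chunks)  # consume the stream
        self.usage = {"output_tokens": len(text.split())}  # now available
        return text

message = StreamedText(iter(["Hello", " there", "!"]))
print(message.usage)  # None -- stream not processed yet
text = str(message)   # processing the stream populates usage
print(message.usage)  # {'output_tokens': 2}
```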
Full Changelog: v0.25.0...v0.26.0
v0.25.0
What's Changed
- Switch AnthropicChatModel to use streaming by @jackmpcollins in #215
`StreamedStr` now streams correctly, but object streaming is waiting on Anthropic support for streaming array responses.

```python
from magentic import prompt, StreamedStr
from magentic.chat_model.anthropic_chat_model import AnthropicChatModel


@prompt(
    "Tell me about {topic}.",
    model=AnthropicChatModel("claude-3-opus-20240229"),
)
def tell_me_about(topic: str) -> StreamedStr: ...


for chunk in tell_me_about("chocolate"):
    print(chunk, end="", flush=True)
```
- add optional custom_llm_provider param for litellm by @entropi in #221
- Add tests for LiteLLM async callbacks by @jackmpcollins in #223
- Tidy up: Combine openai streamed_tool_call functions by @jackmpcollins in #225
New Contributors
- @entropi made their first contribution in #221
Full Changelog: v0.24.0...v0.25.0