
Releases: jackmpcollins/magentic

v0.32.0

18 Aug 09:16

What's Changed

Add support for the OpenAI "strict" setting for structured outputs. This guarantees that the JSON generated by the LLM matches the schema supplied by the user. In magentic, this is set via an extension of pydantic's ConfigDict, and it works for pydantic models as well as functions. See the docs for more info: https://magentic.dev/structured-outputs/#configdict

For a BaseModel

from magentic import prompt, ConfigDict
from pydantic import BaseModel


class Superhero(BaseModel):
    model_config = ConfigDict(openai_strict=True)

    name: str
    age: int
    power: str
    enemies: list[str]


@prompt("Create a Superhero named {name}.")
def create_superhero(name: str) -> Superhero: ...


create_superhero("Garden Man")

For a function

from typing import Annotated, Literal

from magentic import ConfigDict, prompt, with_config
from pydantic import Field


@with_config(ConfigDict(openai_strict=True))
def activate_oven(
    temperature: Annotated[int, Field(description="Temp in Fahrenheit", lt=500)],
    mode: Literal["broil", "bake", "roast"],
) -> str:
    """Turn the oven on with the provided settings."""
    return f"Preheating to {temperature} F with mode {mode}"


@prompt(
    "Do some cooking",
    functions=[
        activate_oven,
        # ... other functions
    ],
)
def do_cooking() -> str: ...  # illustrative prompt-function using the tools above

PRs

Full Changelog: v0.31.0...v0.32.0

v0.31.0

13 Aug 07:29

What's Changed

Full Changelog: v0.30.0...v0.31.0

v0.30.0

12 Aug 07:30

What's Changed

Warning

Breaking change: StructuredOutputError has been replaced by the more specific exceptions StringNotAllowedError and ToolSchemaParseError in PR #288.
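
To migrate, catch the new exceptions wherever StructuredOutputError was handled before. A minimal sketch with an illustrative prompt-function; the import path for the exceptions is an assumption here, so check the magentic source for the exact location.

from magentic import prompt
from magentic.chat_model.base import StringNotAllowedError, ToolSchemaParseError


@prompt("Return a number")
def get_number() -> int: ...


try:
    get_number()
except StringNotAllowedError:
    # Raised when the LLM returns plain text but only structured output is allowed
    ...
except ToolSchemaParseError:
    # Raised when the LLM output cannot be parsed against the expected JSON schema
    ...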

🤖 ♻️ LLM-assisted retries have been added. When enabled, this sends incorrectly formatted output back to the LLM along with the error message so that the LLM can fix its mistakes. This can be used to enforce more complex validation on output schemas using pydantic validators.

For example, placing an arbitrary constraint on a string field

from typing import Annotated

from magentic import prompt
from pydantic import AfterValidator, BaseModel


def assert_is_ireland(v: str) -> str:
    if v != "Ireland":
        raise ValueError("Country must be Ireland")
    return v


class Country(BaseModel):
    name: Annotated[str, AfterValidator(assert_is_ireland)]
    capital: str


@prompt(
    "Return a country",
    max_retries=3,
)
def get_country() -> Country: ...


get_country()
# 05:13:55.607 Calling prompt-function get_country
# 05:13:55.622   LLM-assisted retries enabled. Max 3
# 05:13:55.627     Chat Completion with 'gpt-4o' [LLM]
# 05:13:56.309     streaming response from 'gpt-4o' took 0.11s [LLM]
# 05:13:56.310     Retrying Chat Completion. Attempt 1.
# 05:13:56.322     Chat Completion with 'gpt-4o' [LLM]
# 05:13:57.456     streaming response from 'gpt-4o' took 0.00s [LLM]
#
# Country(name='Ireland', capital='Dublin')

See the new docs page on Retrying for more info.

PRs

Full Changelog: v0.29.0...v0.30.0

v0.29.0

08 Aug 06:29

What's Changed

Message objects can now be used anywhere pydantic models can, including in prompt-functions. The new AnyMessage type simplifies this. For example

from magentic import AnyMessage, prompt

@prompt("Create an example of few-shot prompting for a chatbot")
def make_few_shot_prompt() -> list[AnyMessage]: ...

make_few_shot_prompt()
# [SystemMessage('You are a helpful and knowledgeable assistant.'),
#  UserMessage('What’s the weather like today?'),
#  AssistantMessage[Any]('The weather today is sunny with a high of 75°F (24°C).'),
#  UserMessage('Can you explain the theory of relativity in simple terms?'),
#  AssistantMessage[Any]('Sure! The theory of relativity, developed by Albert Einstein,  ...]
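
Because messages are pydantic models, a list of AnyMessage can also be serialized and validated like any other pydantic type. A minimal sketch, with illustrative message contents:

from magentic import AnyMessage, AssistantMessage, SystemMessage, UserMessage
from pydantic import TypeAdapter

messages = [
    SystemMessage("You are a helpful assistant."),
    UserMessage("What is 2 + 2?"),
    AssistantMessage("4"),
]
adapter = TypeAdapter(list[AnyMessage])
json_bytes = adapter.dump_json(messages)  # serialize the messages to JSON
restored = adapter.validate_json(json_bytes)  # reconstruct the Message objects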

Dependabot

Full Changelog: v0.28.1...v0.29.0

v0.28.1

29 Jul 06:26

What's Changed

Full Changelog: v0.28.0...v0.28.1

v0.28.0

15 Jul 06:59

What's Changed

🪵 🔥 Logfire / OpenTelemetry now supported!

This makes it much easier to follow what tool calls are being made by the LLM, both as printed output locally and in Logfire or another monitoring service. It also lets you see the raw requests being sent to OpenAI/Anthropic so you can more easily debug issues.

All it takes to get set up is

pip install logfire

import logfire

logfire.configure(send_to_logfire=False)  # Or True to use the Logfire service
logfire.instrument_openai()  # optional, to trace OpenAI API calls
# logfire.instrument_anthropic()  # optional, to trace Anthropic API calls
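
With tracing configured, ordinary prompt-function calls show up as spans without further changes. A minimal illustrative sketch (the prompt-function below is hypothetical):

from magentic import prompt


@prompt("Say hello to {name}")
def say_hello(name: str) -> str: ...


say_hello("World")  # traced as a span; the OpenAI request appears nested if instrumented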

Check out the new docs page: https://magentic.dev/logging-and-tracing/

PRs

Full Changelog: v0.27.0...v0.28.0

v0.27.0

20 Jun 07:09

What's Changed

New Contributors

Full Changelog: v0.26.0...v0.27.0

v0.26.0

27 May 06:06

What's Changed

Example of a non-streamed response, with usage immediately available

from magentic import OpenaiChatModel, UserMessage

chat_model = OpenaiChatModel("gpt-3.5-turbo", seed=42)
message = chat_model.complete(messages=[UserMessage("Say hello!")])

print(message.usage)
# > Usage(input_tokens=10, output_tokens=9)

Example of a streamed response, where usage only becomes available after the stream has been processed

from magentic import OpenaiChatModel, UserMessage
from magentic.streaming import StreamedStr

chat_model = OpenaiChatModel("gpt-3.5-turbo", seed=42)
message = chat_model.complete(messages=[UserMessage("Say hello!")], output_types=[StreamedStr])

print(message.usage)
# > None, because the stream has not been processed yet

# Process the stream (convert StreamedStr to str)
str(message.content)

print(message.usage)
# > Usage(input_tokens=10, output_tokens=9)

Full Changelog: v0.25.0...v0.26.0

v0.25.0

23 May 06:55

What's Changed

  • Switch AnthropicChatModel to use streaming by @jackmpcollins in #215. StreamedStr now streams correctly, but object streaming is waiting on Anthropic support for streaming array responses.
     from magentic import prompt, StreamedStr
     from magentic.chat_model.anthropic_chat_model import AnthropicChatModel
     
     
     @prompt(
         "Tell me about {topic}.",
         model=AnthropicChatModel("claude-3-opus-20240229"),
     )
     def tell_me_about(topic: str) -> StreamedStr: ...
     
     
     for chunk in tell_me_about("chocolate"):
         print(chunk, end="", flush=True)
  • add optional custom_llm_provider param for litellm by @entropi in #221
  • Add tests for LiteLLM async callbacks by @jackmpcollins in #223
  • Tidy up: Combine openai streamed_tool_call functions by @jackmpcollins in #225

New Contributors

Full Changelog: v0.24.0...v0.25.0

v0.25.0a0

16 May 08:04
Pre-release

Prerelease for testing PR #214