Releases: jackmpcollins/magentic
v0.11.0
What's Changed
- Add support for Azure via OpenaiChatModel by @jackmpcollins in #65 (sketch below)
Full Changelog: v0.10.0...v0.11.0
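A rough sketch of the new Azure support, assuming the `api_type` parameter added in #65 and that the Azure-specific environment variables required by the `openai` package are already set:

```python
from magentic import prompt
from magentic.chat_model.openai_chat_model import OpenaiChatModel


# Assumes the openai package's Azure credentials/endpoint are configured
# via environment variables; the deployment name here is illustrative
@prompt(
    "Say hello",
    model=OpenaiChatModel("gpt-35-turbo", api_type="azure"),
)
def say_hello() -> str:
    ...
```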
v0.10.0
Full Changelog: v0.9.1...v0.10.0
v0.9.1
Full Changelog: v0.9.0...v0.9.1
v0.9.0
What's Changed
- Add LiteLLM backend by @jackmpcollins in #54
Full Changelog: v0.8.0...v0.9.0
Example of LiteLLM backend
```python
from magentic import prompt
from magentic.chat_model.litellm_chat_model import LitellmChatModel


@prompt(
    "Talk to me! ",
    model=LitellmChatModel("ollama/llama2"),
)
def say_hello() -> str:
    ...


say_hello()
```
See the Backend/LLM Configuration section of the README for how to set the LiteLLM backend as the default; a sketch of that configuration follows.
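For illustration, a minimal sketch of making LiteLLM the default backend, assuming the `MAGENTIC_BACKEND` and `MAGENTIC_LITELLM_MODEL` environment variables described in that README section:

```python
import os

# Assumed settings from the README's Backend/LLM Configuration section;
# set them before importing magentic so the defaults are picked up
os.environ["MAGENTIC_BACKEND"] = "litellm"
os.environ["MAGENTIC_LITELLM_MODEL"] = "ollama/llama2"

from magentic import prompt


@prompt("Talk to me! ")  # Now uses the LiteLLM backend by default
def say_hello() -> str:
    ...
```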
v0.8.0
What's Changed
- Make backend configurable by @jackmpcollins in #46
- Bump urllib3 from 2.0.6 to 2.0.7 by @dependabot in #47
- Replace black with ruff formatter by @jackmpcollins in #48
- Handle pydantic generic BaseModel in name_type and function schema by @jackmpcollins in #52
- Allow ChatModel to be set with context manager by @jackmpcollins in #53
Full Changelog: v0.7.2...v0.8.0
The chat_model/LLM can now be set using a context manager. This makes it easy to reuse the same prompt-function with different models, and neater to set the model dynamically.
```python
from magentic import OpenaiChatModel, prompt


@prompt("Say hello")
def say_hello() -> str:
    ...


@prompt(
    "Say hello",
    model=OpenaiChatModel("gpt-4", temperature=1),
)
def say_hello_gpt4() -> str:
    ...


say_hello()  # Uses env vars or default settings

with OpenaiChatModel("gpt-3.5-turbo"):
    say_hello()  # Uses gpt-3.5-turbo due to context manager
    say_hello_gpt4()  # Uses gpt-4 with temperature=1 because explicitly configured
```
v0.7.2
What's Changed
- Allow setting max_tokens param by @jackmpcollins in #45
Full Changelog: v0.7.1...v0.7.2
Allow setting the `max_tokens` param in `OpenaiChatModel`. The default value for this can also be set using the environment variable `MAGENTIC_OPENAI_MAX_TOKENS`.
Example
```python
from magentic import prompt
from magentic.chat_model.openai_chat_model import OpenaiChatModel


@prompt("Hello, how are you?", model=OpenaiChatModel(max_tokens=3))
def test() -> str:
    ...


test()
# 'Hello! I'
```
v0.7.1
What's Changed
- Bump urllib3 from 2.0.4 to 2.0.6 by @dependabot in #36
- Add optional max_calls argument to prompt_chain by @pachacamac in #39 (sketch below)
- Add instructions for setting OPENAI_API_BASE by @jackmpcollins in #40
New Contributors
- @pachacamac made their first contribution in #39
Full Changelog: v0.7.0...v0.7.1
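A minimal sketch of the new `max_calls` argument to `prompt_chain`. The helper function here is hypothetical, and `max_calls` is assumed to cap the number of function-call rounds before the chain stops:

```python
from magentic import prompt_chain


# Hypothetical helper function, for illustration only
def get_current_time() -> str:
    """Get the current time."""
    return "12:00"


@prompt_chain(
    "What time is it?",
    functions=[get_current_time],
    max_calls=2,  # assumed: upper bound on LLM function-call rounds
)
def ask_time() -> str:
    ...
```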
v0.7.0
What's Changed
- Add Asyncio section to README by @jackmpcollins in #28
- Add chatprompt decorator by @jackmpcollins in #34
- Make openai tests less flaky by @jackmpcollins in #35
Full Changelog: v0.6.0...v0.7.0
Chat Prompting
The `@chatprompt` decorator works just like `@prompt` but allows you to pass chat messages as a template rather than a single text prompt. This can be used to provide a system message or for few-shot prompting where you provide example responses to guide the model's output. Format fields denoted by curly braces `{example}` will be filled in all messages - use the `escape_braces` function to prevent a string being used as a template.
```python
from magentic import chatprompt, AssistantMessage, SystemMessage, UserMessage
from magentic.chatprompt import escape_braces

from pydantic import BaseModel


class Quote(BaseModel):
    quote: str
    character: str


@chatprompt(
    SystemMessage("You are a movie buff."),
    UserMessage("What is your favorite quote from Harry Potter?"),
    AssistantMessage(
        Quote(
            quote="It does not do to dwell on dreams and forget to live.",
            character="Albus Dumbledore",
        )
    ),
    UserMessage("What is your favorite quote from {movie}?"),
)
def get_movie_quote(movie: str) -> Quote:
    ...


get_movie_quote("Iron Man")
# Quote(quote='I am Iron Man.', character='Tony Stark')
```
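The example above does not use `escape_braces`; a minimal sketch of it, assuming it escapes literal braces so they are not treated as format fields:

```python
from magentic.chatprompt import escape_braces

# Hypothetical message containing literal braces that should not be
# interpreted as template format fields
safe_text = escape_braces('Reply with {"quote": ..., "character": ...} JSON')
```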
v0.6.0
What's Changed
- Move function schemas into own file by @jackmpcollins in #18
- Bump certifi from 2023.5.7 to 2023.7.22 by @dependabot in #21
- Bump jupyter-server from 2.7.0 to 2.7.2 by @dependabot in #20
- Bump tornado from 6.3.2 to 6.3.3 by @dependabot in #19
- Add example notebook for Chain of Verification by @jackmpcollins in #22
- Handle Iterable type with no item type by @jackmpcollins in #24
- Handle BaseModel parameters in functions by @jackmpcollins in #23 (sketch below)
- Make AsyncIterableFunctionSchema.serialize_args raise NotImplementedError by @jackmpcollins in #25
- Rename chat_model files by @jackmpcollins in #26
New Contributors
- @dependabot made their first contribution in #21
Full Changelog: v0.5.0...v0.6.0
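A sketch of a function with a pydantic `BaseModel` parameter, now handled when generating the function schema (#23). The model and function here are hypothetical:

```python
from pydantic import BaseModel

from magentic import prompt_chain


class Order(BaseModel):
    item: str
    quantity: int


# Hypothetical function taking a BaseModel parameter; magentic can now
# generate a function schema for it so the LLM can call it
def place_order(order: Order) -> str:
    """Place an order for an item."""
    return f"Ordered {order.quantity} x {order.item}"


@prompt_chain(
    template="{request}",
    functions=[place_order],
)
def handle_request(request: str) -> str:
    ...
```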
v0.5.0
What's Changed
- Add docstrings where useful by @jackmpcollins in #14
- Enable async prompt_chain. Remove FunctionCallMessage by @jackmpcollins in #17
Full Changelog: v0.4.1...v0.5.0
```python
from magentic import prompt_chain


async def get_current_weather(location, unit="fahrenheit"):
    """Get the current weather in a given location"""
    return {
        "location": location,
        "temperature": "72",
        "unit": unit,
        "forecast": ["sunny", "windy"],
    }


@prompt_chain(
    template="What's the weather like in {city}?",
    functions=[get_current_weather],
)
async def describe_weather(city: str) -> str:
    ...


output = await describe_weather("Boston")
```