
Releases: jackmpcollins/magentic

v0.4.1

13 Sep 09:08

What's Changed

New Contributors

Full Changelog: v0.4.0...v0.4.1

v0.4.0

10 Sep 07:33

What's Changed

Full Changelog: v0.3.0...v0.4.0


Configuration

The order of precedence of configuration is:

  1. Arguments passed when initializing an instance in Python
  2. Environment variables

The following environment variables can be set.

  • MAGENTIC_OPENAI_MODEL: the OpenAI model to use, e.g. "gpt-4"
  • MAGENTIC_OPENAI_TEMPERATURE: the OpenAI temperature, as a float
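
As a minimal sketch of the lower-precedence path, the variables above can also be set from Python before magentic is used. The greet function here is a made-up prompt function for illustration; only the variable names come from the list above.

import os

# Set before importing magentic, in case settings are read at import time.
# These take effect unless overridden by arguments passed in Python.
os.environ["MAGENTIC_OPENAI_MODEL"] = "gpt-4"
os.environ["MAGENTIC_OPENAI_TEMPERATURE"] = "0.5"

from magentic import prompt


@prompt("Say hello to {name}.")
def greet(name: str) -> str:
    ...


greet("World")  # uses gpt-4 at temperature 0.5, per the variables above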

v0.3.0

09 Sep 09:51

What's Changed

Full Changelog: v0.2.0...v0.3.0


Object Streaming

Structured outputs can also be streamed from the LLM by using the return type annotation Iterable (or AsyncIterable). This allows each item to be processed while the next one is being generated. See the example in examples/quiz for how this can be used to improve user experience by quickly displaying/using the first item returned.

from collections.abc import Iterable
from time import time

from magentic import prompt
from pydantic import BaseModel


# Structured output type; fields match the items printed below
class Superhero(BaseModel):
    name: str
    age: int
    power: str
    enemies: list[str]


@prompt("Create a Superhero team named {name}.")
def create_superhero_team(name: str) -> Iterable[Superhero]:
    ...


start_time = time()
for hero in create_superhero_team("The Food Dudes"):
    print(f"{time() - start_time:.2f}s : {hero}")

# 2.23s : name='Pizza Man' age=30 power='Can shoot pizza slices from his hands' enemies=['The Hungry Horde', 'The Junk Food Gang']
# 4.03s : name='Captain Carrot' age=35 power='Super strength and agility from eating carrots' enemies=['The Sugar Squad', 'The Greasy Gang']
# 6.05s : name='Ice Cream Girl' age=25 power='Can create ice cream out of thin air' enemies=['The Hot Sauce Squad', 'The Healthy Eaters']
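
The return type can also be AsyncIterable, as noted above. A minimal sketch of how that might look, assuming the same Superhero model as in the example above and that the awaited async prompt function returns the async iterable; the function name here is hypothetical.

import asyncio
from collections.abc import AsyncIterable

from magentic import prompt


@prompt("Create a Superhero team named {name}.")
async def acreate_superhero_team(name: str) -> AsyncIterable[Superhero]:
    ...


async def main() -> None:
    # Each hero can be used as soon as it has been generated
    heroes = await acreate_superhero_team("The Food Dudes")
    async for hero in heroes:
        print(hero)


asyncio.run(main())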

v0.2.0

15 Aug 07:17

What's Changed

Full Changelog: v0.1.4...v0.2.0


Streaming

The StreamedStr (and AsyncStreamedStr) classes can be used to stream the output of the LLM. This allows you to process the text while it is being generated, rather than receiving the whole output at once. Multiple StreamedStr instances can be created at the same time to stream LLM outputs concurrently. In the example below, generating the descriptions for multiple countries takes approximately the same amount of time as for a single country.

from magentic import prompt, StreamedStr


@prompt("Tell me about {country}")
def describe_country(country: str) -> StreamedStr:
    ...


# Print the chunks while they are being received
for chunk in describe_country("Brazil"):
    print(chunk, end="")
# 'Brazil, officially known as the Federative Republic of Brazil, is ...'


# Generate text concurrently by creating the streams before consuming them
streamed_strs = [describe_country(c) for c in ["Australia", "Brazil", "Chile"]]
[str(s) for s in streamed_strs]
# ["Australia is a country ...", "Brazil, officially known as ...", "Chile, officially known as ..."]

v0.1.4

12 Aug 06:01

What's Changed

  • Remove ability to use function docstring as template by @jackmpcollins in #3
  • Raise StructuredOutputError from ValidationError to clarify error by @jackmpcollins in #4

Full Changelog: v0.1.3...v0.1.4

v0.1.3

31 Jul 05:57

Main Changes

Commits

  • b7adc1a Support async prompt functions (#2)
  • 428596e Add example for RAG with wikipedia
  • 7e96aae Add test for parsing/serializing str|None
  • 2d676c9 Use __all__ to explicitly export from top-level
  • c74d121 poetry add --group examples wikipedia
  • 6fe55d9 Add examples/quiz
  • e389547 Set --cov-report=term-missing for pytest-cov

Full Changelog: v0.1.2...v0.1.3

v0.1.2

21 Jul 07:41

Main Changes

  • Handle pydantic models as dictionary values in DictFunctionSchema.serialize_args
  • Exclude unset parameters when creating FunctionCall in FunctionCallFunctionSchema.parse_args
  • Add FunctionCall.__eq__ method
  • Increase test coverage

Commits

  • 506d689 poetry update - address aiohttp CVE
  • feac090 Update README: improve first example, add more explanation
  • dab90cf poetry add jupyter --group examples
  • 992e65e poetry add pytest-cov
  • a05f057 Test FunctionCallFunctionSchema serialize_args, and FunctionCall
  • ed8e9d9 Test AnyFunctionSchema serialize_args
  • 606cb30 Test DictFunctionSchema serialize_args
  • ae6218e Test OrderedDict works with parse_args
  • 82c1d41 Tidy function_schemas creation in Model.complete

Full Changelog: v0.1.1...v0.1.2

v0.1.1

17 Jul 06:29

Main Changes

  • Improve handling of dict return types
  • Increase test coverage

Commits

  • 9953f3c Add DictFunctionSchema to improve handling dict return type
  • 9ac454f Handle Any in is_origin_subclass. Add tests for this
  • 8316213 Handle missing type hints in FunctionCallFunctionSchema. Extend tests for this.
  • 90e98f5 Add test cases for AnyFunctionSchema
  • 77e08ed Add magentic as known-first-party for ruff
  • 03eac45 Add name_type function. Add docstrings to typing.py
  • 0cee18f Use pydantic.create_model in FunctionCallFunctionSchema to fix warning
  • 2726f78 Fix type hint for FunctionResultMessage.from_function_call
  • c0cf2d5 Disambiguate return_type variable in PromptFunction init
  • ec0e114 Fix FunctionCall example type annotations in readme

Full Changelog: v0.1.0...v0.1.1

v0.1.0

15 Jul 07:32

Initial release