diff --git a/docs/examples.md b/docs/examples.md
index 95f336d92..c251fbff2 100644
--- a/docs/examples.md
+++ b/docs/examples.md
@@ -73,6 +73,7 @@ Efficiently process data at scale.
| [Push notifications with workflows](https://github.com/neuml/txtai/blob/master/examples/28_Push_notifications_with_workflows.ipynb) | Generate and push notifications with workflows | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/neuml/txtai/blob/master/examples/28_Push_notifications_with_workflows.ipynb) |
| [Pictures are a worth a thousand words](https://github.com/neuml/txtai/blob/master/examples/35_Pictures_are_worth_a_thousand_words.ipynb) | Generate webpage summary images with DALL-E mini | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/neuml/txtai/blob/master/examples/35_Pictures_are_worth_a_thousand_words.ipynb) |
| [Run txtai with native code](https://github.com/neuml/txtai/blob/master/examples/36_Run_txtai_in_native_code.ipynb) | Execute workflows in native code with the Python C API | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/neuml/txtai/blob/master/examples/36_Run_txtai_in_native_code.ipynb) |
+| [Generative Audio](https://github.com/neuml/txtai/blob/master/examples/66_Generative_Audio.ipynb) | Create generative audio workflows | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/neuml/txtai/blob/master/examples/66_Generative_Audio.ipynb) |
## Model Training
diff --git a/examples/66_Generative_Audio.ipynb b/examples/66_Generative_Audio.ipynb
new file mode 100644
index 000000000..4e613d549
--- /dev/null
+++ b/examples/66_Generative_Audio.ipynb
@@ -0,0 +1,402 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Generative Audio\n",
+ "\n",
+ "[txtai](https://github.com/neuml/txtai) is an all-in-one embeddings database for semantic search, LLM orchestration and language model workflows.\n",
+ "\n",
+ "txtai works with much more than just text! It has rich multimedia and multimodal capabilities.\n",
+ "\n",
+ "This notebook will demonstrate how to build generative audio workflows. These workflows will generate a combined audio stream with text and relevant audio for a series of poems."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Install dependencies\n",
+ "\n",
+ "Install `txtai` and all dependencies."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%%capture\n",
+ "!pip install git+https://github.com/neuml/txtai#egg=txtai[pipeline-audio] autoawq"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Define a Generative Audio workflow\n",
+ "\n",
+ "The next section defines a generative audio workflow. This workflow consists of a set of pipelines as follows:\n",
+ "\n",
+ "- LLM\n",
+ " - Llama 3 model used to describe the emotions of a given story or poem\n",
+ "- Text To Audio\n",
+ " - Builds audio given a text prompt\n",
+ "- Text To Speech\n",
+ " - Converts text to speech\n",
+ "- Audio Mixer\n",
+ " - Joins multiple audio streams together into a single stream"
+ ]
+ },
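Conceptually, the mixer's job can be sketched in a few lines of NumPy. This is only an illustration under simplifying assumptions (matching sample rates, mono arrays, zero-padding the shorter stream), not the `AudioMixer` implementation:

```python
import numpy as np

def mix(stream1, stream2, scale2=1.0):
    """Mix two (waveform, sample rate) tuples into one stream.

    Sketch only: both inputs are assumed to share a sample rate,
    the shorter waveform is zero-padded, and the second stream is
    attenuated by scale2 before summing.
    """
    audio1, rate1 = stream1
    audio2, rate2 = stream2
    assert rate1 == rate2, "resampling is out of scope for this sketch"

    # Pad the shorter waveform so both have equal length
    length = max(len(audio1), len(audio2))
    audio1 = np.pad(audio1, (0, length - len(audio1)))
    audio2 = np.pad(audio2, (0, length - len(audio2)))

    # Sum the waveforms, attenuating the second stream
    return (audio1 + scale2 * audio2, rate1)

speech = (np.array([0.1, 0.2, 0.3]), 32000)
music = (np.array([0.4, 0.4]), 32000)

mixed, rate = mix(speech, music, scale2=0.5)
```

The `scale2` parameter mirrors the idea of keeping background music quieter than the speech track.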
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import logging\n",
+ "\n",
+ "import soundfile as sf\n",
+ "\n",
+ "from IPython.display import Audio, display\n",
+ "\n",
+ "from txtai import LLM\n",
+ "from txtai.pipeline import AudioMixer, TextToAudio, TextToSpeech\n",
+ "from txtai.workflow import Workflow, Task, TemplateTask\n",
+ "\n",
+ "# Enable DEBUG logging\n",
+ "logging.basicConfig()\n",
+ "logging.getLogger(\"txtai.workflow.base\").setLevel(logging.DEBUG)\n",
+ "logging.getLogger(\"txtai.workflow.task.base\").setLevel(logging.DEBUG)\n",
+ "\n",
+ "def play(audio):\n",
+ " # Convert to MP3 to save space\n",
+ " sf.write(\"audio.wav\", audio[0].T, audio[1])\n",
+ " !ffmpeg -i audio.wav -y -b:a 64 audio.mp3 2> /dev/null\n",
+ "\n",
+ " # Play speech\n",
+ " display(Audio(filename=\"audio.mp3\"))\n",
+ " return audio\n",
+ "\n",
+ "# LLM\n",
+ "llm = LLM(\"hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4\")\n",
+ "\n",
+ "# Text to Audio\n",
+ "# Important: The code for musicgen is licensed as MIT but model weights are CC-BY-NC\n",
+ "tta = TextToAudio(\"facebook/musicgen-stereo-small\")\n",
+ "\n",
+ "# Audio mixer\n",
+ "mixer = AudioMixer()\n",
+ "\n",
+ "# Define prompt template\n",
+ "template = \"\"\"\n",
+ "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n",
+ "Write 3-5 emotions, keywords and holidays to describe the following text. ONLY answer with a comma separated list and no preceding statement.\n",
+ "\n",
+ "{text}\n",
+ "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n",
+ "\"\"\"\n",
+ "\n",
+ "# Background music subworkflow\n",
+ "music = Workflow([\n",
+ " TemplateTask(\n",
+ " template=template,\n",
+ " action=llm\n",
+ " ),\n",
+ " Task(action=tta),\n",
+ "])\n"
+ ]
+ },
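One detail worth calling out is the `merge="hstack"` option used when defining tasks later in this notebook: conceptually, a task with multiple actions runs every action over the input batch and pairs the outputs element-wise. The following is a rough sketch of that idea, not txtai's actual `Task` logic:

```python
def hstack(actions, elements):
    """Run each action on the full batch, then pair outputs element-wise.

    Sketch only: with two actions and N inputs, the result is
    N tuples of (output1, output2).
    """
    outputs = [action(elements) for action in actions]
    return list(zip(*outputs))

# Two toy actions standing in for the tts and music pipelines
upper = lambda batch: [x.upper() for x in batch]
length = lambda batch: [len(x) for x in batch]

results = hstack([upper, length], ["abc", "de"])
```

In the workflows below, this is how each input text ends up as a `(speech, music)` pair for the mixer.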
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# \"The Raven\" by Edgar Allan Poe\n",
+ "\n",
+ "The first workflow will generate speech and corresponding background music for the first verse of \"The Raven\" by Edgar Allan Poe.\n",
+ "\n",
+ "This poem is fitting given that Halloween was close at the time of publishing. š¦āā¬šš»"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "DEBUG:txtai.workflow.base:Running Task #0\n",
+ "DEBUG:txtai.workflow.task.base:Inputs: ['\\nOnce upon a midnight dreary, while I pondered, weak and weary,\\n\\nOver many a quaint and curious volume of forgotten loreā\\n\\nWhile I nodded, nearly napping, suddenly there came a tapping,\\n\\nAs of some one gently rapping, rapping at my chamber door.\\n\\nāTis some visitor, I muttered, \"tapping at my chamber doorā Only this and nothing more.ā\\n']\n",
+ "DEBUG:txtai.workflow.task.base:Outputs: [(array([0.00204471, 0.00245908, 0.00251085, ..., 0.00101355, 0.00124749,\n",
+ " 0.00157734], dtype=float32), 32000)]\n",
+ "DEBUG:txtai.workflow.task.base:Inputs: ['\\nOnce upon a midnight dreary, while I pondered, weak and weary,\\n\\nOver many a quaint and curious volume of forgotten loreā\\n\\nWhile I nodded, nearly napping, suddenly there came a tapping,\\n\\nAs of some one gently rapping, rapping at my chamber door.\\n\\nāTis some visitor, I muttered, \"tapping at my chamber doorā Only this and nothing more.ā\\n']\n",
+ "DEBUG:txtai.workflow.base:Running Task #0\n",
+ "DEBUG:txtai.workflow.task.base:Inputs: ['\\n<|begin_of_text|><|start_header_id|>user<|end_header_id|>\\nWrite 3-5 emotions, keywords and holidays to describe the following text. ONLY answer with a comma separated list and no preceding statement.\\n\\n\\nOnce upon a midnight dreary, while I pondered, weak and weary,\\n\\nOver many a quaint and curious volume of forgotten loreā\\n\\nWhile I nodded, nearly napping, suddenly there came a tapping,\\n\\nAs of some one gently rapping, rapping at my chamber door.\\n\\nāTis some visitor, I muttered, \"tapping at my chamber doorā Only this and nothing more.ā\\n\\n<|eot_id|><|start_header_id|>assistant<|end_header_id|>\\n']\n",
+ "Truncation was not explicitly activated but `max_length` is provided a specific value, please use `truncation=True` to explicitly truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.\n",
+ "Starting from v4.46, the `logits` model output will have the same type as the model (except at train time, where it will always be FP32)\n",
+ "DEBUG:txtai.workflow.task.base:Outputs: ['melancholy, mystery, curiosity, introspection, Halloween']\n",
+ "DEBUG:txtai.workflow.base:Running Task #1\n",
+ "DEBUG:txtai.workflow.task.base:Inputs: ['melancholy, mystery, curiosity, introspection, Halloween']\n",
+ "`torch.nn.functional.scaled_dot_product_attention` does not support having an empty attention mask. Falling back to the manual attention implementation. This warning can be removed using the argument `attn_implementation=\"eager\"` when loading the model.Note that this probably happens because `guidance_scale>1` or because you used `get_unconditional_inputs`. See https://github.com/huggingface/transformers/issues/31189 for more information.\n",
+ "DEBUG:txtai.workflow.task.base:Outputs: [(array([[-0.01685709, -0.0192524 , -0.01729976, ..., 0.02864039,\n",
+ " 0.02873872, 0.02577066],\n",
+ " [-0.02714959, -0.0311739 , -0.02744334, ..., 0.2672284 ,\n",
+ " 0.266621 , 0.26353633]], dtype=float32), 32000)]\n",
+ "DEBUG:txtai.workflow.task.base:Outputs: [(array([[-0.01685709, -0.0192524 , -0.01729976, ..., 0.02864039,\n",
+ " 0.02873872, 0.02577066],\n",
+ " [-0.02714959, -0.0311739 , -0.02744334, ..., 0.2672284 ,\n",
+ " 0.266621 , 0.26353633]], dtype=float32), 32000)]\n",
+ "DEBUG:txtai.workflow.base:Running Task #1\n",
+ "DEBUG:txtai.workflow.task.base:Inputs: [((array([0.00204471, 0.00245908, 0.00251085, ..., 0.00101355, 0.00124749,\n",
+ " 0.00157734], dtype=float32), 32000), (array([[-0.01685709, -0.0192524 , -0.01729976, ..., 0.02864039,\n",
+ " 0.02873872, 0.02577066],\n",
+ " [-0.02714959, -0.0311739 , -0.02744334, ..., 0.2672284 ,\n",
+ " 0.266621 , 0.26353633]], dtype=float32), 32000))]\n",
+ "DEBUG:txtai.workflow.task.base:Outputs: [(array([[-0.00638384, -0.00716712, -0.00613903, ..., -0.05081543,\n",
+ " -0.05321765, -0.0549943 ],\n",
+ " [-0.01153009, -0.01312787, -0.01121082, ..., -0.01355931,\n",
+ " -0.02704029, -0.03997342]], dtype=float32), 32000)]\n",
+ "DEBUG:txtai.workflow.base:Running Task #2\n",
+ "DEBUG:txtai.workflow.task.base:Inputs: [(array([[-0.00638384, -0.00716712, -0.00613903, ..., -0.05081543,\n",
+ " -0.05321765, -0.0549943 ],\n",
+ " [-0.01153009, -0.01312787, -0.01121082, ..., -0.01355931,\n",
+ " -0.02704029, -0.03997342]], dtype=float32), 32000)]\n"
+ ]
+ },
+ {
+ "data": {
+ "text/html": [
+ "\n",
+ " \n",
+ " "
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "DEBUG:txtai.workflow.task.base:Outputs: [(array([[-0.00638384, -0.00716712, -0.00613903, ..., -0.05081543,\n",
+ " -0.05321765, -0.0549943 ],\n",
+ " [-0.01153009, -0.01312787, -0.01121082, ..., -0.01355931,\n",
+ " -0.02704029, -0.03997342]], dtype=float32), 32000)]\n"
+ ]
+ },
+ {
+ "data": {
+ "text/plain": [
+ "[(array([[-0.00638384, -0.00716712, -0.00613903, ..., -0.05081543,\n",
+ " -0.05321765, -0.0549943 ],\n",
+ " [-0.01153009, -0.01312787, -0.01121082, ..., -0.01355931,\n",
+ " -0.02704029, -0.03997342]], dtype=float32),\n",
+ " 32000)]"
+ ]
+ },
+ "execution_count": 2,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# Text to speech\n",
+ "tts = TextToSpeech(\"neuml/vctk-vits-onnx\", rate=32000)\n",
+ "\n",
+ "# Define the workflow\n",
+ "workflow = Workflow(tasks=[\n",
+ " Task(action=[lambda x: tts(x, speaker=3), music], merge=\"hstack\", unpack=False),\n",
+ " Task(action=lambda x: mixer(x, scale2=0.5), unpack=False),\n",
+ " Task(action=lambda x: [play(y) for y in x], unpack=False)\n",
+ "])\n",
+ "\n",
+ "list(workflow([\"\"\"\n",
+ "Once upon a midnight dreary, while I pondered, weak and weary,\n",
+ "\n",
+ "Over many a quaint and curious volume of forgotten loreā\n",
+ "\n",
+ "While I nodded, nearly napping, suddenly there came a tapping,\n",
+ "\n",
+ "As of some one gently rapping, rapping at my chamber door.\n",
+ "\n",
+ "āTis some visitor, I muttered, \"tapping at my chamber doorā Only this and nothing more.ā\n",
+ "\"\"\"]))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "This is quite amazing š„\n",
+ "\n",
+ "From a single text verse, we not only generated speech, we also generated spooky background music to go along with it.\n",
+ "\n",
+ "The LLM reads the text and writes a series of emotions, keywords and other descriptive words. That is then passed to a music generation model which creates the corresponding background music. Finally, an audio mixer pipeline joins the streams together and the audio is saved for playback.\n",
+ "\n",
+ "This is the powerā” of txtai workflows. Some may call it \"agentic\". Whatever we want to call it, it is able to combine multiple models small and large into a single execution flow."
+ ]
+ },
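That execution flow can be sketched as a simple fold over callables, where each task's output batch becomes the next task's input batch. This is a conceptual outline only, not the txtai `Workflow` class:

```python
def run_workflow(tasks, elements):
    """Run a batch of elements through a sequence of task callables.

    Sketch only: feed the output of each task into the next one,
    just as the LLM output feeds the music generator above.
    """
    for task in tasks:
        elements = task(elements)
    return list(elements)

# Toy stand-ins for the LLM and text-to-audio pipelines
describe = lambda batch: [f"emotions for: {x}" for x in batch]
generate = lambda batch: [f"audio from '{x}'" for x in batch]

outputs = run_workflow([describe, generate], ["a poem"])
```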
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# \"A Visit from St. Nicholas\" by Clement Clarke Moore\n",
+ "\n",
+ "Next, we'll create audio for the classic Christmas tale, also known as \"The Night Before Christmas\" š šāļø\n",
+ "\n",
+ "We'll use a different voice this time, mine! This is the default voice for [txtai-speecht5-onnx](https://hf.co/neuml/txtai-speecht5-onnx)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "DEBUG:txtai.workflow.base:Running Task #0\n",
+ "DEBUG:txtai.workflow.task.base:Inputs: [\"\\n'Twas the night before Christmas, when all through the house, not a creature was stirring, not even a mouse.\\n\\nThe stockings were hung by the chimney with care, in hopes that Saint Nicholas soon would be there.\\n\\nThe children were nestled all snug in their beds, while visions of sugar plums danced in their heads.\\n\\nAnd mamma in her kerchief, and I in my cap, had just settled our brains, for a long winterās nap.\\n\\nWhen out on the lawn there arose such a clatter, I sprang from my bed to see what was the matter.\\n\"]\n",
+ "DEBUG:txtai.workflow.task.base:Outputs: [(array([-3.9214676e-05, 1.7410064e-04, 1.9779154e-04, ...,\n",
+ " -1.0386602e-03, -9.1643957e-04, -4.9463823e-04], dtype=float32), 32000)]\n",
+ "DEBUG:txtai.workflow.task.base:Inputs: [\"\\n'Twas the night before Christmas, when all through the house, not a creature was stirring, not even a mouse.\\n\\nThe stockings were hung by the chimney with care, in hopes that Saint Nicholas soon would be there.\\n\\nThe children were nestled all snug in their beds, while visions of sugar plums danced in their heads.\\n\\nAnd mamma in her kerchief, and I in my cap, had just settled our brains, for a long winterās nap.\\n\\nWhen out on the lawn there arose such a clatter, I sprang from my bed to see what was the matter.\\n\"]\n",
+ "DEBUG:txtai.workflow.base:Running Task #0\n",
+ "DEBUG:txtai.workflow.task.base:Inputs: [\"\\n<|begin_of_text|><|start_header_id|>user<|end_header_id|>\\nWrite 3-5 emotions, keywords and holidays to describe the following text. ONLY answer with a comma separated list and no preceding statement.\\n\\n\\n'Twas the night before Christmas, when all through the house, not a creature was stirring, not even a mouse.\\n\\nThe stockings were hung by the chimney with care, in hopes that Saint Nicholas soon would be there.\\n\\nThe children were nestled all snug in their beds, while visions of sugar plums danced in their heads.\\n\\nAnd mamma in her kerchief, and I in my cap, had just settled our brains, for a long winterās nap.\\n\\nWhen out on the lawn there arose such a clatter, I sprang from my bed to see what was the matter.\\n\\n<|eot_id|><|start_header_id|>assistant<|end_header_id|>\\n\"]\n",
+ "DEBUG:txtai.workflow.task.base:Outputs: ['Peaceful, Hope, Joy, Christmas, Calm, Slumber, Wonder, Excitement']\n",
+ "DEBUG:txtai.workflow.base:Running Task #1\n",
+ "DEBUG:txtai.workflow.task.base:Inputs: ['Peaceful, Hope, Joy, Christmas, Calm, Slumber, Wonder, Excitement']\n",
+ "DEBUG:txtai.workflow.task.base:Outputs: [(array([[ 0.01238994, 0.00964349, 0.02490981, ..., -0.02256315,\n",
+ " -0.02624696, -0.01479813],\n",
+ " [-0.0111579 , -0.01307007, 0.00245246, ..., 0.02333916,\n",
+ " 0.01998244, 0.02509145]], dtype=float32), 32000)]\n",
+ "DEBUG:txtai.workflow.task.base:Outputs: [(array([[ 0.01238994, 0.00964349, 0.02490981, ..., -0.02256315,\n",
+ " -0.02624696, -0.01479813],\n",
+ " [-0.0111579 , -0.01307007, 0.00245246, ..., 0.02333916,\n",
+ " 0.01998244, 0.02509145]], dtype=float32), 32000)]\n",
+ "DEBUG:txtai.workflow.base:Running Task #1\n",
+ "DEBUG:txtai.workflow.task.base:Inputs: [((array([-3.9214676e-05, 1.7410064e-04, 1.9779154e-04, ...,\n",
+ " -1.0386602e-03, -9.1643957e-04, -4.9463823e-04], dtype=float32), 32000), (array([[ 0.01238994, 0.00964349, 0.02490981, ..., -0.02256315,\n",
+ " -0.02624696, -0.01479813],\n",
+ " [-0.0111579 , -0.01307007, 0.00245246, ..., 0.02333916,\n",
+ " 0.01998244, 0.02509145]], dtype=float32), 32000))]\n",
+ "DEBUG:txtai.workflow.task.base:Outputs: [(array([[ 5.8028224e-04, 6.5627520e-04, 1.4432818e-03, ...,\n",
+ " 2.3652171e-04, -6.1154435e-04, 1.3553995e-03],\n",
+ " [-5.9710984e-04, -4.7940284e-04, 3.2041437e-04, ...,\n",
+ " -2.5115125e-03, -7.7958062e-04, -9.4321877e-05]], dtype=float32), 32000)]\n",
+ "DEBUG:txtai.workflow.base:Running Task #2\n",
+ "DEBUG:txtai.workflow.task.base:Inputs: [(array([[ 5.8028224e-04, 6.5627520e-04, 1.4432818e-03, ...,\n",
+ " 2.3652171e-04, -6.1154435e-04, 1.3553995e-03],\n",
+ " [-5.9710984e-04, -4.7940284e-04, 3.2041437e-04, ...,\n",
+ " -2.5115125e-03, -7.7958062e-04, -9.4321877e-05]], dtype=float32), 32000)]\n"
+ ]
+ },
+ {
+ "data": {
+ "text/html": [
+ "\n",
+ " \n",
+ " "
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "DEBUG:txtai.workflow.task.base:Outputs: [(array([[ 5.8028224e-04, 6.5627520e-04, 1.4432818e-03, ...,\n",
+ " 2.3652171e-04, -6.1154435e-04, 1.3553995e-03],\n",
+ " [-5.9710984e-04, -4.7940284e-04, 3.2041437e-04, ...,\n",
+ " -2.5115125e-03, -7.7958062e-04, -9.4321877e-05]], dtype=float32), 32000)]\n"
+ ]
+ },
+ {
+ "data": {
+ "text/plain": [
+ "[(array([[ 5.8028224e-04, 6.5627520e-04, 1.4432818e-03, ...,\n",
+ " 2.3652171e-04, -6.1154435e-04, 1.3553995e-03],\n",
+ " [-5.9710984e-04, -4.7940284e-04, 3.2041437e-04, ...,\n",
+ " -2.5115125e-03, -7.7958062e-04, -9.4321877e-05]], dtype=float32),\n",
+ " 32000)]"
+ ]
+ },
+ "execution_count": 11,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "tts = TextToSpeech(\"neuml/txtai-speecht5-onnx\", rate=32000)\n",
+ "\n",
+ "# Define the workflow\n",
+ "workflow = Workflow(tasks=[\n",
+ " Task(action=[tts, music], merge=\"hstack\", unpack=False),\n",
+ " Task(action=lambda x: mixer(x, scale2=0.05), unpack=False),\n",
+ " Task(action=lambda x: [play(y) for y in x], unpack=False)\n",
+ "])\n",
+ "\n",
+ "list(workflow([\"\"\"\n",
+ "'Twas the night before Christmas, when all through the house, not a creature was stirring, not even a mouse.\n",
+ "\n",
+ "The stockings were hung by the chimney with care, in hopes that Saint Nicholas soon would be there.\n",
+ "\n",
+ "The children were nestled all snug in their beds, while visions of sugar plums danced in their heads.\n",
+ "\n",
+ "And mamma in her kerchief, and I in my cap, had just settled our brains, for a long winterās nap.\n",
+ "\n",
+ "When out on the lawn there arose such a clatter, I sprang from my bed to see what was the matter.\n",
+ "\"\"\"]))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Wrapping up\n",
+ "\n",
+ "This notebook demonstrated how to build a series of Generative Audio workflows for poems. This capability has potential applications in the creative field.\n",
+ "\n",
+ "Are we at a place where we can have a full pipeline that takes a prompt and generates a full multimedia video? Not quite but we're quite close. Interesting times certainly are ahead!"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "local",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.20"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}