From 9c3e5cff5d01b7783676a7ea21776e14ddd40468 Mon Sep 17 00:00:00 2001
From: fm1320
Date: Mon, 23 Dec 2024 15:04:02 +0000
Subject: [PATCH 1/3] add integrations page and lint

---
 docs/source/get_started/index.rst        |  11 +-
 docs/source/get_started/integrations.rst | 157 +++++++++++++++++++++++
 2 files changed, 161 insertions(+), 7 deletions(-)
 create mode 100644 docs/source/get_started/integrations.rst

diff --git a/docs/source/get_started/index.rst b/docs/source/get_started/index.rst
index b678634e1..ab0956aac 100644
--- a/docs/source/get_started/index.rst
+++ b/docs/source/get_started/index.rst
@@ -6,12 +6,9 @@ Here is the content of our documentation project.

 .. toctree::
-   :maxdepth: 2
-   :caption: Get Started
+   :maxdepth: 1
+   :caption: Contents:

    installation
-   .. adalflow_in_15mins
-
-   community
-
-.. lightrag_in_10_mins
+   integrations
+   quickstart

diff --git a/docs/source/get_started/integrations.rst b/docs/source/get_started/integrations.rst
new file mode 100644
index 000000000..be3680915
--- /dev/null
+++ b/docs/source/get_started/integrations.rst
@@ -0,0 +1,157 @@
+.. _get_started-integrations:
+
+Integrations
+============
+
+AdalFlow integrates with many popular AI and database platforms to provide a comprehensive solution for your LLM applications.
+
+Model Providers
+---------------
+
+.. raw:: html
+
+   <!-- provider card grid: raw HTML content lost in extraction -->
+
+Vector Databases
+----------------
+
+.. raw:: html
+
+   <!-- vector database card grid: raw HTML content lost in extraction -->
+
+Embedding Models
+----------------
+
+.. raw:: html
+
+   <!-- embedding model card grid: raw HTML content lost in extraction -->
+
+.. raw:: html
+
+   <!-- card grid styling (<style> block): CSS lost in extraction -->
+
+Quick Start
+-----------
+
+To use any of these integrations, first install AdalFlow with the appropriate extras:
+
+.. code-block:: bash
+
+   # For model providers
+   pip install "adalflow[openai,anthropic,mistral,bedrock,groq]"
+
+   # For vector databases
+   pip install "adalflow[qdrant,lancedb]"
+
+See the :ref:`installation guide ` for more details.
+
+Usage Examples
+--------------
+
+Check out our tutorials for detailed examples of using these integrations:
+
+- :ref:`Model Clients `
+- :ref:`Vector Databases `
+- :ref:`Embeddings `

From a7f055c7b9c6c6e912212cbcd47b4c030cb6b70b Mon Sep 17 00:00:00 2001
From: fm1320
Date: Sun, 29 Dec 2024 23:43:20 +0000
Subject: [PATCH 2/3] add integrations and githubchat in docs

---
 docs/source/get_started/integrations.rst  |  62 ++++--------
 docs/source/tutorials/index.rst           |   3 +
 docs/source/tutorials/rag_with_memory.rst | 117 ++++++++++++++++++++++
 3 files changed, 137 insertions(+), 45 deletions(-)
 create mode 100644 docs/source/tutorials/rag_with_memory.rst

diff --git a/docs/source/get_started/integrations.rst b/docs/source/get_started/integrations.rst
index be3680915..44f8313ce 100644
--- a/docs/source/get_started/integrations.rst
+++ b/docs/source/get_started/integrations.rst
@@ -23,18 +23,6 @@ Model Providers
         Anthropic
-   <!-- removed card HTML: content lost in extraction -->
-
-Embedding Models
-----------------
+Embedding and Reranking Models
+------------------------------

 .. raw:: html
@@ -93,6 +69,12 @@ Embedding Models
         OpenAI Embeddings
+   <!-- added card HTML: content lost in extraction -->

 .. raw:: html
@@ -132,26 +114,16 @@ Embedding Models
    }

-Quick Start
------------
-
-To use any of these integrations, first install AdalFlow with the appropriate extras:
-
-.. code-block:: bash
-
-   # For model providers
-   pip install "adalflow[openai,anthropic,mistral,bedrock,groq]"
-
-   # For vector databases
-   pip install "adalflow[qdrant,lancedb]"
-
-See the :ref:`installation guide ` for more details.
-
 Usage Examples
 --------------

-Check out our tutorials for detailed examples of using these integrations:
+Have a look at our comprehensive :ref:`tutorials ` featuring all of these integrations, including:
+
+- Model Clients and LLM Integration
+- Vector Databases and RAG
+- Embeddings and Reranking
+- Agent Development
+- Evaluation and Optimization
+- Logging and Tracing

-- :ref:`Model Clients `
-- :ref:`Vector Databases `
-- :ref:`Embeddings `
+Each tutorial provides practical examples and best practices for building production-ready LLM applications.

diff --git a/docs/source/tutorials/index.rst b/docs/source/tutorials/index.rst
index 4985f92b4..754d02170 100644
--- a/docs/source/tutorials/index.rst
+++ b/docs/source/tutorials/index.rst
@@ -166,6 +166,8 @@ Putting it all together
      - Description
    * - :doc:`rag_playbook`
      - Comprehensive RAG playbook according to the SOTA research and the best practices in the industry.
+   * - :doc:`rag_with_memory`
+     - Building RAG systems with conversation memory for enhanced context retention and follow-up handling.


 .. toctree::
@@ -182,6 +184,7 @@ Putting it all together
    text_splitter
    db
    rag_playbook
+   rag_with_memory


diff --git a/docs/source/tutorials/rag_with_memory.rst b/docs/source/tutorials/rag_with_memory.rst
new file mode 100644
index 000000000..f1bfb8221
--- /dev/null
+++ b/docs/source/tutorials/rag_with_memory.rst
@@ -0,0 +1,117 @@
+.. _tutorials-rag_with_memory:
+
+RAG with Memory
+===============
+
+This guide demonstrates how to implement a RAG system with conversation memory using AdalFlow, based on our `github_chat <https://github.com/SylphAI-Inc/github_chat>`_ reference implementation.
+
+Overview
+--------
+
+The github_chat project is a practical RAG implementation that allows you to chat with GitHub repositories while maintaining conversation context.
+It demonstrates:
+
+- Code-aware responses using RAG
+- Memory management for conversation context
+- Support for multiple programming languages
+- Both web and command-line interfaces
+
+Architecture
+------------
+
+The system is built with several key components:
+
+Data Pipeline
+^^^^^^^^^^^^^
+
+.. code-block:: text
+
+   Input Documents → Text Splitter → Embedder → Vector Database
+
+The data pipeline processes repository content through:
+
+1. Document reading and preprocessing
+2. Text splitting for optimal chunk sizes
+3. Embedding generation
+4. Storage in a vector database
+
+RAG System
+^^^^^^^^^^
+
+.. code-block:: text
+
+   User Query → RAG Component → [FAISS Retriever, Generator, Memory]
+                     ↓
+                 Response
+
+The RAG system includes:
+
+- FAISS-based retrieval for efficient similarity search
+- LLM-based response generation
+- A memory component for conversation history
+
+Memory Management
+-----------------
+
+The memory system maintains conversation context through:
+
+1. Dialog turn tracking
+2. Context preservation
+3. Dynamic memory updates
+
+This enables:
+
+- Follow-up questions
+- References to previous context
+- More coherent conversations
+
+Quick Start
+-----------
+
+1. Installation:
+
+.. code-block:: bash
+
+   git clone https://github.com/SylphAI-Inc/github_chat
+   cd github_chat
+   poetry install
+
+2. Set up your OpenAI API key:
+
+.. code-block:: bash
+
+   mkdir -p .streamlit
+   echo 'OPENAI_API_KEY = "your-key-here"' > .streamlit/secrets.toml
+
+3. Run the application:
+
+.. code-block:: bash
+
+   # Web interface
+   poetry run streamlit run app.py
+
+   # Repository analysis
+   poetry run streamlit run app_repo.py
+
+Example Usage
+-------------
+
+Here are some example queries you can try:
+
+.. code-block:: text
+
+   "What does the RAG class do?"
+   "Can you explain how the memory system works?"
+   "Show me the implementation of text splitting"
+   "How is the conversation context maintained?"
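+
+As a rough sketch, the memory behavior these queries exercise can be
+illustrated in a few lines of plain Python (illustrative only — the class
+and method names here are hypothetical, not the actual github_chat code):
+
+.. code-block:: python
+
+   from dataclasses import dataclass, field
+
+   @dataclass
+   class DialogTurn:
+       user_query: str
+       assistant_response: str
+
+   @dataclass
+   class Memory:
+       turns: list = field(default_factory=list)
+
+       def add_turn(self, query: str, response: str) -> None:
+           # Dialog turn tracking: record each exchange as it happens.
+           self.turns.append(DialogTurn(query, response))
+
+       def as_context(self) -> str:
+           # Context preservation: render prior turns into the next prompt
+           # so a follow-up like "how does *it* work?" can be resolved.
+           return "\n".join(
+               f"User: {t.user_query}\nAssistant: {t.assistant_response}"
+               for t in self.turns
+           )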
+
+Implementation Details
+----------------------
+
+The system uses AdalFlow's components:
+
+- :class:`core.embedder.Embedder` for document embedding
+- :class:`core.retriever.Retriever` for similarity search
+- :class:`core.generator.Generator` for response generation
+- Custom memory management for conversation tracking
+
+For detailed implementation examples, check out the `github_chat repository <https://github.com/SylphAI-Inc/github_chat>`_.

From 27d115bacfae945de9427aa9aa234af49971527b Mon Sep 17 00:00:00 2001
From: fm1320
Date: Sun, 29 Dec 2024 23:47:00 +0000
Subject: [PATCH 3/3] better explanations in tutorial

---
 docs/source/tutorials/rag_with_memory.rst | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/docs/source/tutorials/rag_with_memory.rst b/docs/source/tutorials/rag_with_memory.rst
index f1bfb8221..6739898ff 100644
--- a/docs/source/tutorials/rag_with_memory.rst
+++ b/docs/source/tutorials/rag_with_memory.rst
@@ -95,14 +95,17 @@ Quick Start
 Example Usage
 -------------

-Here are some example queries you can try:
-
-.. code-block:: text
-
-   "What does the RAG class do?"
-   "Can you explain how the memory system works?"
-   "Show me the implementation of text splitting"
-   "How is the conversation context maintained?"
+1. **Demo Version (app.py)**
+
+   - Ask about Alice (software engineer)
+   - Ask about Bob (data scientist)
+   - Ask about the company cafeteria
+   - Test memory with follow-up questions
+
+2. **Repository Analysis (app_repo.py)**
+
+   - Enter your repository path
+   - Click "Load Repository"
+   - Ask questions about classes, functions, or code structure
+   - View implementation details in expandable sections

 Implementation Details
 ----------------------
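The retrieve → generate → remember round trip that the tutorial above describes can be sketched as a small self-contained example. This is an illustration only: the retriever and generator are stand-in callables, a plain list of `(query, response)` pairs stands in for the memory component, and none of these names are github_chat's or AdalFlow's actual API.

```python
# Minimal sketch of a RAG round trip with conversation memory.
# All names are illustrative stand-ins, not the github_chat implementation.

def answer(query, history, retrieve, generate):
    chunks = retrieve(query)  # similarity search over indexed repo chunks
    context = "\n".join(f"User: {q}\nAssistant: {a}" for q, a in history)
    prompt = f"{context}\nRetrieved: {' '.join(chunks)}\nQuestion: {query}"
    response = generate(prompt)  # LLM call on history + retrieved context
    history.append((query, response))  # dynamic memory update
    return response

# A follow-up query sees the first turn through the rendered history.
history = []
answer(
    "What does the RAG class do?", history,
    retrieve=lambda q: ["rag.py: class RAG(...)"],
    generate=lambda p: "It wires the retriever, generator and memory together.",
)
follow_up = answer(
    "How does it keep context?", history,
    retrieve=lambda q: ["memory.py: class Memory(...)"],
    generate=lambda p: p,  # echo the prompt to show history is included
)
assert "What does the RAG class do?" in follow_up
```

Note that the turn is recorded only after generation, so each prompt contains all previous turns but never the answer currently being produced.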