0.5.0 (2024-04-02)
- code: improve concat of strings in ui (#1785) (bac818a)
- docker: set default Docker to use Ollama (#1812) (f83abff)
- docs: Add guide for Llama-CPP Linux AMD GPU support (#1782) (8a836e4)
- docs: upgrade documentation (#1741) (5725181)
- docs: upgrade fern (#1596) (84ad16a)
- ingest: Created a faster ingestion mode - pipeline (#1750) (134fc54)
- llm, embed: Add support for Azure OpenAI (#1698) (1efac6a)
- llm: adds several settings for llamacpp and ollama (#1703) (02dc83e)
- llm: Ollama LLM-Embeddings decouple + longer keep_alive settings (#1800) (b3b0140)
- llm: Ollama timeout setting (#1773) (6f6c785); the new Ollama and RAG knobs are sketched after this list
- local: tiktoken cache within repo for offline (#1467) (821bca3)
- nodestore: add Postgres for the doc and index store (#1706) (68b3a34); see the storage sketch after this list
- rag: expose similarity_top_k and similarity_score to settings (#1771) (087cb0b)
- RAG: Introduce SentenceTransformer Reranker (#1810) (83adc12)
- scripts: add commands to wipe Qdrant and obtain DB stats (#1783) (ea153fb)
- ui: Add Model Information to ChatInterface label (f0b174c)
- ui: add sources check to not repeat identical sources (#1705) (290b9fb)
- UI: Faster startup and document listing (#1763) (348df78)
- ui: maintain score order when curating sources (#1643) (410bf7a)
- unify settings for vector and nodestore connections to PostgreSQL (#1730) (63de7e4)
- wipe per storage type (#1772) (c2d6948)
- docs: Minor documentation amendment (#1739) (258d02d)
- Fixed docker-compose (#1758) (774e256)
- ingest: update script label (#1770) (7d2de5c)
- settings: set a default tokenizer to avoid make setup failing (#1709) (d17c34e)
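Several of the 0.5.0 entries above add configuration knobs rather than code paths. As a rough illustration only — the field names reflect this editor's reading of the PRs cited above, and every value is a placeholder, not a default — the new Ollama and RAG knobs might be exercised like this:

```yaml
# Hypothetical settings.yaml fragment for the 0.5.0 Ollama/RAG knobs.
ollama:
  llm_model: mistral                # LLM decoupled from embeddings (#1800)
  embedding_model: nomic-embed-text
  keep_alive: 5m                    # longer keep_alive (#1800)
  request_timeout: 120.0            # timeout setting (#1773)

rag:
  similarity_top_k: 2               # exposed by #1771
  similarity_value: 0.45            # the similarity_score cutoff from #1771
  rerank:                           # SentenceTransformer reranker (#1810)
    enabled: true
    model: cross-encoder/ms-marco-MiniLM-L-2-v2
    top_n: 3
```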
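The storage entries map to configuration the same way. A minimal sketch, assuming PostgreSQL for the doc/index store (#1706), the unified connection block (#1730), and the pipeline ingest mode (#1750); all values are placeholders:

```yaml
# Hypothetical fragment: PostgreSQL-backed node store plus pipeline ingestion.
nodestore:
  database: postgres       # doc and index store on Postgres (#1706)

postgres:                  # one connection block shared with the vector store (#1730)
  host: localhost
  port: 5432
  database: postgres
  user: postgres
  password: admin
  schema_name: private_gpt

embedding:
  ingest_mode: pipeline    # faster ingestion mode (#1750)
  count_workers: 4         # assumed worker-count knob; illustrative
```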
0.4.0 (2024-03-06)
0.3.0 (2024-02-16)
- add mistral + chatml prompts (#1426) (e326126)
- Add stream information to generate SDKs (#1569) (24fae66)
- API: Ingest plain text (#1417) (6eeb95e)
- bulk-ingest: Add --ignored Flag to Exclude Specific Files and Directories During Ingestion (#1432) (b178b51)
- llm: Add openailike llm mode (#1447) (2d27a9f), closes #1424
- llm: Add support for Ollama LLM (#1526) (6bbec79)
- settings: Configurable context_window and tokenizer (#1437) (4780540); see the sketch after this list
- settings: Update default model to TheBloke/Mistral-7B-Instruct-v0.2-GGUF (#1415) (8ec7cf4)
- ui: make chat area stretch to fill the screen (#1397) (c71ae7c)
- UI: Select file to Query or Delete + Delete ALL (#1612) (aa13afd)
- Adding an LLM param to fix broken generator from llamacpp (#1519) (869233f)
- deploy: fix local and external dockerfiles (fde2b94)
- docker: fix broken copy in Docker build (#1419) (059f358)
- docs: Update quickstart doc and set version in pyproject.toml to 0.2.0 (0a89d76)
- minor bug in chat stream output - python error being serialized (#1449) (6191bcd)
- settings: correct yaml multiline string (#1403) (2564f8d)
- tests: load the test settings only when running tests (d3acd85)
- UI: Updated ui.py so the CPU is no longer a bottleneck (24fb80c)
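The 0.3.0 settings changes lend themselves to the same kind of sketch. Names follow the PRs above as this editor reads them (llm.mode, llm.context_window, llm.tokenizer, and the local model fields); values are examples, not defaults:

```yaml
# Hypothetical fragment illustrating the 0.3.0 LLM settings.
llm:
  mode: openailike      # OpenAI-compatible mode (#1447); an ollama mode came with #1526
  context_window: 3900  # made configurable by #1437
  tokenizer: mistralai/Mistral-7B-Instruct-v0.2   # made configurable by #1437

local:
  llm_hf_repo_id: TheBloke/Mistral-7B-Instruct-v0.2-GGUF  # new default model (#1415)
  prompt_style: mistral   # mistral and chatml prompt styles added by #1426
```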
0.2.0 (2023-12-10)
- llm: drop default_system_prompt (#1385) (a3ed14c)
- ui: Allows User to Set System Prompt via "Additional Options" in Chat Interface (#1353) (145f3ec)
0.1.0 (2023-11-30)
- Disable Gradio Analytics (#1165) (6583dc8)
- Drop loguru and use builtin logging (#1133) (64c5ae2)
- enable resume download for hf_hub_download (#1249) (4197ada)
- move torch and transformers to local group (#1172) (0d677e1)
- Qdrant support (#1228) (03d1ae6); see the sketch after this list
- Docker and sagemaker setup (#1118) (895588b)
- fix pytorch version to avoid wheel bug (#1123) (24cfddd)
- Remove global state (#1216) (022bd71)
- sagemaker config and chat methods (#1142) (a517a58)
- typo in README.md (#1091) (ba23443)
- Windows 11 failing to auto-delete tmp file (#1260) (0d52002)
- Windows permission error on ingest service tmp files (#1280) (f1cbff0)
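The Qdrant entry above (#1228) is the one 0.1.0 feature with a configuration surface. A minimal sketch, assuming the local on-disk mode and an illustrative path:

```yaml
# Hypothetical fragment: Qdrant as the vector store (#1228).
vectorstore:
  database: qdrant

qdrant:
  path: local_data/private_gpt/qdrant   # illustrative on-disk location
```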
0.0.2 (2023-10-20)
- Initial version (490d93f)