v7.3.0
This release adds a new RAG front-end application template, streaming LLM and streaming RAG support, along with significant text extraction improvements.
See below for full details on the new features, improvements and bug fixes.
New Features
- Add support for streaming LLM generation (#680); see the sketch after this list
- Add RAG API endpoint (#735)
- Add RAG deepdive notebook (#737)
- Add RAG example application (#743)
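The snippet below is a minimal sketch of streaming generation with the LLM pipeline. It assumes the pipeline accepts a stream flag that yields tokens as they are produced; the flag name and model path are illustrative, not confirmed API details.

```python
from txtai.pipeline import LLM

# Load an LLM pipeline; the model path below is illustrative
llm = LLM("TheBloke/Mistral-7B-OpenOrca-AWQ")

# Assumes streaming is enabled via a stream flag that yields tokens
# incrementally rather than returning the full response at once
for token in llm("Why is the sky blue?", stream=True):
    print(token, end="", flush=True)
```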
Improvements
- Improve textractor pipeline (#748)
- Add support for specifying the embedding model via the API (#632)
- Update configuration documentation (#705)
- Add RAG alias for the Extractor pipeline (#732)
- Rename Extractor pipeline to RAG (#736); see the sketch after this list
- Support max_seq_length parameter with model pooling (#746)
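As a rough illustration of the Extractor to RAG rename, the sketch below assumes the RAG alias is a drop-in replacement for the existing Extractor pipeline (an embeddings index plus an LLM, with an optional prompt template). The import path, template placeholders, model path and output format are assumptions.

```python
from txtai import Embeddings
from txtai.pipeline import LLM, RAG

# Build a small embeddings index with stored content
embeddings = Embeddings(content=True)
embeddings.index(["txtai is an all-in-one embeddings database"])

# Load an LLM; the model path is illustrative
llm = LLM("TheBloke/Mistral-7B-OpenOrca-AWQ")

# Prompt template; the {question} and {context} placeholders assume the
# existing Extractor template format
template = """Answer the following question using only the context below.
Question: {question}
Context: {context}"""

# RAG is assumed here to be a drop-in alias for the Extractor pipeline
rag = RAG(embeddings, llm, template=template)

# Output format is assumed to follow the existing Extractor pipeline
print(rag("What is txtai?"))
```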