Releases · logancyang/obsidian-copilot
2.4.15
- Allow sending multiple notes to the prompt with one click in Chat mode! You can specify the note context using the new Copilot command `Set note context for Chat mode` (see the command sketch below). #265
- Add an ad-hoc custom prompt for selection. Thanks to @SeardnaSchmid #264
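A minimal sketch of how a command like this might be registered in an Obsidian plugin. The command name comes from the release note; the command id and the `setNoteContext()` helper are hypothetical placeholders, not the plugin's actual implementation:

```ts
import { Plugin } from "obsidian";

export default class CopilotPlugin extends Plugin {
  async onload() {
    // Register the command named in the release note; the id and callback
    // body are placeholders, not obsidian-copilot's real code.
    this.addCommand({
      id: "set-note-context-chat-mode",
      name: "Set note context for Chat mode",
      callback: () => this.setNoteContext(),
    });
  }

  private setNoteContext(): void {
    // Hypothetical helper: collect the chosen notes and attach them to the chat prompt.
  }
}
```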
2.4.14
Bug fixes
- Only initialize the embedding manager when switching to QA mode (see the lazy-init sketch below)
- Avoid the OpenAI key error when the key is empty but the chat model or embedding provider is not set to OpenAI
- Add back Azure embedding deployment name setting
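A sketch of the lazy-initialization pattern behind the first fix, using hypothetical `EmbeddingManager` and `ChainController` names for illustration only; this is not the plugin's code:

```ts
// Hypothetical types, for illustration only.
class EmbeddingManager {
  constructor(private readonly apiKey: string) {}
}

class ChainController {
  private embeddingManager: EmbeddingManager | null = null;

  // Construct the embedding manager only when the user switches to QA mode,
  // so plain Chat mode never touches embedding settings or keys.
  switchMode(mode: "chat" | "qa", apiKey: string): void {
    if (mode === "qa" && this.embeddingManager === null) {
      this.embeddingManager = new EmbeddingManager(apiKey);
    }
  }
}
```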
2.4.13
- Add the new OpenAI models announced today
- 2 new embedding models, small and large. Small is better than ada v2 but 1/5 the cost! Large is slightly more expensive than the old ada v2 but has much better quality.
- Now you can set them in the QA settings section
- New `gpt-4-turbo-preview` alias that points to `gpt-4-0125-preview`, and new `gpt-3.5-turbo-0125` (already covered by the alias `gpt-3.5-turbo`; see the alias sketch below).
- For more details, check the OpenAI announcement page
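A sketch of how the alias mapping described above could be represented. The alias and snapshot names come from the release note, but the structure and helper are assumptions, not the plugin's actual code:

```ts
// Aliases resolve to the concrete snapshots mentioned in the release note.
const OPENAI_MODEL_ALIASES: Record<string, string> = {
  "gpt-4-turbo-preview": "gpt-4-0125-preview",
  "gpt-3.5-turbo": "gpt-3.5-turbo-0125",
};

// Resolve an alias, falling back to the name itself for exact model ids.
function resolveModel(name: string): string {
  return OPENAI_MODEL_ALIASES[name] ?? name;
}
```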
2.4.12
2.4.11
2.4.10
2.4.9
- Add OpenRouterAI as a separate option in the model dropdown. You can specify the actual model in the setting (see the request sketch below). OpenRouter serves free and uncensored LLMs! Visit their site to check the available models: https://openrouter.ai/
- Bumped max tokens to 10000, and max conversation turns to 30
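For the OpenRouter option above, a hedged sketch of a chat-completion request against OpenRouter's OpenAI-compatible endpoint. The URL, header, and payload shape are assumptions about OpenRouter's public API rather than code from the plugin; check their docs for the current details:

```ts
// Assumed OpenAI-compatible endpoint; see https://openrouter.ai/ for current docs.
const OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions";

async function openRouterChat(apiKey: string, model: string, prompt: string): Promise<string> {
  const res = await fetch(OPENROUTER_URL, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model, // e.g. the model id you entered in the Copilot setting
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```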
2.4.8
- Add LM Studio and Ollama as two separate options in the model dropdown
- Add setup guide
- Remove LocalAI option
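For the LM Studio and Ollama options above, a sketch of the typical default local endpoints. The ports and payloads are the common defaults for those tools, assumed here rather than taken from the plugin or its setup guide:

```ts
// LM Studio usually exposes an OpenAI-compatible server on port 1234.
const LM_STUDIO_URL = "http://localhost:1234/v1/chat/completions";

// Ollama's native API usually listens on port 11434.
const OLLAMA_URL = "http://localhost:11434/api/generate";

async function ollamaGenerate(model: string, prompt: string): Promise<string> {
  const res = await fetch(OLLAMA_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // stream: false returns a single JSON object instead of a token stream.
    body: JSON.stringify({ model, prompt, stream: false }),
  });
  const data = await res.json();
  return data.response;
}
```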
2.4.7
2.4.6
- Add Save and Reload buttons to avoid manually toggling the plugin on and off every time settings change. Clicking either button now triggers a plugin reload so the new settings take effect (see the reload sketch below)
- Fix error handling
- No more "model_not_found" when the user has no access to the model; it now explicitly says you have no access
- Show the missing API key message when the chat model is not properly initialized
- Show a model switch failure when Azure credentials are not provided
- Show the actual model name and chain type used in debug messages
- Make `gpt-4-turbo` the default model
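A sketch of how the Save and Reload behavior could work by disabling and re-enabling the plugin through Obsidian's internal plugin registry. `disablePlugin`/`enablePlugin` are undocumented internals (hence the `any` cast), and this is an assumption about the approach, not the plugin's actual code:

```ts
import { Plugin } from "obsidian";

// Reload a plugin in place so changed settings take effect without a manual toggle.
export async function reloadSelf(plugin: Plugin): Promise<void> {
  const pluginId = plugin.manifest.id;
  // `app.plugins` is an internal, untyped API, hence the cast to `any`.
  const plugins = (plugin.app as any).plugins;
  await plugins.disablePlugin(pluginId);
  await plugins.enablePlugin(pluginId);
}
```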