
Releases: logancyang/obsidian-copilot

2.4.15

28 Jan 20:58
82a46b7
  • Allow sending multiple notes to the prompt with one click in Chat mode! You can specify the note context using the new Copilot command "Set note context for Chat mode" (#265)
  • Add an ad-hoc custom prompt for the selection. Thanks to @SeardnaSchmid (#264)

2.4.14

26 Jan 22:28
d6a28ec

Bug fixes

  • Only initialize the embedding manager when switching to QA mode
  • Avoid the OpenAI key error when the key is empty but neither the model nor the embedding provider is set to OpenAI
  • Add back Azure embedding deployment name setting

2.4.13

26 Jan 01:54
48ef083
  • Add the new OpenAI models announced today
  • Two new embedding models, small and large. Small outperforms ada v2 at 1/5 the cost! Large is slightly more expensive than the old ada v2 but has much better quality.
    • You can now set them in the QA settings section
  • New gpt-4-turbo-preview alias pointing to gpt-4-0125-preview, and new gpt-3.5-turbo-0125 (already covered by the alias gpt-3.5-turbo).
  • For more details, check the OpenAI announcement page

2.4.12

25 Jan 00:03
29b93b1
  • Use LCEL for both the Chat and QA chains, and use a multi-query retriever to increase recall
  • Add a running-dots indicator while AI messages load, since conversational QA with LCEL and the multi-query retriever is a bit slower; it shows the user the chat is loading, not stuck
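To illustrate why multi-query retrieval improves recall, here is a minimal conceptual sketch (not the plugin's actual LangChain code; the names are hypothetical): several rephrasings of the user's question are each run against the index, and the deduplicated union of results is returned, so a document missed by one phrasing can still be found by another.

```typescript
// Conceptual sketch of multi-query retrieval. Assumes a toy in-memory
// corpus and a naive keyword-scoring retriever for illustration only.

type Doc = { id: string; text: string };

// Toy retriever: score documents by how many query terms they contain.
function retrieve(corpus: Doc[], query: string, k = 2): Doc[] {
  const terms = query.toLowerCase().split(/\s+/);
  return corpus
    .map((d) => ({
      d,
      score: terms.filter((t) => d.text.toLowerCase().includes(t)).length,
    }))
    .filter((s) => s.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((s) => s.d);
}

// Run every query variant and union the results, deduplicating by doc id.
function multiQueryRetrieve(corpus: Doc[], variants: string[]): Doc[] {
  const seen = new Map<string, Doc>();
  for (const q of variants) {
    for (const doc of retrieve(corpus, q)) seen.set(doc.id, doc);
  }
  return [...seen.values()];
}
```

The extra retrieval passes are also why this mode is a bit slower, which is what the running-dots indicator covers.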

2.4.11

21 Jan 03:15
f84a3bc
  • Implement new Copilot settings components
  • Add custom Ollama base URL
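Since Ollama exposes a plain HTTP API (e.g. /api/generate, /api/chat), a custom base URL setting only needs to replace the default http://localhost:11434 prefix when building request URLs. A minimal sketch of that idea (not the plugin's actual code; the `ollamaUrl` helper name is hypothetical):

```typescript
// Sketch: resolve an Ollama endpoint against an optional custom base URL.
const DEFAULT_OLLAMA_BASE_URL = "http://localhost:11434";

function ollamaUrl(path: string, baseUrl?: string): string {
  // Fall back to the default local server; strip any trailing slash so
  // joining with the path never produces "//api/...".
  const base = (baseUrl?.trim() || DEFAULT_OLLAMA_BASE_URL).replace(/\/+$/, "");
  return `${base}${path}`;
}
```

For example, `ollamaUrl("/api/generate", "http://192.168.1.5:11434/")` resolves to `http://192.168.1.5:11434/api/generate`, letting the plugin talk to an Ollama server on another machine.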

2.4.10

19 Jan 01:16
9440e9c
  • Rename Conversation mode to Chat mode, and QA: Active Note to just QA, in preparation for QA over the whole vault.
  • Add a button to send the active note directly into the prompt in Chat mode. This button shows only in Chat mode and becomes the index button in QA mode.

2.4.9

12 Jan 16:50
195c7de
  • Add OpenRouterAI as a separate option in the model dropdown. You can specify the actual model in the settings. OpenRouter serves free and uncensored LLMs! Visit their site to see the available models: https://openrouter.ai/
  • Bump max tokens to 10,000 and max conversation turns to 30
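OpenRouter exposes an OpenAI-compatible chat completions endpoint, so wiring it up mostly means pointing requests at its URL with the user's chosen model string. A hedged sketch of the request shape (an assumption for illustration, not the plugin's actual code; `buildOpenRouterRequest` and the example model string are hypothetical, and no request is sent here):

```typescript
// Sketch: build (but do not send) a chat completions request for
// OpenRouter's OpenAI-compatible API.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildOpenRouterRequest(
  apiKey: string,
  model: string, // e.g. "mistralai/mistral-7b-instruct", from the setting
  messages: ChatMessage[],
) {
  return {
    url: "https://openrouter.ai/api/v1/chat/completions",
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      // max_tokens here mirrors the new 10,000 cap mentioned above.
      body: JSON.stringify({ model, messages, max_tokens: 10000 }),
    },
  };
}
```

Passing the result to `fetch(req.url, req.init)` would issue the actual call.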

2.4.8

11 Jan 06:21
d2d16cd
  • Add LM Studio and Ollama as two separate options in the model dropdown
  • Add setup guide
  • Remove LocalAI option

2.4.7

08 Jan 03:35
c74737b
  • Add Google API key in settings
  • Add Gemini Pro model
    • I find that this model hallucinates quite a lot at high temperatures. Set the temperature close to 0 for better results.
      • (Screenshots: sample responses at temperature 0.7 vs 0.1)

2.4.6

02 Jan 06:16
b90a165
  • Add Save and Reload buttons to avoid manually toggling the plugin off and on every time settings change. Clicking either button now reloads the plugin so the new settings take effect
  • Fix error handling
    • No more "model_not_found" when the user lacks access to the model; it now explicitly says you have no access
    • Show the missing API key message when the chat model is not properly initialized
    • Show a model switch failure when Azure credentials are not provided
  • Show the actual model name and chain type used in debug messages
  • Make gpt-4-turbo the default model