Releases: logancyang/obsidian-copilot

2.6.0

01 Sep 00:20
0297d78
  • Huge thanks to our awesome @gianluca-venturini for his incredible work on mobile support! Now you can use Copilot on your phone and tablet! 🎉🎉🎉

  • Complete overhaul of how models work in Copilot settings. Now you can add any model to your model picker by providing its name, model provider, API key, and base URL! No more waiting for me to add new models!

  • Say goodbye to CORS errors for both chat and embedding models! The new model table in settings lets you turn on "CORS" for individual chat models if you run into CORS issues with them.
    • Embedding models are immune to CORS errors by default!
    • Caveat: this is powered by Obsidian's requestUrl API, which does not support streaming of LLM responses, so streaming is disabled whenever CORS is on in Copilot settings. Please upvote this feature request to let Obsidian know you need streaming!
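
For context, a non-streaming chat completion through requestUrl looks roughly like the sketch below. This assumes an OpenAI-compatible /chat/completions endpoint; the function and variable names are illustrative, not Copilot's actual code.

```ts
import { requestUrl } from "obsidian";

// Minimal sketch: a non-streaming chat completion via Obsidian's requestUrl,
// which bypasses browser CORS checks but returns the full response at once.
// The function name and settings shape are illustrative assumptions.
async function chatCompletion(
  baseUrl: string,
  apiKey: string,
  model: string,
  messages: { role: string; content: string }[]
): Promise<string> {
  const response = await requestUrl({
    url: `${baseUrl}/chat/completions`,
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    // stream: false because requestUrl cannot consume server-sent events
    body: JSON.stringify({ model, messages, stream: false }),
  });
  return response.json.choices[0].message.content;
}
```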

2.5.5

26 Aug 00:41
60c70d2

Another long-awaited major update: message styling revamp, plus math and code syntax highlighting support! 🎉🎉🎉

  • Messages are now more compact and cleaner, with better math, code, and table support.
  • The Send button turns into a Stop button while a response is streaming; the old standalone Stop button is gone.
  • Some housekeeping and minor tweaks:
    • Refactored Settings components
    • Added Prettier and Husky for a pre-commit formatting hook
    • Show the default system prompt as a placeholder for better visibility
    • Bug fix: corner case when finding notes by path
    • Community contribution: @pontabi's first-ever PR, which aligns the Copy button at the bottom right of messages

2.5.4

10 Aug 19:00
9c0a56d

We have some awesome updates this time!

  • No more CORS errors for any OpenAI replacement API! Now you can use any 3rd-party OpenAI replacement without CORS issues via the new toggle in Advanced settings. Big thanks to @Ebonsignori! #495
  • Gemini 1.5 Pro and Gemini 1.5 Flash added! Thanks to @anpigon #497
  • Custom model fields added for OpenAI and Google. Note that when an OpenAI proxy base URL is present, the override logic is: proxy model name > custom model name (this addition) > model dropdown (see the sketch after this list). #499
  • Add setting to turn built-in Copilot commands on and off to reduce command menu clutter #500
  • Fix 2 long-standing bugs: user messages were duplicated in the saved note, and custom prompt commands went missing when the note was not focused #501 #502
  • GPT-3 models are removed since GPT-4o-mini is superior in every way.
  • When switching models, the actual model name used in the API call is shown in the Notice banner, which makes debugging easier.
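
To illustrate the override order above, here is a minimal sketch; the settings field names are hypothetical and only mirror the described precedence, not Copilot's actual code.

```ts
import { Notice } from "obsidian";

// Minimal sketch of the model-name precedence when an OpenAI proxy base URL is set:
// proxy model name > custom model name > model dropdown.
// The field names below are hypothetical and for illustration only.
interface ModelSettings {
  openAIProxyBaseUrl?: string;
  openAIProxyModelName?: string;
  customModelName?: string;
  dropdownModelName: string;
}

function resolveModelName(settings: ModelSettings): string {
  const name =
    (settings.openAIProxyBaseUrl && settings.openAIProxyModelName) ||
    settings.customModelName ||
    settings.dropdownModelName;
  // Surface the model actually sent to the API, as the release notes describe.
  new Notice(`Model set to ${name}`);
  return name;
}
```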

2.5.3

22 Jul 20:42
0a35a49

Sorry for the delay, folks; I was AFK for quite a while but am back now!

  • GPT-4o and GPT-4o mini are added.
  • "Claude 3" renamed to just "Claude"; it now defaults to the best model, claude-3-5-sonnet-20240620 (reset or manual input required)
  • Fix a bug where source links break when the vault name has spaces
  • Groq is added
  • OpenAI organization ID added
  • Summarize Selection added to context menu
  • fish shell CORS example added

Big thanks for all the community contributions!! #482, #446, #445, #441, #436

2.5.2

14 Mar 14:40
b92fd14
  • Fixed a bug where frontmatter parsing was failing
  • Fix missing command #353
  • Add exclude filter for indexing #334
  • Implement a first iteration of the custom retriever #331
  • Implement note title mention in Chat and Vault QA mode
    • Now typing [[ triggers a modal listing all note titles to pick from (see the sketch after this list)
    • In Chat mode, a direct [[]] note title mention sends the note content with the prompt in the background, similar to how custom prompts work.
    • In Vault QA mode, a direct [[]] note title mention ensures that the retriever puts that note at the top of the source notes
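
A note-title picker like this can be built on Obsidian's FuzzySuggestModal; the sketch below is illustrative and not Copilot's actual implementation.

```ts
import { App, FuzzySuggestModal, TFile } from "obsidian";

// Minimal sketch of a note-title picker built on Obsidian's FuzzySuggestModal.
// onChoose is a hypothetical callback the chat input would use to insert the
// [[Note Title]] mention; Copilot's actual implementation may differ.
class NoteTitleModal extends FuzzySuggestModal<TFile> {
  constructor(app: App, private onChoose: (file: TFile) => void) {
    super(app);
  }

  getItems(): TFile[] {
    // All markdown files in the vault are candidates.
    return this.app.vault.getMarkdownFiles();
  }

  getItemText(file: TFile): string {
    return file.basename;
  }

  onChooseItem(file: TFile): void {
    this.onChoose(file);
  }
}

// Usage: open the modal when "[[" is typed in the chat input, e.g.
// new NoteTitleModal(this.app, (file) => insertMention(`[[${file.basename}]]`)).open();
```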

2.5.1

09 Mar 06:54
b3ec3fd

Bug fixes

Re-indexing for Vault QA is recommended!

2.5.0

05 Mar 06:43
0534761
  • Brand new Vault QA (BETA) mode! This is a highly-anticipated feature and is a big step forward toward the vision of this plugin. Huge shoutout to @AntoineDao for working with me on this! #285
    • Implement more sophisticated chunking and QA flow
    • Rename current QA to Long Note QA
    • Fix Long Note QA logic
    • Add a list of clickable "Source Notes" titles below AI responses
    • Show the chunks retrieved in debug info.
    • Add command to Index Vault for QA
    • Add a Refresh Index button
    • Add another command, Force complete re-index, for Vault QA
    • Add notice banner for indexing progress
    • Local embedding integration with Ollama (see the sketch after this list)
    • Add max sources setting
    • Add an ON_MODE_SWITCH strategy that refreshes the index on mode switch
    • Add a command to count total tokens in the vault, plus a language setting, for cost estimation.
  • Claude 3 integration. You can set the actual Claude 3 model variant in the settings. The default is claude-3-sonnet-20240229
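
As a rough illustration of the local Ollama embedding integration mentioned above, the sketch below calls Ollama's /api/embeddings endpoint through Obsidian's requestUrl. The function name, default model, and defaults are assumptions, not Copilot's actual code.

```ts
import { requestUrl } from "obsidian";

// Minimal sketch: fetch an embedding for a text chunk from a local Ollama server.
// Assumes Ollama's /api/embeddings endpoint on the default port; the function
// name and default model are illustrative only.
async function embedWithOllama(
  text: string,
  model = "nomic-embed-text",
  baseUrl = "http://localhost:11434"
): Promise<number[]> {
  const response = await requestUrl({
    url: `${baseUrl}/api/embeddings`,
    method: "POST",
    contentType: "application/json",
    body: JSON.stringify({ model, prompt: text }),
  });
  // Ollama returns { "embedding": [ ... ] } for this endpoint.
  return response.json.embedding;
}
```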

2.4.18

22 Feb 06:08
73cc302
  • Fix a bug where chat context is not set correctly @Lisandra-dev #304
  • Enable model name, embedding provider URL, and embedding model name overrides for various OpenAI drop-in replacement providers such as one-api #305
  • Add encryption for API keys #306
  • Update Ollama context window setting instruction #307

2.4.17

17 Feb 23:09
2bb2264
  • Add a filter-notes-by-tags option to the "Set note context in Chat mode" command #291
  • Add a filter-notes-by-tags option to Advanced Custom Prompt #296
  • (Chore) Remove all the different Azure model choices and leave one AZURE OPENAI to avoid confusion. The actual Azure model is set in the settings.
  • Fix a bug where model switch fails after copilot commands #298

2.4.16

06 Feb 21:15
848d297
  • Introducing advanced custom prompt! Now custom prompts don't require a text selection, and you can compose long and complex prompts by referencing a note or a folder of notes (see the sketch at the end of this section)! #281
  • Enable setting the full LM Studio URL instead of just the port #283
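
To give a sense of how referencing a note or a folder of notes can work, here is a minimal sketch that expands a path into prompt text. The helper name and the way references are resolved are illustrative assumptions, not Copilot's documented custom prompt syntax.

```ts
import { App, TFile, TFolder } from "obsidian";

// Minimal sketch of expanding a note or folder reference into prompt text.
// The function name and the way references are passed in are illustrative
// assumptions, not Copilot's actual custom prompt implementation.
async function gatherNoteContext(app: App, path: string): Promise<string> {
  const target = app.vault.getAbstractFileByPath(path);
  if (target instanceof TFile) {
    // A single note: include its full content.
    return await app.vault.cachedRead(target);
  }
  if (target instanceof TFolder) {
    // A folder: concatenate every markdown note directly inside it.
    const parts: string[] = [];
    for (const child of target.children) {
      if (child instanceof TFile && child.extension === "md") {
        parts.push(`# ${child.basename}\n${await app.vault.cachedRead(child)}`);
      }
    }
    return parts.join("\n\n");
  }
  return "";
}
```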