Releases: devoxx/DevoxxGenieIDEAPlugin

v0.2.23

13 Oct 09:51

• Fix #301: Fix Anthropic models data
• Fix #306: Add tooltips for LLM-specific settings
• Fix #309: Add common file extensions for frontend code

v0.2.22

23 Sep 16:15
cd77c18

Fix: NumberFormatException for cost value

v0.2.21

19 Sep 10:23

• Feat #294: Add support for a custom base URL for OpenAI (see the sketch below)
• Fix #291: Fix OpenAI o1 model context
• Fix #293: Add extra logging to help diagnose the issue
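
A custom base URL usually only needs to reach the client builder. A minimal sketch, assuming the plugin wires this through LangChain4j's OpenAiChatModel (the project migrated to LangChain4j 0.34.0 in v0.2.18 below); the URL, key, and model name are placeholders:

```java
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.openai.OpenAiChatModel;

public class CustomBaseUrlExample {
    public static void main(String[] args) {
        // Point the OpenAI client at a custom endpoint, e.g. a proxy or an
        // OpenAI-compatible gateway. URL, key, and model name are placeholders.
        ChatLanguageModel model = OpenAiChatModel.builder()
                .baseUrl("https://my-gateway.example.com/v1") // custom base URL
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-4o")
                .build();

        System.out.println(model.generate("Hello from DevoxxGenie!"));
    }
}
```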

v0.2.20

13 Sep 13:01
2e18795

Support for OpenAI o1 models 🤩


v0.2.19

12 Sep 18:36
74eec92
• Support for OpenAI o1-preview and o1-mini
• Feat #244: Fix for Jan 👋🏼
• Feat #231: Use .gitignore in the "Copy Project to Prompt" feature (see the sketch below)
• Fix #179: Update Groq models
• Fix #289: Avoid duplicates in LLMModelRegistryService
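
For #231, a much-simplified sketch of .gitignore-aware file collection; real .gitignore semantics (negation with `!`, nested .gitignore files, anchored patterns) are more involved, and this is not the plugin's actual implementation:

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.ArrayList;
import java.util.List;

// Simplified sketch: honour top-level .gitignore entries when collecting
// files for the prompt. Only plain glob-style patterns are handled here.
public class GitignoreFilter {
    public static void main(String[] args) throws IOException {
        Path root = Paths.get(".");
        FileSystem fs = FileSystems.getDefault();
        List<PathMatcher> ignored = new ArrayList<>();
        Path gitignore = root.resolve(".gitignore");
        if (Files.exists(gitignore)) {
            for (String line : Files.readAllLines(gitignore)) {
                line = line.trim();
                if (line.isEmpty() || line.startsWith("#")) continue;
                String base = "**/" + line.replaceAll("/+$", "");
                ignored.add(fs.getPathMatcher("glob:" + base));         // the entry itself
                ignored.add(fs.getPathMatcher("glob:" + base + "/**")); // anything under it
            }
        }
        try (var paths = Files.walk(root)) {
            paths.filter(Files::isRegularFile)
                 .filter(p -> ignored.stream().noneMatch(m -> m.matches(p)))
                 .forEach(System.out::println); // candidate files for the prompt
        }
    }
}
```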


v0.2.18

09 Sep 07:21
51fce65

• Feat #225: Support for OpenRouter
• Fix #220: Sort conversation history by date
• Fix #226: Migrate to LangChain4j 0.34.0 and use the new Gemini (API_KEY) code
• Fix #276: Sort the files in the attachment popup
• Feat #279: Update font size based on LafManagerListener.TOPIC (see the sketch below)

(Demo video: OpenRouter.mp4)
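
For #279, reacting to Look-and-Feel changes comes down to subscribing to the topic on the application message bus. A sketch; `refreshFontSize()` is a hypothetical stand-in for the plugin's actual UI update:

```java
import com.intellij.ide.ui.LafManagerListener;
import com.intellij.openapi.Disposable;
import com.intellij.openapi.application.ApplicationManager;
import com.intellij.util.messages.MessageBusConnection;

// Sketch: re-apply the chat window font size whenever the IDE theme or
// font settings change. refreshFontSize() is a hypothetical callback.
public final class LafChangeSubscriber {
    public static void subscribe(Disposable parentDisposable) {
        MessageBusConnection connection = ApplicationManager.getApplication()
                .getMessageBus().connect(parentDisposable);
        connection.subscribe(LafManagerListener.TOPIC, source -> refreshFontSize());
    }

    private static void refreshFontSize() {
        // Re-read the IDE font settings and update the chat components here.
    }
}
```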

v0.2.17

05 Sep 07:49
• Feat #266: Use OnePixelSplitter for the chat window
• Feat #35: Conversation history panel
• Fix #270: Isolate conversations and chat memory between different projects
• Fix #274: Fix deletion of history messages

(Demo video: History.mp4)

v0.2.16

02 Sep 09:45
• Feat #245: Always show execution time
• Feat #242: Add LMStudio model selection
• Fix #251: Run the LMStudio check every time the LLM provider changes
• Feat #234: Reuse the LMStudio token usage in the response
• Fix #249: Token cost calculation shows consistent results after switching projects
• Feat #256: "Shift+Enter" submits the prompt (see the sketch below)
• Feat #263: Clear the prompt when the response is returned
• Feat #261: Support deepseek.com as an LLM provider
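
For #256, the Shift+Enter binding is standard Swing key-binding work. A sketch; the JTextArea and the submit Runnable stand in for the plugin's actual prompt field and submit logic:

```java
import java.awt.event.ActionEvent;
import javax.swing.AbstractAction;
import javax.swing.JComponent;
import javax.swing.JTextArea;
import javax.swing.KeyStroke;

// Sketch: bind Shift+Enter in the prompt area to a submit action while
// plain Enter keeps inserting a newline.
public class ShiftEnterBinding {
    public static void install(JTextArea promptArea, Runnable submit) {
        promptArea.getInputMap(JComponent.WHEN_FOCUSED)
                  .put(KeyStroke.getKeyStroke("shift ENTER"), "submitPrompt");
        promptArea.getActionMap().put("submitPrompt", new AbstractAction() {
            @Override
            public void actionPerformed(ActionEvent e) {
                submit.run();
            }
        });
    }
}
```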

v0.2.15

21 Aug 14:20
• Feat #219: Mention how many files are used when calculating total tokens (see the sketch below)
• Feat #221: Add multiple selected files using right-click
• Fix #232: "Add full project to prompt" doesn't include the attached project content tokens in the calculation
• Feat #228: Show execution time even when no token usage is provided
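
For #219, the gist is to tally tokens per attached file and surface the file count next to the total. A sketch assuming a jtokkit cl100k_base tokenizer; the plugin may count tokens differently:

```java
import com.knuddels.jtokkit.Encodings;
import com.knuddels.jtokkit.api.Encoding;
import com.knuddels.jtokkit.api.EncodingType;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Sketch: total the token count over the attached files and report how
// many files went into the calculation.
public class TokenReport {
    public static String report(List<Path> files) throws IOException {
        Encoding encoding = Encodings.newDefaultEncodingRegistry()
                                     .getEncoding(EncodingType.CL100K_BASE);
        int total = 0;
        for (Path file : files) {
            total += encoding.countTokens(Files.readString(file));
        }
        return String.format("%d tokens across %d files", total, files.size());
    }
}
```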


v0.2.14

17 Aug 15:01

Fix #217: Prompting local LLMs throws an exception