Releases · devoxx/DevoxxGenieIDEAPlugin
v0.2.13
- Feat #209: Upgraded to LangChain4j 0.33.0
- Fix #211: Class initialization must not depend on services
- Feat #213: Show input/output tokens and cost per request in the response footer (cost arithmetic sketched below)
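The footer figure is simple arithmetic: input and output token counts multiplied by the provider's per-token prices. A minimal sketch of that calculation; the per-1M-token prices below are hypothetical placeholders (the plugin takes actual rates from its Token, Cost and Context Window settings):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Sketch of the per-request cost arithmetic shown in the response footer:
// cost = inputTokens * inputPrice + outputTokens * outputPrice.
public class RequestCost {
    public static void main(String[] args) {
        long inputTokens = 1_200;
        long outputTokens = 350;
        BigDecimal inputPricePer1M = new BigDecimal("5.00");   // USD per 1M tokens, hypothetical
        BigDecimal outputPricePer1M = new BigDecimal("15.00"); // USD per 1M tokens, hypothetical

        BigDecimal million = BigDecimal.valueOf(1_000_000);
        BigDecimal cost = inputPricePer1M.multiply(BigDecimal.valueOf(inputTokens))
                .add(outputPricePer1M.multiply(BigDecimal.valueOf(outputTokens)))
                .divide(million, 6, RoundingMode.HALF_UP);

        System.out.printf("%d in / %d out -> $%s%n", inputTokens, outputTokens, cost.toPlainString());
    }
}
```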
v0.2.12
- Fix #203: Google Web Search is broken
- Feat #199: Show execution time of prompt enhancement
- Fix: Corrected the Token, Cost and Context Window settings page mapping
- Fix #202: Update the Gradle IntelliJ build file so the plugin can be installed on other IntelliJ products
v0.2.10
- Fix #184: Input panel now has a larger minimum/preferred height
- Feat #186: Support for a local llama.cpp HTTP server (see the sketch after this list)
- Feat #191: Add Google model gemini-1.5-pro-exp-0801
- Fix #181: Last selected LLM provider was no longer persisted (fixed by @mydeveloperplanet)
- Feat #181: Support for multiple projects with different LLM providers & language models
- Fix #190: Scroll the output panel to the bottom when new output is added
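llama.cpp's bundled HTTP server speaks an OpenAI-compatible protocol, which is why a generic OpenAI client can drive it. A minimal LangChain4j sketch, assuming the server runs on the default localhost:8080; the dummy API key, model name, and /v1 path are assumptions about a default local setup:

```java
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.openai.OpenAiChatModel;

// Sketch: drive a local llama.cpp HTTP server through LangChain4j's OpenAI client.
public class LlamaCppServerExample {
    public static void main(String[] args) {
        ChatLanguageModel model = OpenAiChatModel.builder()
                .baseUrl("http://localhost:8080/v1") // local llama.cpp server, assumed default port
                .apiKey("not-needed")                // the local server does not check the key
                .modelName("local-model")            // ignored by llama.cpp
                .build();

        System.out.println(model.generate("Say hello from llama.cpp"));
    }
}
```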
v0.2.9
- Fix #183: Allow a remote Ollama instance to be used.
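For illustration, pointing a client at a remote Ollama is just a matter of the base URL. A minimal LangChain4j sketch; the host IP and model name are examples, and Ollama listens on port 11434 by default:

```java
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.ollama.OllamaChatModel;

// Sketch: point the Ollama client at a remote machine instead of localhost.
public class RemoteOllamaExample {
    public static void main(String[] args) {
        ChatLanguageModel model = OllamaChatModel.builder()
                .baseUrl("http://192.168.1.50:11434") // remote machine, example IP
                .modelName("llama3.1")                // example model
                .build();

        System.out.println(model.generate("Hello from a remote Ollama"));
    }
}
```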
v0.2.8
- Support for Exo, which lets you run a local LLM cluster with Llama 3.1 (8B, 70B, and 405B) on your Apple Silicon machines.
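Exo advertises a ChatGPT-compatible API, so any plain HTTP client can reach the cluster. A hedged sketch with Java's built-in HttpClient; the port, path, and model id are assumptions about a default Exo setup and may differ on your instance:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch: send a chat completion request to an Exo cluster's
// ChatGPT-compatible endpoint (URL and model id are assumptions).
public class ExoExample {
    public static void main(String[] args) throws Exception {
        String body = """
                {"model": "llama-3.1-8b",
                 "messages": [{"role": "user", "content": "Hello from the cluster"}]}""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8000/v1/chat/completions"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```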
v0.2.7
- Show the context window size for downloaded Ollama models
- Also allow token calculation + "Add full project" for Ollama models 🔥
v0.2.6
- Renamed the Gemini LLM provider to Google
- Increased the Gemini 1.5 Pro context window to 2M tokens
- Sort LLM providers and model names alphabetically in the combo box
- Refactored LLM cost calculation
v0.2.5
- Feat #171: Support OpenAI GPT-4o mini
- Fix #170: Fixed LMStudio communication
v0.2.4
- Feat #164: Include all attached files in the response output reference
- Feat #166: Improve code inclusion for the chat context
v0.2.3
- Feat #148: Create custom commands
- Feat #157: Calculate tokens for a directory
- Fix #153: Use the "Copy Project" settings when using "Add Directory to Context Window"
- Feat #159: Introduce a variable TokenCalculator based on the selected LLM provider (see the sketch after this list)
- Feat #161: Move predefined command to custom commands
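As a sketch of what a token calculation over a directory involves: walk the tree, read each file, and sum the estimates from a provider-specific tokenizer. This uses LangChain4j's OpenAiTokenizer as one example estimator; the directory and model name are placeholders, and other providers would plug in their own estimator, which is what a per-provider TokenCalculator abstracts:

```java
import dev.langchain4j.model.openai.OpenAiTokenizer;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

// Sketch: estimate the total token count of every file under a directory.
public class DirectoryTokenCount {
    public static void main(String[] args) throws IOException {
        OpenAiTokenizer tokenizer = new OpenAiTokenizer("gpt-4"); // example model name
        try (Stream<Path> files = Files.walk(Path.of("src"))) {   // example directory
            long total = files.filter(Files::isRegularFile)
                    .mapToLong(p -> {
                        try {
                            return tokenizer.estimateTokenCountInText(Files.readString(p));
                        } catch (IOException e) {
                            return 0; // skip unreadable or binary files
                        }
                    })
                    .sum();
            System.out.println("Estimated tokens: " + total);
        }
    }
}
```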