Problem
When it comes to medical data, users may feel uncomfortable sharing this information with online providers such as OpenAI. To address these privacy concerns, we should give users the option to opt out of OpenAI's services and instead download and use a local LLM on demand for summarisation and interpretation.
Solution
The local LLM will be available for download through the app settings. Once downloaded, users can choose between the local model and OpenAI's services. For simplicity, a single toggle in the settings view would let users switch between the two options.
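As a rough sketch of what the settings toggle could look like, assuming a SwiftUI settings view; the `useLocalLLM` storage key and the view name are illustrative assumptions, not part of the existing codebase:

```swift
import SwiftUI

/// Hypothetical settings view with a single toggle between the local LLM
/// and OpenAI. The "useLocalLLM" key is an assumed name for the persisted flag.
struct ModelSelectionView: View {
    // Persist the user's choice so it survives app restarts.
    @AppStorage("useLocalLLM") private var useLocalLLM = false

    var body: some View {
        Form {
            Section(
                footer: Text("When enabled, summarisation and interpretation run on a locally downloaded model instead of OpenAI.")
            ) {
                Toggle("Use Local LLM", isOn: $useLocalLLM)
            }
        }
        .navigationTitle("Model Settings")
    }
}
```

Using `@AppStorage` keeps the toggle state in `UserDefaults`, so any view (such as the chat view) can read the same flag to decide which backend to call.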
Additional context
To ensure full transparency, the active model (local LLM or OpenAI) should be displayed within the chat so users always know which model is processing their data during conversations. For example, this could be a subtle but noticeable indicator in the chat navigation header.
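One possible shape for that header indicator, again assuming SwiftUI and the same hypothetical `useLocalLLM` flag from the settings sketch; the view structure and SF Symbol choices are illustrative only:

```swift
import SwiftUI

/// Hypothetical chat view showing the active model as a small label
/// in the navigation bar's principal (center) toolbar slot.
struct ChatView: View {
    @AppStorage("useLocalLLM") private var useLocalLLM = false

    var body: some View {
        NavigationStack {
            // ... chat message list goes here ...
            List { /* messages */ }
                .toolbar {
                    ToolbarItem(placement: .principal) {
                        // Subtle but noticeable: icon + caption-sized text.
                        Label(
                            useLocalLLM ? "Local LLM" : "OpenAI",
                            systemImage: useLocalLLM ? "iphone" : "cloud"
                        )
                        .font(.caption)
                        .labelStyle(.titleAndIcon)
                    }
                }
        }
    }
}
```

Placing the label in the `.principal` toolbar position keeps it visible throughout the conversation without taking space away from the message list.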
Code of Conduct
I agree to follow this project's Code of Conduct and Contributing Guidelines