Configured providers, but twinny is not sending any requests to the provider. #242
Comments
Maybe try a restart? The settings look correct to me. Also change the Ollama settings in the extension settings: click the cog in the extension header; there are some API settings for Ollama in there too.
There is an issue with Twinny and WSL-connected VS Code windows. The Continue extension works, but I can't get Twinny to work. Let me know if there's a way for me to help debug this. I've also tried setting the host value to …
Relevant console logs(?):
ERR [Extension Host] Fetch error: TypeError: fetch failed
at node:internal/deps/undici/undici:12345:11
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async t.streamResponse (/home/user/.vscode-server/extensions/rjmacarthy.twinny-3.11.45/out/index.js:2:138539)
console.ts:137 [Extension Host] Fetch error: TypeError: fetch failed
at node:internal/deps/undici/undici:12345:11
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async t.streamResponse (/home/user/.vscode-server/extensions/rjmacarthy.twinny-3.11.45/out/index.js:2:138539)
y @ console.ts:137
I have Twinny 3.12.0 on Linux. I'm running Ollama with mistral-nemo and stable-code, and I get a similar error message to the one above.
Any suggestions on how to fix this?
I also just noticed this; I'm using VSCodium. To be honest, I don't think this was always the case, so I'll update when I figure it out, hopefully.
Facing the same issue. Any updates?
Potential fix twinnydotdev#242 or just "House Keeping"
localhost and 0.0.0.0 are not working for me. I have a responding Ollama on both my local machine and a remote one. The UI responds "==## ERROR ##== : fetch failed". In devtools, no requests are made, so I assume it's on the backend. The last error points here: Line 103 in d528d53.
The fetch config from the stream response:
I can't see it in the Ollama API, but the "Authorization" header should be excluded if the key is undefined. It looks like several files are setting Authorization to "Bearer undefined". I've added a pull request here: #422; several other files use the same fetch. I don't think it's the actual error, though.
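To illustrate the point, here is a minimal sketch of building fetch headers so Authorization is only attached when an API key is actually present. The `buildHeaders` helper and `apiKey` parameter are hypothetical names for illustration, not Twinny's actual code:

```ts
// Sketch: only attach Authorization when an API key is provided,
// so providers such as a local Ollama never see "Bearer undefined".
function buildHeaders(apiKey?: string): Record<string, string> {
  const headers: Record<string, string> = {
    "Content-Type": "application/json",
  }
  if (apiKey) {
    headers["Authorization"] = `Bearer ${apiKey}`
  }
  return headers
}

// Usage: apiKey is undefined for a local Ollama provider,
// so the Authorization header is simply omitted from the request.
const headers = buildHeaders(undefined)
```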
I just noticed, though, that my project is using SSH, and this plugin is being installed on the SSH VS Code server, which isn't going to have access. That's definitely an error. It should probably be installed locally unless otherwise indicated.
This is definitely a problem. I opened up a local workspace and it connects. I'm going to open a specific issue.
Are you using the Ollama template with an OpenAI endpoint, or vice versa?
Potential fix #242 or just "House Keeping"
Please let me know if we should re-open, thanks!
No. I'm fairly certain my issue is related to the plugin installing in the SSH workspace, which doesn't have access to my localhost-run Ollama. See #423 for the most likely cause of multiple error reports. The workaround at the moment for SSH workspaces is to provide an IP and open port to the Ollama server if you're working remotely, or to use a local code source. It might be possible via the open ports section, but I don't know if that's two-way; I think it's just one way, and I don't know how the SSH workspace plugin establishes the connection.
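A quick sketch of verifying that workaround, assuming Node 18+ with global fetch and that Ollama has been started listening on all interfaces (e.g. with OLLAMA_HOST set to 0.0.0.0); the IP address below is a placeholder for the machine actually running Ollama, not anything from this issue:

```ts
// Run on the remote SSH host to confirm it can reach Ollama at all.
const base = "http://192.168.1.50:11434" // placeholder host IP

async function checkOllama(): Promise<void> {
  try {
    const res = await fetch(`${base}/api/tags`) // lists installed models
    console.log("Reachable:", res.status, await res.text())
  } catch (err) {
    console.error("Not reachable from this host:", err)
  }
}

checkOllama()
```

If this fails on the remote host while succeeding locally, the problem is network reachability rather than the extension.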
Describe the bug
I have set up the following providers, and I checked with curl that the /api/generate endpoint on http://duodesk.duo:11434 works. The extension shows a loading circle but is not sending any requests. I also tried setting the Ollama Hostname setting to duodesk.duo, but no luck.
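For reference, a minimal sketch of the same connectivity check the curl command performs, calling Ollama's /api/generate endpoint directly (hostname and model taken from this report; assumes Node 18+ with global fetch). A failure here would point at the network rather than the extension:

```ts
// Sketch: reproduce the curl check against Ollama's /api/generate endpoint.
async function testGenerate(): Promise<void> {
  const res = await fetch("http://duodesk.duo:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "codellama:7b-code",
      prompt: "// hello",
      stream: false, // single JSON response instead of a stream
    }),
  })
  console.log(res.status, await res.json())
}

testGenerate().catch(console.error)
```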
To Reproduce
Just added the providers I have attached.
Expected behavior
It should work with the providers I have, I think?
Screenshots
Logging
Logging is enabled, but I'm not sure where I am supposed to see the logs; I checked the Output tab, but there is no entry for twinny.
API Provider
Ollama running at http://duodesk.duo:11434 on the local network.
Chat or Auto Complete?
Both
Model Name
codellama:7b-code
Desktop (please complete the following information):
Additional context