
Configured providers but twinny not sending any requests to provider. #242

Closed
Duoquote opened this issue May 9, 2024 · 14 comments · Fixed by #422
Labels
help wanted (Extra attention is needed)

Comments

Duoquote commented May 9, 2024

Describe the bug
I have set up the providers shown in the screenshot below and checked with curl that the /api/generate endpoint at http://duodesk.duo:11434 works. The extension shows a loading circle but does not send any requests. I also tried setting the Ollama Hostname setting to duodesk.duo, but no luck.
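
For reference, the curl check above corresponds to a request like the following (a minimal sketch using Node 18's built-in fetch; the hostname and model name are the ones from this report):

// Quick reachability check against the Ollama generate endpoint.
// Hostname and model are taken from this issue; adjust as needed.
const OLLAMA_URL = "http://duodesk.duo:11434";

async function checkGenerate(): Promise<void> {
  const res = await fetch(`${OLLAMA_URL}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "codellama:7b-code",
      prompt: "def add(a, b):",
      stream: false, // ask for a single JSON response instead of an NDJSON stream
    }),
  });
  console.log(res.status, await res.text());
}

checkGenerate().catch((err) => console.error("Ollama not reachable:", err));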

To Reproduce
I just added the providers shown in the attached screenshot.

Expected behavior
It should work with the providers I have configured, I think.

Screenshots
[screenshot]

Logging
Logging is enabled, but I'm not sure where I'm supposed to see the logs; I checked the Output tab but there is no entry for twinny.

API Provider
Ollama running at http://duodesk.duo:11434 on the local network.

Chat or Auto Complete?
Both

Model Name
codellama:7b-code

Desktop (please complete the following information):

  • OS: Windows
  • Version: 11

Additional context

Duoquote (Author) commented May 9, 2024

I was looking at other issues and found where to see the logs; the log is as follows:
[screenshot]

Looks like the hostname is set incorrectly; how do I change it?

rjmacarthy (Collaborator) commented May 9, 2024

Maybe try a restart? The settings look correct to me. Also check the Ollama settings in the extension settings: click the cog in the extension header; there are some API settings for Ollama in there too.

rjmacarthy added the help wanted (Extra attention is needed) label May 15, 2024
randomeduc commented May 21, 2024

How can I check the logs? I have the same problem on Windows; I'm running VS Code from WSL: Ubuntu.

  1. Install Ollama and check that it is running at http://localhost:11434
     [screenshot]
  2. Install the models
     [screenshot]
  3. Install and configure the VS Code extension
     Settings: [screenshot]
     Providers: [screenshot]
  4. Test the chat
     [screenshot]

I got this message from the VS Code dev tools:
[screenshot]

But the loading spinner stays there; any help? I also tried 127.0.0.1 as the host, but got the same results.
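
(Note for the WSL setup above: when VS Code runs its server inside WSL2, localhost inside WSL is not the Windows host by default, so an Ollama instance listening on the Windows side may not be reachable at http://localhost:11434 from the extension host. A quick way to see which address answers, as a rough sketch; the candidate list, including reading the Windows host IP from /etc/resolv.conf, assumes the default WSL2 networking mode:)

// Probe candidate hosts from the environment the extension actually runs in
// (e.g. a Node 18+ REPL inside WSL). /api/tags just lists installed models.
import { readFileSync } from "node:fs";

const candidates = ["localhost", "127.0.0.1"];
try {
  const resolv = readFileSync("/etc/resolv.conf", "utf8");
  const m = resolv.match(/nameserver\s+(\S+)/);
  if (m) candidates.push(m[1]); // usually the Windows host under default WSL2 networking
} catch {
  // not running under WSL, or the file is unreadable; just test the loopback candidates
}

for (const host of candidates) {
  fetch(`http://${host}:11434/api/tags`)
    .then((r) => console.log(host, "->", r.status))
    .catch((e) => console.log(host, "-> failed:", (e as Error).message));
}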

jpaveg commented Jul 10, 2024

There is an issue with Twinny and WSL-connected VS Code windows.

The Continue extension works, but I can't get Twinny to work. Let me know if there's a way for me to help debug this.

I've also tried setting the host value to 127.0.0.1, localhost, and the default 0.0.0.0. I restarted Ollama, VS Code, and my computer, but Twinny does not even reach the Ollama server. I do not have this issue when I run Twinny from a VS Code instance that is running without any remote container connectivity (in my case WSL2 specifically).

Relevant console logs(?):

ERR [Extension Host] Fetch error: TypeError: fetch failed
	at node:internal/deps/undici/undici:12345:11
	at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
	at async t.streamResponse (/home/user/.vscode-server/extensions/rjmacarthy.twinny-3.11.45/out/index.js:2:138539)
console.ts:137 [Extension Host] Fetch error: TypeError: fetch failed
	at node:internal/deps/undici/undici:12345:11
	at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
	at async t.streamResponse (/home/user/.vscode-server/extensions/rjmacarthy.twinny-3.11.45/out/index.js:2:138539)
y @ console.ts:137
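
(Aside on the trace above: with Node's built-in undici fetch, "TypeError: fetch failed" is a generic wrapper; the underlying network error, e.g. ECONNREFUSED or a DNS failure, is attached to the error's cause property. A small sketch for reproducing the call manually and surfacing that cause; the URL is illustrative:)

// Reproduce the failing request outside the extension and print the
// underlying cause that hides behind "TypeError: fetch failed".
async function probe(url: string): Promise<void> {
  try {
    const res = await fetch(url);
    console.log("reachable:", res.status);
  } catch (err) {
    // On Node 18+, the wrapped network error (ECONNREFUSED, ENOTFOUND, ...)
    // is available as err.cause.
    console.error("fetch failed:", (err as any).cause ?? err);
  }
}

probe("http://localhost:11434/api/tags");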

changty commented Aug 5, 2024

I have Twinny 3.12.0 on Linux. I'm running Ollama with mistral-nemo and stable-code.
Curling the Ollama server works fine, but Twinny just keeps spinning the circle, like in the screenshots above.
My project is running in a Docker container, but that should not affect Twinny.

I do get a similar error message as above.

ERR [Extension Host] Fetch error: TypeError: fetch failed at node:internal/deps/undici/undici:12345:11 at process.processTicksAndRejections (node:internal/process/task_queues:95:5) at async streamResponse (/root/.vscode-server/extensions/rjmacarthy.twinny-3.12.0/out/index.js:20901:22)

Any suggestions on how to fix this?
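
(One thing worth checking for the container setup above: the /root/.vscode-server path in the trace suggests the VS Code server, and therefore Twinny, may be running inside the container; in that case localhost inside the container is not the host machine. Whether host.docker.internal resolves depends on the setup: it does on Docker Desktop, and on plain Linux only with --add-host=host.docker.internal:host-gateway; the 172.17.0.1 default bridge gateway below is likewise an assumption. A rough probe:)

// Probe likely addresses for a host-side Ollama from inside a dev container.
const hosts = ["localhost", "host.docker.internal", "172.17.0.1"];

for (const h of hosts) {
  fetch(`http://${h}:11434/api/tags`)
    .then((r) => console.log(h, "->", r.status))
    .catch((e) => console.log(h, "-> failed:", (e as Error).message));
}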

@unclemusclez

I also just noticed this; I am using VSCodium.

TBH I don't think this was always the case, so I will update when I figure it out, hopefully.

@unclemusclez

https://stackoverflow.com/questions/70590704/fetching-in-a-vscode-extension-node-fetch-and-nodehttp-issues

Judging from these two issues:

Cannot find module 'node:http' on AWS Lambda v14 and
Problem with importing node-fetch v3.0.0 and node v16.5.0

it looks like the upgrade from node-fetch v3 to v3.1 was "troublesome" and resulted in the error you are seeing for some.

A few users are downgrading to v2 (or you might try v3 rather than v3.1 which you are using).

Uninstall node-fetch and npm install -s node-fetch@2
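
(Context for the suggestion above: node-fetch v3 is ESM-only, so it can no longer be require()d from CommonJS code the way v2 can, which is the usual source of breakage after that upgrade. A small sketch of the difference, assuming a CommonJS TypeScript/Node setup:)

// node-fetch v2 (CommonJS): can be required synchronously.
const fetchV2 = require("node-fetch"); // works with node-fetch@2

// node-fetch v3 (ESM-only): must be imported, e.g. via dynamic import from CJS.
async function getWithV3(url: string) {
  const { default: fetchV3 } = await import("node-fetch"); // works with node-fetch@3
  return fetchV3(url);
}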

@pacman100 (Contributor)

Facing the same issue. Any updates?

disarticulate added a commit to disarticulate/twinny that referenced this issue Dec 12, 2024
Potential fix twinnydotdev#242
or just "House Keeping"
@disarticulate (Contributor)

localhost and 0.0.0.0 are not working for me. I have a responding Ollama on both my local machine and a remote one.

The UI responds with "==## ERROR ##== : fetch failed".

In devtools, no requests are made, so I assume it's on the backend.

The last error says:

ERR [Extension Host] [ERROR_twinny] Message: Fetch error
 Error Type: TypeError
 Error Message: fetch failed
 TypeError: fetch failed
	at node:internal/deps/undici/undici:13392:13
	at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
	at async globalThis.fetch (file:///home/cy/.vscode-server/bin/138f619c86f1199955d53b4166bef66ef252935c/out/vs/workbench/api/node/extensionHostProcess.js:159:19800)
	at async streamResponse (/home/cy/.vscode-server/extensions/rjmacarthy.twinny-3.19.23-linux-x64/out/index.js:58388:22)
console.ts:137 [Extension Host] [ERROR_twinny] Message: Fetch error

The error is logged here:

log.logConsoleError(Logger.ErrorType.Fetch_Error, "Fetch error", error)

The fetch config from the stream response:

[Extension Host] [twinny] ***Twinny Stream Debug***
    Streaming response from 0.0.0.0:11434.
    Request body:
    {
  "model": "llama3.1:latest",
  "stream": true,
  "messages": [
    {
      "role": "user",
      "content": "yhello"
    },
    {
      "role": "assistant",
      "content": "==## ERROR ##== : fetch failed"
    },
    {
      "role": "user",
      "content": "\n  Generate a title for this conversation in under 10 words.\n  It should not contain any special characters or quotes.\n"
    }
  ],
  "keep_alive": "5m",
  "options": {
    "temperature": 0.2,
    "num_predict": 2048
  }
}

    Request options:
    {
  "hostname": "0.0.0.0",
  "port": 11434,
  "path": "/api/chat",
  "protocol": "http",
  "method": "POST",
  "headers": {
    "Content-Type": "application/json",
    "Authorization": "Bearer undefined"
  }
}

I can't see it in the Ollama API docs, but the "Authorization" header should be excluded if the token is undefined. It looks like several files are setting Authorization to "Bearer undefined".
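
(The idea is to only attach the header when a key is actually configured, rather than sending "Bearer undefined". A minimal sketch; the function and variable names are illustrative, not twinny's actual internals:)

// Build request headers, omitting Authorization entirely when no API key is set.
function buildHeaders(apiKey?: string): Record<string, string> {
  const headers: Record<string, string> = {
    "Content-Type": "application/json",
  };
  if (apiKey) {
    headers["Authorization"] = `Bearer ${apiKey}`;
  }
  return headers;
}

// buildHeaders(undefined) -> { "Content-Type": "application/json" }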

I added a pull request here: #422; there are several other files using the fetch.

I don't think it's the actual error though.

@disarticulate (Contributor)

I just noticed, though, that my project is using SSH, and this plugin is being installed on the SSH VS Code server, which isn't going to have access. That's definitely an error. It should probably be installed locally unless otherwise indicated.

@disarticulate (Contributor)

> I just noticed, though, that my project is using SSH, and this plugin is being installed on the SSH VS Code server, which isn't going to have access. […]

This is definitely a problem. I opened up a local workspace and it connects. I'm going to open a specific issue.

@unclemusclez

> localhost and 0.0.0.0 are not working for me. I have a responding Ollama on both my local machine and a remote one. […]

Are you using the Ollama template with an OpenAI endpoint, or vice versa?

rjmacarthy pushed a commit that referenced this issue Dec 13, 2024
Potential fix #242
or just "House Keeping"
@rjmacarthy (Collaborator)

Please let me know if we should re-open, thanks!

disarticulate (Contributor) commented Dec 13, 2024

> Are you using the Ollama template with an OpenAI endpoint, or vice versa?

No. I'm fairly certain my issue is related to the plugin installing in the SSH workspace, which doesn't have access to my locally running Ollama. See #423 for the most likely cause of multiple error reports.

The workaround at the moment for an SSH workspace is to provide an IP and an open port to the Ollama server if you're working remotely, or to use a local code source. It might be possible via the forwarded-ports section, but I don't know if that's two-way; I think it's just one-way, and I don't know how the SSH workspace plugin establishes the connection.
