Ollama is not responding to Open WebUI when Indexing codebase #3566

Open
3 tasks done
danielaskdd opened this issue Dec 30, 2024 · 0 comments
Labels: area:indexing (Relates to embedding and indexing), ide:vscode (Relates specifically to VS Code extension), kind:bug (Indicates an unexpected problem or unintended behavior), needs-triage

Comments


danielaskdd commented Dec 30, 2024

Before submitting your bug report

Relevant environment info

- OS: macOS Sequoia 15.2 (Intel Core i9)
- Continue version: 0.9.248
- IDE version: VS Code 1.96.2
- Model: (models served by Ollama or Open WebUI on the local network)
- config.json:
  
{
  "models": [
    {
      "title": "Qwen-coder-7b",
      "provider": "ollama",
      "model": "qwen2.5-coder:7b",
      "systemMessage": "Speak Chinese. When generating commit message, write in Chinese.",
      "apiBase": "http://m4.lan.xxxxxx.com:11434"
    },
    {
      "title": "Deepseek-coder",
      "provider": "openai",
      "model": "deepseek-coder",
      "systemMessage": "Speak Chinese. When generating commit message, write in Chinese.",
      "apiBase": "https://ai.xxxxxx.com:5013/api",
      "useLegacyCompletionsEndpoint": false,
      "apiKey": "sk-api-key"
    },
    {
      "title": "Claude3.5 Sonnet",
      "provider": "openai",
      "model": "anthropic/claude-3.5-sonnet:beta",
      "systemMessage": "Speak Chinese. When generating commit message, write in Chinese.",
      "apiBase": "https://ai.xxxxxx.com:5013/api",
      "useLegacyCompletionsEndpoint": false,
      "apiKey": "sk-api-key"
    },
    {
      "title": "o1-mini",
      "provider": "openai",
      "model": "o1-mini",
      "systemMessage": "Speak Chinese. When generating commit message, write in Chinese.",
      "apiBase": "https://ai.xxxxxx.com:5013/api",
      "useLegacyCompletionsEndpoint": false,
      "apiKey": "sk-api-key"
    },
    {
      "title": "gpt-4o",
      "provider": "openai",
      "model": "gpt-4o",
      "systemMessage": "Speak Chinese. When generating commit message, write in Chinese.",
      "apiBase": "https://ai.xxxxxx.com:5013/api",
      "useLegacyCompletionsEndpoint": false,
      "apiKey": "sk-api-key"
    },
    {
      "title": "Qwen-Coder-Plus",
      "provider": "openai",
      "model": "qwen-coder-plus",
      "systemMessage": "Speak Chinese. When generating commit message, write in Chinese.",
      "apiBase": "https://ai.xxxxxx.com:5013/api",
      "useLegacyCompletionsEndpoint": false,
      "apiKey": "sk-api-key"
    }
],
  "embeddingsProvider": {
    "title": "BGE-M3",
    "provider": "ollama",
    "model": "bge-m3:latest",
    "apiBase": "http://m4.lan.xxxxxx.com:11434"
  },
  "tabAutocompleteModel": {
    "title": "Tab Autocomplete",
    "provider": "ollama",
    "model": "qwen2.5-coder:1.5b",
    "apiBase": "http://m4.lan.xxxxxx.com:11434"
  },
  "reranker": {
    "name": "voyage",
    "params": {
      "model": "rerank-2",
      "apiKey": "pa-api-key"
    }
  },
  "customCommands": [
    {
      "name": "explain",
      "prompt": "{{{ input }}}\n\n使用中文详细解释所选代码。详细说明其功能、用途和工作原理。",
      "description": "解释代码"
    },
    {
      "name": "check",
      "prompt": "{{{ input }}}\n\n检查选定代码的语法和逻辑错误,如果有明显可以优化的地方,请给出优化建议。使用中文回答问题。",
      "description": "检查代码错误和提出优化建议"
    },
    {
      "name": "test",
      "prompt": "{{{ input }}}\n\n为选定的代码编写一套完整的单元测试代码。测试需要包含重要边界情况的正确性检查。测试需要包括测试前的准备和测试后的清理工作。使用中文书写代码备注",
      "description": "编写单元测试代码"
    }
  ],
  "contextProviders": [
    {
      "name": "code",
      "params": {}
    },
    {
      "name": "docs",
      "params": {}
    },
    {
      "name": "diff",
      "params": {}
    },
    {
      "name": "terminal",
      "params": {}
    },
    {
      "name": "problems",
      "params": {}
    },
    {
      "name": "folder",
      "params": {}
    },
    {
      "name": "web",
      "params": {}
    },
    {
      "name": "codebase",
      "params": {}
    }
  ],
  "slashCommands": [
    {
      "name": "share",
      "description": "Export the current chat session to markdown"
    },
    {
      "name": "cmd",
      "description": "Generate a shell command"
    },
    {
      "name": "commit",
      "description": "Generate a git commit message"
    }
  ]
}
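For readers skimming the long config above: all of the indexing traffic is driven by the embeddingsProvider block alone, not by the chat models. A minimal sketch that extracts that routing (the trimmed config below reproduces only the relevant fields from the report, with the same placeholder hostname):

```python
import json

# Trimmed copy of the embeddingsProvider block from the config above.
config_text = """
{
  "embeddingsProvider": {
    "title": "BGE-M3",
    "provider": "ollama",
    "model": "bge-m3:latest",
    "apiBase": "http://m4.lan.xxxxxx.com:11434"
  }
}
"""

def embedding_target(text):
    """Return the (provider, model, apiBase) triple used for indexing."""
    ep = json.loads(text)["embeddingsProvider"]
    return ep["provider"], ep["model"], ep["apiBase"]

print(embedding_target(config_text))
```

This makes clear that every indexing request lands on the single Ollama instance at m4.lan, which is also serving the chat and autocomplete models.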

Description

Indexing a codebase containing hundreds of files exhibits the following issues:

  1. Indexing progress is extremely slow. While indexing is in progress, Ollama cannot respond to requests from other clients; after indexing completes, Ollama resumes normal operation.
  2. During indexing, Ollama receives a large number of api/embed requests, and each request takes very long to process, ranging from 1 to 7 minutes.
  3. During indexing, clicking the "Cancel Indexing" button in the extension UI has no effect.

Making Ollama unresponsive to other clients' requests is a very serious problem: it suspends all AI clients in the company until indexing finishes.
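For reference, each of those indexing calls is a batched embedding request. A minimal sketch of the request body shape, assuming Ollama's embed endpoint accepting a model name plus a list of inputs (the model name is taken from the embeddingsProvider config above; the two code chunks are hypothetical placeholders):

```python
import json

def build_embed_request(model, chunks):
    """Serialize a batched embedding request body for Ollama's /api/embed."""
    return json.dumps({"model": model, "input": chunks})

# Hypothetical batch of two code chunks from the indexed workspace.
body = build_embed_request("bge-m3:latest", ["def foo(): ...", "class Bar: ..."])
print(body)
```

With hundreds of files, the indexer emits many such batches back to back, which matches the sustained load on the Ollama instance described above.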

To reproduce

  1. Open a folder containing hundreds of source code files in VS Code.
  2. Wait for codebase indexing to start.

(Screenshot attached: iShot_2024-12-30_16 15 46)

Log output

No response

@dosubot (bot) added the area:indexing, ide:vscode, and kind:bug labels on Dec 30, 2024

2 participants