[GH-ISSUE #15864] Possible Claude CLI Bug #72169

Open
opened 2026-05-05 03:35:12 -05:00 by GiteaMirror · 3 comments

Originally created by @mslinn on GitHub (Apr 28, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15864

What is the issue?

$ ollama launch claude --model  qwen3.6:35b

pulling manifest
pulling f5ee307a2982: 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████ ▏  23 GB/ 23 GB  117 MB/s      0s
pulling 5f3a3c817e78: 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏  11 KB
pulling 86eff881e8d2: 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏   94 B
pulling 5d1c86a949f7: 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏  462 B
verifying sha256 digest
writing manifest
success
╭─── Claude Code v2.1.120 ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                    │ Tips for getting started                                                                                                  │
│                 Welcome back Mike!                 │ Run /init to create a CLAUDE.md file with instructions for Claude                                                         │
│                                                    │ Note: You have launched claude in your home directory. For the best experience, launch it in a project directory instead. │
│                       ▐▛███▜▌                      │ ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── │
│                      ▝▜█████▛▘                     │ Recent activity                                                                                                           │
│                        ▘▘ ▝▝                       │ No recent activity                                                                                                        │
│         qwen3.6:35b · API Usage Billing ·          │                                                                                                                           │
│         mslinn@mslinn.com's Organization           │                                                                                                                           │
│                    /home/mslinn                    │                                                                                                                           │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─── Claude Code v2.1.120 ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                    │ Tips for getting started                                                                                                  │
│                 Welcome back Mike!                 │ Run /init to create a CLAUDE.md file with instructions for Claude                                                         │
│                                                    │ Note: You have launched claude in your home directory. For the best experience, launch it in a project directory instead. │
│                       ▐▛███▜▌                      │ ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── │
│                      ▝▜█████▛▘                     │ Recent activity                                                                                                           │
│                        ▘▘ ▝▝                       │ No recent activity                                                                                                        │
│         qwen3.6:35b · API Usage Billing ·          │                                                                                                                           │
│         mslinn@mslinn.com's Organization           │                                                                                                                           │
│                    /home/mslinn                    │                                                                                                                           │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

  Message from mslinn@mslinn.com's Organization:
  Read /home/mslinn/.claude/settings.json

❯ what directories can you see

● I can see the current working directory is /home/mslinn. Let me check what's there.

  I can see the primary working directory is /home/mslinn. This appears to be a Linux home directory that is also a git repository.

  Would you like me to list the top-level directories and files in /home/mslinn?

✻ Worked for 1m 46s

❯ i am timing how long it takes you to do common activities. the directory description took 1m 46s. let's see how long it takes you to tell me the number of directories in /etc

● <|function_name>|<|function_call|>

✻ Brewed for 1m 20s

❯ your response was nonsense. count the number of directories in /etc

● I'll count the directories in /etc for you.

  <|name:execute_command|>|name:args:|args:{"command": "find /etc -maxdepth 1 -mindepth 1 -type d | wc -l"}<|/name:args:|name:execute_command|

✻ Baked for 1m 29s
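For reference, the command visible inside the malformed tool-call tokens above is a standard way to count the immediate subdirectories of /etc; run directly in a shell (outside claude), it behaves as expected:

```shell
# Count the immediate subdirectories of /etc.
# -mindepth 1 excludes /etc itself; -maxdepth 1 restricts to one level.
find /etc -maxdepth 1 -mindepth 1 -type d | wc -l
```

This suggests the model composed a reasonable command but emitted it wrapped in broken tool-call markup instead of an actual tool invocation.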
GiteaMirror added the bug label 2026-05-05 03:35:12 -05:00

@ParthSareen commented on GitHub (Apr 28, 2026):

Can you make sure your context length in the settings is above 64000 tokens? CC has a large prompt which is probably too large for what you have allocated.


@mslinn commented on GitHub (Apr 29, 2026):

BTW, the system has 2 NVIDIA 3060s dedicated to LLMs, although only one is configured.

It was not clear to me how to set the context length in settings when running an Ollama model under claude, so I tried an env var for Ollama, then an env var for claude:

$ OLLAMA_CONTEXT_LENGTH=65000 ollama launch claude --model qwen3.6:35b
╭─── Claude Code v2.1.121 ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                    │ Tips for getting started                                                                                                                                                          │
│                 Welcome back Mike!                 │ Run /init to create a CLAUDE.md file with instructions for Claude                                                                                                                 │
│                                                    │ ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── │
│                       ▐▛███▜▌                      │ What's new                                                                                                                                                                        │
│                      ▝▜█████▛▘                     │ Fixed OAuth authentication failing with a 401 retry loop when `CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS=1` is set                                                                   │
│                        ▘▘ ▝▝                       │ Added `ANTHROPIC_BEDROCK_SERVICE_TIER` environment variable to select a Bedrock service tier (`default`, `flex`, or `priority`), sent as the `X-Amzn-Bedrock-Service-Tier` header │
│         qwen3.6:35b · API Usage Billing ·          │ Pasting a PR URL into the `/resume` search box now finds the session that created that PR (GitHub, GitHub Enterprise, GitLab, and Bitbucket)                                      │
│         mslinn@mslinn.com's Organization           │ /release-notes for more                                                                                                                                                           │
│             /mnt/_/www/www.mslinn.com              │                                                                                                                                                                                   │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

  Message from mslinn@mslinn.com's Organization:
  Read /home/mslinn/.claude/settings.json

❯ how many directories are in this project?

● Let me count the directories in the project.

✻ Cogitated for 1m 47s

─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
❯ 
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
  mslinn@gojira:www.mslinn.com [qwen3.6:35b] 🌿 master; 0 edits; $0.00 USD; 30.5k tokens (+4.2k)

The response did not contain the requested information.

$ CLAUDE_CODE_MAX_CONTEXT=65000 ollama launch claude --model qwen3.6:35b
╭─── Claude Code v2.1.123 ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                    │ Tips for getting started                                                                                                                                                          │
│                 Welcome back Mike!                 │ Run /init to create a CLAUDE.md file with instructions for Claude                                                                                                                 │
│                                                    │ ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── │
│                       ▐▛███▜▌                      │ What's new                                                                                                                                                                        │
│                      ▝▜█████▛▘                     │ Fixed OAuth authentication failing with a 401 retry loop when `CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS=1` is set                                                                   │
│                        ▘▘ ▝▝                       │ Added `ANTHROPIC_BEDROCK_SERVICE_TIER` environment variable to select a Bedrock service tier (`default`, `flex`, or `priority`), sent as the `X-Amzn-Bedrock-Service-Tier` header │
│         qwen3.6:35b · API Usage Billing ·          │ Pasting a PR URL into the `/resume` search box now finds the session that created that PR (GitHub, GitHub Enterprise, GitLab, and Bitbucket)                                      │
│         mslinn@mslinn.com's Organization           │ /release-notes for more                                                                                                                                                           │
│             /mnt/_/www/www.mslinn.com              │                                                                                                                                                                                   │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

  Message from mslinn@mslinn.com's Organization:
  Read /home/mslinn/.claude/settings.json

❯ how many directories are in this project?

● Let me count the directories in this project.

  <|mask_start|>

✻ Churned for 2m 2s

─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
❯ 
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
  mslinn@gojira:www.mslinn.com [qwen3.6:35b] 🌿 master; 0 edits; $0.00 USD; 56.8k tokens (+26.3k)

Apparently, using the claude env var is the wrong approach, but the Ollama env var does not solve the problem either.

I also noticed that the claude /context command hangs.

$ ollama run qwen3.6:35b
>>> /show info
  Model
    architecture        qwen35moe
    parameters          36.0B
    context length      262144
    embedding length    2048
    quantization        Q4_K_M

  Capabilities
    completion
    vision
    tools
    thinking

  Parameters
    top_k               20
    top_p               0.95
    min_p               0
    presence_penalty    1.5
    repeat_penalty      1
    temperature         1

  License
    Apache License
    Version 2.0, January 2004
    ...

>>> Send a message (/? for help)
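One untested alternative is to bake the larger context window into a model variant via a Modelfile, so the limit applies regardless of how the server or agent is launched:

```
FROM qwen3.6:35b
PARAMETER num_ctx 65536
```

Then, as a sketch, `ollama create qwen3.6-65k -f Modelfile` followed by `ollama launch claude --model qwen3.6-65k` (`qwen3.6-65k` is an arbitrary name chosen for this example). I have not verified whether claude respects a num_ctx set this way.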

@mslinn commented on GitHub (Apr 29, 2026):

I realized that I had to run ollama serve separately when using Ollama with Claude (and probably all other agentic tools) because of the need for more context.

In one terminal session, start the Ollama server with enough context to be useful for coding. I show the entire output, even the output generated after the second terminal session runs.

$ sudo systemctl stop ollama # Just to be sure
$ OLLAMA_CONTEXT_LENGTH=65000 ollama serve
time=2026-04-29T08:35:38.249-04:00 level=INFO source=routes.go:1752 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:65000 OLLAMA_DEBUG:INFO OLLAMA_DEBUG_LOG_REQUESTS:false OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/mslinn/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2026-04-29T08:35:38.249-04:00 level=INFO source=routes.go:1754 msg="Ollama cloud disabled: false"
time=2026-04-29T08:35:38.250-04:00 level=INFO source=images.go:517 msg="total blobs: 0"
time=2026-04-29T08:35:38.250-04:00 level=INFO source=images.go:524 msg="total unused blobs removed: 0"
time=2026-04-29T08:35:38.252-04:00 level=INFO source=routes.go:1810 msg="Listening on [::]:11434 (version 0.21.2)"
time=2026-04-29T08:35:38.254-04:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-04-29T08:35:38.256-04:00 level=INFO source=server.go:444 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 40761"
time=2026-04-29T08:35:42.990-04:00 level=INFO source=server.go:444 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 33449"
time=2026-04-29T08:35:47.959-04:00 level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
time=2026-04-29T08:35:47.959-04:00 level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="61.6 GiB" available="50.4 GiB"
time=2026-04-29T08:35:47.959-04:00 level=INFO source=routes.go:1860
msg="vram-based default context" total_vram="0 B" default_num_ctx=4096
[GIN] 2026/04/29 - 08:52:14 | 200 |         3m46s |       127.0.0.1 | POST     "/api/pull"
[GIN] 2026/04/29 - 08:52:14 | 200 |      42.869µs |       127.0.0.1 | HEAD     "/"
time=2026-04-29T08:52:36.720-04:00 level=INFO source=server.go:259 msg="enabling flash attention"
time=2026-04-29T08:52:36.721-04:00 level=INFO source=server.go:444 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --model /home/mslinn/.ollama/models/blobs/sha256-f5ee307a2982106a6eb82b62b2c00b575c9072145a759ae4660378acda8dcf2d --port 38603"
time=2026-04-29T08:52:36.721-04:00 level=INFO source=sched.go:484 msg="system memory" total="61.6 GiB" free="50.7 GiB" free_swap="690.5 MiB"
time=2026-04-29T08:52:36.721-04:00 level=INFO source=server.go:771 msg="loading model" "model layers"=41 requested=-1
time=2026-04-29T08:52:36.737-04:00 level=INFO source=runner.go:1417 msg="starting ollama engine"
time=2026-04-29T08:52:36.738-04:00 level=INFO source=runner.go:1452 msg="Server listening on 127.0.0.1:38603"
time=2026-04-29T08:52:36.744-04:00 level=INFO source=runner.go:1290 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:65000 KvCacheType: NumThreads:8 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-29T08:52:36.879-04:00 level=INFO source=ggml.go:136 msg="" architecture=qwen35moe file_type=Q4_K_M name="" description="" num_tensors=1194 num_key_values=57
load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-alderlake.so
time=2026-04-29T08:52:36.891-04:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2026-04-29T08:52:37.729-04:00 level=INFO source=runner.go:1290 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:65000 KvCacheType: NumThreads:8 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-29T08:52:39.276-04:00 level=INFO source=runner.go:1290 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:65000 KvCacheType: NumThreads:8 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-29T08:52:39.276-04:00 level=INFO source=ggml.go:482 msg="offloading 0 repeating layers to GPU"
time=2026-04-29T08:52:39.276-04:00 level=INFO source=ggml.go:486 msg="offloading output layer to CPU"
time=2026-04-29T08:52:39.276-04:00 level=INFO source=ggml.go:494 msg="offloaded 0/41 layers to GPU"
time=2026-04-29T08:52:39.276-04:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="22.3 GiB"
time=2026-04-29T08:52:39.276-04:00 level=INFO source=device.go:256 msg="kv cache" device=CPU size="2.8 GiB"
time=2026-04-29T08:52:39.276-04:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="621.7 MiB"
time=2026-04-29T08:52:39.276-04:00 level=INFO source=device.go:272 msg="total memory" size="25.7 GiB"
time=2026-04-29T08:52:39.276-04:00 level=INFO source=sched.go:561 msg="loaded runners" count=1
time=2026-04-29T08:52:39.276-04:00 level=INFO source=server.go:1364 msg="waiting for llama runner to start responding"
time=2026-04-29T08:52:39.276-04:00 level=INFO source=server.go:1398 msg="waiting for server to become available" status="llm server loading model"
time=2026-04-29T08:52:47.827-04:00 level=INFO source=server.go:1402 msg="llama runner started in 11.11 seconds"
[GIN] 2026/04/29 - 08:54:07 | 200 |         1m31s |       127.0.0.1 | POST     "/v1/messages?beta=true"
[GIN] 2026/04/29 - 09:01:03 | 500 |         8m27s |       127.0.0.1 | POST     "/v1/messages?beta=true"

In another terminal session:

$ ollama launch claude --model qwen3.6:35b

pulling manifest
pulling f5ee307a2982: 100% ▕██████████████████████████████████████████████████████████████████████▏  23 GB
pulling 5f3a3c817e78: 100% ▕██████████████████████████████████████████████████████████████████████▏  11 KB
pulling 86eff881e8d2: 100% ▕██████████████████████████████████████████████████████████████████████▏   94 B
pulling 5d1c86a949f7: 100% ▕██████████████████████████████████████████████████████████████████████▏  462 B
verifying sha256 digest
writing manifest
success
╭─── Claude Code v2.1.123 ─────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                    │ Tips for getting started                                                    │
│                 Welcome back Mike!                 │ Run /init to create a CLAUDE.md file with instructions for Claude           │
│                                                    │ ─────────────────────────────────────────────────────────────────────────── │
│                       ▐▛███▜▌                      │ What's new                                                                  │
│                      ▝▜█████▛▘                     │ Fixed OAuth authentication failing with a 401 retry loop when `CLAUDE_CODE… │
│                        ▘▘ ▝▝                       │ Added `ANTHROPIC_BEDROCK_SERVICE_TIER` environment variable to select a Be… │
│         qwen3.6:35b · API Usage Billing ·          │ Pasting a PR URL into the `/resume` search box now finds the session that … │
│         mslinn@mslinn.com's Organization           │ /release-notes for more                                                     │
│             /mnt/_/www/www.mslinn.com              │                                                                             │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

  Message from mslinn@mslinn.com's Organization:
  Read /home/mslinn/.claude/settings.json

❯ how many directories are in this project?

· Creating… (6m 16s)

────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
❯ 
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
  mslinn@gojira:www.mslinn.com [qwen3.6:35b] 🌿 master; 0 edits; $0.00 USD; 61.1k tokens (+0)

After waiting almost 9 minutes, I pressed Control-C, and the final log message was output. I am still stuck with a non-responsive `claude`.

<!-- gh-comment-id:4343881359 --> @mslinn commented on GitHub (Apr 29, 2026):

I realized that I had to run `ollama serve` separately when using Ollama with Claude (and probably all other agentic tools) because of the need for more context. In one terminal session, start the Ollama server with enough context to be useful for coding. I show the entire output, even the output generated after the second terminal session runs.

```
$ sudo systemctl stop ollama  # Just to be sure
$ OLLAMA_CONTEXT_LENGTH=65000 ollama serve
time=2026-04-29T08:35:38.249-04:00 level=INFO source=routes.go:1752 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:65000 OLLAMA_DEBUG:INFO OLLAMA_DEBUG_LOG_REQUESTS:false OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/mslinn/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2026-04-29T08:35:38.249-04:00 level=INFO source=routes.go:1754 msg="Ollama cloud disabled: false"
time=2026-04-29T08:35:38.250-04:00 level=INFO source=images.go:517 msg="total blobs: 0"
time=2026-04-29T08:35:38.250-04:00 level=INFO source=images.go:524 msg="total unused blobs removed: 0"
time=2026-04-29T08:35:38.252-04:00 level=INFO source=routes.go:1810 msg="Listening on [::]:11434 (version 0.21.2)"
time=2026-04-29T08:35:38.254-04:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-04-29T08:35:38.256-04:00 level=INFO source=server.go:444 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 40761"
time=2026-04-29T08:35:42.990-04:00 level=INFO source=server.go:444 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 33449"
time=2026-04-29T08:35:47.959-04:00 level=INFO source=runner.go:106 msg="experimental Vulkan support disabled. To enable, set OLLAMA_VULKAN=1"
time=2026-04-29T08:35:47.959-04:00 level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="61.6 GiB" available="50.4 GiB"
time=2026-04-29T08:35:47.959-04:00 level=INFO source=routes.go:1860 msg="vram-based default context" total_vram="0 B" default_num_ctx=4096
[GIN] 2026/04/29 - 08:52:14 | 200 |         3m46s |       127.0.0.1 | POST     "/api/pull"
[GIN] 2026/04/29 - 08:52:14 | 200 |      42.869µs |       127.0.0.1 | HEAD     "/"
time=2026-04-29T08:52:36.720-04:00 level=INFO source=server.go:259 msg="enabling flash attention"
time=2026-04-29T08:52:36.721-04:00 level=INFO source=server.go:444 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --model /home/mslinn/.ollama/models/blobs/sha256-f5ee307a2982106a6eb82b62b2c00b575c9072145a759ae4660378acda8dcf2d --port 38603"
time=2026-04-29T08:52:36.721-04:00 level=INFO source=sched.go:484 msg="system memory" total="61.6 GiB" free="50.7 GiB" free_swap="690.5 MiB"
time=2026-04-29T08:52:36.721-04:00 level=INFO source=server.go:771 msg="loading model" "model layers"=41 requested=-1
time=2026-04-29T08:52:36.737-04:00 level=INFO source=runner.go:1417 msg="starting ollama engine"
time=2026-04-29T08:52:36.738-04:00 level=INFO source=runner.go:1452 msg="Server listening on 127.0.0.1:38603"
time=2026-04-29T08:52:36.744-04:00 level=INFO source=runner.go:1290 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:65000 KvCacheType: NumThreads:8 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-29T08:52:36.879-04:00 level=INFO source=ggml.go:136 msg="" architecture=qwen35moe file_type=Q4_K_M name="" description="" num_tensors=1194 num_key_values=57
load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-alderlake.so
time=2026-04-29T08:52:36.891-04:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2026-04-29T08:52:37.729-04:00 level=INFO source=runner.go:1290 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:65000 KvCacheType: NumThreads:8 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-29T08:52:39.276-04:00 level=INFO source=runner.go:1290 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:65000 KvCacheType: NumThreads:8 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-29T08:52:39.276-04:00 level=INFO source=ggml.go:482 msg="offloading 0 repeating layers to GPU"
time=2026-04-29T08:52:39.276-04:00 level=INFO source=ggml.go:486 msg="offloading output layer to CPU"
time=2026-04-29T08:52:39.276-04:00 level=INFO source=ggml.go:494 msg="offloaded 0/41 layers to GPU"
time=2026-04-29T08:52:39.276-04:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="22.3 GiB"
time=2026-04-29T08:52:39.276-04:00 level=INFO source=device.go:256 msg="kv cache" device=CPU size="2.8 GiB"
time=2026-04-29T08:52:39.276-04:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="621.7 MiB"
time=2026-04-29T08:52:39.276-04:00 level=INFO source=device.go:272 msg="total memory" size="25.7 GiB"
time=2026-04-29T08:52:39.276-04:00 level=INFO source=sched.go:561 msg="loaded runners" count=1
time=2026-04-29T08:52:39.276-04:00 level=INFO source=server.go:1364 msg="waiting for llama runner to start responding"
time=2026-04-29T08:52:39.276-04:00 level=INFO source=server.go:1398 msg="waiting for server to become available" status="llm server loading model"
time=2026-04-29T08:52:47.827-04:00 level=INFO source=server.go:1402 msg="llama runner started in 11.11 seconds"
[GIN] 2026/04/29 - 08:54:07 | 200 |         1m31s |       127.0.0.1 | POST     "/v1/messages?beta=true"
[GIN] 2026/04/29 - 09:01:03 | 500 |         8m27s |       127.0.0.1 | POST     "/v1/messages?beta=true"
```

In another terminal session:

```
$ ollama launch claude --model qwen3.6:35b
pulling manifest
pulling f5ee307a2982: 100% ▕██████████████████████████████████████████████████████████████████████▏  23 GB
pulling 5f3a3c817e78: 100% ▕██████████████████████████████████████████████████████████████████████▏  11 KB
pulling 86eff881e8d2: 100% ▕██████████████████████████████████████████████████████████████████████▏   94 B
pulling 5d1c86a949f7: 100% ▕██████████████████████████████████████████████████████████████████████▏  462 B
verifying sha256 digest
writing manifest
success
╭─── Claude Code v2.1.123 ─────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                    │ Tips for getting started                                                    │
│                 Welcome back Mike!                 │ Run /init to create a CLAUDE.md file with instructions for Claude           │
│                                                    │ ─────────────────────────────────────────────────────────────────────────── │
│                       ▐▛███▜▌                      │ What's new                                                                  │
│                      ▝▜█████▛▘                     │ Fixed OAuth authentication failing with a 401 retry loop when `CLAUDE_CODE… │
│                        ▘▘ ▝▝                       │ Added `ANTHROPIC_BEDROCK_SERVICE_TIER` environment variable to select a Be… │
│         qwen3.6:35b · API Usage Billing ·          │ Pasting a PR URL into the `/resume` search box now finds the session that … │
│         mslinn@mslinn.com's Organization           │ /release-notes for more                                                     │
│             /mnt/_/www/www.mslinn.com              │                                                                             │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

  Message from mslinn@mslinn.com's Organization:
  Read /home/mslinn/.claude/settings.json

❯ how many directories are in this project?

· Creating… (6m 16s)

────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
❯
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
  mslinn@gojira:www.mslinn.com [qwen3.6:35b] 🌿 master; 0 edits; $0.00 USD; 61.1k tokens (+0)
```

After waiting almost 9 minutes, I pressed Control-C, and the final log message was output. I am still stuck with a non-responsive `claude`.
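One way to narrow down whether the stall is in Ollama or in the `claude` front end is to bypass Claude Code and call the endpoint the server log shows (`POST /v1/messages?beta=true`) directly. A minimal sketch, with the caveat that the payload shape is an assumption based on Anthropic's Messages API, which this Ollama endpoint emulates:

```python
# Sketch (assumption): probe Ollama's Anthropic-compatible endpoint directly,
# bypassing Claude Code, to see whether /v1/messages itself stalls or errors.
import json
import urllib.request

def build_request(model: str, prompt: str,
                  host: str = "http://127.0.0.1:11434") -> urllib.request.Request:
    """Build a minimal Messages-API request like the one the GIN log shows."""
    body = json.dumps({
        "model": model,
        "max_tokens": 64,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{host}/v1/messages?beta=true",  # path taken from the server log above
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("qwen3.6:35b", "Reply with the single word: pong")
# To actually send the probe (requires the `ollama serve` session to be running):
#   with urllib.request.urlopen(req, timeout=120) as resp:
#       print(resp.status, resp.read().decode())
# The explicit timeout makes a server-side stall fail fast instead of hanging.
```

If this direct call also takes minutes or returns a 500, the problem is on the Ollama side (consistent with the 8m27s 500 in the log) rather than in the Claude CLI.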
Reference: github-starred/ollama#72169