[GH-ISSUE #15261] Vulkan causing unrelated output with gemma4:e4b (AMD/Ryzen iGPU) #56273

Open
opened 2026-04-29 10:33:10 -05:00 by GiteaMirror · 14 comments
Owner

Originally created by @alperyilmaz on GitHub (Apr 3, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15261

What is the issue?

I use Ollama with Vulkan on an AMD Ryzen CPU/iGPU. When using gemma4:e4b or gemma4:e2b, I get strange/unrelated responses, and Gemma also skips the thinking part.

I've provided the log from loading gemma4:e4b with Vulkan.

Here's a sample prompt and response (notice that there's no thinking output):

>>> how many r's are there in word strawberry
...I can understand! You are asking for a very literal translation or clarification of the phrase "How many **isso**?"

Since "isso" is not a standard English word, and it wasn't preceded by a picture or context, I have to make some educated
guesses about what you might mean.

When I use qwen (with Vulkan), it behaves as expected:

❯ ollama run qwen3.5:2b
>>> how can r's are there in word strawberry?
Thinking...
Thinking Process:

1.  **Analyze the Request:**
    *   Question: "how can r's are there in word strawberry?"
    *   Core Task: Identify the occurrences of the letter 'r' in the word "strawberry".
    *   Target: Explain where the 'r's are in the word.
...clipped thinking output for brevity...
...done thinking.

There are **three** "r"s in the word "strawberry".

Here is how they are located:

**s** **t** **r** **a** **w** **b** **e** **r** **r** **y**

1.  **1st "r":** This is the **3rd letter** (inside the word **s-t-r-...**)
2.  **2nd "r":** This is the **8th letter** (inside the word ...-e-r-r-**r**-y? No, let's look closer).
3.  **3rd "r":** This is the **9th letter**.

Without Vulkan, gemma4:e4b works fine (notice that it actually thinks):

❯ OLLAMA_VULKAN=0 ollama serve
❯ ollama run gemma4:e4b
>>> how can r's are there in word strawberry?
Thinking...
Thinking Process:

1.  **Analyze the Request:** The user is asking "how can r's are there in word strawberry?". This is a counting/letter
identification task.
2.  **Identify the Target Word:** The word is "strawberry".
3.  **Identify the Target Letter:** The letter is 'r'.
4.  **Count the Occurrences:**
    *   S T R A W B E R R Y
    *   1st 'r': R (index 2)
    *   2nd 'r': R (index 8)
    *   3rd 'r': R (index 9)
5.  **Determine the Total Count:** There are three 'r's.
6.  **Formulate the Answer:** State the count clearly and provide the process (optional, but helpful).
...done thinking.

There are **3** 'r's in the word "strawberry".

I'm not sure whether this bug is related to #15248.

Relevant log output

OLLAMA_VULKAN=1 ollama serve
time=2026-04-03T10:58:04.777+03:00 level=INFO source=routes.go:1744 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:0 OLLAMA_DEBUG:INFO OLLAMA_DEBUG_LOG_REQUESTS:false OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/alper/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:true ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2026-04-03T10:58:04.777+03:00 level=INFO source=routes.go:1746 msg="Ollama cloud disabled: false"
time=2026-04-03T10:58:04.777+03:00 level=INFO source=images.go:499 msg="total blobs: 18"
time=2026-04-03T10:58:04.778+03:00 level=INFO source=images.go:506 msg="total unused blobs removed: 0"
time=2026-04-03T10:58:04.778+03:00 level=INFO source=routes.go:1802 msg="Listening on 127.0.0.1:11434 (version 0.20.0)"
time=2026-04-03T10:58:04.778+03:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-04-03T10:58:04.779+03:00 level=INFO source=server.go:432 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 38177"
time=2026-04-03T10:58:04.840+03:00 level=INFO source=types.go:42 msg="inference compute" id=00000000-0500-0000-0000-000000000000 filter_id="" library=Vulkan compute=0.0 name=Vulkan0 description="AMD Radeon Graphics (RADV RENOIR)" libdirs=ollama,vulkan driver=0.0 pci_id=0000:05:00.0 type=iGPU total="32.3 GiB" available="31.0 GiB"
time=2026-04-03T10:58:04.840+03:00 level=INFO source=routes.go:1852 msg="vram-based default context" total_vram="32.3 GiB" default_num_ctx=32768


[GIN] 2026/04/03 - 10:58:12 | 200 |      41.695µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/04/03 - 10:58:12 | 200 |  228.390428ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/04/03 - 10:58:12 | 200 |  225.689364ms |       127.0.0.1 | POST     "/api/show"
time=2026-04-03T10:58:12.843+03:00 level=INFO source=server.go:432 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 40981"
time=2026-04-03T10:58:13.048+03:00 level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883
time=2026-04-03T10:58:13.049+03:00 level=INFO source=server.go:432 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /home/alper/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a --port 37115"
time=2026-04-03T10:58:13.049+03:00 level=INFO source=sched.go:484 msg="system memory" total="60.7 GiB" free="54.1 GiB" free_swap="12.0 GiB"
time=2026-04-03T10:58:13.049+03:00 level=INFO source=sched.go:491 msg="gpu memory" id=00000000-0500-0000-0000-000000000000 library=Vulkan available="30.5 GiB" free="31.0 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-04-03T10:58:13.049+03:00 level=INFO source=server.go:759 msg="loading model" "model layers"=43 requested=-1
time=2026-04-03T10:58:13.066+03:00 level=INFO source=runner.go:1417 msg="starting ollama engine"
time=2026-04-03T10:58:13.066+03:00 level=INFO source=runner.go:1452 msg="Server listening on 127.0.0.1:37115"
time=2026-04-03T10:58:13.071+03:00 level=INFO source=runner.go:1290 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Disabled KvSize:32768 KvCacheType: NumThreads:8 GPULayers:43[ID:00000000-0500-0000-0000-000000000000 Layers:43(0..42)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-03T10:58:13.135+03:00 level=INFO source=ggml.go:136 msg="" architecture=gemma4 file_type=Q4_K_M name="" description="" num_tensors=2131 num_key_values=55
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = AMD Radeon Graphics (RADV RENOIR) (radv) | uma: 1 | fp16: 1 | bf16: 0 | warp size: 64 | shared memory: 65536 | int dot: 0 | matrix cores: none
load_backend: loaded Vulkan backend from /usr/lib/ollama/vulkan/libggml-vulkan.so
time=2026-04-03T10:58:13.170+03:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
ggml_backend_vk_get_device_memory called: uuid 00000000-0500-0000-0000-000000000000
ggml_backend_vk_get_device_memory called: luid 0x0000000000000000
time=2026-04-03T10:58:13.193+03:00 level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883
time=2026-04-03T10:58:13.221+03:00 level=INFO source=model.go:138 msg="vision: decode" elapsed=2.268673ms bounds=(0,0)-(2048,2048)
time=2026-04-03T10:58:13.335+03:00 level=INFO source=model.go:145 msg="vision: preprocess" elapsed=114.602148ms size="[768 768]"
time=2026-04-03T10:58:13.335+03:00 level=INFO source=model.go:148 msg="vision: pixelValues" shape="[768 768 3]" dim0=768 dim1=768 dim2=3
time=2026-04-03T10:58:13.335+03:00 level=INFO source=model.go:152 msg="vision: patches" patchesX=48 patchesY=48 total=2304 patchSize=16
time=2026-04-03T10:58:13.336+03:00 level=INFO source=model.go:156 msg="vision: encoded" elapsed=117.853427ms shape="[2560 256]"
time=2026-04-03T10:58:13.461+03:00 level=INFO source=runner.go:1290 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Disabled KvSize:32768 KvCacheType: NumThreads:8 GPULayers:43[ID:00000000-0500-0000-0000-000000000000 Layers:43(0..42)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_vk_get_device_memory called: uuid 00000000-0500-0000-0000-000000000000
ggml_backend_vk_get_device_memory called: luid 0x0000000000000000
ggml_vulkan: Device memory allocation of size 5637144576 failed.
ggml_vulkan: Requested buffer size exceeds device buffer size limit: ErrorOutOfDeviceMemory
alloc_tensor_range: failed to allocate Vulkan0 buffer of size 5637144576
time=2026-04-03T10:58:13.788+03:00 level=INFO source=server.go:881 msg="model layout did not fit, applying backoff" backoff=0.10
time=2026-04-03T10:58:13.788+03:00 level=INFO source=server.go:881 msg="model layout did not fit, applying backoff" backoff=0.20
time=2026-04-03T10:58:13.789+03:00 level=INFO source=server.go:881 msg="model layout did not fit, applying backoff" backoff=0.30
time=2026-04-03T10:58:13.789+03:00 level=INFO source=server.go:881 msg="model layout did not fit, applying backoff" backoff=0.40
time=2026-04-03T10:58:13.789+03:00 level=INFO source=server.go:881 msg="model layout did not fit, applying backoff" backoff=0.50
time=2026-04-03T10:58:13.790+03:00 level=INFO source=server.go:881 msg="model layout did not fit, applying backoff" backoff=0.60
time=2026-04-03T10:58:13.790+03:00 level=INFO source=server.go:881 msg="model layout did not fit, applying backoff" backoff=0.70
time=2026-04-03T10:58:13.791+03:00 level=INFO source=runner.go:1290 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Disabled KvSize:32768 KvCacheType: NumThreads:8 GPULayers:42[ID:00000000-0500-0000-0000-000000000000 Layers:42(0..41)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_vk_get_device_memory called: uuid 00000000-0500-0000-0000-000000000000
ggml_backend_vk_get_device_memory called: luid 0x0000000000000000
time=2026-04-03T10:58:14.046+03:00 level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883
time=2026-04-03T10:58:14.068+03:00 level=INFO source=model.go:138 msg="vision: decode" elapsed=2.508371ms bounds=(0,0)-(2048,2048)
time=2026-04-03T10:58:14.183+03:00 level=INFO source=model.go:145 msg="vision: preprocess" elapsed=114.498922ms size="[768 768]"
time=2026-04-03T10:58:14.186+03:00 level=INFO source=model.go:148 msg="vision: pixelValues" shape="[768 768 3]" dim0=768 dim1=768 dim2=3
time=2026-04-03T10:58:14.186+03:00 level=INFO source=model.go:152 msg="vision: patches" patchesX=48 patchesY=48 total=2304 patchSize=16
time=2026-04-03T10:58:14.187+03:00 level=INFO source=model.go:156 msg="vision: encoded" elapsed=121.567548ms shape="[2560 256]"
time=2026-04-03T10:58:14.644+03:00 level=INFO source=runner.go:1290 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Disabled KvSize:32768 KvCacheType: NumThreads:8 GPULayers:42[ID:00000000-0500-0000-0000-000000000000 Layers:42(0..41)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-03T10:58:14.644+03:00 level=INFO source=ggml.go:482 msg="offloading 42 repeating layers to GPU"
time=2026-04-03T10:58:14.644+03:00 level=INFO source=ggml.go:486 msg="offloading output layer to CPU"
time=2026-04-03T10:58:14.644+03:00 level=INFO source=ggml.go:494 msg="offloaded 42/43 layers to GPU"
time=2026-04-03T10:58:14.644+03:00 level=INFO source=device.go:240 msg="model weights" device=Vulkan0 size="2.8 GiB"
time=2026-04-03T10:58:14.644+03:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="6.6 GiB"
time=2026-04-03T10:58:14.644+03:00 level=INFO source=device.go:251 msg="kv cache" device=Vulkan0 size="692.0 MiB"
time=2026-04-03T10:58:14.644+03:00 level=INFO source=device.go:262 msg="compute graph" device=Vulkan0 size="654.3 MiB"
time=2026-04-03T10:58:14.644+03:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="21.0 MiB"
time=2026-04-03T10:58:14.644+03:00 level=INFO source=device.go:272 msg="total memory" size="10.8 GiB"
time=2026-04-03T10:58:14.644+03:00 level=INFO source=sched.go:561 msg="loaded runners" count=1
time=2026-04-03T10:58:14.644+03:00 level=INFO source=server.go:1352 msg="waiting for llama runner to start responding"
time=2026-04-03T10:58:14.644+03:00 level=INFO source=server.go:1386 msg="waiting for server to become available" status="llm server loading model"
time=2026-04-03T10:58:19.913+03:00 level=INFO source=server.go:1390 msg="llama runner started in 6.86 seconds"
[GIN] 2026/04/03 - 10:58:19 | 200 |  7.313091191s |       127.0.0.1 | POST     "/api/generate"

OS

Linux

GPU

AMD

CPU

AMD

Ollama version

0.20.0

GiteaMirror added the bug label 2026-04-29 10:33:10 -05:00

@Onako2 commented on GitHub (Apr 3, 2026):

Can confirm on Windows 11 with Intel Core i7-1185G7 with Vulkan on


@phdaucourt commented on GitHub (Apr 3, 2026):

Can confirm too for gemma4:e2b on Windows 11 with Intel Core i7-1185G7 with Vulkan on. When Vulkan is off, gemma4:e2b works correctly.


@devedse commented on GitHub (Apr 3, 2026):

Can confirm on Intel Arc Pro B50 (Battlemage/BMG G21, discrete GPU) running in an LXC container on Proxmox, Ollama 0.20.0, Ubuntu 24.04.

Vulkan + gemma4:e2b — garbled output, thinking phase skipped entirely. The very first output token is nonsensical (token ID 227671 = " samh") instead of entering the thinking phase. Full response was:

samhayane (help) me?

Please complete your request. What would you like to know about "how to learn" or what you need help with?
...

Prompt was simply: Explain C# to me

Vulkan + qwen3.5:9b — works perfectly on the same setup, clean output, thinking works.

Vulkan detection details:

ggml_vulkan: 0 = Intel(R) Arc(tm) Pro B50 Graphics (BMG G21) (Intel open-source Mesa driver) | uma: 0 | fp16: 1 | bf16: 1 | warp size: 32 | shared memory: 131072 | int dot: 1 | matrix cores: KHR_coopmat

Model loaded with 35/36 layers on GPU, splits=2, runner.vram="2.0 GiB".

Worth noting: this model also produces garbled output on the SYCL backend on Xe2/Battlemage GPUs (see ggml-org/llama.cpp#20169, ggml-org/llama.cpp#18808). So gemma4 is broken on both Vulkan (cross-vendor, this issue) and SYCL (Xe2-specific) for GPU inference.

Running on CPU (OLLAMA_VULKAN=0) produces clean output.


@RichardBosworth commented on GitHub (Apr 4, 2026):

Same issue here. Framework 13 7840U with Radeon 780M iGPU on Windows 11. Vulkan backend produces garbled output from gemma4 models. CPU backend works fine. Also getting responses in different languages sometimes.


@Tremeschin commented on GitHub (Apr 4, 2026):

Even more broken on Arch Linux with an RTX 3060 on Vulkan; it works fine on the CUDA backend (which I switched to):

>>> What is the meaning of life?
[[error]]
Sorry! Sorry! Sorry! Sorry! Sorry! Sorry! Sorry! Sorry! Sorry! Sorry! Sorry! Sorry! 
Sorry! Sorry! Sorry! Sorry! Sorry! Sorry! Sorry! Sorry! Sorry! Sorry! Sorry! Sorry!

>>> Tell me the history of Intel
 Waldorf/Wilson-Whitney-Goldfarb method.

>>> Tell me the history of NVIDIA
otimes.

Was a good laugh for my first prompt in the model at least 😆


@sachithamh commented on GitHub (Apr 4, 2026):

Try the changes in https://github.com/ollama/ollama/pull/15325
Fork: https://github.com/sachithamh/ollama


@ddimitriou commented on GitHub (Apr 5, 2026):

I can also confirm this with Vulkan on, running on an AMD RX 5500XT and a Ryzen 1.


@CasaSky commented on GitHub (Apr 6, 2026):

Same here with an RX 6700XT using Vulkan.


@chejh-amd commented on GitHub (Apr 7, 2026):

This is probably the same thing that ggml-org/llama.cpp#21506 is fixing: gemma4's MoE FFN path needs F32 precision but was running at F16, which messes up the output. That PR just went up a couple of days ago, so it hasn't landed in Ollama yet.

There's also a second thing going on for some people: the Vulkan backend fails to allocate a big enough buffer for the MoE weights (`alloc_tensor_range: failed to allocate Vulkan0 buffer`), so they end up on the CPU while the KV cache stays on the GPU. That alone can produce garbage too.

`OLLAMA_VULKAN=0` works around both for now. Dense models like qwen3.5 don't hit this since they skip the MoE path entirely.
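Until both fixes land, the environment-variable workaround from the issue description can be applied per server instance; a minimal sketch (this assumes your build honors `OLLAMA_VULKAN`, as shown earlier in this thread):

```shell
# Disable the Vulkan backend for this server instance only;
# gemma4 models then run on the CPU (or CUDA/ROCm) path instead.
OLLAMA_VULKAN=0 ollama serve
```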


@chejh-amd commented on GitHub (Apr 8, 2026):

The actual fix just landed upstream. turns out it wasn't the F16/F32 precision issue I mentioned earlier, but a buffer overlap bug in the GEMV fusion path for MoE models (ggml-org/llama.cpp#21566, merged yesterday). once ollama picks up a llama.cpp build after b8701, the garbled output on CUDA/ROCm should be fixed.
The Vulkan buffer allocation issue is still separate though, so iGPU users hitting alloc_tensor_range: failed will still see partial CPU offload until that gets addressed in the Vulkan backend.
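To check whether you are hitting the allocation failure described above, you can grep the server log for the message; a minimal sketch on a sample line (the log text below is hypothetical, with the message prefix matching the one quoted in this thread):

```shell
# Hypothetical server-log excerpt; the real line appears in ollama's
# server output when the Vulkan backend cannot allocate the weight buffer.
log='alloc_tensor_range: failed to allocate Vulkan0 buffer of size 4831838208'
if printf '%s\n' "$log" | grep -q 'alloc_tensor_range: failed'; then
  echo 'MoE weights likely fell back to CPU'
fi
```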


@devedse commented on GitHub (Apr 8, 2026):

It would be nice to wait for this one to be merged too:
https://github.com/ggml-org/llama.cpp/pull/21391

This will also fix some issues for Intel cards on the E2B and E4B models.


@alfosua commented on GitHub (Apr 8, 2026):

Right now, gemma4:e2b and gemma4:e4b output a lot of gibberish on an AMD RX 6600 with Vulkan. I am aware I can't run much with this GPU, but it should still perform somewhat better than running gemma on my CPU.


@shmilee commented on GitHub (Apr 9, 2026):

I also get strange responses using Vulkan + Intel Meteorlake (Gen12) with these models:
`qwen3.5:9b`, `gemma4:e4b`, `gemma4:26b`, `deepseek-r1:14b`.
But `gemma2:2b`, `deepseek-coder-v2:16b`, `qwen3.5:2b`, `qwen3.5:9b-q8_0` all work well.

Maybe quantization matters?

```
[$] ollama list
NAME                     ID              SIZE      MODIFIED       
qwen3.5:9b-q8_0          441ec31e4d2a    10 GB     15 seconds ago    
qwen3.5:2b-q8_0          324d162be6ca    2.7 GB    20 seconds ago    
qwen3.5:2b               324d162be6ca    2.7 GB    26 seconds ago    
gemma2:2b                8ccf136fdd52    1.6 GB    33 seconds ago    
deepseek-coder-v2:16b    63fb193b3a9b    8.9 GB    42 seconds ago    
gemma4:e4b               c6eb396dbd59    9.6 GB    3 hours ago       
deepseek-r1:14b          c333b7232bdb    9.0 GB    4 hours ago       
qwen3.5:9b               6488c96fa5fa    6.6 GB    4 hours ago       
gemma4:26b               5571076f3d70    17 GB     24 hours ago      

[$] for model in `ollama list|grep -v NAME|awk '{printf $1" "}'`; do echo -e `ollama show $model |grep quantization`"\t" for $model; done  
quantization Q8_0 	 for qwen3.5:9b-q8_0
quantization Q8_0 	 for qwen3.5:2b-q8_0
quantization Q8_0 	 for qwen3.5:2b
quantization Q4_0 	 for gemma2:2b
quantization Q4_0 	 for deepseek-coder-v2:16b
quantization Q4_K_M 	 for gemma4:e4b
quantization Q4_K_M 	 for deepseek-r1:14b
quantization Q4_K_M 	 for qwen3.5:9b
quantization Q4_K_M 	 for gemma4:26b
```
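The field extraction in the loop above can be done a bit more cleanly with awk instead of grep plus unquoted echo; a minimal sketch on a sample line (the sample text below is made up, but the field name matches the `ollama show` output quoted above):

```shell
# Sample line mimicking an `ollama show <model>` detail row
# (the value here is hypothetical).
sample='  quantization    Q4_K_M'
# Print the second whitespace-separated field of the matching line.
printf '%s\n' "$sample" | awk '/quantization/ {print $2}'   # prints: Q4_K_M
```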

@PureBlissAK commented on GitHub (Apr 18, 2026):

🤖 Automated Triage & Analysis Report

Issue: #15261
Analyzed: 2026-04-18T18:22:48.441460

Analysis

  • Type: unknown
  • Severity: medium
  • Components: unknown

Implementation Plan

  • Effort: medium
  • Steps:

This issue has been triaged and marked for implementation.

Reference: github-starred/ollama#56273