[GH-ISSUE #15264] ollama run gemma4:26b Error: 500 Internal Server Error #9763

Closed
opened 2026-04-12 22:39:15 -05:00 by GiteaMirror · 0 comments

Originally created by @cdsama on GitHub (Apr 3, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15264

What is the issue?

Device: RTX PRO 6000 Blackwell
Driver: RTX Driver Release 595
Command: ollama run gemma4:26b
Error: 500 Internal Server Error: model failed to load, this may be due to resource limitations or an internal error, check ollama server logs for details

The same error occurs with gemma4:31b.
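Not part of the original report, but a common first diagnostic for a load failure like this is to rule out resource pressure by shrinking the context window and parallel slots before retrying. This is a sketch only: the environment variables below appear verbatim in the "server config" log line, but the specific values are hypothetical suggestions, not a confirmed fix.

```shell
# Hypothetical diagnostic (not from the original report): restart the server
# with a much smaller context and a single parallel slot, then retry the model.
# The log shows a vram-based default context of 262144 tokens with
# OLLAMA_NUM_PARALLEL:4, which multiplies the KV cache requirement.
OLLAMA_CONTEXT_LENGTH=8192 OLLAMA_NUM_PARALLEL=1 ollama serve

# Then, in another terminal:
ollama run gemma4:26b
```

If the model loads under these settings, the failure is likely resource-related rather than a driver or kernel bug.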

Relevant log output

time=2026-04-03T16:47:41.194+08:00 level=INFO source=routes.go:1744 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:0 OLLAMA_DEBUG:INFO OLLAMA_DEBUG_LOG_REQUESTS:false OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:2h0m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:D:\\Ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:4 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES:]"
time=2026-04-03T16:47:41.202+08:00 level=INFO source=routes.go:1746 msg="Ollama cloud disabled: false"
time=2026-04-03T16:47:41.215+08:00 level=INFO source=images.go:499 msg="total blobs: 53"
time=2026-04-03T16:47:41.223+08:00 level=INFO source=images.go:506 msg="total unused blobs removed: 0"
time=2026-04-03T16:47:41.228+08:00 level=INFO source=routes.go:1802 msg="Listening on [::]:11434 (version 0.20.0)"
time=2026-04-03T16:47:41.229+08:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-04-03T16:47:41.257+08:00 level=INFO source=server.go:432 msg="starting runner" cmd="D:\\Ollama\\ollama.exe runner --ollama-engine --port 11827"
time=2026-04-03T16:47:41.605+08:00 level=INFO source=server.go:432 msg="starting runner" cmd="D:\\Ollama\\ollama.exe runner --ollama-engine --port 11853"
time=2026-04-03T16:47:41.980+08:00 level=INFO source=server.go:432 msg="starting runner" cmd="D:\\Ollama\\ollama.exe runner --ollama-engine --port 11885"
time=2026-04-03T16:47:42.253+08:00 level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
time=2026-04-03T16:47:42.255+08:00 level=INFO source=server.go:432 msg="starting runner" cmd="D:\\Ollama\\ollama.exe runner --ollama-engine --port 11909"
time=2026-04-03T16:47:42.255+08:00 level=INFO source=server.go:432 msg="starting runner" cmd="D:\\Ollama\\ollama.exe runner --ollama-engine --port 11910"
time=2026-04-03T16:47:42.678+08:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-03e35a48-04dd-3b4e-88e6-789f657bcbbc filter_id="" library=CUDA compute=12.0 name=CUDA0 description="NVIDIA RTX PRO 6000 Blackwell Workstation Edition" libdirs=ollama,cuda_v13 driver=13.2 pci_id=0000:02:00.0 type=discrete total="95.6 GiB" available="86.4 GiB"
time=2026-04-03T16:47:42.678+08:00 level=INFO source=routes.go:1852 msg="vram-based default context" total_vram="95.6 GiB" default_num_ctx=262144
[GIN] 2026/04/03 - 16:47:42 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2026/04/03 - 16:47:42 | 200 |            0s |       127.0.0.1 | GET      "/api/version"
[GIN] 2026/04/03 - 16:47:42 | 200 |    148.8714ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/04/03 - 16:47:42 | 200 |    131.4417ms |       127.0.0.1 | POST     "/api/show"
time=2026-04-03T16:47:43.117+08:00 level=INFO source=server.go:432 msg="starting runner" cmd="D:\\Ollama\\ollama.exe runner --ollama-engine --port 11976"
time=2026-04-03T16:47:43.459+08:00 level=INFO source=cpu_windows.go:148 msg=packages count=1
time=2026-04-03T16:47:43.459+08:00 level=INFO source=cpu_windows.go:164 msg="efficiency cores detected" maxEfficiencyClass=1
time=2026-04-03T16:47:43.459+08:00 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=24 efficiency=16 threads=24
time=2026-04-03T16:47:43.547+08:00 level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883
time=2026-04-03T16:47:43.550+08:00 level=INFO source=server.go:432 msg="starting runner" cmd="D:\\Ollama\\ollama.exe runner --ollama-engine --model D:\\Ollama\\models\\blobs\\sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df --port 12003"
time=2026-04-03T16:47:43.564+08:00 level=INFO source=sched.go:484 msg="system memory" total="253.4 GiB" free="215.6 GiB" free_swap="172.9 GiB"
time=2026-04-03T16:47:43.564+08:00 level=INFO source=sched.go:491 msg="gpu memory" id=GPU-03e35a48-04dd-3b4e-88e6-789f657bcbbc library=CUDA available="86.0 GiB" free="86.4 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-04-03T16:47:43.564+08:00 level=INFO source=server.go:759 msg="loading model" "model layers"=31 requested=-1
time=2026-04-03T16:47:43.787+08:00 level=INFO source=runner.go:1417 msg="starting ollama engine"
time=2026-04-03T16:47:43.796+08:00 level=INFO source=runner.go:1452 msg="Server listening on 127.0.0.1:12003"
time=2026-04-03T16:47:43.803+08:00 level=INFO source=runner.go:1290 msg=load request="{Operation:fit LoraPath:[] Parallel:4 BatchSize:512 FlashAttention:Disabled KvSize:1048576 KvCacheType: NumThreads:8 GPULayers:31[ID:GPU-03e35a48-04dd-3b4e-88e6-789f657bcbbc Layers:31(0..30)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-03T16:47:43.837+08:00 level=INFO source=ggml.go:136 msg="" architecture=gemma4 file_type=Q4_K_M name="" description="" num_tensors=1014 num_key_values=52
load_backend: loaded CPU backend from D:\AI\Ollama\lib\ollama\ggml-cpu-alderlake.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA RTX PRO 6000 Blackwell Workstation Edition, compute capability 12.0, VMM: yes, ID: GPU-03e35a48-04dd-3b4e-88e6-789f657bcbbc
load_backend: loaded CUDA backend from D:\AI\Ollama\lib\ollama\cuda_v13\ggml-cuda.dll
time=2026-04-03T16:47:43.917+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2026-04-03T16:47:43.921+08:00 level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883
time=2026-04-03T16:47:43.938+08:00 level=INFO source=model.go:138 msg="vision: decode" elapsed=1.6786ms bounds=(0,0)-(2048,2048)
time=2026-04-03T16:47:43.993+08:00 level=INFO source=model.go:145 msg="vision: preprocess" elapsed=54.8742ms size="[768 768]"
time=2026-04-03T16:47:43.993+08:00 level=INFO source=model.go:148 msg="vision: pixelValues" shape="[768 768 3]" dim0=768 dim1=768 dim2=3
time=2026-04-03T16:47:43.993+08:00 level=INFO source=model.go:152 msg="vision: patches" patchesX=48 patchesY=48 total=2304 patchSize=16
time=2026-04-03T16:47:43.993+08:00 level=INFO source=model.go:156 msg="vision: encoded" elapsed=56.5528ms shape="[2816 256]"
CUDA error: an internal operation failed
  current device: 0, in function ggml_cuda_mul_mat_batched_cublas_impl at C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\ggml-cuda.cu:2130
  cublasGemmBatchedEx(ctx.cublas_handle(), CUBLAS_OP_T, CUBLAS_OP_N, ne01, ne11, ne10, alpha, (const void **) (ptrs_src.get() + 0*ne23), cu_data_type_a, nb01/nb00, (const void **) (ptrs_src.get() + 1*ne23), cu_data_type_b, s11, beta, ( void **) (ptrs_dst.get() + 0*ne23), cu_data_type, ne0, ne23, cu_compute_type, CUBLAS_GEMM_DEFAULT_TENSOR_OP)
C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\ggml-cuda.cu:94: CUDA error
time=2026-04-03T16:47:47.940+08:00 level=ERROR source=server.go:1207 msg="do load request" error="Post \"http://127.0.0.1:12003/load\": read tcp 127.0.0.1:12030->127.0.0.1:12003: wsarecv: An existing connection was forcibly closed by the remote host."
time=2026-04-03T16:47:47.940+08:00 level=ERROR source=server.go:1207 msg="do load request" error="Post \"http://127.0.0.1:12003/load\": dial tcp 127.0.0.1:12003: connectex: No connection could be made because the target machine actively refused it."
time=2026-04-03T16:47:47.940+08:00 level=INFO source=sched.go:511 msg="Load failed" model=D:\AI\Ollama\models\blobs\sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df error="model failed to load, this may be due to resource limitations or an internal error, check ollama server logs for details"
[GIN] 2026/04/03 - 16:47:47 | 500 |    4.9710803s |       127.0.0.1 | POST     "/api/generate"
time=2026-04-03T16:47:48.123+08:00 level=ERROR source=server.go:304 msg="llama runner terminated" error="exit status 1"
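One detail worth noting in the log above: the load request asks for `KvSize:1048576`, which is exactly the vram-based default context (`default_num_ctx=262144`) multiplied by the parallel slot count (`OLLAMA_NUM_PARALLEL:4`). The arithmetic below just checks that relationship using the numbers from the log; the multiplication rule is an inference from how the values line up, not a statement about Ollama internals.

```python
# Values copied from the log lines above.
default_num_ctx = 262144   # "vram-based default context" (OLLAMA_CONTEXT_LENGTH unset)
num_parallel = 4           # OLLAMA_NUM_PARALLEL:4 from the server config line

# Assumed relationship: total KV slots = context length x parallel slots.
kv_size = default_num_ctx * num_parallel
print(kv_size)             # matches KvSize:1048576 in the load request
```

A 1M-token KV cache is a very large allocation even on a 96 GiB card, which may be why the crash presents during load rather than at inference time.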

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.20.0

GiteaMirror added the bug label 2026-04-12 22:39:15 -05:00
Reference: github-starred/ollama#9763