[GH-ISSUE #3004] Does ollama support accelerated running on npu? #48357

Open
opened 2026-04-28 07:54:43 -05:00 by GiteaMirror · 32 comments
Owner

Originally created by @fatinghenji on GitHub (Mar 8, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3004

Originally assigned to: @dhiltgen on GitHub.

The Intel Ultra 5 NPU is a hardware accelerator dedicated to AI computing that boosts the performance and efficiency of AI applications.

Will ollama support using the NPU for acceleration? Or does it only use the CPU?

GiteaMirror added the feature request label 2026-04-28 07:54:43 -05:00

@easp commented on GitHub (Mar 8, 2024):

Ollama currently uses llama.cpp. Llama.cpp doesn't appear to support any neural-net accelerators at this point (other than NVIDIA TensorRT through CUDA).


@dhiltgen commented on GitHub (Mar 11, 2024):

This may be possible via Vulkan.


@antt001 commented on GitHub (May 25, 2024):

According to this Medium post, https://medium.com/@jianyu_neo/run-llm-on-all-intel-gpus-using-llama-cpp-fd2e2dcbd9bd, and the IPEX-LLM quickstart, https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/llama_cpp_quickstart.html#install-ipex-llm-for-llama-cpp, it is possible to use an Intel GPU with llama.cpp (I know it is not the NPU). I hope this helps.

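For anyone who wants to try the Intel GPU route, the linked IPEX-LLM quickstart boils down to roughly the following. This is a sketch from memory of that guide, not an authoritative recipe; package and command names may have changed, so defer to the linked docs if they differ:

```shell
# Sketch of the IPEX-LLM llama.cpp setup on Linux with an Intel GPU,
# per the quickstart linked above (verify against the current docs).
pip install --pre --upgrade "ipex-llm[cpp]"

# Create a working directory and symlink the llama.cpp binaries into it.
mkdir -p llama-cpp && cd llama-cpp
init-llama-cpp

# Run a GGUF model with layers offloaded to the Intel GPU via -ngl.
./main -m /path/to/model.gguf -p "Hello" -n 64 -ngl 33
```

Note this accelerates llama.cpp directly rather than Ollama, and (as the commenter says) it targets the Intel GPU, not the NPU.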

@PinguDEV-original commented on GitHub (Jun 19, 2024):

It would also be cool to use AMD Ryzen NPUs


@idoodler commented on GitHub (Jul 8, 2024):

I'd be interested in using it on a Raspberry Pi with the AI shield. I think it has 13 TOPS.


@MarkinBrisbane commented on GitHub (Jul 24, 2024):

Having acquired a new laptop with an NPU, I have found my way here.

@dhiltgen apologies if you have found this already, but just in case you haven't I am linking to the Intel NPU Python library guide:
https://intel.github.io/intel-npu-acceleration-library/


@bkb-Git commented on GitHub (Aug 3, 2024):

@MarkinBrisbane Thanks for the link


@ThatOneCalculator commented on GitHub (Aug 8, 2024):

Just got an Asus ZenBook 16S Pro 2024 with the HX 370 and 890M (aka, it has an NPU). There are Linux kernel and firmware patches to get the NPU supported, which I'm currently running! https://aur.archlinux.org/packages/linux-mainline-um5606

Would love to see this supported. It officially identifies itself as a gfx1150 GPU.


@ChristianWeyer commented on GitHub (Sep 30, 2024):

> Just got an Asus ZenBook 16S Pro 2024 with the HX 370 and 890M (aka, it has an NPU). There are Linux kernel and firmware patches to get the NPU supported, which I'm currently running! https://aur.archlinux.org/packages/linux-mainline-um5606
>
> Would love to see this supported. It officially identifies itself as a gfx1150 GPU.

Are you currently able to use Ollama or llama.cpp accelerated with your setup?


@ChristianWeyer commented on GitHub (Sep 30, 2024):

> It would also be cool to use AMD Ryzen NPUs

Do you know whether this is currently planned @dhiltgen? There are a number of affordable devices out there with this chipset.

Thanks!


@ThatOneCalculator commented on GitHub (Nov 16, 2024):

> Are you currently able to use Ollama or llama.cpp accelerated with your setup?

Unfortunately not, although XDNA patches for the Linux kernel are being refined constantly. Would love to see someone from the Ollama team attempt support using the `20241112194745.854626-x-lizhi.hou@amd.com` patchset (AMD XDNA v10): https://lore.kernel.org/all/20241112194745.854626-1-lizhi.hou@amd.com/


@hashangit commented on GitHub (Nov 16, 2024):

@ThatOneCalculator are you able to get the GPU to work with Ollama? It looks like the inference is purely on the CPU as of now. If this is a solved issue, can you point me to any guide or reference material I can use to configure the GPU? I can't seem to find any solid information on the matter online.


@ThatOneCalculator commented on GitHub (Nov 16, 2024):

I'm not able to get it to work with the GPU (Ollama with ROCm support & ROCm 6.2.4), but you probably *wouldn't* want to run it on the GPU, since afaik the "NPU" acceleration happens on the CPU (feel free to correct me if I'm wrong!)

However, even without NPU acceleration, on Linux 6.12rc7 with my patch set, I'm able to get ~50 tokens/sec on llama3.2 with only ~50% CPU usage on the "performance" power profile.


@sasskialudin commented on GitHub (Nov 23, 2024):

It's been about 5 months since the Qualcomm Snapdragon X Plus X1P processor became available, and I have the opportunity to get an ASUS Vivobook S 15 OLED with 32 GB of RAM built around it for 850 USD (Black Friday deal).

So, do we at last have NPU support for the Qualcomm Snapdragon X Plus X1P processor in ollama?

Alternatively, is there any NPU support for the AMD Ryzen AI 9 HX 370?
The latter is supposed to offer 50 TOPS vs. 45 TOPS for the Snapdragon chip.


@MovGP0 commented on GitHub (Dec 17, 2024):

> So, do we at last have NPU support for the Qualcomm Snapdragon X Plus X1P processor in ollama?

Although current builds of the [llama.cpp](https://github.com/ggerganov/llama.cpp) library already support the Hexagon NPU, that support is still considered in development.

The good news is that the CPU of the Snapdragon X is pretty fast; it can execute small to medium models with reasonable performance. [Source](https://github.com/ollama/ollama/issues/5360#issuecomment-2244357036)

Also note that the NPU can only use ~7.8 GB of shared memory. Bigger models might not be able to execute on the NPU.

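The ~7.8 GB figure makes it easy to sanity-check which models could even fit on the NPU. A minimal back-of-the-envelope sketch (the bytes-per-weight values are rough approximations for common quantizations, and the estimate deliberately ignores KV cache and runtime overhead, so real usage is higher):

```python
# Rough check of whether a model's weights fit in the ~7.8 GB of shared
# memory the Snapdragon X NPU can address (figure from the comment above).
def model_size_gb(params_billions: float, bytes_per_weight: float) -> float:
    """Approximate in-memory size of the weights alone, in GB."""
    return params_billions * 1e9 * bytes_per_weight / 1e9

NPU_LIMIT_GB = 7.8

for name, params, bpw in [
    ("3B  @ ~Q4 quantization", 3.2, 0.6),   # ~4.8 bits per weight
    ("8B  @ ~Q4 quantization", 8.0, 0.6),
    ("8B  @ FP16",             8.0, 2.0),
]:
    size = model_size_gb(params, bpw)
    verdict = "fits" if size < NPU_LIMIT_GB else "does NOT fit"
    print(f"{name}: ~{size:.1f} GB -> {verdict}")
```

By this estimate a 4-bit-quantized 8B model (~4.8 GB) would fit, while the same model at FP16 (~16 GB) clearly would not.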

@JiapengLi commented on GitHub (Dec 20, 2024):

> Alternatively, is there any NPU support for the AMD Ryzen AI 9 HX 370? The latter is supposed to offer 50 TOPS vs. 45 TOPS for the Snapdragon chip.

I did some tests on an AMD Ryzen AI 9 HX 370 without NPU support; performance is not good.

https://github.com/ollama/ollama/issues/5186#issuecomment-2556496478


@adoreparler commented on GitHub (Dec 21, 2024):

I have a Core Ultra 7 265k with Debian testing installed. Would be interested in testing on GPU or NPU if any development is happening on this.


@rcbevans commented on GitHub (Dec 28, 2024):

LLVM 17 has support for the gfx1150 GPU in the HX 370; it looks like it *might* be a config change to Makefile.rocm to add support?

> Dec 27 23:55:19 evans-home-srv ollama[1966317]: time=2024-12-27T23:55:19.946-08:00 level=WARN source=amd_linux.go:378 msg="amdgpu is not supported (supported types:[gfx1030 gfx1100 gfx1101 gfx1102 gfx900 gfx906 gfx908 gfx90a gfx940 gfx941 gfx942])" gpu_type=gfx1150 gpu=0 library=/usr/local/lib/ollama

https://github.com/ollama/ollama/blob/main/make/Makefile.rocm#L9


@rcbevans commented on GitHub (Dec 28, 2024):

> LLVM 17 has support for the gfx1150 GPU in the HX 370; it looks like it *might* be a config change to Makefile.rocm to add support?
>
> > Dec 27 23:55:19 evans-home-srv ollama[1966317]: time=2024-12-27T23:55:19.946-08:00 level=WARN source=amd_linux.go:378 msg="amdgpu is not supported (supported types:[gfx1030 gfx1100 gfx1101 gfx1102 gfx900 gfx906 gfx908 gfx90a gfx940 gfx941 gfx942])" gpu_type=gfx1150 gpu=0 library=/usr/local/lib/ollama
>
> https://github.com/ollama/ollama/blob/main/make/Makefile.rocm#L9

I cloned the code and added gfx1150 to the HIP_COMMON list, but after building I see gfx1151 is now listed as supported (based on the `TensileLibrary_lazy_gfx1151.dat` file), but there's no gfx1150...

> time=2024-12-28T01:08:04.276-08:00 level=WARN source=amd_linux.go:378 msg="amdgpu is not supported (supported types:[gfx1010 gfx1012 gfx1030 gfx1100 gfx1101 gfx1102 gfx1151 gfx1200 gfx1201 gfx900 gfx906 gfx908 gfx90a gfx942])" gpu_type=gfx1150 gpu=0 library=/opt/rocm/lib

Not sure if I'm missing something, or maybe there's a particular reason why gfx1150 isn't supported.

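A common community workaround for iGPUs that ROCm doesn't list is to override the reported gfx version so the runtime loads kernels built for a nearby supported target. Whether gfx1102 kernels actually run correctly on gfx1150 is not guaranteed; this is an experiment to try, not a supported fix:

```shell
# Unsupported workaround sketch: make ROCm treat the gfx1150 iGPU as
# gfx1102, a nearby RDNA3 target that ships prebuilt kernels.
# May crash or miscompute -- verify outputs before trusting results.
export HSA_OVERRIDE_GFX_VERSION=11.0.2
ollama serve
```

If Ollama still falls back to CPU, the server log's "amdgpu is not supported" line (as quoted above) will show which gfx targets the installed build actually accepts.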

@difrost commented on GitHub (Jan 4, 2025):

The AMD XDNA driver has finally been accepted [1] and will most probably arrive with Linux kernel 6.14. I'll try to get the out-of-tree [2] driver to work on my HX 370, and we'll see how much work is required to make Ollama use the NPU.

[1] https://lwn.net/ml/all/778990df-cfdf-bdab-9f11-83a9bfc25ba0@quicinc.com/#t
[2] https://github.com/amd/xdna-driver


@difrost commented on GitHub (Jan 6, 2025):

So ... I've managed to get ollama running on the 890M (gfx1150) with ROCm. For llama3.1:8B I'm getting a stable ~13 tokens/sec, basically the same value I get running straight on the CPU (AVX). What's funny is that when I load the XDNA driver I get ~15 tokens/sec, which is weird, as the NPU is definitely not in use. I would need to do detailed benchmarking.

rocminfo for the record:

ROCk module is loaded
=====================    
HSA System Attributes    
=====================    
Runtime Version:         1.1
Runtime Ext Version:     1.6
System Timestamp Freq.:  1000.000000MHz
Sig. Max Wait Duration:  18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model:           LARGE                              
System Endianness:       LITTLE                             
Mwaitx:                  DISABLED
DMAbuf Support:          YES

==========               
HSA Agents               
==========               
*******                  
Agent 1                  
*******                  
  Name:                    AMD Ryzen AI 9 HX 370 w/ Radeon 890M
  Uuid:                    CPU-XX                             
  Marketing Name:          AMD Ryzen AI 9 HX 370 w/ Radeon 890M
  Vendor Name:             CPU                                
  Feature:                 None specified                     
  Profile:                 FULL_PROFILE                       
  Float Round Mode:        NEAR                               
  Max Queue Number:        0(0x0)                             
  Queue Min Size:          0(0x0)                             
  Queue Max Size:          0(0x0)                             
  Queue Type:              MULTI                              
  Node:                    0                                  
  Device Type:             CPU                                
  Cache Info:              
    L1:                      49152(0xc000) KB                   
  Chip ID:                 0(0x0)                             
  ASIC Revision:           0(0x0)                             
  Cacheline Size:          64(0x40)                           
  Max Clock Freq. (MHz):   4367                               
  BDFID:                   0                                  
  Internal Node ID:        0                                  
  Compute Unit:            24                                 
  SIMDs per CU:            0                                  
  Shader Engines:          0                                  
  Shader Arrs. per Eng.:   0                                  
  WatchPts on Addr. Ranges:1                                  
  Memory Properties:       
  Features:                None
  Pool Info:               
    Pool 1                   
      Segment:                 GLOBAL; FLAGS: FINE GRAINED        
      Size:                    31946792(0x1e77828) KB             
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
    Pool 2                   
      Segment:                 GLOBAL; FLAGS: KERNARG, FINE GRAINED
      Size:                    31946792(0x1e77828) KB             
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
    Pool 3                   
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
      Size:                    31946792(0x1e77828) KB             
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
  ISA Info:                
*******                  
Agent 2                  
*******                  
  Name:                    gfx1150                            
  Uuid:                    GPU-XX                             
  Marketing Name:          AMD Radeon Graphics                
  Vendor Name:             AMD                                
  Feature:                 KERNEL_DISPATCH                    
  Profile:                 BASE_PROFILE                       
  Float Round Mode:        NEAR                               
  Max Queue Number:        128(0x80)                          
  Queue Min Size:          64(0x40)                           
  Queue Max Size:          131072(0x20000)                    
  Queue Type:              MULTI                              
  Node:                    1                                  
  Device Type:             GPU                                
  Cache Info:              
    L1:                      32(0x20) KB                        
    L2:                      2048(0x800) KB                     
  Chip ID:                 5390(0x150e)                       
  ASIC Revision:           4(0x4)                             
  Cacheline Size:          128(0x80)                          
  Max Clock Freq. (MHz):   2900                               
  BDFID:                   25344                              
  Internal Node ID:        1                                  
  Compute Unit:            16                                 
  SIMDs per CU:            2                                  
  Shader Engines:          1                                  
  Shader Arrs. per Eng.:   2                                  
  WatchPts on Addr. Ranges:4                                  
  Coherent Host Access:    FALSE                              
  Memory Properties:       APU
  Features:                KERNEL_DISPATCH 
  Fast F16 Operation:      TRUE                               
  Wavefront Size:          32(0x20)                           
  Workgroup Max Size:      1024(0x400)                        
  Workgroup Max Size per Dimension:
    x                        1024(0x400)                        
    y                        1024(0x400)                        
    z                        1024(0x400)                        
  Max Waves Per CU:        32(0x20)                           
  Max Work-item Per CU:    1024(0x400)                        
  Grid Max Size:           4294967295(0xffffffff)             
  Grid Max Size per Dimension:
    x                        4294967295(0xffffffff)             
    y                        4294967295(0xffffffff)             
    z                        4294967295(0xffffffff)             
  Max fbarriers/Workgrp:   32                                 
  Packet Processor uCode:: 25                                 
  SDMA engine uCode::      11                                 
  IOMMU Support::          None                               
  Pool Info:               
    Pool 1                   
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
      Size:                    15973396(0xf3bc14) KB              
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:2048KB                             
      Alloc Alignment:         4KB                                
      Accessible by all:       FALSE                              
    Pool 2                   
      Segment:                 GLOBAL; FLAGS: EXTENDED FINE GRAINED
      Size:                    15973396(0xf3bc14) KB              
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:2048KB                             
      Alloc Alignment:         4KB                                
      Accessible by all:       FALSE                              
    Pool 3                   
      Segment:                 GROUP                              
      Size:                    64(0x40) KB                        
      Allocatable:             FALSE                              
      Alloc Granule:           0KB                                
      Alloc Recommended Granule:0KB                                
      Alloc Alignment:         0KB                                
      Accessible by all:       FALSE                              
  ISA Info:                
    ISA 1                    
      Name:                    amdgcn-amd-amdhsa--gfx1150         
      Machine Models:          HSA_MACHINE_MODEL_LARGE            
      Profiles:                HSA_PROFILE_BASE                   
      Default Rounding Mode:   NEAR                               
      Default Rounding Mode:   NEAR                               
      Fast f16:                TRUE                               
      Workgroup Max Size:      1024(0x400)                        
      Workgroup Max Size per Dimension:
        x                        1024(0x400)                        
        y                        1024(0x400)                        
        z                        1024(0x400)                        
      Grid Max Size:           4294967295(0xffffffff)             
      Grid Max Size per Dimension:
        x                        4294967295(0xffffffff)             
        y                        4294967295(0xffffffff)             
        z                        4294967295(0xffffffff)             
      FBarrier Max Size:       32                                 
*** Done ***             

Here's how GPU/CPU usage looks. The model goes into RAM, not the dedicated VRAM (though it should fit there, and I've also tested with smaller models):

[screenshot: GPU/CPU usage during inference]


@shymega commented on GitHub (Jan 6, 2025):

> So ... I've managed to get ollama running on the 890M (gfx1150) with ROCm. For llama3.1:8B I'm getting a stable ~13 tokens/sec, basically the same value I get running straight on the CPU (AVX). What's funny is that when I load the XDNA driver I get ~15 tokens/sec, which is weird, as the NPU is definitely not in use. I would need to do detailed benchmarking.
>
> rocminfo for the record: [full output quoted in the comment above]
      Size:                    31946792(0x1e77828) KB             
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
    Pool 2                   
      Segment:                 GLOBAL; FLAGS: KERNARG, FINE GRAINED
      Size:                    31946792(0x1e77828) KB             
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
    Pool 3                   
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
      Size:                    31946792(0x1e77828) KB             
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
  ISA Info:                
*******                  
Agent 2                  
*******                  
  Name:                    gfx1150                            
  Uuid:                    GPU-XX                             
  Marketing Name:          AMD Radeon Graphics                
  Vendor Name:             AMD                                
  Feature:                 KERNEL_DISPATCH                    
  Profile:                 BASE_PROFILE                       
  Float Round Mode:        NEAR                               
  Max Queue Number:        128(0x80)                          
  Queue Min Size:          64(0x40)                           
  Queue Max Size:          131072(0x20000)                    
  Queue Type:              MULTI                              
  Node:                    1                                  
  Device Type:             GPU                                
  Cache Info:              
    L1:                      32(0x20) KB                        
    L2:                      2048(0x800) KB                     
  Chip ID:                 5390(0x150e)                       
  ASIC Revision:           4(0x4)                             
  Cacheline Size:          128(0x80)                          
  Max Clock Freq. (MHz):   2900                               
  BDFID:                   25344                              
  Internal Node ID:        1                                  
  Compute Unit:            16                                 
  SIMDs per CU:            2                                  
  Shader Engines:          1                                  
  Shader Arrs. per Eng.:   2                                  
  WatchPts on Addr. Ranges:4                                  
  Coherent Host Access:    FALSE                              
  Memory Properties:       APU
  Features:                KERNEL_DISPATCH 
  Fast F16 Operation:      TRUE                               
  Wavefront Size:          32(0x20)                           
  Workgroup Max Size:      1024(0x400)                        
  Workgroup Max Size per Dimension:
    x                        1024(0x400)                        
    y                        1024(0x400)                        
    z                        1024(0x400)                        
  Max Waves Per CU:        32(0x20)                           
  Max Work-item Per CU:    1024(0x400)                        
  Grid Max Size:           4294967295(0xffffffff)             
  Grid Max Size per Dimension:
    x                        4294967295(0xffffffff)             
    y                        4294967295(0xffffffff)             
    z                        4294967295(0xffffffff)             
  Max fbarriers/Workgrp:   32                                 
  Packet Processor uCode:: 25                                 
  SDMA engine uCode::      11                                 
  IOMMU Support::          None                               
  Pool Info:               
    Pool 1                   
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
      Size:                    15973396(0xf3bc14) KB              
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:2048KB                             
      Alloc Alignment:         4KB                                
      Accessible by all:       FALSE                              
    Pool 2                   
      Segment:                 GLOBAL; FLAGS: EXTENDED FINE GRAINED
      Size:                    15973396(0xf3bc14) KB              
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:2048KB                             
      Alloc Alignment:         4KB                                
      Accessible by all:       FALSE                              
    Pool 3                   
      Segment:                 GROUP                              
      Size:                    64(0x40) KB                        
      Allocatable:             FALSE                              
      Alloc Granule:           0KB                                
      Alloc Recommended Granule:0KB                                
      Alloc Alignment:         0KB                                
      Accessible by all:       FALSE                              
  ISA Info:                
    ISA 1                    
      Name:                    amdgcn-amd-amdhsa--gfx1150         
      Machine Models:          HSA_MACHINE_MODEL_LARGE            
      Profiles:                HSA_PROFILE_BASE                   
      Default Rounding Mode:   NEAR                               
      Default Rounding Mode:   NEAR                               
      Fast f16:                TRUE                               
      Workgroup Max Size:      1024(0x400)                        
      Workgroup Max Size per Dimension:
        x                        1024(0x400)                        
        y                        1024(0x400)                        
        z                        1024(0x400)                        
      Grid Max Size:           4294967295(0xffffffff)             
      Grid Max Size per Dimension:
        x                        4294967295(0xffffffff)             
        y                        4294967295(0xffffffff)             
        z                        4294967295(0xffffffff)             
      FBarrier Max Size:       32                                 
*** Done ***             

Here's how GPU/CPU looks. The model goes into RAM, not the dedicated VRAM (though it should fit there, and I've also tested with smaller models): ![screenshot_06012025_204250](https://github.com/user-attachments/assets/78cb41f0-43aa-4690-9069-71cba43dbe17)

Did you use a specific environment variable/GPU target for the 890M?

<!-- gh-comment-id:2573933848 -->
Author
Owner

@difrost commented on GitHub (Jan 12, 2025):

@shymega There are a number of steps you need to take to get ROCm and the iGPU "working" (it's still a dirty hack anyway), and one of them is overriding the GFX version. I'm working on a number of issues related to the AMD hardware in my laptop, but in general, getting XDNA2 support into Ollama will mostly be limited by ROCm support (hipBLAS, to be more precise). Currently it seems it will be easier to get the NPU working with llama.cpp, and that's my primary focus - [AMD added Strix Point support](https://github.com/Xilinx/llvm-aie/commit/685e83f8f375b088517d80322ea7e78f1e0de56e) in their LLVM.

<!-- gh-comment-id:2585685239 -->
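For reference, the GFX-version override mentioned above is typically applied through ROCm runtime environment variables before starting the server. A minimal sketch, assuming a gfx1150 iGPU being forced to a supported gfx11 target; whether this particular override value works on your ROCm build is an assumption:

```shell
# Sketch only: tell the ROCm runtime to treat the iGPU as a supported target.
# HSA_OVERRIDE_GFX_VERSION=11.0.0 maps the device to gfx1100.
export HSA_OVERRIDE_GFX_VERSION=11.0.0
# Optionally pin the HIP device index (machine-specific).
export HIP_VISIBLE_DEVICES=0
ollama serve
```

This is a configuration workaround, not official support: an unsupported ISA forced onto a supported one can crash or silently miscompute, so verify outputs before relying on it.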
Author
Owner

@palandovalex commented on GitHub (Jan 24, 2025):

What about the NPU on Apple silicon, specifically the M4?

<!-- gh-comment-id:2612732388 -->
Author
Owner

@ZiTAL commented on GitHub (Feb 21, 2025):

Same here for rockchip support :)

<!-- gh-comment-id:2673962798 -->
Author
Owner

@khw11044 commented on GitHub (Mar 15, 2025):

Same here for rockchip support :)

Do you know how to run Ollama using Rockchip's NPU?
I'm trying to run a LangChain project on Rockchip, and I've successfully converted the model into a .rkllm file for fast LLM execution. However, I don't know how to load it into LangChain.
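One common pattern for this (a sketch, not rkllm's actual API): expose the converted .rkllm model behind a small HTTP endpoint of your own, then wrap that endpoint in a plain Python callable that LangChain can consume (e.g. via `RunnableLambda`). The endpoint URL, the JSON shape, and the `format_request` helper below are all hypothetical:

```python
import json
import urllib.request


def format_request(prompt: str, max_tokens: int = 256) -> bytes:
    """Build the JSON payload for the (hypothetical) rkllm HTTP endpoint."""
    return json.dumps({"prompt": prompt, "max_tokens": max_tokens}).encode("utf-8")


def rkllm_complete(prompt: str, url: str = "http://localhost:8080/completion") -> str:
    """Call a self-hosted server wrapping the .rkllm model (URL is an assumption)."""
    req = urllib.request.Request(
        url, data=format_request(prompt), headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # Assumes the server returns {"text": "..."}; adjust to your server's schema.
        return json.loads(resp.read())["text"]


# In LangChain, any plain callable can be used as a chain step, e.g.:
#   from langchain_core.runnables import RunnableLambda
#   llm = RunnableLambda(rkllm_complete)
```

The NPU-specific part stays entirely on the server side; LangChain only sees an HTTP text-completion function.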

<!-- gh-comment-id:2726165808 -->
Author
Owner

@wishx commented on GitHub (Mar 24, 2025):

The 6.14 kernel with the AMD AI NPU driver has been released and is widely available. Just letting people know.
https://www.phoronix.com/news/Linux-6.14

<!-- gh-comment-id:2749096531 -->
Author
Owner

@seboss666 commented on GitHub (Aug 14, 2025):

Is there any news on this? Trying to get a fresh update: it seems the models need to be in a specific format suited to AMD NPUs. I don't have the knowledge to validate that statement, though (it would be a real bummer) :/
At least fall back on Vulkan, maybe? :)

<!-- gh-comment-id:3189851033 -->
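Until Ollama exposes a Vulkan backend itself, a Vulkan fallback can be tried with upstream llama.cpp directly. A sketch, assuming a recent checkout and the Vulkan SDK/headers installed; flag names may differ on older releases:

```shell
# Build llama.cpp with the Vulkan backend enabled.
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release
# Run a GGUF model; -ngl offloads that many layers to the Vulkan device.
./build/bin/llama-cli -m model.gguf -ngl 99 -p "Hello"
```

Note this targets the GPU via Vulkan, not the NPU; no Vulkan driver currently exposes NPU compute.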
Author
Owner

@InternetPseudonym commented on GitHub (Aug 20, 2025):

For anyone wondering when this might arrive:

> The firmware for the NPU is distributed as a closed source binary

meaning: pretty much "never". Closed source BS rarely gets much traction and usually dies within a few years (unless a major megacorp keeps pumping money into it). I would recommend forgetting about this closed source NPU BS and investigating real acceleration options instead (GPU acceleration is pretty good nowadays).

<!-- gh-comment-id:3205391962 -->
Author
Owner

@pomazanbohdan commented on GitHub (Aug 20, 2025):

https://github.com/amd/RyzenAI-SW ?

<!-- gh-comment-id:3205485068 -->
Author
Owner

@curious-boy-007 commented on GitHub (Sep 14, 2025):

@fatinghenji @pomazanbohdan @InternetPseudonym
I notice the Qualcomm NPU (Snapdragon) is now supported by this CLI; it also supports running the GGUF format, same as Ollama:

- https://github.com/NexaAI/nexa-sdk
- NPU: https://sdk.nexa.ai/model/Llama3.2-3B-NPU-Turbo

<!-- gh-comment-id:3289058951 -->
Author
Owner

@julianbarg commented on GitHub (Dec 20, 2025):

Linux upstreamed the Ryzen AI driver a while ago actually:

https://www.phoronix.com/review/linux-614-features

Has anyone experimented with this yet?
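A quick first check for anyone experimenting: see whether the upstream amdxdna driver is actually loaded by scanning /proc/modules. A small self-contained sketch (the helper names are just illustrative):

```python
from pathlib import Path


def loaded_modules(text: str) -> set:
    """Parse /proc/modules-style text into a set of kernel module names."""
    return {line.split()[0] for line in text.splitlines() if line.strip()}


def xdna_loaded(proc_modules: str = "/proc/modules") -> bool:
    """True if the amdxdna driver (upstreamed in Linux 6.14) is currently loaded."""
    try:
        return "amdxdna" in loaded_modules(Path(proc_modules).read_text())
    except OSError:
        # Not on Linux, or /proc unavailable.
        return False


if __name__ == "__main__":
    print("amdxdna loaded:", xdna_loaded())
```

Note that a loaded driver only means the kernel side is present; userspace (e.g. the Ryzen AI runtime) still has to target the device for inference to use it.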

<!-- gh-comment-id:3677941170 -->
Reference: github-starred/ollama#48357