Configure llama.cpp on AMD iGPU for local AI
Configuring llama.cpp on a modern TUXEDO laptop running Gentoo Linux with Wayland and Niri.
Laptop
Neofetch
OS: Gentoo Linux x86_64
Host: TUXEDO InfinityBook Pro AMD Gen10 Standard
Kernel: 6.17.1-gentoo-gentoo-dist
Uptime: 32 mins
Packages: 1091 (emerge)
Shell: bash 5.3.3
Resolution: 2560x1440
DE: niri
WM: sway
Theme: Adwaita [GTK3]
Icons: Adwaita [GTK3]
Terminal: kitty
CPU: AMD Ryzen AI 9 HX 370 w/ Radeon 890M (24) @ 5.157GHz
GPU: AMD ATI Radeon 880M / 890M
Memory: 3268MiB / 127904MiB
amdgpu_top
AMD Radeon 890M Graphics (0000:65:00.0, 0x150E:0xC1)
GFX1150/Strix Point
APU, GFX11_5, gfx1150, 16 CU, 600-2900 MHz
DDR5 128-bit, 512 MiB, 1000-2800 MHz
Memory Usage
VRAM: [ 461 / 512 MiB ]
GTT: [ 44406 / 63952 MiB ]
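The numbers above are what make this setup interesting: the dedicated VRAM carve-out is only 512 MiB, but the GTT pool (system RAM the iGPU can map) is roughly 64 GiB. As a quick sketch, the amdgpu driver exposes both limits in sysfs; the glob below is an assumption about how the card is enumerated on a given system, so adjust it as needed:

# Read the amdgpu memory pool sizes (reported in bytes).
for f in /sys/class/drm/card*/device/mem_info_vram_total \
         /sys/class/drm/card*/device/mem_info_gtt_total; do
    printf '%s: %s MiB\n' "$f" "$(( $(cat "$f") / 1024 / 1024 ))"
done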
llama.cpp
There's Vulkan backend support in llama.cpp, which enables use of the iGPU in my TUXEDO InfinityBook Pro AMD Gen10 Standard laptop, allowing system RAM to be used by the iGPU through the GTT pool.
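A minimal sketch of building llama.cpp with the Vulkan backend and offloading a model to the iGPU. It assumes the Vulkan loader and Mesa's RADV driver are already installed (on Gentoo, roughly media-libs/vulkan-loader and media-libs/mesa with USE=vulkan, plus dev-util/vulkan-tools for vulkaninfo); model.gguf is a placeholder path:

# Verify the Radeon 890M is visible to Vulkan (RADV).
vulkaninfo --summary

# Build llama.cpp with the Vulkan backend enabled.
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release -j"$(nproc)"

# Offload all layers to the iGPU; on an APU the weights are
# allocated from the GTT pool shown in the amdgpu_top output.
./build/bin/llama-cli -m model.gguf -ngl 99 -p "Hello"

With -ngl set higher than the model's layer count, everything is offloaded, and amdgpu_top should show GTT usage grow by roughly the model size.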
TODO ...