Gentoo Linux on modern hardware

Gentoo Linux installed on a modern TUXEDO laptop, with Wayland and Niri, offers a high-performance, customizable development environment. Gentoo, a source-based distribution, allows fine-tuned control over system components, which can be optimized for the TUXEDO hardware's robust specs, such as its high-end CPU and iGPU. Wayland provides a modern, secure display server protocol with smooth graphical performance. Niri, a minimalist tiling Wayland compositor, delivers a lightweight window-management experience that prioritizes efficiency and workspace organization, ideal for development. GitHub: tuxedocomputers.

Gentoo

Gentoo is a free operating system based on Linux that can be automatically optimized and customized for just about any application or need. Extreme configurability, performance, and a top-notch user and developer community are all hallmarks of the Gentoo experience.

Tuxedo computers

TUXEDO Computers are individually built, fully Linux-compatible computers and PCs: custom-tailored Linux hardware, so to speak. All TUXEDOs are delivered ready to go, so you only have to unwrap, plug in, and turn them on!

Neofetch

OS: Gentoo Linux x86_64
Host: TUXEDO InfinityBook Pro AMD Gen10 Standard
Kernel: 6.17.3-gentoo-gentoo-dist
Uptime: 32 mins
Packages: 1091 (emerge)
Shell: bash 5.3.3
Resolution: 2560x1440
DE: niri
WM: sway
Theme: Adwaita [GTK3]
Icons: Adwaita [GTK3]
Terminal: kitty
CPU: AMD Ryzen AI 9 HX 370 w/ Radeon 890M (24) @ 5.157GHz
GPU: AMD ATI Radeon 880M / 890M
Memory: 3268MiB / 127904MiB

Tuxedo kernel patches

Use /etc/portage/patches, e.g. as follows (note: this specific example patch is already included in Gentoo kernel 6.17.3):

Clone the TUXEDO Linux kernel and view an example patch, as follows:

# mkdir -p ~/src/Linux/Tuxedo
# cd ~/src/Linux/Tuxedo
# git clone https://gitlab.com/tuxedocomputers/development/packages/linux
# cd linux
# git show b9870c6ba4ce7d5300f405f620b79baabfbd08d7
commit b9870c6ba4ce7d5300f405f620b79baabfbd08d7
Author: 
Date:   Thu Jul 17 11:18:38 2025 +0200

    Input: i8042 - add TUXEDO InfinityBook Pro Gen10 AMD to i8042 quirk table

    Occasionally wakes up from suspend with missing input on the internal keyboard.
    Setting the quirks appears to fix the issue for this device as well.

    Signed-off-by: 

diff --git a/drivers/input/serio/i8042-acpipnpio.h b/drivers/input/serio/i8042-acpipnpio.h
index 6ed9fc34948c..1caa6c4ca435 100644
--- a/drivers/input/serio/i8042-acpipnpio.h
+++ b/drivers/input/serio/i8042-acpipnpio.h
@@ -1155,6 +1155,20 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
 		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
 					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
 	},
+	{
+		.matches = {
+			DMI_MATCH(DMI_BOARD_NAME, "XxHP4NAx"),
+		},
+		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+	},
+	{
+		.matches = {
+			DMI_MATCH(DMI_BOARD_NAME, "XxKK4NAx_XxSP4NAx"),
+		},
+		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+	},
 	/*
 	 * A lot of modern Clevo barebones have touchpad and/or keyboard issues
 	 * after suspend fixable with the forcenorestore quirk.

Put the patch in /etc/portage/patches, as follows:

# mkdir -p /etc/portage/patches/sys-kernel/gentoo-sources
# cd /etc/portage/patches/sys-kernel/gentoo-sources
# (cd ~/src/Linux/Tuxedo/linux ; git show b9870c6ba4ce7d5300f405f620b79baabfbd08d7) > b9870c6ba4ce7d5300f405f620b79baabfbd08d7.patch

The patch is applied automatically the next time emerge -a sys-kernel/gentoo-sources is run.

Beware: don't forget to delete the patch from /etc/portage/patches once it has been incorporated into sys-kernel/gentoo-sources itself.
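
To verify that the patch actually landed, one can grep the installed sources for one of the DMI board names it adds (a quick sanity check; the /usr/src/linux-* glob below is an assumption, adjust it to the installed kernel version):

# grep -n 'XxHP4NAx' /usr/src/linux-*/drivers/input/serio/i8042-acpipnpio.h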

Tuxedo drivers

A Gentoo ebuild for a recent version is on Codeberg: photonsphere/tuxedo-drivers-gentoo-ebuild. It can be cloned and installed per the instructions in the header of the ebuild file. It is simply a copy of the Gentoo repository's app-laptop/tuxedo-drivers with a version bump.

To install:

# eselect repository create local_repository
# mkdir -p /var/db/repos/local_repository/app-laptop/tuxedo-drivers
# cp tuxedo-drivers-4.17.0.ebuild /var/db/repos/local_repository/app-laptop/tuxedo-drivers/
# egencache --repo local_repository --update
# ebuild /var/db/repos/local_repository/app-laptop/tuxedo-drivers/tuxedo-drivers-4.17.0.ebuild digest
#
# echo "app-laptop/tuxedo-drivers ~amd64" > /etc/portage/package.accept_keywords/tuxedo-drivers
# emerge -a app-laptop/tuxedo-drivers
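
After the emerge, a quick sanity check that the modules built against the running kernel and can be loaded; module names vary between driver versions, tuxedo_keyboard is assumed here as one of the modules shipped by tuxedo-drivers:

# modprobe tuxedo_keyboard
# lsmod | grep -i tuxedo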

yt6801 wired network driver

A Gentoo ebuild is on Codeberg: photonsphere/yt6801-gentoo-ebuild. It can be cloned and installed per the instructions in the header of the ebuild file.

It uses the Makefiles from https://www.motor-comm.com/Public/Uploads/uploadfile/files/20250430/yt6801-linux-driver-1.0.30.zip and the TUXEDO-specific driver from tuxedo-yt6801/1.0.30tux4.

Beware: I have deleted an earlier version of this repository to avoid confusion (somehow I had completely missed TUXEDO's own tuxedo-yt6801 repository).

To install (the local repository may already have been created in the step above):

# eselect repository create local_repository
# mkdir -p /var/db/repos/local_repository/app-laptop/tuxedo-yt6801
# cp tuxedo-yt6801-1.0.30_p4.ebuild /var/db/repos/local_repository/app-laptop/tuxedo-yt6801/
# egencache --repo local_repository --update
# ebuild /var/db/repos/local_repository/app-laptop/tuxedo-yt6801/tuxedo-yt6801-1.0.30_p4.ebuild digest
#
# echo "app-laptop/tuxedo-yt6801 ~amd64" > /etc/portage/package.accept_keywords/tuxedo-yt6801
# emerge -a app-laptop/tuxedo-yt6801

Verify it:

# modprobe yt6801
# lsmod | grep yt6801
# ip a
# ping -c 3 photonsphere.org
# lspci -k | grep -A 3 'Ethernet controller'
64:00.0 Ethernet controller: Motorcomm Microelectronics. YT6801 Gigabit Ethernet Controller (rev 01)
DeviceName: Realtek Ethernet
Subsystem: AIstone Global Limited Device 7011
Kernel driver in use: yt6801

The last line, Kernel driver in use: yt6801, is the significant one!
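
To load the module automatically at boot instead of via a manual modprobe, it can be appended to the OpenRC module list (assuming OpenRC; on systemd the equivalent would be a file under /etc/modules-load.d):

# echo 'modules="${modules} yt6801"' >> /etc/conf.d/modules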

Local AI via iGPU

Configure llama.cpp to use the iGPU via its Vulkan backend, as follows.

amdgpu_top

AMD Radeon 890M Graphics (0000:65:00.0, 0x150E:0xC1)
GFX1150/Strix Point
APU, GFX11_5, gfx1150, 16 CU, 600-2900 MHz
DDR5 128-bit, 512 MiB, 1000-2800 MHz
Memory Usage
VRAM: [ 461 /   512 MiB ]
GTT: [ 44406 / 63952 MiB ]
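
The 512 MiB of VRAM is only the BIOS carve-out; llama.cpp allocates from the much larger GTT pool, which the amdgpu driver backs with system RAM (here exactly half of the 127904 MiB reported by neofetch above). Should a different budget ever be needed, here is a sketch of how to raise it, assuming GRUB and the amdgpu.gttsize kernel parameter (value in MiB):

# in /etc/default/grub, keeping any existing options:
GRUB_CMDLINE_LINUX_DEFAULT="... amdgpu.gttsize=98304"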

llama.cpp

There's Vulkan backend support in llama.cpp, which enables use of the iGPU in my TUXEDO InfinityBook Pro AMD Gen10 Standard laptop, allowing system RAM to be used by the iGPU.
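
Before pointing llama-server at the iGPU, it is worth confirming that the RADV driver actually exposes a Vulkan device; vulkaninfo (from the vulkan-tools package) does that:

# vulkaninfo --summary | grep -i deviceName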

llama.cpp start script

#!/bin/sh
#
# https://huggingface.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF
#

export MODEL="unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:Q8_0"
# export MODEL="unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:Q6_K_XL"
# export MODEL="unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:Q4_K_M"
export CONTEXT_WINDOW_SIZE=262144

# https://github.com/ggml-org/llama.cpp/issues/13178
#export QWEN3_CODER_TEMPLATE="./qwen3-coder-30b.jinja"

### --no-webui \
### --chat-template-file "${QWEN3_CODER_TEMPLATE}" \
HIP_VISIBLE_DEVICES="" GGML_BLAS_DEVICE=Vulkan0 GGML_BACKEND=Vulkan llama-server \
  --alias "${MODEL}" \
  --host 0.0.0.0 \
  --port 8088 \
  --temp 0.7 \
  --top_p 0.8 \
  --top_k 20 \
  --repeat-penalty 1.05 \
  -ngl 100 \
  --flash-attn on \
  -c ${CONTEXT_WINDOW_SIZE} \
  --keep 32 \
  --jinja \
  -t $(nproc) \
  --no-warmup \
  -hf "${MODEL}"
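
With the server up, the OpenAI-compatible endpoint can be smoke-tested with curl before wiring up opencode (the model field matches the --alias set in the script above):

$ curl -s http://127.0.0.1:8088/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:Q8_0", "messages": [{"role": "user", "content": "Say hello."}]}'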

opencode

Configuration for local llama.cpp is as follows:

~/.config/opencode/config.json

{
    "$schema": "https://opencode.ai/config.json",
    "provider": {
        "llama-server": {
            "npm": "@ai-sdk/openai-compatible",
            "options": {
                "baseURL": "http://127.0.0.1:8088/v1",
                "timeout": false
            },
            "models": {
                "llama-cpp": {
                    "tools": true,
                    "reasoning": true,
                    "options": {}
                }
            }
        },
        "ollama": {
            "npm": "@ai-sdk/openai-compatible",
            "options": {
                "baseURL": "http://localhost:11434/v1",
                "timeout": false
            },
            "models": {
                "ollama": {
                    "tools": true
                }
            }
        }
    }
}
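
With this in place, the local model should show up in opencode as llama-server/llama-cpp (the provider/model naming is an assumption, derived from the keys above).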

Speed

The timing below is from an incremental opencode turn: only the 426 new prompt tokens on top of the already-cached ~14k-token prefix had to be processed (at roughly 52 tokens per second), with generation running at roughly 19 tokens per second.

srv  log_server_r: request: POST /v1/chat/completions 127.0.0.1 200
srv  params_from_: Chat format: Hermes 2 Pro
slot get_availabl: id  0 | task 0 | selected slot by lcs similarity, lcs_len = 13995, similarity = 0.970 (> 0.100 thold)
slot launch_slot_: id  0 | task 438 | processing task
slot update_slots: id  0 | task 438 | new prompt, n_ctx_slot = 262144, n_keep = 32, n_prompt_tokens = 14421
slot update_slots: id  0 | task 438 | n_past = 13995, memory_seq_rm [13995, end)
slot update_slots: id  0 | task 438 | prompt processing progress, n_past = 14421, n_tokens = 426, progress = 0.029540
slot update_slots: id  0 | task 438 | prompt done, n_past = 14421, n_tokens = 426
slot      release: id  0 | task 438 | stop processing: n_past = 14445, truncated = 0
slot print_timing: id  0 | task 438 | 
prompt eval time =    8240.97 ms /   426 tokens (   19.35 ms per token,    51.69 tokens per second)
       eval time =    1310.71 ms /    25 tokens (   52.43 ms per token,    19.07 tokens per second)
      total time =    9551.69 ms /   451 tokens
srv  update_slots: all slots are idle

Log of a short opencode session

ggml_backend_load_best: /usr/bin/libggml-cpu-alderlake.so score: 128
ggml_backend_load_best: /usr/bin/libggml-cpu-icelake.so score: 1472
ggml_backend_load_best: /usr/bin/libggml-cpu-skylakex.so score: 192
ggml_backend_load_best: /usr/bin/libggml-cpu-haswell.so score: 64
ggml_backend_load_best: /usr/bin/libggml-cpu-sandybridge.so score: 21
ggml_backend_load_best: /usr/bin/libggml-cpu-sse42.so score: 5
ggml_backend_load_best: /usr/bin/libggml-cpu-x64.so score: 1
ggml_cuda_init: failed to initialize ROCm: no ROCm-capable device is detected
register_backend: registered backend ROCm (0 devices)
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = AMD Radeon 890M Graphics (RADV GFX1150) (radv) | uma: 1 | fp16: 1 | bf16: 0 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: KHR_coopmat
register_backend: registered backend Vulkan (1 devices)
register_device: registered device Vulkan0 (AMD Radeon 890M Graphics (RADV GFX1150))
register_backend: registered backend RPC (0 devices)
register_backend: registered backend CPU (1 devices)
register_device: registered device CPU (AMD Ryzen AI 9 HX 370 w/ Radeon 890M)
load_backend: loaded CPU backend from /usr/bin/libggml-cpu-icelake.so
register_backend: registered backend CPU (1 devices)
register_device: registered device CPU (AMD Ryzen AI 9 HX 370 w/ Radeon 890M)

build: 6692 (ca71fb9b) with HIP version: 6.4.43484-9999 for x86_64-pc-linux-gnu (debug)
system info: n_threads = 24, n_threads_batch = 24, total_threads = 24

system_info: n_threads = 24 (n_threads_batch = 24) / 24 | ROCm : NO_VMM = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX_VNNI = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | BMI2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | AVX512_BF16 = 1 | LLAMAFILE = 1 | OPENMP = 1 | REPACK = 1 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX_VNNI = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | BMI2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | AVX512_BF16 = 1 | LLAMAFILE = 1 | OPENMP = 1 | REPACK = 1 | 

main: binding port with default address family
main: HTTP server is listening, hostname: 0.0.0.0, port: 8088, http threads: 23
main: loading model
srv    load_model: loading model '/home/user/.cache/llama.cpp/unsloth_Qwen3-Coder-30B-A3B-Instruct-GGUF_Qwen3-Coder-30B-A3B-Instruct-Q4_K_M.gguf'
llama_model_load_from_file_impl: using device Vulkan0 (AMD Radeon 890M Graphics (RADV GFX1150)) (0000:65:00.0) - 42189 MiB free
llama_model_loader: loaded meta data with 44 key-value pairs and 579 tensors from /home/user/.cache/llama.cpp/unsloth_Qwen3-Coder-30B-A3B-Instruct-GGUF_Qwen3-Coder-30B-A3B-Instruct-Q4_K_M.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
...
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: offloading 48 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 49/49 layers to GPU
load_tensors:      Vulkan0 model buffer size = 17524.42 MiB
load_tensors:   CPU_Mapped model buffer size =   166.92 MiB
....................................................................................................
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 262144
llama_context: n_ctx_per_seq = 262144
llama_context: n_batch       = 2048
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = enabled
llama_context: kv_unified    = false
llama_context: freq_base     = 10000000.0
llama_context: freq_scale    = 1
llama_context: Vulkan_Host  output buffer size =     0.58 MiB
llama_kv_cache:    Vulkan0 KV buffer size = 24576.00 MiB
llama_kv_cache: size = 24576.00 MiB (262144 cells,  48 layers,  1/1 seqs), K (f16): 12288.00 MiB, V (f16): 12288.00 MiB
llama_context:    Vulkan0 compute buffer size =   792.01 MiB
llama_context: Vulkan_Host compute buffer size =   516.02 MiB
llama_context: graph nodes  = 2983
llama_context: graph splits = 2
common_init_from_params: added <|endoftext|> logit bias = -inf
common_init_from_params: added <|im_end|> logit bias = -inf
common_init_from_params: added <|fim_pad|> logit bias = -inf
common_init_from_params: added <|repo_name|> logit bias = -inf
common_init_from_params: added <|file_sep|> logit bias = -inf
common_init_from_params: setting dry_penalty_last_n to ctx_size = 262144
srv          init: initializing slots, n_slots = 1
slot         init: id  0 | task -1 | new slot n_ctx_slot = 262144
srv          init: Enable thinking? 0
main: model loaded
main: chat template, chat_template: {# Copyright 2025-present Unsloth. Apache 2.0 License. Unsloth Chat template fixes #}
...

main: server is listening on http://0.0.0.0:8088 - starting the main loop
srv  update_slots: all slots are idle
srv  params_from_: Chat format: Hermes 2 Pro
slot get_availabl: id  0 | task -1 | selected slot by LRU, t_last = -1
slot launch_slot_: id  0 | task 0 | processing task
slot update_slots: id  0 | task 0 | new prompt, n_ctx_slot = 262144, n_keep = 32, n_prompt_tokens = 13993
slot update_slots: id  0 | task 0 | n_past = 0, memory_seq_rm [0, end)
slot update_slots: id  0 | task 0 | prompt processing progress, n_past = 2048, n_tokens = 2048, progress = 0.146359
slot update_slots: id  0 | task 0 | n_past = 2048, memory_seq_rm [2048, end)
slot update_slots: id  0 | task 0 | prompt processing progress, n_past = 4096, n_tokens = 2048, progress = 0.292718
slot update_slots: id  0 | task 0 | n_past = 4096, memory_seq_rm [4096, end)
slot update_slots: id  0 | task 0 | prompt processing progress, n_past = 6144, n_tokens = 2048, progress = 0.439077
slot update_slots: id  0 | task 0 | n_past = 6144, memory_seq_rm [6144, end)
slot update_slots: id  0 | task 0 | prompt processing progress, n_past = 8192, n_tokens = 2048, progress = 0.585436
slot update_slots: id  0 | task 0 | n_past = 8192, memory_seq_rm [8192, end)
slot update_slots: id  0 | task 0 | prompt processing progress, n_past = 10240, n_tokens = 2048, progress = 0.731794
slot update_slots: id  0 | task 0 | n_past = 10240, memory_seq_rm [10240, end)
slot update_slots: id  0 | task 0 | prompt processing progress, n_past = 12288, n_tokens = 2048, progress = 0.878153
slot update_slots: id  0 | task 0 | n_past = 12288, memory_seq_rm [12288, end)
slot update_slots: id  0 | task 0 | prompt processing progress, n_past = 13993, n_tokens = 1705, progress = 1.000000
slot update_slots: id  0 | task 0 | prompt done, n_past = 13993, n_tokens = 1705
slot      release: id  0 | task 0 | stop processing: n_past = 14423, truncated = 0
slot print_timing: id  0 | task 0 | 
prompt eval time =  159995.96 ms / 13993 tokens (   11.43 ms per token,    87.46 tokens per second)
       eval time =   23213.62 ms /   431 tokens (   53.86 ms per token,    18.57 tokens per second)
      total time =  183209.58 ms / 14424 tokens
srv  update_slots: all slots are idle
srv  log_server_r: request: POST /v1/chat/completions 127.0.0.1 200
srv  params_from_: Chat format: Hermes 2 Pro
slot get_availabl: id  0 | task 0 | selected slot by lcs similarity, lcs_len = 13995, similarity = 0.970 (> 0.100 thold)
slot launch_slot_: id  0 | task 438 | processing task
slot update_slots: id  0 | task 438 | new prompt, n_ctx_slot = 262144, n_keep = 32, n_prompt_tokens = 14421
slot update_slots: id  0 | task 438 | n_past = 13995, memory_seq_rm [13995, end)
slot update_slots: id  0 | task 438 | prompt processing progress, n_past = 14421, n_tokens = 426, progress = 0.029540
slot update_slots: id  0 | task 438 | prompt done, n_past = 14421, n_tokens = 426
slot      release: id  0 | task 438 | stop processing: n_past = 14445, truncated = 0
slot print_timing: id  0 | task 438 | 
prompt eval time =    8240.97 ms /   426 tokens (   19.35 ms per token,    51.69 tokens per second)
       eval time =    1310.71 ms /    25 tokens (   52.43 ms per token,    19.07 tokens per second)
      total time =    9551.69 ms /   451 tokens
srv  update_slots: all slots are idle
srv  log_server_r: request: POST /v1/chat/completions 127.0.0.1 200
^Csrv    operator(): operator(): cleaning up before exit...

DISCLAIMER

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.