Add HID-BPF tutorial and implementation for virtual mouse input modification
- Introduced a comprehensive tutorial in README.md explaining how to fix broken HID devices using eBPF without kernel patches.
- Implemented a userspace program (hid-input-modifier.c) that creates a virtual HID mouse using the uhid interface and sends synthetic mouse events.
- Developed a BPF program (hid-input-modifier.bpf.c) that intercepts HID events and modifies mouse movement data, effectively doubling the X and Y movement.
- Created necessary header files (hid_bpf.h, hid_bpf_defs.h, hid_bpf_helpers.h) to define structures and helper functions for the BPF program.
- Added functionality to find and manage the virtual HID device, ensuring seamless integration with the BPF program.
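For orientation, the core of the movement-doubling logic looks roughly like the sketch below. This is not the commit's hid-input-modifier.bpf.c; it is a minimal illustration that assumes the fmod_ret-style `hid_bpf_device_event` hook (the pre-6.11 HID-BPF API) and a plain 4-byte mouse report laid out as buttons, X, Y, wheel.

```c
// SPDX-License-Identifier: GPL-2.0
/* Minimal sketch of the movement-doubling idea, NOT the commit's
 * hid-input-modifier.bpf.c. Assumptions: the fmod_ret-style HID-BPF
 * entry point (pre-6.11 API) and a simple 4-byte mouse report laid
 * out as [buttons, X, Y, wheel] with signed 8-bit relative X/Y. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include "hid_bpf_helpers.h"  /* declares struct hid_bpf_ctx and hid_bpf_get_data() */

SEC("fmod_ret/hid_bpf_device_event")
int BPF_PROG(hid_double_movement, struct hid_bpf_ctx *hctx)
{
	/* Ask for a writable view of the first 4 bytes of the incoming report. */
	__u8 *data = hid_bpf_get_data(hctx, 0 /* offset */, 4 /* size */);
	__s8 x, y;

	if (!data)
		return 0; /* report too short: pass it through untouched */

	x = (__s8)data[1];
	y = (__s8)data[2];
	data[1] = (__u8)(x * 2); /* double X (overflow ignored for brevity) */
	data[2] = (__u8)(y * 2); /* double Y */

	return 0; /* 0 = keep processing the (now modified) event */
}

char LICENSE[] SEC("license") = "GPL";
```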
@@ -4,6 +4,8 @@ When games stutter or ML training slows down, the answers lie inside the GPU ker
This tutorial shows how to monitor GPU activity using eBPF and bpftrace. We'll track DRM scheduler jobs, measure latency, and diagnose bottlenecks using stable kernel tracepoints that work across Intel, AMD, and Nouveau drivers.
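Before diving in, here is a minimal libbpf-style sketch (not the tutorial's bpftrace script) that simply counts how often the DRM scheduler tracepoints fire. It assumes the kernel exposes the shared `gpu_scheduler` tracepoint group with the events `drm_sched_job`, `drm_run_job`, and `drm_sched_process_job`; verify with `bpftrace -l 'tracepoint:gpu_scheduler:*'`.

```c
// SPDX-License-Identifier: GPL-2.0
/* Sketch: count DRM scheduler activity. The gpu_scheduler tracepoint
 * names are assumptions based on the shared scheduler in mainline
 * drivers/gpu/drm/scheduler/. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

char LICENSE[] SEC("license") = "GPL";

/* Slot 0: jobs queued, slot 1: jobs started on hardware, slot 2: jobs completed. */
struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 3);
	__type(key, __u32);
	__type(value, __u64);
} counts SEC(".maps");

static __always_inline void bump(__u32 slot)
{
	__u64 *val = bpf_map_lookup_elem(&counts, &slot);

	if (val)
		__sync_fetch_and_add(val, 1);
}

SEC("tracepoint/gpu_scheduler/drm_sched_job")
int tp_sched_job(void *ctx) { bump(0); return 0; }    /* job handed to the scheduler */

SEC("tracepoint/gpu_scheduler/drm_run_job")
int tp_run_job(void *ctx) { bump(1); return 0; }      /* job pushed to the hardware ring */

SEC("tracepoint/gpu_scheduler/drm_sched_process_job")
int tp_process_job(void *ctx) { bump(2); return 0; }  /* job's fence signaled (completed) */
```

While a GPU workload runs, the counters can be read from userspace, for example with `bpftool map dump name counts`.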
> The complete source code: <https://github.com/eunomia-bpf/bpf-developer-tutorial/tree/main/src/xpu/gpu-kernel-driver>
## GPU Kernel Tracepoints: Zero-Overhead Observability
GPU tracepoints are instrumentation points built into the kernel's Direct Rendering Manager (DRM) subsystem. When your GPU schedules a job, allocates memory, or signals a fence, these tracepoints fire with precise timing and driver state.
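To see that timing in practice, the sketch below records a log2 histogram of the gap between consecutive fence-signal events. It relies on the same assumed `gpu_scheduler:drm_sched_process_job` tracepoint as above and lumps all rings together, so it shows completion cadence rather than per-job latency.

```c
// SPDX-License-Identifier: GPL-2.0
/* Sketch: log2 histogram (in microseconds) of the gap between consecutive
 * fence signals reported by drm_sched_process_job. The tracepoint name is
 * an assumption; cross-CPU races on the global timestamp are ignored. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

char LICENSE[] SEC("license") = "GPL";

#define MAX_SLOTS 32

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, MAX_SLOTS);
	__type(key, __u32);
	__type(value, __u64);
} hist SEC(".maps");

__u64 last_signal_ns; /* timestamp of the previous fence signal */

static __always_inline __u32 log2_slot(__u64 v)
{
	__u32 slot = 0;

	while (v > 1 && slot < MAX_SLOTS - 1) {
		v >>= 1;
		slot++;
	}
	return slot;
}

SEC("tracepoint/gpu_scheduler/drm_sched_process_job")
int tp_fence_signal(void *ctx)
{
	__u64 now = bpf_ktime_get_ns();
	__u64 prev = last_signal_ns;
	__u64 *cnt;
	__u32 slot;

	last_signal_ns = now;
	if (!prev || now <= prev)
		return 0;

	slot = log2_slot((now - prev) / 1000); /* ns -> us */
	cnt = bpf_map_lookup_elem(&hist, &slot);
	if (cnt)
		__sync_fetch_and_add(cnt, 1);
	return 0;
}
```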
@@ -4,6 +4,8 @@ Neural Processing Units (NPUs) are the next frontier in AI acceleration - built
This tutorial shows you how to trace Intel NPU kernel driver operations using eBPF and bpftrace. We'll monitor the complete workflow from Level Zero API calls down to kernel functions, track IPC communication with NPU firmware, measure memory allocation patterns, and diagnose performance bottlenecks. By the end, you'll understand how NPU drivers work internally and have practical tools for debugging AI workload issues.
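As a taste of the kernel-side tracing, the sketch below times one assumed IPC entry point with a kprobe/kretprobe pair. The symbol name `ivpu_ipc_send_receive` is an assumption about the intel_vpu driver; confirm it against your running kernel (for example in `/proc/kallsyms`) before loading.

```c
// SPDX-License-Identifier: GPL-2.0
/* Sketch: time NPU driver IPC round-trips with a kprobe/kretprobe pair.
 * The symbol ivpu_ipc_send_receive is an assumed intel_vpu function name;
 * check /proc/kallsyms on your kernel before loading. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 1024);
	__type(key, __u64);   /* pid_tgid of the calling task */
	__type(value, __u64); /* entry timestamp, ns */
} starts SEC(".maps");

SEC("kprobe/ivpu_ipc_send_receive")    /* assumed symbol name */
int BPF_KPROBE(ipc_entry)
{
	__u64 id = bpf_get_current_pid_tgid();
	__u64 ts = bpf_ktime_get_ns();

	bpf_map_update_elem(&starts, &id, &ts, BPF_ANY);
	return 0;
}

SEC("kretprobe/ivpu_ipc_send_receive") /* assumed symbol name */
int BPF_KRETPROBE(ipc_exit)
{
	__u64 id = bpf_get_current_pid_tgid();
	__u64 *ts = bpf_map_lookup_elem(&starts, &id);

	if (ts) {
		bpf_printk("ivpu IPC round-trip: %llu us",
			   (bpf_ktime_get_ns() - *ts) / 1000);
		bpf_map_delete_elem(&starts, &id);
	}
	return 0;
}
```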
> The complete source code: <https://github.com/eunomia-bpf/bpf-developer-tutorial/tree/main/src/xpu/npu-kernel-driver>
## Intel NPU Driver Architecture
Intel's NPU driver follows a two-layer architecture similar to GPU drivers. The kernel module (`intel_vpu`) lives in mainline Linux at `drivers/accel/ivpu/` and exposes `/dev/accel/accel0` as the device interface. This handles hardware communication, memory management through an MMU, and IPC (Inter-Processor Communication) with NPU firmware running on the accelerator itself.
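One low-risk way to confirm which processes reach that device interface is to watch `openat(2)` calls for the `/dev/accel/` prefix. The sketch below needs no driver-specific symbols, only the generic syscall tracepoint; the exact node name (`accel0`) may differ on systems with more than one accelerator.

```c
// SPDX-License-Identifier: GPL-2.0
/* Sketch: report which processes open an accel device node. Uses only the
 * generic openat syscall tracepoint; the /dev/accel/ prefix comes from the
 * text above. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

char LICENSE[] SEC("license") = "GPL";

SEC("tracepoint/syscalls/sys_enter_openat")
int trace_accel_open(struct trace_event_raw_sys_enter *ctx)
{
	const char prefix[] = "/dev/accel/";
	char path[32];
	char comm[16];
	__u32 i;

	/* args[1] is the pathname argument of openat(2). */
	if (bpf_probe_read_user_str(path, sizeof(path),
				    (const char *)ctx->args[1]) < 0)
		return 0;

	/* Cheap prefix compare; bail out on the first mismatch. */
	for (i = 0; i < sizeof(prefix) - 1; i++) {
		if (path[i] != prefix[i])
			return 0;
	}

	bpf_get_current_comm(comm, sizeof(comm));
	bpf_printk("%s opened %s", comm, path);
	return 0;
}
```

The output lands in `/sys/kernel/debug/tracing/trace_pipe`, which is enough to correlate Level Zero applications with their device opens.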