Linux kernel patch summaries, generated daily
Today's bpf-next activity spanned three significant feature series alongside a pair of RFC submissions. Leon Hwang's long-running series (now at v12) to extend the BPF syscall with common attributes landed alongside new kfunc work from Mahe Tardy and arm64 JIT improvements from Puranjay Mohan. Mohan also submitted an RFC XDP load-balancer benchmark framework, while Justin Suess introduced support for storing referenced struct file kptrs in BPF maps.
bpf: Implement dtor for struct file BTF ID
Implements a destructor for the struct file BTF ID, enabling BPF maps to store referenced struct file kptrs. This is the core kernel patch of a two-part series that adds proper lifecycle management for file references held inside BPF maps. Tracking struct file references prevents resource leaks when map entries are removed or the map itself is destroyed. The accompanying selftest verifies that map-stored file kptrs are correctly acquired and released.
bpf, arm64: Map BPF_REG_0 to x8 instead of x7
Remaps BPF_REG_0 to the arm64 x8 register (the indirect result register) to free x7 for use as a stack-argument-passing register under the AAPCS64 calling convention. This register reassignment is a prerequisite for the arm64 BPF JIT to support BPF programs calling kernel functions that pass arguments on the stack rather than solely in registers. Follow-on patches in the series add the JIT logic for stack arguments and enable the relevant selftests on arm64.
bpf: Extend BPF syscall with common attributes support
Introduces a unified common-attributes mechanism for the BPF syscall, allowing prog_load, btf_load, and map_create commands to share a consistent log-size reporting path. At version 12, this series also adds libbpf support and the ability for userspace to retrieve the true log buffer size when BPF object loading fails. The change reduces duplication in the BPF syscall implementation and makes failure diagnostics more consistent across all BPF object types.
bpf: add bpf_icmp_send_unreach kfunc
Adds a new kfunc bpf_icmp_send_unreach that allows BPF programs to generate ICMP destination-unreachable messages for both IPv4 and IPv6. This enables tc and XDP programs to reject packets with meaningful ICMP feedback rather than silently dropping them, improving network-level error signaling. The series refactors netfilter helper functions into core ipv4/ipv6 to make them reusable outside of netfilter, and is accompanied by comprehensive tests covering both address families and recursion safety.
selftests/bpf: Add bench_force_done() for early benchmark completion
First patch of an RFC series adding an XDP load-balancer benchmark to the BPF selftest suite. This patch introduces bench_force_done(), a helper that lets a benchmark signal early completion without waiting for the full configured duration. Subsequent patches build a batch-timing library, a full XDP load-balancer BPF program with common definitions, and a driver and shell script to run the benchmark end-to-end. The RFC status invites feedback on the benchmark design and infrastructure before finalization.
selftests/bpf: fix off-by-one in bpf_cpumask_populate related selftest
Fixes an off-by-one error in the selftest for bpf_cpumask_populate. The bug caused the test to iterate one index past the valid CPU range, potentially producing incorrect results or accessing memory beyond the expected bounds on systems with specific CPU counts. This is a standalone single-patch fix with no other dependencies.
Generated 2026-04-22T00:00:00Z
Activity on April 19–20 was dominated by Yonghong Song's v6 of the stack-arguments series for BPF functions and kfuncs, a 17-patch set that extends the BPF calling convention to pass arguments on the stack beyond the five argument registers (R1-R5), with full x86-64 JIT support. Two smaller patches rounded out the day: Aaron Tomlin fixed libbpf to properly reject negative kprobe offsets, and Matt Bobrowski corrected an off-by-one error in the bpf_cpumask_populate selftest.
bpf: Support stack arguments for bpf functions
This patch adds verifier support for BPF subprogram functions to receive arguments on the stack, enabling function signatures with more than the five register-based parameters (R1-R5). A new BPF_REG_PARAMS mechanism tracks stack-argument state throughout verifier analysis, and the calling convention is updated to lay out excess parameters in a well-defined region of the caller's stack frame. This is patch 07 of a 17-part series (v6) that collectively introduces stack argument passing for both BPF functions and kfuncs. The change is the core enabler for the rest of the series and requires corresponding JIT backend work to become operational.
bpf: Support stack arguments for kfunc calls
Extends the new stack argument infrastructure to kfunc calls, allowing kernel functions exposed to BPF programs to accept arguments beyond the five-register argument limit. The verifier is updated to validate that stack argument types and sizes match the expected kfunc BTF signature, keeping the calling convention consistent with BPF-to-BPF calls. This patch is the twelfth in the series and pairs tightly with the BPF subprogram stack argument changes introduced earlier. Unified handling across both call sites simplifies future extensions to the argument-passing mechanism.
bpf,x86: Implement JIT support for stack arguments
Implements the x86-64 JIT backend changes required to physically place excess function arguments onto the stack when calling BPF subprograms or kfuncs. The JIT allocates additional stack space for spilled parameters and emits the appropriate store instructions to lay them out before the call site. Programs using stack arguments are rejected by the verifier on architectures without JIT support, making x86-64 the first architecture on which the feature is usable in practice. Other JIT backends can add support independently by following the same pattern.
libbpf: Report error when a negative kprobe offset is specified
Fixes a libbpf bug where a negative offset for a kprobe attachment would be silently accepted rather than rejected at the library level, leading to confusing downstream failures. With this patch, libbpf validates the offset field and returns -EINVAL if a negative value is provided. This is the third revision of the fix, addressing earlier review feedback on where in the attachment path the check should live. Negative kprobe offsets are not supported by the kernel, and catching them early improves the user experience for programs that misconfigure their probes.
selftests/bpf: fix off-by-one in bpf_cpumask_populate related selftest
Corrects an off-by-one error in a BPF selftest exercising bpf_cpumask_populate, where the loop bound caused a read one element past the intended array boundary. The bug could produce spurious failures or undefined behavior on configurations where the adjacent memory was not safely accessible. The fix is a one-line bound correction with no impact on the BPF subsystem itself. Keeping selftests clean ensures CI results accurately reflect real regressions rather than test-infrastructure noise.
Generated 2026-04-21T00:00:00Z
Today's bpf-next activity featured work spanning the verifier, kfuncs, and libbpf. Kumar Kartikeya Dwivedi posted v3 of a series adding verifier warning infrastructure and a kfunc deprecation annotation, enabling non-fatal diagnostic messages during BPF program loading. Puranjay Mohan posted v13 of a long-running series introducing CPU time counter kfuncs with arm64 JIT support, bringing high-resolution per-CPU timing to BPF programs.
libbpf: Report error when a negative kprobe offset is specified
libbpf now returns an error when a user specifies a negative offset for a kprobe attachment point. Previously this case could be silently accepted, leading to undefined behavior at attach time. This is a defensive input validation improvement that catches misconfigured kprobe offsets early during program load rather than at runtime.
bpf: Add support for verifier warning messages
Introduces a new mechanism in the BPF verifier to emit non-fatal warning messages during program verification. Unlike verifier errors that abort loading, warnings allow programs to load successfully while surfacing diagnostic information to the user. This patch is the foundation of the series, adding the core warning message infrastructure that subsequent patches in the series build upon.
bpf: Introduce __bpf_kfunc_mark_deprecated annotation
Adds the `__bpf_kfunc_mark_deprecated` annotation macro that kernel developers can use to mark kfuncs as deprecated. When a BPF program calls a deprecated kfunc, the verifier emits a warning rather than rejecting the program outright. This enables gradual kfunc lifecycle management, giving users time to migrate away from old APIs without breaking existing BPF programs.
libbpf: Request verifier warnings for object loads
Updates libbpf to opt in to the new verifier warning infrastructure when loading BPF objects, so that warning messages emitted by the kernel verifier are surfaced to userspace. This wires the kernel-side warning mechanism into the standard BPF program loading path. Users relying on libbpf will automatically receive deprecation and other verifier warnings without any application-level changes.
bpf: add bpf_get_cpu_time_counter kfunc
Introduces the `bpf_get_cpu_time_counter` kfunc, which exposes the per-CPU hardware time counter to BPF programs. This allows BPF programs to perform high-resolution timing measurements using the CPU's native cycle counter. Part of a series that has reached v13 after extensive review, this kfunc gives BPF programs direct access to low-overhead hardware timing primitives.
bpf: add bpf_cpu_time_counter_to_ns kfunc
Adds `bpf_cpu_time_counter_to_ns` as a companion kfunc to convert raw CPU time counter values to nanoseconds. Raw cycle counter values are CPU-frequency-dependent and not directly portable, so this conversion kfunc makes timing results meaningful across different hardware. Together with `bpf_get_cpu_time_counter`, BPF programs can now perform accurate, portable elapsed-time measurements.
bpf, arm64: Add JIT support for cpu time counter kfuncs
Adds arm64 JIT backend support for the new CPU time counter kfuncs, enabling them to be efficiently inlined on AArch64 hardware. Without JIT support the kfuncs would fall back to a slower generic execution path. This patch completes the architecture-specific optimization needed for production-quality use of the CPU timing kfuncs on arm64 systems.
Generated 2026-04-19T09:51:17Z
A busy day on bpf-next, dominated by Jiri Olsa's 28-patch tracing_multi link series, which introduces a new BPF link type for attaching a single program to multiple kernel functions simultaneously via a single syscall. Yonghong Song's 16-patch series adding stack argument support for BPF functions and kfuncs also appeared, extending the calling convention to pass structs beyond the five-register argument limit, with x86-64 JIT support.
bpf: Add support for tracing multi link
Introduces the new BPF_LINK_TYPE_TRACING_MULTI link type, allowing a single BPF tracing program to be attached to many kernel functions at once rather than requiring one link per function. The implementation reuses and extends the existing trampoline infrastructure, adding bpf_trampoline_multi_attach/detach helpers to manage bulk attachment. This is a significant usability improvement for tools that need to trace large numbers of functions—for example, function-graph style tracers or security monitors—without the overhead of managing thousands of individual links.
libbpf: Add support to create tracing multi link
Adds the libbpf-side API for creating tracing_multi links, exposing the new kernel capability to userspace BPF programs. The patch wires up bpf_link_create() for the new attach type and introduces a btf_type_is_traceable_func() helper so that callers can filter BTF entries to only traceable functions before bulk attachment. Together with the kernel patches in this series, libbpf users gain a high-level interface for multi-function tracing.
bpf: Support stack arguments for bpf functions
Extends the BPF verifier and calling convention to allow struct arguments larger than eight bytes to be passed on the stack to BPF-to-BPF calls, mirroring the C ABI on x86-64. Previously BPF functions were limited to six register-width arguments; this patch introduces the BPF_REG_PARAMS pseudo-register to track stack-passed parameters and updates the verifier to validate them. The change is a prerequisite for supporting the full kfunc calling convention when kfuncs themselves accept stack-spilled arguments.
bpf: Support stack arguments for kfunc calls
Adds verifier support for kfunc calls that take struct arguments passed on the stack, complementing the BPF-function stack-argument patch in the same series. The patch enforces that such structs are no larger than eight bytes per slot and rejects stack arguments when tail calls are reachable (since tail calls don't preserve the stack frame). x86-64 JIT emission for the new calling convention is handled by a companion patch in the series.
bpf: Extend BTF UAPI vlen, kinds to use unused bits
Expands the BTF type-info fields by repurposing currently-unused bits in the type_info word, raising the vlen limit from 16 bits to 24 bits and the kind field from 5 bits to 8 bits. This removes a long-standing constraint on the number of struct members and enum values that can be described in a single BTF type, which matters for very large auto-generated BTF from complex kernel structs. The series updates libbpf, bpftool, and selftests to handle the wider fields, with libbpf gaining a feature-probe to detect kernel support.
arm32, bpf: Reject BPF-to-BPF calls and callbacks in the JIT
Makes the 32-bit ARM BPF JIT explicitly reject programs that use BPF-to-BPF calls or callbacks, which the JIT does not implement, rather than silently producing incorrect code. This is a correctness fix: without the rejection the interpreter would be invoked as a fallback but with a JIT-compiled caller, leading to undefined behavior. The v2 revision consolidates the rejection of both BPF_PSEUDO_CALL and callback-carrying helper calls into a single check.
selftests/bpf: Trace bpf_local_storage_update to debug flaky local storage tests
Adds an fentry probe on bpf_local_storage_update in the BPF local-storage selftests to capture diagnostic information when the tests fail intermittently. Flaky local-storage tests have been observed under memory pressure; the additional tracing helps identify whether failures correlate with concurrent updates or allocation failures. This is a test-infrastructure improvement rather than a kernel change.
Generated 2026-04-18T09:52:31Z
A productive day on bpf-next with three major series in flight. Yonghong Song's v5 stack-argument series for BPF functions and kfuncs reached near-final shape, while Paul Chaignon posted an RFC improving verifier register-bounds refinement for 32-to-64-bit range propagation. Mykyta Yatsenko fixed a NULL dereference in the verifier's kptr slot type-checking path, and Nick Hudson continued refining tunnel decapsulation flags for skb_adjust_room.
bpf: Support stack arguments for bpf functions
The core patch of Yonghong Song's 16-patch v5 series, teaching the BPF verifier to accept struct arguments passed on the stack in BPF-to-BPF calls. A new BPF_REG_PARAMS pseudo-register tracks the stack pointer for parameter spilling, and the verifier validates that stack slots are properly initialized before the call. The x86-64 JIT is updated in a companion patch to emit the required push/pop sequences, while non-JITed paths and tail-call-reachable paths are explicitly rejected.
bpf: Fix NULL deref in map_kptr_match_type for scalar regs
Fixes a NULL pointer dereference in map_kptr_match_type() that occurs when a BPF program tries to store a scalar register into a map slot typed as a kernel pointer (kptr). The function assumed the source register always holds a pointer with associated BTF type info, but scalars have no such info, causing a crash during verification. The fix adds a scalar-register check before accessing the BTF type, and the companion selftest confirms the verifier now properly rejects such stores.
bpf: Extend BTF UAPI vlen, kinds to use unused bits
Version 2 of Alan Maguire's series widening the BTF type-info word's vlen field from 16 to 24 bits and the kind field from 5 to 8 bits by repurposing reserved bits. The kernel change is accompanied by libbpf updates that add a feature probe for extended-vlen kernel support and adjust btf_vlen() to return __u32, plus bpftool changes to display and handle 24-bit vlen values. This removes a hard ceiling on the number of members in large structs and enum types representable in BTF.
bpf/verifier: Use intersection checks when simulating to detect dead branches
An RFC series improving the BPF verifier's ability to prune dead branches by using intersection checks between tnum (tracked number) constraints and integer range bounds when simulating conditional jumps. The series also fixes a bug in the verifier's slow-mode reg_bounds path and improves 32-to-64-bit range refinement so that the verifier derives tighter 64-bit bounds from known 32-bit constraints. Several new selftests capture the refinement cases that were previously missed.
bpf: add BPF_F_ADJ_ROOM_DECAP_* flags for tunnel decapsulation
Introduces new BPF_F_ADJ_ROOM_DECAP_L3_IPV4 and BPF_F_ADJ_ROOM_DECAP_L3_IPV6 flags for the bpf_skb_adjust_room() helper, allowing BPF programs performing tunnel decapsulation to signal the kernel that the outer IP header has been removed. A companion patch clears the GSO tunnel state in skb_adjust_room when decap flags are set, preventing the networking stack from incorrectly re-segmenting the now-bare inner packet. The v4 revision also adds a tc_tunnel selftest validating the GSO state after decapsulation.
selftests/bpf: Add BPF_STRICT_BUILD toggle
The first patch of Ricardo B. Marlière's v7 11-patch series that makes the BPF selftest build system more robust against partial kernel configurations. This patch adds a BPF_STRICT_BUILD Makefile toggle: when unset, compilation and BPF skeleton generation failures are tolerated rather than aborting the whole build. Subsequent patches in the series handle benchmark build failures, cross-test weak-symbol definitions, and install-time missing-file tolerance, making it practical to build and run BPF selftests on distro kernels without full source trees.
Generated 2026-04-17T10:16:06Z
The most notable submission was Mykyta Yatsenko's v10 of sleepable tracepoint support, a long-requested feature that allows raw and classic tracepoint BPF programs to call sleeping helpers and kfuncs. Nick Hudson's v4 series introduced new BPF_F_ADJ_ROOM_DECAP_* flags to fix GSO state corruption during tunnel decapsulation. Harishankar Vishwanathan improved the verifier's branch pruning with tnum intersection checks, and Ricardo B. Marlière posted an 11-patch series overhauling the BPF selftests build system to tolerate partial kernel configurations.
bpf: Add sleepable support for raw tracepoint programs
Adds support for BPF programs attaching to raw tracepoints to be marked sleepable, enabling them to call helpers and kfuncs that may sleep. This has been a long-requested feature (v10 of this series), as raw tracepoints see heavy use in production tracing infrastructure but could not previously use the growing set of sleepable-only BPF primitives. The series also extends libbpf with new section handlers for sleepable tracepoints and adds verifier logic to validate the sleepable flag for these program types.
bpf: add BPF_F_ADJ_ROOM_DECAP_* flags for tunnel decapsulation
Introduces new BPF_F_ADJ_ROOM_DECAP_* flags for the bpf_skb_adjust_room() helper to properly signal tunnel decapsulation operations to the kernel. Previously, programs performing decapsulation had no standard way to inform the kernel that GSO state needed updating after header removal, leading to potential packet corruption on large segmented packets. This series pairs the new flags with a fix to clear GSO state appropriately in skb_adjust_room when decapsulating.
bpf/verifier: Use intersection checks when simulating to detect dead branches
Improves the BPF verifier's branch pruning by computing tnum/u64 intersections to detect branches that can never be taken given current register constraints. This reduces the number of states the verifier must explore for programs with range checks, lowering verification time for complex programs. The accompanying selftest adds cases where the tnum and u64 ranges produce an empty intersection, verifying that the verifier correctly prunes those paths.
bpf: copy BPF token from main program to subprograms
V4 of the fix ensuring BPF token delegation is correctly propagated from a main BPF program to its subprograms during verification. Without this, privileged operations in subprograms are incorrectly rejected even when the token grants the necessary permissions. This iteration addresses review feedback from v3 and improves selftest coverage verifying that kallsyms entries are present for token-loaded subprograms.
selftests/bpf: Add BPF_STRICT_BUILD toggle
First patch in an 11-part series overhauling the BPF selftests build system to tolerate partial kernel configurations. Introduces a BPF_STRICT_BUILD toggle that lets upstreams enforce strict build behavior while allowing distro kernel CI environments to skip tests for features not compiled in. The full series handles BPF object compilation failures, skeleton generation failures, benchmark build failures, and install-time missing file handling.
selftests/bpf: Use local type for flow_offload_tuple_rhash in xdp_flowtable
Updates BPF selftests to use local type definitions for kfunc declarations rather than pulling in internal kernel headers directly, improving portability across kernel versions and configurations. The series covers two test files—xdp_flowtable and test_tunnel_kern—both of which referenced internal kernel types that can differ between kernel builds. Using local type definitions avoids header inclusion issues that arise when testing against distro or out-of-tree kernels.
Generated 2026-04-17T00:00:00Z
The day's patches centered on two substantial new features: Alan Maguire's series extending BTF's btf_type struct to use previously unused bits for larger vlen and kind fields, and Leon Hwang's v4 series introducing global per-CPU data support in BPF programs. Eduard Zingerman continued refining BPF token propagation to subprograms, while KaFai Wan added a kernel-side guard rejecting TCP_NODELAY from BPF TCP header option callbacks.
bpf: Introduce global percpu data
Introduces first-class support for global per-CPU variables in BPF programs, allowing programs to declare and use per-CPU data in a way that is reflected in generated skeletons. This eliminates the need for manual per-CPU map management when global per-CPU state is desired. The series also adds BPF_F_ALL_CPUS flag support for per-CPU map updates and extends libbpf with feature probing and skeleton generation for the new type.
bpf: Extend BTF UAPI vlen, kinds to use unused bits
Extends the BTF btf_type UAPI to repurpose previously unused bits, expanding the vlen field from 16 to 24 bits and the kind field from 5 to 8 bits. This unblocks future growth of BTF type counts (particularly for large structs with many members) and new kind definitions. The series includes matching libbpf feature detection, bpftool support for the wider fields, and selftest coverage for the new limits.
bpf: copy BPF token from main program to subprograms
Fixes a bug where the BPF token associated with a main program was not propagated to its subprograms during verification, causing permission checks on subprogram-specific operations to fail when loading via token delegation. Without this fix, privileged operations in subprograms could be incorrectly rejected even when the token grants the necessary permissions. The accompanying selftest verifies that kallsyms entries are correctly created for token-loaded subprograms.
bpf: tcp: Reject TCP_NODELAY from BPF hdr opt callbacks
Adds a kernel-side guard to reject attempts to set TCP_NODELAY from within BPF TCP header option write and reserve callbacks. Setting TCP_NODELAY from these callbacks can cause unexpected behavior because the callback context does not allow safe modification of socket-level TCP flags. The patch ensures consistent and safe behavior by failing such attempts early with an appropriate error code.
bpf: Refactor dynptr mutability tracking
Refactors how the BPF verifier tracks whether a dynptr is mutable or read-only, consolidating the logic to make it cleaner and easier to extend. The existing tracking was spread across multiple code paths using implicit conventions; this change makes mutability an explicit property of dynptr state. This v3 incorporates reviewer feedback from earlier rounds and should make future dynptr feature additions less error-prone.
s390/bpf: inline smp_processor_id and current_task
Teaches the s390 BPF JIT to inline calls to smp_processor_id() and current_task rather than emitting out-of-line function calls. Inlining these frequently-used helpers reduces call overhead and improves performance of BPF programs running on s390 hardware. This brings s390 more in line with x86 and arm64 JITs which have had similar optimizations for some time.
Generated 2026-04-17T00:00:00Z
Activity for April 13–14 was dominated by two significant RFC proposals: KASAN instrumentation for JIT-compiled BPF programs on x86, and an expanded atomics selftest suite targeting cpuv4 and sub-32-bit operations. The day also saw important verifier fixes from Eduard Zingerman correcting argument tracking through imprecise and multi-offset stack pointers, plus a use-after-free fix in BPF arena's fork handling from Alexei Starovoitov. Security hardening continued with Xu Kuohai's v14 series adding ENDBR/BTI emission for indirect jump targets across x86 and arm64.
bpf: add BPF_JIT_KASAN for KASAN instrumentation of JITed programs
This RFC introduces a new Kconfig option BPF_JIT_KASAN that enables Kernel Address Sanitizer checks inside JIT-compiled BPF programs on x86. The series works by having the BPF verifier mark instructions that access the program stack, then having the x86 JIT emit inline KASAN shadow-memory checks around those accesses. This brings the same memory-safety guarantees that KASAN provides to kernel C code into the JIT-compiled BPF execution path, significantly improving the ability to catch out-of-bounds and use-after-free bugs in BPF programs. The series is eight patches covering KASAN helper exposure, stack-access marking in the verifier, the core Kconfig, x86 JIT emission, and selftests.
bpf: Fix use-after-free in arena_vm_close on fork
This single patch fixes a use-after-free bug triggered when a process that has a BPF arena mapped forks and then the child or parent closes the arena's VM region. The arena_vm_close callback was accessing memory that could already be freed in the fork path, leading to potential memory corruption or a kernel crash. The fix ensures proper reference counting and ordering so that the arena structure remains valid for the lifetime of all mappings referencing it.
bpf: fix arg tracking for imprecise/multi-offset BPF_ST/STX
This v2 two-patch series corrects the BPF verifier's argument liveness tracking for BPF_ST and BPF_STX instructions when accessed through imprecise or multi-offset stack pointers. Without this fix, the verifier could fail to mark stack slots as live, causing incorrect pruning of program states and potentially accepting unsafe programs or rejecting valid ones. The companion selftest patch adds regression coverage for these edge cases involving imprecise pointer arithmetic targeting stack memory.
bpf: Move constants blinding out of arch-specific JITs
This is the base patch of a v14 five-patch series that refactors BPF JIT infrastructure to enable emission of ENDBR (x86 IBT) and BTI (arm64) instructions at indirect jump targets. The series first centralizes constant blinding out of arch-specific JITs, then passes bpf_verifier_env into the JIT, adds a generic helper to identify indirect jump targets, and finally adds x86 ENDBR and arm64 BTI emission. The result hardens JIT-compiled BPF programs against control-flow hijacking attacks on hardware that supports CET/BTI.
bpf, arm64: Remove redundant bpf_flush_icache() after pack allocator finalize
This v2 series removes redundant instruction-cache flush calls on arm64 and RISC-V that were being issued after the BPF pack allocator's finalize step. The pack allocator already performs an icache flush as part of finalization, making the subsequent flush in the JIT code superfluous and wasteful. Eliminating the duplicate flushes reduces overhead during BPF program load, particularly for workloads that frequently load and unload programs.
selftests/bpf: Prevent allocating data larger than a page
This three-patch series fixes bugs in the BPF task local storage selftests where allocations larger than a page were permitted, leading to garbage data being returned by tld_get_data(). The series adds a guard against oversized allocations, fixes the garbage-data return path, and adds a new selftest verifying that small task local data allocations work correctly end-to-end. These fixes improve reliability of the task local storage feature for programs that use it to track per-task state.
bpf/tests: Exhaustive test coverage for signed division and modulo
This v3 single patch adds exhaustive test cases for signed 32-bit and 64-bit division and modulo operations in the BPF test infrastructure. The tests cover edge cases including division by negative numbers, INT_MIN divided by -1 (overflow), and modulo by negative divisors, which are all areas where interpreter and JIT implementations can diverge. Comprehensive coverage here helps catch correctness regressions across different architectures when new JIT backends are added or existing ones are modified.
selftests/bpf: Only define ENABLE_ATOMICS_TESTS for cpuv4 runner
This RFC four-patch series updates the BPF atomics selftest suite with broader coverage, starting by scoping the ENABLE_ATOMICS_TESTS macro to cpuv4 runner environments to avoid spurious failures on older hardware. Subsequent patches in the series add 8-bit and 16-bit fetch-based atomic testcases, non-fetch-based atomics for all widths, and exhaustive stack-based atomic operation coverage. The expanded suite is motivated by work on LoongArch BPF JIT support and improves confidence in atomic instruction correctness across architectures.
Generated 2026-04-15T00:00:00Z
April 12–13 brought a wave of structural and feature work to bpf-next. Alexei Starovoitov posted four revision rounds of a series splitting the monolithic verifier.c into focused modules, while Yonghong Song's v4 18-patch series adds stack-based argument support for BPF functions and kfuncs with x86_64 JIT backing. Emil Tsalapatis's arena library reached v7, Menglong Dong fixed missing fsession references across the subsystem, and a lone test fix replaced a deprecated shm_open call with memfd_create.
bpf: Split fixup/post-processing logic from verifier.c into fixups.c
The first patch of Starovoitov's v4 verifier.c split series moves the fixup and post-processing logic out of the monolithic verifier.c into a dedicated fixups.c. The overarching goal is to make the BPF verifier codebase more navigable by isolating distinct concerns into their own files, reducing the size of verifier.c from tens of thousands of lines to a more manageable core. This is the opening move in a 6-patch series that also splits out liveness, CFG analysis, state equivalence, backtracking, and BTF checking.
bpf: Move backtracking logic to backtrack.c
Part of the v4 verifier.c split series, this patch extracts the precision backtracking logic into its own backtrack.c file. Precision backtracking is one of the more complex subsystems in the verifier, responsible for determining which register values must be tracked precisely to correctly prune equivalent states. Isolating it improves reviewability and makes future modifications to the backtracking algorithm easier to scope.
bpf: Support stack arguments for bpf functions
This is the core verifier patch in Song's v4 18-patch series enabling BPF functions to pass arguments via the stack, overcoming the five-register argument limit. A new BPF_REG_STACK_ARG_BASE register is introduced to address arguments spilled beyond the register window, and the verifier is taught to validate PTR_TO_STACK arguments at call sites. The series handles both BPF-to-BPF calls and kfunc calls, with safeguards against use in programs reachable by tail calls or in non-JITed contexts.
bpf,x86: Implement JIT support for stack arguments
The x86_64 JIT backend patch in Song's stack arguments series teaches the JIT to emit code that correctly marshals stack-based arguments at BPF function call boundaries. Arguments exceeding the five-register limit are placed in a designated area of the caller's stack frame and addressed relative to the new BPF_REG_STACK_ARG_BASE. The patch is paired with architecture enablement and verifier-side validation patches in the same series.
bpf: Allow instructions with arena source and non-arena dest registers
The first substantive patch in Tsalapatis's v7 arena library series relaxes a verifier restriction to allow arithmetic operations where one operand is an arena pointer and the result is a plain scalar or non-arena pointer. This is needed so that user-space arena library code can freely mix arena and non-arena pointers in calculations without triggering spurious verifier rejections. The v7 series also adds a buddy allocator, ASAN support, and a full libarena test harness.
bpf: add missing fsession to the verifier log
This v3 patch adds the missing BPF_TRACE_FSESSION attach type to the verifier's human-readable log output, which previously omitted it when printing program attach type information. Companion patches in the same 3-patch series add fsession to the BPF documentation and to bpftool's usage and man page, rounding out the coverage for this attach type. The series is a straightforward completeness fix with no functional behavior change.
selftests/bpf: Use memfd_create instead of shm_open in cgroup_iter_memcg
Replaces the shm_open() call in the cgroup_iter_memcg BPF selftest with memfd_create(), which creates an anonymous file without requiring a POSIX shared-memory mount. The shm_open() usage caused test-infrastructure issues on systems where POSIX shared memory is unavailable or behaves differently. This is a one-patch cleanup with no impact on what the test actually exercises.
Generated 2026-04-14T00:00:00Z
The April 11–12 bpf-next window was dominated by verifier refactoring and significant new feature work. Alexei Starovoitov continued the multi-part effort to split the monolithic verifier.c into focused modules (fixups.c, liveness.c, cfg.c, states.c, backtrack.c, check_btf.c) and posted follow-up cleanups to simplify the main instruction-dispatch loop and move reserved-field checks out of the hot path. Yonghong Song posted a v4 18-patch series enabling stack-passed arguments for BPF-to-BPF calls and kfunc calls on x86-64, while Emil Tsalapatis's v6 arena-library series introduced a buddy allocator and ASAN runtime for BPF arena programs.
bpf: Support stack arguments for bpf functions
Part of an 18-patch v4 series that adds first-class support for passing arguments on the stack to BPF-to-BPF functions and kfuncs. This patch adds the core verifier logic to validate PTR_TO_STACK arguments in BPF function calls, teaching the verifier to track stack-passed memory regions across call boundaries. The feature is needed because BPF programs calling functions with more than five arguments (the current register limit) have no way to pass the extras without this infrastructure. Companion patches add x86-64 JIT emission, kfunc support, and restrictions against use with tail calls or non-JITed programs.
bpf: Split fixup/post-processing logic from verifier.c into fixups.c
First patch of a v2 six-part series that breaks up the notoriously large verifier.c by extracting distinct subsystems into their own files. This patch moves fixup and post-processing logic into a new fixups.c, while companion patches create liveness.c, cfg.c, states.c, backtrack.c, and check_btf.c. The goal is to reduce verifier.c to a manageable size and improve code navigation and maintainability for one of the most complex files in the kernel. The v2 revision addresses review feedback on include dependencies and symbol visibility.
A standalone cleanup that refactors do_check_insn(), the core per-instruction dispatch function in the BPF verifier. The patch reorganizes the function to reduce nesting and improve readability without changing behavior. This is part of the broader ongoing effort to make verifier.c easier to split and maintain, complementing the multi-file decomposition series posted the same day.
bpf: Move checks for reserved fields out of the main pass
A v2 verifier cleanup that extracts reserved-field validation (zero-check of src_reg, imm, offset, etc.) from the main instruction-decode loop into a dedicated pre-pass. Moving these checks out of the hot verification path makes the main pass easier to read and avoids redundant branching on every instruction. This is a prerequisite refactoring for the broader verifier.c decomposition work.
bpf: Upgrade scalar to PTR_TO_ARENA on arena pointer addition
Lead patch of a v6 nine-patch series introducing an arena library and runtime for BPF programs. This specific patch teaches the verifier to upgrade a plain scalar register to PTR_TO_ARENA when it is the result of adding a scalar to an arena pointer, enabling safe arithmetic inside arena regions. Companion patches add basic libarena scaffolding, an ASAN runtime for memory error detection in arena programs, a buddy allocator, and a comprehensive selftest suite including ASAN-instrumented tests.
bpf, arm64: Emit BTI for indirect jump target
Final patch of a v13 five-patch series that adds ENDBR (x86 CET) and BTI (arm64) instructions at indirect-jump targets in BPF JIT-compiled programs. The series introduces a verifier helper to identify indirect jump targets, refactors constants blinding out of per-arch JITs to share common logic, and passes bpf_verifier_env to the JIT so architecture back-ends can use the target information. Reaching v13 reflects the extensive review this security hardening feature has undergone.
bpf: Fix Null-Pointer Dereference in kernel_clone() via BPF fmod_ret on security_task_alloc
A v3 fix for a null-pointer dereference triggered when a BPF fmod_ret program attaches to security_task_alloc and returns a non-zero value, causing kernel_clone() to proceed with an incompletely initialized task struct. The patch adds a check so that if fmod_ret short-circuits security_task_alloc with an error, the kernel correctly unwinds without dereferencing the null task pointer. A companion selftest verifies the return-value semantics of fmod_ret on this hook.
bpf: Use kmalloc_nolock() universally in local storage
Core patch of a v2 three-patch series that switches BPF local storage allocation to kmalloc_nolock() throughout, removing the need to plumb gfp_flags through the call chain. kmalloc_nolock() uses a per-CPU cache and avoids lock contention, which matters on fast paths like socket and task storage lookups. A companion patch removes the now-unnecessary kmalloc tracing from the local storage benchmark, and a final patch cleans up gfp_flags plumbing from bpf_local_storage_update().
bpf: add missing fsession to the verifier log
Part of a v3 three-patch series that adds the missing fsession attach type to the BPF verifier log, documentation, and bpftool. The fsession attach type was introduced but not reflected in the verifier's textual output or in user-facing tools, making it harder to debug programs using that hook. This patch fixes the verifier log output; companion patches update the BPF documentation and bpftool's usage text and man page.
Generated 2026-04-12T09:52:00Z
This period was dominated by Eduard Zingerman's ambitious static stack liveness data flow analysis series, which reached v4 at 14 patches and adds a forward arg-tracking pass to the verifier that enables poisoning of dead stack slots. Mykyta Yatsenko's sleepable tracepoint support reached v9, and Emil Tsalapatis posted v5 of the arena library and runtime, introducing buddy-allocator support and ASAN integration for BPF arena programs.
The final patch of the 14-part v4 static stack liveness series, this change poisons dead stack slots identified by the new dataflow analysis pass. By overwriting slots that the verifier proves are no longer live, it prevents inadvertent reuse of stale values and strengthens the safety guarantees of the BPF verifier. The series introduces 4-byte granularity liveness tracking, a forward arg-tracking pass, and function-instance keying by (callsite, depth) to correctly handle subprogram calls. Companion selftest patches validate the new behavior against both new and existing verifier test cases.
bpf: introduce forward arg-tracking dataflow analysis
This patch is the algorithmic core of the static stack liveness series: it adds a forward dataflow analysis pass that tracks which stack slots are written before being read, enabling the verifier to identify dead writes. Unlike the existing backward liveness pass, this forward pass computes arg-tracking results stored in bpf_liveness masks so they can be queried during normal verification. The approach handles subprogram calls by keying func_instances on (callsite, depth) pairs.
bpf: Add sleepable support for raw tracepoint programs
The first patch of a 6-part v9 series enabling BPF tracepoint programs to be marked sleepable, allowing them to call kfuncs and helpers that may block. This patch extends raw tracepoint support by running programs via a new bpf_prog_run_array_sleepable() helper that takes an RCU read-side lock safe for sleeping contexts. Verifier changes in patch 4 enforce that only raw and classic tracepoint program types may carry the sleepable flag. libbpf gains matching SEC() handlers and the series ships with selftests covering both raw and classic tracepoint flavors.
bpf: Upgrade scalar to PTR_TO_ARENA on arena pointer addition
This verifier change upgrades the result of adding a scalar to a PTR_TO_ARENA pointer to PTR_TO_ARENA itself, enabling more ergonomic arena-relative pointer arithmetic in BPF programs without requiring a full re-cast. It is the foundation patch for a 9-part v5 series that also introduces userspace libarena scaffolding, an arena ASAN runtime, a buddy allocator library, and integration tests with ASAN support. The arena memory model is increasingly important for BPF programs that manage their own heap.
bpf: Enforce regsafe base id consistency for BPF_ADD_CONST scalars
This verifier fix ensures that when two scalar registers are compared for equivalence via regsafe(), their base_id fields are treated consistently for scalars produced by BPF_ADD_CONST operations. Without this check, the verifier could incorrectly mark two states as equivalent even when their add_const chains differ, potentially allowing unsound pruning. The companion patch adds a selftest to exercise the base_id consistency requirement directly.
bpf: Use kmalloc_nolock() universally in local storage
This patch (2/3, v2) extends the use of kmalloc_nolock() throughout the BPF local storage implementation so that allocations in IRQ and NMI contexts no longer need to fall back to pre-allocated memory. The companion patch removes the now-unnecessary gfp_flags plumbing from bpf_local_storage_update(), simplifying the call chain. The first patch in the series drops kmalloc tracing from the local storage create benchmark since it is no longer representative.
bpf: Fix Null-Pointer Dereference in kernel_clone() via BPF fmod_ret on security_task_alloc
This v2 fix addresses a null pointer dereference triggered when a BPF fmod_ret program attached to security_task_alloc returns a non-zero error code: kernel_clone() proceeds to call copy_process(), which may dereference a task pointer that was never fully initialized. The fix adds an early return in the relevant path when the fmod_ret hook indicates failure, preventing the null dereference. A selftest validates the correct return value behavior of fmod_ret for this hook.
Generated 2026-04-11T10:00:00Z
Activity over this period was dominated by Eduard Zingerman's static stack liveness data flow analysis series, which progressed through three revisions (v1, v2, v3) and implements a new verifier pass to track dead stack slots and poison them at verification time. Daniel Borkmann contributed a fix to drop pkt_end markers after arithmetic operations to prevent the verifier's is_pkt_ptr_branch_taken() from making incorrect branch decisions, while Feng Yang addressed a null-pointer dereference in kernel_clone() triggered by a BPF fmod_ret program attached to security_task_alloc.
bpf: share several utility functions as internal API
This patch opens the 13-patch v3 series implementing static stack liveness data flow analysis by refactoring several internal verifier utilities into a shared internal API. Exposing these helpers avoids duplication between liveness.c and the rest of the verifier. The series as a whole introduces a new forward dataflow analysis pass that precisely tracks which BPF stack slots are live across a program's execution paths, feeding into improved liveness masks. Later patches in the series build on this foundation to identify and poison dead stack slots, improving both safety and verifier diagnostics.
bpf: introduce forward arg-tracking dataflow analysis
Introduces the core new analysis pass in the static stack liveness series: a forward arg-tracking dataflow analysis that computes which subprogram arguments and stack slots are actually consumed during execution. This complements the existing backward liveness analysis by propagating use information in the forward direction through the CFG. The results are recorded in bpf_liveness masks, enabling the verifier to distinguish truly live slots from dead ones with higher precision. This is the algorithmic heart of the feature, upon which the subsequent logging improvements and dead-slot poisoning depend.
The final patch of the v3 static stack liveness series implements the actual poisoning of stack slots determined to be dead by the new analysis pass. Dead slots are written with a special poison marker during verification, ensuring that any path the verifier missed which accesses them will be caught. This provides a defense-in-depth safety property and improves the quality of error messages when BPF programs touch uninitialized or logically dead stack memory. Accompanying selftests in patches 12/13 and earlier verify both the analysis results and the poisoning behavior.
bpf: Fix Null-Pointer Dereference in kernel_clone() via BPF fmod_ret on security_task_alloc
Fixes a null-pointer dereference in kernel_clone() that occurs when a BPF fmod_ret program attached to the security_task_alloc LSM hook returns a non-zero (error) value. In that case the fmod_ret causes an early return from the hook, bypassing actual task allocation, but the caller still dereferences the resulting null task pointer. The fix adjusts the error path to correctly handle the case where fmod_ret aborted allocation before a task object was produced. This is v2 of the series; patch 2/2 adds selftests exercising the corrected behavior.
bpf: Drop pkt_end markers on arithmetic to prevent is_pkt_ptr_branch_taken
Fixes a verifier bug where pkt_end pointer markers were incorrectly retained after arithmetic operations (scalar addition or subtraction) on a packet-end pointer. Preserving the marker after arithmetic causes is_pkt_ptr_branch_taken() to treat the derived pointer as a genuine pkt_end boundary, leading to incorrect branch-pruning decisions and potential unsoundness. The fix drops the pkt_end marker whenever arithmetic is performed on such a pointer, since the result no longer carries the same semantic guarantee. Patch 2/2 adds a selftest reproducing the stale pkt range scenario to prevent regressions.
Generated 2026-04-16T00:00:00Z
Today's bpf-next activity was dominated by two major series: Eduard Zingerman's 14-patch overhaul introducing static stack liveness data flow analysis in the verifier, and Mykyta Yatsenko's RFC for a new resizable BPF hash map backed by the kernel's rhashtable infrastructure. Additional notable work includes Kumar Kartikeya Dwivedi's verifier warning message framework, enabling non-fatal deprecation warnings during program load, and Daniel Borkmann's fix for ld_{abs,ind} failure path analysis in BPF subprograms.
bpf: share several utility functions as internal API
This is the opening patch in a 14-part series introducing static stack liveness data flow analysis into the BPF verifier. It refactors several internal utility functions into a shared API to be reused by the upcoming liveness analysis pass. The broader series upgrades stack-slot tracking to 4-byte granularity and introduces a forward arg-tracking dataflow analysis, culminating in dead stack slot poisoning — marking unused stack slots to catch uninitialized reads more reliably. The work also includes logging improvements and extensive selftests covering the new analysis behavior.
This RFC introduces a new BPF map type backed by the kernel's rhashtable infrastructure, enabling dynamically resizable hash maps without the fixed-capacity constraints of BPF_MAP_TYPE_HASH. The 18-patch series implements full lookup/update/delete operations, batch ops, BPF iterators, timer and workqueue support, and libbpf integration. This addresses long-standing performance cliffs when BPF hash maps approach their pre-allocated capacity, as resizing happens transparently at runtime. bpftool documentation and comprehensive selftests round out the RFC.
bpf: Add support for verifier warning messages
This patch introduces a new BPF verifier infrastructure for emitting non-fatal warning messages to userspace during program load, distinct from errors that reject programs outright. The six-patch series adds a KF_DEPRECATED flag for kfuncs, a __bpf_kfunc_replacement() annotation to guide migration, and libbpf support to surface warnings by default. Source location information is exposed by making find_linfo widely available within the verifier. This closes an important ergonomics gap where developers had no in-band signal for deprecated or suboptimal BPF patterns.
bpf: Propagate error from visit_tailcall_insn
This series fixes a verifier bug where errors returned by visit_tailcall_insn were silently discarded during subprogram analysis, potentially allowing malformed programs through verification. A second patch corrects the failure-path analysis for ld_abs and ld_ind instructions when used inside subprograms. A third patch removes an overly narrow static qualifier on a local subprog pointer to support the fix. Selftests are added to cover the previously undetected failure paths, and this is the second revision following initial review feedback.
bpf: Reject sleepable kprobe_multi programs at attach time
kprobe_multi programs execute in a non-preemptible context where sleeping would cause a kernel crash, yet the BPF subsystem previously accepted programs with the sleepable flag for this attach type. This patch adds an explicit check at attach time to reject the sleepable flag in combination with BPF_TRACE_KPROBE_MULTI, returning a clear error rather than silently misbehaving. A selftest verifies the rejection behavior. This is the fifth revision of the series, refined through several rounds of review.
selftests/bpf: Add BPF struct_ops + livepatch integration test
This selftest exercises the interaction between BPF struct_ops programs and the kernel livepatch infrastructure, which allows BPF programs to replace kernel functions in a structured, reversible way. The test verifies that struct_ops-based function replacement behaves correctly alongside livepatch semantics, covering both attachment and detachment paths. This is important validation for a relatively new capability that enables BPF programs to participate in live kernel patching workflows.
libbpf: Allow use of feature cache for non-token cases
libbpf's BTF feature detection previously bypassed the feature cache in code paths that did not involve a BPF token, leading to redundant kernel probes on repeated calls. This patch relaxes that requirement so the feature cache is consulted and populated regardless of token availability. The companion patch adds a BTF sanitization selftest validating BTF layout correctness under various configurations. This is the third revision of the two-patch series.
bpf: add missing fsession to the verifier log
The BPF_TRACE_FSESSION attach type was missing from the verifier log output, bpftool's usage strings, and kernel documentation, leaving it undocumented across developer-facing surfaces. This three-patch series adds fsession to the verifier log, BPF documentation, and bpftool usage output, ensuring consistency across tooling. This is the second revision, addressing minor style feedback from the initial submission.
Generated 2026-04-09T10:30:00Z
April 7-8 saw broad activity across verifier correctness, networking, and tooling. Kumar Kartikeya Dwivedi submitted a series adding verifier warning message support for deprecated kfuncs, while Daniel Borkmann fixed linked register delta tracking bugs in the verifier. Nick Hudson's v3 series introduced new tunnel decapsulation flags for bpf_skb_adjust_room, and Andrey Grodzovsky's kprobe symbol disambiguation fix reached v7.
bpf: Add support for verifier warning messages
This v2 series introduces a new verifier warning infrastructure that allows the BPF verifier to emit non-fatal warning messages to users, separate from hard errors. The series leverages KF_DEPRECATED to trigger warnings for deprecated kfuncs and adds a __bpf_kfunc_replacement() annotation to point developers toward preferred replacements. libbpf is updated to flush these warnings by default, giving developers earlier visibility into deprecated API usage without causing program rejection.
bpf: Fix linked reg delta tracking when src_reg == dst_reg
This series fixes two related verifier bugs in linked register delta tracking. The first patch addresses a case where src_reg == dst_reg causes stale delta state to propagate incorrectly through register linking. The second patch ensures the delta field is cleared whenever a register's ID is reset for non-add/sub operations, preventing stale deltas from leaking through ID reassignment. Both fixes are accompanied by targeted selftests.
tracing: Prefer vmlinux symbols over module symbols for unqualified kprobes
Now at v7 (with a concurrent v6 also posted on the same day), this patch stabilizes the fix for kprobe symbol disambiguation when a module symbol shadows a vmlinux symbol of the same name. Unqualified kprobe attachments now correctly prefer the vmlinux symbol, preventing inadvertent tracing of module code. A selftest covering duplicate symbol handling is included.
bpf: add BPF_F_ADJ_ROOM_DECAP_* flags for tunnel decapsulation
Part of the v3 'bpf: decap flags and GSO state updates' series, this patch introduces new BPF_F_ADJ_ROOM_DECAP_* flags for the bpf_skb_adjust_room helper to handle tunnel decapsulation scenarios correctly. A companion patch clears tunnel GSO state in skb_adjust_room when decapping, addressing correctness issues for BPF programs performing software tunnel decap. The series also refactors ADJ_ROOM flag masks and adds guard rails for invalid flag combinations.
bpf: add missing fsession to the verifier log
This v2 series adds missing support for the fsession BPF attach type across the verifier log, BPF documentation, and bpftool. The fsession attach type was supported in the kernel but absent from these user-facing surfaces, making it invisible to developers using introspection tools. The three-patch series ensures fsession is consistently recognized and displayed alongside other attach types.
bpf: Retire rcu_trace_implies_rcu_gp()
This patch retires the rcu_trace_implies_rcu_gp() helper from the BPF memory-reclamation paths. The helper let callers assume that an RCU Tasks Trace grace period also implies a regular RCU grace period, so a single callback could cover both. As the kernel RCU subsystem has matured, this shortcut is no longer necessary, and its removal simplifies the BPF memory model and reduces maintenance burden.
bpf: Upgrade scalar to PTR_TO_ARENA on arena pointer addition
The v4 arena library and runtime series continues to appear in this period, covering the core verifier change and an extensive libarena user-space test library. The kernel patch upgrades a scalar register to PTR_TO_ARENA when derived from arena pointer arithmetic, enabling safe arena pointer tracking in the BPF verifier. The selftest side introduces a complete arena library with buddy allocator and ASAN runtime support.
Generated 2026-04-08T12:00:00Z
Activity on April 6-7 was dominated by two substantial series: Emil Tsalapatis's v4 arena library and runtime series, which introduces a BPF memory arena with buddy allocator and ASAN support, and Kumar Kartikeya Dwivedi's v5 series enabling variable offsets for syscall PTR_TO_CTX access. Additional notable work includes Andrey Grodzovsky's RFC for fixing kprobe attachment priority when module symbols shadow vmlinux symbols, and smaller fixes for dynptr reference handling and insn_array offset loads.
bpf: Upgrade scalar to PTR_TO_ARENA on arena pointer addition
Part of the v4 'Introduce arena library and runtime' series, this patch updates the BPF verifier to upgrade a scalar register to PTR_TO_ARENA when it results from adding a scalar to an arena pointer. This is a key verifier change that enables safe tracking of pointers derived from BPF arena memory regions. The companion patches introduce a full arena user-space library for BPF selftests, including a buddy allocator and ASAN runtime integration.
bpf: Support variable offsets for syscall PTR_TO_CTX
This v5 patch extends the BPF verifier to allow variable (non-constant) offsets when accessing PTR_TO_CTX in BPF programs running in syscall context. Previously, only fixed offsets were permitted, which was overly restrictive for programs that compute offsets dynamically. Companion patches also enable unaligned accesses for syscall context and add comprehensive selftests including tests for accesses beyond U16_MAX.
tracing: Prefer vmlinux symbols over module symbols for unqualified kprobes
This RFC v5 patch addresses an ambiguity in kprobe symbol resolution: when a kernel module exports a symbol with the same name as a vmlinux symbol, an unqualified kprobe attachment would previously latch onto the module symbol. The fix ensures vmlinux symbols take precedence for unqualified probes, aligning behavior with user expectations and reducing inadvertent tracing of module code. A selftest covering the duplicate symbol scenario is included.
bpf: Do not ignore offsets for loads from insn_arrays
This v3 fix corrects a bug in the BPF loader where non-zero offsets in insn_array map loads were silently ignored, resulting in incorrect instruction loading. The patch ensures the offset is correctly applied when reading BPF instructions from array maps, preventing subtle program errors that would otherwise be difficult to diagnose. A companion selftest verifies loading from various non-zero offsets.
bpf: Allow overwriting referenced dynptr when refcnt > 1
The BPF verifier currently rejects programs that attempt to overwrite a referenced dynptr even when sibling states still hold a valid reference, causing overly conservative program rejections. This patch relaxes the restriction by tracking the reference count across sibling states and permitting the overwrite when refcnt > 1, ensuring the sibling state can still clean up the dynptr on exit. A selftest demonstrating the previously-rejected but safe pattern is included.
Generated 2026-04-08T12:00:00Z
Activity on April 5-6 was dominated by Yonghong Song's v2 and v3 iterations of the 'Support stack arguments for BPF functions and kfuncs' series, which introduces a new BPF_REG_STACK_ARG_BASE register and extends the BPF calling convention to allow structs larger than 8 bytes to be passed via the stack. The v3 revision refines the design with improved verifier validation, x86_64 JIT support, and comprehensive selftests for both BPF-to-BPF calls and kfunc calls.
bpf: Introduce bpf register BPF_REG_STACK_ARG_BASE
Introduces BPF_REG_STACK_ARG_BASE, a new virtual BPF register used as a base pointer for stack-allocated function arguments. This is the foundational patch in the series enabling BPF functions and kfuncs to accept arguments that cannot fit in the five argument registers (R1-R5). The new register is handled specially by the verifier and JIT backends to track and validate stack argument slots. It allows BPF programs to pass structs larger than 8 bytes by value through a well-defined stack ABI.
bpf: Support stack arguments for bpf functions
Extends the BPF verifier to recognize and validate stack-based argument passing for BPF-to-BPF function calls. The patch teaches the verifier to track argument slots relative to BPF_REG_STACK_ARG_BASE and verify their types, sizes, and liveness. This enables BPF subprograms to receive large struct arguments that cannot fit in registers, matching a common pattern in kernel C code.
bpf: Support stack arguments for kfunc calls
Extends stack argument support to kfunc calls, allowing BPF programs to pass large structs by value to kernel functions exposed via kfuncs. The verifier is updated to check stack argument slots when validating kfunc call sites, ensuring type safety between the BPF caller and the kernel-side parameter declaration. Stack arguments for kfuncs are limited to 8 bytes per slot to match kernel ABI expectations.
bpf: Reject stack arguments in non-JITed programs
Adds a verifier check that rejects programs using stack arguments when running without a JIT compiler. Stack argument passing requires JIT support because the interpreter cannot implement the necessary stack manipulation semantics. This guard ensures the feature is only enabled on platforms and configurations where it is fully supported.
bpf,x86: Implement JIT support for stack arguments
Implements x86_64 JIT backend support for emitting code to set up and tear down stack argument frames for BPF function and kfunc calls. The JIT allocates space on the native stack, copies argument values into position relative to the stack pointer, and passes the base address in the appropriate register. This patch is the concrete implementation that makes the stack argument ABI functional on x86_64.
selftests/bpf: Add verifier tests for stack argument validation
Adds verifier-level selftests that exercise both positive and negative cases for stack argument validation, including type mismatches, size violations, and use of uninitialized stack slots. These tests complement the functional selftests from earlier patches and ensure the verifier correctly rejects malformed programs using stack arguments. The negative tests cover the greater-than-8-byte kfunc stack argument restriction introduced in the series.
Generated 2026-04-06T10:13:03Z
No patches were submitted to the bpf mailing list during this period.
Generated 2026-04-05T09:43:13Z
The bpf-next mailing list saw active development on April 3-4, 2026, centered on BPF verifier improvements, JIT code generation, and libbpf usability enhancements. Alexei Starovoitov continued iterating on preparatory patches for static stack liveness analysis (reaching v5), while Xu Kuohai posted a 12th revision of the ENDBR/BTI CFI series for x86 and arm64. Emil Tsalapatis introduced a comprehensive arena library and runtime for BPF programs, and Chengkaitao proposed new infrastructure to simplify kfunc verifier registration.
bpf: Do register range validation early
This patch moves register range validation to an earlier stage in the BPF verifier pipeline as a preparatory step for implementing static stack liveness analysis. By validating register ranges sooner, subsequent analysis passes can make more informed decisions about stack usage. This is the first of a 6-patch v5 series from Alexei Starovoitov that lays the groundwork for static stack liveness, a significant verifier enhancement aimed at improving precision in BPF program analysis.
bpf: Add bpf_compute_const_regs() and bpf_prune_dead_branches() passes
Introduces two new compiler-style analysis passes to the BPF verifier: constant register computation and dead branch pruning. These passes allow the verifier to identify and eliminate unreachable code paths before the main verification pass runs, reducing the state space that must be explored. This is foundational infrastructure for static stack liveness analysis, which will allow the verifier to precisely track stack slot usage across subprograms and enable future optimizations.
bpf: Add helper and kfunc stack access size resolution
The final patch in Alexei Starovoitov's v5 series adds logic for resolving the sizes of stack accesses made by helpers and kfuncs, a prerequisite for accurate static stack liveness computation. Understanding how much stack space each helper or kfunc call may touch is essential for the verifier to determine which stack slots are live at any given program point. Together with the earlier patches in the series, this completes the preparatory infrastructure for static stack liveness.
bpf: Introduce BTF_SET/ID_SUB and BPF_VERIF_KFUNC_DEF
Introduces new BTF infrastructure (BTF_SET/ID_SUB) and a BPF_VERIF_KFUNC_DEF macro to simplify how the BPF verifier registers and matches kfunc verification callbacks. Currently kfunc verification logic requires manual BTF set management and is scattered across the codebase; this refactoring provides a unified, declarative mechanism for associating kfuncs with their verifier hooks. The accompanying patch applies this new infrastructure to rbtree kfuncs as a concrete demonstration.
bpf: Add helper to detect indirect jump targets
Adds a helper function to the BPF JIT infrastructure for identifying indirect jump targets in BPF programs, enabling subsequent patches to emit control-flow integrity (CFI) landing pad instructions at those sites. On x86 this means emitting ENDBR instructions (for Intel IBT), and on arm64 BTI instructions. This is the 12th revision of a mature series by Xu Kuohai that improves BPF JIT compatibility with CPU-enforced CFI features, with both x86 and arm64 backends covered.
bpf: Upgrade scalar to PTR_TO_ARENA on arena pointer addition
Enhances the BPF verifier to recognize that a scalar value resulting from arithmetic on an arena pointer should itself be typed as PTR_TO_ARENA, improving the ergonomics and correctness of arena-based BPF programs. This is the core kernel-side change in a 9-patch v3 series that also introduces a libarena library and runtime for BPF, including a buddy allocator and ASAN integration. The series significantly lowers the barrier for BPF programs to perform dynamic memory management using arenas.
libbpf: Auto-upgrade kprobes to multi-kprobes when supported
This RFC proposes transparent automatic upgrading of single kprobe attachments to the more efficient multi-kprobe mechanism when the kernel supports it, mirroring a companion patch that does the same for uprobes. Multi-kprobes attach to multiple functions via a single file descriptor, reducing per-attach overhead considerably. The series (RFC v3) also adds a libbpf feature probe to detect kernel multi-kprobe link support, making the upgrade decision automatic and safe across kernel versions.
Generated 2026-04-04T09:42:10Z
A busy day on bpf-next, dominated by verifier and JIT work. Yonghong Song posted a major 10-patch series introducing stack-based argument passing for BPF functions and kfuncs, enabling larger structs to be passed by value. Alexei Starovoitov continued iterating (reaching v5) on preparatory verifier patches for static stack liveness analysis, while Emil Tsalapatis proposed a new arena library and runtime for BPF selftests.
bpf: Introduce bpf register BPF_REG_STACK_ARG_BASE
First patch in a 10-part series adding stack-based argument passing to BPF functions and kfuncs. It introduces a new virtual register BPF_REG_STACK_ARG_BASE to represent the base of stack-passed arguments in the BPF calling convention. This enables passing large structs by value that exceed the available register count. Subsequent patches in the series add verifier enforcement, x86-64 JIT support, and selftests covering both positive and negative cases.
bpf: Do register range validation early
First patch (v5) in a 6-patch series preparing the verifier for static stack liveness analysis. This patch moves register range validation to an earlier point in the verification pipeline so that subsequent passes can rely on consistent range invariants. The series also adds topological subprogram ordering after check_cfg(), dead branch pruning, and constant register computation passes. A v5 respin was posted within hours of v4, indicating rapid iteration.
bpf: Upgrade scalar to PTR_TO_ARENA on arena pointer addition
First patch (v3) in a 9-part series introducing an arena library and runtime for BPF selftests. This verifier change teaches the BPF verifier to upgrade a scalar register to PTR_TO_ARENA when it is the result of adding a scalar to an arena pointer, improving type-safety for arena-allocated memory. The rest of the series builds libarena scaffolding, an ASAN runtime for detecting memory errors in arena programs, a buddy allocator, and comprehensive selftests.
bpf: Move constants blinding out of arch-specific JITs
First patch (v11) in a 5-patch series that emits ENDBR (x86) and BTI (arm64) instructions at indirect jump targets in BPF JIT-compiled programs to harden against control-flow hijacking attacks. This initial patch refactors constants blinding out of architecture-specific JITs and into shared BPF core code, passing the bpf_verifier_env to the JIT. Later patches add a verifier helper to detect indirect jump targets and the per-arch emission logic for ENDBR and BTI landing pads.
bpf: Refactor reg_bounds_sanity_check
First patch (v3) in a 6-patch series fixing verifier invariant violations surfaced by syzbot. The series refactors the register bounds sanity check, exits early when reg_bounds_sync receives invalid inputs, simulates branches to prune states based on range violations, and removes now-unnecessary invariant violation flags from selftests. These fixes improve the reliability of the verifier's range-tracking logic and address potential incorrect pruning decisions.
bpf: Do not ignore offsets for loads from insn_arrays
Bug fix (v2) correcting the BPF verifier's handling of loads from instruction arrays with non-zero offsets. Previously the offset was silently ignored, leading to incorrect values being read. The fix ensures the offset is properly applied, and a companion selftest patch adds coverage for the various offset scenarios to prevent regressions.
bpf: Refactor dynptr mutability tracking
A v2 verifier cleanup that refactors how dynptr mutability is tracked internally. Instead of scattering mutability checks across dynptr helper validation paths, this patch consolidates the tracking into a cleaner representation. This makes it easier to reason about read-only vs. read-write dynptr semantics and reduces the risk of future correctness bugs when new dynptr types or helpers are introduced.
Generated 2026-04-03T10:00:00Z
April 1-2 saw heavy activity on the verifier and libbpf fronts. Yonghong Song posted a significant new feature series enabling stack-based argument passing for BPF functions and kfuncs with x86_64 JIT support, while Alexei Starovoitov iterated to v3 on preparatory verifier passes for static stack liveness analysis. Paul Chaignon and Kumar Kartikeya Dwivedi also landed verifier improvements addressing invariant violations and variable-offset syscall context access.
bpf: Introduce bpf register BPF_REG_STACK_ARG_BASE
Introduces a new virtual BPF register BPF_REG_STACK_ARG_BASE to support stack-based argument passing for BPF subprograms and kfuncs. This is the first patch in a 10-part series that extends the BPF calling convention beyond the existing five register arguments. Subsequent patches add verifier support, x86_64 JIT code generation, and selftests. This enables BPF programs to call functions with more than five arguments by spilling extra arguments onto the stack, bringing BPF closer to native C calling conventions.
bpf: Add bpf_compute_const_regs() and bpf_prune_dead_branches() passes
Adds two new pre-verification passes to the BPF verifier: bpf_compute_const_regs() performs a lightweight constant propagation to identify registers holding compile-time constants, and bpf_prune_dead_branches() eliminates unreachable code paths before the main verification pass runs. These passes are groundwork for upcoming static stack liveness analysis, which will reduce the state space the verifier must explore. This is patch 4/6 in Alexei's v3 series "bpf: Prep patches for static stack liveness."
bpf: Add helper and kfunc stack access size resolution
Adds logic to the verifier to resolve the access size for stack slots passed to helpers and kfuncs, completing the v3 preparatory series for static stack liveness analysis. When a helper or kfunc receives a pointer to a stack slot, the verifier now computes the precise byte range being accessed rather than conservatively marking the entire slot as live. This precision is necessary for the upcoming static liveness pass to correctly determine which stack slots need to be initialized before use.
bpf: Simulate branches to prune based on range violations
Fixes a class of verifier invariant violations where register range bounds became inconsistent after branch pruning. When the verifier detects that a register's tracked range is provably violated on a branch, it now simulates taking that branch and prunes the state rather than leaving the inconsistency unresolved. This addresses syzbot-reported crashes caused by invalid register states propagating through the verifier. This is patch 4/6 in Paul Chaignon's v3 series "Fix invariant violations and improve branch detection."
bpf: Support variable offsets for syscall PTR_TO_CTX
Extends the BPF verifier to allow variable (non-constant) offsets when accessing syscall program context pointers of type PTR_TO_CTX. Previously, the verifier rejected any non-zero variable offset into a syscall ctx, requiring programs to use only constant offsets. The patch teaches the verifier to track variable offsets and validate bounds at access time, enabling more flexible syscall BPF programs. This is the first patch in Kumar's v4 seven-patch series.
bpf: Do not ignore offsets for loads from insn_arrays
Fixes a bug in the BPF loader where non-zero offsets within instruction arrays were silently ignored when resolving map file descriptors and other relocations. The offset field was being discarded, causing incorrect values to be loaded when programs accessed elements beyond the base of an insn_array. This is a correctness fix affecting programs that use offset-based access patterns into instruction arrays, with accompanying selftests added in patch 2/2.
bpf: Refactor dynptr mutability tracking
Refactors how the BPF verifier tracks whether a dynptr is mutable or read-only, consolidating scattered mutability checks into a cleaner abstraction. Previously, mutability was inferred from the dynptr type and call context at each check site; this patch centralizes the logic to reduce duplication and make the invariants easier to audit. The refactor prepares the codebase for future dynptr extensions without changing existing behavior.
bpf: reject short IPv4/IPv6 inputs in bpf_prog_test_run_skb
Adds input length validation to bpf_prog_test_run_skb() to reject buffers shorter than a minimum IPv4 or IPv6 header when the data is marked as IP traffic. Without this check, a malformed short packet could cause the test runner to read memory beyond the supplied buffer. This is a v3 single-patch fix addressing a potential out-of-bounds read in the BPF test infrastructure.
libbpf: Fix BTF handling in bpf_program__clone()
Fixes a bug in libbpf's bpf_program__clone() where the cloned program did not correctly inherit or reference the parent's BTF object, leading to use-after-free or incorrect BTF type resolution when the cloned program was loaded. The fix ensures the BTF reference is properly managed across the clone operation. This is a v2 single-patch bug fix for an issue discovered in programs that use program cloning with BTF-dependent features.
Generated 2026-04-02T23:24:36Z
The week of April 13-20 saw substantial activity across the BPF subsystem. The most prominent contribution was Yonghong Song's stack-arguments series (reaching v6), which enables BPF functions and kfuncs to accept more than five arguments by spilling extras onto the stack, complete with x86-64 JIT support and verifier validation. Jiri Olsa posted a 28-patch series introducing a tracing_multi link type, allowing a single BPF link to attach to multiple kernel functions simultaneously for more efficient multi-function tracing. Other notable work included Alan Maguire extending the BTF UAPI to use previously reserved bits for larger vlen and kind fields, Puranjay Mohan adding CPU time counter kfuncs for precise hardware performance measurement, and Kumar Kartikeya Dwivedi adding a mechanism for the verifier to emit non-fatal warning messages along with a deprecated kfunc annotation.
bpf: Support stack arguments for bpf functions
Adds verifier support for BPF subprogram functions to receive arguments on the stack, enabling signatures with more than the standard five register-based parameters. A new BPF_REG_STACK_ARG_BASE mechanism tracks stack argument state through the verifier's analysis, and the calling convention is updated to lay out excess parameters in a defined region of the caller's stack frame. This is patch 07/17 of the v6 series and is the core enabler for the rest of the stack argument work. The feature requires JIT support, and programs on interpreter-only configurations are rejected.
bpf,x86: Implement JIT support for stack arguments
Implements the x86-64 JIT backend changes needed to physically spill excess function arguments onto the stack when calling BPF subprograms or kfuncs. The JIT allocates additional stack space and emits store instructions to lay out parameters before the call site as the callee expects. This is patch 14/17 of the v6 series and is the first architecture-specific implementation, after which the feature becomes usable on x86-64 systems. Other JIT backends can follow the same pattern independently.
bpf: Add support for tracing multi link
Introduces the core kernel implementation of the tracing_multi link type, which lets a single BPF link attach a program to multiple kernel functions at once instead of requiring one link per function. The implementation reuses and extends the existing trampoline infrastructure, adding bulk attach and detach operations via new bpf_trampoline_multi_attach/detach functions. This is patch 13/28 of a 28-patch v5 series that also covers libbpf support, session semantics, cookies, fdinfo, and extensive selftests. Bulk attachment reduces per-function overhead and simplifies management of tracing programs that monitor many kernel entry points.
libbpf: Add support to create tracing multi link
Adds the libbpf API surface for creating tracing_multi links, enabling user-space programs to attach to multiple kernel functions through a single library call. The implementation resolves function names to BTF IDs and constructs the appropriate bpf_link_create attributes for the new link type. This is patch 20/28 of the tracing_multi series and depends on the earlier kernel-side implementation patches. Applications that currently loop over individual fentry/fexit attachments can migrate to this API for a simpler and more efficient interface.
bpf: Add support for verifier warning messages
Introduces a new verifier facility to emit non-fatal warning messages during program verification, separate from the existing error-only log. Warnings allow the verifier to surface advisory information, such as use of deprecated kfuncs, without failing the load. This is patch 1/4 of the v3 series; subsequent patches use the mechanism to implement the deprecated kfunc annotation. The change keeps the existing log level semantics intact and exposes the warnings through the bpf_attr verifier log interface so that libbpf and tools can display them to users.
bpf: Introduce __bpf_kfunc_mark_deprecated annotation
Adds a __bpf_kfunc_mark_deprecated macro that kernel developers can apply to kfunc definitions to signal that a function is deprecated and should not be used in new programs. When the verifier encounters a call to a deprecated kfunc it emits a warning (via the new warning infrastructure from patch 1/4) rather than rejecting the program, preserving backward compatibility. This follows a well-understood deprecation pattern familiar from other kernel annotation systems and gives BPF subsystem maintainers a clean path to phase out old kfuncs.
bpf: add bpf_get_cpu_time_counter kfunc
Introduces bpf_get_cpu_time_counter, a new kfunc that reads the raw CPU hardware time-stamp counter, providing BPF programs with a low-overhead, high-resolution time source for performance measurement. This is patch 2/6 of a series now in its 13th revision, which also adds bpf_cpu_time_counter_to_ns for converting the raw counter value to nanoseconds and includes ARM64 JIT support. The kfunc is useful for latency profiling and micro-benchmarking from within BPF programs without the overhead of a full clock_gettime call. The long revision history reflects careful review of security and portability concerns.
bpf: Extend BTF UAPI vlen, kinds to use unused bits
Expands the BTF type header to use previously reserved bits, growing the vlen field from 16 to 24 bits and the kind field to support additional type kinds. This removes a practical limit on the number of members a BTF struct or union can describe, which matters for large generated types. The patch is the first of a six-part v3 series that updates libbpf, bpftool, selftests, and documentation to match the new layout. Careful backward compatibility handling ensures existing tools and kernels can still parse older BTF blobs correctly.
bpf: Fix NULL deref in map_kptr_match_type for scalar regs
Fixes a NULL pointer dereference in map_kptr_match_type that could be triggered when a BPF program stored a scalar (non-pointer) value into a map slot typed as a kptr. The function assumed the register was always a pointer and dereferenced its type information without checking, leading to a verifier crash. The fix adds an early check that rejects the scalar store with a clear error message before the dereference occurs. The companion selftest patch (2/2) reproduces the crash to prevent regression.
libbpf: Report error when a negative kprobe offset is specified
Fixes a libbpf oversight where a negative offset for a kprobe attachment was silently forwarded to the kernel rather than rejected early with a clear error. Negative kprobe offsets are not supported and passing them produces confusing kernel-level failures. This is the third revision of the fix, refining the placement of the validation check based on earlier review feedback. Catching the invalid value in libbpf provides a much better error experience for programs that accidentally misconfigure their kprobe offsets.
arm32, bpf: Reject BPF-to-BPF calls and callbacks in the JIT
Makes the ARM32 BPF JIT explicitly reject programs that use BPF-to-BPF subprogram calls or callbacks, which the 32-bit ARM JIT does not support. Previously such programs could reach the JIT and fail in an undefined way; now they are turned away with a clear error at JIT time. This is a v2 follow-up that supersedes an earlier patch targeting only BPF_PSEUDO_CALL. Explicit rejection is preferable to a silent fallback to the interpreter, which could mask bugs and produce inconsistent performance characteristics.
selftests/bpf: fix off-by-one in bpf_cpumask_populate related selftest
Corrects an off-by-one error in a BPF selftest exercising bpf_cpumask_populate, where the loop bound caused a read one element past the intended array boundary. The bug could produce spurious failures or undefined behavior on configurations where the adjacent memory was not safely accessible. The fix is a one-line bound correction with no impact on the BPF subsystem itself. Accurate selftests are important so that CI results reflect real regressions rather than test-infrastructure noise.
Generated 2026-04-21T00:00:00Z
The week of April 6-13 on bpf-next was defined by two parallel verifier modernization efforts and a significant new calling-convention feature. Eduard Zingerman's static stack liveness analysis series (v4, 14 patches) completed its run, delivering 4-byte stack tracking granularity, a forward arg-tracking dataflow pass, and dead stack slot poisoning to strengthen initialization safety guarantees. Alexei Starovoitov simultaneously pursued a structural cleanup, splitting the monolithic verifier.c into focused modules across four revision rounds. On the feature side, Yonghong Song's v4 18-patch series brings stack-based argument passing to BPF functions and kfuncs, backed by x86_64 JIT support, while Emil Tsalapatis pushed the arena memory library to v7 with a buddy allocator and ASAN runtime.
The culmination of Zingerman's v4 static stack liveness series (14 patches), this patch uses the results of the new forward arg-tracking dataflow analysis to poison BPF stack slots that are written but never subsequently read. Poisoning dead slots causes the verifier to reject programs that rely on uninitialized stack memory, closing a class of subtle bugs where stale values could influence program behavior. The series builds on 4-byte stack granularity tracking, (callsite, depth)-keyed func_instances, and a new forward liveness API introduced in earlier patches.
bpf: Enforce regsafe base id consistency for BPF_ADD_CONST scalars
Fixes a verifier state-pruning correctness bug where the regsafe() check failed to account for base ID consistency when comparing two BPF_ADD_CONST scalar registers. Without this fix, the verifier could incorrectly declare two program states as equivalent and prune a branch that should have been explored, potentially accepting a program that reads out-of-bounds. A companion selftest is included to exercise the specific code path.
bpf: Split fixup/post-processing logic from verifier.c into fixups.c
The opening patch of Starovoitov's v4 verifier.c split series moves fixup and post-processing logic out of the monolithic verifier.c into fixups.c. Over four revision rounds this week the series also spun out liveness.c, cfg.c, states.c, backtrack.c, and check_btf.c, dramatically reducing the size of verifier.c and making each subsystem independently reviewable. The refactoring is behavior-preserving and comes with no functional changes.
bpf: Support stack arguments for bpf functions
The core verifier patch of Song's v4 18-patch series teaches the BPF verifier to validate stack-based arguments at BPF-to-BPF call sites, extending the calling convention beyond the five-register limit. A new BPF_REG_STACK_ARG_BASE register is introduced for addressing arguments passed on the caller's stack, and the verifier enforces that stack arguments are only used in JITed programs not reachable through tail calls. This enables BPF functions and kfuncs to accept more than five arguments.
bpf,x86: Implement JIT support for stack arguments
The x86_64 JIT backend patch in Song's stack-arguments series emits code to correctly marshal arguments placed on the caller's stack frame at BPF function call boundaries. Arguments beyond the five-register window are addressed via BPF_REG_STACK_ARG_BASE and copied into the appropriate stack location before the call. This patch completes the end-to-end implementation for x86_64, with negative tests for unsupported configurations included in the selftest series.
bpf: Allow instructions with arena source and non-arena dest registers
The first substantive verifier patch in Tsalapatis's v7 arena library series relaxes a restriction on mixed arena/non-arena arithmetic so that result values can be plain scalars or non-arena pointers. This is needed to support the user-space arena library code, which frequently mixes pointer types in address calculations. The v7 series accompanying it adds a buddy allocator, ASAN runtime, and a comprehensive libarena selftest suite.
bpf: Fix Null-Pointer Dereference in kernel_clone() via BPF fmod_ret on security_task_alloc
This v3 bug fix addresses a null-pointer dereference triggered when a BPF fmod_ret program attached to the security_task_alloc hook returns non-zero, causing kernel_clone() to proceed with a partially-initialized task struct. The fix adds the missing return-value check so the error path is taken before the null dereference, and a selftest verifies correct behavior. This patch appeared as v2 earlier in the week and was refined to v3 by April 11.
bpf: Move constants blinding out of arch-specific JITs
The first patch in Xu Kuohai's v13 5-patch series consolidates JIT constant blinding into the architecture-independent BPF core, removing per-arch duplication. The series' broader goal is to enable all JIT backends to emit ENDBR (x86) and BTI (AArch64) instructions for indirect call targets, strengthening CFI on those architectures. Hoisting the blinding out of the per-arch JITs first lets the arch-specific CFI instruction emission in later patches slot in cleanly.
bpf: Use kmalloc_nolock() universally in local storage
Converts BPF local storage allocation paths to use the recently introduced kmalloc_nolock() variant, which allocates without taking locks and is therefore safe to call from the restricted contexts in which local storage updates can run. A companion patch in the same v2 series removes now-unnecessary gfp_flags plumbing from bpf_local_storage_update(). The series also fixes a selftest that was inadvertently tracing kmalloc calls and would be perturbed by the allocation strategy change.
bpf: add missing fsession to the verifier log
Adds the BPF_TRACE_FSESSION attach type to the verifier's attach-type log output, where it was missing even though the type is defined. Two companion patches in the v3 series fix the same omission in the BPF documentation and bpftool's usage text. The fix affects only diagnostics and documentation, with no change to runtime behavior.
Generated 2026-04-14T00:00:00Z
The week of March 30 - April 6 saw heavy activity around BPF verifier improvements and calling convention extensions. Yonghong Song iterated through three versions of stack argument support for BPF functions and kfuncs, culminating in v3 with a new BPF_REG_STACK_ARG_BASE register and x86_64 JIT implementation. Alexei Starovoitov continued refining prep patches for static stack liveness analysis, reaching v5 with subprogram topological ordering and constant-register computation passes that will enable smarter stack slot tracking. Additional highlights include Emil Tsalapatis introducing a full arena library and runtime, Xu Kuohai reaching v12 for emitting ENDBR/BTI instructions at indirect JIT jump targets, Chengkaitao refactoring how the verifier dispatches kfunc checks via a new BPF_VERIF_KFUNC_DEF mechanism, and Paul Chaignon fixing verifier invariant violations discovered by syzbot.
bpf: Introduce bpf register BPF_REG_STACK_ARG_BASE
Introduces BPF_REG_STACK_ARG_BASE, a new virtual BPF register serving as the base pointer for stack-allocated function arguments. This is the foundation of the 11-patch v3 series enabling BPF functions and kfuncs to receive arguments too large for the five general-purpose argument registers. The register is handled specially by both the verifier and x86_64 JIT backend to allocate, track, and validate stack argument slots. The series also includes selftests for BPF-to-BPF calls, kfunc calls, and negative cases for oversized arguments.
bpf: Add helper and kfunc stack access size resolution
The final patch in Alexei Starovoitov's v5 'Prep patches for static stack liveness' series, which adds helper and kfunc stack access size resolution used by upcoming static liveness analysis. The series as a whole sorts subprograms in topological order after check_cfg(), adds bpf_compute_const_regs() and bpf_prune_dead_branches() verifier passes, and moves verifier helpers to a shared header. Together these changes lay the groundwork for tracking which stack slots are actually live, reducing unnecessary spill/fill overhead.
bpf: Upgrade scalar to PTR_TO_ARENA on arena pointer addition
The first patch in the v3 'Introduce arena library and runtime' series, which teaches the verifier to promote a scalar register to PTR_TO_ARENA when added to an arena pointer. The broader 9-patch series introduces a libarena scaffolding with an ASAN-compatible runtime, a buddy allocator implementation, and comprehensive selftests. This infrastructure enables BPF programs using memory arenas to benefit from proper pointer type tracking and arena-aware address sanitization during testing.
bpf, x86: Emit ENDBR for indirect jump targets
Part of Xu Kuohai's v12 series adding Intel CET ENDBR (x86) and ARM64 BTI instructions at indirect JIT jump targets to harden BPF programs against control-flow hijacking. A companion patch adds a helper to detect indirect jump targets during JIT compilation, and another passes bpf_verifier_env to the JIT so it has the information needed to insert these instructions. The series also moves constant blinding out of arch-specific JITs into a shared location to simplify future JIT backends.
bpf: Introduce BTF_SET/ID_SUB and BPF_VERIF_KFUNC_DEF
Introduces BTF_SET/ID_SUB and BPF_VERIF_KFUNC_DEF macros that allow kfunc sets to embed their own verifier check callbacks, replacing the existing flat dispatch table used by the verifier. This refactor makes it easier to add verifier logic for new kfuncs without touching central verifier files. A follow-on patch converts the rbtree kfuncs to use the new mechanism, demonstrating the pattern.
bpf: Refactor reg_bounds_sanity_check
The first patch in Paul Chaignon's v3 'Fix invariant violations and improve branch detection' series, which addresses syzbot-reported verifier invariant violations. The series refactors reg_bounds_sanity_check, adds early exit for invalid reg_bounds_sync inputs, simulates branches to prune paths with range violations, and removes incorrect invariant-violation flags from selftests. These fixes improve verifier correctness when dealing with edge cases in register range tracking.
libbpf: Auto-upgrade kprobes to multi-kprobes when supported
Part of an RFC v3 series that transparently upgrades single kprobe and uprobe attachments to their multi-kprobe/multi-uprobe equivalents when the kernel supports them. A new FEAT_KPROBE_MULTI_LINK feature probe is added to libbpf to detect kernel support at runtime. This allows BPF programs written against the single-attach API to silently benefit from the performance improvements of multi-attach without any code changes.
bpf: Do not ignore offsets for loads from insn_arrays
Fixes a bug where the BPF verifier ignored non-zero offsets when loading values from instruction arrays, causing incorrect value reads. The fix ensures the offset is properly factored into the load address computation in the verifier's constant propagation path. A companion patch adds regression tests covering a variety of offset values to prevent recurrence.
pull-request: bpf-next 2026-04-01
Martin KaFai Lau's bpf-next pull request for April 1, 2026, consolidating the accumulated bpf-next changes for submission to Linus's tree. Pull requests like this mark a significant milestone in the development cycle, bundling verifier improvements, new helpers, libbpf changes, and selftests accumulated since the previous pull.
bpf: Refactor dynptr mutability tracking
Refactors how the BPF verifier tracks dynptr mutability, consolidating the immutability flag into the dynptr state representation for cleaner handling. This v2 patch simplifies the code paths that check whether a dynptr may be written through, reducing the risk of correctness issues when new dynptr types are added. The change is internal to the verifier with no user-visible behavior change.
Generated 2026-04-06T10:13:03Z
No monthly summaries yet. Check back on the 1st.