linux kernel patch summaries, generated daily
This period was dominated by Eduard Zingerman's ambitious static stack liveness data flow analysis series, which hit v4 with 14 patches, adding a forward arg-tracking pass to the verifier that enables poisoning of dead stack slots. Mykyta Yatsenko's sleepable tracepoint support reached v9, and Emil Tsalapatis posted v5 of the arena library and runtime, introducing buddy-allocator support and ASAN integration for BPF arena programs.
The final patch of the 14-part v4 static stack liveness series, this change poisons dead stack slots identified by the new dataflow analysis pass. By overwriting slots that the verifier proves are no longer live, it prevents inadvertent reuse of stale values and strengthens the safety guarantees of the BPF verifier. The series introduces 4-byte granularity liveness tracking, a forward arg-tracking pass, and function-instance keying by (callsite, depth) to correctly handle subprogram calls. Companion selftest patches validate the new behavior against both new and existing verifier test cases.
bpf: introduce forward arg-tracking dataflow analysis
This patch is the algorithmic core of the static stack liveness series: it adds a forward dataflow analysis pass that tracks which stack slots are written before being read, enabling the verifier to identify dead writes. Unlike the existing backward liveness pass, this forward pass computes arg-tracking results stored in bpf_liveness masks so they can be queried during normal verification. The approach handles subprogram calls by keying func_instances on (callsite, depth) pairs.
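The forward pass can be sketched in userspace C: walk the program once, mark a slot's pending write on a store, and clear it on a load; whatever is still pending at exit is a dead write. This is a toy model of the idea only — the `toy_*` names, the instruction encoding, and the 64-slot limit are all illustrative, not the verifier's real representation.

```c
#include <stdbool.h>

/* Toy forward pass over straight-line "instructions" touching 4-byte
 * stack slots: record which slots carry an unread write, clear the
 * pending bit on a read, and report slots whose last write was never
 * observed (candidates for poisoning). Illustrative only; supports up
 * to 64 slots via a bitmask. */

#define TOY_MAX_SLOTS 64

enum toy_op { TOY_WRITE, TOY_READ };

struct toy_insn {
    enum toy_op op;
    int slot;                        /* 4-byte stack slot index */
};

/* Returns a bitmask of slots whose final write is dead (never read). */
unsigned long long toy_dead_writes(const struct toy_insn *prog, int len)
{
    unsigned long long pending = 0;  /* slots with an unread write */

    for (int i = 0; i < len; i++) {
        unsigned long long bit = 1ULL << prog[i].slot;

        if (prog[i].op == TOY_WRITE)
            pending |= bit;          /* new write, not yet observed */
        else
            pending &= ~bit;         /* a read makes the write live */
    }
    return pending;                  /* unread writes at exit are dead */
}
```

In a real verifier the same computation runs per basic block with meet operations at join points; the straight-line version above only shows the transfer function.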
bpf: Add sleepable support for raw tracepoint programs
The first patch of a 6-part v9 series enabling BPF tracepoint programs to be marked sleepable, allowing them to call kfuncs and helpers that may block. This patch extends raw tracepoint support by running programs via a new bpf_prog_run_array_sleepable() helper that takes an RCU read-side lock safe for sleeping contexts. Verifier changes in patch 4 enforce that only raw and classic tracepoint program types may carry the sleepable flag. libbpf gains matching SEC() handlers and the series ships with selftests covering both raw and classic tracepoint flavors.
bpf: Upgrade scalar to PTR_TO_ARENA on arena pointer addition
This verifier change allows a scalar value that is added to a PTR_TO_ARENA pointer to itself be upgraded to PTR_TO_ARENA, enabling more ergonomic arena-relative pointer arithmetic in BPF programs without requiring a full re-cast. It is the foundational patch for a 9-part v5 series that also introduces userspace libarena scaffolding, an arena ASAN runtime, a buddy allocator library, and integration tests with ASAN support. The arena memory model is increasingly important for BPF programs that manage their own heap.
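The type rule itself is small enough to model directly: when either operand of an addition is an arena pointer, the result is retyped as one. The enum and field names below are invented for illustration and do not mirror the verifier's actual register state.

```c
/* Toy model of the upgrade rule: adding a scalar to an arena pointer
 * yields an arena pointer, so the scalar result is retyped rather than
 * forcing the program to re-cast. Illustrative names only. */

enum toy_reg_type { TOY_SCALAR, TOY_PTR_TO_ARENA };

struct toy_reg {
    enum toy_reg_type type;
    long long val;
};

/* dst = dst + src, upgrading the result type when either side is arena. */
void toy_alu_add(struct toy_reg *dst, const struct toy_reg *src)
{
    dst->val += src->val;
    if (dst->type == TOY_PTR_TO_ARENA || src->type == TOY_PTR_TO_ARENA)
        dst->type = TOY_PTR_TO_ARENA;
}
```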
bpf: Enforce regsafe base id consistency for BPF_ADD_CONST scalars
This verifier fix ensures that when two scalar registers are compared for equivalence via regsafe(), their base_id fields are treated consistently for scalars produced by BPF_ADD_CONST operations. Without this check, the verifier could incorrectly mark two states as equivalent even when their add_const chains differ, potentially allowing unsound pruning. The companion patch adds a selftest to exercise the base_id consistency requirement directly.
bpf: Use kmalloc_nolock() universally in local storage
This patch (2/3, v2) extends the use of kmalloc_nolock() throughout the BPF local storage implementation so that allocations in IRQ and NMI contexts no longer need to fall back to pre-allocated memory. The companion patch removes the now-unnecessary gfp_flags plumbing from bpf_local_storage_update(), simplifying the call chain. The first patch in the series drops kmalloc tracing from the local storage create benchmark since it is no longer representative.
bpf: Fix Null-Pointer Dereference in kernel_clone() via BPF fmod_ret on security_task_alloc
This v2 fix addresses a null-pointer dereference triggered when a BPF fmod_ret program attached to security_task_alloc returns a non-zero error code: kernel_clone() proceeds to call copy_process(), which may dereference a task pointer that was never fully initialized. The fix adds an early return in the relevant path when the fmod_ret hook indicates failure, preventing the null dereference. A selftest validates the correct return value behavior of fmod_ret for this hook.
Generated 2026-04-11T10:00:00Z
The bpf-next mailing list on April 9-10 was dominated by Eduard Zingerman's static stack liveness data flow analysis series, which reached its third revision with 13 patches introducing forward argument tracking, fine-grained 4-byte stack liveness, and dead stack slot poisoning in the verifier. Daniel Borkmann contributed a fix to prevent stale packet range tracking after scalar arithmetic, and Feng Yang posted a v2 fix for a null-pointer dereference triggered through BPF fmod_ret on security_task_alloc.
bpf: share several utility functions as internal API
This foundational patch in the v3 static stack liveness series refactors several utility functions in the BPF verifier to be accessible as an internal API, enabling reuse by the new static analysis pass. It is the first of 13 patches in the series that together implement a full dataflow liveness analysis for BPF stack slots. By exposing these helpers cleanly, subsequent patches can build the forward arg-tracking and liveness propagation logic without duplicating code. The change sets the groundwork for improved dead stack slot detection and poisoning.
bpf: introduce forward arg-tracking dataflow analysis
This patch introduces a forward dataflow analysis pass to the BPF verifier that tracks which stack slots are written and subsequently read as function arguments. It complements the existing backward liveness analysis by determining which writes are live at call boundaries, enabling the verifier to precisely identify dead stack writes. The analysis operates with 4-byte granularity on stack slots and integrates with the func_instances tracking. This is the core algorithmic addition in Zingerman's static stack liveness series.
The final patch in the v3 static stack liveness series uses the results of the new dataflow analysis to poison stack slots that are written but never subsequently read. Poisoning these dead slots allows the verifier to detect and reject programs that rely on uninitialized or stale stack data, improving safety guarantees. This patch ties together all the preparatory infrastructure added by the preceding 12 patches. It also includes updates to verifier logging to report which slots were identified as dead.
bpf: Fix Null-Pointer Dereference in kernel_clone() via BPF fmod_ret on security_task_alloc
This v2 patch fixes a null-pointer dereference that can be triggered when a BPF program attached via fmod_ret to security_task_alloc returns an error, causing kernel_clone() to proceed with a partially initialized task structure. The fix adds a guard so that the cloned task is properly cleaned up when fmod_ret programs intercept the LSM hook and return a non-zero value. This closes a potential local denial-of-service vector exploitable by privileged BPF users. A companion selftest was added in patch 2/2 to verify the fix.
bpf: Drop pkt_end markers on arithmetic to prevent is_pkt_ptr_branch_taken
This patch fixes a verifier bug where performing scalar arithmetic on a packet pointer register incorrectly preserved the pkt_end marker, causing is_pkt_ptr_branch_taken() to make wrong assumptions about bounds checks in subsequent conditional branches. The fix clears the pkt_end marker when arithmetic modifies the register, preventing stale range information from influencing branch pruning. A regression selftest was added in the companion patch 2/2. This addresses a subtle correctness issue in XDP and TC packet processing programs that manipulate packet pointers arithmetically.
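The invariant the fix restores can be shown with a two-field toy register: any arithmetic on the register must drop the cached result of an earlier pkt_end comparison, otherwise a later branch could be pruned using stale bounds. Field and function names here are illustrative, not the verifier's.

```c
#include <stdbool.h>

/* Toy register state for the fix described above: a cached
 * "compared against pkt_end" marker is only valid for the value that
 * was compared, so mutating the value must clear it. */

struct toy_pkt_reg {
    long long off;        /* offset into the packet */
    bool cmp_pkt_end;     /* result of an earlier pkt_end comparison */
};

void toy_reg_add(struct toy_pkt_reg *reg, long long imm)
{
    reg->off += imm;
    reg->cmp_pkt_end = false;  /* the fix: arithmetic drops the marker */
}
```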
Generated 2026-04-10T10:10:19Z
Today's bpf-next activity was dominated by two major series: Eduard Zingerman's 14-patch overhaul introducing static stack liveness data flow analysis in the verifier, and Mykyta Yatsenko's RFC for a new resizable BPF hash map backed by the kernel's rhashtable infrastructure. Additional notable work includes Kumar Kartikeya Dwivedi's verifier warning message framework, enabling non-fatal deprecation warnings during program load, and Daniel Borkmann's fix for ld_{abs,ind} failure path analysis in BPF subprograms.
bpf: share several utility functions as internal API
This is the opening patch in a 14-part series introducing static stack liveness data flow analysis into the BPF verifier. It refactors several internal utility functions into a shared API to be reused by the upcoming liveness analysis pass. The broader series upgrades stack-slot tracking to 4-byte granularity and introduces a forward arg-tracking dataflow analysis, culminating in dead stack slot poisoning — marking unused stack slots to catch uninitialized reads more reliably. The work also includes logging improvements and extensive selftests covering the new analysis behavior.
This RFC introduces a new BPF map type backed by the kernel's rhashtable infrastructure, enabling dynamically resizable hash maps without the fixed-capacity constraints of BPF_MAP_TYPE_HASH. The 18-patch series implements full lookup/update/delete operations, batch ops, BPF iterators, timer and workqueue support, and libbpf integration. This addresses long-standing performance cliffs when BPF hash maps approach their pre-allocated capacity, as resizing happens transparently at runtime. bpftool documentation and comprehensive selftests round out the RFC.
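The behavior the RFC is after — inserts keep succeeding past any initial capacity because the table grows transparently — can be sketched with a minimal chained hash table in userspace C. This is a toy, not the kernel's rhashtable: the `toy_*` names, the 0.75 load factor, and the multiplicative hash are all arbitrary choices for illustration.

```c
#include <stdlib.h>

/* Minimal resizable chained hash table: when the load factor is
 * exceeded the bucket array doubles and every entry is rehashed, so
 * callers never see a fixed-capacity failure. */

struct toy_node {
    unsigned long key, val;
    struct toy_node *next;
};

struct toy_map {
    struct toy_node **buckets;
    size_t nbuckets, count;
};

static size_t toy_hash(unsigned long key, size_t nbuckets)
{
    return (key * 2654435761UL) % nbuckets;
}

struct toy_map *toy_map_new(size_t nbuckets)
{
    struct toy_map *m = malloc(sizeof(*m));

    m->buckets = calloc(nbuckets, sizeof(*m->buckets));
    m->nbuckets = nbuckets;
    m->count = 0;
    return m;
}

static void toy_map_grow(struct toy_map *m)
{
    size_t newn = m->nbuckets * 2;
    struct toy_node **nb = calloc(newn, sizeof(*nb));

    for (size_t i = 0; i < m->nbuckets; i++) {
        struct toy_node *n = m->buckets[i], *next;

        for (; n; n = next) {          /* rehash every entry */
            next = n->next;
            size_t h = toy_hash(n->key, newn);
            n->next = nb[h];
            nb[h] = n;
        }
    }
    free(m->buckets);
    m->buckets = nb;
    m->nbuckets = newn;
}

int toy_map_update(struct toy_map *m, unsigned long key, unsigned long val)
{
    if (m->count + 1 > m->nbuckets * 3 / 4)    /* load factor 0.75 */
        toy_map_grow(m);

    size_t h = toy_hash(key, m->nbuckets);

    for (struct toy_node *n = m->buckets[h]; n; n = n->next)
        if (n->key == key) {
            n->val = val;              /* update in place */
            return 0;
        }

    struct toy_node *n = malloc(sizeof(*n));

    n->key = key;
    n->val = val;
    n->next = m->buckets[h];
    m->buckets[h] = n;
    m->count++;
    return 0;
}

long toy_map_lookup(const struct toy_map *m, unsigned long key)
{
    size_t h = toy_hash(key, m->nbuckets);

    for (struct toy_node *n = m->buckets[h]; n; n = n->next)
        if (n->key == key)
            return (long)n->val;
    return -1;                         /* not found */
}
```

The kernel's rhashtable additionally resizes incrementally and concurrently under RCU; the stop-the-world rehash above only illustrates why the map has no capacity cliff.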
bpf: Add support for verifier warning messages
This patch introduces a new BPF verifier infrastructure for emitting non-fatal warning messages to userspace during program load, distinct from errors that reject programs outright. The six-patch series adds a KF_DEPRECATED flag for kfuncs, a __bpf_kfunc_replacement() annotation to guide migration, and libbpf support to surface warnings by default. Source location information is exposed by making find_linfo widely available within the verifier. This closes an important ergonomics gap where developers had no in-band signal for deprecated or suboptimal BPF patterns.
bpf: Propagate error from visit_tailcall_insn
This series fixes a verifier bug where errors returned by visit_tailcall_insn were silently discarded during subprogram analysis, potentially allowing malformed programs through verification. A second patch corrects the failure-path analysis for ld_abs and ld_ind instructions when used inside subprograms. A third patch removes an overly narrow static qualifier on a local subprog pointer to support the fix. Selftests are added to cover the previously undetected failure paths, and this is the second revision following initial review feedback.
bpf: Reject sleepable kprobe_multi programs at attach time
kprobe_multi programs execute in a non-preemptible context where sleeping would cause a kernel crash, yet the BPF subsystem previously accepted programs with the sleepable flag for this attach type. This patch adds an explicit check at attach time to reject the sleepable flag in combination with BPF_TRACE_KPROBE_MULTI, returning a clear error rather than silently misbehaving. A selftest verifies the rejection behavior. This is the fifth revision of the series, refined through several rounds of review.
selftests/bpf: Add BPF struct_ops + livepatch integration test
This selftest exercises the interaction between BPF struct_ops programs and the kernel livepatch infrastructure, which allows BPF programs to replace kernel functions in a structured, reversible way. The test verifies that struct_ops-based function replacement behaves correctly alongside livepatch semantics, covering both attachment and detachment paths. This is important validation for a relatively new capability that enables BPF programs to participate in live kernel patching workflows.
libbpf: Allow use of feature cache for non-token cases
libbpf's BTF feature detection previously bypassed the feature cache in code paths that did not involve a BPF token, leading to redundant kernel probes on repeated calls. This patch relaxes that requirement so the feature cache is consulted and populated regardless of token availability. The companion patch adds a BTF sanitization selftest validating BTF layout correctness under various configurations. This is the third revision of the two-patch series.
bpf: add missing fsession to the verifier log
The BPF_ATTACH_TYPE_FSESSION attach type was missing from the verifier log output, bpftool's usage strings, and kernel documentation, leaving it as an undocumented attach type in all developer-facing surfaces. This three-patch series adds fsession to the verifier log, BPF documentation, and bpftool usage output, ensuring consistency across tooling. This is the second revision addressing minor style feedback from the initial submission.
Generated 2026-04-09T10:30:00Z
April 7-8 saw broad activity across verifier correctness, networking, and tooling. Kumar Kartikeya Dwivedi submitted a series adding verifier warning message support for deprecated kfuncs, while Daniel Borkmann fixed linked register delta tracking bugs in the verifier. Nick Hudson's v3 series introduced new tunnel decapsulation flags for bpf_skb_adjust_room, and Andrey Grodzovsky's kprobe symbol disambiguation fix reached v7.
bpf: Add support for verifier warning messages
This v2 series introduces a new verifier warning infrastructure that allows the BPF verifier to emit non-fatal warning messages to users, separate from hard errors. The series leverages KF_DEPRECATED to trigger warnings for deprecated kfuncs and adds a __bpf_kfunc_replacement() annotation to point developers toward preferred replacements. libbpf is updated to flush these warnings by default, giving developers earlier visibility into deprecated API usage without causing program rejection.
bpf: Fix linked reg delta tracking when src_reg == dst_reg
This series fixes two related verifier bugs in linked register delta tracking. The first patch addresses a case where src_reg == dst_reg causes stale delta state to propagate incorrectly through register linking. The second patch ensures the delta field is cleared whenever a register's ID is reset for non-add/sub operations, preventing stale deltas from leaking through ID reassignment. Both fixes are accompanied by targeted selftests.
tracing: Prefer vmlinux symbols over module symbols for unqualified kprobes
Now at v7 (with a concurrent v6 also posted on the same day), this patch stabilizes the fix for kprobe symbol disambiguation when a module symbol shadows a vmlinux symbol of the same name. Unqualified kprobe attachments now correctly prefer the vmlinux symbol, preventing inadvertent tracing of module code. A selftest covering duplicate symbol handling is included.
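The preference rule is easy to model: scan the symbol table, return a vmlinux match immediately, and fall back to the first module match only if no vmlinux symbol exists. The structures and the NULL-means-vmlinux convention below are invented for illustration, not kernel code.

```c
#include <string.h>

/* Toy symbol resolver for the rule above: when a vmlinux symbol and a
 * module symbol share a name, an unqualified lookup picks the vmlinux
 * copy; a module symbol is only used when nothing else matches. */

struct toy_sym {
    const char *mod;       /* NULL for vmlinux symbols */
    const char *name;
    unsigned long addr;
};

unsigned long toy_resolve(const struct toy_sym *tab, int n, const char *name)
{
    unsigned long module_hit = 0;

    for (int i = 0; i < n; i++) {
        if (strcmp(tab[i].name, name) != 0)
            continue;
        if (!tab[i].mod)
            return tab[i].addr;        /* vmlinux wins immediately */
        if (!module_hit)
            module_hit = tab[i].addr;  /* remember first module match */
    }
    return module_hit;                 /* 0 if no symbol matched */
}
```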
bpf: add BPF_F_ADJ_ROOM_DECAP_* flags for tunnel decapsulation
Part of the v3 'bpf: decap flags and GSO state updates' series, this patch introduces new BPF_F_ADJ_ROOM_DECAP_* flags for the bpf_skb_adjust_room helper to handle tunnel decapsulation scenarios correctly. A companion patch clears tunnel GSO state in skb_adjust_room when decapping, addressing correctness issues for BPF programs performing software tunnel decap. The series also refactors ADJ_ROOM flag masks and adds guard rails for invalid flag combinations.
bpf: add missing fsession to the verifier log
This v2 series adds missing support for the fsession BPF attach type across the verifier log, BPF documentation, and bpftool. The fsession attach type was supported in the kernel but absent from these user-facing surfaces, making it invisible to developers using introspection tools. The three-patch series ensures fsession is consistently recognized and displayed alongside other attach types.
bpf: Retire rcu_trace_implies_rcu_gp()
This patch removes the rcu_trace_implies_rcu_gp() function from the BPF RCU machinery, which was a temporary workaround that treated RCU trace critical sections as implying a full RCU grace period. As the kernel RCU subsystem has matured, this workaround is no longer necessary and its removal simplifies the BPF memory model and reduces maintenance burden.
bpf: Upgrade scalar to PTR_TO_ARENA on arena pointer addition
The arena library and runtime series, at v4 during this period, covers the core verifier change plus an extensive libarena user-space test library. The kernel patch upgrades a scalar register to PTR_TO_ARENA when it is derived from arena pointer arithmetic, enabling safe arena pointer tracking in the BPF verifier. The selftest side introduces a complete arena library with buddy allocator and ASAN runtime support.
Generated 2026-04-08T12:00:00Z
Activity on April 6-7 was dominated by two substantial series: Emil Tsalapatis's v4 arena library and runtime series, which introduces a BPF memory arena with buddy allocator and ASAN support, and Kumar Kartikeya Dwivedi's v5 series enabling variable offsets for syscall PTR_TO_CTX access. Additional notable work includes Andrey Grodzovsky's RFC for fixing kprobe attachment priority when module symbols shadow vmlinux symbols, and smaller fixes for dynptr reference handling and insn_array offset loads.
bpf: Upgrade scalar to PTR_TO_ARENA on arena pointer addition
Part of the v4 'Introduce arena library and runtime' series, this patch updates the BPF verifier to upgrade a scalar register to PTR_TO_ARENA when it results from adding a scalar to an arena pointer. This is a key verifier change that enables safe tracking of pointers derived from BPF arena memory regions. The companion patches introduce a full arena user-space library for BPF selftests, including a buddy allocator and ASAN runtime integration.
bpf: Support variable offsets for syscall PTR_TO_CTX
This v5 patch extends the BPF verifier to allow variable (non-constant) offsets when accessing PTR_TO_CTX in BPF programs running in syscall context. Previously, only fixed offsets were permitted, which was overly restrictive for programs that compute offsets dynamically. Companion patches also enable unaligned accesses for syscall context and add comprehensive selftests including tests for accesses beyond U16_MAX.
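With a variable offset the verifier only knows a [min, max] range, so the access must be safe at every offset in that range; the worst case is the largest offset. A hedged sketch of that check, with invented names:

```c
#include <stdbool.h>

/* Toy version of a variable-offset bounds check: permit the access
 * only if every offset the program might compute keeps the full
 * access inside the context object. Illustrative only. */

bool toy_ctx_access_ok(long min_off, long max_off, long size, long ctx_size)
{
    if (min_off < 0 || size <= 0 || max_off < min_off)
        return false;
    /* worst case is the largest offset in the tracked range */
    return max_off + size <= ctx_size;
}
```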
tracing: Prefer vmlinux symbols over module symbols for unqualified kprobes
This RFC v5 patch addresses an ambiguity in kprobe symbol resolution: when a kernel module exports a symbol with the same name as a vmlinux symbol, an unqualified kprobe attachment would previously latch onto the module symbol. The fix ensures vmlinux symbols take precedence for unqualified probes, aligning behavior with user expectations and reducing inadvertent tracing of module code. A selftest covering the duplicate symbol scenario is included.
bpf: Do not ignore offsets for loads from insn_arrays
This v3 fix corrects a bug in the BPF loader where non-zero offsets in insn_array map loads were silently ignored, resulting in incorrect instruction loading. The patch ensures the offset is correctly applied when reading BPF instructions from array maps, preventing subtle program errors that would otherwise be difficult to diagnose. A companion selftest verifies loading from various non-zero offsets.
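The bug class is the copy below with `off` dropped: every read would return bytes from the start of the map value. A minimal sketch of the corrected load, with invented names:

```c
#include <string.h>
#include <stdint.h>
#include <stddef.h>

/* Toy illustration of the fix: a load from an instruction array must
 * index by the caller-supplied byte offset before copying. */

void toy_insn_load(void *dst, const void *map_value, size_t off, size_t len)
{
    /* the fix: start the copy at map_value + off, not at offset 0 */
    memcpy(dst, (const uint8_t *)map_value + off, len);
}
```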
bpf: Allow overwriting referenced dynptr when refcnt > 1
The BPF verifier currently rejects programs that attempt to overwrite a referenced dynptr even when sibling states still hold a valid reference, causing overly conservative program rejections. This patch relaxes the restriction by tracking the reference count across sibling states and permitting the overwrite when refcnt > 1, ensuring the sibling state can still clean up the dynptr on exit. A selftest demonstrating the previously-rejected but safe pattern is included.
Generated 2026-04-08T12:00:00Z
Activity on April 5-6 was dominated by Yonghong Song's v2 and v3 iterations of the 'Support stack arguments for BPF functions and kfuncs' series, which introduces a new BPF_REG_STACK_ARG_BASE register and extends the BPF calling convention to allow structs larger than 8 bytes to be passed via the stack. The v3 revision refines the design with improved verifier validation, x86_64 JIT support, and comprehensive selftests for both BPF-to-BPF calls and kfunc calls.
bpf: Introduce bpf register BPF_REG_STACK_ARG_BASE
Introduces BPF_REG_STACK_ARG_BASE, a new virtual BPF register used as a base pointer for stack-allocated function arguments. This is the foundational patch in the series enabling BPF functions and kfuncs to accept arguments too large to fit in the five general-purpose argument registers. The new register is handled specially by the verifier and JIT backends to track and validate stack argument slots. It allows BPF programs to pass structs larger than 8 bytes by value through a well-defined stack ABI.
bpf: Support stack arguments for bpf functions
Extends the BPF verifier to recognize and validate stack-based argument passing for BPF-to-BPF function calls. The patch teaches the verifier to track argument slots relative to BPF_REG_STACK_ARG_BASE and verify their types, sizes, and liveness. This enables BPF subprograms to receive large struct arguments that cannot fit in registers, matching a common pattern in kernel C code.
bpf: Support stack arguments for kfunc calls
Extends stack argument support to kfunc calls, allowing BPF programs to pass large structs by value to kernel functions exposed via kfuncs. The verifier is updated to check stack argument slots when validating kfunc call sites, ensuring type safety between the BPF caller and the kernel-side parameter declaration. Stack arguments for kfuncs are limited to 8 bytes per slot to match kernel ABI expectations.
bpf: Reject stack arguments in non-JITed programs
Adds a verifier check that rejects programs using stack arguments when running without a JIT compiler. Stack argument passing requires JIT support because the interpreter cannot implement the necessary stack manipulation semantics. This guard ensures the feature is only enabled on platforms and configurations where it is fully supported.
bpf,x86: Implement JIT support for stack arguments
Implements x86_64 JIT backend support for emitting code to set up and tear down stack argument frames for BPF function and kfunc calls. The JIT allocates space on the native stack, copies argument values into position relative to the stack pointer, and passes the base address in the appropriate register. This patch is the concrete implementation that makes the stack argument ABI functional on x86_64.
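The calling convention can be simulated in plain C: the caller spills an oversized struct into a dedicated stack area and hands the callee a base pointer to it, which is the role BPF_REG_STACK_ARG_BASE plays in the series. Everything below is a userspace sketch, not JIT output.

```c
#include <string.h>
#include <stdint.h>

/* Userspace sketch of stack-based argument passing: copy the large
 * argument into a frame buffer, then call with the frame base. */

struct big_arg {                /* 24 bytes: too big for one register */
    long a, b, c;
};

/* callee receives its large argument via a stack-area base pointer */
long toy_callee(const void *stack_arg_base)
{
    const struct big_arg *arg = stack_arg_base;

    return arg->a + arg->b + arg->c;
}

/* caller: set up the "frame", spill the argument, pass the base */
long toy_call_with_stack_arg(const struct big_arg *arg)
{
    uint8_t frame[sizeof(*arg)];

    memcpy(frame, arg, sizeof(*arg));  /* spill argument to the frame */
    return toy_callee(frame);          /* frame base stands in for
                                          BPF_REG_STACK_ARG_BASE */
}
```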
selftests/bpf: Add verifier tests for stack argument validation
Adds verifier-level selftests that exercise both positive and negative cases for stack argument validation, including type mismatches, size violations, and use of uninitialized stack slots. These tests complement the functional selftests from earlier patches and ensure the verifier correctly rejects malformed programs using stack arguments. The negative tests cover the greater-than-8-byte kfunc stack argument restriction introduced in the series.
Generated 2026-04-06T10:13:03Z
No patches were submitted to the bpf mailing list during this period.
Generated 2026-04-05T09:43:13Z
The bpf-next mailing list saw active development on April 3-4, 2026, centered on BPF verifier improvements, JIT code generation, and libbpf usability enhancements. Alexei Starovoitov continued iterating on preparatory patches for static stack liveness analysis (reaching v5), while Xu Kuohai posted a 12th revision of the ENDBR/BTI CFI series for x86 and arm64. Emil Tsalapatis introduced a comprehensive arena library and runtime for BPF programs, and Chengkaitao proposed new infrastructure to simplify kfunc verifier registration.
bpf: Do register range validation early
This patch moves register range validation to an earlier stage in the BPF verifier pipeline as a preparatory step for implementing static stack liveness analysis. By validating register ranges sooner, subsequent analysis passes can make more informed decisions about stack usage. This is the first of a 6-patch v5 series from Alexei Starovoitov that lays the groundwork for static stack liveness, a significant verifier enhancement aimed at improving precision in BPF program analysis.
bpf: Add bpf_compute_const_regs() and bpf_prune_dead_branches() passes
Introduces two new compiler-style analysis passes to the BPF verifier: constant register computation and dead branch pruning. These passes allow the verifier to identify and eliminate unreachable code paths before the main verification pass runs, reducing the state space that must be explored. This is foundational infrastructure for static stack liveness analysis, which will allow the verifier to precisely track stack slot usage across subprograms and enable future optimizations.
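The two passes compose naturally: once constant propagation knows a register's value, a conditional on that register has a fixed outcome and one arm is provably dead. The toy below runs both ideas over an invented straight-line opcode set; none of the names correspond to real verifier code.

```c
#include <stdbool.h>

/* Toy constant propagation plus dead-branch detection: track which
 * registers hold known constants, and count conditionals whose
 * outcome is therefore fixed (one arm dead). Illustrative only. */

#define TOY_NREGS 4

enum toy_opc { TOY_MOV_IMM, TOY_ADD_IMM, TOY_JEQ_IMM };

struct toy_prog_insn {
    enum toy_opc opc;
    int reg;
    long imm;
};

/* Returns how many conditionals could be resolved statically. */
int toy_prune_dead_branches(const struct toy_prog_insn *prog, int len)
{
    bool known[TOY_NREGS] = { false };
    long val[TOY_NREGS] = { 0 };
    int pruned = 0;

    for (int i = 0; i < len; i++) {
        const struct toy_prog_insn *in = &prog[i];

        switch (in->opc) {
        case TOY_MOV_IMM:
            known[in->reg] = true;
            val[in->reg] = in->imm;
            break;
        case TOY_ADD_IMM:
            if (known[in->reg])        /* constant stays constant */
                val[in->reg] += in->imm;
            break;
        case TOY_JEQ_IMM:
            if (known[in->reg])        /* outcome fixed: one arm dead */
                pruned++;
            break;
        }
    }
    return pruned;
}
```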
bpf: Add helper and kfunc stack access size resolution
The final patch in Alexei Starovoitov's v5 series adds logic for resolving the sizes of stack accesses made by helpers and kfuncs, a prerequisite for accurate static stack liveness computation. Understanding how much stack space each helper or kfunc call may touch is essential for the verifier to determine which stack slots are live at any given program point. Together with the earlier patches in the series, this completes the preparatory infrastructure for static stack liveness.
bpf: Introduce BTF_SET/ID_SUB and BPF_VERIF_KFUNC_DEF
Introduces new BTF infrastructure (BTF_SET/ID_SUB) and a BPF_VERIF_KFUNC_DEF macro to simplify how the BPF verifier registers and matches kfunc verification callbacks. Currently kfunc verification logic requires manual BTF set management and is scattered across the codebase; this refactoring provides a unified, declarative mechanism for associating kfuncs with their verifier hooks. The accompanying patch applies this new infrastructure to rbtree kfuncs as a concrete demonstration.
bpf: Add helper to detect indirect jump targets
Adds a helper function to the BPF JIT infrastructure for identifying indirect jump targets in BPF programs, enabling subsequent patches to emit control-flow integrity (CFI) landing pad instructions at those sites. On x86 this means emitting ENDBR instructions (for Intel IBT), and on arm64 BTI instructions. This is the 12th revision of a mature series by Xu Kuohai that improves BPF JIT compatibility with CPU-enforced CFI features, with both x86 and arm64 backends covered.
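The emission side can be sketched as a loop that prepends a landing pad wherever the helper flagged an indirect jump target; on x86 the ENDBR64 encoding is the byte sequence f3 0f 1e fa. The one-byte "instruction" encoding below is a placeholder, not real BPF or x86 code generation.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Toy "JIT" emission loop: every instruction flagged as an indirect
 * jump target gets an ENDBR64 landing pad emitted in front of it. */

struct toy_jit_insn {
    uint8_t byte;              /* placeholder one-byte encoding */
    bool indirect_target;      /* flag from the target-detection pass */
};

size_t toy_emit(const struct toy_jit_insn *prog, int len, uint8_t *out)
{
    static const uint8_t endbr64[4] = { 0xf3, 0x0f, 0x1e, 0xfa };
    size_t n = 0;

    for (int i = 0; i < len; i++) {
        if (prog[i].indirect_target)
            for (int j = 0; j < 4; j++)
                out[n++] = endbr64[j];   /* landing pad first */
        out[n++] = prog[i].byte;         /* then the instruction */
    }
    return n;
}
```

On arm64 the same slot would hold a BTI instruction instead; the pattern — landing pad immediately before each legitimate indirect target — is what IBT/BTI hardware enforcement checks.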
bpf: Upgrade scalar to PTR_TO_ARENA on arena pointer addition
Enhances the BPF verifier to recognize that a scalar value resulting from arithmetic on an arena pointer should itself be typed as PTR_TO_ARENA, improving the ergonomics and correctness of arena-based BPF programs. This is the core kernel-side change in a 9-patch v3 series that also introduces a libarena library and runtime for BPF, including a buddy allocator and ASAN integration. The series significantly lowers the barrier for BPF programs to perform dynamic memory management using arenas.
libbpf: Auto-upgrade kprobes to multi-kprobes when supported
This RFC proposes transparent automatic upgrading of single kprobe attachments to the more efficient multi-kprobe mechanism when the kernel supports it, mirroring a companion patch that does the same for uprobes. Multi-kprobes attach to multiple functions via a single file descriptor, reducing per-attach overhead considerably. The series (RFC v3) also adds a libbpf feature probe to detect kernel multi-kprobe link support, making the upgrade decision automatic and safe across kernel versions.
Generated 2026-04-04T09:42:10Z
A busy day on bpf-next dominated by verifier and JIT work. Yonghong Song posted a major 10-patch series introducing stack-based argument passing for BPF functions and kfuncs, enabling larger structs to be passed by value. Alexei Starovoitov continued iterating, reaching v5, on preparatory verifier patches for static stack liveness analysis, while Emil Tsalapatis proposed a new arena library and runtime for BPF selftests.
bpf: Introduce bpf register BPF_REG_STACK_ARG_BASE
First patch in a 10-part series adding stack-based argument passing to BPF functions and kfuncs. It introduces a new virtual register BPF_REG_STACK_ARG_BASE to represent the base of stack-passed arguments in the BPF calling convention. This enables passing by value large structs that exceed the available argument registers. Subsequent patches in the series add verifier enforcement, x86-64 JIT support, and selftests covering both positive and negative cases.
bpf: Do register range validation early
First patch (v5) in a 6-patch series preparing the verifier for static stack liveness analysis. This patch moves register range validation to an earlier point in the verification pipeline so that subsequent passes can rely on consistent range invariants. The series also adds topological subprogram ordering after check_cfg(), dead branch pruning, and constant register computation passes. A v5 respin was posted within hours of v4, indicating rapid iteration.
bpf: Upgrade scalar to PTR_TO_ARENA on arena pointer addition
First patch (v3) in a 9-part series introducing an arena library and runtime for BPF selftests. This verifier change teaches the BPF verifier to upgrade a scalar register to PTR_TO_ARENA when it is the result of adding a scalar to an arena pointer, improving type-safety for arena-allocated memory. The rest of the series builds libarena scaffolding, an ASAN runtime for detecting memory errors in arena programs, a buddy allocator, and comprehensive selftests.
bpf: Move constants blinding out of arch-specific JITs
First patch (v11) in a 5-patch series that emits ENDBR (x86) and BTI (arm64) instructions at indirect jump targets in BPF JIT-compiled programs to harden against control-flow hijacking attacks. This initial patch refactors constants blinding out of architecture-specific JITs and into shared BPF core code, passing the bpf_verifier_env to the JIT. Later patches add a verifier helper to detect indirect jump targets and the per-arch emission logic for ENDBR and BTI landing pads.
bpf: Refactor reg_bounds_sanity_check
First patch (v3) in a 6-patch series fixing verifier invariant violations surfaced by syzbot. The series refactors the register bounds sanity check, exits early when reg_bounds_sync receives invalid inputs, simulates branches to prune states based on range violations, and removes now-unnecessary invariant violation flags from selftests. These fixes improve the reliability of the verifier's range-tracking logic and address potential incorrect pruning decisions.
bpf: Do not ignore offsets for loads from insn_arrays
Bug fix (v2) correcting the BPF verifier's handling of loads from instruction arrays with non-zero offsets. Previously the offset was silently ignored, leading to incorrect values being read. The fix ensures the offset is properly applied, and a companion selftest patch adds coverage for the various offset scenarios to prevent regressions.
bpf: Refactor dynptr mutability tracking
A v2 verifier cleanup that refactors how dynptr mutability is tracked internally. Instead of scattering mutability checks across dynptr helper validation paths, this patch consolidates the tracking into a cleaner representation. This makes it easier to reason about read-only vs. read-write dynptr semantics and reduces the risk of future correctness bugs when new dynptr types or helpers are introduced.
Generated 2026-04-03T10:00:00Z
April 1-2 saw heavy activity on the verifier and libbpf fronts. Yonghong Song posted a significant new feature series enabling stack-based argument passing for BPF functions and kfuncs with x86_64 JIT support, while Alexei Starovoitov iterated to v3 on preparatory verifier passes for static stack liveness analysis. Paul Chaignon and Kumar Kartikeya Dwivedi also landed verifier improvements addressing invariant violations and variable-offset syscall context access.
bpf: Introduce bpf register BPF_REG_STACK_ARG_BASE
Introduces a new virtual BPF register BPF_REG_STACK_ARG_BASE to support stack-based argument passing for BPF subprograms and kfuncs. This is the first patch in a 10-part series that extends the BPF calling convention beyond the existing five register arguments. Subsequent patches add verifier support, x86_64 JIT code generation, and selftests. This enables BPF programs to call functions with more than five arguments by spilling extra arguments onto the stack, bringing BPF closer to native C calling conventions.
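A toy model of the extended convention (Python sketch, hypothetical names): the first five arguments travel in registers R1-R5, and anything beyond is spilled to stack slots addressed from a dedicated base register.

```python
# Hypothetical sketch of the calling-convention extension: the first five
# arguments travel in registers R1-R5, and anything beyond is spilled to
# stack slots addressed from a stack-arg base (modeled here as a list).

def marshal_args(args, num_arg_regs=5):
    """Split a call's arguments into register args and stack-spilled args."""
    regs = {f"R{i + 1}": v for i, v in enumerate(args[:num_arg_regs])}
    stack = list(args[num_arg_regs:])  # addressed via the stack-arg base
    return regs, stack

regs, stack = marshal_args([10, 20, 30, 40, 50, 60, 70])
assert regs == {"R1": 10, "R2": 20, "R3": 30, "R4": 40, "R5": 50}
assert stack == [60, 70]  # the sixth and seventh args go to the stack
```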
bpf: Add bpf_compute_const_regs() and bpf_prune_dead_branches() passes
Adds two new pre-verification passes to the BPF verifier: bpf_compute_const_regs() performs a lightweight constant propagation to identify registers holding compile-time constants, and bpf_prune_dead_branches() eliminates unreachable code paths before the main verification pass runs. These passes are groundwork for upcoming static stack liveness analysis, which will reduce the state space the verifier must explore. This is patch 4/6 in Alexei Starovoitov's v3 series "bpf: Prep patches for static stack liveness."
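The interaction of the two passes can be sketched over a hypothetical mini-IR (not the verifier's real instruction encoding or data structures): first record registers holding known constants, then drop branches whose condition those constants decide.

```python
# Illustrative model of the two passes (hypothetical mini-IR, not the
# verifier's real data structures): first propagate known constants, then
# drop branches whose condition is decided by a constant.

def compute_const_regs(insns):
    """Map register -> constant for simple 'mov reg, imm' instructions."""
    consts = {}
    for op, dst, src in insns:
        if op == "mov_imm":
            consts[dst] = src
        elif op in ("mov_reg", "alu"):
            consts.pop(dst, None)  # value is no longer a known constant
    return consts

def prune_dead_branches(branches, consts):
    """Keep only branches whose condition is not statically false."""
    live = []
    for reg, cmp_imm in branches:          # branch taken iff reg == cmp_imm
        if reg in consts and consts[reg] != cmp_imm:
            continue                       # provably not taken: prune
        live.append((reg, cmp_imm))
    return live

insns = [("mov_imm", "r1", 0), ("mov_imm", "r2", 7)]
consts = compute_const_regs(insns)
# r2 == 7, so the 'r2 == 3' branch is provably dead and gets pruned:
assert prune_dead_branches([("r1", 0), ("r2", 3)], consts) == [("r1", 0)]
```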
bpf: Add helper and kfunc stack access size resolution
Adds logic to the verifier to resolve the access size for stack slots passed to helpers and kfuncs, completing the v3 preparatory series for static stack liveness analysis. When a helper or kfunc receives a pointer to a stack slot, the verifier now computes the precise byte range being accessed rather than conservatively marking the entire slot as live. This precision is necessary for the upcoming static liveness pass to correctly determine which stack slots need to be initialized before use.
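The precision gain can be illustrated with a toy model (hypothetical names; the verifier tracks stack state in 8-byte slots): given a pointer at stack offset `off` and an access of `size` bytes, only the slots covered by [off, off + size) are marked, not the whole frame.

```python
# Toy model of precise stack-access sizing (hypothetical names): an access
# of `size` bytes at stack offset `off` touches only the 8-byte slots in
# [off, off + size), not the entire frame.

SLOT = 8  # bytes per stack slot

def slots_touched(off, size):
    """Return the set of slot indices covered by a [off, off+size) access."""
    first = off // SLOT
    last = (off + size - 1) // SLOT
    return set(range(first, last + 1))

# A 4-byte read at offset 12 touches only slot 1, not the whole stack:
assert slots_touched(12, 4) == {1}
# A 16-byte access starting at offset 4 spans slots 0-2:
assert slots_touched(4, 16) == {0, 1, 2}
```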
bpf: Simulate branches to prune based on range violations
Fixes a class of verifier invariant violations where register range bounds became inconsistent after branch pruning. When the verifier detects that a register's tracked range is provably violated on a branch, it now simulates taking that branch and prunes the state rather than leaving the inconsistency unresolved. This addresses syzbot-reported crashes caused by invalid register states propagating through the verifier. This is patch 4/6 in Paul Chaignon's v3 series "Fix invariant violations and improve branch detection."
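The pruning idea can be sketched as follows (hypothetical model, not verifier code): when a register's tracked range makes a branch condition provably false, that path is dead and can be pruned instead of carrying an inconsistent state forward.

```python
# Sketch of the pruning idea (hypothetical model): decide the outcome of
# an unsigned 'if reg < imm' branch purely from reg's tracked range.

def branch_lt_outcome(umin, umax, imm):
    """Outcome of 'if reg < imm' given reg's unsigned range [umin, umax]."""
    if umax < imm:
        return "always"
    if umin >= imm:
        return "never"   # dead branch: this path can be pruned
    return "maybe"

# reg in [5, 10]: 'if reg < 3' can never be taken, so the state is pruned.
assert branch_lt_outcome(5, 10, 3) == "never"
assert branch_lt_outcome(5, 10, 20) == "always"
assert branch_lt_outcome(5, 10, 7) == "maybe"
```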
bpf: Support variable offsets for syscall PTR_TO_CTX
Extends the BPF verifier to allow variable (non-constant) offsets when accessing syscall program context pointers of type PTR_TO_CTX. Previously, the verifier rejected any non-zero variable offset into a syscall ctx, requiring programs to use only constant offsets. The patch teaches the verifier to track variable offsets and validate bounds at access time, enabling more flexible syscall BPF programs. This is the first patch in Kumar's v4 seven-patch series.
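The required bounds check can be illustrated with a toy model (hypothetical names and sizes): with a variable offset the verifier can no longer match a single constant field offset, so it must prove that the entire possible range of the access stays inside the context structure.

```python
# Illustrative bounds check (hypothetical model): with a variable offset,
# validate that every reachable offset keeps the access inside the
# context structure.

def ctx_access_ok(var_min, var_max, fixed_off, size, ctx_size):
    """True iff every offset in [var_min, var_max] keeps the access in ctx."""
    lo = fixed_off + var_min
    hi = fixed_off + var_max + size
    return lo >= 0 and hi <= ctx_size

CTX_SIZE = 64  # hypothetical context size for illustration
# An 8-byte access at fixed offset 8 with variable part in [0, 48] fits:
assert ctx_access_ok(0, 48, 8, 8, CTX_SIZE)
# ...but a variable part reaching 56 would run past the end:
assert not ctx_access_ok(0, 56, 8, 8, CTX_SIZE)
```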
bpf: Do not ignore offsets for loads from insn_arrays
Fixes a bug in the BPF loader where non-zero offsets within instruction arrays were silently ignored when resolving map file descriptors and other relocations. The offset field was being discarded, causing incorrect values to be loaded when programs accessed elements beyond the base of an insn_array. This is a correctness fix affecting programs that use offset-based access patterns into instruction arrays, with accompanying selftests added in patch 2/2.
bpf: Refactor dynptr mutability tracking
Refactors how the BPF verifier tracks whether a dynptr is mutable or read-only, consolidating scattered mutability checks into a cleaner abstraction. Previously, mutability was inferred from the dynptr type and call context at each check site; this patch centralizes the logic to reduce duplication and make the invariants easier to audit. The refactor prepares the codebase for future dynptr extensions without changing existing behavior.
bpf: reject short IPv4/IPv6 inputs in bpf_prog_test_run_skb
Adds input length validation to bpf_prog_test_run_skb() to reject buffers shorter than a minimum IPv4 or IPv6 header when the data is marked as IP traffic. Without this check, a malformed short packet could cause the test runner to read memory beyond the supplied buffer. This is a v3 single-patch fix addressing a potential out-of-bounds read in the BPF test-run infrastructure.
libbpf: Fix BTF handling in bpf_program__clone()
Fixes a bug in libbpf's bpf_program__clone() where the cloned program did not correctly inherit or reference the parent's BTF object, leading to use-after-free or incorrect BTF type resolution when the cloned program was loaded. The fix ensures the BTF reference is properly managed across the clone operation. This is a v2 single-patch bug fix for an issue discovered in programs that use program cloning with BTF-dependent features.
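The underlying pattern is plain reference counting; a minimal sketch (hypothetical Python model, not libbpf internals): the clone must take its own reference on the shared BTF object so that freeing the parent cannot leave the clone with a dangling pointer.

```python
# Minimal refcounting sketch (hypothetical model, not libbpf internals):
# a clone takes its own reference on the shared BTF object instead of
# borrowing the parent's.

from dataclasses import dataclass

@dataclass
class Btf:
    refcnt: int = 1
    def get(self):
        self.refcnt += 1
        return self
    def put(self):
        self.refcnt -= 1
        return self.refcnt

@dataclass
class Prog:
    btf: Btf

def clone_prog(p: Prog) -> Prog:
    # The fix in spirit: the clone acquires its own BTF reference.
    return Prog(btf=p.btf.get())

parent = Prog(btf=Btf())
child = clone_prog(parent)
parent.btf.put()                 # parent drops its reference
assert child.btf.refcnt == 1     # clone still holds a valid reference
```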
Generated 2026-04-02T23:24:36Z
The week of March 30 - April 6 saw heavy activity around BPF verifier improvements and calling convention extensions. Yonghong Song iterated through three versions of stack argument support for BPF functions and kfuncs, culminating in v3 with a new BPF_REG_STACK_ARG_BASE register and x86_64 JIT implementation. Alexei Starovoitov continued refining prep patches for static stack liveness analysis, reaching v5 with subprogram topological ordering and constant-register computation passes that will enable smarter stack slot tracking. Additional highlights include Emil Tsalapatis introducing a full arena library and runtime, Xu Kuohai reaching v12 for emitting ENDBR/BTI instructions at indirect JIT jump targets, Chengkaitao refactoring how the verifier dispatches kfunc checks via a new BPF_VERIF_KFUNC_DEF mechanism, and Paul Chaignon fixing verifier invariant violations discovered by syzbot.
bpf: Introduce bpf register BPF_REG_STACK_ARG_BASE
Introduces BPF_REG_STACK_ARG_BASE, a new virtual BPF register serving as the base pointer for stack-allocated function arguments. This is the foundation of the 11-patch v3 series enabling BPF functions and kfuncs to receive arguments that do not fit in the five argument registers (R1-R5). The register is handled specially by both the verifier and x86_64 JIT backend to allocate, track, and validate stack argument slots. The series also includes selftests for BPF-to-BPF calls, kfunc calls, and negative cases for oversized arguments.
bpf: Add helper and kfunc stack access size resolution
The final patch in Alexei Starovoitov's v5 'Prep patches for static stack liveness' series, which adds helper and kfunc stack access size resolution used by upcoming static liveness analysis. The series as a whole sorts subprograms in topological order after check_cfg(), adds bpf_compute_const_regs() and bpf_prune_dead_branches() verifier passes, and moves verifier helpers to a shared header. Together these changes lay the groundwork for tracking which stack slots are actually live, reducing unnecessary spill/fill overhead.
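The topological-ordering step the series adds can be illustrated with a generic DFS post-order over a toy call graph (hypothetical model; the direction the verifier actually uses is not shown in the summary): the ordering below emits every subprogram after all subprograms it calls.

```python
# Illustrative topological ordering of a subprogram call graph (toy model):
# a DFS post-order visit yields an order in which every subprogram appears
# after all of its callees.

def toposort(calls, roots):
    """calls: subprog -> list of callees. Returns a callees-first order."""
    order, seen = [], set()
    def visit(n):
        if n in seen:
            return
        seen.add(n)
        for callee in calls.get(n, []):
            visit(callee)
        order.append(n)  # post-order: emitted after all callees
    for r in roots:
        visit(r)
    return order

# main calls helper_a and helper_b; helper_a also calls helper_b.
calls = {"main": ["helper_a", "helper_b"], "helper_a": ["helper_b"]}
assert toposort(calls, ["main"]) == ["helper_b", "helper_a", "main"]
```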
bpf: Upgrade scalar to PTR_TO_ARENA on arena pointer addition
The first patch in the v3 'Introduce arena library and runtime' series, which teaches the verifier to promote a scalar register to PTR_TO_ARENA when added to an arena pointer. The broader 9-patch series introduces a libarena scaffolding with an ASAN-compatible runtime, a buddy allocator implementation, and comprehensive selftests. This infrastructure enables BPF programs using memory arenas to benefit from proper pointer type tracking and arena-aware address sanitization during testing.
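The promotion rule can be modeled in a few lines (hypothetical sketch; the verifier's real type lattice is much richer): adding a scalar to an arena pointer yields another arena pointer, so the result keeps arena-aware tracking instead of degrading to an unknown scalar.

```python
# Toy model of the promotion rule (hypothetical sketch of the tracked
# types): scalar + arena pointer yields an arena pointer.

def add_types(a, b):
    """Result type of a BPF ALU add over two tracked register types."""
    types = {a, b}
    if types == {"PTR_TO_ARENA", "SCALAR"}:
        return "PTR_TO_ARENA"   # the scalar is upgraded by the addition
    if types == {"SCALAR"}:
        return "SCALAR"
    raise ValueError("combination not covered by this sketch")

assert add_types("PTR_TO_ARENA", "SCALAR") == "PTR_TO_ARENA"
assert add_types("SCALAR", "PTR_TO_ARENA") == "PTR_TO_ARENA"
assert add_types("SCALAR", "SCALAR") == "SCALAR"
```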
bpf, x86: Emit ENDBR for indirect jump targets
Part of Xu Kuohai's v12 series adding Intel CET ENDBR (x86) and ARM64 BTI instructions at indirect JIT jump targets to harden BPF programs against control-flow hijacking. A companion patch adds a helper to detect indirect jump targets during JIT compilation, and another passes bpf_verifier_env to the JIT so it has the information needed to insert these instructions. The series also moves constant blinding out of arch-specific JITs into a shared location to simplify future JIT backends.
bpf: Introduce BTF_SET/ID_SUB and BPF_VERIF_KFUNC_DEF
Introduces BTF_SET/ID_SUB and BPF_VERIF_KFUNC_DEF macros that allow kfunc sets to embed their own verifier check callbacks, replacing the existing flat dispatch table used by the verifier. This refactor makes it easier to add verifier logic for new kfuncs without touching central verifier files. A follow-on patch converts the rbtree kfuncs to use the new mechanism, demonstrating the pattern.
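The dispatch change can be sketched in a hypothetical Python model (the real mechanism is C macros and BTF ID sets): instead of one flat central table, each kfunc set registers its entries together with their own check callbacks, so adding a kfunc never touches the central dispatch code.

```python
# Sketch of the dispatch change (hypothetical Python model, not the
# kernel's macro machinery): each kfunc set registers its own entries
# along with their verifier check callbacks.

registry = {}

def register_kfunc_set(name, kfuncs):
    """kfuncs: dict of kfunc name -> check callback, owned by the set."""
    for fn, check in kfuncs.items():
        registry[fn] = check

def verify_kfunc_call(fn, arg):
    return registry[fn](arg)   # dispatch to the set-provided callback

# The rbtree set ships its own check; central dispatch code is untouched.
register_kfunc_set("rbtree", {
    "bpf_rbtree_add": lambda arg: arg is not None,
})
assert verify_kfunc_call("bpf_rbtree_add", object())
assert not verify_kfunc_call("bpf_rbtree_add", None)
```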
bpf: Refactor reg_bounds_sanity_check
The first patch in Paul Chaignon's v3 'Fix invariant violations and improve branch detection' series, which addresses syzbot-reported verifier invariant violations. The series refactors reg_bounds_sanity_check, adds early exit for invalid reg_bounds_sync inputs, simulates branches to prune paths with range violations, and removes incorrect invariant-violation flags from selftests. These fixes improve verifier correctness when dealing with edge cases in register range tracking.
libbpf: Auto-upgrade kprobes to multi-kprobes when supported
Part of an RFC v3 series that transparently upgrades single kprobe and uprobe attachments to their multi-kprobe/multi-uprobe equivalents when the kernel supports them. A new FEAT_KPROBE_MULTI_LINK feature probe is added to libbpf to detect kernel support at runtime. This allows BPF programs written against the single-attach API to silently benefit from the performance improvements of multi-attach without any code changes.
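The attach-path selection can be illustrated as follows (hypothetical Python model of the behavior described above, not libbpf's API): probe once for multi-kprobe link support, cache the result, and transparently route single-kprobe attachments through the multi path when available.

```python
# Illustrative attach-path selection (hypothetical model): probe a kernel
# feature once, cache the answer, and pick the multi-attach path when the
# kernel supports it -- with no change to the caller-visible API.

_feature_cache = {}

def kernel_supports(feature, probe):
    """Probe a kernel feature once and cache the result."""
    if feature not in _feature_cache:
        _feature_cache[feature] = probe()
    return _feature_cache[feature]

def attach_kprobe(symbol, probe_multi_support):
    if kernel_supports("kprobe_multi_link", probe_multi_support):
        return ("multi", [symbol])   # upgraded path, same user-facing call
    return ("single", symbol)

# A kernel whose probe succeeds gets the multi-kprobe link transparently:
assert attach_kprobe("do_sys_open", lambda: True) == ("multi", ["do_sys_open"])
```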
bpf: Do not ignore offsets for loads from insn_arrays
Fixes a bug where the BPF verifier ignored non-zero offsets when loading values from instruction arrays, causing incorrect value reads. The fix ensures the offset is properly factored into the load address computation in the verifier's constant propagation path. A companion patch adds regression tests covering a variety of offset values to prevent recurrence.
pull-request: bpf-next 2026-04-01
Martin KaFai Lau's bpf-next pull request for April 1, 2026, consolidating the accumulated bpf-next changes for submission to Linus's tree. Pull requests like this mark a significant milestone in the development cycle, bundling verifier improvements, new helpers, libbpf changes, and selftests accumulated since the previous pull.
bpf: Refactor dynptr mutability tracking
Refactors how the BPF verifier tracks dynptr mutability, consolidating the immutability flag into the dynptr state representation for cleaner handling. This v2 patch simplifies the code paths that check whether a dynptr may be written through, reducing the risk of correctness issues when new dynptr types are added. The change is internal to the verifier with no user-visible behavior change.
Generated 2026-04-06T10:13:03Z
No monthly summaries yet. Check back on the 1st.