Linux kernel patch summaries, generated daily
Yonghong Song submitted v3 of his 24-patch series adding stack argument support for BPF functions and kfuncs, superseding the v2 sent earlier in the week. The series enables BPF programs to pass arguments to subprograms and kfuncs via the stack when a call needs more arguments than the five argument registers (r1-r5) can carry, aligning the BPF calling convention with native ABI practice. It spans verifier liveness and precision analysis, a new r11-based instruction encoding, x86-64 and arm64 JIT backends, and extensive selftests.
bpf: Support stack arguments for bpf functions
The central verifier patch enabling BPF subprograms to accept arguments beyond register r5 by passing them on an auxiliary stack frame. The verifier is extended to recognise the new stack-based argument slots, validate their types, and propagate liveness information through calls. This removes the hard five-argument limit for BPF-to-BPF calls.
bpf: Add precision marking and backtracking for stack argument slots
Extends the verifier's precision backtracking engine to track stack argument slots in addition to registers. Precision marking is required for state pruning to be correct when programs use stack arguments, as the verifier must know which stack slots carry values that affect control flow. Without this, the verifier could incorrectly prune states and miss safety violations.
bpf: Extend liveness analysis to track stack argument slots
Teaches the verifier's liveness analysis to treat stack argument slots as live across a call site, ensuring that the writes to those slots are not incorrectly classified as dead stores. This is necessary for the verifier to correctly determine which stack writes must be preserved before a call instruction. The patch also updates the jmp_history mechanism to record stack-argument frame information.
Introduces support for a new pseudo-register r11 used as the base for stack argument addressing in BPF instructions. Since BPF's ISA did not previously expose r11, the verifier and disassembler are updated to accept and display r11-relative memory operands. This encoding allows the JIT to reliably distinguish stack argument accesses from regular frame-pointer-relative accesses.
bpf: Support stack arguments for kfunc calls
Extends the new stack argument convention to kfunc call sites, allowing kernel functions registered as kfuncs to receive more than five typed arguments from BPF programs. The verifier validates the type and alignment of stack-passed arguments against the kfunc's BTF signature. This is particularly useful for kfuncs with struct-typed or numerous parameters.
bpf,x86: Implement JIT support for stack arguments
Implements x86-64 JIT code generation for the new stack argument passing convention. The JIT emits r11-based MOV instructions to write arguments into the callee's stack frame before the call instruction, following the System V AMD64 ABI spill area layout. This patch makes the feature functional on x86-64, the primary development architecture.
bpf, arm64: Add JIT support for stack arguments
Adds arm64 JIT backend support for the new stack argument calling convention, mirroring the x86-64 implementation. The arm64 JIT must also remap BPF_REG_0 from x7 to x8 (addressed in a companion patch) to free x7 for use as the auxiliary stack frame pointer. With this patch, the feature gains support on both major 64-bit architectures targeted by the series.
Generated 2026-05-12T10:00:00Z
No patches were submitted to the bpf mailing list during this period.
Generated 2026-05-12T10:00:00Z
Activity today was limited to a single series from Kuniyuki Iwashima introducing BPF_SOCK_OPS_RCVLOWAT_CB, a new SOCK_OPS callback enabling BPF programs to dynamically control the TCP receive low-watermark threshold. The series adds a supporting kfunc to write back to sk_rcvlowat, extends bpf_skb_load_bytes() access to the new callback context, and includes selftest coverage.
bpf: tcp: Introduce BPF_SOCK_OPS_RCVLOWAT_CB.
Introduces the new BPF_SOCK_OPS_RCVLOWAT_CB operation in the SOCK_OPS framework, fired when TCP needs to determine a socket's effective receive low watermark. BPF programs attached to this op can inspect packet data and socket state to compute an appropriate threshold, enabling TCP AutoLOWAT-style behaviour. The patch defines the new op constant and wires it into the SOCK_OPS dispatch path.
bpf: tcp: Support bpf_skb_load_bytes() for BPF_SOCK_OPS_RCVLOWAT_CB.
Extends the bpf_skb_load_bytes() helper to be callable within the new BPF_SOCK_OPS_RCVLOWAT_CB context. This allows BPF programs running under the callback to inspect the contents of the socket receive buffer, which is necessary for making data-driven decisions about the appropriate low watermark. The change adds the new op to the set of SOCK_OPS operations that have a valid skb pointer.
bpf: tcp: Add kfunc to adjust sk->sk_rcvlowat.
Adds a kfunc that BPF programs can call within BPF_SOCK_OPS_RCVLOWAT_CB to explicitly set sk_rcvlowat on the current socket. Using a kfunc for the write-back rather than the SOCK_OPS return value avoids convention conflicts and keeps the API clean. The patch includes proper BTF type annotations and checks that the kfunc is only callable in the correct callback context.
bpf: tcp: Factorise bpf_skops_established().
Refactors the internal bpf_skops_established() function to extract shared logic needed by the new rcvlowat hook. This is a preparatory cleanup that avoids code duplication between the existing established-state SOCK_OPS dispatch and the new BPF_SOCK_OPS_RCVLOWAT_CB dispatch site. No functional change is intended.
bpf: tcp: Add SOCK_OPS rcvlowat hook.
The culminating patch of the series, this hooks BPF_SOCK_OPS_RCVLOWAT_CB into the TCP data-ready path so the callback fires at the correct moment. The result returned or written back by the BPF program is then applied as the socket's effective receive low watermark. Together with the preceding patches, this completes the end-to-end implementation of BPF-controlled TCP AutoLOWAT.
Generated 2026-05-12T10:00:00Z
Two major patch series dominated the day's activity. Yonghong Song submitted v2 of a 23-patch series adding stack argument support for BPF functions and kfuncs, touching the verifier, JIT backends for x86 and arm64, and liveness/precision analysis. Kuniyuki Iwashima proposed a new BPF_SOCK_OPS_RCVLOWAT_CB callback enabling BPF programs to dynamically adjust TCP receive low watermarks via a new kfunc. Justin Suess also posted a fix to offload kptr destructors running from NMI context to avoid potential deadlocks.
bpf: tcp: Introduce BPF_SOCK_OPS_RCVLOWAT_CB.
Introduces a new BPF_SOCK_OPS_RCVLOWAT_CB callback in the SOCK_OPS framework, invoked when the kernel needs to determine the effective TCP receive low watermark (sk_rcvlowat). This enables BPF programs to intercept and override the receive threshold on a per-socket basis, which is a building block for TCP AutoLOWAT. The patch wires the new op into the existing SOCK_OPS dispatch path and defines the callback flag.
bpf: tcp: Add kfunc to adjust sk->sk_rcvlowat.
Adds a new kfunc that BPF programs can call within the BPF_SOCK_OPS_RCVLOWAT_CB context to set the socket's sk_rcvlowat field. By exposing this as a kfunc rather than a return value, the API is extensible and avoids ambiguity with other SOCK_OPS return conventions. The patch includes appropriate BTF annotations and guards against misuse outside the designated callback.
bpf: tcp: Add SOCK_OPS rcvlowat hook.
Hooks the BPF_SOCK_OPS_RCVLOWAT_CB into the TCP stack so that the callback is invoked at the right point in data-ready processing. The hook calls into the BPF SOCK_OPS dispatch machinery and applies the result to update the socket's effective receive low watermark. This completes the core implementation of TCP AutoLOWAT support in the SOCK_OPS framework.
bpf: Support stack arguments for bpf functions
Core patch in Yonghong Song's 23-patch v2 series that enables BPF subprograms to receive arguments passed on the stack, moving beyond the current five-register limit. The verifier is extended to understand the new stack-argument slots, tracking their types and liveness. This is a significant capability improvement allowing BPF programs to call subprograms with more than five parameters.
bpf: Support stack arguments for kfunc calls
Extends the stack argument calling convention to kfunc calls as well as BPF-to-BPF calls. The verifier is taught to validate argument types passed via the stack when invoking kernel functions registered as kfuncs. This is important for kfuncs with complex or numerous parameters that currently cannot be expressed within the five-register limit.
bpf,x86: Implement JIT support for stack arguments
Implements the x86-64 JIT backend changes required to emit correct code for stack-based argument passing in BPF programs. The JIT must set up an auxiliary stack frame (using r11 as a frame pointer) and copy argument values to the expected offsets before a call. This is the first architecture JIT to gain stack argument support in this series.
bpf: Offload kptr destructors that run from NMI
Fixes a potential deadlock when a BPF kptr destructor is invoked from NMI context, where taking locks required for safe reference counting is not possible. The fix offloads such destructors to a work queue so they run in a sleepable context. The companion patch adds an NMI exerciser selftest to verify the fix holds under stress.
Generated 2026-05-12T10:00:00Z
Two patch series landed on bpf-next today. Amery Hung posted v4 of a 12-patch series refactoring verifier object relationship tracking, unifying dynptr and referenced-object handling while fixing a use-after-free bug in dynptr operations. Yazhou Tang posted v10 of a 3-patch series fixing an out-of-bounds read and s16 truncation bug in bpf_patch_call_args() for large bpf-to-bpf call offsets.
bpf: Refactor object relationship tracking and fix dynptr UAF bug
This is the core patch in a 12-patch series refactoring how the BPF verifier tracks relationships between objects. It rewrites the parent-child relationship model for dynptrs, slices, and referenced objects under a unified representation, and simultaneously fixes a use-after-free bug where a dynptr could be accessed after its underlying object was freed. The fix enforces stricter lifetime rules in the verifier so that deriving a dynptr from a freed resource is correctly rejected at verification time. This is a significant correctness improvement for programs that use dynptrs backed by kernel objects.
bpf: Unify dynptr handling in the verifier
Consolidates dynptr state tracking in the verifier by routing all dynptr-related checks and state propagation through a single code path, removing previously scattered handling of dynptr metadata. This patch is a prerequisite for the later patches in the series that generalize object relationship tracking. The change improves maintainability and reduces the risk of subtle inconsistencies between different dynptr handling sites.
bpf: Unify referenced object tracking in verifier
Merges the per-type tracking of referenced objects (dynptrs, slices, kptrs) into a single unified mechanism within the verifier. Previously, each object type carried partially redundant and inconsistent tracking data structures. The unified approach simplifies the verifier's internal state and makes it easier to extend object tracking for new reference types in the future.
bpf: Unify release handling for helpers and kfuncs
Merges the release-handling paths for BPF helpers and kfuncs in the verifier, which previously maintained separate but largely duplicated logic for releasing acquired references. Unifying these paths ensures consistent semantics regardless of whether a reference is released via a helper or a kfunc. This is part of the broader series to consolidate verifier object tracking infrastructure.
bpf: Fix out-of-bounds read in bpf_patch_call_args()
Fixes an out-of-bounds read in bpf_patch_call_args(), the function responsible for rewriting bpf-to-bpf call instruction offsets during program loading. The bug could be triggered when patching programs with calls positioned near the end of the instruction array, causing the function to read beyond the allocated buffer. This is v10 of the fix series, reflecting a lengthy review process to ensure the bounds check is correct under all patching scenarios.
bpf: Fix s16 truncation for large bpf-to-bpf call offsets
Fixes an s16 truncation bug in bpf_patch_call_args() where call offsets exceeding the range of a signed 16-bit integer were silently truncated, producing incorrect jump targets in patched programs. The fix widens the internal offset representation to correctly handle large programs where the distance between caller and callee exceeds 32767 instructions. This patch pairs with the out-of-bounds read fix in the same series.
Generated 2026-05-08T00:00:00Z
Two fix series landed targeting correctness and safety in core BPF infrastructure. Yazhou Tang's v10 series addresses an out-of-bounds read and s16 call offset truncation bug in bpf_patch_call_args(), preventing memory corruption when BPF programs use large bpf-to-bpf call offsets. Justin Suess's v2 series fixes a deadlock that can occur when kptr destructors are triggered from NMI context by offloading them to a safe workqueue path.
bpf: Fix out-of-bounds read in bpf_patch_call_args()
bpf_patch_call_args() is responsible for rewriting call instructions in BPF programs when functions are relocated during loading. This patch fixes an out-of-bounds memory read that occurs in that function when processing bpf-to-bpf calls with large offsets. The bug arises because the code reads beyond the bounds of the instruction array before the offset is validated. At v10, this fix has been through significant refinement and is paired with a companion patch correcting the underlying s16 truncation issue.
bpf: Fix s16 truncation for large bpf-to-bpf call offsets
When a bpf-to-bpf call offset is large enough to overflow a signed 16-bit integer, the value gets silently truncated during patching, causing the call instruction to jump to an incorrect address. This patch fixes the truncation by ensuring offsets are handled with the correct width throughout the call patching path. The bug could cause silent misbehavior in complex BPF programs with many subprograms spread far apart in the instruction stream. A selftest (patch 3/3) accompanies this fix to cover the large-offset case.
bpf: Offload kptr destructors that run from NMI
BPF kptr destructors can be invoked from NMI (non-maskable interrupt) context, for example via perf-event-attached programs, but acquiring the locks required for cleanup is unsafe in that context and can deadlock the kernel. This patch resolves the issue by detecting the NMI case and offloading the destructor call to a workqueue so it runs in a safe, preemptible context. The fix preserves correct lifecycle management for kptrs while eliminating the deadlock risk. A companion selftest (patch 2/2) exercises the NMI destructor path to prevent regressions.
Generated 2026-05-07T00:00:00Z
A quiet day on bpf-next, with a single two-patch series from Matt Bobrowski. The v2 series enforces VFS constraints on the xattr BPF kfuncs and pairs that change with negative selftests that verify the error paths.
bpf: enforce VFS constraints for xattr related BPF kfuncs
This v2 patch enforces standard VFS permission and existence checks inside the BPF xattr kfuncs bpf_get_dentry_xattr, bpf_set_dentry_xattr, and bpf_remove_dentry_xattr. Without these guards, BPF LSM programs could bypass the capability checks and dentry validity requirements that the normal VFS xattr code path enforces, creating a privilege inconsistency. The fix aligns kfunc semantics with userspace-visible VFS behavior, closing a potential privilege-related gap for LSM-heavy deployments. This is the v2 revision incorporating feedback from the initial posting.
selftests/bpf: add new negative tests for xattr related BPF kfuncs
This companion patch adds a set of negative test cases exercising the VFS constraint enforcement introduced in the first patch of the series. The tests attempt xattr kfunc calls on negative dentries, on filesystems that do not support extended attributes, and with invalid capability state, confirming that the kernel returns the expected error codes in each case. Having explicit negative coverage prevents future regressions from quietly re-opening the constraint bypass.
Generated 2026-05-06T00:00:00Z
The May 3-4 window saw two active series. Matt Bobrowski posted a v2 two-patch series enforcing VFS constraints for xattr-related BPF kfuncs, tightening permission and existence checks that were previously bypassable from BPF context. Kuan-Wei Chiu followed up with a v2 of the initial BPF JIT compiler for the m68k architecture, bringing Motorola 68000-series CPUs into the JIT-capable BPF tier.
bpf: enforce VFS constraints for xattr related BPF kfuncs
This patch (v2, 1/2) enforces standard VFS permission and existence constraints inside the xattr BPF kfuncs bpf_get_dentry_xattr, bpf_set_dentry_xattr, and bpf_remove_dentry_xattr. Previously, BPF programs could bypass the checks that the regular VFS xattr path imposes, such as requiring a positive dentry and appropriate capabilities. The fix aligns kfunc behavior with what a userspace caller would experience, closing a privilege-related inconsistency in LSM hook programs. It is the companion to the negative-dentry crash fix posted earlier in the week.
selftests/bpf: add new negative tests for xattr related BPF kfuncs
This patch (v2, 2/2) adds a set of negative test cases that verify the VFS constraint enforcement introduced in the companion patch. The tests exercise scenarios such as operating on negative dentries, missing capability bits, and invalid xattr name prefixes to confirm the kfuncs now return the expected error codes. Covering these failure paths in selftests ensures regressions will be caught before the series lands in the tree.
m68k, bpf: Add initial BPF JIT compiler support
This v2 patch introduces a BPF JIT compiler for the Motorola m68k architecture, making m68k the newest architecture to gain native BPF execution instead of falling back to the interpreter. The JIT covers the core BPF instruction set including ALU ops, memory loads and stores, branching, and function calls, mapping them to m68k assembly. The v2 addresses review feedback from the initial posting, primarily around instruction selection and register allocation details. This expands BPF's JIT footprint to an architecture frequently used in embedded and legacy systems.
Generated 2026-05-06T00:00:00Z
No patches were submitted to the bpf mailing list during this period.
Generated 2026-05-04T00:00:00Z
May 1 was a quiet day on the bpf-next mailing list with just two series submitted. Florian Lehner posted v3 of LINK_DETACH support for perf links, and Hadrien Patte submitted two revisions of a bpftool build fix to resolve libcrypto link flags via pkg-config.
bpf: Add LINK_DETACH support for perf link
Adds LINK_DETACH semantics to perf links, enabling a perf link to be detached from its underlying perf event without closing the link file descriptor. This mirrors detach behavior already available for other BPF link types and is useful for programs that need to temporarily suspend tracing without fully tearing down associated state. The v3 series also includes a selftest that exercises the detach path for perf links and verifies correct behavior after detachment.
bpftool: Resolve libcrypto link flags via pkg-config
Switches bpftool's libcrypto linkage from a hardcoded -lcrypto flag to a pkg-config query, improving portability across distributions and build environments where OpenSSL may be installed in non-standard locations. This v2 incorporates review feedback from the initial submission posted earlier the same day. The fix matters for downstream packagers who build bpftool against system-provided or vendored OpenSSL installations where pkg-config is the canonical way to obtain library flags.
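A sketch of the kind of Makefile change involved (variable names are assumed, not taken from the patch): query pkg-config for the flags, with a fallback to the old hardcoded flag where pkg-config or the .pc file is unavailable.

```make
# Resolve libcrypto's link flags via pkg-config; fall back to the old
# hardcoded -lcrypto when pkg-config cannot answer.
LIBCRYPTO_LIBS := $(shell pkg-config --libs libcrypto 2>/dev/null || echo -lcrypto)

bpftool: $(OBJS)
	$(CC) $(LDFLAGS) -o $@ $^ $(LIBCRYPTO_LIBS)
```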
Generated 2026-05-02T10:30:00Z
April 30 saw heavy activity around the ongoing selftests/bpf build robustness series from Ricardo B. Marlière, which reached its eleventh revision with both v10 and v11 landing on the same day. Notable companion patches include Paul Chaignon's verifier enhancement to print per-subprogram instruction counts, and a crash fix from Matt Bobrowski for negative dentry handling in xattr kfuncs.
selftests/bpf: Add BPF_STRICT_BUILD toggle
First patch of an 11-part series (now at v11) reworking the BPF selftests build system to gracefully handle partial or misconfigured kernel configurations. This patch introduces a BPF_STRICT_BUILD Makefile toggle that, when disabled, allows the test suite to build and run even when some BPF features or kernel modules are absent. The series as a whole adds skip logic for missing compiled objects, tolerance for benchmark and skeleton generation failures, and fixes KDIR handling for distro kernels built with O=. This is particularly valuable for CI environments and downstream packagers who build BPF selftests against non-default kernel configs.
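A hypothetical sketch of the toggle pattern (the real Makefile logic certainly differs in detail): with the strict mode on, any build failure is fatal; with it off, a failing test object is skipped rather than aborting the whole suite.

```make
# Illustrative only -- names and rules assumed, not from the series.
BPF_STRICT_BUILD ?= 0

ifeq ($(BPF_STRICT_BUILD),1)
$(OUTPUT)/%.o: %.c
	$(CC) $(CFLAGS) -c $< -o $@
else
$(OUTPUT)/%.o: %.c
	-$(CC) $(CFLAGS) -c $< -o $@   # leading '-' makes failure non-fatal
endif
```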
bpf: Print breakdown of insns processed by subprogs
Extends the BPF verifier's log output to print a per-subprogram breakdown of instruction counts processed, rather than only a single aggregate total. Previously, developers debugging large BPF programs with multiple subprograms had no direct way to identify which subprogram was consuming most of the verifier budget. This v3 addresses reviewer feedback on the log format and is accompanied by a selftest that validates the new per-subprogram lines in the verifier log.
bpf: fix crash in bpf_[set|remove]_dentry_xattr for negative dentries
Fixes a null pointer dereference crash in the bpf_set_dentry_xattr and bpf_remove_dentry_xattr kfuncs when called with a negative dentry, i.e., one that does not correspond to an existing filesystem object. Both functions previously assumed the dentry had an associated inode and would crash when that assumption was violated. This v2 adds an early guard to reject negative dentries, preventing BPF LSM programs from triggering the crash when walking filesystem paths that include non-existent entries.
Generated 2026-05-02T10:30:00Z
This daily window's highlight is a new BPF JIT compiler for the m68k architecture, alongside fixes and new 32-bit atomic support for the RISC-V 32-bit JIT. The verifier gained a useful diagnostic improvement to print per-subprogram instruction counts, and a crash in the BPF LSM dentry xattr helpers for negative dentries was corrected. A large selftests series (v9) to allow partial builds across varying kernel configs also landed.
m68k, bpf: Add initial BPF JIT compiler support
Adds an initial BPF JIT compiler for the m68k architecture, bringing JIT-accelerated BPF execution to this historically interpreter-only platform. The implementation covers the core BPF instruction set, translating BPF bytecode into native m68k machine code. This is significant because JIT compilation greatly reduces BPF program overhead compared to the interpreter path. It extends the set of architectures with BPF JIT support, which has grown substantially in recent kernel cycles.
riscv, bpf: Fix support for BPF_SDIV and BPF_SMOD in RV32 JIT
Fixes handling of signed division (BPF_SDIV) and signed modulo (BPF_SMOD) in the RISC-V 32-bit BPF JIT, correcting incorrect results for negative operands. This is the first patch in a three-part series that also fixes BPF_MOVSX sign-extension support and adds 32-bit atomic operations to the RV32 JIT. Together, the series brings the RV32 JIT closer to feature parity with its 64-bit counterpart. Correct signed arithmetic is essential for BPF programs that perform integer division on potentially negative values.
bpf: Print breakdown of insns processed by subprogs
Extends the BPF verifier's log output to include a per-subprogram breakdown of the instruction count processed during verification, rather than just reporting the aggregate total. This makes it much easier to identify which subprogram is responsible for hitting verifier complexity limits in large BPF programs composed of multiple subprograms. The companion patch adds a selftest exercising this new output format. This is a diagnostic quality-of-life improvement that helps developers debug complex BPF programs.
bpf: fix crash in bpf_[set|remove]_dentry_xattr for negative dentries
Fixes a kernel crash in the BPF LSM helpers bpf_set_dentry_xattr and bpf_remove_dentry_xattr when they are called with a negative dentry (one that does not resolve to an existing inode). Without this fix, operating on a negative dentry would cause a NULL pointer dereference. This is the second version of the fix, refining the approach from v1 submitted the previous day. The fix adds a proper check for the negative dentry case and returns an appropriate error code.
bpf: Fix out-of-bounds read in bpf_patch_call_args()
Fixes an out-of-bounds read in bpf_patch_call_args() that can occur when patching BPF-to-BPF call instructions during program loading. This is the first patch in a v9 three-part series; the second patch addresses a related s16 truncation bug for large call offsets that could produce incorrect branch targets. Together these fixes prevent memory safety issues in the BPF program loading path. The series also includes a selftest to exercise the large-offset call scenario.
selftests/bpf: Add BPF_STRICT_BUILD toggle
Introduces a BPF_STRICT_BUILD Makefile toggle as the first step in a large 11-patch series (v9) to allow the BPF selftests to build and run gracefully under partial kernel configurations. Without this work, missing kernel features (such as CONFIG options not selected) cause the entire selftest build to fail, making it difficult to run any tests on non-standard kernels. Subsequent patches in the series tolerate BPF/skeleton generation failures, test file compilation errors, benchmark build failures, and missing install files. This is important for downstream distributions and CI environments that build kernels with non-default configs.
Generated 2026-04-30T10:57:52Z
The bpf-next mailing list for April 28-29 featured correctness fixes in core BPF infrastructure alongside testing improvements. Yazhou Tang posted a v8 series fixing an out-of-bounds read and s16 truncation bug in `bpf_patch_call_args()` for programs with large bpf-to-bpf call offsets, while Justin Suess addressed an NMI deadlock in referenced kptr destructors. Paul Chaignon improved verifier diagnostics by printing per-subprogram instruction counts, and Ricardo B. Marlière continued a large selftests series enabling BPF tests to tolerate partial kernel builds.
bpf: Fix out-of-bounds read in bpf_patch_call_args()
Fixes an out-of-bounds read in `bpf_patch_call_args()` that can occur when patching bpf-to-bpf calls in programs with a large instruction count. The function previously lacked a bounds check before reading into the instruction buffer, creating a potential memory safety violation in the BPF core. This is patch 1 of a v8 three-patch series that also addresses a related s16 truncation bug for call offsets exceeding the 16-bit signed range. The series has gone through eight revisions reflecting the careful scrutiny applied to verifier-adjacent bug fixes.
bpf: Fix s16 truncation for large bpf-to-bpf call offsets
Addresses silent truncation of bpf-to-bpf call offsets when the relative distance between subprograms exceeds the range of a signed 16-bit integer. The offset was previously stored as s16 without range validation, causing the JIT to encode incorrect call targets in programs with many subprograms spread far apart in the instruction stream. This patch widens the representation and adds an explicit range check before encoding the call offset. It accompanies the out-of-bounds read fix submitted in the same v8 series by Yazhou Tang.
bpf: Limit fields used in btf_record_equal comparisons
Tightens the `btf_record_equal()` comparison to only consider the fields relevant for determining whether two BTF records are structurally equivalent. Comparing unnecessary fields can cause false mismatches or mask actual differences, and in this series the change is a prerequisite for safely restructuring BTF teardown. This is patch 1 of a 4-patch series titled "bpf: Fix NMI deadlock in referenced kptr destructors". The series also converts BTF teardown to rcu_work and fixes the kptr destructor deadlock in NMI context.
bpf: Fix deadlock in kptr dtor in nmi
Fixes a deadlock that arises when a referenced kptr destructor is invoked from NMI context on an SMP system. NMI handlers cannot safely acquire certain sleeping or spinlocks that the normal kptr destruction path takes, leading to a hard deadlock. The fix defers lock-requiring cleanup out of the NMI-safe hot path, relying on workqueue-based deferred execution introduced earlier in the series. A selftest reproducer accompanies the fix in patch 4/4.
bpf: Print breakdown of insns processed by subprogs
Extends the BPF verifier log to emit a per-subprogram breakdown of instructions processed during verification, in addition to the existing total count. Currently it is difficult to identify which subprogram dominates verification complexity when working with large BPF programs that contain many subprograms. The new output gives developers a direct signal for where to focus optimization efforts. This is the v2 revision of the series, paired with a selftest in patch 2/2.
selftests/bpf: Add BPF_STRICT_BUILD toggle
Introduces a `BPF_STRICT_BUILD` Makefile variable for the BPF selftests as the first step in an 11-patch v8 series aimed at tolerating partial kernel builds. When the toggle is absent, individual test build failures are treated as non-fatal, allowing the suite to compile and run whatever subset of tests the current kernel config supports. This is particularly valuable on distribution kernels and CI systems that do not enable every BPF feature. The series goes on to handle benchmark failures, skeleton generation errors, missing install files, and cross-test weak symbol definitions.
xskmap: reject TX-only AF_XDP sockets
Adds a validation check to `xskmap` that rejects AF_XDP sockets configured for TX-only operation at map update time. TX-only sockets have no receive queue, so placing them in an xskmap entry that the kernel uses for packet reception can cause a null pointer dereference on the RX path. The fix enforces the constraint early during `BPF_MAP_UPDATE_ELEM`, returning an error before a bad socket can be installed. This is the third revision of the patch.
Generated 2026-04-30T10:21:05Z
This period's bpf-next activity centered on a v10 series extending the BPF linked-list API with new kfuncs (bpf_list_del, bpf_list_add, bpf_list_is_first/last/empty), and a v2 series adding arm64 JIT support for stack arguments by remapping registers and wiring in the AArch64 calling convention. The day also brought a new XDP load-balancer benchmark suite, a bpf_init_inode_xattr kfunc for atomic inode security labeling, syncookie statistics fixes, and build-failure patches addressing undefined symbol references from recent cnum changes.
bpf: Introduce the bpf_list_del kfunc.
Adds bpf_list_del, a new kfunc that removes a node from a BPF linked list given a direct pointer to the node rather than requiring callers to manage the list head. This is the core new primitive in the v10 "Extend the bpf_list family of APIs" series, which has been iterated extensively to handle ownership semantics and verifier integration correctly. The kfunc plugs into the existing BPF ownership model so the verifier can statically reason about node membership and prevent double-removal bugs.
bpf: add bpf_list_is_first/last/empty kfuncs
Introduces three new introspection kfuncs—bpf_list_is_first, bpf_list_is_last, and bpf_list_empty—that let BPF programs query the position and emptiness of nodes in a linked list without full traversal. These predicates complement the bpf_list_del and bpf_list_add kfuncs added earlier in the same series, rounding out the mid-list manipulation API. Together the series enables BPF programs to implement significantly more expressive in-kernel data structures.
bpf, arm64: Add JIT support for stack arguments
Extends the arm64 BPF JIT to spill function arguments onto the stack when a call exceeds the number of available argument registers, which is required for kfuncs that take more arguments than AArch64 registers can hold. The patch works in tandem with an earlier change in the series that remaps BPF_REG_0 from x7 to x8 to align with the AArch64 indirect result location register convention. A companion selftest patch validates stack-argument passing behavior on arm64.
selftests/bpf: Add XDP load-balancer BPF program
Adds the core BPF XDP program for a new load-balancer benchmark suite intended to measure and track XDP forwarding performance across architectures and kernel versions. The seven-patch series also contributes a batch-timing library, a nop-baseline benchmark for overhead calibration, common definitions, a userspace benchmark driver, and a shell script for automated benchmark runs. The suite is designed for head-to-head regression testing rather than absolute throughput claims.
bpf: add bpf_init_inode_xattr kfunc for atomic inode labeling
Introduces bpf_init_inode_xattr, a new kfunc that allows BPF LSM programs to atomically set an xattr on a newly created inode during the inode_init_security hook, mirroring how in-kernel LSMs like SELinux and Smack perform mandatory access control labeling at creation time. Doing this atomically at initialization avoids the TOCTOU race that would result from setting xattrs after the inode is already visible. A companion selftest verifies the kfunc's behavior under the BPF LSM framework.
net: add missing syncookie statistics for BPF custom syncookies
Fixes a gap where TCP syncookie-sent and syncookie-received statistics counters are not incremented when BPF programs implement custom syncookie logic via the BPF sock_ops hooks. Without these increments, standard monitoring tools and kernel selftests cannot detect or verify that the custom syncookie path is active. The v3 series includes a selftest that validates the counter values after the fix is applied.
Fix undefined symbol references for module build post cnum changes
Fixes a module build breakage where symbols used by modular BPF components became undefined after recent cnum (circular number) infrastructure changes in the BPF tree. The patch adds EXPORT_SYMBOL annotations for the affected symbols to restore out-of-tree and modular kernel build compatibility. This follows a report from Thierry Reding that linux-next failed to build after pulling the bpf-next tree.
Generated 2026-04-29T10:09:27Z
April 26-27 brought three distinct series to bpf-next. The largest submission was Emil Tsalapatis's v9 of the libarena library, adding a buddy allocator and ASAN runtime for BPF arena-backed memory. David Windsor introduced a new kfunc for atomically labeling inodes with xattrs, and Jiayuan Chen fixed missing syncookie statistics for BPF custom syncookie implementations.
bpf: add bpf_init_inode_xattr kfunc for atomic inode labeling
Introduces bpf_init_inode_xattr(), a new kfunc that allows BPF programs to atomically set an xattr on an inode during its initialization phase, before the inode is visible to other processes. This is intended for LSM-based security labeling workflows where a label must be present before the first access. The companion patch (2/2) adds selftests covering the kfunc under various inode types.
selftests/bpf: add tests for bpf_init_inode_xattr kfunc
Adds a selftest suite for the bpf_init_inode_xattr kfunc introduced in patch 1/2. The tests attach a BPF LSM program to the inode_init_security hook and verify that the xattr is correctly set and readable after inode creation. Covers both success paths and error conditions such as oversized values or missing permissions.
net: add missing syncookie statistics for BPF custom syncookies
Fixes a gap where BPF programs using the custom syncookie mechanism (via the bpf_tcp_raw_gen_syncookie_ipv4/ipv6 kfuncs) did not increment the standard SYN cookie counters visible via netstat. This makes BPF-handled syncookies observable through the same monitoring interfaces as kernel-native syncookies. The v3 series also includes a selftest that verifies the statistics are updated correctly.
selftests/bpf: Add basic libarena scaffolding
Part of the v9 libarena series, this patch establishes the build scaffolding and test harness for a standalone user-space library that manages BPF arena memory. The library provides a C-callable allocator backed by BPF arena pages, letting user-space and BPF programs share memory without copying. Subsequent patches in the series add a buddy allocator and ASAN instrumentation for catching out-of-bounds accesses in arena-backed allocations.
selftests/bpf: Add buddy allocator for libarena
Adds a power-of-two buddy allocator to libarena so that BPF programs can perform dynamic memory allocation within a BPF arena. The buddy allocator supports split and merge operations for efficient reuse of arena pages without external fragmentation. This enables BPF programs that need variable-sized allocations — such as per-connection state blocks — to manage their own memory without resorting to fixed-size map elements.
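The split/merge mechanics can be sketched in plain C: a minimal power-of-two buddy allocator over a 4 KiB arena, with free blocks threaded through their own storage and a block's buddy found by XOR-ing its offset with its size. This is a sketch of the algorithm only, under invented names; it is not libarena's implementation.

```c
#include <stdint.h>

#define MIN_ORDER 4                       /* smallest block: 16 bytes */
#define MAX_ORDER 12                      /* whole arena:    4 KiB    */
#define NIL UINT32_MAX

static _Alignas(16) uint8_t arena[1u << MAX_ORDER];
static uint32_t free_head[MAX_ORDER + 1]; /* intrusive free list per order */
static uint8_t  blk_order[(1u << MAX_ORDER) >> MIN_ORDER];

/* each free block stores the offset of the next free block of its order */
static uint32_t *next_of(uint32_t off) { return (uint32_t *)&arena[off]; }

static void buddy_init(void) {
    for (int o = 0; o <= MAX_ORDER; o++)
        free_head[o] = NIL;
    free_head[MAX_ORDER] = 0;             /* one maximal free block */
    *next_of(0) = NIL;
}

static void push_free(int o, uint32_t off) {
    *next_of(off) = free_head[o];
    free_head[o] = off;
}

/* remove a block from its order's free list; returns 0 if it was not free */
static int unlink_free(int o, uint32_t off) {
    uint32_t *p = &free_head[o];
    while (*p != NIL && *p != off)
        p = next_of(*p);                  /* advance to the next link slot */
    if (*p != off)
        return 0;
    *p = *next_of(off);
    return 1;
}

static void *buddy_alloc(uint32_t size) {
    int want = MIN_ORDER, have;
    while ((1u << want) < size && want < MAX_ORDER)
        want++;
    if ((1u << want) < size)
        return 0;
    for (have = want; have <= MAX_ORDER && free_head[have] == NIL; have++)
        ;
    if (have > MAX_ORDER)
        return 0;                         /* out of memory */
    uint32_t off = free_head[have];
    free_head[have] = *next_of(off);
    while (have > want) {                 /* split: free the upper buddy */
        have--;
        push_free(have, off + (1u << have));
    }
    blk_order[off >> MIN_ORDER] = (uint8_t)want;
    return &arena[off];
}

static void buddy_free(void *p) {
    uint32_t off = (uint32_t)((uint8_t *)p - arena);
    int o = blk_order[off >> MIN_ORDER];
    /* merge with the buddy while it is free at the same order */
    while (o < MAX_ORDER && unlink_free(o, off ^ (1u << o))) {
        off &= ~(1u << o);                /* merged block starts at the lower buddy */
        o++;
    }
    push_free(o, off);
}
```

Freeing a block repeatedly merges it with its buddy while the buddy is free, so a fully freed arena coalesces back into a single maximal block, which is exactly the external-fragmentation property the summary describes.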
Generated 2026-04-28T00:00:00Z
A single two-patch series from Eduard Zingerman addresses a correctness bug in the BPF verifier's range_within() function, which is used by is_state_visited() to prune redundant verification paths. The fix ensures that range subset checks operate on cnum (circular number) ranges rather than plain min/max pairs, preventing the verifier from incorrectly concluding that a prior state subsumes the current one.
bpf: range_within() must check cnum ranges instead of min/max pairs
This patch fixes a bug in the BPF verifier's range_within() helper, which checks whether one register value range is a subset of another. The function was comparing raw min/max pairs rather than the correct cnum (circular number) ranges, causing is_state_visited() to make incorrect pruning decisions during verification. An incorrect subset determination can cause the verifier to skip re-examining a code path it should explore, potentially leading to missed safety violations or spurious rejections. The fix aligns range_within() with the same cnum-based representation used elsewhere in the verifier's range tracking logic.
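The pitfall generalises: for ranges that may wrap around the 64-bit boundary, a subset test on plain min/max pairs gives wrong answers. A minimal sketch, modelling a circular range as an assumed (start, length) pair rather than the series' actual cnum layout:

```c
#include <stdint.h>

/* Assumed model of a circular 64-bit range: the values start, start+1, ...,
   start+len, taken modulo 2^64. Illustrative only. */
struct crange { uint64_t start; uint64_t len; };

/* x is in r iff its offset from r.start, computed modulo 2^64, is <= r.len */
static int crange_contains(struct crange r, uint64_t x) {
    return (uint64_t)(x - r.start) <= r.len;
}

/* a is a subset of b iff a starts inside b and ends no later than b does */
static int crange_within(struct crange a, struct crange b) {
    return a.len <= b.len &&
           (uint64_t)(a.start - b.start) <= b.len - a.len;
}
```

A wrapping range such as {start = UINT64_MAX, len = 2} covers only UINT64_MAX, 0, and 1, but its raw min/max pair is [0, UINT64_MAX]; a min/max subset check would therefore treat nearly every range as its subset, while the offset-based test does not.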
selftests/bpf: a test for proper cnums compare in is_state_visited()
This patch adds a selftest to the BPF test suite that exercises the corrected cnum-based range comparison in is_state_visited(). The test constructs a scenario where the old min/max comparison would have produced a wrong result, confirming that the verifier now makes the correct pruning decision. Having an explicit regression test prevents future changes from silently reintroducing the same class of state-pruning bug.
Generated 2026-04-26T09:56:49Z
April 24 was one of the busiest recent days on the bpf mailing list, with two major new-feature series landing alongside continued iteration on earlier work. Yonghong Song posted an 18-patch series implementing full stack argument support for BPF functions and kfuncs, covering verifier liveness analysis, precision backtracking, JIT backends for x86 and arm64, and a comprehensive test suite. Mykyta Yatsenko's 10-patch v3 series introduces a resizable hash map type backed by the kernel rhashtable, supporting automatic resizing, batch operations, and BPF iterators.
bpf: Support stack arguments for bpf functions
The first patch in an 18-patch series that adds support for passing arguments on the stack to BPF subprograms and kfuncs, lifting the current hard limit of five register-passed arguments. When more arguments are needed than available registers, a pointer to an argument area is passed in r11 (BPF_REG_PARAMS) and the verifier is taught to validate accesses through that pointer. This enables writing BPF programs and kfuncs with richer signatures without resorting to context structs.
bpf: Add precision marking and backtracking for stack argument slots
Extends the verifier's precision backtracking pass to cover the stack slots used for stack-passed arguments. Precision marking is needed so that the verifier can correctly identify which stack argument slots must be tracked precisely for safety proofs and state pruning. Without this extension, the verifier would either over-approximate or reject valid programs that use stack arguments. The patch integrates stack argument liveness into the existing precision propagation framework.
bpf: Support stack arguments for kfunc calls
Extends the stack argument mechanism from BPF-to-BPF calls to kfunc calls, allowing kfuncs to declare parameters beyond the five-register limit. The verifier validates that BPF programs set up the argument area correctly before calling such kfuncs and that the types of stack-passed arguments satisfy the kfunc's BTF annotations. This patch is a key enabler for kfuncs with complex or wide argument lists without requiring callers to bundle arguments into a struct.
bpf,x86: Implement JIT support for stack arguments
Implements the x86-64 JIT backend changes needed to emit code that sets up and tears down the stack argument area when calling functions or kfuncs with stack arguments. The JIT must allocate space on the program's stack frame, marshal arguments into the argument area, pass r11 pointing to it, and restore the stack afterward. A companion patch in the same series handles arm64.
bpf: Implement resizable hashmap basic functions
Introduces a new BPF map type BPF_MAP_TYPE_RHASH backed by the kernel's rhashtable, which automatically resizes as entries are inserted and removed. Unlike the existing BPF_MAP_TYPE_HASH, rhash does not require a pre-allocated fixed capacity and can grow without manual intervention, making it better suited for workloads with unpredictable cardinality. This first patch implements the core lookup, update, and delete operations; subsequent patches in the series add iterators, batch ops, timer/workqueue support, and libbpf/bpftool integration.
libbpf: Support resizable hashtable
Adds libbpf-side support for the new BPF_MAP_TYPE_RHASH map type, allowing userspace programs to create and interact with resizable hash maps through the standard libbpf map API. The patch updates the map type table and any type-specific helpers so that tools like bpftool and skeleton-generated code can handle rhash maps transparently.
bpf: representation and basic operations on circular numbers
V3 of the patch introducing cnum32 and cnum64 typed structs for circular integer range representation in the BPF verifier. This revision addresses review feedback on the arithmetic semantics and adds more thorough documentation of the invariants that the types maintain. The circular number abstraction replaces the existing eight loose scalar-range fields in bpf_reg_state, and this patch provides the foundational primitives used throughout the series.
bpf: Implement dtor for struct file BTF ID
V3 of the patch registering fput() as the destructor for the struct file BTF ID, enabling BPF programs to store referenced struct file kptrs in maps. The new version incorporates review feedback on the destructor registration mechanism and ensures the BTF ID lookup is robust across kernel configurations. Together with the accompanying selftest patch, this series allows BPF programs to hold long-lived file references in map storage for use across program invocations.
Generated 2026-04-25T10:15:04Z
Activity on April 23-24 centered on three series targeting verifier range tracking, kptrs, and kfunc call conventions. Eduard Zingerman posted v2 of a series refactoring bpf_reg_state by replacing bare min/max fields with a typed circular-number abstraction (cnum), enabling stronger 32-to-64-bit range refinements in the verifier. Justin Suess enabled struct file as a reference-counted kptr storable in BPF maps, while Yonghong Song posted v3 preparatory cleanups to verifier argument handling ahead of upcoming kfunc stack argument support.
bpf: representation and basic operations on circular numbers
Introduces cnum32 and cnum64, typed structs representing circular (wrapping) integer numbers with defined arithmetic semantics for use in the BPF verifier. Circular numbers more accurately model unsigned integer range constraints that can wrap around, avoiding the imprecision of separate smin/smax/umin/umax fields. This patch provides the foundation — constructors, comparison, and arithmetic primitives — that subsequent patches use to replace the existing verifier range fields. The approach enables more accurate range propagation, particularly for 32-to-64-bit sign-extension scenarios.
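As a rough illustration of the idea (the field names and the (start, length) encoding are assumptions, not the patch's actual cnum64 definition), a circular range and one arithmetic primitive might look like:

```c
#include <stdint.h>

/* Assumed shape of a circular 64-bit range: the values start, start+1, ...,
   start+len, taken modulo 2^64. */
struct cnum64 { uint64_t start; uint64_t len; };

static int cnum64_contains(struct cnum64 r, uint64_t x) {
    return (uint64_t)(x - r.start) <= r.len;  /* offset test handles wraparound */
}

/* Adding two circular ranges adds starts and lengths; if the combined
   length itself wraps, the result must widen to cover every 64-bit value. */
static struct cnum64 cnum64_add(struct cnum64 a, struct cnum64 b) {
    struct cnum64 r = { a.start + b.start, a.len + b.len };
    if (r.len < a.len)                        /* length overflowed */
        r = (struct cnum64){ 0, UINT64_MAX };
    return r;
}
```

The point of the single wrapping representation is that operations like this addition stay sound across the integer boundary without juggling separate signed and unsigned min/max fields.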
bpf: replace min/max fields with struct cnum{32,64}
Replaces the eight loose scalar range fields in bpf_reg_state (smin32, smax32, umin32, umax32, smin64, smax64, umin64, umax64) with two typed structs cnum32 and cnum64. The structural change enforces correct usage through accessor functions added in the preceding patch and eliminates a class of subtle bugs where fields could be updated inconsistently. This is the core mechanical transformation of the series, affecting the verifier's central register-state data structure.
bpf: Implement dtor for struct file BTF ID
Registers a destructor for the struct file BTF ID so that BPF programs can hold referenced kptrs to struct file objects in maps without leaking file references. Without a dtor, the kernel refuses to allow struct file as a referenced kptr type because it cannot safely release the reference on map entry deletion. The patch wires up fput() as the destructor, enabling map-stored file references to be properly cleaned up when entries are removed or the map is freed.
selftests/bpf: Add test for map-stored struct file kptrs
Adds a selftest exercising the new ability to store referenced struct file kptrs in BPF maps. The test acquires a file reference via a kfunc, stores it in a hash map, retrieves it, and verifies that the reference is properly released on map cleanup. Coverage confirms that both the kptr store/load paths and the destructor-triggered fput work correctly end-to-end.
bpf: Remove unused parameter from check_map_kptr_access()
A small clean-up removing a parameter from check_map_kptr_access() that is no longer used after earlier refactoring. This is the first of a nine-patch preparatory series that restructures verifier internals to support passing arguments on the stack to BPF functions and kfuncs. The patch series as a whole refactors argument tracking, memory/size pairing, and verifier log messages without yet enabling the stack-argument feature itself.
bpf: Introduce bpf register BPF_REG_PARAMS
Introduces a new pseudo-register alias BPF_REG_PARAMS (mapped to r11) to name the register that will hold a pointer to the stack-spilled arguments area when kfunc stack arguments are eventually supported. Using a named alias instead of a raw register number makes the upcoming JIT and verifier changes easier to follow and review. This patch is part of the v3 preparatory series by Yonghong Song and does not yet enable stack argument passing.
Generated 2026-04-25T10:15:04Z
Activity on April 21-22 was dominated by two major verifier-adjacent series: Yonghong Song's v2 9-patch series preparing the BPF verifier and calling convention to support stack-based arguments for kfuncs, and Amery Hung's v3 refactor of verifier object relationship tracking that also fixes a dynptr use-after-free bug. Mykyta Yatsenko's long-running effort to add sleepable tracepoint program support reached its 11th revision, while Emil Tsalapatis pushed an 8th iteration of the arena library and runtime. Eduard Zingerman also submitted an RFC proposing a structural overhaul of verifier scalar range tracking using typed circular number types.
bpf: Introduce bpf register BPF_REG_PARAMS
Introduces a new BPF pseudo-register BPF_REG_PARAMS as part of the groundwork for supporting stack-based calling conventions in kfuncs. Currently all kfunc arguments pass through the standard register file; adding stack argument support requires a dedicated register to track the stack parameter region. This is patch 8 of 9 in Yonghong Song's v2 series 'bpf: Prepare to support stack arguments', which makes several preparatory refactors before the stack ABI extension lands.
bpf: Refactor object relationship tracking and fix dynptr UAF bug
Refactors how the BPF verifier tracks ownership and dependency relationships between objects such as dynptrs, slices, and kptrs, and simultaneously fixes a use-after-free bug in dynptr handling. The prior tracking was ad hoc and missed some invalidation paths, allowing a program to use a dynptr after the underlying object was released. This is the core patch in Amery Hung's v3 9-patch series on verifier object relationship tracking.
bpf: Unify dynptr handling in the verifier
Consolidates the divergent code paths for dynptr validation in the BPF verifier into a single unified representation and set of helpers. The unification is a prerequisite for the subsequent object relationship tracking refactor in the same series. Together the series improves correctness guarantees for dynptr lifetime and cloning.
bpf: Add sleepable support for raw tracepoint programs
Extends the BPF raw tracepoint infrastructure to allow programs to be marked sleepable, enabling use of blocking helpers and memory allocations within raw tracepoint handlers. Sleepable tracepoint programs are valuable for observability use cases that need to perform I/O or acquire locks during event capture. This is the first of six patches in Mykyta Yatsenko's 11th revision, which also covers classic tracepoints, verifier support, and libbpf section handlers.
selftests/bpf: Add basic libarena scaffolding
Introduces the foundational test scaffolding for libarena, a new userspace-style dynamic memory management library built on top of BPF arena maps. libarena aims to give BPF programs safe, flexible allocation patterns without requiring fixed-size map entries. This is part of Emil Tsalapatis's v8 8-patch series, which includes a buddy allocator, ASAN runtime support, and comprehensive selftests.
bpf: replace min/max fields with struct cnum{32,64}
RFC patch replacing the loose scalar min/max range fields in bpf_reg_state with typed circular number structs (cnum32/cnum64) that encode value and bit-width together. The goal is to make verifier range tracking structurally sound and enable better 32-to-64-bit range refinements. This is the central patch of Eduard Zingerman's 4-patch RFC series, which first introduces the cnum abstraction and accessor functions before applying the broad refactor.
bpf, x86: Granlund-Montgomery optimization for 64-bit div/mod by immediate
Applies the Granlund-Montgomery algorithm to the BPF x86 JIT, replacing expensive hardware division instructions with a multiply-and-shift sequence when the divisor is a compile-time immediate. Division is among the slowest x86 instructions, and BPF programs with tight loops that perform constant-divisor modulo or divide operations benefit significantly. This is the third revision of the single-patch optimization.
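The arithmetic can be demonstrated in plain C. For an unsigned 64-bit division by the constant 3, the magic multiplier 0xAAAAAAAAAAAAAAAB equals ceil(2^65 / 3), and one widening multiply plus a shift reproduces n / 3 exactly for every n; the general magic-number selection and instruction emission are what the JIT patch implements.

```c
#include <stdint.h>

/* Divide by the constant 3 without a div instruction:
   q = floor(n * ceil(2^65 / 3) / 2^65), exact for all 64-bit n.
   The constant and shift follow the Granlund-Montgomery construction. */
static uint64_t div3(uint64_t n) {
    const uint64_t magic = 0xAAAAAAAAAAAAAAABULL;   /* ceil(2^65 / 3) */
    return (uint64_t)(((unsigned __int128)n * magic) >> 65);
}
```

The same construction extends to any constant divisor, though divisors whose multiplier does not fit in 64 bits need an additional add-and-shift fixup sequence, which is part of what makes the JIT emission non-trivial.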
bpf: Fix out-of-bounds read in bpf_patch_call_args()
Fixes an out-of-bounds memory read in bpf_patch_call_args() that could be triggered by BPF-to-BPF calls with large offsets. The function failed to account for all expansion cases when reallocating the instruction array, allowing reads past the buffer end. This is the first of three patches in Yazhou Tang's v7 series, which also fixes s16 truncation of large call offsets and adds a regression selftest.
Generated 2026-04-23T00:00:00Z
Today's bpf-next activity spanned three significant feature series alongside a pair of RFC submissions. Leon Hwang's long-running series (now at v12) to extend the BPF syscall with common attributes landed alongside new kfunc work from Mahe Tardy and arm64 JIT improvements from Puranjay Mohan. Mohan also submitted an RFC XDP load-balancer benchmark framework, while Justin Suess introduced support for storing referenced struct file kptrs in BPF maps.
bpf: Implement dtor for struct file BTF ID
Implements a destructor for the struct file BTF ID, enabling BPF maps to store referenced struct file kptrs. This is the core kernel patch of a two-part series that adds proper lifecycle management for file references held inside BPF maps. Tracking struct file references prevents resource leaks when map entries are removed or the map itself is destroyed. The accompanying selftest verifies that map-stored file kptrs are correctly acquired and released.
bpf, arm64: Map BPF_REG_0 to x8 instead of x7
Remaps BPF_REG_0 to the arm64 x8 register (the indirect result register) to free x7 for use as a stack-argument-passing register under the AAPCS64 calling convention. This register reassignment is a prerequisite for the arm64 BPF JIT to support BPF programs calling kernel functions that pass arguments on the stack rather than solely in registers. Follow-on patches in the series add the JIT logic for stack arguments and enable the relevant selftests on arm64.
bpf: Extend BPF syscall with common attributes support
Introduces a unified common-attributes mechanism for the BPF syscall, allowing prog_load, btf_load, and map_create commands to share a consistent log-size reporting path. At version 12, this series also adds libbpf support and the ability for userspace to retrieve the true log buffer size when BPF object loading fails. The change reduces duplication in the BPF syscall implementation and makes failure diagnostics more consistent across all BPF object types.
bpf: add bpf_icmp_send_unreach kfunc
Adds a new kfunc bpf_icmp_send_unreach that allows BPF programs to generate ICMP destination-unreachable messages for both IPv4 and IPv6. This enables tc and XDP programs to reject packets with meaningful ICMP feedback rather than silently dropping them, improving network-level error signaling. The series refactors netfilter helper functions into core ipv4/ipv6 to make them reusable outside of netfilter, and is accompanied by comprehensive tests covering both address families and recursion safety.
selftests/bpf: Add bench_force_done() for early benchmark completion
First patch of an RFC series adding an XDP load-balancer benchmark to the BPF selftest suite. This patch introduces bench_force_done(), a helper that lets a benchmark signal early completion without waiting for the full configured duration. Subsequent patches build a batch-timing library, a full XDP load-balancer BPF program with common definitions, and a driver and shell script to run the benchmark end-to-end. The RFC status invites feedback on the benchmark design and infrastructure before finalization.
selftests/bpf: fix off-by-one in bpf_cpumask_populate related selftest
Fixes an off-by-one error in the selftest for bpf_cpumask_populate. The bug caused the test to iterate one index past the valid CPU range, potentially producing incorrect results or accessing memory beyond the expected bounds on systems with specific CPU counts. This is a standalone single-patch fix with no other dependencies.
Generated 2026-04-22T00:00:00Z
Activity on April 19–20 was dominated by Yonghong Song's v6 of the stack-arguments series for BPF functions and kfuncs, a 17-patch set that extends the BPF calling convention to pass arguments on the stack beyond the five argument registers (r1-r5), with full x86-64 JIT support. Two smaller patches rounded out the day: Aaron Tomlin fixed libbpf to properly reject negative kprobe offsets, and Matt Bobrowski corrected an off-by-one error in the bpf_cpumask_populate selftest.
bpf: Support stack arguments for bpf functions
This patch adds verifier support for BPF subprogram functions to receive arguments on the stack, enabling function signatures with more than the five register-passed parameters (r1-r5). A new BPF_REG_PARAMS mechanism tracks the stack argument state through verifier analysis, and the calling convention is updated to lay out excess parameters in a well-defined region of the caller's stack frame. This is patch 07 of a 17-part series (v6) that collectively introduces stack argument passing for both BPF functions and kfuncs. The change is the core enabler for the rest of the series and requires corresponding JIT backend work to become operational.
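As a plain-C analogy of the convention just described (not kernel code; the struct layout and names are invented for illustration), the caller spills the excess arguments into a caller-owned stack area and hands the callee a single pointer to it, playing the role the series assigns to r11/BPF_REG_PARAMS:

```c
#include <stdint.h>

/* Hypothetical layout of the excess-argument area for a 7-argument call
   when only five arguments fit in registers. */
struct arg_area { uint64_t arg6, arg7; };

static uint64_t callee(uint64_t a1, uint64_t a2, uint64_t a3,
                       uint64_t a4, uint64_t a5,
                       const struct arg_area *extra) {
    /* the callee reads spilled arguments through the area pointer */
    return a1 + a2 + a3 + a4 + a5 + extra->arg6 + extra->arg7;
}

static uint64_t caller(void) {
    /* the caller lays out excess arguments in its own stack frame */
    struct arg_area extra = { .arg6 = 6, .arg7 = 7 };
    return callee(1, 2, 3, 4, 5, &extra);  /* pointer stands in for r11 */
}
```

The verifier's job in the real series is the part this analogy hides: proving that every slot in the area is initialized with a value of the right type before the call instruction executes.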
bpf: Support stack arguments for kfunc calls
Extends the new stack argument infrastructure to kfunc calls, allowing kernel functions exposed to BPF programs to accept arguments beyond the five-register argument limit. The verifier is updated to validate that stack argument types and sizes match the expected kfunc BTF signature, keeping the calling convention consistent with BPF-to-BPF calls. This patch is the twelfth in the series and pairs tightly with the BPF subprogram stack argument changes introduced earlier. Unified handling across both call sites simplifies future extensions to the argument-passing mechanism.
bpf,x86: Implement JIT support for stack arguments
Implements the x86-64 JIT backend changes required to physically place excess function arguments onto the stack when calling BPF subprograms or kfuncs. The JIT allocates additional stack space for spilled parameters and emits the appropriate store instructions to lay them out before the call site. Programs using stack arguments are rejected by the verifier on architectures without JIT support, making this x86-64 implementation the first concrete gate that allows the feature to be used in practice. Other JIT backends can add independent support following the same pattern.
libbpf: Report error when a negative kprobe offset is specified
Fixes a libbpf bug where a negative offset for a kprobe attachment would be silently accepted rather than rejected at the library level, leading to confusing downstream failures. With this patch, libbpf validates the offset field and returns a clear EINVAL if a negative value is provided. This is the third revision of the fix, addressing earlier review feedback on where in the attachment path the check should live. Negative kprobe offsets are not supported by the kernel and catching them early improves the user experience for programs that misconfigure their probes.
selftests/bpf: fix off-by-one in bpf_cpumask_populate related selftest
Corrects an off-by-one error in a BPF selftest exercising bpf_cpumask_populate, where the loop bound caused a read one element past the intended array boundary. The bug could produce spurious failures or undefined behavior on configurations where the adjacent memory was not safely accessible. The fix is a one-line bound correction with no impact on the BPF subsystem itself. Keeping selftests clean ensures CI results accurately reflect real regressions rather than test-infrastructure noise.
Generated 2026-04-21T00:00:00Z
Today's bpf-next activity featured three series spanning the verifier, kfuncs, and libbpf. Kumar Kartikeya Dwivedi posted v3 of a series adding verifier warning infrastructure and a kfunc deprecation annotation, enabling non-fatal diagnostic messages during BPF program loading. Puranjay Mohan posted v13 of a long-running series introducing CPU time counter kfuncs with arm64 JIT support, bringing high-resolution per-CPU timing to BPF programs.
libbpf: Report error when a negative kprobe offset is specified
libbpf now returns an error when a user specifies a negative offset for a kprobe attachment point. Previously this case could be silently accepted, leading to undefined behavior at attach time. This is a defensive input validation improvement that catches misconfigured kprobe offsets early during program load rather than at runtime.
bpf: Add support for verifier warning messages
Introduces a new mechanism in the BPF verifier to emit non-fatal warning messages during program verification. Unlike verifier errors that abort loading, warnings allow programs to load successfully while surfacing diagnostic information to the user. This patch is the foundation of the series, adding the core warning message infrastructure that subsequent patches in the series build upon.
bpf: Introduce __bpf_kfunc_mark_deprecated annotation
Adds the `__bpf_kfunc_mark_deprecated` annotation macro that kernel developers can use to mark kfuncs as deprecated. When a BPF program calls a deprecated kfunc, the verifier emits a warning rather than rejecting the program outright. This enables gradual kfunc lifecycle management, giving users time to migrate away from old APIs without breaking existing BPF programs.
libbpf: Request verifier warnings for object loads
Updates libbpf to opt in to the new verifier warning infrastructure when loading BPF objects, so that warning messages emitted by the kernel verifier are surfaced to userspace. This wires the kernel-side warning mechanism into the standard BPF program loading path. Users relying on libbpf will automatically receive deprecation and other verifier warnings without any application-level changes.
bpf: add bpf_get_cpu_time_counter kfunc
Introduces the `bpf_get_cpu_time_counter` kfunc, which exposes the per-CPU hardware time counter to BPF programs. This allows BPF programs to perform high-resolution timing measurements using the CPU's native cycle counter. Part of a series that has reached v13 after extensive review, this kfunc gives BPF programs direct access to low-overhead hardware timing primitives.
bpf: add bpf_cpu_time_counter_to_ns kfunc
Adds `bpf_cpu_time_counter_to_ns` as a companion kfunc to convert raw CPU time counter values to nanoseconds. Raw cycle counter values are CPU-frequency-dependent and not directly portable, so this conversion kfunc makes timing results meaningful across different hardware. Together with `bpf_get_cpu_time_counter`, BPF programs can now perform accurate, portable elapsed-time measurements.
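The conversion follows the familiar mult/shift pattern used by kernel clocksources: precompute mult as roughly 2^shift times the nanoseconds-per-cycle ratio once, then each reading costs one multiply and one shift. A hedged sketch (the helper names and the 24 MHz example frequency are assumptions, not the kfunc's implementation):

```c
#include <stdint.h>

#define NSEC_PER_SEC 1000000000ULL

struct cyc2ns { uint64_t mult; uint32_t shift; };

/* mult = 2^shift * (ns per cycle), rounded to nearest; shift trades
   precision against headroom in the 128-bit intermediate product */
static struct cyc2ns cyc2ns_setup(uint64_t freq_hz, uint32_t shift) {
    struct cyc2ns c = {
        .mult  = ((NSEC_PER_SEC << shift) + freq_hz / 2) / freq_hz,
        .shift = shift,
    };
    return c;
}

static uint64_t cyc2ns(const struct cyc2ns *c, uint64_t cycles) {
    /* widen before multiplying so large cycle counts do not overflow */
    return (uint64_t)(((unsigned __int128)cycles * c->mult) >> c->shift);
}
```

Because mult is fixed per counter frequency, the per-reading cost is constant, which is what makes this conversion cheap enough for hot BPF paths.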
bpf, arm64: Add JIT support for cpu time counter kfuncs
Adds arm64 JIT backend support for the new CPU time counter kfuncs, enabling them to be efficiently inlined on AArch64 hardware. Without JIT support the kfuncs would fall back to a slower generic execution path. This patch completes the architecture-specific optimization needed for production-quality use of the CPU timing kfuncs on arm64 systems.
Generated 2026-04-19T09:51:17Z
A busy day on bpf-next dominated by Jiri Olsa's 28-patch tracing_multi link series, which introduces a new BPF link type for attaching a single program to multiple kernel functions simultaneously via a single syscall. Yonghong Song's 16-patch series adding stack argument support for BPF functions and kfuncs also appeared, extending the calling convention to pass struct arguments on the stack beyond the five-register limit on x86-64.
bpf: Add support for tracing multi link
Introduces the new BPF_LINK_TYPE_TRACING_MULTI link type, allowing a single BPF tracing program to be attached to many kernel functions at once rather than requiring one link per function. The implementation reuses and extends the existing trampoline infrastructure, adding bpf_trampoline_multi_attach/detach helpers to manage bulk attachment. This is a significant usability improvement for tools that need to trace large numbers of functions—for example, function-graph style tracers or security monitors—without the overhead of managing thousands of individual links.
libbpf: Add support to create tracing multi link
Adds the libbpf-side API for creating tracing_multi links, exposing the new kernel capability to userspace BPF programs. The patch wires up bpf_link_create() for the new attach type and introduces a btf_type_is_traceable_func() helper so that callers can filter BTF entries to only traceable functions before bulk attachment. Together with the kernel patches in this series, libbpf users gain a high-level interface for multi-function tracing.
bpf: Support stack arguments for bpf functions
Extends the BPF verifier and calling convention to allow struct arguments larger than eight bytes to be passed on the stack to BPF-to-BPF calls, mirroring the C ABI on x86-64. Previously BPF functions were limited to six register-width arguments; this patch introduces the BPF_REG_PARAMS pseudo-register to track stack-passed parameters and updates the verifier to validate them. The change is a prerequisite for supporting the full kfunc calling convention when kfuncs themselves accept stack-spilled arguments.
bpf: Support stack arguments for kfunc calls
Adds verifier support for kfunc calls that take struct arguments passed on the stack, complementing the BPF-function stack-argument patch in the same series. The patch enforces that such structs are no larger than eight bytes per slot and rejects stack arguments when tail calls are reachable (since tail calls don't preserve the stack frame). x86-64 JIT emission for the new calling convention is handled by a companion patch in the series.
bpf: Extend BTF UAPI vlen, kinds to use unused bits
Expands the BTF type-info fields by repurposing currently-unused bits in the type_info word, raising the vlen limit from 16 bits to 24 bits and the kind field from 5 bits to 8 bits. This removes a long-standing constraint on the number of struct members and enum values that can be described in a single BTF type, which matters for very large auto-generated BTF from complex kernel structs. The series updates libbpf, bpftool, and selftests to handle the wider fields, with libbpf gaining a feature-probe to detect kernel support.
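For context, the current UAPI packs kind and vlen into struct btf_type's 32-bit info word as shown below; the widened accessor at the end is a hypothetical sketch of the direction the series takes, since the exact new bit assignment is the series' choice:

```c
#include <stdint.h>

/* Current uapi/linux/btf.h layout of the info word:
 *   bits  0-15: vlen, bits 16-23: unused,
 *   bits 24-28: kind, bits 29-30: unused, bit 31: kind_flag */
static inline uint32_t btf_info_vlen(uint32_t info)
{
	return info & 0xffff;
}

static inline uint32_t btf_info_kind(uint32_t info)
{
	return (info >> 24) & 0x1f;
}

/* Hypothetical widened accessor in the spirit of the series: vlen grows
 * into the unused bits 16-23, giving 24 bits total. Sketch only. */
static inline uint32_t btf_info_vlen24(uint32_t info)
{
	return info & 0xffffff;
}
```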
arm32, bpf: Reject BPF-to-BPF calls and callbacks in the JIT
Makes the 32-bit ARM BPF JIT explicitly reject programs that use BPF-to-BPF calls or callbacks, which the JIT does not implement, rather than silently producing incorrect code. This is a correctness fix: without the rejection the interpreter would be invoked as a fallback but with a JIT-compiled caller, leading to undefined behavior. The v2 revision consolidates the rejection of both BPF_PSEUDO_CALL and callback-carrying helper calls into a single check.
selftests/bpf: Trace bpf_local_storage_update to debug flaky local storage tests
Adds an fentry probe on bpf_local_storage_update in the BPF local-storage selftests to capture diagnostic information when the tests fail intermittently. Flaky local-storage tests have been observed under memory pressure; the additional tracing helps identify whether failures correlate with concurrent updates or allocation failures. This is a test-infrastructure improvement rather than a kernel change.
Generated 2026-04-18T09:52:31Z
A productive day on bpf-next with three major series in flight. Yonghong Song's v5 stack-argument series for BPF functions and kfuncs reached near-final shape, while Paul Chaignon posted an RFC improving verifier register-bounds refinement for 32-to-64-bit range propagation. Mykyta Yatsenko fixed a NULL dereference in the verifier's kptr slot type-checking path, and Nick Hudson continued refining tunnel decapsulation flags for skb_adjust_room.
bpf: Support stack arguments for bpf functions
The core patch of Yonghong Song's 16-patch v5 series, teaching the BPF verifier to accept struct arguments passed on the stack in BPF-to-BPF calls. A new BPF_REG_PARAMS pseudo-register tracks the stack pointer for parameter spilling, and the verifier validates that stack slots are properly initialized before the call. The x86-64 JIT is updated in a companion patch to emit the required push/pop sequences, while non-JITed paths and tail-call-reachable paths are explicitly rejected.
bpf: Fix NULL deref in map_kptr_match_type for scalar regs
Fixes a NULL pointer dereference in map_kptr_match_type() that occurs when a BPF program tries to store a scalar register into a map slot typed as a kernel pointer (kptr). The function assumed the source register always holds a pointer with associated BTF type info, but scalars have no such info, causing a crash during verification. The fix adds a scalar-register check before accessing the BTF type, and the companion selftest confirms the verifier now properly rejects such stores.
bpf: Extend BTF UAPI vlen, kinds to use unused bits
Version 2 of Alan Maguire's series widening the BTF type-info word's vlen field from 16 to 24 bits and the kind field from 5 to 8 bits by repurposing reserved bits. The kernel change is accompanied by libbpf updates that add a feature probe for extended-vlen kernel support and adjust btf_vlen() to return __u32, plus bpftool changes to display and handle 24-bit vlen values. This removes a hard ceiling on the number of members in large structs and enum types representable in BTF.
bpf/verifier: Use intersection checks when simulating to detect dead branches
An RFC series improving the BPF verifier's ability to prune dead branches by using intersection checks between tnum (tristate number) constraints and integer range bounds when simulating conditional jumps. The series also fixes a bug in the verifier's slow-mode reg_bounds path and improves 32-to-64-bit range refinement so that the verifier derives tighter 64-bit bounds from known 32-bit constraints. Several new selftests capture the refinement cases that were previously missed.
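One of the refinements described, deriving tighter 64-bit bounds from 32-bit knowledge, can be sketched like this (illustrative helpers, not the verifier's actual reg_bounds code):

```c
#include <stdint.h>
#include <stdbool.h>

/* If the verifier knows bits 32-63 of a register are zero, then the range
 * of the low 32 bits applies to the whole 64-bit value, so the 64-bit
 * bounds can be tightened. Sketch in the spirit of the RFC. */
static uint64_t refined_umin(uint64_t umin, uint32_t u32_min, bool upper_zero)
{
	return (upper_zero && u32_min > umin) ? u32_min : umin;
}

static uint64_t refined_umax(uint64_t umax, uint32_t u32_max, bool upper_zero)
{
	return (upper_zero && u32_max < umax) ? u32_max : umax;
}
```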
bpf: add BPF_F_ADJ_ROOM_DECAP_* flags for tunnel decapsulation
Introduces new BPF_F_ADJ_ROOM_DECAP_L3_IPV4 and BPF_F_ADJ_ROOM_DECAP_L3_IPV6 flags for the bpf_skb_adjust_room() helper, allowing BPF programs performing tunnel decapsulation to signal the kernel that the outer IP header has been removed. A companion patch clears the GSO tunnel state in skb_adjust_room when decap flags are set, preventing the networking stack from incorrectly re-segmenting the now-bare inner packet. The v4 revision also adds a tc_tunnel selftest validating the GSO state after decapsulation.
selftests/bpf: Add BPF_STRICT_BUILD toggle
The first patch of Ricardo B. Marlière's v7 11-patch series that makes the BPF selftest build system more robust against partial kernel configurations. This patch adds a BPF_STRICT_BUILD Makefile toggle: when unset, compilation and BPF skeleton generation failures are tolerated rather than aborting the whole build. Subsequent patches in the series handle benchmark build failures, cross-test weak-symbol definitions, and install-time missing-file tolerance, making it practical to build and run BPF selftests on distro kernels without full source trees.
Generated 2026-04-17T10:16:06Z
The most notable submission was Mykyta Yatsenko's v10 of sleepable tracepoint support, a long-requested feature that allows raw and classic tracepoint BPF programs to call sleeping helpers and kfuncs. Nick Hudson's v4 series introduced new BPF_F_ADJ_ROOM_DECAP_* flags to fix GSO state corruption during tunnel decapsulation. Harishankar Vishwanathan improved the verifier's branch pruning with tnum intersection checks, and Ricardo B. Marlière posted an 11-patch series overhauling the BPF selftests build system to tolerate partial kernel configurations.
bpf: Add sleepable support for raw tracepoint programs
Adds support for BPF programs attaching to raw tracepoints to be marked sleepable, enabling them to call helpers and kfuncs that may sleep. This has been a long-requested feature (v10 of this series), as raw tracepoints see heavy use in production tracing infrastructure but could not previously use the growing set of sleepable-only BPF primitives. The series also extends libbpf with new section handlers for sleepable tracepoints and adds verifier logic to validate the sleepable flag for these program types.
bpf: add BPF_F_ADJ_ROOM_DECAP_* flags for tunnel decapsulation
Introduces new BPF_F_ADJ_ROOM_DECAP_* flags for the bpf_skb_adjust_room() helper to properly signal tunnel decapsulation operations to the kernel. Previously, programs performing decapsulation had no standard way to inform the kernel that GSO state needed updating after header removal, leading to potential packet corruption on large segmented packets. This series pairs the new flags with a fix to clear GSO state appropriately in skb_adjust_room when decapsulating.
bpf/verifier: Use intersection checks when simulating to detect dead branches
Improves the BPF verifier's branch pruning by computing tnum/u64 intersections to detect branches that can never be taken given current register constraints. This reduces the number of states the verifier must explore for programs with range checks, lowering verification time for complex programs. The accompanying selftest adds cases where the tnum and u64 ranges produce an empty intersection, verifying that the verifier correctly prunes those paths.
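The intersection idea can be sketched with the tnum representation the verifier uses, a value/mask pair where set mask bits mean "unknown" (illustrative code, not the kernel's):

```c
#include <stdint.h>
#include <stdbool.h>

struct tnum { uint64_t value; uint64_t mask; }; /* mask bit set => unknown */

/* Two tnums can share a concrete value only if every bit both track as
 * known agrees. Sketch of the intersection-emptiness test in the spirit
 * of the series, not the kernel's exact code. */
static bool tnums_disjoint(struct tnum a, struct tnum b)
{
	return ((a.value ^ b.value) & ~(a.mask | b.mask)) != 0;
}

/* Conservative check against a [umin, umax] range: every value a tnum
 * can take lies in [value, value | mask], so if that window misses the
 * range entirely, the intersection is empty and the branch is dead. */
static bool tnum_range_disjoint(struct tnum t, uint64_t umin, uint64_t umax)
{
	return t.value > umax || (t.value | t.mask) < umin;
}
```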
bpf: copy BPF token from main program to subprograms
v4 of the fix ensuring BPF token delegation is correctly propagated from a main BPF program to its subprograms during verification. Without this, privileged operations in subprograms are incorrectly rejected even when the token grants the necessary permissions. This iteration addresses review feedback from v3 and improves selftest coverage verifying that kallsyms entries are present for token-loaded subprograms.
selftests/bpf: Add BPF_STRICT_BUILD toggle
First patch in an 11-part series overhauling the BPF selftests build system to tolerate partial kernel configurations. Introduces a BPF_STRICT_BUILD toggle that lets upstream developers enforce strict build behavior while allowing distro kernel CI environments to skip tests for features not compiled in. The full series handles BPF object compilation failures, skeleton generation failures, benchmark build failures, and install-time missing file handling.
selftests/bpf: Use local type for flow_offload_tuple_rhash in xdp_flowtable
Updates BPF selftests to use local type definitions for kfunc declarations rather than pulling in internal kernel headers directly, improving portability across kernel versions and configurations. The series covers two test files—xdp_flowtable and test_tunnel_kern—both of which referenced internal kernel types that can differ between kernel builds. Using local type definitions avoids header inclusion issues that arise when testing against distro or out-of-tree kernels.
Generated 2026-04-17T00:00:00Z
The day's patches centered on two substantial new features: Alan Maguire's series extending BTF's btf_type struct to use previously unused bits for larger vlen and kind fields, and Leon Hwang's v4 series introducing global per-CPU data support in BPF programs. Eduard Zingerman continued refining BPF token propagation to subprograms, while KaFai Wan added a kernel-side guard rejecting TCP_NODELAY from BPF TCP header option callbacks.
bpf: Introduce global percpu data
Introduces first-class support for global per-CPU variables in BPF programs, allowing programs to declare and use per-CPU data in a way that is reflected in generated skeletons. This eliminates the need for manual per-CPU map management when global per-CPU state is desired. The series also adds BPF_F_ALL_CPUS flag support for per-CPU map updates and extends libbpf with feature probing and skeleton generation for the new type.
bpf: Extend BTF UAPI vlen, kinds to use unused bits
Extends the BTF btf_type UAPI to repurpose previously unused bits, expanding the vlen field from 16 to 24 bits and the kind field from 5 to 8 bits. This unblocks future growth of BTF type counts (particularly for large structs with many members) and new kind definitions. The series includes matching libbpf feature detection, bpftool support for the wider fields, and selftest coverage for the new limits.
bpf: copy BPF token from main program to subprograms
Fixes a bug where the BPF token associated with a main program was not propagated to its subprograms during verification, causing permission checks on subprogram-specific operations to fail when loading via token delegation. Without this fix, privileged operations in subprograms could be incorrectly rejected even when the token grants the necessary permissions. The accompanying selftest verifies that kallsyms entries are correctly created for token-loaded subprograms.
bpf: tcp: Reject TCP_NODELAY from BPF hdr opt callbacks
Adds a kernel-side guard to reject attempts to set TCP_NODELAY from within BPF TCP header option write and reserve callbacks. Setting TCP_NODELAY from these callbacks can cause unexpected behavior because the callback context does not allow safe modification of socket-level TCP flags. The patch ensures consistent and safe behavior by failing such attempts early with an appropriate error code.
bpf: Refactor dynptr mutability tracking
Refactors how the BPF verifier tracks whether a dynptr is mutable or read-only, consolidating the logic to make it cleaner and easier to extend. The existing tracking was spread across multiple code paths using implicit conventions; this change makes mutability an explicit property of dynptr state. This v3 incorporates reviewer feedback from earlier rounds and should make future dynptr feature additions less error-prone.
s390/bpf: inline smp_processor_id and current_task
Teaches the s390 BPF JIT to inline smp_processor_id() calls and current_task accesses rather than emitting out-of-line function calls. Inlining these frequently used operations reduces call overhead and improves performance of BPF programs running on s390 hardware. This brings s390 more in line with the x86 and arm64 JITs, which have had similar optimizations for some time.
Generated 2026-04-17T00:00:00Z
Activity for April 13-14 was dominated by two significant RFC proposals: KASAN instrumentation for JIT-compiled BPF programs on x86, and an expanded atomics selftest suite targeting cpuv4 and sub-32-bit operations. The day also saw important verifier fixes from Eduard Zingerman correcting argument tracking through imprecise and multi-offset stack pointers, plus a use-after-free fix in BPF arena's fork handling from Alexei Starovoitov. Security hardening continued with Xu Kuohai's v14 series adding ENDBR/BTI emission for indirect jump targets across x86 and arm64.
bpf: add BPF_JIT_KASAN for KASAN instrumentation of JITed programs
This RFC introduces a new Kconfig option BPF_JIT_KASAN that enables Kernel Address Sanitizer checks inside JIT-compiled BPF programs on x86. The series works by having the BPF verifier mark instructions that access the program stack, then having the x86 JIT emit inline KASAN shadow-memory checks around those accesses. This brings the same memory-safety guarantees that KASAN provides to kernel C code into the JIT-compiled BPF execution path, significantly improving the ability to catch out-of-bounds and use-after-free bugs in BPF programs. The series is eight patches covering KASAN helper exposure, stack-access marking in the verifier, the core Kconfig, x86 JIT emission, and selftests.
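Conceptually, the inlined check follows generic KASAN's shadow encoding, one shadow byte per 8-byte granule; a toy model under that assumption (not the series' code):

```c
#include <stdint.h>
#include <stdbool.h>

/* Toy model of the generic KASAN check the JIT would inline around a
 * stack access. Each shadow byte covers an 8-byte granule: 0 means fully
 * addressable, a negative value means poisoned, and 1..7 means only the
 * first N bytes are valid. Illustration only. */
static bool kasan_granule_ok(int8_t shadow_byte, uint8_t offset_in_granule)
{
	if (shadow_byte == 0)
		return true;
	if (shadow_byte < 0)
		return false;
	return offset_in_granule < (uint8_t)shadow_byte;
}

/* Shadow placement: one byte of shadow per 8 bytes of memory (the real
 * computation also adds KASAN_SHADOW_OFFSET). */
static uintptr_t kasan_shadow_index(uintptr_t addr)
{
	return addr >> 3;
}
```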
bpf: Fix use-after-free in arena_vm_close on fork
This single patch fixes a use-after-free bug triggered when a process that has a BPF arena mapped forks and then the child or parent closes the arena's VM region. The arena_vm_close callback was accessing memory that could already be freed in the fork path, leading to potential memory corruption or a kernel crash. The fix ensures proper reference counting and ordering so that the arena structure remains valid for the lifetime of all mappings referencing it.
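The lifetime rule the fix enforces can be modeled with the classic vm_ops open/close refcounting pattern (toy userspace sketch, not the arena code itself):

```c
/* Toy model: each mapping (including one duplicated into a child at
 * fork) holds a reference on the arena, and teardown happens only when
 * the last mapping closes. The 'freed' flag stands in for the actual
 * free, purely for illustration. */
struct toy_arena {
	int refcnt;
	int freed;
};

static void toy_vm_open(struct toy_arena *a)  /* mmap, or fork duplicating a mapping */
{
	a->refcnt++;
}

static void toy_vm_close(struct toy_arena *a) /* munmap or process exit */
{
	if (--a->refcnt == 0)
		a->freed = 1; /* only now is teardown safe */
}
```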
bpf: fix arg tracking for imprecise/multi-offset BPF_ST/STX
This v2 two-patch series corrects the BPF verifier's argument liveness tracking for BPF_ST and BPF_STX instructions when accessed through imprecise or multi-offset stack pointers. Without this fix, the verifier could fail to mark stack slots as live, causing incorrect pruning of program states and potentially accepting unsafe programs or rejecting valid ones. The companion selftest patch adds regression coverage for these edge cases involving imprecise pointer arithmetic targeting stack memory.
bpf: Move constants blinding out of arch-specific JITs
This is the base patch of a v14 five-patch series that refactors BPF JIT infrastructure to enable emission of ENDBR (x86 IBT) and BTI (arm64) instructions at indirect jump targets. The series first centralizes constant blinding out of arch-specific JITs, then passes bpf_verifier_env into the JIT, adds a generic helper to identify indirect jump targets, and finally adds x86 ENDBR and arm64 BTI emission. The result hardens JIT-compiled BPF programs against control-flow hijacking attacks on hardware that supports CET/BTI.
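On the x86 side, the hardening amounts to placing the 4-byte endbr64 instruction at each indirect jump target so IBT-enabled hardware accepts the landing; a minimal sketch of such an emission helper (not the kernel's emit code):

```c
#include <stdint.h>
#include <string.h>

/* Emit endbr64 (f3 0f 1e fa) into a code buffer at an indirect jump
 * target. Illustrative buffer emission only; the real JIT uses its own
 * emit helpers and image management. */
static int emit_endbr64(uint8_t *buf)
{
	static const uint8_t endbr64[] = { 0xf3, 0x0f, 0x1e, 0xfa };

	memcpy(buf, endbr64, sizeof(endbr64));
	return sizeof(endbr64);
}
```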
bpf, arm64: Remove redundant bpf_flush_icache() after pack allocator finalize
This v2 series removes redundant instruction-cache flush calls on arm64 and RISC-V that were being issued after the BPF pack allocator's finalize step. The pack allocator already performs an icache flush as part of finalization, making the subsequent flush in the JIT code superfluous and wasteful. Eliminating the duplicate flushes reduces overhead during BPF program load, particularly for workloads that frequently load and unload programs.
selftests/bpf: Prevent allocating data larger than a page
This three-patch series fixes bugs in the BPF task local storage selftests where allocations larger than a page were permitted, leading to garbage data being returned by tld_get_data(). The series adds a guard against oversized allocations, fixes the garbage-data return path, and adds a new selftest verifying that small task local data allocations work correctly end-to-end. These fixes improve reliability of the task local storage feature for programs that use it to track per-task state.
bpf/tests: Exhaustive test coverage for signed division and modulo
This v3 single patch adds exhaustive test cases for signed 32-bit and 64-bit division and modulo operations in the BPF test infrastructure. The tests cover edge cases including division by negative numbers, INT_MIN divided by -1 (overflow), and modulo by negative divisors, which are all areas where interpreter and JIT implementations can diverge. Comprehensive coverage here helps catch correctness regressions across different architectures when new JIT backends are added or existing ones are modified.
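A sketch of the semantics those edge cases pin down, assuming the BPF ISA's rules (divide-by-zero yields 0, modulo-by-zero leaves the dividend, and INT64_MIN / -1 wraps rather than traps); illustrative helpers, not the interpreter's code:

```c
#include <stdint.h>

/* Signed 64-bit division with BPF's defined edge-case behavior. */
static int64_t bpf_sdiv64(int64_t dst, int64_t src)
{
	if (src == 0)
		return 0;                 /* division by zero yields 0 */
	if (dst == INT64_MIN && src == -1)
		return INT64_MIN;         /* overflow wraps, no trap */
	return dst / src;
}

/* Signed 64-bit modulo with the matching edge cases. */
static int64_t bpf_smod64(int64_t dst, int64_t src)
{
	if (src == 0)
		return dst;               /* modulo by zero keeps dividend */
	if (dst == INT64_MIN && src == -1)
		return 0;
	return dst % src;
}
```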
selftests/bpf: Only define ENABLE_ATOMICS_TESTS for cpuv4 runner
This RFC four-patch series updates the BPF atomics selftest suite with broader coverage, starting by scoping the ENABLE_ATOMICS_TESTS macro to cpuv4 runner environments to avoid spurious failures on older hardware. Subsequent patches in the series add 8-bit and 16-bit fetch-based atomic testcases, non-fetch-based atomics for all widths, and exhaustive stack-based atomic operation coverage. The expanded suite is motivated by work on LoongArch BPF JIT support and improves confidence in atomic instruction correctness across architectures.
Generated 2026-04-15T00:00:00Z
April 12-13 brought a wave of structural and feature work to bpf-next. Alexei Starovoitov posted four revision rounds of a series splitting the monolithic verifier.c into focused modules, while Yonghong Song's v4 18-patch series adds stack-based argument support for BPF functions and kfuncs with x86_64 JIT backing. Emil Tsalapatis's arena library reached v7, Menglong Dong fixed missing fsession references across the subsystem, and a lone test fix replaced a shm_open call with memfd_create.
bpf: Split fixup/post-processing logic from verifier.c into fixups.c
The first patch of Starovoitov's v4 verifier.c split series moves the fixup and post-processing logic out of the monolithic verifier.c into a dedicated fixups.c. The overarching goal is to make the BPF verifier codebase more navigable by isolating distinct concerns into their own files, reducing the size of verifier.c from tens of thousands of lines to a more manageable core. This is the opening move in a 6-patch series that also splits out liveness, CFG analysis, state equivalence, backtracking, and BTF checking.
bpf: Move backtracking logic to backtrack.c
Part of the v4 verifier.c split series, this patch extracts the precision backtracking logic into its own backtrack.c file. Precision backtracking is one of the more complex subsystems in the verifier, responsible for determining which register values must be tracked precisely to correctly prune equivalent states. Isolating it improves reviewability and makes future modifications to the backtracking algorithm easier to scope.
bpf: Support stack arguments for bpf functions
This is the core verifier patch in Song's v4 18-patch series enabling BPF functions to pass arguments via the stack, overcoming the five-register argument limit. A new BPF_REG_STACK_ARG_BASE register is introduced to address arguments spilled beyond the register window, and the verifier is taught to validate PTR_TO_STACK arguments at call sites. The series handles both BPF-to-BPF calls and kfunc calls, with safeguards against use in programs reachable by tail calls or in non-JITed contexts.
bpf,x86: Implement JIT support for stack arguments
The x86_64 JIT backend patch in Song's stack arguments series teaches the JIT to emit code that correctly marshals stack-based arguments at BPF function call boundaries. Arguments exceeding the five-register limit are placed in a designated area of the caller's stack frame and addressed relative to the new BPF_REG_STACK_ARG_BASE. The patch is paired with architecture enablement and verifier-side validation patches in the same series.
bpf: Allow instructions with arena source and non-arena dest registers
The first substantive patch in Tsalapatis's v7 arena library series relaxes a verifier restriction to allow arithmetic operations where one operand is an arena pointer and the result is a plain scalar or non-arena pointer. This is needed so that user-space arena library code can freely mix arena and non-arena pointers in calculations without triggering spurious verifier rejections. The v7 series also adds a buddy allocator, ASAN support, and a full libarena test harness.
bpf: add missing fsession to the verifier log
This v3 patch adds the missing BPF_TRACE_FSESSION attach type to the verifier's human-readable log output, which previously omitted it when printing program attach type information. Companion patches in the same 3-patch series add fsession to the BPF documentation and to bpftool's usage and man page, rounding out the coverage for this attach type. The series is a straightforward completeness fix with no functional behavior change.
selftests/bpf: Use memfd_create instead of shm_open in cgroup_iter_memcg
Replaces the use of shm_open() in the cgroup_iter_memcg BPF selftest with the memfd_create() interface. The existing shm_open usage was causing test infrastructure issues on systems where POSIX shared memory is not available or behaves differently. This is a one-patch cleanup with no impact on what the test actually exercises.
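The replacement pattern is straightforward: memfd_create gives an anonymous, unlinked descriptor with no /dev/shm dependency and no name to clean up. A likely shape for the helper (a sketch, not the selftest's exact code):

```c
#define _GNU_SOURCE
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Create an anonymous shared-memory fd of the given size, the modern
 * replacement for shm_open + shm_unlink bookkeeping. */
static int make_shared_fd(size_t size)
{
	int fd = memfd_create("cgroup_iter_memcg", MFD_CLOEXEC);

	if (fd < 0)
		return -1;
	if (ftruncate(fd, size) < 0) {
		close(fd);
		return -1;
	}
	return fd;
}
```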
Generated 2026-04-14T00:00:00Z
The April 11–12 bpf-next window was dominated by verifier refactoring and significant new feature work. Alexei Starovoitov continued the multi-part effort to split the monolithic verifier.c into focused modules (fixups.c, liveness.c, cfg.c, states.c, backtrack.c, check_btf.c) and posted follow-up cleanups to simplify the main instruction-dispatch loop and move reserved-field checks out of the hot path. Yonghong Song posted a v4 18-patch series enabling stack-passed arguments for BPF-to-BPF calls and kfunc calls on x86-64, while Emil Tsalapatis's v6 arena-library series introduced a buddy allocator and ASAN runtime for BPF arena programs.
bpf: Support stack arguments for bpf functions
Part of an 18-patch v4 series that adds first-class support for passing arguments on the stack to BPF-to-BPF functions and kfuncs. This patch adds the core verifier logic to validate PTR_TO_STACK arguments in BPF function calls, teaching the verifier to track stack-passed memory regions across call boundaries. The feature is needed because BPF programs calling functions with more than five arguments (the current register limit) have no way to pass the extras without this infrastructure. Companion patches add x86-64 JIT emission, kfunc support, and restrictions against use with tail calls or non-JITed programs.
bpf: Split fixup/post-processing logic from verifier.c into fixups.c
First patch of a v2 six-part series that breaks up the notoriously large verifier.c by extracting distinct subsystems into their own files. This patch moves fixup and post-processing logic into a new fixups.c, while companion patches create liveness.c, cfg.c, states.c, backtrack.c, and check_btf.c. The goal is to reduce verifier.c to a manageable size and improve code navigation and maintainability for one of the most complex files in the kernel. The v2 revision addresses review feedback on include dependencies and symbol visibility.
A standalone cleanup that refactors do_check_insn(), the core per-instruction dispatch function in the BPF verifier. The patch reorganizes the function to reduce nesting and improve readability without changing behavior. This is part of the broader ongoing effort to make verifier.c easier to split and maintain, complementing the multi-file decomposition series posted the same day.
bpf: Move checks for reserved fields out of the main pass
A v2 verifier cleanup that extracts reserved-field validation (zero-check of src_reg, imm, offset, etc.) from the main instruction-decode loop into a dedicated pre-pass. Moving these checks out of the hot verification path makes the main pass easier to read and avoids redundant branching on every instruction. This is a prerequisite refactoring for the broader verifier.c decomposition work.
bpf: Upgrade scalar to PTR_TO_ARENA on arena pointer addition
Lead patch of a v6 nine-patch series introducing an arena library and runtime for BPF programs. This specific patch teaches the verifier to upgrade a plain scalar register to PTR_TO_ARENA when it is the result of adding a scalar to an arena pointer, enabling safe arithmetic inside arena regions. Companion patches add basic libarena scaffolding, an ASAN runtime for memory error detection in arena programs, a buddy allocator, and a comprehensive selftest suite including ASAN-instrumented tests.
bpf, arm64: Emit BTI for indirect jump target
Final patch of a v13 five-patch series that adds ENDBR (x86 CET) and BTI (arm64) instructions at indirect-jump targets in BPF JIT-compiled programs. The series introduces a verifier helper to identify indirect jump targets, refactors constants blinding out of per-arch JITs to share common logic, and passes bpf_verifier_env to the JIT so architecture back-ends can use the target information. Reaching v13 reflects the extensive review this security hardening feature has undergone.
bpf: Fix Null-Pointer Dereference in kernel_clone() via BPF fmod_ret on security_task_alloc
A v3 fix for a null-pointer dereference triggered when a BPF fmod_ret program attaches to security_task_alloc and returns a non-zero value, causing kernel_clone() to proceed with an incompletely initialized task struct. The patch adds a check so that if fmod_ret short-circuits security_task_alloc with an error, the kernel correctly unwinds without dereferencing the null task pointer. A companion selftest verifies the return-value semantics of fmod_ret on this hook.
bpf: Use kmalloc_nolock() universally in local storage
Core patch of a v2 three-patch series that switches BPF local storage allocation to kmalloc_nolock() throughout, removing the need to plumb gfp_flags through the call chain. kmalloc_nolock() uses a per-CPU cache and avoids lock contention, which matters on fast paths like socket and task storage lookups. A companion patch removes the now-unnecessary kmalloc tracing from the local storage benchmark, and a final patch cleans up gfp_flags plumbing from bpf_local_storage_update().
bpf: add missing fsession to the verifier log
Part of a v3 three-patch series that adds the missing fsession attach type to the BPF verifier log, documentation, and bpftool. The fsession attach type was introduced but not reflected in the verifier's textual output or in user-facing tools, making it harder to debug programs using that hook. This patch fixes the verifier log output; companion patches update the BPF documentation and bpftool's usage text and man page.
Generated 2026-04-12T09:52:00Z
This period was dominated by Eduard Zingerman's ambitious static stack liveness data flow analysis series, which hit v4 with 14 patches and adds a forward arg-tracking pass to the verifier that enables poisoning of dead stack slots. Mykyta Yatsenko's sleepable tracepoint support reached v9, and Emil Tsalapatis posted a v5 of the arena library and runtime introducing buddy-allocator support and ASAN integration for BPF arena programs.
The final patch of the 14-part v4 static stack liveness series, this change poisons dead stack slots identified by the new dataflow analysis pass. By overwriting slots that the verifier proves are no longer live, it prevents inadvertent reuse of stale values and strengthens the safety guarantees of the BPF verifier. The series introduces 4-byte granularity liveness tracking, a forward arg-tracking pass, and function-instance keying by (callsite, depth) to correctly handle subprogram calls. Companion selftest patches validate the new behavior against both new and existing verifier test cases.
bpf: introduce forward arg-tracking dataflow analysis
This patch is the algorithmic core of the static stack liveness series: it adds a forward dataflow analysis pass that tracks which stack slots are written before being read, enabling the verifier to identify dead writes. Unlike the existing backward liveness pass, this forward pass computes arg-tracking results stored in bpf_liveness masks so they can be queried during normal verification. The approach handles subprogram calls by keying func_instances on (callsite, depth) pairs.
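A drastically simplified model of the idea, tracking writes that are never followed by a read within one straight-line block (toy code, far from the verifier's real state tracking and CFG handling):

```c
#include <stdint.h>

/* Toy forward pass: walk instructions in order, record which of 64 stack
 * slots have been written since their last read, and report the slots
 * whose final write was never consumed (dead stores). */
enum op { WRITE, READ };
struct insn { enum op op; int slot; };

static uint64_t dead_writes(const struct insn *prog, int n)
{
	uint64_t written = 0; /* slots written since their last read */

	for (int i = 0; i < n; i++) {
		uint64_t bit = 1ULL << prog[i].slot;

		if (prog[i].op == WRITE)
			written |= bit;
		else
			written &= ~bit; /* the write was consumed */
	}
	return written;
}
```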
bpf: Add sleepable support for raw tracepoint programs
The first patch of a 6-part v9 series enabling BPF tracepoint programs to be marked sleepable, allowing them to call kfuncs and helpers that may block. This patch extends raw tracepoint support by running programs via a new bpf_prog_run_array_sleepable() helper that takes an RCU read-side lock safe for sleeping contexts. Verifier changes in patch 4 enforce that only raw and classic tracepoint program types may carry the sleepable flag. libbpf gains matching SEC() handlers and the series ships with selftests covering both raw and classic tracepoint flavors.
bpf: Upgrade scalar to PTR_TO_ARENA on arena pointer addition
This verifier change allows a scalar value added to a PTR_TO_ARENA pointer to itself be upgraded to a PTR_TO_ARENA, enabling more ergonomic arena-relative pointer arithmetic in BPF programs without requiring a full re-cast. It is the foundation patch for a 9-part v5 series that also introduces a userspace libarena scaffolding, an arena ASAN runtime, a buddy allocator library, and integration tests with ASAN support. The arena memory model is increasingly important for BPF programs that manage their own heap.
bpf: Enforce regsafe base id consistency for BPF_ADD_CONST scalars
This verifier fix ensures that when two scalar registers are compared for equivalence via regsafe(), their base_id fields are treated consistently for scalars produced by BPF_ADD_CONST operations. Without this check, the verifier could incorrectly mark two states as equivalent even when their add_const chains differ, potentially allowing unsound pruning. The companion patch adds a selftest to exercise the base_id consistency requirement directly.
bpf: Use kmalloc_nolock() universally in local storage
This patch (2/3, v2) extends the use of kmalloc_nolock() throughout the BPF local storage implementation so that allocations in IRQ and NMI contexts no longer need to fall back to pre-allocated memory. The companion patch removes the now-unnecessary gfp_flags plumbing from bpf_local_storage_update(), simplifying the call chain. The first patch in the series drops kmalloc tracing from the local storage create benchmark since it is no longer representative.
bpf: Fix Null-Pointer Dereference in kernel_clone() via BPF fmod_ret on security_task_alloc
This v2 fix addresses a null pointer dereference triggered when a BPF fmod_ret program attached to security_task_alloc returns a non-zero error code: kernel_clone() proceeds to call copy_process() which may dereference a task pointer that was never fully initialised. The fix adds an early return in the relevant path when the fmod_ret hook indicates failure, preventing the use-after-free or null dereference. A selftest validates the correct return value behavior of fmod_ret for this hook.
Generated 2026-04-11T10:00:00Z
Activity over this period was dominated by Eduard Zingerman's static stack liveness data flow analysis series, which progressed through three revisions (v1, v2, v3) and implements a new verifier pass to track dead stack slots and poison them at verification time. Daniel Borkmann contributed a fix to drop pkt_end markers after arithmetic operations to prevent the verifier's is_pkt_ptr_branch_taken() from making incorrect branch decisions, while Feng Yang addressed a null-pointer dereference in kernel_clone() triggered by a BPF fmod_ret program attached to security_task_alloc.
bpf: share several utility functions as internal API
This patch opens the 13-patch v3 series implementing static stack liveness data flow analysis by refactoring several internal verifier utilities into a shared internal API. Exposing these helpers avoids duplication between liveness.c and the rest of the verifier. The series as a whole introduces a new forward dataflow analysis pass that precisely tracks which BPF stack slots are live across a program's execution paths, feeding into improved liveness masks. Later patches in the series build on this foundation to identify and poison dead stack slots, improving both safety and verifier diagnostics.
bpf: introduce forward arg-tracking dataflow analysis
Introduces the core new analysis pass in the static stack liveness series: a forward arg-tracking dataflow analysis that computes which subprogram arguments and stack slots are actually consumed during execution. This complements the existing backward liveness analysis by propagating use information in the forward direction through the CFG. The results are recorded in bpf_liveness masks, enabling the verifier to distinguish truly live slots from dead ones with higher precision. This is the algorithmic heart of the feature, upon which the subsequent logging improvements and dead-slot poisoning depend.
The final patch of the v3 static stack liveness series implements the actual poisoning of stack slots determined to be dead by the new analysis pass. Dead slots are written with a special poison marker during verification, ensuring that any path the verifier missed which accesses them will be caught. This provides a defense-in-depth safety property and improves the quality of error messages when BPF programs touch uninitialized or logically dead stack memory. Accompanying selftests in patches 12/13 and earlier verify both the analysis results and the poisoning behavior.
bpf: Fix Null-Pointer Dereference in kernel_clone() via BPF fmod_ret on security_task_alloc
Fixes a null-pointer dereference in kernel_clone() that occurs when a BPF fmod_ret program attached to the security_task_alloc LSM hook returns a non-zero (error) value. In that case the fmod_ret causes an early return from the hook, bypassing actual task allocation, but the caller still dereferences the resulting null task pointer. The fix adjusts the error path to correctly handle the case where fmod_ret aborted allocation before a task object was produced. This is v2 of the series; patch 2/2 adds selftests exercising the corrected behavior.
bpf: Drop pkt_end markers on arithmetic to prevent is_pkt_ptr_branch_taken
Fixes a verifier bug where pkt_end pointer markers were incorrectly retained after arithmetic operations (scalar addition or subtraction) on a packet-end pointer. Preserving the marker after arithmetic causes is_pkt_ptr_branch_taken() to treat the derived pointer as a genuine pkt_end boundary, leading to incorrect branch-pruning decisions and potential unsoundness. The fix drops the pkt_end marker whenever arithmetic is performed on such a pointer, since the result no longer carries the same semantic guarantee. Patch 2/2 adds a selftest reproducing the stale pkt range scenario to prevent regressions.
Generated 2026-04-16T00:00:00Z
Today's bpf-next activity was dominated by two major series: Eduard Zingerman's 14-patch overhaul introducing static stack liveness data flow analysis in the verifier, and Mykyta Yatsenko's RFC for a new resizable BPF hash map backed by the kernel's rhashtable infrastructure. Additional notable work includes Kumar Kartikeya Dwivedi's verifier warning message framework, enabling non-fatal deprecation warnings during program load, and Daniel Borkmann's fix for ld_{abs,ind} failure path analysis in BPF subprograms.
bpf: share several utility functions as internal API
This is the opening patch in a 14-part series introducing static stack liveness data flow analysis into the BPF verifier. It refactors several internal utility functions into a shared API to be reused by the upcoming liveness analysis pass. The broader series upgrades stack-slot tracking to 4-byte granularity and introduces a forward arg-tracking dataflow analysis, culminating in dead stack slot poisoning — marking unused stack slots to catch uninitialized reads more reliably. The work also includes logging improvements and extensive selftests covering the new analysis behavior.
This RFC introduces a new BPF map type backed by the kernel's rhashtable infrastructure, enabling dynamically resizable hash maps without the fixed-capacity constraints of BPF_MAP_TYPE_HASH. The 18-patch series implements full lookup/update/delete operations, batch ops, BPF iterators, timer and workqueue support, and libbpf integration. This addresses long-standing performance cliffs when BPF hash maps approach their pre-allocated capacity, as resizing happens transparently at runtime. bpftool documentation and comprehensive selftests round out the RFC.
bpf: Add support for verifier warning messages
This patch introduces a new BPF verifier infrastructure for emitting non-fatal warning messages to userspace during program load, distinct from errors that reject programs outright. The six-patch series adds a KF_DEPRECATED flag for kfuncs, a __bpf_kfunc_replacement() annotation to guide migration, and libbpf support to surface warnings by default. Source location information is exposed by making find_linfo widely available within the verifier. This closes an important ergonomics gap where developers had no in-band signal for deprecated or suboptimal BPF patterns.
bpf: Propagate error from visit_tailcall_insn
This series fixes a verifier bug where errors returned by visit_tailcall_insn were silently discarded during subprogram analysis, potentially allowing malformed programs through verification. A second patch corrects the failure-path analysis for ld_abs and ld_ind instructions when used inside subprograms. A third patch removes an overly narrow static qualifier on a local subprog pointer to support the fix. Selftests are added to cover the previously undetected failure paths, and this is the second revision following initial review feedback.
bpf: Reject sleepable kprobe_multi programs at attach time
kprobe_multi programs execute in a non-preemptible context where sleeping would cause a kernel crash, yet the BPF subsystem previously accepted programs with the sleepable flag for this attach type. This patch adds an explicit check at attach time to reject the sleepable flag in combination with BPF_TRACE_KPROBE_MULTI, returning a clear error rather than silently misbehaving. A selftest verifies the rejection behavior. This is the fifth revision of the series, refined through several rounds of review.
selftests/bpf: Add BPF struct_ops + livepatch integration test
This selftest exercises the interaction between BPF struct_ops programs and the kernel livepatch infrastructure, which allows BPF programs to replace kernel functions in a structured, reversible way. The test verifies that struct_ops-based function replacement behaves correctly alongside livepatch semantics, covering both attachment and detachment paths. This is important validation for a relatively new capability that enables BPF programs to participate in live kernel patching workflows.
libbpf: Allow use of feature cache for non-token cases
libbpf's BTF feature detection previously bypassed the feature cache in code paths that did not involve a BPF token, leading to redundant kernel probes on repeated calls. This patch relaxes that requirement so the feature cache is consulted and populated regardless of token availability. The companion patch adds a BTF sanitization selftest validating BTF layout correctness under various configurations. This is the third revision of the two-patch series.
bpf: add missing fsession to the verifier log
The BPF_ATTACH_TYPE_FSESSION attach type was missing from the verifier log output, bpftool's usage strings, and kernel documentation, leaving it as an undocumented attach type in all developer-facing surfaces. This three-patch series adds fsession to the verifier log, BPF documentation, and bpftool usage output, ensuring consistency across tooling. This is the second revision addressing minor style feedback from the initial submission.
Generated 2026-04-09T10:30:00Z
April 7-8 saw broad activity across verifier correctness, networking, and tooling. Kumar Kartikeya Dwivedi submitted a series adding verifier warning message support for deprecated kfuncs, while Daniel Borkmann fixed linked register delta tracking bugs in the verifier. Nick Hudson's v3 series introduced new tunnel decapsulation flags for bpf_skb_adjust_room, and Andrey Grodzovsky's kprobe symbol disambiguation fix reached v7.
bpf: Add support for verifier warning messages
This v2 series introduces a new verifier warning infrastructure that allows the BPF verifier to emit non-fatal warning messages to users, separate from hard errors. The series leverages KF_DEPRECATED to trigger warnings for deprecated kfuncs and adds a __bpf_kfunc_replacement() annotation to point developers toward preferred replacements. libbpf is updated to flush these warnings by default, giving developers earlier visibility into deprecated API usage without causing program rejection.
bpf: Fix linked reg delta tracking when src_reg == dst_reg
This series fixes two related verifier bugs in linked register delta tracking. The first patch addresses a case where src_reg == dst_reg causes stale delta state to propagate incorrectly through register linking. The second patch ensures the delta field is cleared whenever a register's ID is reset for non-add/sub operations, preventing stale deltas from leaking through ID reassignment. Both fixes are accompanied by targeted selftests.
tracing: Prefer vmlinux symbols over module symbols for unqualified kprobes
Now at v7 (with a concurrent v6 also posted on the same day), this patch stabilizes the fix for kprobe symbol disambiguation when a module symbol shadows a vmlinux symbol of the same name. Unqualified kprobe attachments now correctly prefer the vmlinux symbol, preventing inadvertent tracing of module code. A selftest covering duplicate symbol handling is included.
bpf: add BPF_F_ADJ_ROOM_DECAP_* flags for tunnel decapsulation
Part of the v3 'bpf: decap flags and GSO state updates' series, this patch introduces new BPF_F_ADJ_ROOM_DECAP_* flags for the bpf_skb_adjust_room helper to handle tunnel decapsulation scenarios correctly. A companion patch clears tunnel GSO state in skb_adjust_room when decapping, addressing correctness issues for BPF programs performing software tunnel decap. The series also refactors ADJ_ROOM flag masks and adds guard rails for invalid flag combinations.
bpf: add missing fsession to the verifier log
This v2 series adds missing support for the fsession BPF attach type across the verifier log, BPF documentation, and bpftool. The fsession attach type was supported in the kernel but absent from these user-facing surfaces, making it invisible to developers using introspection tools. The three-patch series ensures fsession is consistently recognized and displayed alongside other attach types.
bpf: Retire rcu_trace_implies_rcu_gp()
This patch removes the rcu_trace_implies_rcu_gp() helper from the BPF RCU machinery; it reported whether an RCU Tasks Trace grace period also implied a regular RCU grace period, letting callers skip a redundant wait. As the kernel RCU subsystem has matured, this workaround is no longer necessary and its removal simplifies the BPF memory model and reduces maintenance burden.
bpf: Upgrade scalar to PTR_TO_ARENA on arena pointer addition
The v4 arena library and runtime series continues to appear in this period, covering the core verifier change and an extensive libarena user-space test library. The kernel patch upgrades a scalar register to PTR_TO_ARENA when derived from arena pointer arithmetic, enabling safe arena pointer tracking in the BPF verifier. The selftest side introduces a complete arena library with buddy allocator and ASAN runtime support.
Generated 2026-04-08T12:00:00Z
Activity on April 6-7 was dominated by two substantial series: Emil Tsalapatis's v4 arena library and runtime series, which introduces a BPF memory arena with buddy allocator and ASAN support, and Kumar Kartikeya Dwivedi's v5 series enabling variable offsets for syscall PTR_TO_CTX access. Additional notable work includes Andrey Grodzovsky's RFC for fixing kprobe attachment priority when module symbols shadow vmlinux symbols, and smaller fixes for dynptr reference handling and insn_array offset loads.
bpf: Upgrade scalar to PTR_TO_ARENA on arena pointer addition
Part of the v4 'Introduce arena library and runtime' series, this patch updates the BPF verifier to upgrade a scalar register to PTR_TO_ARENA when it results from adding a scalar to an arena pointer. This is a key verifier change that enables safe tracking of pointers derived from BPF arena memory regions. The companion patches introduce a full arena user-space library for BPF selftests, including a buddy allocator and ASAN runtime integration.
bpf: Support variable offsets for syscall PTR_TO_CTX
This v5 patch extends the BPF verifier to allow variable (non-constant) offsets when accessing PTR_TO_CTX in BPF programs running in syscall context. Previously, only fixed offsets were permitted, which was overly restrictive for programs that compute offsets dynamically. Companion patches also enable unaligned accesses for syscall context and add comprehensive selftests including tests for accesses beyond U16_MAX.
tracing: Prefer vmlinux symbols over module symbols for unqualified kprobes
This RFC v5 patch addresses an ambiguity in kprobe symbol resolution: when a kernel module exports a symbol with the same name as a vmlinux symbol, an unqualified kprobe attachment would previously latch onto the module symbol. The fix ensures vmlinux symbols take precedence for unqualified probes, aligning behavior with user expectations and reducing inadvertent tracing of module code. A selftest covering the duplicate symbol scenario is included.
bpf: Do not ignore offsets for loads from insn_arrays
This v3 fix corrects a bug in the BPF loader where non-zero offsets in insn_array map loads were silently ignored, resulting in incorrect instruction loading. The patch ensures the offset is correctly applied when reading BPF instructions from array maps, preventing subtle program errors that would otherwise be difficult to diagnose. A companion selftest verifies loading from various non-zero offsets.
bpf: Allow overwriting referenced dynptr when refcnt > 1
The BPF verifier currently rejects programs that attempt to overwrite a referenced dynptr even when sibling states still hold a valid reference, causing overly conservative program rejections. This patch relaxes the restriction by tracking the reference count across sibling states and permitting the overwrite when refcnt > 1, ensuring the sibling state can still clean up the dynptr on exit. A selftest demonstrating the previously-rejected but safe pattern is included.
Generated 2026-04-08T12:00:00Z
Activity on April 5-6 was dominated by Yonghong Song's v2 and v3 iterations of the 'Support stack arguments for BPF functions and kfuncs' series, which introduces a new BPF_REG_STACK_ARG_BASE register and extends the BPF calling convention to allow structs larger than 8 bytes to be passed via the stack. The v3 revision refines the design with improved verifier validation, x86_64 JIT support, and comprehensive selftests for both BPF-to-BPF calls and kfunc calls.
bpf: Introduce bpf register BPF_REG_STACK_ARG_BASE
Introduces BPF_REG_STACK_ARG_BASE, a new virtual BPF register used as a base pointer for stack-allocated function arguments. This is the foundational patch in the series enabling BPF functions and kfuncs to accept arguments that cannot fit in the five argument registers (r1-r5). The new register is handled specially by the verifier and JIT backends to track and validate stack argument slots. It allows BPF programs to pass structs larger than 8 bytes by value through a well-defined stack ABI.
bpf: Support stack arguments for bpf functions
Extends the BPF verifier to recognize and validate stack-based argument passing for BPF-to-BPF function calls. The patch teaches the verifier to track argument slots relative to BPF_REG_STACK_ARG_BASE and verify their types, sizes, and liveness. This enables BPF subprograms to receive large struct arguments that cannot fit in registers, matching a common pattern in kernel C code.
bpf: Support stack arguments for kfunc calls
Extends stack argument support to kfunc calls, allowing BPF programs to pass large structs by value to kernel functions exposed via kfuncs. The verifier is updated to check stack argument slots when validating kfunc call sites, ensuring type safety between the BPF caller and the kernel-side parameter declaration. Stack arguments for kfuncs are limited to 8 bytes per slot to match kernel ABI expectations.
bpf: Reject stack arguments in non-JITed programs
Adds a verifier check that rejects programs using stack arguments when running without a JIT compiler. Stack argument passing requires JIT support because the interpreter cannot implement the necessary stack manipulation semantics. This guard ensures the feature is only enabled on platforms and configurations where it is fully supported.
bpf,x86: Implement JIT support for stack arguments
Implements x86_64 JIT backend support for emitting code to set up and tear down stack argument frames for BPF function and kfunc calls. The JIT allocates space on the native stack, copies argument values into position relative to the stack pointer, and passes the base address in the appropriate register. This patch is the concrete implementation that makes the stack argument ABI functional on x86_64.
selftests/bpf: Add verifier tests for stack argument validation
Adds verifier-level selftests that exercise both positive and negative cases for stack argument validation, including type mismatches, size violations, and use of uninitialized stack slots. These tests complement the functional selftests from earlier patches and ensure the verifier correctly rejects malformed programs using stack arguments. The negative tests cover the greater-than-8-byte kfunc stack argument restriction introduced in the series.
Generated 2026-04-06T10:13:03Z
No patches were submitted to the bpf mailing list during this period.
Generated 2026-04-05T09:43:13Z
The bpf-next mailing list saw active development on April 3-4, 2026, centered on BPF verifier improvements, JIT code generation, and libbpf usability enhancements. Alexei Starovoitov continued iterating on preparatory patches for static stack liveness analysis (reaching v5), while Xu Kuohai posted a 12th revision of the ENDBR/BTI CFI series for x86 and arm64. Emil Tsalapatis introduced a comprehensive arena library and runtime for BPF programs, and Chengkaitao proposed new infrastructure to simplify kfunc verifier registration.
bpf: Do register range validation early
This patch moves register range validation to an earlier stage in the BPF verifier pipeline as a preparatory step for implementing static stack liveness analysis. By validating register ranges sooner, subsequent analysis passes can make more informed decisions about stack usage. This is the first of a 6-patch v5 series from Alexei Starovoitov that lays the groundwork for static stack liveness, a significant verifier enhancement aimed at improving precision in BPF program analysis.
bpf: Add bpf_compute_const_regs() and bpf_prune_dead_branches() passes
Introduces two new compiler-style analysis passes to the BPF verifier: constant register computation and dead branch pruning. These passes allow the verifier to identify and eliminate unreachable code paths before the main verification pass runs, reducing the state space that must be explored. This is foundational infrastructure for static stack liveness analysis, which will allow the verifier to precisely track stack slot usage across subprograms and enable future optimizations.
bpf: Add helper and kfunc stack access size resolution
The final patch in Alexei Starovoitov's v5 series adds logic for resolving the sizes of stack accesses made by helpers and kfuncs, a prerequisite for accurate static stack liveness computation. Understanding how much stack space each helper or kfunc call may touch is essential for the verifier to determine which stack slots are live at any given program point. Together with the earlier patches in the series, this completes the preparatory infrastructure for static stack liveness.
bpf: Introduce BTF_SET/ID_SUB and BPF_VERIF_KFUNC_DEF
Introduces new BTF infrastructure (BTF_SET/ID_SUB) and a BPF_VERIF_KFUNC_DEF macro to simplify how the BPF verifier registers and matches kfunc verification callbacks. Currently kfunc verification logic requires manual BTF set management and is scattered across the codebase; this refactoring provides a unified, declarative mechanism for associating kfuncs with their verifier hooks. The accompanying patch applies this new infrastructure to rbtree kfuncs as a concrete demonstration.
bpf: Add helper to detect indirect jump targets
Adds a helper function to the BPF JIT infrastructure for identifying indirect jump targets in BPF programs, enabling subsequent patches to emit control-flow integrity (CFI) landing pad instructions at those sites. On x86 this means emitting ENDBR instructions (for Intel IBT), and on arm64 BTI instructions. This is the 12th revision of a mature series by Xu Kuohai that improves BPF JIT compatibility with CPU-enforced CFI features, with both x86 and arm64 backends covered.
bpf: Upgrade scalar to PTR_TO_ARENA on arena pointer addition
Enhances the BPF verifier to recognize that a scalar value resulting from arithmetic on an arena pointer should itself be typed as PTR_TO_ARENA, improving the ergonomics and correctness of arena-based BPF programs. This is the core kernel-side change in a 9-patch v3 series that also introduces a libarena library and runtime for BPF, including a buddy allocator and ASAN integration. The series significantly lowers the barrier for BPF programs to perform dynamic memory management using arenas.
libbpf: Auto-upgrade kprobes to multi-kprobes when supported
This RFC proposes transparent automatic upgrading of single kprobe attachments to the more efficient multi-kprobe mechanism when the kernel supports it, mirroring a companion patch that does the same for uprobes. Multi-kprobes attach to multiple functions via a single file descriptor, reducing per-attach overhead considerably. The series (RFC v3) also adds a libbpf feature probe to detect kernel multi-kprobe link support, making the upgrade decision automatic and safe across kernel versions.
Generated 2026-04-04T09:42:10Z
A busy day on bpf-next dominated by verifier and JIT work. Yonghong Song posted a major 10-patch series introducing stack-based argument passing for BPF functions and kfuncs, enabling larger structs to be passed by value. Alexei Starovoitov continued iterating—reaching v5—on preparatory verifier patches for static stack liveness analysis, while Emil Tsalapatis proposed a new arena library and runtime for BPF selftests.
bpf: Introduce bpf register BPF_REG_STACK_ARG_BASE
First patch in a 10-part series adding stack-based argument passing to BPF functions and kfuncs. It introduces a new virtual register BPF_REG_STACK_ARG_BASE to represent the base of stack-passed arguments in the BPF calling convention. This enables passing large structs by value that exceed the available register count. Subsequent patches in the series add verifier enforcement, x86-64 JIT support, and selftests covering both positive and negative cases.
bpf: Do register range validation early
First patch (v5) in a 6-patch series preparing the verifier for static stack liveness analysis. This patch moves register range validation to an earlier point in the verification pipeline so that subsequent passes can rely on consistent range invariants. The series also adds topological subprogram ordering after check_cfg(), dead branch pruning, and constant register computation passes. A v5 respin was posted within hours of v4, indicating rapid iteration.
bpf: Upgrade scalar to PTR_TO_ARENA on arena pointer addition
First patch (v3) in a 9-part series introducing an arena library and runtime for BPF selftests. This verifier change teaches the BPF verifier to upgrade a scalar register to PTR_TO_ARENA when it is the result of adding a scalar to an arena pointer, improving type-safety for arena-allocated memory. The rest of the series builds libarena scaffolding, an ASAN runtime for detecting memory errors in arena programs, a buddy allocator, and comprehensive selftests.
bpf: Move constants blinding out of arch-specific JITs
First patch (v11) in a 5-patch series that emits ENDBR (x86) and BTI (arm64) instructions at indirect jump targets in BPF JIT-compiled programs to harden against control-flow hijacking attacks. This initial patch refactors constants blinding out of architecture-specific JITs and into shared BPF core code, passing the bpf_verifier_env to the JIT. Later patches add a verifier helper to detect indirect jump targets and the per-arch emission logic for ENDBR and BTI landing pads.
bpf: Refactor reg_bounds_sanity_check
First patch (v3) in a 6-patch series fixing verifier invariant violations surfaced by syzbot. The series refactors the register bounds sanity check, exits early when reg_bounds_sync receives invalid inputs, simulates branches to prune states based on range violations, and removes now-unnecessary invariant violation flags from selftests. These fixes improve the reliability of the verifier's range-tracking logic and address potential incorrect pruning decisions.
bpf: Do not ignore offsets for loads from insn_arrays
Bug fix (v2) correcting the BPF verifier's handling of loads from instruction arrays with non-zero offsets. Previously the offset was silently ignored, leading to incorrect values being read. The fix ensures the offset is properly applied, and a companion selftest patch adds coverage for the various offset scenarios to prevent regressions.
bpf: Refactor dynptr mutability tracking
A v2 verifier cleanup that refactors how dynptr mutability is tracked internally. Instead of scattering mutability checks across dynptr helper validation paths, this patch consolidates the tracking into a cleaner representation. This makes it easier to reason about read-only vs. read-write dynptr semantics and reduces the risk of future correctness bugs when new dynptr types or helpers are introduced.
Generated 2026-04-03T10:00:00Z
April 1-2 saw heavy activity on the verifier and libbpf fronts. Yonghong Song posted a significant new feature series enabling stack-based argument passing for BPF functions and kfuncs with x86_64 JIT support, while Alexei Starovoitov iterated to v3 on preparatory verifier passes for static stack liveness analysis. Paul Chaignon and Kumar Kartikeya Dwivedi also landed verifier improvements addressing invariant violations and variable-offset syscall context access.
bpf: Introduce bpf register BPF_REG_STACK_ARG_BASE
Introduces a new virtual BPF register BPF_REG_STACK_ARG_BASE to support stack-based argument passing for BPF subprograms and kfuncs. This is the first patch in a 10-part series that extends the BPF calling convention beyond the existing five register arguments. Subsequent patches add verifier support, x86_64 JIT code generation, and selftests. This enables BPF programs to call functions with more than five arguments by spilling extra arguments onto the stack, bringing BPF closer to native C calling conventions.
bpf: Add bpf_compute_const_regs() and bpf_prune_dead_branches() passes
Adds two new pre-verification passes to the BPF verifier: bpf_compute_const_regs() performs a lightweight constant propagation to identify registers holding compile-time constants, and bpf_prune_dead_branches() eliminates unreachable code paths before the main verification pass runs. These passes are groundwork for upcoming static stack liveness analysis, which will reduce the state space the verifier must explore. This is patch 4/6 in Alexei's v3 series "bpf: Prep patches for static stack liveness."
bpf: Add helper and kfunc stack access size resolution
Adds logic to the verifier to resolve the access size for stack slots passed to helpers and kfuncs, completing the v3 preparatory series for static stack liveness analysis. When a helper or kfunc receives a pointer to a stack slot, the verifier now computes the precise byte range being accessed rather than conservatively marking the entire slot as live. This precision is necessary for the upcoming static liveness pass to correctly determine which stack slots need to be initialized before use.
bpf: Simulate branches to prune based on range violations
Fixes a class of verifier invariant violations where register range bounds became inconsistent after branch pruning. When the verifier detects that a register's tracked range is provably violated on a branch, it now simulates taking that branch and prunes the state rather than leaving the inconsistency unresolved. This addresses syzbot-reported crashes caused by invalid register states propagating through the verifier. This is patch 4/6 in Paul Chaignon's v3 series "Fix invariant violations and improve branch detection."
bpf: Support variable offsets for syscall PTR_TO_CTX
Extends the BPF verifier to allow variable (non-constant) offsets when accessing syscall program context pointers of type PTR_TO_CTX. Previously, the verifier rejected any non-zero variable offset into a syscall ctx, requiring programs to use only constant offsets. The patch teaches the verifier to track variable offsets and validate bounds at access time, enabling more flexible syscall BPF programs. This is the first patch in Kumar's v4 seven-patch series.
bpf: Do not ignore offsets for loads from insn_arrays
Fixes a bug in the BPF loader where non-zero offsets within instruction arrays were silently ignored when resolving map file descriptors and other relocations. The offset field was being discarded, causing incorrect values to be loaded when programs accessed elements beyond the base of an insn_array. This is a correctness fix affecting programs that use offset-based access patterns into instruction arrays, with accompanying selftests added in patch 2/2.
bpf: Refactor dynptr mutability tracking
Refactors how the BPF verifier tracks whether a dynptr is mutable or read-only, consolidating scattered mutability checks into a cleaner abstraction. Previously, mutability was inferred from the dynptr type and call context at each check site; this patch centralizes the logic to reduce duplication and make the invariants easier to audit. The refactor prepares the codebase for future dynptr extensions without changing existing behavior.
bpf: reject short IPv4/IPv6 inputs in bpf_prog_test_run_skb
Adds input length validation to bpf_prog_test_run_skb() to reject buffers shorter than a minimum IPv4 or IPv6 header when the data is marked as IP traffic. Without this check, a malformed short packet could cause the verifier test runner to access memory beyond the supplied buffer. This is a v3 single-patch fix addressing a potential out-of-bounds read in the BPF test infrastructure.
libbpf: Fix BTF handling in bpf_program__clone()
Fixes a bug in libbpf's bpf_program__clone() where the cloned program did not correctly inherit or reference the parent's BTF object, leading to use-after-free or incorrect BTF type resolution when the cloned program was loaded. The fix ensures the BTF reference is properly managed across the clone operation. This is a v2 single-patch bug fix for an issue discovered in programs that use program cloning with BTF-dependent features.
Generated 2026-04-02T23:24:36Z
The week was headlined by Yonghong Song's sustained effort to add stack argument support for BPF functions and kfuncs, with v2 (23 patches) arriving mid-week and v3 (24 patches) following on Sunday, collectively touching the verifier, x86-64 and arm64 JITs, precision backtracking, and liveness analysis. Kuniyuki Iwashima introduced new BPF_SOCK_OPS hooks for TCP receive low-watermark tuning, enabling fine-grained per-socket control of sk_rcvlowat through a new kfunc. Amery Hung contributed a substantial 12-patch verifier refactor that unifies object relationship tracking and fixes a dynptr use-after-free bug. Justin Suess addressed a deadlock hazard by offloading kptr destructors invoked from NMI context to a work queue, and Yazhou Tang fixed an out-of-bounds read in bpf_patch_call_args() after ten revision cycles.
bpf: Support stack arguments for bpf functions
Core verifier patch from Yonghong Song's v3 series enabling BPF subprograms to receive arguments passed on an auxiliary stack frame when the five argument registers (r1-r5) are exhausted. The verifier learns to validate new stack-based argument slots, track their types, and propagate liveness across call boundaries. This removes the hard five-argument ceiling for BPF-to-BPF calls and aligns the convention with native ABIs.
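The convention can be pictured in plain C: register-passed arguments become ordinary parameters, and everything past the fifth is written into a caller-owned stack area whose address is passed alongside, standing in for the series' r11-based addressing. This is an illustrative analogy, not the kernel encoding.

```c
#include <assert.h>
#include <stdint.h>

/* The sixth and later arguments live in a stack area the caller owns;
 * stack_args plays the role of the r11 base pointer. */
static uint64_t callee(uint64_t a1, uint64_t a2, uint64_t a3,
                       uint64_t a4, uint64_t a5, const uint64_t *stack_args)
{
    /* stack_args[0] and stack_args[1] are arguments six and seven */
    return a1 + a2 + a3 + a4 + a5 + stack_args[0] + stack_args[1];
}

static uint64_t caller(void)
{
    uint64_t extra[2] = { 6, 7 };  /* spilled argument slots */
    return callee(1, 2, 3, 4, 5, extra);
}
```

The verifier's job in this picture is proving that every slot of `extra` is initialized and correctly typed before the call, which is exactly what the liveness and precision patches in the series track.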
bpf: Add precision marking and backtracking for stack argument slots
Extends the verifier's precision backtracking engine to include stack argument slots so that state pruning remains correct when programs use the new calling convention. Without precision tracking for these slots, the verifier could incorrectly prune states and miss safety violations in programs that pass derived or constrained values as stack arguments.
bpf: Support stack arguments for kfunc calls
Extends the stack argument convention to kfunc call sites, letting kernel functions registered as kfuncs accept more than five typed arguments from BPF programs. The verifier validates each stack-passed argument against the kfunc's BTF signature, including type, size, and alignment. This is especially valuable for kfuncs with struct-typed or numerous parameters that could not previously be called with full argument sets.
bpf,x86: Implement JIT support for stack arguments
Implements x86-64 JIT code generation for the new stack argument passing convention, emitting r11-based MOV instructions to write arguments into the callee's stack area before a call. This makes the feature functional on x86-64 and serves as the reference JIT implementation for the feature across the series.
bpf: tcp: Introduce BPF_SOCK_OPS_RCVLOWAT_CB.
Adds a new BPF_SOCK_OPS_RCVLOWAT_CB callback to the SOCK_OPS framework, invoked when TCP needs to determine a socket's effective receive low watermark. This is the foundational piece of BPF-controlled TCP AutoLOWAT, allowing programs to inspect socket and buffer state and dynamically set sk_rcvlowat on a per-socket basis rather than relying on a fixed sysctl value.
bpf: tcp: Add kfunc to adjust sk->sk_rcvlowat.
Provides a kfunc callable within BPF_SOCK_OPS_RCVLOWAT_CB to write a new value back to sk_rcvlowat, completing the TCP AutoLOWAT control loop. Using a kfunc for the write-back, rather than the SOCK_OPS return value, keeps the API unambiguous and extensible. Proper BTF annotations and context guards are included to prevent misuse.
bpf: Offload kptr destructors that run from NMI
Fixes a deadlock hazard that occurs when a BPF kptr destructor is triggered from NMI context, where taking spinlocks required for safe reference-count management is prohibited. The fix defers such destructors to an IRQ work queue so they execute in a safe, non-NMI context. An NMI exerciser selftest accompanies the fix to verify correctness under stress.
bpf: Refactor object relationship tracking and fix dynptr UAF bug
Central patch in Amery Hung's 12-patch verifier refactor that unifies how the verifier tracks ownership relationships between referenced objects (kptrs, dynptrs, slices). The refactor also fixes a use-after-free bug where a dynptr's backing object could be freed while a slice pointing into it remained live. Subsequent patches extend the unified tracking to helpers and kfuncs and add regression tests.
bpf: Unify referenced object tracking in verifier
Consolidates the previously separate code paths for tracking referenced kptrs and dynptrs into a single, unified mechanism in the BPF verifier. This reduces duplication, makes it easier to reason about correctness, and lays the groundwork for future types of referenced objects to be tracked with minimal additional code.
bpf: Fix out-of-bounds read in bpf_patch_call_args()
Fixes an out-of-bounds read in bpf_patch_call_args() triggered when a BPF program has a very large number of instructions, causing the patched instruction array to be accessed beyond its allocated size. A companion patch addresses a related s16 truncation bug in call-offset encoding for large bpf-to-bpf call offsets. This is the tenth revision of the series, reflecting thorough review.
bpf: enforce VFS constraints for xattr related BPF kfuncs
Hardens the xattr BPF kfuncs by enforcing the same VFS-level constraints (capability checks, namespace restrictions, and immutability flags) that the standard getxattr/setxattr syscall path enforces. Without these checks, a BPF LSM program could read or write extended attributes that the calling process would not be permitted to access via normal syscalls. The series also adds negative selftests verifying each constraint is correctly enforced.
Generated 2026-05-12T10:00:00Z
The April 27 – May 4 week was busy across multiple BPF subsystems. The most active thread by patch volume was Ricardo B. Marlière's long-running selftests/bpf build-robustness series, which reached v11 and makes the test suite tolerate partial kernel configurations without aborting. On the JIT front, Kuan-Wei Chiu posted initial BPF JIT support for both m68k and RISC-V RV32, while Puranjay Mohan extended the arm64 JIT to handle stack arguments and contributed an XDP load-balancer benchmark suite. Matt Bobrowski addressed two xattr kfunc issues: a crash on negative dentries and a broader VFS constraint enforcement series. Kaitao Cheng's v10 of the extended bpf_list kfunc API landed new list manipulation helpers, and Paul Chaignon added per-subprogram instruction-count reporting to improve verifier diagnostics.
m68k, bpf: Add initial BPF JIT compiler support
This v2 patch adds a BPF JIT for the m68k (Motorola 68000) architecture, eliminating the interpreter fallback on that platform. The JIT maps the full core BPF ISA — ALU operations, memory access, branches, and BPF-to-BPF calls — onto m68k assembly. The v2 revision addresses register allocation and instruction selection feedback from the initial posting. Gaining a JIT on m68k is meaningful for embedded and legacy m68k systems that run Linux and want the performance benefits of native BPF execution.
bpf: enforce VFS constraints for xattr related BPF kfuncs
This v2 patch enforces standard VFS permission and existence checks inside the BPF xattr kfuncs (bpf_get_dentry_xattr, bpf_set_dentry_xattr, bpf_remove_dentry_xattr). Without this, BPF LSM hook programs could bypass capability checks and operate on dentries that userspace code cannot access. The patch aligns kfunc semantics with what the normal VFS xattr path enforces, closing an inconsistency that could be exploited for privilege escalation in LSM-heavy environments. It pairs with the negative-dentry crash fix also posted this week.
bpf: fix crash in bpf_[set|remove]_dentry_xattr for negative dentries
This v2 single-patch fix prevents a NULL dereference crash in bpf_set_dentry_xattr and bpf_remove_dentry_xattr when the supplied dentry is negative (i.e., points to a path that does not exist). Negative dentries lack an associated inode, and the kfuncs were unconditionally dereferencing d_inode without checking first. The fix adds a guard that returns -ENOENT for negative dentries, matching VFS behavior and eliminating the crash vector for any BPF program that encounters a not-yet-created path.
bpf, arm64: Add JIT support for stack arguments
This patch (v2, 2/3) implements stack-based argument passing in the arm64 BPF JIT, allowing BPF programs to call kernel functions that take more arguments than fit in the eight AArch64 argument registers. The series remaps BPF_REG_0 from x7 to x8 to free the last argument slot (patch 1), then uses the stack for spilling additional arguments (patch 2), and adds selftests that verify the calling convention on arm64 (patch 3). This unblocks kfunc authors who need to pass large structs or many parameters to helper functions from BPF on arm64.
selftests/bpf: Add XDP load-balancer benchmark
This seven-patch series adds a complete XDP load-balancer benchmark to the BPF selftests suite, including a BPF program that performs L4 load balancing, a userspace driver, a batch-timing library, a bpf-nop baseline benchmark, and a run script. The benchmark is designed to measure end-to-end XDP packet-processing throughput and latency, giving developers a reproducible way to evaluate JIT and verifier changes against a realistic XDP workload. It complements the existing map and program-focused benchmarks already in selftests/bpf.
bpf: Extend the bpf_list family of APIs
This v10 eight-patch series extends the BPF linked-list kfunc API with several new operations: bpf_list_del (remove a node from a list without freeing), bpf_list_add (insert a node after a given position), and bpf_list_is_first/last/empty (query helpers). It also introduces the __nonown_allowed annotation so non-owning list-node pointers can be passed as kfunc arguments. These additions allow BPF programs to implement more sophisticated in-kernel data structures using the existing bpf_list_head/node primitives, moving toward parity with the C linked-list API available to kernel modules.
bpf: Add LINK_DETACH support for perf link
This v3 two-patch series adds LINK_DETACH support to perf-type BPF links, enabling userspace to detach a BPF program from its perf event via the BPF_LINK_DETACH command without destroying the link object. Previously, perf links did not implement the detach operation, which prevented use cases that require temporarily suspending a BPF program attached to a perf event while keeping the link fd alive for later re-use. The selftest patch validates that a detached perf link stops delivering events and can be distinguished from a fully destroyed link.
bpf: Print breakdown of insns processed by subprogs
This v3 two-patch series makes the BPF verifier log include a per-subprogram breakdown of the instructions processed count alongside the existing aggregate figure. When a complex program composed of multiple subprograms approaches the verifier complexity limit, it can be hard to identify which subprogram is the bottleneck; the new output lines directly attribute instruction counts to each function. The companion selftest verifies the format of the new log lines. This is a developer-facing diagnostic improvement with no runtime overhead.
bpf: Fix out-of-bounds read in bpf_patch_call_args()
This v9 three-patch series fixes two related bugs in bpf_patch_call_args(): an out-of-bounds read that occurs when the insn array is grown for a large number of subprograms, and a silent s16 truncation of call offsets that overflows when a BPF-to-BPF call target is far away in the instruction stream. The OOB read can lead to kernel memory exposure; the truncation causes incorrect branch targets and potential crashes at runtime. The third patch adds a selftest with a program that generates a large call offset to act as a regression guard.
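The truncation half of the bug is easy to reproduce in isolation: a call offset that fits in 32 bits but not in 16 silently wraps when stored into a 16-bit field, yielding a wrong branch target. A minimal sketch of that failure mode (the helper is illustrative; signed narrowing behavior shown is the wrap-around all common platforms exhibit):

```c
#include <assert.h>
#include <stdint.h>

/* Model of the s16 truncation: storing a wide call offset into a
 * 16-bit immediate field silently wraps instead of failing. */
static int32_t store_as_s16(int32_t call_off)
{
    int16_t imm = (int16_t)call_off;  /* silent truncation */
    return imm;
}
```

An offset of 40000 comes back as 40000 - 65536 = -25536, which as a branch target points somewhere entirely different; the fix is to range-check before narrowing.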
bpf: Fix NMI deadlock in referenced kptr destructors
This four-patch series fixes a deadlock that can occur when a referenced kptr's destructor is called from NMI context, where taking the locks normally acquired during BTF teardown is not safe. The fix uses rcu_work to defer BTF reference dropping out of NMI context, and limits the fields compared in btf_record_equal to avoid unnecessary lock acquisition. A selftest reproducer is included to verify that the deadlock path is closed. The bug affects any BPF program that holds a referenced kptr and is invoked from a perf NMI handler.
xskmap: reject TX-only AF_XDP sockets
This v3 single patch adds a check in xskmap insertion that rejects TX-only AF_XDP sockets (those created without an RX ring). XSK maps are used for XDP redirect, which inherently requires an RX ring to receive packets; inserting a TX-only socket previously succeeded but caused silent misbehavior at redirect time. The fix returns -EINVAL early during map update if the socket lacks an RX ring, making the error explicit and preventing subtle data-path failures in production XDP setups.
selftests/bpf: Tolerate partial builds across kernel configs
This v11 eleven-patch series makes the selftests/bpf Makefile and test runner gracefully handle builds where some BPF objects or skeleton headers could not be compiled due to missing kernel config options, rather than failing the entire build. Key changes include a BPF_STRICT_BUILD toggle, tolerating BPF and skeleton generation failures, skipping tests whose objects were not built, and tolerating missing files during install. The series allows developers and CI systems running non-standard kernel configurations (e.g., distro kernels) to still execute the subset of BPF selftests that do apply to their config.
Generated 2026-05-06T00:00:00Z
The week of April 20-27 was one of the most active bpf-next periods in recent months, with 100 patches across 19 distinct series touching nearly every layer of the BPF stack. The headline feature is Yonghong Song's 18-patch series adding full stack-argument support for BPF functions and kfuncs, complete with x86 and arm64 JIT backends, which lifts the long-standing six-argument limit. Mykyta Yatsenko drove two major features in parallel: a 10-patch resizable hash map backed by rhashtable and a 6-patch series (reaching v13) enabling sleepable tracepoint programs. On the verifier side, Eduard Zingerman continued refining the cnum-based range representation and Amery Hung posted a 9-patch series unifying dynptr object-relationship tracking and fixing a UAF bug.
bpf: Support stack arguments for bpf functions
The first patch of an 18-patch series that introduces a stack-based calling convention allowing BPF programs and kfuncs to accept more than five arguments. When a callee requires extra arguments beyond the five argument registers (r1-r5), a pointer in r11 (BPF_REG_PARAMS) points to an on-stack argument area that the verifier validates. The series covers verifier liveness, precision backtracking, x86 and arm64 JIT backends, and a comprehensive test suite. This is the most significant BPF calling-convention change since the subsystem was created.
bpf: Support stack arguments for kfunc calls
Extends the stack-argument calling convention to kfunc calls, allowing kernel functions exposed via the kfunc mechanism to declare parameters beyond position five. The verifier validates that BPF programs populate the stack argument area correctly before the call and that argument types match the kfunc's BTF annotations. This removes the need to bundle excess parameters into a context struct, enabling cleaner kfunc APIs for networking, storage, and LSM use cases.
bpf,x86: Implement JIT support for stack arguments
Implements x86-64 JIT emission for the new stack-argument calling convention. The JIT allocates space in the caller's stack frame, marshals excess arguments into the argument area, passes r11 pointing to it, and tears down the area on return. Stack arguments are rejected when the BPF interpreter is in use, so this patch is the prerequisite for the feature to be enabled on x86 systems.
bpf: Implement resizable hashmap basic functions
Introduces BPF_MAP_TYPE_RHASH, a new map type backed by the kernel's rhashtable that resizes automatically as entries are inserted and deleted, eliminating the need to pre-allocate a fixed capacity. This addresses a common operational pain point where over-provisioned hash maps waste memory while under-provisioned ones drop entries under load. The 10-patch v3 series adds batch ops, iterators, timer/workqueue support, libbpf integration, and bpftool documentation.
bpf: Add sleepable support for raw tracepoint programs
The first patch of a 6-patch v13 series enabling BPF programs attached to raw and classic tracepoints to be marked as sleepable. Sleepable tracepoint programs can acquire locks, call sleeping kfuncs, and perform GFP_KERNEL allocations, unlocking use cases such as per-event kernel object allocation that are currently impossible. The series adds the verifier gating, a new bpf_prog_run_array_sleepable() helper, libbpf section handlers, and a full selftest suite.
bpf: representation and basic operations on circular numbers
Third iteration of the foundational patch introducing cnum32/cnum64 typed structs to replace the eight loose min/max scalar fields in bpf_reg_state. Circular-number semantics correctly model modular arithmetic for 32-bit sub-register range tracking, preventing a class of precision loss in the verifier. The v3 iteration incorporates reviewer feedback on the arithmetic primitives and adds more detailed correctness arguments.
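The appeal of circular numbers is that a range becomes a start point plus a length on the 2^32 circle, and both membership and subsumption reduce to unsigned subtraction, which handles wrap-around naturally. The sketch below is an illustrative model of that idea, not the series' actual cnum32 definition; the `crange32_within` helper is the circular analogue of the verifier's range_within() subsumption check.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* A circular u32 range: covers start .. start+len (mod 2^32). */
struct crange32 {
    uint32_t start;
    uint32_t len;
};

static bool crange32_contains(struct crange32 r, uint32_t x)
{
    /* Unsigned subtraction maps the circle so r.start becomes 0;
     * membership is then a single comparison, wrapping included. */
    return (uint32_t)(x - r.start) <= r.len;
}

/* "inner fits entirely within outer", the state-pruning subsumption test */
static bool crange32_within(struct crange32 outer, struct crange32 inner)
{
    uint32_t off = inner.start - outer.start;
    return inner.len <= outer.len && off <= outer.len - inner.len;
}
```

A range starting at 0xFFFFFFF0 with length 0x20 correctly contains 0 and 0x10 even though "min > max" in the flat min/max view, which is exactly the case the loose scalar fields mishandle.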
bpf: range_within() must check cnum ranges instead of min/max pairs
Fixes a correctness bug in range_within(), the verifier's state-subsumption check used during state pruning: it was comparing raw min/max fields instead of the new cnum range representation, causing the pruner to incorrectly merge states that differ in circular-number range. Incorrect pruning can lead the verifier to accept programs that should be rejected. The companion patch (2/2) adds a regression test that triggers the wrong behaviour before the fix.
bpf: Unify dynptr handling in the verifier
The first patch of a 9-patch v3 series that consolidates the verifier's scattered dynptr-validation logic into a single unified code path. Previously each dynptr type (ringbuf, skb, xdp, etc.) had its own partially duplicated checks; the refactor eliminates the duplication and provides a consistent foundation for the bug fixes and new tests that follow in the series.
bpf: Refactor object relationship tracking and fix dynptr UAF bug
The core patch of the v3 dynptr series, reworking how the verifier tracks ownership relationships between BPF objects (dynptrs, slices, and the underlying memory they reference). The refactor also fixes a use-after-free bug where the verifier failed to invalidate derived dynptr slices after the parent object was freed, potentially allowing a program to access freed memory at runtime.
bpf: add bpf_init_inode_xattr kfunc for atomic inode labeling
Introduces a new kfunc that BPF LSM programs can call from the inode_init_security hook to atomically attach an xattr to an inode before it becomes visible to the rest of the system. This fills a gap for security labeling workflows that need a label to be present from the moment of first access, without races against concurrent readers. The v1 series includes selftests exercising the kfunc across multiple inode types.
bpf, x86: Granlund-Montgomery optimization for 64-bit div/mod by immediate
Applies the Granlund-Montgomery strength-reduction technique to the x86 BPF JIT to replace 64-bit integer division and modulo by compile-time immediates with a multiply-shift sequence, avoiding the expensive DIV/IDIV instructions. The optimisation can be several times faster than hardware division on modern CPUs. This is v3 of the patch, incorporating earlier feedback on overflow edge cases and negative immediate handling.
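As a worked example of the technique for one divisor: n/10 equals the high 64 bits of n * ceil(2^67/10), shifted right by a further 3, the classic Granlund-Montgomery result compilers use to avoid the DIV instruction. The standalone sketch below uses GCC/Clang's `__uint128_t` to get the high half of the product; a JIT would emit the equivalent MUL and SHR instructions directly.

```c
#include <assert.h>
#include <stdint.h>

/* Division by the constant 10 via multiply-shift: valid for all uint64_t
 * inputs because magic * 10 = 2^67 + 2 satisfies the Granlund-Montgomery
 * error bound for a 3-bit post-shift. */
static uint64_t div10(uint64_t n)
{
    const uint64_t magic = 0xCCCCCCCCCCCCCCCDull;       /* ceil(2^67 / 10) */
    uint64_t hi = (uint64_t)(((__uint128_t)n * magic) >> 64);
    return hi >> 3;                                      /* total shift: 67 */
}
```

On modern x86 a 64-bit multiply retires in a few cycles while DIV can take tens, which is where the "several times faster" claim comes from.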
net: add missing syncookie statistics for BPF custom syncookies
Fixes missing counter increments in the network stack when BPF programs handle SYN cookies via the kfunc-based custom syncookie API, ensuring that /proc/net/netstat SYN cookie counters accurately reflect BPF-generated cookies. Without this fix, operators relying on standard Linux TCP statistics cannot detect or diagnose syncookie activity handled by BPF programs. The v3 series adds a selftest that verifies the counters increment correctly.
Generated 2026-04-28T00:00:00Z
The week of April 13–20 saw substantial activity across the BPF subsystem. The most prominent contribution was Yonghong Song's stack-arguments series (reaching v6), which enables BPF functions and kfuncs to accept more than five arguments by spilling extras onto the stack, complete with x86-64 JIT support and verifier validation. Jiri Olsa posted a 28-patch series introducing a tracing_multi link type, allowing a single BPF link to attach to multiple kernel functions simultaneously for more efficient multi-function tracing. Other notable work included Alan Maguire extending the BTF UAPI to use previously reserved bits for larger vlen and kind fields, Puranjay Mohan adding CPU time counter kfuncs for precise hardware performance measurement, and Kumar Kartikeya Dwivedi adding a mechanism for the verifier to emit non-fatal warning messages along with a deprecated kfunc annotation.
bpf: Support stack arguments for bpf functions
Adds verifier support for BPF subprogram functions to receive arguments on the stack, enabling signatures with more than the standard five register-based parameters. A new BPF_REG_PARAMS mechanism tracks stack argument state through the verifier's analysis, and the calling convention is updated to lay out excess parameters in a defined region of the caller's stack frame. This is patch 07/17 of the v6 series and is the core enabler for the rest of the stack argument work. The feature requires JIT support and programs on interpreter-only configurations are rejected.
bpf,x86: Implement JIT support for stack arguments
Implements the x86-64 JIT backend changes needed to physically spill excess function arguments onto the stack when calling BPF subprograms or kfuncs. The JIT allocates additional stack space and emits store instructions to lay out parameters before the call site as the callee expects. This is patch 14/17 of the v6 series and is the first architecture-specific implementation, after which the feature becomes usable on x86-64 systems. Other JIT backends can follow the same pattern independently.
bpf: Add support for tracing multi link
Introduces the core kernel implementation of the tracing_multi link type, which lets a single BPF link attach a program to multiple kernel functions at once instead of requiring one link per function. The implementation reuses and extends the existing trampoline infrastructure, adding bulk attach and detach operations via new bpf_trampoline_multi_attach/detach functions. This is patch 13/28 of a 28-patch v5 series that also covers libbpf support, session semantics, cookies, fdinfo, and extensive selftests. Bulk attachment reduces per-function overhead and simplifies management of tracing programs that monitor many kernel entry points.
libbpf: Add support to create tracing multi link
Adds the libbpf API surface for creating tracing_multi links, enabling user-space programs to attach to multiple kernel functions through a single library call. The implementation resolves function names to BTF IDs and constructs the appropriate bpf_link_create attributes for the new link type. This is patch 20/28 of the tracing_multi series and depends on the earlier kernel-side implementation patches. Applications that currently loop over individual fentry/fexit attachments can migrate to this API for a simpler and more efficient interface.
bpf: Add support for verifier warning messages
Introduces a new verifier facility to emit non-fatal warning messages during program verification, separate from the existing error-only log. Warnings allow the verifier to surface advisory information—such as use of deprecated kfuncs—without failing the load. This is patch 1/4 of the v3 series; subsequent patches use the mechanism to implement the deprecated kfunc annotation. The change keeps the existing log level semantics intact and exposes the warnings through the bpf_attr verifier log interface so that libbpf and tools can display them to users.
bpf: Introduce __bpf_kfunc_mark_deprecated annotation
Adds a __bpf_kfunc_mark_deprecated macro that kernel developers can apply to kfunc definitions to signal that a function is deprecated and should not be used in new programs. When the verifier encounters a call to a deprecated kfunc it emits a warning (via the new warning infrastructure from patch 1/4) rather than rejecting the program, preserving backward compatibility. This follows a well-understood deprecation pattern familiar from other kernel annotation systems and gives BPF subsystem maintainers a clean path to phase out old kfuncs.
bpf: add bpf_get_cpu_time_counter kfunc
Introduces bpf_get_cpu_time_counter, a new kfunc that reads the raw CPU hardware time-stamp counter, providing BPF programs with a low-overhead, high-resolution time source for performance measurement. This is patch 2/6 of a 13-revision series that also adds bpf_cpu_time_counter_to_ns for converting the raw counter value to nanoseconds and includes ARM64 JIT support. The kfunc is useful for latency profiling and micro-benchmarking from within BPF programs without the overhead of a full clock_gettime call. The long revision history reflects careful review of security and portability concerns.
bpf: Extend BTF UAPI vlen, kinds to use unused bits
Expands the BTF type header to use previously reserved bits, growing the vlen field from 16 to 24 bits and the kind field to support additional type kinds. This removes a practical limit on the number of members a BTF struct or union can describe, which matters for large generated types. The patch is the first of a six-part v3 series that updates libbpf, bpftool, selftests, and documentation to match the new layout. Careful backward compatibility handling ensures existing tools and kernels can still parse older BTF blobs correctly.
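The current UAPI packs everything into one 32-bit `info` word: bits 0-15 hold vlen, bits 24-28 the kind, and bit 31 the kind_flag, with bits 16-23 and 29-30 unused. The macros below mirror that existing layout from <uapi/linux/btf.h>; the `_V2` macro sketches a 24-bit vlen occupying the unused bits 16-23 as the series describes, and is a hypothetical illustration, not the final UAPI.

```c
#include <assert.h>
#include <stdint.h>

/* Current BTF info word layout (matches the existing UAPI) */
#define BTF_INFO_VLEN(info)   ((uint32_t)((info) & 0xffffu))   /* bits 0-15  */
#define BTF_INFO_KIND(info)   (((info) >> 24) & 0x1fu)         /* bits 24-28 */
#define BTF_INFO_KFLAG(info)  ((info) >> 31)                   /* bit 31     */

/* Hypothetical extended layout: vlen grown into bits 16-23 */
#define BTF_INFO_VLEN_V2(info)  ((uint32_t)((info) & 0xffffffu))
```

Because the new bits were previously required to be zero, a blob emitted by an old tool decodes identically under both layouts, which is the backward-compatibility property the series relies on.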
bpf: Fix NULL deref in map_kptr_match_type for scalar regs
Fixes a NULL pointer dereference in map_kptr_match_type that could be triggered when a BPF program stored a scalar (non-pointer) value into a map slot typed as a kptr. The function assumed the register was always a pointer and dereferenced its type information without checking, leading to a verifier crash. The fix adds an early check that rejects the scalar store with a clear error message before the dereference occurs. The companion selftest patch (2/2) reproduces the crash to prevent regression.
libbpf: Report error when a negative kprobe offset is specified
Fixes a libbpf oversight where a negative offset for a kprobe attachment was silently forwarded to the kernel rather than rejected early with a clear error. Negative kprobe offsets are not supported and passing them produces confusing kernel-level failures. This is the third revision of the fix, refining the placement of the validation check based on earlier review feedback. Catching the invalid value in libbpf provides a much better error experience for programs that accidentally misconfigure their kprobe offsets.
arm32, bpf: Reject BPF-to-BPF calls and callbacks in the JIT
Makes the ARM32 BPF JIT explicitly reject programs that use BPF-to-BPF subprogram calls or callbacks, which the 32-bit ARM JIT does not support. Previously such programs could reach the JIT and fail in an undefined way; now they are turned away with a clear error at JIT time. This is a v2 follow-up that supersedes an earlier patch targeting only BPF_PSEUDO_CALL. Explicit rejection is preferable to a silent fallback to the interpreter, which could mask bugs and produce inconsistent performance characteristics.
selftests/bpf: fix off-by-one in bpf_cpumask_populate related selftest
Corrects an off-by-one error in a BPF selftest exercising bpf_cpumask_populate, where the loop bound caused a read one element past the intended array boundary. The bug could produce spurious failures or undefined behavior on configurations where the adjacent memory was not safely accessible. The fix is a one-line bound correction with no impact on the BPF subsystem itself. Accurate selftests are important so that CI results reflect real regressions rather than test-infrastructure noise.
Generated 2026-04-21T00:00:00Z
The week of April 6-13 on bpf-next was defined by two parallel verifier modernization efforts and a significant new calling-convention feature. Eduard Zingerman's static stack liveness analysis series (v4, 14 patches) completed its run, delivering 4-byte stack tracking granularity, a forward arg-tracking dataflow pass, and dead stack slot poisoning to strengthen initialization safety guarantees. Alexei Starovoitov simultaneously pursued a structural cleanup, splitting the monolithic verifier.c into focused modules across four revision rounds. On the feature side, Yonghong Song's v4 18-patch series brings stack-based argument passing to BPF functions and kfuncs, backed by x86_64 JIT support, while Emil Tsalapatis pushed the arena memory library to v7 with a buddy allocator and ASAN runtime.
The culmination of Zingerman's v4 static stack liveness series (14 patches), this patch uses the results of the new forward arg-tracking dataflow analysis to poison BPF stack slots that are written but never subsequently read. Poisoning dead slots causes the verifier to reject programs that rely on uninitialized stack memory, closing a class of subtle bugs where stale values could influence program behavior. The series builds on 4-byte stack granularity tracking, (callsite, depth)-keyed func_instances, and a new forward liveness API introduced in earlier patches.
bpf: Enforce regsafe base id consistency for BPF_ADD_CONST scalars
Fixes a verifier state-pruning correctness bug where the regsafe() check failed to account for base ID consistency when comparing two BPF_ADD_CONST scalar registers. Without this fix, the verifier could incorrectly declare two program states as equivalent and prune a branch that should have been explored, potentially accepting a program that reads out-of-bounds. A companion selftest is included to exercise the specific code path.
bpf: Split fixup/post-processing logic from verifier.c into fixups.c
The opening patch of Starovoitov's v4 verifier.c split series moves fixup and post-processing logic out of the monolithic verifier.c into fixups.c. Over four revision rounds this week the series also spun out liveness.c, cfg.c, states.c, backtrack.c, and check_btf.c, dramatically reducing the size of verifier.c and making each subsystem independently reviewable. The refactoring is behavior-preserving and comes with no functional changes.
bpf: Support stack arguments for bpf functions
The core verifier patch of Song's v4 18-patch series teaches the BPF verifier to validate stack-based arguments at BPF-to-BPF call sites, extending the calling convention beyond the five-register limit. A new BPF_REG_STACK_ARG_BASE register is introduced for addressing arguments passed on the caller's stack, and the verifier enforces that stack arguments are only used in JITed programs not reachable through tail calls. This enables BPF functions and kfuncs to accept more than five arguments.
bpf,x86: Implement JIT support for stack arguments
The x86_64 JIT backend patch in Song's stack-arguments series emits code to correctly marshal arguments placed on the caller's stack frame at BPF function call boundaries. Arguments beyond the five-register window are addressed via BPF_REG_STACK_ARG_BASE and copied into the appropriate stack location before the call. This patch completes the end-to-end implementation for x86_64, with negative tests for unsupported configurations included in the selftest series.
bpf: Allow instructions with arena source and non-arena dest registers
The first substantive verifier patch in Tsalapatis's v7 arena library series relaxes a restriction on mixed arena/non-arena arithmetic, allowing an instruction with an arena source register to write its result into a non-arena destination as a plain scalar or non-arena pointer. This is needed to support the user-space arena library code, which frequently mixes pointer types in address calculations. The v7 series accompanying it adds a buddy allocator, ASAN runtime, and a comprehensive libarena selftest suite.
bpf: Fix Null-Pointer Dereference in kernel_clone() via BPF fmod_ret on security_task_alloc
This v3 bug fix addresses a null-pointer dereference triggered when a BPF fmod_ret program attached to the security_task_alloc hook returns non-zero, causing kernel_clone() to proceed with a partially-initialized task struct. The fix adds the missing return-value check so the error path is taken before the null dereference, and a selftest verifies correct behavior. This patch appeared as v2 earlier in the week and was refined to v3 by April 11.
bpf: Move constants blinding out of arch-specific JITs
The first patch in Xu Kuohai's v13 5-patch series consolidates JIT constant blinding into the architecture-independent BPF core, removing per-arch duplication. The series' broader goal is to enable all JIT backends to emit ENDBR (x86) and BTI (AArch64) instructions for indirect call targets, strengthening CFI on those architectures. Earlier patches in the series abstract the blinding so that the arch-specific CFI instruction emission can slot in cleanly.
bpf: Use kmalloc_nolock() universally in local storage
Converts BPF local storage allocation paths to use the recently introduced kmalloc_nolock() variant, which avoids lock acquisition and improves performance in the common case where the per-CPU slab is warm. A companion patch in the same v2 series removes now-unnecessary gfp_flags plumbing from bpf_local_storage_update(). The series also fixes a selftest that was inadvertently tracing kmalloc calls and would be perturbed by the allocation strategy change.
bpf: add missing fsession to the verifier log
Adds the BPF_TRACE_FSESSION attach type to the verifier's attach-type log output, which omitted it despite the type being defined. Two companion patches in the v3 series fix the same omission in the BPF documentation and bpftool's usage text. This is a cosmetic fix to diagnostic output with no change to runtime behavior.
Generated 2026-04-14T00:00:00Z
The week of March 30 - April 6 saw heavy activity around BPF verifier improvements and calling convention extensions. Yonghong Song iterated through three versions of stack argument support for BPF functions and kfuncs, culminating in v3 with a new BPF_REG_STACK_ARG_BASE register and x86_64 JIT implementation. Alexei Starovoitov continued refining prep patches for static stack liveness analysis, reaching v5 with subprogram topological ordering and constant-register computation passes that will enable smarter stack slot tracking. Additional highlights include Emil Tsalapatis introducing a full arena library and runtime, Xu Kuohai reaching v12 for emitting ENDBR/BTI instructions at indirect JIT jump targets, Chengkaitao refactoring how the verifier dispatches kfunc checks via a new BPF_VERIF_KFUNC_DEF mechanism, and Paul Chaignon fixing verifier invariant violations discovered by syzbot.
bpf: Introduce bpf register BPF_REG_STACK_ARG_BASE
Introduces BPF_REG_STACK_ARG_BASE, a new virtual BPF register serving as the base pointer for stack-allocated function arguments. This is the foundation of the 11-patch v3 series enabling BPF functions and kfuncs to receive more arguments than fit in the five argument registers r1-r5. The register is handled specially by both the verifier and x86_64 JIT backend to allocate, track, and validate stack argument slots. The series also includes selftests for BPF-to-BPF calls, kfunc calls, and negative cases for oversized arguments.
bpf: Add helper and kfunc stack access size resolution
The final patch in Alexei Starovoitov's v5 'Prep patches for static stack liveness' series, which adds helper and kfunc stack access size resolution used by upcoming static liveness analysis. The series as a whole sorts subprograms in topological order after check_cfg(), adds bpf_compute_const_regs() and bpf_prune_dead_branches() verifier passes, and moves verifier helpers to a shared header. Together these changes lay the groundwork for tracking which stack slots are actually live, reducing unnecessary spill/fill overhead.
bpf: Upgrade scalar to PTR_TO_ARENA on arena pointer addition
The first patch in the v3 'Introduce arena library and runtime' series, which teaches the verifier to promote a scalar register to PTR_TO_ARENA when added to an arena pointer. The broader 9-patch series introduces a libarena scaffolding with an ASAN-compatible runtime, a buddy allocator implementation, and comprehensive selftests. This infrastructure enables BPF programs using memory arenas to benefit from proper pointer type tracking and arena-aware address sanitization during testing.
bpf, x86: Emit ENDBR for indirect jump targets
Part of Xu Kuohai's v12 series adding Intel CET ENDBR (x86) and ARM64 BTI instructions at indirect JIT jump targets to harden BPF programs against control-flow hijacking. A companion patch adds a helper to detect indirect jump targets during JIT compilation, and another passes bpf_verifier_env to the JIT so it has the information needed to insert these instructions. The series also moves constant blinding out of arch-specific JITs into a shared location to simplify future JIT backends.
bpf: Introduce BTF_SET/ID_SUB and BPF_VERIF_KFUNC_DEF
Introduces BTF_SET/ID_SUB and BPF_VERIF_KFUNC_DEF macros that allow kfunc sets to embed their own verifier check callbacks, replacing the existing flat dispatch table used by the verifier. This refactor makes it easier to add verifier logic for new kfuncs without touching central verifier files. A follow-on patch converts the rbtree kfuncs to use the new mechanism, demonstrating the pattern.
bpf: Refactor reg_bounds_sanity_check
The first patch in Paul Chaignon's v3 'Fix invariant violations and improve branch detection' series, which addresses syzbot-reported verifier invariant violations. The series refactors reg_bounds_sanity_check, adds early exit for invalid reg_bounds_sync inputs, simulates branches to prune paths with range violations, and removes incorrect invariant-violation flags from selftests. These fixes improve verifier correctness when dealing with edge cases in register range tracking.
libbpf: Auto-upgrade kprobes to multi-kprobes when supported
Part of an RFC v3 series that transparently upgrades single kprobe and uprobe attachments to their multi-kprobe/multi-uprobe equivalents when the kernel supports them. A new FEAT_KPROBE_MULTI_LINK feature probe is added to libbpf to detect kernel support at runtime. This allows BPF programs written against the single-attach API to silently benefit from the performance improvements of multi-attach without any code changes.
bpf: Do not ignore offsets for loads from insn_arrays
Fixes a bug where the BPF verifier ignored non-zero offsets when loading values from instruction arrays, causing incorrect value reads. The fix ensures the offset is properly factored into the load address computation in the verifier's constant propagation path. A companion patch adds regression tests covering a variety of offset values to prevent recurrence.
pull-request: bpf-next 2026-04-01
Martin KaFai Lau's bpf-next pull request for April 1, 2026, submitting the accumulated bpf-next changes to Linus's tree. Pull requests like this mark a significant milestone in the development cycle, bundling the verifier improvements, new helpers, libbpf changes, and selftests collected since the previous pull.
bpf: Refactor dynptr mutability tracking
Refactors how the BPF verifier tracks dynptr mutability, consolidating the immutability flag into the dynptr state representation for cleaner handling. This v2 patch simplifies the code paths that check whether a dynptr may be written through, reducing the risk of correctness issues when new dynptr types are added. The change is internal to the verifier with no user-visible behavior change.
Generated 2026-04-06T10:13:03Z
April 2026 was an active month for the bpf-next mailing list, with 100 patches across 25 series. The month was headlined by Kaitao Cheng's extended bpf_list kfunc API and Ricardo B. Marlière's substantial rework of the BPF selftests build system, the latter reaching its eleventh revision. JIT work was broad: Kuan-Wei Chiu added initial m68k BPF JIT support and fixed the RV32 JIT, while Puranjay Mohan added stack argument support to the arm64 JIT and contributed a new XDP load-balancer benchmark. LSM-related activity saw two new xattr kfuncs for atomic inode labeling and fixes for negative dentry crashes, and the verifier gained per-subprogram instruction count reporting from Paul Chaignon.
bpf: refactor __bpf_list_del to take list node pointer
First patch of an 8-part series (v10) extending the BPF linked-list kfunc API with new operations closer to the kernel's native list_head API. The series adds bpf_list_del (remove a node without knowing the head), bpf_list_add (insert after a given node), bpf_list_is_first/last/empty query kfuncs, and introduces __nonown_allowed annotations to permit non-owning reference arguments. These additions enable richer data structure manipulation in BPF programs and reflect extensive iteration on ownership semantics across the ten review rounds.
bpf, arm64: Map BPF_REG_0 to x8 instead of x7
First patch of a 3-part series (v2) enabling the arm64 BPF JIT to support functions with more than eight arguments via stack-based argument passing per the AArch64 calling convention. This initial patch remaps BPF_REG_0 from x7 to x8 to free the register slot needed for stack argument setup. Subsequent patches add JIT emission for stack arguments and update selftests to exercise the new path on arm64. This unblocks BPF programs that call kfuncs or helpers with many parameters on arm64 hardware.
selftests/bpf: Add bench_force_done() for early benchmark completion
First patch of a 7-part series adding an XDP load-balancer benchmark to the BPF selftests suite. The series contributes a hash-based XDP load-balancing BPF program, a batch-timing library for precise measurement, a userspace benchmark driver, and a shell script for automated benchmark runs. A bpf-nop benchmark is also added to establish a timing overhead baseline. This fills a significant gap in performance tooling for XDP-based packet processing programs in the upstream test suite.
m68k, bpf: Add initial BPF JIT compiler support
Introduces the first BPF JIT compiler for the m68k (Motorola 68000) architecture, bringing JIT acceleration to m68k systems running Linux. Before this patch, BPF programs on m68k ran exclusively through the interpreter. The JIT covers the core BPF instruction set and follows the established pattern of other architecture JIT implementations. This benefits embedded and retro computing platforms using m68k processors.
riscv, bpf: Fix support for BPF_SDIV and BPF_SMOD in RV32 JIT
First patch of a 3-part series fixing and extending the RISC-V 32-bit BPF JIT. The patches correct incorrect code generation for signed division and modulo (BPF_SDIV/BPF_SMOD) and sign-extend moves (BPF_MOVSX), then add support for 32-bit atomic operations. The correctness fixes prevent silent arithmetic errors in BPF programs using signed integer division on RV32 platforms, and 32-bit atomic support expands the range of lock-free data structure operations available to BPF programs on RV32.
bpf: Fix out-of-bounds read in bpf_patch_call_args()
First patch of a 3-part series (v9) fixing two bugs in bpf_patch_call_args(): an out-of-bounds array read when the patch buffer is exhausted, and silent truncation of large BPF-to-BPF call offsets that do not fit in a signed 16-bit field. The truncation bug can produce incorrect branch targets in large BPF programs, leading to incorrect behavior or crashes at runtime. The series includes a selftest that specifically exercises the large-offset scenario to prevent regression.
bpf: Limit fields used in btf_record_equal comparisons
First patch of a 4-part series fixing a deadlock that occurs when a referenced kptr's destructor is invoked from NMI context while a spinlock is already held on the same CPU. The series limits unnecessary fields compared in btf_record_equal, defers BTF teardown via rcu_work to avoid NMI-unsafe locking, and directly fixes the kptr destructor deadlock path. A selftest reproducer for the NMI deadlock scenario is included as the final patch.
bpf: Print breakdown of insns processed by subprogs
Extends the BPF verifier's log to emit per-subprogram instruction counts rather than only an aggregate total. This gives developers visibility into which subprograms are driving verification complexity in large multi-subprog BPF programs, making it much easier to diagnose and fix programs that approach the verifier instruction limit. This v3 refines the log format based on reviewer feedback and is paired with a selftest that validates the new output lines.
bpf: add bpf_init_inode_xattr kfunc for atomic inode labeling
Introduces bpf_init_inode_xattr, a new kfunc allowing BPF LSM programs to atomically set an extended attribute on an inode during its initialization, before it becomes visible to other processes. This closes a race window in bpf_set_dentry_xattr, where a label could briefly be missing after inode creation. The kfunc is intended for security labeling use cases where the label must be present from the very first access to the inode, and a selftest verifies the behavior in a BPF LSM context.
bpf: fix crash in bpf_[set|remove]_dentry_xattr for negative dentries
Fixes a null pointer dereference crash in bpf_set_dentry_xattr and bpf_remove_dentry_xattr when called with a negative dentry that has no associated inode. BPF LSM programs that walk filesystem paths can encounter negative dentries for non-existent files, and both kfuncs previously lacked a guard for this case. This v2 adds the necessary check to reject negative dentries early, preventing the crash without changing behavior for positive dentries.
selftests/bpf: Add BPF_STRICT_BUILD toggle
First patch of an 11-part series (v11) reworking the BPF selftests Makefile to handle partial kernel configurations gracefully. The series introduces BPF_STRICT_BUILD to toggle strict vs. tolerant behavior, adds skip logic for tests whose BPF objects were not compiled, fixes KDIR handling for distro out-of-tree builds, and tolerates BPF skeleton generation and benchmark build failures. The extensive revision history reflects the complexity of making the selftests build system robust across the wide variety of kernel configurations encountered in practice.
selftests/bpf: Add arena ASAN runtime to libarena
Part of a v9 series developing the libarena memory allocator library for BPF selftests. This patch adds ASAN (AddressSanitizer) runtime support so that arena-backed allocations can be checked for memory safety errors during testing. Later patches in the series add a buddy allocator backend for libarena and associated selftests. The arena library provides a structured mechanism for BPF programs to manage large memory regions backed by BPF arena maps.
xskmap: reject TX-only AF_XDP sockets
Adds a validation check to reject TX-only AF_XDP sockets from being inserted into an XSKMAP. TX-only sockets do not have a receive queue and cannot process XDP_REDIRECT actions, so permitting them in the map leads to silent packet drops that are difficult to diagnose. This v3 enforces the restriction at map insertion time with a clear EINVAL error, preventing misconfiguration of AF_XDP-based packet processing pipelines.
net: add missing syncookie statistics for BPF custom syncookies
Fixes missing syncookie statistics when BPF programs handle SYN cookies via the custom syncookie interface. Without this fix, counters such as TcpExtSyncookiesRecv were not incremented for BPF-managed connections, making it impossible to distinguish BPF SYN flood mitigation from the kernel's own handling using standard tooling like netstat or /proc/net/netstat. A selftest validates that the correct counters are updated when a BPF custom syncookie program handles a connection.
Generated 2026-05-02T10:30:00Z