Merge tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next

Daniel Borkmann says:

====================
pull-request: bpf-next 2023-02-17

We've added 64 non-merge commits during the last 7 day(s) which contain
a total of 158 files changed, 4190 insertions(+), 988 deletions(-).

The main changes are:

1) Add an rbtree data structure following the "next-gen data structure"
   precedent set by the recently added linked list, that is, by using
   kfunc + kptr instead of adding a new BPF map type, from Dave Marchevsky.

2) Add a new benchmark for hashmap lookups to BPF selftests,
   from Anton Protopopov.

3) Fix bpf_fib_lookup to only return valid neighbors and add an option
   to skip the neigh table lookup, from Martin KaFai Lau.

4) Add cgroup.memory=nobpf kernel parameter option to disable BPF memory
   accounting for container environments, from Yafang Shao.

5) Batch of ice multi-buffer and driver performance fixes,
   from Alexander Lobakin.

6) Fix a bug in determining whether a global subprog's argument is
   PTR_TO_CTX, which is based on type names and therefore broke kprobe
   progs, from Andrii Nakryiko.

7) Prep work for the future -mcpu=v4 LLVM option, which includes usage of
   the BPF_ST insn; improve BPF_ST-related value tracking in the verifier
   accordingly, from Eduard Zingerman.

8) More prep work for later building selftests with Memory Sanitizer
   in order to detect uses of undefined memory, from Ilya Leoshkevich.

9) Fix xsk sockets to check IFF_UP earlier to avoid a NULL pointer
   dereference via sendmsg(), from Maciej Fijalkowski.

10) Implement BPF trampoline for RV64 JIT compiler, from Pu Lehui.

11) Fix BPF memory allocator in combination with BPF hashtab where it could
    corrupt special fields, e.g. those used by bpf_spin_lock, from Hou Tao.

12) Fix LoongArch BPF JIT to always use 4 instructions for function
    address so that instruction sequences don't change between passes,
    from Hengqi Chen.

* tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (64 commits)
  selftests/bpf: Add bpf_fib_lookup test
  bpf: Add BPF_FIB_LOOKUP_SKIP_NEIGH for bpf_fib_lookup
  riscv, bpf: Add bpf trampoline support for RV64
  riscv, bpf: Add bpf_arch_text_poke support for RV64
  riscv, bpf: Factor out emit_call for kernel and bpf context
  riscv: Extend patch_text for multiple instructions
  Revert "bpf, test_run: fix &xdp_frame misplacement for LIVE_FRAMES"
  selftests/bpf: Add global subprog context passing tests
  selftests/bpf: Convert test_global_funcs test to test_loader framework
  bpf: Fix global subprog context argument resolution logic
  LoongArch, bpf: Use 4 instructions for function address in JIT
  bpf: bpf_fib_lookup should not return neigh in NUD_FAILED state
  bpf: Disable bh in bpf_test_run for xdp and tc prog
  xsk: check IFF_UP earlier in Tx path
  Fix typos in selftest/bpf files
  selftests/bpf: Use bpf_{btf,link,map,prog}_get_info_by_fd()
  samples/bpf: Use bpf_{btf,link,map,prog}_get_info_by_fd()
  bpftool: Use bpf_{btf,link,map,prog}_get_info_by_fd()
  libbpf: Use bpf_{btf,link,map,prog}_get_info_by_fd()
  libbpf: Introduce bpf_{btf,link,map,prog}_get_info_by_fd()
  ...
====================

Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
kuba-moo committed Feb 21, 2023
2 parents 01bb11a + 168de02 commit ee8d72a
Showing 158 changed files with 4,194 additions and 1,007 deletions.
1 change: 1 addition & 0 deletions Documentation/admin-guide/kernel-parameters.txt
@@ -557,6 +557,7 @@
 			Format: <string>
 			nosocket -- Disable socket memory accounting.
 			nokmem -- Disable kernel memory accounting.
+			nobpf -- Disable BPF memory accounting.
 
 	checkreqprot=	[SELINUX] Set initial checkreqprot flag value.
 			Format: { "0" | "1" }
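
A usage note (illustrative, not part of the diff): the flags are passed as a
comma-separated list on the kernel command line, so the new option can be
combined with the existing ones, e.g.:

    cgroup.memory=nokmem,nobpf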
267 changes: 267 additions & 0 deletions Documentation/bpf/graph_ds_impl.rst
@@ -0,0 +1,267 @@
=========================
BPF Graph Data Structures
=========================

This document describes implementation details of new-style "graph" data
structures (linked_list, rbtree), with particular focus on the verifier's
implementation of semantics specific to those data structures.

Although no specific verifier code is referred to in this document, the document
assumes that the reader has general knowledge of BPF verifier internals, BPF
maps, and BPF program writing.

Note that the intent of this document is to describe the current state of
these graph data structures. **No guarantees** of stability for either
semantics or APIs are made or implied here.

.. contents::
    :local:
    :depth: 2

Introduction
------------

The BPF map API has historically been the main way to expose data structures
of various types for use within BPF programs. Some data structures fit naturally
with the map API (HASH, ARRAY), others less so. Consequently, programs
interacting with the latter group of data structures can be hard to parse
for kernel programmers without previous BPF experience.

Luckily, some restrictions which necessitated the use of BPF map semantics are
no longer relevant. With the introduction of kfuncs, kptrs, and the any-context
BPF allocator, it is now possible to implement BPF data structures whose API
and semantics more closely match those exposed to the rest of the kernel.

Two such data structures - linked_list and rbtree - have many verification
details in common. Because both have "root"s ("head" for linked_list) and
"node"s, the verifier code and this document refer to common functionality
as "graph_api", "graph_root", "graph_node", etc.

Unless otherwise stated, examples and semantics below apply to both graph data
structures.

Unstable API
------------

Data structures implemented using the BPF map API have historically used BPF
helper functions - either standard map API helpers like ``bpf_map_update_elem``
or map-specific helpers. The new-style graph data structures instead use kfuncs
to define their manipulation helpers. Because there are no stability guarantees
for kfuncs, the API and semantics for these data structures can be evolved in
a way that breaks backwards compatibility if necessary.

Root and node types for the new data structures are opaquely defined in the
``uapi/linux/bpf.h`` header.
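
For illustration, the opaque definitions are shaped roughly like the sketch
below (the exact layout is an assumption here; per the lack of stability
guarantees above, it may not match the current header):

.. code-block:: c

    /* Sketch of the opaque uapi types; actual layout may differ */
    struct bpf_rb_root {
        __u64 :64;
        __u64 :64;
    } __attribute__((aligned(8)));

    struct bpf_rb_node {
        __u64 :64;
        __u64 :64;
        __u64 :64;
    } __attribute__((aligned(8)));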

Locking
-------

The new-style data structures are intrusive and are defined similarly to their
vanilla kernel counterparts:

.. code-block:: c

    struct node_data {
        long key;
        long data;
        struct bpf_rb_node node;
    };

    struct bpf_spin_lock glock;
    struct bpf_rb_root groot __contains(node_data, node);

The "root" type for both linked_list and rbtree expects to be in a map_value
which also contains a ``bpf_spin_lock`` - in the above example both global
variables are placed in a single-value arraymap. The verifier considers this
spin_lock to be associated with the ``bpf_rb_root`` by virtue of both being in
the same map_value and will enforce that the correct lock is held when
verifying BPF programs that manipulate the tree. Since this lock checking
happens at verification time, there is no runtime penalty.
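
As a sketch (for illustration only; the examples in this document use the
global-variable form above), a roughly equivalent layout with an explicit
map_value struct would be:

.. code-block:: c

    /* Sketch: lock and root together in one map_value, as the
     * verifier's lock-association rule expects.
     */
    struct map_value {
        struct bpf_spin_lock lock;
        struct bpf_rb_root root __contains(node_data, node);
    };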

Non-owning references
---------------------

**Motivation**

Consider the following BPF code:

.. code-block:: c

    struct node_data *n = bpf_obj_new(typeof(*n)); /* ACQUIRED */

    bpf_spin_lock(&lock);
    bpf_rbtree_add(&tree, n); /* PASSED */
    bpf_spin_unlock(&lock);

From the verifier's perspective, the pointer ``n`` returned from ``bpf_obj_new``
has type ``PTR_TO_BTF_ID | MEM_ALLOC``, with a ``btf_id`` of
``struct node_data`` and a nonzero ``ref_obj_id``. Because it holds ``n``, the
program has ownership of the pointee's (object pointed to by ``n``) lifetime.
The BPF program must pass off ownership before exiting - either via
``bpf_obj_drop``, which ``free``'s the object, or by adding it to ``tree`` with
``bpf_rbtree_add``.

(``ACQUIRED`` and ``PASSED`` comments in the example denote statements where
"ownership is acquired" and "ownership is passed", respectively)

What should the verifier do with ``n`` after ownership is passed off? If the
object was ``free``'d with ``bpf_obj_drop`` the answer is obvious: the verifier
should reject programs which attempt to access ``n`` after ``bpf_obj_drop`` as
the object is no longer valid. The underlying memory may have been reused for
some other allocation, unmapped, etc.

When ownership is passed to ``tree`` via ``bpf_rbtree_add`` the answer is less
obvious. The verifier could enforce the same semantics as for ``bpf_obj_drop``,
but that would result in programs with useful, common coding patterns being
rejected, e.g.:

.. code-block:: c

    int x;
    struct node_data *n = bpf_obj_new(typeof(*n)); /* ACQUIRED */

    bpf_spin_lock(&lock);
    bpf_rbtree_add(&tree, n); /* PASSED */

    x = n->data;
    n->data = 42;

    bpf_spin_unlock(&lock);

Both the read from and write to ``n->data`` would be rejected. The verifier
can do better, though, by taking advantage of two details:

* Graph data structure APIs can only be used when the ``bpf_spin_lock``
  associated with the graph root is held

* Both graph data structures have pointer stability

  * Because graph nodes are allocated with ``bpf_obj_new`` and
    adding / removing from the root involves fiddling with the
    ``bpf_{list,rb}_node`` field of the node struct, a graph node will
    remain at the same address after either operation.

Because the associated ``bpf_spin_lock`` must be held by any program adding
or removing, if we're in the critical section bounded by that lock, we know
that no other program can add or remove until the end of the critical section.
This combined with pointer stability means that, until the critical section
ends, we can safely access the graph node through ``n`` even after it was used
to pass ownership.

The verifier considers such a reference a *non-owning reference*. The ref
returned by ``bpf_obj_new`` is accordingly considered an *owning reference*.
Both terms currently only have meaning in the context of graph nodes and API.

**Details**

Let's enumerate the properties of both types of references.

*owning reference*

* This reference controls the lifetime of the pointee

* Ownership of pointee must be 'released' by passing it to some graph API
  kfunc, or via ``bpf_obj_drop``, which ``free``'s the pointee

* If not released before program ends, verifier considers program invalid

* Access to the pointee's memory will not page fault

*non-owning reference*

* This reference does not own the pointee

* It cannot be used to add the graph node to a graph root, nor ``free``'d via
  ``bpf_obj_drop``

* No explicit control of lifetime, but can infer valid lifetime based on
  non-owning ref existence (see explanation below)

* Access to the pointee's memory will not page fault

From the verifier's perspective, non-owning references can only exist
between spin_lock and spin_unlock. Why? After spin_unlock another program
can do arbitrary operations on the data structure, like removing a node and
``free``-ing it via ``bpf_obj_drop``. A non-owning ref to some chunk of memory
that was removed, ``free``'d, and reused via ``bpf_obj_new`` would point to an
entirely different thing, or the memory could go away entirely.

To prevent this logic violation, all non-owning references are invalidated by the
verifier after a critical section ends. This is necessary to ensure the "will
not page fault" property of non-owning references. So if the verifier hasn't
invalidated a non-owning ref, accessing it will not page fault.
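
As a sketch (using the same simplified kfunc signatures as the earlier
examples), the following access would be rejected, since the non-owning ref
is invalidated once the critical section ends:

.. code-block:: c

    struct node_data *n = bpf_obj_new(typeof(*n)); /* owning ref */

    bpf_spin_lock(&lock);
    bpf_rbtree_add(&tree, n); /* n becomes non-owning */
    bpf_spin_unlock(&lock);   /* all non-owning refs invalidated */

    n->data = 42;             /* REJECTED: n is no longer valid */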

Currently ``bpf_obj_drop`` is not allowed in the critical section, so
if there's a valid non-owning ref, we must be in a critical section, and can
conclude that the ref's memory hasn't been dropped-and-``free``'d or
dropped-and-reused.

Any reference to a node that is in an rbtree _must_ be non-owning, since
the tree has control of the pointee's lifetime. Similarly, any ref to a node
that isn't in rbtree _must_ be owning. This results in a nice property:
graph API add / remove implementations don't need to check if a node
has already been added (or already removed), as the ownership model
allows the verifier to prevent such a state from being valid by simply checking
types.

However, pointer aliasing poses an issue for the above "nice property".
Consider the following example:

.. code-block:: c

    struct node_data *n, *m, *o, *p;
    n = bpf_obj_new(typeof(*n));     /* 1 */

    bpf_spin_lock(&lock);

    bpf_rbtree_add(&tree, n);        /* 2 */
    m = bpf_rbtree_first(&tree);     /* 3 */

    o = bpf_rbtree_remove(&tree, n); /* 4 */
    p = bpf_rbtree_remove(&tree, m); /* 5 */

    bpf_spin_unlock(&lock);

    bpf_obj_drop(o);
    bpf_obj_drop(p);                 /* 6 */

Assume the tree is empty before this program runs. If we track verifier state
changes using the numbers in the comments above:

1) n is an owning reference

2) n is a non-owning reference, it's been added to the tree

3) n and m are non-owning references, they both point to the same node

4) o is an owning reference, n and m non-owning, all point to the same node

5) o and p are owning, n and m non-owning, all point to the same node

6) a double-free has occurred, since o and p point to the same node and o
   was ``free``'d in the previous statement

States 4 and 5 violate our "nice property", as there are non-owning refs to
a node which is not in an rbtree. Statement 5 will try to remove a node which
has already been removed as a result of this violation. State 6 is a dangerous
double-free.

At a minimum we should prevent state 6 from being possible. If we can't also
prevent state 5 then we must abandon our "nice property" and check whether a
node has already been removed at runtime.

We prevent both by generalizing the "invalidate non-owning references" behavior
of ``bpf_spin_unlock`` and doing similar invalidation after
``bpf_rbtree_remove``. The logic here is that any graph API kfunc which:

* takes an arbitrary node argument

* removes it from the data structure

* returns an owning reference to the removed node

May result in a state where some other non-owning reference points to the same
node. So ``remove``-type kfuncs must be considered a non-owning reference
invalidation point as well.
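
Applied to the aliasing example above, this means the verifier rejects
statement 5: the ``bpf_rbtree_remove`` at statement 4 invalidates ``m``
along with ``n``. Sketched:

.. code-block:: c

    o = bpf_rbtree_remove(&tree, n); /* 4: n and m invalidated here */
    p = bpf_rbtree_remove(&tree, m); /* 5: REJECTED, m is no longer valid */
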
3 changes: 2 additions & 1 deletion Documentation/bpf/other.rst
@@ -6,4 +6,5 @@ Other
    :maxdepth: 1
 
    ringbuf
-   llvm_reloc
+   llvm_reloc
+   graph_ds_impl
7 changes: 7 additions & 0 deletions MAINTAINERS
@@ -4023,6 +4023,13 @@ L:	[email protected]
 S:	Maintained
 F:	tools/testing/selftests/bpf/
 
+BPF [DOCUMENTATION] (Related to Standardization)
+R:	David Vernet <[email protected]>
+L:	[email protected]
+L:	[email protected]
+S:	Maintained
+F:	Documentation/bpf/instruction-set.rst
+
 BPF [MISC]
 L:	[email protected]
 S:	Odd Fixes
2 changes: 1 addition & 1 deletion arch/loongarch/net/bpf_jit.c
@@ -841,7 +841,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
 		if (ret < 0)
 			return ret;
 
-		move_imm(ctx, t1, func_addr, is32);
+		move_addr(ctx, t1, func_addr);
 		emit_insn(ctx, jirl, t1, LOONGARCH_GPR_RA, 0);
 		move_reg(ctx, regmap[BPF_REG_0], LOONGARCH_GPR_A0);
 		break;
21 changes: 21 additions & 0 deletions arch/loongarch/net/bpf_jit.h
@@ -82,6 +82,27 @@ static inline void emit_sext_32(struct jit_ctx *ctx, enum loongarch_gpr reg, boo
 	emit_insn(ctx, addiw, reg, reg, 0);
 }
 
+static inline void move_addr(struct jit_ctx *ctx, enum loongarch_gpr rd, u64 addr)
+{
+	u64 imm_11_0, imm_31_12, imm_51_32, imm_63_52;
+
+	/* lu12iw rd, imm_31_12 */
+	imm_31_12 = (addr >> 12) & 0xfffff;
+	emit_insn(ctx, lu12iw, rd, imm_31_12);
+
+	/* ori rd, rd, imm_11_0 */
+	imm_11_0 = addr & 0xfff;
+	emit_insn(ctx, ori, rd, rd, imm_11_0);
+
+	/* lu32id rd, imm_51_32 */
+	imm_51_32 = (addr >> 32) & 0xfffff;
+	emit_insn(ctx, lu32id, rd, imm_51_32);
+
+	/* lu52id rd, rd, imm_63_52 */
+	imm_63_52 = (addr >> 52) & 0xfff;
+	emit_insn(ctx, lu52id, rd, rd, imm_63_52);
+}
+
 static inline void move_imm(struct jit_ctx *ctx, enum loongarch_gpr rd, long imm, bool is32)
 {
 	long imm_11_0, imm_31_12, imm_51_32, imm_63_52, imm_51_0, imm_51_31;
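
As a worked illustration of the field extraction above (the address value is
hypothetical, not taken from the patch):

/* Hypothetical addr = 0x123456789abcdef0:
 *
 *   imm_31_12 = (addr >> 12) & 0xfffff = 0x9abcd   (lu12i.w)
 *   imm_11_0  =  addr        & 0xfff   = 0xef0     (ori)
 *   imm_51_32 = (addr >> 32) & 0xfffff = 0x45678   (lu32i.d)
 *   imm_63_52 = (addr >> 52) & 0xfff   = 0x123     (lu52i.d)
 *
 * All four instructions are emitted unconditionally, so the sequence
 * length never varies with the address value between JIT passes.
 */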
2 changes: 1 addition & 1 deletion arch/riscv/include/asm/patch.h
@@ -7,6 +7,6 @@
 #define _ASM_RISCV_PATCH_H
 
 int patch_text_nosync(void *addr, const void *insns, size_t len);
-int patch_text(void *addr, u32 insn);
+int patch_text(void *addr, u32 *insns, int ninsns);
 
 #endif /* _ASM_RISCV_PATCH_H */
19 changes: 12 additions & 7 deletions arch/riscv/kernel/patch.c
@@ -15,7 +15,8 @@
 
 struct patch_insn {
 	void *addr;
-	u32 insn;
+	u32 *insns;
+	int ninsns;
 	atomic_t cpu_count;
 };
 
@@ -102,12 +103,15 @@ NOKPROBE_SYMBOL(patch_text_nosync);
 static int patch_text_cb(void *data)
 {
 	struct patch_insn *patch = data;
-	int ret = 0;
+	unsigned long len;
+	int i, ret = 0;
 
 	if (atomic_inc_return(&patch->cpu_count) == num_online_cpus()) {
-		ret =
-		    patch_text_nosync(patch->addr, &patch->insn,
-				      GET_INSN_LENGTH(patch->insn));
+		for (i = 0; ret == 0 && i < patch->ninsns; i++) {
+			len = GET_INSN_LENGTH(patch->insns[i]);
+			ret = patch_text_nosync(patch->addr + i * len,
+						&patch->insns[i], len);
+		}
 		atomic_inc(&patch->cpu_count);
 	} else {
 		while (atomic_read(&patch->cpu_count) <= num_online_cpus())
@@ -119,11 +123,12 @@ static int patch_text_cb(void *data)
 }
 NOKPROBE_SYMBOL(patch_text_cb);
 
-int patch_text(void *addr, u32 insn)
+int patch_text(void *addr, u32 *insns, int ninsns)
 {
 	struct patch_insn patch = {
 		.addr = addr,
-		.insn = insn,
+		.insns = insns,
+		.ninsns = ninsns,
 		.cpu_count = ATOMIC_INIT(0),
 	};
 
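
For context, a sketch of how a caller might use the extended interface; the
RV64 trampoline work patches multi-instruction call sequences such as an
auipc+jalr pair. The encoder helper below is hypothetical, not part of this
diff:

	u32 insns[2];
	int ret;

	/* encode_call() is a hypothetical helper that emits an auipc+jalr
	 * pair calling 'target'; real callers build the words themselves.
	 */
	encode_call(insns, target);

	/* Patch both instructions in one stop_machine()-serialized pass */
	ret = patch_text(ip, insns, 2);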
