Daniel Borkmann says:

====================
pull-request: bpf-next 2022-09-05

The following pull-request contains BPF updates for your *net-next* tree.

We've added 106 non-merge commits during the last 18 day(s) which contain
a total of 159 files changed, 5225 insertions(+), 1358 deletions(-).

There are two small merge conflicts, resolve them as follows:

1) tools/testing/selftests/bpf/DENYLIST.s390x

  Commit 27e2383 ("selftests/bpf: Add lru_bug to s390x deny list") in
  bpf tree was needed to get BPF CI green on s390x, but it conflicted with
  newly added tests on bpf-next. Resolve by adding both hunks, result:

  [...]
  lru_bug                                  # prog 'printk': failed to auto-attach: -524
  setget_sockopt                           # attach unexpected error: -524                                               (trampoline)
  cb_refs                                  # expected error message unexpected error: -524                               (trampoline)
  cgroup_hierarchical_stats                # JIT does not support calling kernel function                                (kfunc)
  htab_update                              # failed to attach: ERROR: strerror_r(-524)=22                                (trampoline)
  [...]

2) net/core/filter.c

  Commit 1227c17 ("net: Fix data-races around sysctl_[rw]mem_(max|default).")
  from net tree conflicts with commit 2900387 ("bpf: Change bpf_setsockopt(SOL_SOCKET)
  to reuse sk_setsockopt()") from bpf-next tree. Take the code as it is from
  bpf-next tree, result:

  [...]
	if (getopt) {
		if (optname == SO_BINDTODEVICE)
			return -EINVAL;
		return sk_getsockopt(sk, SOL_SOCKET, optname,
				     KERNEL_SOCKPTR(optval),
				     KERNEL_SOCKPTR(optlen));
	}

	return sk_setsockopt(sk, SOL_SOCKET, optname,
			     KERNEL_SOCKPTR(optval), *optlen);
  [...]

The main changes are:

1) Add an any-context, BPF-specific memory allocator, which is useful in particular for BPF
   tracing, with the bonus of performance equal to full prealloc, from Alexei Starovoitov.

2) Big batch to remove duplicated code from bpf_{get,set}sockopt() helpers as an effort
   to reuse the existing core socket code as much as possible, from Martin KaFai Lau.

3) Extend BPF flow dissector for BPF programs to just augment the in-kernel dissector
   with custom logic. In other words, allow for partial replacement, from Shmulik Ladkani.

4) Add a new cgroup iterator to BPF with different traversal options, from Hao Luo.

5) Support for BPF to collect hierarchical cgroup statistics efficiently through BPF
   integration with the rstat framework, from Yosry Ahmed.

6) Support bpf_{g,s}et_retval() under more BPF cgroup hooks, from Stanislav Fomichev.

7) BPF hash table and local storage fixes under fully preemptible kernels, from Hou Tao.

8) Add various improvements to BPF selftests and libbpf for compilation with gcc BPF
   backend, from James Hilliard.

9) Fix verifier helper permissions and reference state management for synchronous
   callbacks, from Kumar Kartikeya Dwivedi.

10) Add support for BPF selftest's xskxceiver to also be used against real devices that
    support MAC loopback, from Maciej Fijalkowski.

11) Various fixes to the bpf-helpers(7) man page generation script, from Quentin Monnet.

12) Document BPF verifier's tnum_in(tnum_range(), ...) gotchas, from Shung-Hsi Yu.

13) Various minor misc improvements all over the place.

* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (106 commits)
  bpf: Optimize rcu_barrier usage between hash map and bpf_mem_alloc.
  bpf: Remove usage of kmem_cache from bpf_mem_cache.
  bpf: Remove prealloc-only restriction for sleepable bpf programs.
  bpf: Prepare bpf_mem_alloc to be used by sleepable bpf programs.
  bpf: Remove tracing program restriction on map types
  bpf: Convert percpu hash map to per-cpu bpf_mem_alloc.
  bpf: Add percpu allocation support to bpf_mem_alloc.
  bpf: Batch call_rcu callbacks instead of SLAB_TYPESAFE_BY_RCU.
  bpf: Adjust low/high watermarks in bpf_mem_cache
  bpf: Optimize call_rcu in non-preallocated hash map.
  bpf: Optimize element count in non-preallocated hash map.
  bpf: Relax the requirement to use preallocated hash maps in tracing progs.
  samples/bpf: Reduce syscall overhead in map_perf_test.
  selftests/bpf: Improve test coverage of test_maps
  bpf: Convert hash map to bpf_mem_alloc.
  bpf: Introduce any context BPF specific memory allocator.
  selftest/bpf: Add test for bpf_getsockopt()
  bpf: Change bpf_getsockopt(SOL_IPV6) to reuse do_ipv6_getsockopt()
  bpf: Change bpf_getsockopt(SOL_IP) to reuse do_ip_getsockopt()
  bpf: Change bpf_getsockopt(SOL_TCP) to reuse do_tcp_getsockopt()
  ...
====================

Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Paolo Abeni <[email protected]>
Paolo Abeni committed Sep 6, 2022
2 parents 03fdb11 + 274052a commit 2786bcf
Showing 159 changed files with 5,225 additions and 1,358 deletions.
10 changes: 9 additions & 1 deletion arch/mips/net/bpf_jit_comp32.c
@@ -1376,12 +1376,20 @@ void build_prologue(struct jit_context *ctx)
const u8 *fp = bpf2mips32[BPF_REG_FP];
int stack, saved, locals, reserved;

/*
* In the unlikely event that the TCC limit is raised to more
* than 16 bits, it is clamped to the maximum value allowed for
* the generated code (0xffff). It is better fail to compile
* instead of degrading gracefully.
*/
BUILD_BUG_ON(MAX_TAIL_CALL_CNT > 0xffff);

/*
* The first two instructions initialize TCC in the reserved (for us)
* 16-byte area in the parent's stack frame. On a tail call, the
* calling function jumps into the prologue after these instructions.
*/
emit(ctx, ori, MIPS_R_T9, MIPS_R_ZERO, min(MAX_TAIL_CALL_CNT, 0xffff));
emit(ctx, ori, MIPS_R_T9, MIPS_R_ZERO, MAX_TAIL_CALL_CNT);
emit(ctx, sw, MIPS_R_T9, 0, MIPS_R_SP);

/*
10 changes: 9 additions & 1 deletion arch/mips/net/bpf_jit_comp64.c
@@ -547,12 +547,20 @@ void build_prologue(struct jit_context *ctx)
u8 zx = bpf2mips64[JIT_REG_ZX];
int stack, saved, locals, reserved;

/*
* In the unlikely event that the TCC limit is raised to more
* than 16 bits, it is clamped to the maximum value allowed for
* the generated code (0xffff). It is better fail to compile
* instead of degrading gracefully.
*/
BUILD_BUG_ON(MAX_TAIL_CALL_CNT > 0xffff);

/*
* The first instruction initializes the tail call count register.
* On a tail call, the calling function jumps into the prologue
* after this instruction.
*/
emit(ctx, ori, tc, MIPS_R_ZERO, min(MAX_TAIL_CALL_CNT, 0xffff));
emit(ctx, ori, tc, MIPS_R_ZERO, MAX_TAIL_CALL_CNT);

/* === Entry-point for tail calls === */

17 changes: 17 additions & 0 deletions include/linux/bpf-cgroup.h
@@ -414,6 +414,11 @@ int cgroup_bpf_prog_detach(const union bpf_attr *attr,
int cgroup_bpf_link_attach(const union bpf_attr *attr, struct bpf_prog *prog);
int cgroup_bpf_prog_query(const union bpf_attr *attr,
union bpf_attr __user *uattr);

const struct bpf_func_proto *
cgroup_common_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog);
const struct bpf_func_proto *
cgroup_current_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog);
#else

static inline int cgroup_bpf_inherit(struct cgroup *cgrp) { return 0; }
@@ -444,6 +449,18 @@ static inline int cgroup_bpf_prog_query(const union bpf_attr *attr,
return -EINVAL;
}

static inline const struct bpf_func_proto *
cgroup_common_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
{
return NULL;
}

static inline const struct bpf_func_proto *
cgroup_current_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
{
return NULL;
}

static inline int bpf_cgroup_storage_assign(struct bpf_prog_aux *aux,
struct bpf_map *map) { return 0; }
static inline struct bpf_cgroup_storage *bpf_cgroup_storage_alloc(
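The two lookups declared above let every cgroup-attached program type share one table of
common helpers instead of duplicating it per hook. Below is a hedged sketch of how a
program type's .get_func_proto callback might chain into them; the surrounding function
and the fallback are illustrative, not the upstream implementation.

	/* Illustrative sketch only; not the upstream cgroup .get_func_proto code. */
	static const struct bpf_func_proto *
	example_cgroup_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
	{
		const struct bpf_func_proto *func_proto;

		/* Helpers shared by all cgroup hooks, e.g. cgroup local storage. */
		func_proto = cgroup_common_func_proto(func_id, prog);
		if (func_proto)
			return func_proto;

		/* Helpers that need a valid 'current' task. */
		func_proto = cgroup_current_func_proto(func_id, prog);
		if (func_proto)
			return func_proto;

		/* Program-type specific helpers would be handled here ... */
		return bpf_base_func_proto(func_id);
	}
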
22 changes: 22 additions & 0 deletions include/linux/bpf.h
@@ -48,6 +48,7 @@ struct mem_cgroup;
struct module;
struct bpf_func_state;
struct ftrace_ops;
struct cgroup;

extern struct idr btf_idr;
extern spinlock_t btf_idr_lock;
@@ -1730,7 +1731,14 @@ int bpf_obj_get_user(const char __user *pathname, int flags);
int __init bpf_iter_ ## target(args) { return 0; }

struct bpf_iter_aux_info {
/* for map_elem iter */
struct bpf_map *map;

/* for cgroup iter */
struct {
struct cgroup *start; /* starting cgroup */
enum bpf_cgroup_iter_order order;
} cgroup;
};

typedef int (*bpf_iter_attach_target_t)(struct bpf_prog *prog,
@@ -1966,6 +1974,15 @@ static inline bool unprivileged_ebpf_enabled(void)
return !sysctl_unprivileged_bpf_disabled;
}

/* Not all bpf prog type has the bpf_ctx.
* For the bpf prog type that has initialized the bpf_ctx,
* this function can be used to decide if a kernel function
* is called by a bpf program.
*/
static inline bool has_current_bpf_ctx(void)
{
return !!current->bpf_ctx;
}
#else /* !CONFIG_BPF_SYSCALL */
static inline struct bpf_prog *bpf_prog_get(u32 ufd)
{
@@ -2175,6 +2192,10 @@ static inline bool unprivileged_ebpf_enabled(void)
return false;
}

static inline bool has_current_bpf_ctx(void)
{
return false;
}
#endif /* CONFIG_BPF_SYSCALL */

void __bpf_free_used_btfs(struct bpf_prog_aux *aux,
@@ -2362,6 +2383,7 @@ extern const struct bpf_func_proto bpf_sock_map_update_proto;
extern const struct bpf_func_proto bpf_sock_hash_update_proto;
extern const struct bpf_func_proto bpf_get_current_cgroup_id_proto;
extern const struct bpf_func_proto bpf_get_current_ancestor_cgroup_id_proto;
extern const struct bpf_func_proto bpf_get_cgroup_classid_curr_proto;
extern const struct bpf_func_proto bpf_msg_redirect_hash_proto;
extern const struct bpf_func_proto bpf_msg_redirect_map_proto;
extern const struct bpf_func_proto bpf_sk_redirect_hash_proto;
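The new cgroup member of bpf_iter_aux_info above carries the starting cgroup and the walk
order chosen when the iterator link is created (change 4 in the summary). A rough BPF-side
sketch of such an iterator follows; the bpf_iter__cgroup context shape and the
"iter/cgroup" section name follow this series' selftests and should be treated as
assumptions rather than settled API.

	// SPDX-License-Identifier: GPL-2.0
	/* Sketch of a cgroup iterator program. The context layout and section
	 * name are assumptions based on this series' selftests.
	 */
	#include "vmlinux.h"
	#include <bpf/bpf_helpers.h>
	#include <bpf/bpf_tracing.h>

	char _license[] SEC("license") = "GPL";

	SEC("iter/cgroup")
	int dump_cgroup_ids(struct bpf_iter__cgroup *ctx)
	{
		struct seq_file *seq = ctx->meta->seq;
		struct cgroup *cgrp = ctx->cgroup;

		/* A NULL cgroup marks the end of the walk. */
		if (!cgrp)
			return 0;

		BPF_SEQ_PRINTF(seq, "cgroup id: %llu\n", cgrp->kn->id);
		return 0;
	}

Which cgroup the walk starts from and whether it visits descendants pre-/post-order,
walks up the ancestors, or covers only the cgroup itself is selected at link creation
time; the aux info shown above is populated from those attach parameters.
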
28 changes: 28 additions & 0 deletions include/linux/bpf_mem_alloc.h
@@ -0,0 +1,28 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright (c) 2022 Meta Platforms, Inc. and affiliates. */
#ifndef _BPF_MEM_ALLOC_H
#define _BPF_MEM_ALLOC_H
#include <linux/compiler_types.h>
#include <linux/workqueue.h>

struct bpf_mem_cache;
struct bpf_mem_caches;

struct bpf_mem_alloc {
struct bpf_mem_caches __percpu *caches;
struct bpf_mem_cache __percpu *cache;
struct work_struct work;
};

int bpf_mem_alloc_init(struct bpf_mem_alloc *ma, int size, bool percpu);
void bpf_mem_alloc_destroy(struct bpf_mem_alloc *ma);

/* kmalloc/kfree equivalent: */
void *bpf_mem_alloc(struct bpf_mem_alloc *ma, size_t size);
void bpf_mem_free(struct bpf_mem_alloc *ma, void *ptr);

/* kmem_cache_alloc/free equivalent: */
void *bpf_mem_cache_alloc(struct bpf_mem_alloc *ma);
void bpf_mem_cache_free(struct bpf_mem_alloc *ma, void *ptr);

#endif /* _BPF_MEM_ALLOC_H */
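For reference, a minimal sketch of how a (hypothetical) map could drive this allocator,
using only the API declared above; the element type, the names, and the omitted error
handling are invented for illustration, and this is not the actual hash map conversion.

	/* Illustrative only: wiring a made-up map to bpf_mem_alloc. */
	#include <linux/bpf_mem_alloc.h>

	struct example_elem {
		u64 key;
		u64 value;
	};

	struct example_map {
		struct bpf_mem_alloc ma;
	};

	static int example_map_init(struct example_map *map)
	{
		/* Fixed-size, non-percpu cache of example_elem objects. */
		return bpf_mem_alloc_init(&map->ma, sizeof(struct example_elem), false);
	}

	static struct example_elem *example_elem_new(struct example_map *map)
	{
		/* Usable in any context, including tracing/NMI, unlike kmalloc(). */
		return bpf_mem_cache_alloc(&map->ma);
	}

	static void example_elem_del(struct example_map *map, struct example_elem *e)
	{
		bpf_mem_cache_free(&map->ma, e);
	}

	static void example_map_free(struct example_map *map)
	{
		bpf_mem_alloc_destroy(&map->ma);
	}
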
11 changes: 11 additions & 0 deletions include/linux/bpf_verifier.h
@@ -212,6 +212,17 @@ struct bpf_reference_state {
* is used purely to inform the user of a reference leak.
*/
int insn_idx;
/* There can be a case like:
* main (frame 0)
* cb (frame 1)
* func (frame 3)
* cb (frame 4)
* Hence for frame 4, if callback_ref just stored boolean, it would be
* impossible to distinguish nested callback refs. Hence store the
* frameno and compare that to callback_ref in check_reference_leak when
* exiting a callback function.
*/
int callback_ref;
};

/* state of the program:
3 changes: 1 addition & 2 deletions include/linux/filter.h
@@ -900,8 +900,7 @@ int sk_reuseport_attach_filter(struct sock_fprog *fprog, struct sock *sk);
int sk_reuseport_attach_bpf(u32 ufd, struct sock *sk);
void sk_reuseport_prog_free(struct bpf_prog *prog);
int sk_detach_filter(struct sock *sk);
int sk_get_filter(struct sock *sk, struct sock_filter __user *filter,
unsigned int len);
int sk_get_filter(struct sock *sk, sockptr_t optval, unsigned int len);

bool sk_filter_charge(struct sock *sk, struct sk_filter *fp);
void sk_filter_uncharge(struct sock *sk, struct sk_filter *fp);
4 changes: 2 additions & 2 deletions include/linux/igmp.h
@@ -118,9 +118,9 @@ extern int ip_mc_source(int add, int omode, struct sock *sk,
struct ip_mreq_source *mreqs, int ifindex);
extern int ip_mc_msfilter(struct sock *sk, struct ip_msfilter *msf,int ifindex);
extern int ip_mc_msfget(struct sock *sk, struct ip_msfilter *msf,
struct ip_msfilter __user *optval, int __user *optlen);
sockptr_t optval, sockptr_t optlen);
extern int ip_mc_gsfget(struct sock *sk, struct group_filter *gsf,
struct sockaddr_storage __user *p);
sockptr_t optval, size_t offset);
extern int ip_mc_sf_allow(struct sock *sk, __be32 local, __be32 rmt,
int dif, int sdif);
extern void ip_mc_init_dev(struct in_device *);
6 changes: 3 additions & 3 deletions include/linux/mroute.h
@@ -17,7 +17,7 @@ static inline int ip_mroute_opt(int opt)
}

int ip_mroute_setsockopt(struct sock *, int, sockptr_t, unsigned int);
int ip_mroute_getsockopt(struct sock *, int, char __user *, int __user *);
int ip_mroute_getsockopt(struct sock *, int, sockptr_t, sockptr_t);
int ipmr_ioctl(struct sock *sk, int cmd, void __user *arg);
int ipmr_compat_ioctl(struct sock *sk, unsigned int cmd, void __user *arg);
int ip_mr_init(void);
@@ -29,8 +29,8 @@ static inline int ip_mroute_setsockopt(struct sock *sock, int optname,
return -ENOPROTOOPT;
}

static inline int ip_mroute_getsockopt(struct sock *sock, int optname,
char __user *optval, int __user *optlen)
static inline int ip_mroute_getsockopt(struct sock *sk, int optname,
sockptr_t optval, sockptr_t optlen)
{
return -ENOPROTOOPT;
}
4 changes: 2 additions & 2 deletions include/linux/mroute6.h
@@ -27,7 +27,7 @@ struct sock;

#ifdef CONFIG_IPV6_MROUTE
extern int ip6_mroute_setsockopt(struct sock *, int, sockptr_t, unsigned int);
extern int ip6_mroute_getsockopt(struct sock *, int, char __user *, int __user *);
extern int ip6_mroute_getsockopt(struct sock *, int, sockptr_t, sockptr_t);
extern int ip6_mr_input(struct sk_buff *skb);
extern int ip6mr_ioctl(struct sock *sk, int cmd, void __user *arg);
extern int ip6mr_compat_ioctl(struct sock *sk, unsigned int cmd, void __user *arg);
@@ -42,7 +42,7 @@ static inline int ip6_mroute_setsockopt(struct sock *sock, int optname,

static inline
int ip6_mroute_getsockopt(struct sock *sock,
int optname, char __user *optval, int __user *optlen)
int optname, sockptr_t optval, sockptr_t optlen)
{
return -ENOPROTOOPT;
}
4 changes: 2 additions & 2 deletions include/linux/skbuff.h
@@ -1461,8 +1461,8 @@ void skb_flow_dissector_init(struct flow_dissector *flow_dissector,
unsigned int key_count);

struct bpf_flow_dissector;
bool bpf_flow_dissect(struct bpf_prog *prog, struct bpf_flow_dissector *ctx,
__be16 proto, int nhoff, int hlen, unsigned int flags);
u32 bpf_flow_dissect(struct bpf_prog *prog, struct bpf_flow_dissector *ctx,
__be16 proto, int nhoff, int hlen, unsigned int flags);

bool __skb_flow_dissect(const struct net *net,
const struct sk_buff *skb,
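bpf_flow_dissect() now returns the program's u32 verdict instead of a bool so that
__skb_flow_dissect() can act on it (change 3 in the summary). Below is a hedged BPF-side
sketch of the "augment, don't replace" idea; BPF_FLOW_DISSECTOR_CONTINUE is the new
return code added by this series, and its exact name here is an assumption.

	// SPDX-License-Identifier: GPL-2.0
	/* Sketch only: a flow dissector that handles nothing itself and defers
	 * every packet back to the in-kernel C dissector.
	 */
	#include "vmlinux.h"
	#include <bpf/bpf_helpers.h>

	char _license[] SEC("license") = "GPL";

	SEC("flow_dissector")
	int defer_to_kernel(struct __sk_buff *skb)
	{
		/* Custom logic would go here, e.g. parsing a proprietary
		 * encapsulation and returning BPF_OK with flow keys filled in.
		 * Returning CONTINUE asks the kernel dissector to take over.
		 */
		return BPF_FLOW_DISSECTOR_CONTINUE;
	}
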
5 changes: 5 additions & 0 deletions include/linux/sockptr.h
@@ -64,6 +64,11 @@ static inline int copy_to_sockptr_offset(sockptr_t dst, size_t offset,
return 0;
}

static inline int copy_to_sockptr(sockptr_t dst, const void *src, size_t size)
{
return copy_to_sockptr_offset(dst, 0, src, size);
}

static inline void *memdup_sockptr(sockptr_t src, size_t len)
{
void *p = kmalloc_track_caller(len, GFP_USER | __GFP_NOWARN);
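copy_to_sockptr() above rounds out the sockptr_t API so that getsockopt-style code can
write results back without knowing whether the caller passed a user or a kernel buffer.
A hedged sketch of the pattern the reworked do_*_getsockopt() paths follow; the function
name and the option value are invented for illustration.

	/* Illustrative pattern only; works for both user-space callers
	 * (USER_SOCKPTR) and bpf_getsockopt() (KERNEL_SOCKPTR).
	 */
	static int example_getsockopt(struct sock *sk, sockptr_t optval,
				      sockptr_t optlen)
	{
		int val = READ_ONCE(sk->sk_mark);	/* stand-in for a real option */
		int len;

		if (copy_from_sockptr(&len, optlen, sizeof(int)))
			return -EFAULT;
		if (len < 0)
			return -EINVAL;

		len = min_t(int, len, sizeof(int));
		if (copy_to_sockptr(optlen, &len, sizeof(int)))
			return -EFAULT;
		return copy_to_sockptr(optval, &val, len) ? -EFAULT : 0;
	}
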
20 changes: 18 additions & 2 deletions include/linux/tnum.h
@@ -21,7 +21,12 @@ struct tnum {
struct tnum tnum_const(u64 value);
/* A completely unknown value */
extern const struct tnum tnum_unknown;
/* A value that's unknown except that @min <= value <= @max */
/* An unknown value that is a superset of @min <= value <= @max.
*
* Could include values outside the range of [@min, @max].
* For example tnum_range(0, 2) is represented by {0, 1, 2, *3*},
* rather than the intended set of {0, 1, 2}.
*/
struct tnum tnum_range(u64 min, u64 max);

/* Arithmetic and logical ops */
@@ -73,7 +78,18 @@ static inline bool tnum_is_unknown(struct tnum a)
*/
bool tnum_is_aligned(struct tnum a, u64 size);

/* Returns true if @b represents a subset of @a. */
/* Returns true if @b represents a subset of @a.
*
* Note that using tnum_range() as @a requires extra cautions as tnum_in() may
* return true unexpectedly due to tnum limited ability to represent tight
* range, e.g.
*
* tnum_in(tnum_range(0, 2), tnum_const(3)) == true
*
* As a rule of thumb, if @a is explicitly coded rather than coming from
* reg->var_off, it should be in form of tnum_const(), tnum_range(0, 2**n - 1),
* or tnum_range(2**n, 2**(n+1) - 1).
*/
bool tnum_in(struct tnum a, struct tnum b);

/* Formatting functions. These have snprintf-like semantics: they will write
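The gotcha documented above can be reproduced outside the kernel. Below is a small
user-space sketch modeled on, but not identical to, the kernel's tnum representation
(a value/mask pair where a constant x is a member iff (x & ~mask) == value), showing how
tnum_range(0, 2) widens to the set {0, 1, 2, 3} and therefore why
tnum_in(tnum_range(0, 2), tnum_const(3)) returns true.

	/* User-space sketch of the tnum semantics described above. */
	#include <stdint.h>
	#include <stdio.h>

	struct tnum { uint64_t value; uint64_t mask; };

	static struct tnum tnum_range(uint64_t min, uint64_t max)
	{
		uint64_t chi = min ^ max;
		/* Smallest all-ones mask covering every bit where min and max
		 * differ: for (0, 2), chi = 0b10, so delta = 0b11 and the range
		 * widens to 0..3.
		 */
		unsigned int bits = chi ? 64 - __builtin_clzll(chi) : 0;
		uint64_t delta = bits ? (bits > 63 ? ~0ULL : (1ULL << bits) - 1) : 0;

		return (struct tnum){ .value = min & ~delta, .mask = delta };
	}

	static int tnum_in_const(struct tnum a, uint64_t constant)
	{
		/* A constant lies inside @a iff its known bits match a.value. */
		return (constant & ~a.mask) == a.value;
	}

	int main(void)
	{
		struct tnum r = tnum_range(0, 2);

		for (uint64_t x = 0; x <= 3; x++)
			printf("tnum_in(tnum_range(0, 2), %llu) = %d\n",
			       (unsigned long long)x, tnum_in_const(r, x));
		/* Prints 1 for x = 3 as well, matching the gotcha above. */
		return 0;
	}
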
4 changes: 4 additions & 0 deletions include/net/ip.h
@@ -743,8 +743,12 @@ void ip_cmsg_recv_offset(struct msghdr *msg, struct sock *sk,
int ip_cmsg_send(struct sock *sk, struct msghdr *msg,
struct ipcm_cookie *ipc, bool allow_ipv6);
DECLARE_STATIC_KEY_FALSE(ip4_min_ttl);
int do_ip_setsockopt(struct sock *sk, int level, int optname, sockptr_t optval,
unsigned int optlen);
int ip_setsockopt(struct sock *sk, int level, int optname, sockptr_t optval,
unsigned int optlen);
int do_ip_getsockopt(struct sock *sk, int level, int optname,
sockptr_t optval, sockptr_t optlen);
int ip_getsockopt(struct sock *sk, int level, int optname, char __user *optval,
int __user *optlen);
int ip_ra_control(struct sock *sk, unsigned char on,
6 changes: 5 additions & 1 deletion include/net/ipv6.h
@@ -1156,8 +1156,12 @@ struct in6_addr *fl6_update_dst(struct flowi6 *fl6,
*/
DECLARE_STATIC_KEY_FALSE(ip6_min_hopcount);

int do_ipv6_setsockopt(struct sock *sk, int level, int optname, sockptr_t optval,
unsigned int optlen);
int ipv6_setsockopt(struct sock *sk, int level, int optname, sockptr_t optval,
unsigned int optlen);
int do_ipv6_getsockopt(struct sock *sk, int level, int optname,
sockptr_t optval, sockptr_t optlen);
int ipv6_getsockopt(struct sock *sk, int level, int optname,
char __user *optval, int __user *optlen);

@@ -1207,7 +1211,7 @@ int ip6_mc_source(int add, int omode, struct sock *sk,
int ip6_mc_msfilter(struct sock *sk, struct group_filter *gsf,
struct sockaddr_storage *list);
int ip6_mc_msfget(struct sock *sk, struct group_filter *gsf,
struct sockaddr_storage __user *p);
sockptr_t optval, size_t ss_offset);

#ifdef CONFIG_PROC_FS
int ac6_proc_init(struct net *net);
4 changes: 4 additions & 0 deletions include/net/ipv6_stubs.h
@@ -81,6 +81,10 @@ struct ipv6_bpf_stub {
const struct in6_addr *daddr, __be16 dport,
int dif, int sdif, struct udp_table *tbl,
struct sk_buff *skb);
int (*ipv6_setsockopt)(struct sock *sk, int level, int optname,
sockptr_t optval, unsigned int optlen);
int (*ipv6_getsockopt)(struct sock *sk, int level, int optname,
sockptr_t optval, sockptr_t optlen);
};
extern const struct ipv6_bpf_stub *ipv6_bpf_stub __read_mostly;

9 changes: 9 additions & 0 deletions include/net/sock.h
@@ -1788,6 +1788,11 @@ static inline void unlock_sock_fast(struct sock *sk, bool slow)
}
}

void sockopt_lock_sock(struct sock *sk);
void sockopt_release_sock(struct sock *sk);
bool sockopt_ns_capable(struct user_namespace *ns, int cap);
bool sockopt_capable(int cap);

/* Used by processes to "lock" a socket state, so that
* interrupts and bottom half handlers won't change it
* from under us. It essentially blocks any incoming
@@ -1862,9 +1867,13 @@ void sock_pfree(struct sk_buff *skb);
#define sock_edemux sock_efree
#endif

int sk_setsockopt(struct sock *sk, int level, int optname,
sockptr_t optval, unsigned int optlen);
int sock_setsockopt(struct socket *sock, int level, int op,
sockptr_t optval, unsigned int optlen);

int sk_getsockopt(struct sock *sk, int level, int optname,
sockptr_t optval, sockptr_t optlen);
int sock_getsockopt(struct socket *sock, int level, int op,
char __user *optval, int __user *optlen);
int sock_gettstamp(struct socket *sock, void __user *userstamp,
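The sockopt_lock_sock()/sockopt_capable() family declared above exists so that
sk_setsockopt()/sk_getsockopt() can be shared between the regular syscall path and
bpf_setsockopt()/bpf_getsockopt(), where the socket lock is already held and capability
checks must not be based on the interrupted task's credentials. A hedged sketch of the
locking side, using has_current_bpf_ctx() from the bpf.h hunk earlier; the authoritative
version lives in net/core/sock.c and may differ in detail.

	/* Sketch of the idea behind sockopt_lock_sock()/sockopt_release_sock(). */
	void sockopt_lock_sock(struct sock *sk)
	{
		/* When called from a BPF program via bpf_setsockopt() or
		 * bpf_getsockopt(), the BPF hook has already acquired the socket
		 * lock, so taking it again here would deadlock.
		 */
		if (has_current_bpf_ctx())
			return;

		lock_sock(sk);
	}

	void sockopt_release_sock(struct sock *sk)
	{
		if (has_current_bpf_ctx())
			return;

		release_sock(sk);
	}
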
4 changes: 4 additions & 0 deletions include/net/tcp.h
@@ -402,9 +402,13 @@ void tcp_init_sock(struct sock *sk);
void tcp_init_transfer(struct sock *sk, int bpf_op, struct sk_buff *skb);
__poll_t tcp_poll(struct file *file, struct socket *sock,
struct poll_table_struct *wait);
int do_tcp_getsockopt(struct sock *sk, int level,
int optname, sockptr_t optval, sockptr_t optlen);
int tcp_getsockopt(struct sock *sk, int level, int optname,
char __user *optval, int __user *optlen);
bool tcp_bpf_bypass_getsockopt(int level, int optname);
int do_tcp_setsockopt(struct sock *sk, int level, int optname,
sockptr_t optval, unsigned int optlen);
int tcp_setsockopt(struct sock *sk, int level, int optname, sockptr_t optval,
unsigned int optlen);
void tcp_set_keepalive(struct sock *sk, int val);