
News, tips, partners, and perspectives for the Oracle Linux operating system and upstream Linux kernel work

BPF In Depth: BPF Helper Functions

Notes on BPF (2) - BPF helper functions

Oracle Linux kernel developer Alan Maguire presents this six-part series on BPF, wherein he takes an in-depth look at the kernel's "Berkeley Packet Filter" -- a useful and extensible kernel facility for much more than packet filtering.

Now that we have a list of program types, what can we do within programs we attach? A good place to start with writing BPF programs is to see what helper functions the various BPF program types have available to them. To see some of this, check out

https://github.com/oracle/linux-uek/blob/uek5/master/net/core/filter.c

It contains a set of struct bpf_verifier_ops data structures used by the BPF verifier. Here's an example for sk_filter programs:

const struct bpf_verifier_ops sk_filter_prog_ops = {
        .get_func_proto         = sk_filter_func_proto,
        .is_valid_access        = sk_filter_is_valid_access,
        .convert_ctx_access     = bpf_convert_ctx_access,
};

"get_func_proto" defines the set of functions supported by the program. The "is_valid_access" function checks if the read/write access for the memory offset is valid. The "convert_ctx_access" function converts accesses from bpf-specific (e.g. struct __sk_buff) structures into real access to the "struct sk_buff". This is all so the verifier can ensure your BPF program is calling valid functions and accessing valid data for the given instrumentation point.

Back to the function prototypes. First, there is a base set of functions available, the prototypes of which are returned by bpf_base_func_proto(). The following descriptions come from Quentin Monnet's more recent bpf-next changes which document helpers in include/uapi/linux/bpf.h - https://lwn.net/Articles/751527/ - here we're simply organizing them by the program type(s) that can use them.

void *bpf_map_lookup_elem(struct bpf_map *map, const void *key)

Description

    Perform a lookup in map for an entry associated to key.

Return

    Map value associated to key, or NULL if no entry was found.


int bpf_map_update_elem(struct bpf_map *map, const void *key, const void *value, u64 flags)

Description

    Add or update the value of the entry associated to key in map with value. flags is one of:


    BPF_NOEXIST

                      The entry for key must not exist in the map.

    BPF_EXIST

                      The entry for key must already exist in the map.

    BPF_ANY

                       No condition on the existence of the entry for key.

  

    Flag value BPF_NOEXIST cannot be used for maps of types BPF_MAP_TYPE_ARRAY or BPF_MAP_TYPE_PERCPU_ARRAY (all elements always exist); in that case, the helper would return an error.

Return

    0 on success, or a negative error in case of failure.


int bpf_map_delete_elem(struct bpf_map *map, const void *key)

Description

             Delete entry with key from map.

Return

              0 on success, or a negative error in case of failure.
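Taken together, these three helpers support a common read-modify-write pattern inside a BPF program. Below is a minimal sketch of a per-protocol packet counter; it assumes clang's BPF target and the SEC()/map-definition macros and helper declarations from a bpf_helpers.h header (as shipped in the kernel's samples/bpf), and all names here are illustrative:

```c
#include <linux/bpf.h>
#include "bpf_helpers.h"	/* SEC() and helper declarations */

/* Hash map keyed by protocol, holding a packet count. */
struct bpf_map_def SEC("maps") proto_count = {
	.type        = BPF_MAP_TYPE_HASH,
	.key_size    = sizeof(__u32),
	.value_size  = sizeof(__u64),
	.max_entries = 256,
};

SEC("socket")
int count_protocols(struct __sk_buff *skb)
{
	__u32 key = skb->protocol;
	__u64 init = 1, *count;

	count = bpf_map_lookup_elem(&proto_count, &key);
	if (count)
		(*count)++;		/* update the value in place */
	else
		/* BPF_NOEXIST: create the entry only if it is absent */
		bpf_map_update_elem(&proto_count, &key, &init, BPF_NOEXIST);

	return skb->len;		/* socket filter: keep the packet */
}
```

A user-space loader would attach this with SO_ATTACH_BPF and read the counts back with the bpf(2) map-lookup commands.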

u32 bpf_get_prandom_u32(void)

Description

              Get a pseudo-random number.


              From a security point of view, this helper uses its own pseudo-random internal state, and cannot be used to infer the seed of other random functions in the kernel. However, it is essential to note that the generator used by the helper is not cryptographically secure.

Return

              A random 32-bit unsigned value.

u32 bpf_get_smp_processor_id(void)

Description

              Get the SMP (symmetric multiprocessing) processor id. Note that all programs run with preemption disabled, which means that the SMP processor id is stable during all the execution of the program.

Return

              The SMP id of the processor running the program.

int bpf_get_numa_node_id(void)

Description

              Return the id of the current NUMA node.

Return

              The id of current NUMA node.


int bpf_tail_call(void *ctx, struct bpf_map *prog_array_map, u32 index)

Description

              This special helper is used to trigger a "tail call", or in other words, to jump into another eBPF program. The same stack frame is used (but values on stack and in registers for the caller are not accessible to the callee). This mechanism allows for program chaining, either for raising the maximum number of available eBPF instructions, or to execute given programs in conditional blocks. For security reasons, there is an upper limit to the number of successive tail calls that can be performed.


              Upon call of this helper, the program attempts to jump into a program referenced at index in prog_array_map, a special map of type BPF_MAP_TYPE_PROG_ARRAY, and passes ctx, a pointer to the context.


              If the call succeeds, the kernel immediately runs the first instruction of the new program. This is not a function call, and it never returns to the previous program. If the call fails, then the helper has no effect, and the caller continues to run its subsequent instructions. A call can fail if the destination program for the jump does not exist (i.e. index is higher than the number of entries in prog_array_map), or if the maximum number of tail calls has been reached for this chain of programs. This limit is defined in the kernel by the macro MAX_TAIL_CALL_CNT (not accessible to user space), which is currently set to 32.

Return

              0 on success, or a negative error in case of failure.
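A minimal dispatch sketch of the mechanism described above, assuming a user-space loader has populated the program array; the slot index and map names are illustrative:

```c
#include <linux/bpf.h>
#include "bpf_helpers.h"	/* SEC() and helper declarations */

/* Program array; slots are filled in by the user-space loader. */
struct bpf_map_def SEC("maps") jmp_table = {
	.type        = BPF_MAP_TYPE_PROG_ARRAY,
	.key_size    = sizeof(__u32),
	.value_size  = sizeof(__u32),
	.max_entries = 8,
};

SEC("socket")
int dispatch(struct __sk_buff *skb)
{
	/* On success this never returns to us... */
	bpf_tail_call(skb, &jmp_table, 0);

	/* ...so reaching here means slot 0 was empty or the
	 * MAX_TAIL_CALL_CNT limit was hit; fall back gracefully. */
	return skb->len;
}
```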

u64 bpf_ktime_get_ns(void)

Description

              Return the time elapsed since system boot, in nanoseconds.

Return

              Current *ktime*.


int bpf_trace_printk(const char *fmt, u32 fmt_size, ...)

Description

              This helper is a "printk()-like" facility for debugging. It prints a message defined by format fmt (of size fmt_size) to file /sys/kernel/debug/tracing/trace from DebugFS, if available. It can take up to three additional u64 arguments (as in eBPF helpers, the total number of arguments is limited to five).


              Each time the helper is called, it appends a line to the trace. The format of the trace is customizable, and the exact output one will get depends on the options set in /sys/kernel/debug/tracing/trace_options (see also the README file under the same directory). However, it usually defaults to something like:


                      telnet-470   [001] .N.. 419421.045894: 0x00000001: <formatted msg>


              In the above:


                      * ``telnet`` is the name of the current task.

                      * ``470`` is the PID of the current task.

                      * ``001`` is the CPU number on which the task is running

                      * In ``.N..``, each character refers to a set of options (whether irqs are enabled, scheduling options, whether hard/softirqs are running, level of preempt_disabled respectively). N means that TIF_NEED_RESCHED and PREEMPT_NEED_RESCHED are set.

                      * ``419421.045894`` is a timestamp.

                      * ``0x00000001`` is a fake value used by BPF for the instruction pointer register.

                      * ``<formatted msg>`` is the message formatted with fmt


              The conversion specifiers supported by fmt are similar to, but more limited than, those for printk(). They are %d, %i, %u, %x, %ld, %li, %lu, %lx, %lld, %lli, %llu, %llx, %p, %s. No modifier (size of field, padding with zeroes, etc.) is available, and the helper will return -EINVAL (but print nothing) if it encounters an unknown specifier.


              Also, note that bpf_trace_printk() is slow, and should only be used for debugging purposes. For this reason, the first time this helper is used (or more precisely, when the trace_printk() buffers are allocated), a notice block (spanning several lines) is printed to the kernel logs stating that the helper should not be used "for production use". For passing values to user space, perf events should be preferred.

Return

              The number of bytes written to the buffer, or a negative error in case of failure.
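A hedged example of this debugging pattern from a kprobe program; note that the format string has to live on the BPF stack, and the kprobe target here is just illustrative:

```c
#include <linux/bpf.h>
#include <linux/ptrace.h>
#include "bpf_helpers.h"	/* SEC() and helper declarations */

SEC("kprobe/tcp_v4_connect")
int log_connect(struct pt_regs *ctx)
{
	/* The format string must be on the BPF stack. */
	char fmt[] = "tcp_v4_connect() on CPU %u\n";

	bpf_trace_printk(fmt, sizeof(fmt), bpf_get_smp_processor_id());
	return 0;
}
```

The output appears in /sys/kernel/debug/tracing/trace as described above.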

Additionally, for each class of instrumentation target we see a _func_proto() function which enumerates the additional functions available, along with the base set. We will describe these functions, grouped by the program types that support them.

Socket-related BPF programs support the generic set of operations above, and a set of program-specific functions.

1.1 sk_filter programs

int bpf_skb_load_bytes(const struct sk_buff *skb, u32 offset, void *to, u32 len)

Description

              This helper was provided as an easy way to load data from a packet. It can be used to load len bytes from offset from the packet associated to skb, into the buffer pointed to by to. Since Linux 4.7, usage of this helper has mostly been replaced by "direct packet access", enabling packet data to be manipulated with skb->data and skb->data_end pointing respectively to the first byte of packet data and to the byte after the last byte of packet data. However, it remains useful if one wishes to read large quantities of data at once from a packet into the eBPF stack.

Return

              0 on success, or a negative error in case of failure.
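As a sketch of the above, here is a socket filter that copies the IPv4 header onto the BPF stack and keeps only TCP packets. The offsets assume an untagged Ethernet frame, and the program/section names are illustrative:

```c
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/in.h>
#include <linux/ip.h>
#include "bpf_helpers.h"	/* SEC() and helper declarations */

SEC("socket")
int tcp_only(struct __sk_buff *skb)
{
	struct iphdr iph;

	/* Copy the IPv4 header into the BPF stack. */
	if (bpf_skb_load_bytes(skb, ETH_HLEN, &iph, sizeof(iph)) < 0)
		return 0;		/* truncated packet: drop */

	/* Socket filters return the number of bytes to keep. */
	return iph.protocol == IPPROTO_TCP ? skb->len : 0;
}
```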


u64 bpf_get_socket_cookie(struct sk_buff *skb)

Description

              If the struct sk_buff * pointed by skb has a known socket, retrieve the cookie (generated by the kernel) of this socket. If no cookie has been set yet, generate a new cookie. Once generated, the socket cookie remains stable for the life of the socket. This helper can be useful for monitoring per socket networking traffic statistics as it provides a unique socket identifier per namespace.

Return

              An 8-byte long non-decreasing number on success, or 0 if the socket field is missing inside skb.

u32 bpf_get_socket_uid(struct sk_buff *skb)

Return

                The owner UID of the socket associated to skb. If the socket is NULL, or if it is not a full socket (i.e. if it is a time-wait or a request socket instead), overflowuid value is returned (note that overflowuid might also be the actual UID value for the socket).

1.2 sock_ops programs

int bpf_setsockopt(struct bpf_sock_ops *bpf_socket, int level, int optname, char *optval, int optlen)

Description

              Emulate a call to setsockopt() on the socket associated to bpf_socket, which must be a full socket. The level at which the option resides and the name optname of the option must be specified, see setsockopt(2) for more information. The option value of length optlen is pointed by optval.


              This helper actually implements a subset of setsockopt(). It supports the following levels:


              * SOL_SOCKET, which supports the following optnames:

                SO_RCVBUF, SO_SNDBUF, SO_MAX_PACING_RATE, SO_PRIORITY, SO_RCVLOWAT, SO_MARK.

              * IPPROTO_TCP, which supports the following optnames:

                TCP_CONGESTION, TCP_BPF_IW, TCP_BPF_SNDCWND_CLAMP.

              * IPPROTO_IP, which supports optname IP_TOS.

              * IPPROTO_IPV6, which supports optname IPV6_TCLASS.

Return

              0 on success, or a negative error in case of failure.
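A sketch of a sock_ops program using this helper to switch passively established connections to a different congestion control algorithm. The TCP_CONGESTION value is copied from <netinet/tcp.h> since BPF programs often can't include libc headers directly; the policy itself is purely illustrative:

```c
#include <linux/bpf.h>
#include <linux/in.h>
#include "bpf_helpers.h"	/* SEC() and helper declarations */

#define TCP_CONGESTION	13	/* from <netinet/tcp.h> */

SEC("sockops")
int set_cc(struct bpf_sock_ops *skops)
{
	char cc[] = "bbr";

	/* Fires once per passively established connection. */
	if (skops->op == BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB)
		bpf_setsockopt(skops, IPPROTO_TCP, TCP_CONGESTION,
			       cc, sizeof(cc));
	return 1;
}
```

The program would be attached to a cgroup with BPF_CGROUP_SOCK_OPS so it sees connection events for sockets in that cgroup.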


int bpf_sock_map_update(struct bpf_sock_ops *skops, struct bpf_map *map, void *key, u64 flags)

Description

              Add an entry to, or update a map referencing sockets. The skops is used as a new value for the entry associated to key. flags is one of:


              BPF_NOEXIST

                      The entry for key must not exist in the map.

              BPF_EXIST

                      The entry for key must already exist in the map.

              BPF_ANY

                      No condition on the existence of the entry for key.


              If the map has eBPF programs (parser and verdict), those will be inherited by the socket being added. If the socket is already attached to eBPF programs, this results in an error.

Return

              0 on success, or a negative error in case of failure.

1.3 sk_skb programs

In addition to the base set, the following are supported:

int bpf_skb_store_bytes(struct sk_buff *skb, u32 offset, const void *from, u32 len, u64 flags)

Description

              Store len bytes from address from into the packet associated to skb, at offset. flags are a combination of BPF_F_RECOMPUTE_CSUM (automatically recompute the checksum for the packet after storing the bytes) and BPF_F_INVALIDATE_HASH (set skb->hash, skb->swhash and skb->l4hash to 0).


              A call to this helper is susceptible to change the underlying packet buffer. Therefore, at load time, all checks on pointers previously done by the verifier are invalidated and must be performed again, if the helper is used in combination with direct packet access.

Return

              0 on success, or a negative error in case of failure.


int bpf_skb_pull_data(struct sk_buff *skb, u32 len)

Description

              Pull in non-linear data in case the skb is non-linear and not all of len are part of the linear section. Make len bytes from skb readable and writable. If a zero value is passed for len, then the whole length of the skb is pulled.


              This helper is only needed for reading and writing with direct packet access.


              For direct packet access, testing that offsets to access are within packet boundaries (test on skb->data_end) is susceptible to fail if offsets are invalid, or if the requested data is in non-linear parts of the skb. On failure the program can just bail out, or in the case of a non-linear buffer, use a helper to make the data available. The bpf_skb_load_bytes() helper is a first solution to access the data. Another one consists in using bpf_skb_pull_data() to pull in the non-linear parts once, then retesting and eventually accessing the data.


              At the same time, this also makes sure the skb is uncloned, which is a necessary condition for direct write. As this needs to be an invariant for the write part only, the verifier detects writes and adds a prologue that is calling bpf_skb_pull_data() to effectively unclone the skb from the very beginning in case it is indeed cloned.


              A call to this helper is susceptible to change the underlying packet buffer. Therefore, at load time, all checks on pointers previously done by the verifier are invalidated and must be performed again, if the helper is used in combination with direct packet access.

Return

              0 on success, or a negative error in case of failure.


int bpf_skb_change_tail(struct sk_buff *skb, u32 len, u64 flags)

Description

              Resize (trim or grow) the packet associated to skb to the new len. The flags are reserved for future usage, and must be left at zero.


              The basic idea is that the helper performs the needed work to change the size of the packet, then the eBPF program rewrites the rest via helpers like bpf_skb_store_bytes(), bpf_l3_csum_replace(), and others. This helper is a slow path utility intended for replies with control messages. And because it is targeted for slow path, the helper itself can afford to be slow: it implicitly linearizes, unclones and drops offloads from the skb.


              A call to this helper is susceptible to change the underlying packet buffer. Therefore, at load time, all checks on pointers previously done by the verifier are invalidated and must be performed again, if the helper is used in combination with direct packet access.

Return

              0 on success, or a negative error in case of failure.


int bpf_skb_change_head(struct sk_buff *skb, u32 len, u64 flags)

Description

              Grows headroom of packet associated to skb and adjusts the offset of the MAC header accordingly, adding len bytes of space. It automatically extends and reallocates memory as required.


              This helper can be used on a layer 3 skb to push a MAC header for redirection into a layer 2 device.


              All values for flags are reserved for future usage, and must be left at zero.


              A call to this helper is susceptible to change the underlying packet buffer. Therefore, at load time, all checks on pointers previously done by the verifier are invalidated and must be performed again, if the helper is used in combination with direct packet access.

Return

              0 on success, or a negative error in case of failure.


int bpf_sk_redirect_map(struct bpf_map *map, u32 key, u64 flags)

Description

              Redirect the packet to the socket referenced by map (of type BPF_MAP_TYPE_SOCKMAP) at index key. Both ingress and egress interfaces can be used for redirection. The BPF_F_INGRESS value in flags is used to make the distinction (ingress path is selected if the flag is present, egress path otherwise). This is the only flag supported for now.

Return

              SK_PASS on success, or SK_DROP on error.
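A minimal verdict-program sketch using this helper, following the prototype shown above (later kernels also take the skb as a first argument). The sockmap would be populated from user space or via bpf_sock_map_update(); the map name, section name and slot index are illustrative:

```c
#include <linux/bpf.h>
#include "bpf_helpers.h"	/* SEC() and helper declarations */

/* Sockmap populated from user space (or via bpf_sock_map_update()). */
struct bpf_map_def SEC("maps") sock_map = {
	.type        = BPF_MAP_TYPE_SOCKMAP,
	.key_size    = sizeof(__u32),
	.value_size  = sizeof(__u32),
	.max_entries = 2,
};

SEC("sk_skb")
int verdict(struct __sk_buff *skb)
{
	/* Egress redirect to the socket at index 0; returns SK_PASS
	 * on success, SK_DROP on error. */
	return bpf_sk_redirect_map(&sock_map, 0, 0);
}
```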

bpf_skb_load_bytes(), bpf_get_socket_cookie() and bpf_get_socket_uid() are also supported; see above for descriptions of these.

2. tc (traffic control) subsystem program functions

In addition to the base function set, the following are supported:

s64 bpf_csum_diff(__be32 *from, u32 from_size, __be32 *to, u32 to_size, __wsum seed)

Description

              Compute a checksum difference, from the raw buffer pointed by from, of length from_size (that must be a multiple of 4), towards the raw buffer pointed by to, of size to_size (same remark). An optional seed can be added to the value (this can be cascaded, the seed may come from a previous call to the helper).


              This is flexible enough to be used in several ways:


              * With from_size == 0, to_size > 0 and seed set to checksum, it can be used when pushing new data.

              * With from_size > 0, to_size == 0 and seed set to checksum, it can be used when removing data from a packet.

              * With from_size > 0, to_size > 0 and seed set to 0, it can be used to compute a diff. Note that from_size and to_size do not need to be equal.


              This helper can be used in combination with bpf_l3_csum_replace() and bpf_l4_csum_replace(), to which one can feed in the difference computed with bpf_csum_diff().

Return

              The checksum result, or a negative error code in case of failure.


s64 bpf_csum_update(struct sk_buff *skb, __wsum csum)

Description

              Add the checksum csum into skb->csum in case the driver has supplied a checksum for the entire packet into that field. Return an error otherwise. This helper is intended to be used in combination with bpf_csum_diff(), in particular when the checksum needs to be updated after data has been written into the packet through direct packet access.

Return

              The checksum on success, or a negative error code in case of failure.


int bpf_l3_csum_replace(struct sk_buff *skb, u32 offset, u64 from, u64 to, u64 size)

Description

              Recompute the layer 3 (e.g. IP) checksum for the packet associated to skb. Computation is incremental, so the helper must know the former value of the header field that was modified (from), the new value of this field (to), and the number of bytes (2 or 4) for this field, stored in size. Alternatively, it is possible to store the difference between the previous and the new values of the header field in to, by setting from and size to 0. For both methods, offset indicates the location of the IP checksum within the packet.


              This helper works in combination with bpf_csum_diff(), which does not update the checksum in-place, but offers more flexibility and can handle sizes larger than 2 or 4 for the checksum to update.


              A call to this helper is susceptible to change the underlying packet buffer. Therefore, at load time, all checks on pointers previously done by the verifier are invalidated and must be performed again, if the helper is used in combination with direct packet access.

Return

              0 on success, or a negative error in case of failure.


int bpf_l4_csum_replace(struct sk_buff *skb, u32 offset, u64 from, u64 to, u64 flags)

Description

              Recompute the layer 4 (e.g. TCP, UDP or ICMP) checksum for the packet associated to skb. Computation is incremental, so the helper must know the former value of the header field that was modified (from), the new value of this field (to), and the number of bytes (2 or 4) for this field, stored on the lowest four bits of flags. Alternatively, it is possible to store the difference between the previous and the new values of the header field in to, by setting from and the four lowest bits of flags to 0. For both methods, offset indicates the location of the checksum within the packet. In addition to the size of the field, actual flag values can be OR'ed into flags. With BPF_F_MARK_MANGLED_0, a null checksum is left untouched (unless BPF_F_MARK_ENFORCE is added as well), and for updates resulting in a null checksum the value is set to CSUM_MANGLED_0 instead. Flag BPF_F_PSEUDO_HDR indicates the checksum is to be computed against a pseudo-header.


              This helper works in combination with bpf_csum_diff(), which does not update the checksum in-place, but offers more flexibility and can handle sizes larger than 2 or 4 for the checksum to update.


              A call to this helper is susceptible to change the underlying packet buffer. Therefore, at load time, all checks on pointers previously done by the verifier are invalidated and must be performed again, if the helper is used in combination with direct packet access.

Return

              0 on success, or a negative error in case of failure.
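The two incremental checksum helpers are typically used together when rewriting a header field, as in NAT-style address rewriting. Here is a sketch for rewriting the IPv4 destination address of a TCP packet; the offsets assume Ethernet + IPv4 without options, and the function name is illustrative:

```c
#include <linux/bpf.h>
#include "bpf_helpers.h"	/* SEC() and helper declarations */

/* Offsets assume Ethernet + IPv4 (no options) + TCP; illustrative. */
#define IP_CSUM_OFF	(14 + 10)	/* iphdr->check  */
#define IP_DST_OFF	(14 + 16)	/* iphdr->daddr  */
#define TCP_CSUM_OFF	(14 + 20 + 16)	/* tcphdr->check */

static __always_inline int rewrite_daddr(struct __sk_buff *skb, __be32 new_ip)
{
	__be32 old_ip;

	if (bpf_skb_load_bytes(skb, IP_DST_OFF, &old_ip, sizeof(old_ip)) < 0)
		return -1;

	/* daddr is part of the TCP pseudo-header, so fix L4 first... */
	bpf_l4_csum_replace(skb, TCP_CSUM_OFF, old_ip, new_ip,
			    BPF_F_PSEUDO_HDR | sizeof(new_ip));
	/* ...then the IP header checksum, then store the new address. */
	bpf_l3_csum_replace(skb, IP_CSUM_OFF, old_ip, new_ip, sizeof(new_ip));
	return bpf_skb_store_bytes(skb, IP_DST_OFF, &new_ip,
				   sizeof(new_ip), 0);
}
```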



int bpf_clone_redirect(struct sk_buff *skb, u32 ifindex, u64 flags)

Description

              Clone and redirect the packet associated to skb to another net device of index ifindex. Both ingress and egress interfaces can be used for redirection. The BPF_F_INGRESS value in flags is used to make the distinction (ingress path is selected if the flag is present, egress path otherwise). This is the only flag supported for now.


              In comparison with the bpf_redirect() helper, bpf_clone_redirect() has the associated cost of duplicating the packet buffer, but this can be executed out of the eBPF program. Conversely, bpf_redirect() is more efficient, but it is handled through an action code where the redirection happens only after the eBPF program has returned.


              A call to this helper is susceptible to change the underlying packet buffer. Therefore, at load time, all checks on pointers previously done by the verifier are invalidated and must be performed again, if the helper is used in combination with direct packet access.

Return

              0 on success, or a negative error in case of failure.


int bpf_redirect(u32 ifindex, u64 flags)

Description

              Redirect the packet to another net device of index ifindex. This helper is somewhat similar to bpf_clone_redirect(), except that the packet is not cloned, which provides increased performance.


              Except for XDP, both ingress and egress interfaces can be used for redirection. The BPF_F_INGRESS value in flags is used to make the distinction (ingress path is selected if the flag is present, egress path otherwise). Currently, XDP only supports redirection to the egress interface, and accepts no flag at all.


              The same effect can be attained with the more generic bpf_redirect_map(), which requires specific maps to be used but offers better performance.

Return

              For XDP, the helper returns XDP_REDIRECT on success or XDP_ABORTED on error. For other program types, the values are TC_ACT_REDIRECT on success or TC_ACT_SHOT on error.
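A minimal tc classifier sketch using this helper; the target ifindex would normally come from a config map, and the constant here is purely illustrative:

```c
#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include "bpf_helpers.h"	/* SEC() and helper declarations */

#define TARGET_IFINDEX	2	/* illustrative; normally from a config map */

SEC("classifier")
int mirror_out(struct __sk_buff *skb)
{
	/* For tc, bpf_redirect() returns TC_ACT_REDIRECT on success;
	 * the actual redirection happens after the program returns. */
	return bpf_redirect(TARGET_IFINDEX, 0 /* egress */);
}
```

The program would be attached with tc filter add ... bpf da obj, and the "da" (direct-action) mode lets the return code act as the tc verdict.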



u32 bpf_get_cgroup_classid(struct sk_buff *skb)

Description

              Retrieve the classid for the current task, i.e. for the net_cls cgroup to which skb belongs.


              This helper can be used on TC egress path, but not on ingress.


              The net_cls cgroup provides an interface to tag network packets based on a user-provided identifier for all traffic coming from the tasks belonging to the related cgroup. See also the related kernel documentation, available from the Linux sources in file Documentation/cgroup-v1/net_cls.txt.


              The Linux kernel has two versions for cgroups: there are cgroups v1 and cgroups v2. Both are available to users, who can use a mixture of them, but note that the net_cls cgroup is for cgroup v1 only. This makes it incompatible with BPF programs run on cgroups, which is a cgroup-v2-only feature (a socket can only hold data for one version of cgroups at a time).


              This helper is only available if the kernel was compiled with the CONFIG_CGROUP_NET_CLASSID configuration option set to "y" or to "m".

Return

              The classid, or 0 for the default unconfigured classid.


int bpf_skb_under_cgroup(struct sk_buff *skb, struct bpf_map *map, u32 index)

Description

              Check whether skb is a descendant of the cgroup2 held by map of type BPF_MAP_TYPE_CGROUP_ARRAY, at index.

Return

              The return value depends on the result of the test, and can be:


              * 0, if the skb failed the cgroup2 descendant test.

              * 1, if the skb succeeded the cgroup2 descendant test.

              * A negative error code, if an error occurred.


int bpf_skb_vlan_push(struct sk_buff *skb, __be16 vlan_proto, u16 vlan_tci)

Description

              Push a vlan_tci (VLAN tag control information) of protocol vlan_proto to the packet associated to skb, then update the checksum. Note that if vlan_proto is different from ETH_P_8021Q and ETH_P_8021AD, it is considered to be ETH_P_8021Q.


              A call to this helper is susceptible to change the underlying packet buffer. Therefore, at load time, all checks on pointers previously done by the verifier are invalidated and must be performed again, if the helper is used in combination with direct packet access.

Return

              0 on success, or a negative error in case of failure.


int bpf_skb_vlan_pop(struct sk_buff *skb)

Description

              Pop a VLAN header from the packet associated to skb.


              A call to this helper is susceptible to change the underlying packet buffer. Therefore, at load time, all checks on pointers previously done by the verifier are invalidated and must be performed again, if the helper is used in combination with direct packet access.

Return

              0 on success, or a negative error in case of failure.


int bpf_skb_change_proto(struct sk_buff *skb, __be16 proto, u64 flags)

Description

              Change the protocol of the skb to proto. Currently supported are transition from IPv4 to IPv6, and from IPv6 to IPv4. The helper takes care of the groundwork for the transition, including resizing the socket buffer. The eBPF program is expected to fill the new headers, if any, via bpf_skb_store_bytes() and to recompute the checksums with bpf_l3_csum_replace() and bpf_l4_csum_replace(). The main case for this helper is to perform NAT64 operations out of an eBPF program.


              Internally, the GSO type is marked as dodgy so that headers are checked and segments are recalculated by the GSO/GRO engine. The size for GSO target is adapted as well.


              All values for flags are reserved for future usage, and must be left at zero.


              A call to this helper is susceptible to change the underlying packet buffer. Therefore, at load time, all checks on pointers previously done by the verifier are invalidated and must be performed again, if the helper is used in combination with direct packet access.

Return

              0 on success, or a negative error in case of failure.


int bpf_skb_change_type(struct sk_buff *skb, u32 type)

Description

              Change the packet type for the packet associated to skb. This comes down to setting skb->pkt_type to type, except the eBPF program does not have write access to skb->pkt_type beside this helper. Using a helper here allows for graceful handling of errors.


              The major use case is to change incoming skbs to PACKET_HOST in a programmatic way instead of having to recirculate via redirect(..., BPF_F_INGRESS), for example.


              Note that type only allows certain values. At this time, they are:


              PACKET_HOST

                      Packet is for us.

              PACKET_BROADCAST

                      Send packet to all.

              PACKET_MULTICAST

                      Send packet to group.

              PACKET_OTHERHOST

                      Send packet to someone else.

Return

              0 on success, or a negative error in case of failure.


int bpf_skb_get_tunnel_key(struct sk_buff *skb, struct bpf_tunnel_key *key, u32 size, u64 flags)

Description

              Get tunnel metadata. This helper takes a pointer key to an empty struct bpf_tunnel_key of size, that will be filled with tunnel metadata for the packet associated to skb. The flags can be set to BPF_F_TUNINFO_IPV6, which indicates that the tunnel is based on IPv6 protocol instead of IPv4.


              The struct bpf_tunnel_key is an object that generalizes the principal parameters used by various tunneling protocols into a single struct. This way, it can be used to easily make a decision based on the contents of the encapsulation header, "summarized" in this struct. In particular, it holds the IP address of the remote end (IPv4 or IPv6, depending on the case) in key->remote_ipv4 or key->remote_ipv6. Also, this struct exposes the key->tunnel_id, which is generally mapped to a VNI (Virtual Network Identifier), making it programmable together with the bpf_skb_set_tunnel_key() helper.


              Let's imagine that the following code is part of a program attached to the TC ingress interface, on one end of a GRE tunnel, and is supposed to filter out all messages coming from remote ends with IPv4 address other than 10.0.0.1:


              

                      int ret;
                      struct bpf_tunnel_key key = {};

                      ret = bpf_skb_get_tunnel_key(skb, &key, sizeof(key), 0);
                      if (ret < 0)
                              return TC_ACT_SHOT;     // drop packet

                      if (key.remote_ipv4 != 0x0a000001)
                              return TC_ACT_SHOT;     // drop packet

                      return TC_ACT_OK;               // accept packet


              This interface can also be used with all encapsulation devices that can operate in "collect metadata" mode: instead of having one network device per specific configuration, the "collect metadata" mode only requires a single device where the configuration can be extracted from this helper.


              This can be used together with various tunnels such as VXLAN, Geneve, GRE or IP in IP (IPIP).

Return

              0 on success, or a negative error in case of failure.



int bpf_skb_set_tunnel_key(struct sk_buff *skb, struct bpf_tunnel_key *key, u32 size, u64 flags)

Description

              Populate tunnel metadata for packet associated to skb. The tunnel metadata is set to the contents of key, of size. The flags can be set to a combination of the following values:


              BPF_F_TUNINFO_IPV6

                      Indicate that the tunnel is based on IPv6 protocol instead of IPv4.

              BPF_F_ZERO_CSUM_TX

                      For IPv4 packets, add a flag to tunnel metadata indicating that checksum computation should be skipped and checksum set to zeroes.

              BPF_F_DONT_FRAGMENT

                      Add a flag to tunnel metadata indicating that the packet should not be fragmented.

              BPF_F_SEQ_NUMBER

                      Add a flag to tunnel metadata indicating that a sequence number should be added to tunnel header before sending the packet. This flag was added for GRE encapsulation, but might be used with other protocols

                      as well in the future.


              Here is a typical usage on the transmit path:


                      struct bpf_tunnel_key key;

                            /* ... populate key (remote IP, tunnel ID, ...) ... */

                      bpf_skb_set_tunnel_key(skb, &key, sizeof(key), 0);

                      bpf_clone_redirect(skb, vxlan_dev_ifindex, 0);


              See also the description of the bpf_skb_get_tunnel_key() helper for additional information.

Return

              0 on success, or a negative error in case of failure.


int bpf_skb_get_tunnel_opt(struct sk_buff *skb, u8 *opt, u32 size)

Description

              Retrieve tunnel options metadata for the packet associated to skb, and store the raw tunnel option data to the buffer opt of size.


              This helper can be used with encapsulation devices that can operate in "collect metadata" mode (please refer to the related note in the description of bpf_skb_get_tunnel_key() for more details). A particular example where this can

               be used is in combination with the Geneve encapsulation protocol, where it allows for pushing (with the bpf_skb_set_tunnel_opt() helper) and retrieving (with this helper) arbitrary TLVs (Type-Length-Value headers) from the eBPF program. This allows for

               full customization of these headers.

Return

              The size of the option data retrieved.


int bpf_skb_set_tunnel_opt(struct sk_buff *skb, u8 *opt, u32 size)

Description

              Set tunnel options metadata for the packet associated to skb to the option data contained in the raw buffer opt of size.


              See also the description of the bpf_skb_get_tunnel_opt() helper for additional information.

Return

              0 on success, or a negative error in case of failure.




u32 bpf_get_route_realm(struct sk_buff *skb)

Description

              Retrieve the realm of the route, that is to say the tclassid field of the destination for the skb. The identifier retrieved is a user-provided tag, similar to the one used with the net_cls cgroup (see description for

              bpf_get_cgroup_classid() helper), but here this tag is held by a route (a destination entry), not by a task.


              Retrieving this identifier works with the clsact TC egress hook (see also tc-bpf(8)), or alternatively on conventional classful egress qdiscs, but not on TC ingress path. In case of clsact TC egress hook, this has

              the advantage that, internally, the destination entry has not been dropped yet in the transmit path. Therefore, the destination entry does not need to be artificially held via netif_keep_dst() for a classful

              qdisc until the skb is freed.


              This helper is available only if the kernel was compiled with CONFIG_IP_ROUTE_CLASSID configuration option.

Return

              The realm of the route for the packet associated to skb, or 0 if none was found.


u32 bpf_get_hash_recalc(struct sk_buff *skb)

Description

              Retrieve the hash of the packet, skb->hash. If it is not set, in particular if the hash was cleared due to mangling, recompute this hash. Later accesses to the hash can be done directly with skb->hash.


              Calling bpf_set_hash_invalid(), changing a packet protocol with bpf_skb_change_proto(), or calling bpf_skb_store_bytes() with the BPF_F_INVALIDATE_HASH flag are actions susceptible to clear the hash and to trigger a new computation

              for the next call to bpf_get_hash_recalc().

Return

              The 32-bit hash.



void bpf_set_hash_invalid(struct sk_buff *skb)

Description

              Invalidate the current skb->hash. It can be used after mangling on headers through direct packet access, in order to indicate that the hash is outdated and to trigger a recalculation the next time the kernel tries to access this

              hash or when the bpf_get_hash_recalc() helper is called.


u32 bpf_set_hash(struct sk_buff *skb, u32 hash)

Description

              Set the full hash for skb (set the field skb->hash) to value hash.

Return

              0


int bpf_perf_event_output(struct pt_regs *ctx, struct bpf_map *map, u64 flags, void *data, u64 size)

Description

              Write raw data blob into a special BPF perf event held by map of type BPF_MAP_TYPE_PERF_EVENT_ARRAY. This perf event must have the following attributes: PERF_SAMPLE_RAW as sample_type, PERF_TYPE_SOFTWARE as type, and

              PERF_COUNT_SW_BPF_OUTPUT as config.


              The flags are used to indicate the index in map for which the value must be put, masked with BPF_F_INDEX_MASK. Alternatively, flags can be set to BPF_F_CURRENT_CPU to indicate that the index of the current CPU core should be used.


              The value to write, of size, is passed through eBPF stack and pointed by data.


              The context of the program ctx needs also be passed to the helper.


              On user space, a program willing to read the values needs to call perf_event_open() on the perf event (either for one or for all CPUs) and to store the file descriptor into the map. This must be done before the eBPF program can send data

              into it. An example is available in file samples/bpf/trace_output_user.c in the Linux kernel source tree (the eBPF program counterpart is in samples/bpf/trace_output_kern.c).


              bpf_perf_event_output() achieves better performance than bpf_trace_printk() for sharing data with user space, and is much better suited for streaming data from eBPF programs.


              Note that this helper is not restricted to tracing use cases and can be used with programs attached to TC or XDP as well, where it allows for passing data to user space listeners. Data can be:


              * Only custom structs,

              * Only the packet payload, or

              * A combination of both.

Return

              0 on success, or a negative error in case of failure.

bpf_skb_store_bytes, bpf_skb_load_bytes, bpf_skb_pull_data, bpf_skb_change_tail, bpf_get_socket_cookie and bpf_get_socket_uid are also supported. See above for descriptions.

3. XDP: eXpress Data Path program functions

In addition to the base set, bpf_perf_event_output, bpf_get_smp_processor_id, bpf_redirect and bpf_redirect_map are all supported as described above.

int bpf_xdp_adjust_head(struct xdp_buff *xdp_md, int delta)

 Description

              Adjust (move) xdp_md->data by delta bytes. Note that it is possible to use a negative value for delta. This helper can be used to prepare the packet for pushing or popping headers.


              A call to this helper is susceptible to change the underlying packet buffer. Therefore, at load time, all checks on pointers previously done by the verifier are invalidated and must be

              performed again, if the helper is used in combination with direct packet access.

Return

              0 on success, or a negative error in case of failure.

4. kprobes, tracepoints and perf events program functions

To figure out which helper functions are supported for these program types, we need to look at kernel/trace/bpf_trace.c. Here a common set of verifier ops valid for all these program types is defined in tracing_func_proto(). This is the equivalent of the base function prototype in filter.c.

The base set of functions for BPF filters is supported here too: bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_tail_call, bpf_trace_printk, bpf_get_smp_processor_id, bpf_get_numa_node_id and bpf_get_prandom_u32. In addition, bpf_perf_event_read and bpf_perf_event_output are valid, and defined above. Other functions (not previously described) are:

int bpf_get_stackid(struct pt_regs *ctx, struct bpf_map *map, u64 flags)
Description
            Walk a user or a kernel stack and return its id. To achieve this, the helper needs ctx, which is a pointer to the context on which the tracing program is executed, and a pointer to a map of type BPF_MAP_TYPE_STACK_TRACE.  The last argument, flags, holds the number of stack

            frames to skip (from 0 to 255), masked with BPF_F_SKIP_FIELD_MASK. The next bits can be used to set a combination of the following flags:


            BPF_F_USER_STACK

                   Collect a user space stack instead of a kernel stack.

            BPF_F_FAST_STACK_CMP

                   Compare stacks by hash only.

            BPF_F_REUSE_STACKID

                   If two different stacks hash into the same stackid, discard the old one.


             The stack id retrieved is a 32 bit long integer handle which can be further combined with other data (including other stack ids) and used as a key into maps. This can be useful for generating a variety of graphs (such as flame graphs or off-cpu graphs).


             For walking a stack, this helper is an improvement over bpf_probe_read(), which can be used with unrolled loops but is not efficient and consumes a lot of eBPF instructions. Instead, bpf_get_stackid() can collect up to PERF_MAX_STACK_DEPTH kernel and user frames. Note that this limit can be controlled with the sysctl program, and that it should be manually increased in order to profile long user stacks (such as stacks for Java programs). To do so, use:


             # sysctl kernel.perf_event_max_stack=<new value>

Return

            The positive or null stack id on success, or a negative error  in case of failure.


int bpf_probe_read(void *dst, u32 size, const void *src)

Description

             For tracing programs, safely attempt to read size bytes from address src and store the data in dst.

Return

              0 on success, or a negative error in case of failure.


u64 bpf_get_current_pid_tgid(void)

Return

              A 64-bit integer containing the current tgid and pid, and created as such:

              current_task->tgid << 32 | current_task->pid.


u64 bpf_get_current_uid_gid(void)

Return

              A 64-bit integer containing the current GID and UID, and created as such: current_gid << 32 | current_uid.


int bpf_get_current_comm(char *buf, u32 size_of_buf)

Description

              Copy the comm attribute of the current task into buf of size_of_buf. The comm attribute contains the name of the executable (excluding the path) for the current task. The size_of_buf must be strictly positive. On success, the

              helper makes sure that the buf is NUL-terminated. On failure, it is filled with zeroes.

Return

              0 on success, or a negative error in case of failure.


u64 bpf_get_current_task(void)

      Return

              A pointer to the current task struct.


int bpf_probe_write_user(void *dst, const void *src, u32 len)

Description

              Attempt in a safe way to write len bytes from the buffer src to dst in memory. It only works for threads that are in user context, and dst must be a valid user space address.


              This helper should not be used to implement any kind of security mechanism because of TOC-TOU attacks, but rather to debug, divert, and manipulate execution of semi-cooperative processes.


              Keep in mind that this feature is meant for experiments, and it has a risk of crashing the system and running programs. Therefore, when an eBPF program using this helper is attached, a warning

              including PID and process name is printed to kernel logs.

Return

              0 on success, or a negative error in case of failure.


int bpf_current_task_under_cgroup(struct bpf_map *map, u32 index)

Description

              Check whether the probe is being run in the context of a given subset of the cgroup2 hierarchy. The cgroup2 to test is held by map of type BPF_MAP_TYPE_CGROUP_ARRAY, at index.

      Return

              The return value depends on the result of the test, and can be:


              * 1, if current task belongs to the cgroup2.

              * 0, if current task does not belong to the cgroup2.

              * A negative error code, if an error occurred.



int bpf_probe_read_str(void *dst, int size, const void *unsafe_ptr)

Description

              Copy a NUL terminated string from an unsafe address unsafe_ptr to dst. The size should include the terminating NUL byte. In case the string length is smaller than size, the target is not padded with further NUL bytes. If the

              string length is larger than size, just size-1 bytes are copied and the last byte is set to NUL.


              On success, the length of the copied string is returned. This makes this helper useful in tracing programs for reading strings, and more importantly to get its length at runtime. See the following snippet:



                      SEC("kprobe/sys_open")

                      void bpf_sys_open(struct pt_regs *ctx)

                      {

                              char buf[PATHLEN]; // PATHLEN is defined to 256

                              int res = bpf_probe_read_str(buf, sizeof(buf),

                                                           ctx->di);


                              // Consume buf, for example push it to

                              // userspace via bpf_perf_event_output(); we

                              // can use res (the string length) as event

                              // size, after checking its boundaries.

                      }


              In comparison, using bpf_probe_read() helper here instead to read the string would require to estimate the length at compile time, and would often result in copying more memory than necessary.


              Another useful use case is when parsing individual process arguments or individual environment variables navigating current->mm->arg_start and current->mm->env_start: using this helper and the return value,

              one can quickly iterate at the right offset of the memory area.

Return

              On success, the strictly positive length of the string, including the trailing NUL character. On error, a negative value.

5. Cgroup sock/skb program functions

For cgroup sock/skb programs, in addition to the base set, one additional function is supported: bpf_get_current_uid_gid, defined above.

6. Lightweight tunnel program functions

For Lightweight tunnel in/out/xmit, in addition to the base set of functions,

bpf_skb_load_bytes, bpf_skb_pull_data, bpf_csum_diff, bpf_get_cgroup_classid, bpf_perf_event_output, bpf_get_route_realm, bpf_get_hash_recalc, bpf_get_smp_processor_id and bpf_skb_under_cgroup

are all supported, and defined above.

For Lightweight tunnel xmit only:

bpf_skb_get_tunnel_key, bpf_skb_set_tunnel_key, bpf_skb_get_tunnel_opt, bpf_skb_set_tunnel_opt, bpf_redirect, bpf_clone_redirect, bpf_skb_change_tail, bpf_skb_change_head, bpf_skb_store_bytes, bpf_csum_update, bpf_l3_csum_replace,

bpf_l4_csum_replace, bpf_set_hash_invalid are all supported and defined above.

Summary

We've described the various program types, and the functions they support. However before we can start writing BPF programs, we need to talk about BPF maps - a key data structure for sharing information that can be used (among other things) to share information between BPF programs and user-space.

Learning more about BPF

Thanks for reading this installment of our series on BPF. We hope you found it educational and useful. Questions or comments? Use the comments field below!

Stay tuned for the next installment in this series, BPF and Userspace.

Previously:

  1. BPF program types
