
Developing Applications with NVIDIA BlueField DPU and NVIDIA DOCA Libraries

The development process for DPUs can get complex. This is where NVIDIA DOCA comes in: its built-in libraries allow for plug-and-play components and simple application development, simplifying the creation of BlueField DPU applications.

In the previous post, you saw the creation of an FRR dataplane plugin to accelerate PBR rules on BlueField using the DPDK rte_flow library; for part 1, see Developing Applications with NVIDIA BlueField DPU and DPDK. In this post, I take you through the creation of the FRR DOCA dataplane plugin and show you how to offload PBR rules using the new DOCA flow library.

Adding the DOCA dataplane plugin to Zebra 

I still used the DPDK APIs for hardware initialization, but then used the DOCA flow APIs to set up the dataplane flow pipeline. To do that, I had to link the DPDK (libdpdk.pc) and DOCA flow (doca-flow.pc) shared libraries to the DOCA dataplane plugin.

root@dpu-arm:~# export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/opt/mellanox/dpdk/lib/aarch64-linux-gnu/pkgconfig 
root@dpu-arm:~# pkg-config --libs doca-flow 
-ldoca_flow 
root@dpu-arm:~# pkg-config --cflags doca-flow 
-DALLOW_EXPERIMENTAL_API -include rte_config.h -mcpu=cortex-a72 -DALLOW_EXPERIMENTAL_API -I/opt/mellanox/dpdk/include/dpdk -I/opt/mellanox/dpdk/include/dpdk/../aarch64-linux-gnu/dpdk -I/opt/mellanox/dpdk/include/dpdk -I/usr/include/libnl3 
root@dpu-arm:~# 

I added the package check-and-define macro for DPDK and DOCA flow to the FRR build configuration (configure.ac).

if test "$enable_dp_doca" = "yes"; then 
  PKG_CHECK_MODULES([DOCA], [libdpdk doca-flow], [ 
    AC_DEFINE([HAVE_DOCA], [1], [Enable DOCA backend]) 
    DOCA=true 
  ], [ 
    AC_MSG_ERROR([configuration specifies --enable-dp-doca but DOCA libs were not found]) 
  ]) 
fi
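
With that check in place, the plugin is compiled in by passing the matching option when configuring the FRR build (a minimal sketch; other FRR configure options are omitted):

./configure --enable-dp-doca
make && sudo make install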

I included both the DPDK and DOCA flow libs and cflags in the zebra-dp-doca make macros (zebra/subdir.am).

zebra_zebra_dplane_doca_la_CFLAGS = $(DOCA_CFLAGS) 
zebra_zebra_dplane_doca_la_LIBADD  = $(DOCA_LIBS) 

The DOCA dataplane plugin can be enabled when the FRR service is started using /etc/frr/daemons.

zebra_options=" -M dplane_doca -A 127.0.0.1"

Hardware initialization and port mapping 

I used the DPDK APIs rte_eal_init and rte_eth_dev_info_get to initialize the hardware and to set up the Zebra interface to DPDK port mapping. This workflow is the same as with the DPDK dataplane plugin in the previous post.
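
A rough sketch of that initialization (this is illustrative, not the plugin source; the helper name zd_doca_hw_init is hypothetical):

#include <rte_eal.h>
#include <rte_ethdev.h>

// Sketch: bring up the EAL, then walk the probed ports and record each
// one in the Zebra-interface-to-DPDK-port map.
static int zd_doca_hw_init(int argc, char **argv)
{
    struct rte_eth_dev_info dev_info;
    uint16_t port_id;

    if (rte_eal_init(argc, argv) < 0)
        return -1;

    RTE_ETH_FOREACH_DEV(port_id) {
        if (rte_eth_dev_info_get(port_id, &dev_info) < 0)
            continue;
        // record port_id and dev_info in the interface map here
    }
    return 0;
}

The resulting port mapping can be displayed with vtysh: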

root@dpu-arm:~# vtysh -c "show dplane doca port" 
Total ports: 6 cores: 8 
Port Device           IfName           IfIndex          sw,domain,port 
0    0000:03:00.0     p0               4                0000:03:00.0,0,65535 
1    0000:03:00.0     pf0hpf           6                0000:03:00.0,0,4095 
2    0000:03:00.0     pf0vf0           15               0000:03:00.0,0,4096 
3    0000:03:00.0     pf0vf1           16               0000:03:00.0,0,4097 
4    0000:03:00.1     p1               5                0000:03:00.1,1,65535 
5    0000:03:00.1     pf1hpf           7                0000:03:00.1,1,20479 
root@dpu-arm:~#

DOCA flow initialization 

To use doca-flow for programming PBR rules, I had to initialize the doca-flow and doca-flow-port databases. This initialization was done after the hardware was initialized using rte_eal_init.

I used doca_flow_init for initializing the doca-flow library with the flow and queue count config.

struct doca_flow_cfg flow_cfg; 

memset(&flow_cfg, 0, sizeof(flow_cfg)); 
flow_cfg.total_sessions = ZD_DOCA_FLOW_MAX; 
flow_cfg.queues = doca_ctx->nb_cores;  

doca_flow_init(&flow_cfg, &err); 

As I used DPDK to set up the hardware ports, I had to install them in the doca-flow-port database, keyed by the dpdk_port_id.

struct doca_flow_port_cfg port_cfg; 
char port_id_str[ZD_PORT_STR_MAX]; 

memset(&port_cfg, 0, sizeof(port_cfg)); 
port_cfg.port_id = dpdk_port_id; 
port_cfg.type = DOCA_FLOW_PORT_DPDK_BY_ID; 
snprintf(port_id_str, ZD_PORT_STR_MAX, "%u", port_cfg.port_id); 
port_cfg.devargs = port_id_str; 

doca_port = doca_flow_port_start(&port_cfg, &err);

Programming PBR rules using doca-flow APIs 

DOCA flows are programmed with a series of data structures for the match, action, forward, and monitor attributes.

struct doca_flow_match match, match_mask; 
struct doca_flow_actions actions; 
struct doca_flow_fwd fwd; 
struct doca_flow_monitor monitor;

Flow match 

This is specified as a match and match-mask. Match-mask is optional and is auto-filled by the doca-flow library if not specified.

memset(&match, 0, sizeof(match)); 
memset(&match_mask, 0, sizeof(match_mask)); 

match.out_src_ip.type = DOCA_FLOW_IP4_ADDR; 
match.out_src_ip.ipv4_addr = src_ip; 
match_mask.out_src_ip.ipv4_addr = src_ip_mask; 

match.out_dst_ip.type = DOCA_FLOW_IP4_ADDR; 
match.out_dst_ip.ipv4_addr = dst_ip; 
match_mask.out_dst_ip.ipv4_addr = dst_ip_mask; 

match.out_l4_type = ip_proto; 

match.out_src_port = RTE_BE16(l4_src_port); 
match_mask.out_src_port = UINT16_MAX; 

match.out_dst_port = RTE_BE16(l4_dst_port); 
match_mask.out_dst_port = UINT16_MAX; 

I skipped populating fields such as eth or eth-mask. This is because the doca-flow library can auto-populate them to RTE_ETHER_TYPE_IPV4 or RTE_ETHER_TYPE_IPV6, based on other match fields such as dst_ip or src_ip.

Flow actions 

To route the packet, I had to change the destination MAC address to the gateway (leaf2) MAC, decrement the TTL, and change the source MAC address. This was originally discussed in part 1, Developing Applications with NVIDIA BlueField DPU and DPDK.

memset(&actions, 0, sizeof(actions)); 

actions.dec_ttl = true; 
memcpy(actions.mod_src_mac, uplink_mac, DOCA_ETHER_ADDR_LEN); 
memcpy(actions.mod_dst_mac, gw_mac, DOCA_ETHER_ADDR_LEN); 

Flow forward 

Then, I set the output port to the uplink. 

memset(&fwd, 0, sizeof(fwd)); 
 
fwd.type = DOCA_FLOW_FWD_PORT; 
fwd.port_id = out_port_id; 

Flow monitoring 

I set up flow counters for troubleshooting.

memset(&monitor, 0, sizeof(monitor));  

monitor.flags |= DOCA_FLOW_MONITOR_COUNT; 

DOCA flow pipes and entries 

Flow creation is a two-step process:

  1. Create a flow pipe.
  2. Add a flow entry to the flow pipe. 

The first step creates a software template for a lookup stage. The second step uses the template to program the flow in the hardware. 

Pipes are useful when you must program many similar flows. For such a case, you can set up a single match template (pipe) and indicate which match-field must be updated at the time of flow entry creation (for example, a layer 4 destination port). Subsequent flow entries need only populate the match fields that vary from the pipe (the layer 4 destination port). 
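
As an illustration of that templating (not taken from the FRR plugin), here is a minimal sketch. It assumes the doca-flow convention that setting a pipe match field to all-ones marks it as changeable, so each entry supplies only that field:

struct doca_flow_match entry_match; 

// Pipe template: UDP flows, with the L4 destination port changeable 
memset(&match, 0, sizeof(match)); 
match.out_l4_type = IPPROTO_UDP; 
match.out_dst_port = UINT16_MAX; 

// Per-entry: populate only the varying field 
memset(&entry_match, 0, sizeof(entry_match)); 
entry_match.out_dst_port = RTE_BE16(53); 
flow_entry = doca_flow_pipe_add_entry(0, flow_pipe, &entry_match, 
                                      &actions, &monitor, &fwd, &err); 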

In the case of PBR, each flow pattern is unique, so I created a separate pipe and entry for each PBR rule using the flow attributes that I already populated. 

struct doca_flow_pipe_cfg pipe_cfg; 

memset(&pipe_cfg, 0, sizeof(pipe_cfg)); 
pipe_cfg.name = "pbr"; 
pipe_cfg.port = in_dport->doca_port; 
pipe_cfg.match = &match; 
pipe_cfg.match_mask = &match_mask; 
pipe_cfg.actions = &actions; 
pipe_cfg.monitor = &monitor; 
pipe_cfg.is_root = true; 

flow_pipe = doca_flow_create_pipe(&pipe_cfg, &fwd, NULL, &err); 
flow_entry = doca_flow_pipe_add_entry(0, flow_pipe, &match, &actions, &monitor, &fwd, &err);

Flow deletion 

The flow pipe and entry creation APIs return pipe and flow pointers that must be cached for subsequent deletion. 

doca_flow_pipe_rm_entry(0, flow_entry); 
doca_flow_destroy_pipe(port_id, flow_pipe); 
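
A minimal sketch of one way to cache those handles alongside each PBR rule (the struct and field names here are hypothetical):

struct zd_doca_pbr_rule { 
    // ...match and action bookkeeping... 
    struct doca_flow_pipe *flow_pipe;         // from doca_flow_create_pipe 
    struct doca_flow_pipe_entry *flow_entry;  // from doca_flow_pipe_add_entry 
}; 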

Flow statistics 

At the time of flow creation, I set the DOCA_FLOW_MONITOR_COUNT flag. I queried the flow stats using doca_flow_query.

struct doca_flow_query query; 

// hit counters – query.total_pkts and query.total_bytes 
memset(&query, 0, sizeof(query)); 
doca_flow_query(flow_entry, &query); 

Verifying hardware acceleration 

The FRR PBR rule configuration and traffic generation are the same as with the DPDK plugin. The traffic is forwarded by the DPU hardware as expected and can be verified using the flow counters.

root@dpu-arm:~# vtysh -c "show dplane doca pbr flow" 
Rules if pf0vf0 
  Seq 1 pri 300 
  SRC IP Match: 172.20.0.8/32 
  DST IP Match: 172.30.0.8/32 
  IP protocol Match: 17 
  DST Port Match: 53 
  Tableid: 10000 
  Action: nh: 192.168.20.250 intf: p0 
  Action: mac: 00:00:5e:00:01:fa 
  DOCA flow: installed 0xffff28005150 
  DOCA stats: packets 202 bytes 24644 
root@dpu-arm:~# 

It can also be verified using hardware entries:

root@dpu-arm:~# ~/mlx_steering_dump/mlx_steering_dump_parser.py -p `pidof zebra` -f /tmp/dpdkDump 
domain 0xe294002, table 0xaaab07648b10, matcher 0xffff28012c30, rule 0xffff28014040 
   match: outer_l3_type: 0x1, outer_ip_dst_addr: 172.30.0.8, outer_l4_type: 0x2, metadata_reg_c_0: 0x00030000, outer_l4_dport: 0x0035, outer_ip_src_addr: 172.20.0.8 
   action: MODIFY_HDR(hdr(dec_ip4_ttl)), rewrite index 0x0 & VPORT, num 0xffff & CTR(hits(352), bytes(42944)), index 0x806200

FRR now has a second dataplane plugin for hardware acceleration of PBR rules, using doca-flow.

Application development takeaways 

In this series, you saw how a DPU networking application can be hardware-accelerated in four steps, using rte_flow or doca_flow:

  • Link the DOCA/DPDK libraries to the application. 
  • Initialize the hardware.
  • Set up the application-to-hardware port mapping.
  • Program flows for steering the traffic.

As more elements are offloaded to the DPU, the development process can get complex, with an increasing number of source lines of code (SLOC). That’s where DOCA abstractions help: 

  • DOCA comes with several built-in libraries such as doca-dpi, gRPC, Firefly time synchronization, and more. These libraries enable quick plug-and-play for your application.
  • DOCA constructs such as doca_pipe enable you to templatize your pipeline, eliminating boilerplate code and optimizing flow insertion.
  • Upcoming DOCA libraries, such as the hardware-accelerated LPM (longest prefix match), make building switch pipelines easier. This is particularly relevant to the sample application in this series, FRR, which is commonly deployed to build an LPM routing table (or RIB) with BGP.
  • With DOCA, you can also leapfrog into the exciting world of GPU + DPU development on converged accelerators.
Figure 1. Showcasing the BlueField-2 data processing unit as a converged accelerator

Are you ready to take your application development to dizzying heights? Sign up for the DOCA Early Access developer program to start building today.

