# eBPF Tutorial by Example 14: Recording TCP Connection Status and TCP RTT

eBPF (Extended Berkeley Packet Filter) is a powerful network and performance analysis tool widely used in the Linux kernel. eBPF allows developers to dynamically load, update, and run user-defined code without restarting the kernel or changing the kernel source code.

In this article of our eBPF Tutorial by Example series, we will introduce two sample programs: `tcpstates` and `tcprtt`. `tcpstates` is used to record the state changes of TCP connections, while `tcprtt` is used to record the Round-Trip Time (RTT) of TCP.

## `tcprtt` and `tcpstates`

Network quality is crucial in today's Internet environment. Many factors affect network quality, including hardware, the network environment, and the quality of the software itself. To help users better locate network issues, we introduce the tool `tcprtt`. `tcprtt` monitors the Round-Trip Time (RTT) of TCP connections, so users can evaluate network quality and identify potential problems.

When a TCP connection is established, `tcprtt` automatically selects an appropriate probe function according to the current system. In that function, `tcprtt` collects basic information about the TCP connection, such as the source address, destination address, source port, destination port, and elapsed time, and updates this information into a histogram-type BPF map. After the run finishes, `tcprtt`'s user-space code presents the collected information to the user graphically.

`tcpstates` is a tool specifically designed to track and print changes in TCP connection state. It shows how long a TCP connection spends in each state, in milliseconds. For example, for a single TCP session, `tcpstates` prints output similar to the following:

```sh
SKADDR           C-PID C-COMM     LADDR           LPORT RADDR           RPORT OLDSTATE    -> NEWSTATE    MS
ffff9fd7e8192000 22384 curl       100.66.100.185  0     52.33.159.26    80    CLOSE       -> SYN_SENT    0.000
ffff9fd7e8192000 0     swapper/5  100.66.100.185  63446 52.33.159.26    80    SYN_SENT    -> ESTABLISHED 1.373
ffff9fd7e8192000 22384 curl       100.66.100.185  63446 52.33.159.26    80    ESTABLISHED -> FIN_WAIT1   176.042
ffff9fd7e8192000 0     swapper/5  100.66.100.185  63446 52.33.159.26    80    FIN_WAIT1   -> FIN_WAIT2   0.536
ffff9fd7e8192000 0     swapper/5  100.66.100.185  63446 52.33.159.26    80    FIN_WAIT2   -> CLOSE       0.006
```

In the output above, the most time was spent in the ESTABLISHED state, that is, the connection was established and transferring data; the transition from this state to FIN_WAIT1 (the start of connection shutdown) took 176.042 milliseconds.

In the remainder of this tutorial, we will explore these two tools in more depth and explain their implementation principles; hopefully this content will help you use eBPF for network and performance analysis in your own work.

## tcpstates eBPF code

Due to space constraints, we will mainly discuss and analyze the corresponding eBPF kernel-side implementation here. The following is the eBPF code for tcpstates:

```c
const volatile bool filter_by_sport = false;
const volatile bool filter_by_dport = false;
...

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, MAX_ENTRIES);
    __type(key, __u16);
    __type(value, __u16);
} dports SEC(".maps");

...

SEC("tracepoint/sock/inet_sock_set_state")
int handle_set_state(struct trace_event_raw_inet_sock_set_state *ctx)
{
    ...
        bpf_probe_read_kernel(&event.saddr, sizeof(event.saddr), &sk->__sk_common.skc_v6_rcv_saddr.in6_u.u6_addr32);
        bpf_probe_read_kernel(&event.daddr, sizeof(event.daddr), &sk->__sk_common.skc_v6_daddr.in6_u.u6_addr32);
    }

    bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU, &event, sizeof(event));

    if (ctx->newstate == TCP_CLOSE)
        bpf_map_delete_elem(&timestamps, &sk);
    else
        bpf_map_update_elem(&timestamps, &sk, &ts, BPF_ANY);

    return 0;
}
```

`tcpstates` relies mainly on eBPF tracepoints to capture TCP connection state changes, which lets it track how long a connection stays in each state.

### Define BPF Maps

In the `tcpstates` program, several BPF maps are defined first; they are the primary channel of interaction between the eBPF program and the user-space program. `sports` and `dports` store source and destination ports for filtering TCP connections; `timestamps` stores a timestamp for each TCP connection, so the time spent in each state can be computed; and `events` is a perf event array map (`BPF_MAP_TYPE_PERF_EVENT_ARRAY`) used to send event data to user space.
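
The other two maps are not shown in the excerpt above; they would look roughly like this (a sketch modeled on the libbpf-tools version of tcpstates; the exact field types are assumptions based on the description above):

```c
/* Sketch: per-socket timestamp map and the perf event array. */
struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, MAX_ENTRIES);
    __type(key, struct sock *); /* the connection's socket */
    __type(value, __u64);       /* time of the last state change, in ns */
} timestamps SEC(".maps");

struct {
    __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
    __uint(key_size, sizeof(__u32));
    __uint(value_size, sizeof(__u32));
} events SEC(".maps");
```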

### Trace TCP Connection State Changes

The program defines a function called `handle_set_state`, a tracepoint-type program attached to the kernel's `sock/inet_sock_set_state` tracepoint. Whenever a TCP connection changes state, the tracepoint fires and `handle_set_state` is executed.
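
For reference, the context structure passed to the handler carries the old and new state along with the connection's addresses and ports. Below is a sketch of the commonly used fields (taken from the tracepoint's format; treat the exact layout as an assumption and use the definition from `vmlinux.h` in real code):

```c
/* Sketch of the sock/inet_sock_set_state tracepoint context. */
struct trace_event_raw_inet_sock_set_state {
    struct trace_entry ent;
    const void *skaddr;     /* address of the struct sock */
    int oldstate;           /* previous TCP state */
    int newstate;           /* new TCP state */
    __u16 sport;
    __u16 dport;
    __u16 family;           /* AF_INET or AF_INET6 */
    __u16 protocol;
    __u8 saddr[4];
    __u8 daddr[4];
    __u8 saddr_v6[16];
    __u8 daddr_v6[16];
};
```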

In `handle_set_state`, the program first runs a series of conditional checks to decide whether the current TCP connection should be processed. It then fetches the connection's previous timestamp from the `timestamps` map and computes how long the connection stayed in the current state. Next, it fills an event structure with the collected data and sends the event to user space via `bpf_perf_event_output`.
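
The duration computation at the heart of this step might look like the following sketch (variable names such as `tsp` and `delta_us` are assumptions; the exact code is in the tutorial's tcpstates.bpf.c):

```c
/* Sketch: how long did the socket stay in the previous state? */
struct sock *sk = (struct sock *)ctx->skaddr;
__u64 *tsp, ts = bpf_ktime_get_ns(); /* time of this state change, ns */
__s64 delta_us = 0;

tsp = bpf_map_lookup_elem(&timestamps, &sk);
if (tsp)
    delta_us = (__s64)(ts - *tsp) / 1000; /* ns -> us spent in the old state */
```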

### Update Timestamps

Finally, the program acts according to the connection's new state: if the new state is TCP_CLOSE, the connection has been closed, so the program deletes the connection's timestamp from the `timestamps` map; otherwise, it updates the timestamp.

## User-Space Processing for tcpstates

The user-space part mainly loads the eBPF program using libbpf and then receives event data from the kernel through a perf event buffer:

```c
static void handle_event(void* ctx, int cpu, void* data, __u32 data_sz) {
    struct event* e = data;
    ...
    if (wide_output) {
        ...
    } else {
        printf(
            "%-16llx %-7d %-10.10s %-15s %-5d %-15s %-5d %-11s -> %-11s %.3f\n",
            e->skaddr, e->pid, e->task, saddr, e->sport, daddr, e->dport,
            tcp_states[e->oldstate], tcp_states[e->newstate],
            (double)e->delta_us / 1000);
    }
}
...
```

`handle_event` is exactly such a callback: it is invoked by the perf event buffer whenever a new event arrives from the kernel, and it processes each event.

In the `handle_event` function, we first use `inet_ntop` to convert the binary IP addresses into human-readable form, and then, depending on whether wide output is requested, print the information in one of two formats. The information includes the event timestamp, source IP address, source port, destination IP address, destination port, old state, new state, and the time spent in the old state.
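
The conversion step might look like this (a sketch; it assumes `e->family` holds AF_INET or AF_INET6 and that `e->saddr`/`e->daddr` are the raw address bytes, as in the event structure used above):

```c
#include <arpa/inet.h> /* inet_ntop */

char saddr[INET6_ADDRSTRLEN], daddr[INET6_ADDRSTRLEN];

/* Turn the raw bytes into printable addresses for either address family. */
inet_ntop(e->family, &e->saddr, saddr, sizeof(saddr));
inet_ntop(e->family, &e->daddr, daddr, sizeof(daddr));
```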

This allows users to see the changes in TCP connection states and the duration of each state, helping them diagnose network issues.

In summary, the user-space part of the processing involves the following steps (a minimal skeleton is sketched after the list):

1. Use libbpf to load and run the eBPF program.
2. Set up a callback function to receive events sent by the kernel.
3. Process the received events, convert them into a human-readable format, and print them.
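
A minimal skeleton of these three steps might look like the code below. This is a sketch, not the tutorial's exact code: the header name `tcpstates.skel.h` and the `tcpstates_bpf__*` functions are assumptions based on the usual libbpf skeleton convention, and `handle_event` is the callback shown earlier.

```c
#include <bpf/libbpf.h>
#include "tcpstates.skel.h" /* assumed: generated by `bpftool gen skeleton` */

int main(void) {
    struct tcpstates_bpf *obj;
    struct perf_buffer *pb = NULL;

    /* 1. Load and attach the eBPF program. */
    obj = tcpstates_bpf__open_and_load();
    if (!obj)
        return 1;
    if (tcpstates_bpf__attach(obj))
        goto cleanup;

    /* 2. Register the callback on the 'events' perf event array. */
    pb = perf_buffer__new(bpf_map__fd(obj->maps.events), 16,
                          handle_event, NULL, NULL, NULL);
    if (!pb)
        goto cleanup;

    /* 3. Poll; handle_event prints each event in human-readable form. */
    while (perf_buffer__poll(pb, 100) >= 0)
        ;

cleanup:
    perf_buffer__free(pb);
    tcpstates_bpf__destroy(obj);
    return 0;
}
```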

The above is the main implementation logic of the user-space part of the `tcpstates` program. Through this section, you should have gained a deeper understanding of how to handle kernel events in user space. In the next section, we will introduce more about using eBPF for network monitoring.

## tcprtt kernel eBPF code

In this section, we analyze the kernel-side code of the `tcprtt` eBPF program. `tcprtt` measures TCP Round-Trip Time (RTT) and aggregates the RTT samples into a histogram.

```c
/// @sample {"interval": 1000, "type" : "log2_hist"}
struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, MAX_ENTRIES);
    __type(key, u64);
    __type(value, struct hist);
} hists SEC(".maps");

...

SEC("fentry/tcp_rcv_established")
int BPF_PROG(tcp_rcv, struct sock *sk)
{
    ...
}
```

First, we define a hash-type eBPF map named `hists`, which stores the RTT statistics. In this map, the key is a 64-bit integer and the value is a `hist` structure containing an array that counts how many samples fall into each RTT interval.

Next, we define an eBPF program named `tcp_rcv`, which is called every time the kernel processes incoming data on an established TCP connection. In this program, we first filter TCP connections based on the configured conditions (source/destination IP address and port). If a connection passes the filter, we select the histogram key according to the configured parameters (the source IP, the destination IP, or 0) and then look up, or initialize, the corresponding histogram in the `hists` map.

Then, we read the connection's `srtt_us` field, which holds the smoothed RTT value in microseconds. We convert this RTT value to logarithmic form and use it as the slot index into the histogram, as the sketch below shows.
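
The slot computation might look like the following sketch (based on the bcc/libbpf-tools tcprtt logic; `log2l` is assumed to come from libbpf-tools' bits.bpf.h, and the variable names are assumptions):

```c
/* Sketch: turn the smoothed RTT into a log2 histogram slot. */
struct tcp_sock *tp = (struct tcp_sock *)sk;
u32 srtt = BPF_CORE_READ(tp, srtt_us) >> 3; /* kernel stores srtt_us as 8 * RTT */
u64 slot = log2l(srtt);                     /* log2 bucket index */

if (slot >= MAX_SLOTS)
    slot = MAX_SLOTS - 1;                   /* clamp to the last bucket */
__sync_fetch_and_add(&histp->slots[slot], 1);

if (show_ext) {
    __sync_fetch_and_add(&histp->latency, srtt); /* accumulate raw RTT */
    __sync_fetch_and_add(&histp->cnt, 1);        /* and the sample count */
}
```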

If the `show_ext` parameter is set, we also accumulate the raw RTT value into the histogram's `latency` field and increment its `cnt` field.

With this processing in place, we can collect statistics on the RTT of every TCP connection and analyze them to better understand network performance.

In summary, the main logic of the `tcprtt` eBPF program consists of the following steps:

1. Filter TCP connections according to the filtering conditions.
2. Look up or initialize the corresponding histogram in the `hists` map.
3. Read the connection's `srtt_us` field, convert it to logarithmic form, and store it in the histogram.
4. If the `show_ext` parameter is set, accumulate the RTT value into the histogram's `latency` field and increment its `cnt` field.

`tcprtt` is attached to the kernel's `tcp_rcv_established` function:

```c
void tcp_rcv_established(struct sock *sk, struct sk_buff *skb);
```

This function is the kernel's main routine for processing incoming TCP data, and it is called primarily when a TCP connection is in the `ESTABLISHED` state. Its processing logic consists of a fast path and a slow path. The fast path is disabled in the following cases:

- We have announced a zero window; zero-window probing can only be handled correctly in the slow path.
- An out-of-order packet was received.
- Urgent data is expected.
- There is no remaining buffer space.
- An unexpected TCP flag, window value, or header length was received (detected by checking the TCP header against a precomputed set of flags).
- Data is flowing in both directions; the fast path supports only a pure sender or a pure receiver (meaning the sequence number or the acknowledgment value must stay unchanged).
- Unexpected TCP options were received.

When the fast path cannot be used, the function falls back to a standard receive procedure that follows RFC 793 and handles all cases. The first three conditions above can be guaranteed by setting the prediction flags correctly; the remaining ones require inline checks. When everything is in order, fast-path processing is switched on in the `tcp_data_queue` function.

## Compilation and Execution

For `tcpstates`, you can compile and run the libbpf application with the following commands:

```console
$ make
...
$ sudo ./tcpstates
SKADDR           PID    COMM       LADDR          LPORT RADDR       RPORT OLDSTATE    -> NEWSTATE    MS
ffff9bf61bb62bc0 164978 node       192.168.88.15  0     52.178.17.2 443   CLOSE       -> SYN_SENT    0.000
ffff9bf61bb62bc0 0      swapper/0  192.168.88.15  41596 52.178.17.2 443   SYN_SENT    -> ESTABLISHED 225.794
ffff9bf61bb62bc0 0      swapper/0  192.168.88.15  41596 52.178.17.2 443   ESTABLISHED -> CLOSE_WAIT  901.454
ffff9bf61bb62bc0 164978 node       192.168.88.15  41596 52.178.17.2 443   CLOSE_WAIT  -> LAST_ACK    0.793
ffff9bf61bb62bc0 164978 node       192.168.88.15  41596 52.178.17.2 443   LAST_ACK    -> LAST_ACK    0.086
ffff9bf61bb62bc0 228759 kworker/u6 192.168.88.15  41596 52.178.17.2 443   LAST_ACK    -> CLOSE       0.193
...
ffff9bf7109d6900 88750  node       127.0.0.1      39755 127.0.0.1   50966 ESTABLISHED -> FIN_WAIT1   0.000
```

For `tcprtt`, we can use eunomia-bpf to compile and run this example:

Compile:

```console
docker run -it -v `pwd`/:/src/ ghcr.io/eunomia-bpf/ecc-`uname -m`:latest
```
|
||||
|
||||
或者
|
||||
Or

```console
$ ecc tcprtt.bpf.c tcprtt.h
Compiling bpf object...
Generating export types...
Packing ebpf object and config into package.json...
```

Run:

```console
$ sudo ecli run package.json -h
A simple eBPF program

Usage: package.json [OPTIONS]

Options:
...
  -h, --help     Print help
  -V, --version  Print version

Built with eunomia-bpf framework.
See https://github.com/eunomia-bpf/eunomia-bpf for more information.
```

```console
$ sudo ecli run package.json
key = 0
...
cnt = 0

     32 -> 63         : 0        |                                        |
     64 -> 127        : 0        |                                        |
    128 -> 255        : 0        |                                        |
    256 -> 511        : 0        |                                        |
    512 -> 1023       : 11       |***************************             |
   1024 -> 2047       : 1        |**                                      |
   2048 -> 4095       : 0        |                                        |
   4096 -> 8191       : 16       |****************************************|
   8192 -> 16383      : 4        |**********                              |
```

Each histogram row counts samples whose smoothed RTT falls into that power-of-two interval (in microseconds by default); here, for example, 11 samples had an RTT between 512 and 1023 microseconds.

Complete source code:

- <https://github.com/eunomia-bpf/bpf-developer-tutorial/tree/main/src/14-tcpstates>

References:

- [tcpstates](https://github.com/iovisor/bcc/blob/master/tools/tcpstates_example.txt)
- [tcprtt](https://github.com/iovisor/bcc/blob/master/tools/tcprtt.py)
- [libbpf-tools/tcpstates](https://github.com/iovisor/bcc/blob/master/libbpf-tools/tcpstates.bpf.c)

## Summary

In this eBPF tutorial, we learned how to use `tcpstates` and `tcprtt`, two eBPF example programs, to monitor and analyze TCP connection states and round-trip time. We covered how `tcpstates` and `tcprtt` work and how they are implemented, including how to store data in BPF maps, how to retrieve and process TCP connection information in eBPF programs, and how to parse and display the data collected by eBPF programs in user-space applications.

If you would like to learn more about eBPF, you can visit our tutorial code repository at <https://github.com/eunomia-bpf/bpf-developer-tutorial> or our website <https://eunomia.dev/tutorials/> for more examples and complete tutorials. Upcoming tutorials will further explore advanced features of eBPF, and we will continue to share more content about eBPF development practices.

> The original link of this article: <https://eunomia.dev/tutorials/14-tcpstates>