Mirror of <https://github.com/eunomia-bpf/bpf-developer-tutorial.git>

Commit: add TODO for unfinished files
10-hardirqs/.gitignore (vendored, new file, 7 lines added)
@@ -0,0 +1,7 @@
.vscode
package.json
*.o
*.skel.json
*.skel.yaml
package.yaml
ecli

@@ -1,4 +1,4 @@
# eBPF Beginner's Development Practice Guide 10: Capturing Interrupt Events in eBPF with hardirqs or softirqs

eBPF (Extended Berkeley Packet Filter) is a powerful networking and performance-analysis facility in the Linux kernel. It lets developers dynamically load, update, and run user-defined code while the kernel is running.

@@ -10,19 +10,9 @@ hardirqs is part of the bcc-tools toolkit, a collection of tools for
hardirqs is a tool for tracing and analyzing interrupt handlers in the Linux kernel. It uses BPF (Berkeley Packet Filter) programs to collect data about interrupt handlers, and it can be used to identify performance problems and other issues related to interrupt handling in the kernel.

## Usage

sudo hardirqs: this command prints information about the kernel's interrupt handlers, including each handler's name, statistics, and other related data.

hardirqs provides several options that you can use to control its output. Commonly used options include:

- -h: show help information, including a description and example for every available option.
- -p PID: limit the output to the interrupt handlers of the specified process.
- -t: include timestamps in the output, in milliseconds.
- -d: run hardirqs continuously and show live data for the interrupt handlers in the output.
- -l: show the full path of each interrupt handler in the output.

## How it works

In the Linux kernel, every interrupt handler is identified by a unique name known as an interrupt vector. hardirqs monitors the kernel's interrupt handlers by examining these interrupt vectors: when the kernel receives an interrupt, it looks up the handler associated with that interrupt and executes it, and hardirqs observes which handlers get executed. In addition, hardirqs can load BPF programs into the kernel to capture the interrupt handlers as they run, which lets it collect information about them and about their execution.

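As a rough illustration of this idea only (a simplified sketch of my own, not the tutorial's actual hardirqs.bpf.c; map and program names are made up), one way to measure how long each hard-IRQ handler runs is to pair the irq_handler_entry and irq_handler_exit tracepoints and accumulate per-IRQ time in a hash map:

```c
// Sketch: time each hard IRQ handler by pairing entry/exit tracepoints.
#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 1024);
    __type(key, u32);    /* IRQ number */
    __type(value, u64);  /* entry timestamp (ns) */
} start SEC(".maps");

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 1024);
    __type(key, u32);    /* IRQ number */
    __type(value, u64);  /* accumulated handler time (ns) */
} total_ns SEC(".maps");

SEC("tp_btf/irq_handler_entry")
int BPF_PROG(handle_entry, int irq, struct irqaction *action)
{
    u32 key = irq;
    u64 ts = bpf_ktime_get_ns();

    bpf_map_update_elem(&start, &key, &ts, BPF_ANY);
    return 0;
}

SEC("tp_btf/irq_handler_exit")
int BPF_PROG(handle_exit, int irq, struct irqaction *action)
{
    u32 key = irq;
    u64 *tsp, *sum, delta, zero = 0;

    tsp = bpf_map_lookup_elem(&start, &key);
    if (!tsp)
        return 0;
    delta = bpf_ktime_get_ns() - *tsp;

    sum = bpf_map_lookup_elem(&total_ns, &key);
    if (!sum) {
        bpf_map_update_elem(&total_ns, &key, &zero, BPF_ANY);
        sum = bpf_map_lookup_elem(&total_ns, &key);
        if (!sum)
            return 0;
    }
    __sync_fetch_and_add(sum, delta);
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```

The real tool additionally aggregates these deltas into histograms keyed by interrupt name; the sketch only shows the basic entry/exit pairing.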
## Code implementation

@@ -166,7 +156,7 @@ char LICENSE[] SEC("license") = "GPL";

To compile this program, use the ecc tool:

```console
$ ecc hardirqs.bpf.c
Compiling bpf object...
Packing ebpf object and config into package.json...
```

@@ -174,5 +164,11 @@ Packing ebpf object and config into package.json...

Then run it:

```console
sudo ecli ./package.json
```

## Summary

For more examples and a detailed development guide, see the official eunomia-bpf documentation: <https://github.com/eunomia-bpf/eunomia-bpf>

The complete tutorial and source code are open source and available at <https://github.com/eunomia-bpf/bpf-developer-tutorial>.

11-bootstrap/.gitignore (vendored, new file, 7 lines added)
@@ -0,0 +1,7 @@
.vscode
package.json
*.o
*.skel.json
*.skel.yaml
package.yaml
ecli

@@ -1,152 +1,144 @@
# eBPF Beginner's Practical Tutorial: Measuring TCP Connection Latency with libbpf-bootstrap

## Background

When writing backend services, no matter whether you use C, Java, PHP, or Golang, you inevitably call components such as MySQL or Redis to fetch data, make RPC calls, or invoke other RESTful APIs. Underneath these calls, data is almost always carried over TCP: among transport-layer protocols, TCP offers reliable connections, retransmission on error, congestion control, and so on, so it is currently used more widely than UDP. On the other hand, TCP also has drawbacks, such as a relatively long connection-setup delay, which is why alternatives such as QUIC (Quick UDP Internet Connections) have appeared.

Analyzing TCP connection latency is useful both for network performance analysis and optimization and for troubleshooting.

## How tcpconnlat works

tcpconnlat traces the kernel functions that perform active TCP connections (for example, via the connect() system call) and shows the locally measured latency of each connection, i.e. the time from sending the SYN to receiving the response packet.

### How a TCP connection is established

The whole process of establishing a TCP connection is shown in the figure below:



Let's briefly look at where time goes in each step of this process:

1. The client sends a SYN packet: the client usually issues the SYN via the connect system call, which involves local system-call and soft-interrupt CPU overhead.
2. The SYN travels to the server: the SYN leaves the client's NIC; this is a long-haul network transfer.
3. The server processes the SYN: the kernel receives the packet via a soft interrupt, puts the connection into the half-open (SYN) queue, and then sends the SYN/ACK response. This is mainly CPU overhead.
4. The SYN/ACK travels back to the client: another long network trip.
5. The client processes the SYN/ACK: the client kernel receives and handles the SYN/ACK, spends a few microseconds of CPU time, and then sends the ACK. Again this is soft-interrupt processing overhead.
6. The ACK travels to the server: another long network trip.
7. The server receives the ACK: the server kernel receives and processes the ACK, removes the connection from the half-open queue, and places it into the fully established (accept) queue. One soft-interrupt's worth of CPU overhead.
8. The server's user process is woken up: the user process blocked in the accept system call is woken up and takes the established connection out of the accept queue. One context switch's worth of CPU overhead.

From the client's point of view, under normal conditions the total time for a TCP connection is roughly one network RTT. In some situations, however, the network transfer time can go up, CPU processing overhead can increase, or the connection can even fail. Once you notice that the latency is too high, you can combine this measurement with other information to analyze the problem.

### How the eBPF implementation works

During the TCP three-way handshake, the Linux kernel maintains two queues:

- the half-open connection queue, also called the SYN queue;
- the fully established connection queue, also called the accept queue.

After the server receives a SYN from the client, the kernel stores the connection in the half-open queue and replies with SYN+ACK. The client then returns an ACK; when the server receives this third-handshake ACK, the kernel removes the connection from the half-open queue, creates a fully established connection, adds it to the accept queue, and leaves it there until the process calls accept() to take it out.

Our eBPF implementation is at <https://github.com/yunwei37/Eunomia/blob/master/bpftools/tcpconnlat/tcpconnlat.bpf.c>:

It mainly uses attach points such as trace_tcp_rcv_state_process and kprobe/tcp_v4_connect:

```c
SEC("kprobe/tcp_v4_connect")
int BPF_KPROBE(tcp_v4_connect, struct sock *sk)
{
    return trace_connect(sk);
}

SEC("kprobe/tcp_v6_connect")
int BPF_KPROBE(tcp_v6_connect, struct sock *sk)
{
    return trace_connect(sk);
}

SEC("kprobe/tcp_rcv_state_process")
int BPF_KPROBE(tcp_rcv_state_process, struct sock *sk)
{
    return handle_tcp_rcv_state_process(ctx, sk);
}
```

In trace_connect we track the new TCP connection, record its start time, and add it to a map:

```c
struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 4096);
    __type(key, struct sock *);
    __type(value, struct piddata);
} start SEC(".maps");

static int trace_connect(struct sock *sk)
{
    u32 tgid = bpf_get_current_pid_tgid() >> 32;
    struct piddata piddata = {};

    if (targ_tgid && targ_tgid != tgid)
        return 0;

    bpf_get_current_comm(&piddata.comm, sizeof(piddata.comm));
    piddata.ts = bpf_ktime_get_ns();
    piddata.tgid = tgid;
    bpf_map_update_elem(&start, &sk, &piddata, 0);
    return 0;
}
```

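The snippet above stores a struct piddata per socket; that type lives in the tool's header and is not shown in this excerpt. A minimal sketch of what it must contain, inferred purely from how the fields are used above (the exact layout in the real header may differ), would be:

```c
/* Sketch of the per-connection data kept in the `start` map while the
 * handshake is in flight; fields mirror their uses in trace_connect(). */
#define TASK_COMM_LEN 16

struct piddata {
    char comm[TASK_COMM_LEN]; /* process name at connect() time      */
    u64 ts;                   /* timestamp of the SYN, in nanoseconds */
    u32 tgid;                 /* process id that issued the connect   */
};
```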
In handle_tcp_rcv_state_process we track the received TCP packet, look up the corresponding connect event in the map, and compute the latency:

```c
static int handle_tcp_rcv_state_process(void *ctx, struct sock *sk)
{
    struct piddata *piddatap;
    struct event event = {};
    s64 delta;
    u64 ts;

    if (BPF_CORE_READ(sk, __sk_common.skc_state) != TCP_SYN_SENT)
        return 0;

    piddatap = bpf_map_lookup_elem(&start, &sk);
    if (!piddatap)
        return 0;

    ts = bpf_ktime_get_ns();
    delta = (s64)(ts - piddatap->ts);
    if (delta < 0)
        goto cleanup;

    event.delta_us = delta / 1000U;
    if (targ_min_us && event.delta_us < targ_min_us)
        goto cleanup;
    __builtin_memcpy(&event.comm, piddatap->comm,
            sizeof(event.comm));
    event.ts_us = ts / 1000;
    event.tgid = piddatap->tgid;
    event.lport = BPF_CORE_READ(sk, __sk_common.skc_num);
    event.dport = BPF_CORE_READ(sk, __sk_common.skc_dport);
    event.af = BPF_CORE_READ(sk, __sk_common.skc_family);
    if (event.af == AF_INET) {
        event.saddr_v4 = BPF_CORE_READ(sk, __sk_common.skc_rcv_saddr);
        event.daddr_v4 = BPF_CORE_READ(sk, __sk_common.skc_daddr);
    } else {
        BPF_CORE_READ_INTO(&event.saddr_v6, sk,
                __sk_common.skc_v6_rcv_saddr.in6_u.u6_addr32);
        BPF_CORE_READ_INTO(&event.daddr_v6, sk,
                __sk_common.skc_v6_daddr.in6_u.u6_addr32);
    }
    bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU,
            &event, sizeof(event));

cleanup:
    bpf_map_delete_elem(&start, &sk);
    return 0;
}
```

## Details in bcc

Demonstrations of tcpconnect, the Linux eBPF/bcc version.

This tool traces the kernel function performing active TCP connections (eg, via a connect() syscall; accept() are passive connections). Some example output (IP addresses changed to protect the innocent):

```console
# ./tcpconnect
PID    COMM         IP SADDR            DADDR            DPORT
1479   telnet       4  127.0.0.1        127.0.0.1        23
1469   curl         4  10.201.219.236   54.245.105.25    80
1469   curl         4  10.201.219.236   54.67.101.145    80
1991   telnet       6  ::1              ::1              23
2015   ssh          6  fe80::2000:bff:fe82:3ac fe80::2000:bff:fe82:3ac 22
```

This output shows four connections, one from a "telnet" process, two from "curl", and one from "ssh". The output details show the IP version, source address, destination address, and destination port. This traces attempted connections: these may have failed.

The overhead of this tool should be negligible, since it is only tracing the kernel functions performing connect. It is not tracing every packet and then filtering.

The -t option prints a timestamp column:

```console
# ./tcpconnect -t
TIME(s)  PID    COMM         IP SADDR            DADDR            DPORT
31.871   2482   local_agent  4  10.103.219.236   10.251.148.38    7001
31.874   2482   local_agent  4  10.103.219.236   10.101.3.132     7001
31.878   2482   local_agent  4  10.103.219.236   10.171.133.98    7101
90.917   2482   local_agent  4  10.103.219.236   10.251.148.38    7001
90.928   2482   local_agent  4  10.103.219.236   10.102.64.230    7001
90.938   2482   local_agent  4  10.103.219.236   10.115.167.169   7101
```

The output shows some periodic connections (or attempts) from a "local_agent" process to various other addresses. A few connections occur every minute.

The -d option tracks DNS responses and tries to associate each connection with a previous DNS query issued before it. If a DNS response matching the IP is found, it will be printed. If no match was found, "No DNS Query" is printed in this column. Queries for 127.0.0.1 and ::1 are automatically associated with "localhost". If the time between when the DNS response was received and a connect call was traced exceeds 100ms, the tool will print the time delta after the query name. See below for www.domain.com for an example.

```console
# ./tcpconnect -d
PID    COMM         IP SADDR            DADDR            DPORT QUERY
1543   amazon-ssm-a 4  10.66.75.54      176.32.119.67    443   ec2messages.us-west-1.amazonaws.com
1479   telnet       4  127.0.0.1        127.0.0.1        23    localhost
1469   curl         4  10.201.219.236   54.245.105.25    80    www.domain.com (123.342ms)
1469   curl         4  10.201.219.236   54.67.101.145    80    No DNS Query
1991   telnet       6  ::1              ::1              23    localhost
2015   ssh          6  fe80::2000:bff:fe82:3ac fe80::2000:bff:fe82:3ac 22 anotherhost.org
```

The -L option prints a LPORT column:

```console
# ./tcpconnect -L
PID    COMM         IP SADDR            LPORT  DADDR            DPORT
3706   nc           4  192.168.122.205  57266  192.168.122.150  5000
3722   ssh          4  192.168.122.205  50966  192.168.122.150  22
3779   ssh          6  fe80::1          52328  fe80::2          22
```

The -U option prints a UID column:

```console
# ./tcpconnect -U
UID   PID    COMM         IP SADDR            DADDR            DPORT
0     31333  telnet       6  ::1              ::1              23
0     31333  telnet       4  127.0.0.1        127.0.0.1        23
1000  31322  curl         4  127.0.0.1        127.0.0.1        80
1000  31322  curl         6  ::1              ::1              80
```

The -u option filters by UID:

```console
# ./tcpconnect -Uu 1000
UID   PID    COMM         IP SADDR            DADDR            DPORT
1000  31338  telnet       6  ::1              ::1              23
1000  31338  telnet       4  127.0.0.1        127.0.0.1        23
```

To spot heavy outbound connections quickly one can use the -c flag. It will count all active connections per source IP and destination IP/port.

```console
# ./tcpconnect.py -c
Tracing connect ... Hit Ctrl-C to end
^C
LADDR            RADDR            RPORT  CONNECTS
192.168.10.50    172.217.21.194   443    70
192.168.10.50    172.213.11.195   443    34
192.168.10.50    172.212.22.194   443    21
[...]
```

The --cgroupmap option filters based on a cgroup set. It is meant to be used with an externally created map.

```console
# ./tcpconnect --cgroupmap /sys/fs/bpf/test01
```

For more details, see docs/special_filtering.md

## Source

Adapted from <https://github.com/iovisor/bcc/blob/master/libbpf-tools/tcpconnlat.bpf.c>

## Building and running

- ```git clone https://github.com/libbpf/libbpf-bootstrap libbpf-bootstrap-cloned```
@@ -166,6 +158,8 @@ PID COMM IP SADDR DADDR DPORT LAT(ms)
222774  ssh   4  192.168.88.15   1.15.149.151   22    25.31
```

## Summary

As the experiment above shows, tcpconnlat works by tracing TCP connection setup inside the kernel, which lets it measure the latency of each TCP connection. Beyond command-line use, the results can also be combined with container and Kubernetes metadata and fed into tools such as `prometheus` and `grafana` for network performance analysis.

Source: <https://github.com/iovisor/bcc/blob/master/libbpf-tools/tcpconnlat.bpf.c>

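To round out the picture of the data path, here is a minimal, hypothetical sketch (my own illustration, not the tutorial's or bcc's actual user-space code) of a libbpf loop that would consume the perf events emitted by bpf_perf_event_output above; the struct event layout is an assumption that mirrors the fields the BPF side fills in:

```c
#include <stdio.h>
#include <bpf/libbpf.h>

/* Assumed event layout, mirroring what the BPF program writes out. */
struct event {
    char comm[16];
    unsigned long long delta_us;
    unsigned long long ts_us;
    unsigned int tgid;
    int af;
    /* address/port fields omitted for brevity */
};

static void handle_event(void *ctx, int cpu, void *data, unsigned int size)
{
    const struct event *e = data;

    printf("%-16s pid=%-7u latency=%llu us\n", e->comm, e->tgid, e->delta_us);
}

static void handle_lost(void *ctx, int cpu, unsigned long long lost)
{
    fprintf(stderr, "lost %llu events on CPU %d\n", lost, cpu);
}

/* After loading and attaching the skeleton, poll the "events" perf buffer. */
int poll_events(int events_map_fd)
{
    struct perf_buffer *pb;
    int err;

    pb = perf_buffer__new(events_map_fd, 16 /* pages per CPU */,
                          handle_event, handle_lost, NULL, NULL);
    if (!pb)
        return -1;

    while ((err = perf_buffer__poll(pb, 100 /* ms */)) >= 0)
        ;
    perf_buffer__free(pb);
    return err;
}
```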
@@ -1,85 +1,4 @@
# eBPF Beginner's Practical Tutorial: Tracing TCP Connection States with tcpstates, Developed with libbpf-bootstrap

```tcpstates``` is a program that traces the TCP state changes of the TCP sockets on the current system, mainly by hooking the kernel tracepoint ```inet_sock_set_state```. The collected statistics are passed to user space through ```perf_event```.

## Details in bcc

Demonstrations of tcpstates, the Linux BPF/bcc version.

tcpstates prints TCP state change information, including the duration in each state as milliseconds. For example, a single TCP session:

```console
# tcpstates
SKADDR           C-PID C-COMM    LADDR           LPORT RADDR         RPORT OLDSTATE    -> NEWSTATE    MS
ffff9fd7e8192000 22384 curl      100.66.100.185  0     52.33.159.26  80    CLOSE       -> SYN_SENT    0.000
ffff9fd7e8192000 0     swapper/5 100.66.100.185  63446 52.33.159.26  80    SYN_SENT    -> ESTABLISHED 1.373
ffff9fd7e8192000 22384 curl      100.66.100.185  63446 52.33.159.26  80    ESTABLISHED -> FIN_WAIT1   176.042
ffff9fd7e8192000 0     swapper/5 100.66.100.185  63446 52.33.159.26  80    FIN_WAIT1   -> FIN_WAIT2   0.536
ffff9fd7e8192000 0     swapper/5 100.66.100.185  63446 52.33.159.26  80    FIN_WAIT2   -> CLOSE       0.006
^C
```

This showed that the most time was spent in the ESTABLISHED state (which then transitioned to FIN_WAIT1), which was 176.042 milliseconds.

The first column is the socket address, as the output may include lines from different sessions interleaved. The next two columns show the current on-CPU process ID and command name: these may show the process that owns the TCP session, depending on whether the state change executes synchronously in process context. If that's not the case, they may show kernel details.

@@ -153,7 +72,7 @@ int handle_set_state(struct trace_event_raw_inet_sock_set_state *ctx)

Depending on the new state of the TCP connection, handle_set_state decides whether to refresh the recorded timestamp or to stop keeping a timestamp for this connection.

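For orientation, here is a hedged sketch of what the core of such a handler typically looks like (simplified; map, event and field names are illustrative rather than copied from the real tcpstates.bpf.c, and the map/struct declarations are omitted):

```c
/* Sketch: measure how long the socket stayed in the old state, report it,
 * then either refresh or drop the stored timestamp for this socket. */
SEC("tracepoint/sock/inet_sock_set_state")
int handle_set_state(struct trace_event_raw_inet_sock_set_state *ctx)
{
    struct sock *sk = (struct sock *)ctx->skaddr;
    u64 *tsp, ts = bpf_ktime_get_ns();
    struct event evt = {};

    tsp = bpf_map_lookup_elem(&timestamps, &sk);   /* time of last state change */
    evt.delta_us = tsp ? (ts - *tsp) / 1000 : 0;
    evt.oldstate = ctx->oldstate;
    evt.newstate = ctx->newstate;
    bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU, &evt, sizeof(evt));

    if (ctx->newstate == TCP_CLOSE)
        bpf_map_delete_elem(&timestamps, &sk);     /* session is over */
    else
        bpf_map_update_elem(&timestamps, &sk, &ts, BPF_ANY);
    return 0;
}
```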
## The user-space program

```c
while (!exiting) {
@@ -209,3 +128,33 @@ static void handle_lost_events(void* ctx, int cpu, __u64 lost_cnt) {
```

When an event is received, the corresponding handler function is called and the result is printed.

## Building and running

- ```git clone https://github.com/libbpf/libbpf-bootstrap libbpf-bootstrap-cloned```
- Copy the files under the [libbpf-bootstrap](libbpf-bootstrap) directory into ```libbpf-bootstrap-cloned/examples/c```
- Edit ```libbpf-bootstrap-cloned/examples/c/Makefile``` and add ```tcpstates``` to its ```APPS``` variable
- Run ```make tcpstates``` in ```libbpf-bootstrap-cloned/examples/c```
- ```sudo ./tcpstates``` (the same steps are collected as a shell sketch below)

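For convenience, the list above can be run as the following shell sketch; the clone directory name and paths are taken directly from the steps, and the Makefile edit still has to be done by hand:

```shell
# assumes this tutorial's sources live in ./libbpf-bootstrap (see the list above)
git clone https://github.com/libbpf/libbpf-bootstrap libbpf-bootstrap-cloned
cp -r libbpf-bootstrap/* libbpf-bootstrap-cloned/examples/c/
# edit libbpf-bootstrap-cloned/examples/c/Makefile and add "tcpstates" to APPS
cd libbpf-bootstrap-cloned/examples/c
make tcpstates
sudo ./tcpstates
```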

## Result

```plain
root@yutong-VirtualBox:~/libbpf-bootstrap/examples/c# ./tcpstates
SKADDR           PID     COMM       LADDR           LPORT RADDR         RPORT OLDSTATE    -> NEWSTATE    MS
ffff9bf61bb62bc0 164978  node       192.168.88.15   0     52.178.17.2   443   CLOSE       -> SYN_SENT    0.000
ffff9bf61bb62bc0 0       swapper/0  192.168.88.15   41596 52.178.17.2   443   SYN_SENT    -> ESTABLISHED 225.794
ffff9bf61bb62bc0 0       swapper/0  192.168.88.15   41596 52.178.17.2   443   ESTABLISHED -> CLOSE_WAIT  901.454
ffff9bf61bb62bc0 164978  node       192.168.88.15   41596 52.178.17.2   443   CLOSE_WAIT  -> LAST_ACK    0.793
ffff9bf61bb62bc0 164978  node       192.168.88.15   41596 52.178.17.2   443   LAST_ACK    -> LAST_ACK    0.086
ffff9bf61bb62bc0 228759  kworker/u6 192.168.88.15   41596 52.178.17.2   443   LAST_ACK    -> CLOSE       0.193
ffff9bf6d8ee88c0 229832  redis-serv 0.0.0.0         6379  0.0.0.0       0     CLOSE       -> LISTEN      0.000
ffff9bf6d8ee88c0 229832  redis-serv 0.0.0.0         6379  0.0.0.0       0     LISTEN      -> CLOSE       1.763
ffff9bf7109d6900 88750   node       127.0.0.1       39755 127.0.0.1     50966 ESTABLISHED -> FIN_WAIT1   0.000
```

For a detailed explanation of the output, see [README.md](README.md).

## Summary

The code here is adapted from <https://github.com/iovisor/bcc/blob/master/libbpf-tools/tcpstates.bpf.c>

@@ -1,116 +1,23 @@
# eBPF Beginner's Practical Tutorial: Measuring TCP Round-Trip Time with the Tcprtt eBPF Program

## Background

Network quality is an important factor in today's internet society. Many things can cause poor network quality: it may be a hardware problem, or it may be badly written software. To make it easier to locate network problems, the `tcprtt` tool was created. It measures the round-trip time of TCP connections, which can be used to analyze network quality and help users pinpoint the source of a problem.

## How it works

`tcprtt` attaches handler functions to kernel execution points on the TCP receive path for established connections:

```c
SEC("fentry/tcp_rcv_established")
int BPF_PROG(tcp_rcv, struct sock *sk)
{
    const struct inet_sock *inet = (struct inet_sock *)(sk);
    struct tcp_sock *ts;
    struct hist *histp;
    u64 key, slot;
    u32 srtt;

    if (targ_sport && targ_sport != inet->inet_sport)
        return 0;
    if (targ_dport && targ_dport != sk->__sk_common.skc_dport)
        return 0;
    if (targ_saddr && targ_saddr != inet->inet_saddr)
        return 0;
    if (targ_daddr && targ_daddr != sk->__sk_common.skc_daddr)
        return 0;

    if (targ_laddr_hist)
        key = inet->inet_saddr;
    else if (targ_raddr_hist)
        key = inet->sk.__sk_common.skc_daddr;
    else
        key = 0;
    histp = bpf_map_lookup_or_try_init(&hists, &key, &zero);
    if (!histp)
        return 0;
    ts = (struct tcp_sock *)(sk);
    srtt = BPF_CORE_READ(ts, srtt_us) >> 3;
    if (targ_ms)
        srtt /= 1000U;
    slot = log2l(srtt);
    if (slot >= MAX_SLOTS)
        slot = MAX_SLOTS - 1;
    __sync_fetch_and_add(&histp->slots[slot], 1);
    if (targ_show_ext) {
        __sync_fetch_and_add(&histp->latency, srtt);
        __sync_fetch_and_add(&histp->cnt, 1);
    }
    return 0;
}

SEC("kprobe/tcp_rcv_established")
int BPF_KPROBE(tcp_rcv_kprobe, struct sock *sk)
{
    const struct inet_sock *inet = (struct inet_sock *)(sk);
    u32 srtt, saddr, daddr;
    struct tcp_sock *ts;
    struct hist *histp;
    u64 key, slot;

    if (targ_sport) {
        u16 sport;
        bpf_probe_read_kernel(&sport, sizeof(sport), &inet->inet_sport);
        if (targ_sport != sport)
            return 0;
    }
    if (targ_dport) {
        u16 dport;
        bpf_probe_read_kernel(&dport, sizeof(dport), &sk->__sk_common.skc_dport);
        if (targ_dport != dport)
            return 0;
    }
    bpf_probe_read_kernel(&saddr, sizeof(saddr), &inet->inet_saddr);
    if (targ_saddr && targ_saddr != saddr)
        return 0;
    bpf_probe_read_kernel(&daddr, sizeof(daddr), &sk->__sk_common.skc_daddr);
    if (targ_daddr && targ_daddr != daddr)
        return 0;

    if (targ_laddr_hist)
        key = saddr;
    else if (targ_raddr_hist)
        key = daddr;
    else
        key = 0;
    histp = bpf_map_lookup_or_try_init(&hists, &key, &zero);
    if (!histp)
        return 0;
    ts = (struct tcp_sock *)(sk);
    bpf_probe_read_kernel(&srtt, sizeof(srtt), &ts->srtt_us);
    srtt >>= 3;
    if (targ_ms)
        srtt /= 1000U;
    slot = log2l(srtt);
    if (slot >= MAX_SLOTS)
        slot = MAX_SLOTS - 1;
    __sync_fetch_and_add(&histp->slots[slot], 1);
    if (targ_show_ext) {
        __sync_fetch_and_add(&histp->latency, srtt);
        __sync_fetch_and_add(&histp->cnt, 1);
    }
    return 0;
}
```

When TCP connections are active, the tool automatically picks the handler that the current system supports (the fentry variant where available, otherwise the kprobe variant). In the handler, `tcprtt` collects basic information about the connection, such as the addresses, source port, destination port, and elapsed time, and updates a histogram map with it. After the run finishes, the user-space code presents the histograms to the user.

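The code above relies on a few helpers and types that come from the bcc libbpf-tools headers and are not shown in this excerpt: struct hist, the log2/log2l helpers, and bpf_map_lookup_or_try_init. A rough sketch of the first two, for orientation only (details such as MAX_SLOTS may differ from the real headers):

```c
#define MAX_SLOTS 27

/* One log2 histogram per key (local or remote address, or 0 for "all"). */
struct hist {
    __u64 latency;           /* sum of srtt samples, used with --extension */
    __u64 cnt;               /* number of samples                          */
    __u32 slots[MAX_SLOTS];  /* slots[n] counts samples with log2(x) == n  */
};

/* Integer log2, used to pick the histogram slot for a sample. */
static __always_inline u64 log2(u32 v)
{
    u32 shift, r;

    r = (v > 0xFFFF) << 4; v >>= r;
    shift = (v > 0xFF) << 3; v >>= shift; r |= shift;
    shift = (v > 0xF) << 2; v >>= shift; r |= shift;
    shift = (v > 0x3) << 1; v >>= shift; r |= shift;
    r |= (v >> 1);
    return r;
}

static __always_inline u64 log2l(u64 v)
{
    u32 hi = v >> 32;

    return hi ? log2(hi) + 32 : log2(v);
}
```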
## Writing the eBPF program

TODO

## Building and running

TODO

## Summary

`tcprtt` presents its results as histograms, which makes it easy to see how much network jitter the current system is experiencing and helps developers quickly locate network problems.

TODO

@@ -1,80 +1,27 @@
# eBPF Beginner's Practical Tutorial: Monitoring Memory Leaks with the Memleak eBPF Program

## Background

Memory leaks are a serious problem for any program. If a leaking program is left running, the system's memory is gradually exhausted over time and the program slows down noticeably. To avoid this, the `memleak` tool was created. It tracks and matches memory allocation and free requests, and prints the stack traces of allocations that have not yet been freed.

## How it works

The logic of `memleak` is very straightforward. It attaches eBPF programs to the commonly used dynamic memory allocation interfaces, and also to free. When an allocation function is called, `memleak` records basic data such as the caller's pid, the address of the allocated memory, and the allocation size. After a free, `memleak` deletes the corresponding allocation record from the map. For the common user-space allocation functions such as `malloc` and `calloc`, `memleak` attaches with uprobes; for kernel-side functions such as `kmalloc`, it uses the existing tracepoints.

The main attach points of `memleak` are:

```c
SEC("uprobe/malloc")

SEC("uretprobe/malloc")

SEC("uprobe/calloc")

SEC("uretprobe/calloc")

SEC("uprobe/realloc")

SEC("uretprobe/realloc")

SEC("uprobe/memalign")

SEC("uretprobe/memalign")

SEC("uprobe/posix_memalign")

SEC("uretprobe/posix_memalign")

SEC("uprobe/valloc")

SEC("uretprobe/valloc")

SEC("uprobe/pvalloc")

SEC("uretprobe/pvalloc")

SEC("uprobe/aligned_alloc")

SEC("uretprobe/aligned_alloc")

SEC("uprobe/free")

SEC("tracepoint/kmem/kmalloc")

SEC("tracepoint/kmem/kfree")

SEC("tracepoint/kmem/kmalloc_node")

SEC("tracepoint/kmem/kmem_cache_alloc")

SEC("tracepoint/kmem/kmem_cache_alloc_node")

SEC("tracepoint/kmem/kmem_cache_free")

SEC("tracepoint/kmem/mm_page_alloc")

SEC("tracepoint/kmem/mm_page_free")

SEC("tracepoint/percpu/percpu_alloc_percpu")

SEC("tracepoint/percpu/percpu_free_percpu")
```

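To make the matching of allocations and frees more concrete, here is a rough, hypothetical sketch of the kind of maps such a tool keeps; the names and fields are illustrative and not copied from the actual memleak.bpf.c:

```c
/* Illustrative sketch: how allocation requests can be matched with frees. */
struct alloc_info {
    __u64 size;         /* requested size                 */
    __u64 timestamp_ns; /* when the allocation happened   */
    int   stack_id;     /* id into a BPF stack-trace map  */
};

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 10240);
    __type(key, __u32);          /* tid: size recorded at uprobe entry */
    __type(value, __u64);
} sizes SEC(".maps");

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 1000000);
    __type(key, __u64);              /* returned address (from uretprobe) */
    __type(value, struct alloc_info);
} allocs SEC(".maps");

/* On malloc entry: remember the requested size for this thread.
 * On malloc return: move the size into `allocs`, keyed by the returned address.
 * On free entry: delete the entry for that address.
 * Whatever remains in `allocs` is a candidate leak. */
```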
## Writing the eBPF program

TODO

## Building and running

TODO

## Summary

`memleak` traces the whole family of memory-allocation functions, which helps programs avoid serious memory-leak incidents and is a great help to developers.

TODO

@@ -1,48 +1,23 @@
# eBPF Beginner's Practical Tutorial: Counting Random/Sequential Disk I/O with the Biopattern eBPF Program

## Background

Biopattern counts the ratio of random to sequential disk I/O operations.

TODO

## How it works

Biopattern's eBPF code is attached to the tracepoint/block/block_rq_complete hook, which is reached after the disk completes an I/O request. Biopattern keeps a hash map keyed by device number; when execution reaches the hook, Biopattern obtains the information about the completed operation and, based on the device's previous operation recorded in the hash map, decides whether this operation was random or sequential I/O, then updates the counters.

```c
SEC("tracepoint/block/block_rq_complete")
int handle__block_rq_complete(struct trace_event_raw_block_rq_complete *ctx)
{
    sector_t *last_sectorp, sector = ctx->sector;
    struct counter *counterp, zero = {};
    u32 nr_sector = ctx->nr_sector;
    dev_t dev = ctx->dev;

    if (targ_dev != -1 && targ_dev != dev)
        return 0;

    counterp = bpf_map_lookup_or_try_init(&counters, &dev, &zero);
    if (!counterp)
        return 0;
    if (counterp->last_sector) {
        if (counterp->last_sector == sector)
            __sync_fetch_and_add(&counterp->sequential, 1);
        else
            __sync_fetch_and_add(&counterp->random, 1);
        __sync_fetch_and_add(&counterp->bytes, nr_sector * 512);
    }
    counterp->last_sector = sector + nr_sector;
    return 0;
}
```

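The struct counter used above is defined in the tool's header and not shown here; a plausible sketch of it, inferred from the fields the handler touches (an assumption, not the actual biopattern.h), looks like this:

```c
/* Per-device counters; fields mirror how they are used in the handler above. */
struct counter {
    __u64 last_sector; /* end sector of the previous request on this device */
    __u64 bytes;       /* total bytes completed                             */
    __u32 sequential;  /* requests that started where the last one ended    */
    __u32 random;      /* requests that started somewhere else              */
};
```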
When the user stops Biopattern, the user-space program reads the collected counters and prints them for the user.

## Writing the eBPF program

TODO

## Summary

Biopattern shows the ratio of random to sequential disk I/O operations, which is very helpful for developers who want an overall picture of a system's I/O behavior.

TODO

@@ -1 +1,3 @@
# More references

TODO

@@ -1,6 +1,18 @@
# eBPF Beginner's Practical Tutorial: Security Detection and Defense with LSM

## Background

TODO

## LSM overview

TODO

## Writing the eBPF program

TODO

## Building and running

```console
docker run -it -v `pwd`/:/src/ yunwei37/ebpm:latest
@@ -20,6 +32,8 @@ Run:
sudo ecli examples/bpftools/lsm-connect/package.json
```

## Summary

TODO

Reference: <https://github.com/leodido/demo-cloud-native-ebpf-day>

@@ -1,4 +1,4 @@
# eBPF Beginner's Practical Tutorial: Traffic Control with eBPF tc Programs

## A tc program example

@@ -9,7 +9,7 @@
#include <bpf/bpf_tracing.h>

#define TC_ACT_OK 0
#define ETH_P_IP 0x0800 /* Internet Protocol packet */

/// @tchook {"ifindex":1, "attach_point":"BPF_TC_INGRESS"}
/// @tcopts {"handle":1, "priority":1}
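For orientation, here is a hedged sketch of what the body of such a tc ingress program typically looks like. This is my guess at the part of the example the diff omits, assuming it parses the Ethernet and IP headers and prints via bpf_printk, which would match the trace output shown further below:

```c
SEC("tc")
int tc_ingress(struct __sk_buff *ctx)
{
    void *data_end = (void *)(__u64)ctx->data_end;
    void *data = (void *)(__u64)ctx->data;
    struct ethhdr *l2;
    struct iphdr *l3;

    if (ctx->protocol != bpf_htons(ETH_P_IP))
        return TC_ACT_OK;

    l2 = data;
    if ((void *)(l2 + 1) > data_end)
        return TC_ACT_OK;

    l3 = (struct iphdr *)(l2 + 1);
    if ((void *)(l3 + 1) > data_end)
        return TC_ACT_OK;

    bpf_printk("Got IP packet: tot_len: %d, ttl: %d",
               bpf_ntohs(l3->tot_len), l3->ttl);
    return TC_ACT_OK;
}

char __license[] SEC("license") = "GPL";
```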
@@ -76,10 +76,14 @@ Successfully started! Please run `sudo cat /sys/kernel/debug/tracing/trace_pipe`
The `tc` output in `/sys/kernel/debug/tracing/trace_pipe` should look something like this:

```console
$ sudo cat /sys/kernel/debug/tracing/trace_pipe
node-1254811 [007] ..s1 8737831.671074: 0: Got IP packet: tot_len: 79, ttl: 64
sshd-1254728 [006] ..s1 8737831.674334: 0: Got IP packet: tot_len: 79, ttl: 64
sshd-1254728 [006] ..s1 8737831.674349: 0: Got IP packet: tot_len: 72, ttl: 64
node-1254811 [007] ..s1 8737831.674550: 0: Got IP packet: tot_len: 71, ttl: 64
```

## Summary

TODO

@@ -0,0 +1 @@
# TODO

9-runqlat/.gitignore (vendored, 1 line added)
@@ -4,3 +4,4 @@ package.json
*.skel.json
*.skel.yaml
package.yaml
ecli

@@ -189,7 +189,7 @@ struct hist {

This is a Linux kernel BPF program that aims to collect and report run-queue latency. BPF is a kernel technology that allows programs to be attached to specific points in the kernel and executed safely and efficiently. Such programs can be used to gather information about kernel behavior and to implement custom behavior. This BPF program uses BPF maps to record when tasks are enqueued on and dequeued from the kernel's run queue, and it measures how long each task waits on the run queue before it is scheduled to execute. It then uses this information to build histograms showing the distribution of run-queue latency for different groups of tasks, which can be used to identify and diagnose performance problems in the kernel's scheduling behavior.

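As a sketch of the idea only (hypothetical and heavily simplified; not the tutorial's full runqlat.bpf.c), run-queue latency can be measured by timestamping a task when it is woken up and computing the delta when it is actually switched onto a CPU:

```c
// Sketch: record wakeup time per task, measure the wait until the task runs,
// and (in the real tool) bucket the delta into a log2 histogram.
struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 10240);
    __type(key, u32);    /* pid */
    __type(value, u64);  /* wakeup timestamp (ns) */
} start SEC(".maps");

static __always_inline int trace_enqueue(u32 pid)
{
    u64 ts = bpf_ktime_get_ns();

    if (pid)
        bpf_map_update_elem(&start, &pid, &ts, BPF_ANY);
    return 0;
}

SEC("tp_btf/sched_wakeup")
int BPF_PROG(handle_sched_wakeup, struct task_struct *p)
{
    return trace_enqueue(p->pid);
}

SEC("tp_btf/sched_switch")
int BPF_PROG(handle_sched_switch, bool preempt,
             struct task_struct *prev, struct task_struct *next)
{
    u32 pid = next->pid;
    u64 *tsp, delta_us;

    tsp = bpf_map_lookup_elem(&start, &pid);
    if (!tsp)
        return 0;
    delta_us = (bpf_ktime_get_ns() - *tsp) / 1000;
    /* ...increment hist.slots[log2l(delta_us)] here... */
    bpf_map_delete_elem(&start, &pid);
    return 0;
}
```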
## Building and running

Compile: