Preface

IPv4 has only one dynamic address allocation mechanism, DHCP, whereas IPv6 has two, SLAAC and DHCPv6, and DHCPv6 additionally has a PD (Prefix Delegation) extension. These mechanisms also interact with each other, so problems during IPv6 address allocation are far more common than with IPv4. Most tutorials you can find only fix the symptoms and stay vague about the underlying technical details, without fundamentally clarifying how IPv6 differs from IPv4.

Starting from the relevant basic concepts, this article aims to teach you how to fish: to explain how the three IPv6 address allocation mechanisms work, and to help you get to the bottom of tricky IPv6 allocation problems.

IPv6 Basic Concepts

LLA (Link-Local Address)

The LLA actually already exists in IPv4: when DHCP is not working, some operating systems assign the interface an address from 169.254.0.0/16 for temporary point-to-point communication. But the LLA is unimportant in IPv4; it only plays a dispensable fallback role and only shows up when DHCP fails, so most people (including the author) did not learn about LLAs until IPv6 became widespread.

The IPv6 LLA (fe80::/10) inherits the basic point-to-point role of the IPv4 LLA, but goes further: it underpins NDP (Neighbor Discovery Protocol) and SLAAC (Stateless Address Autoconfiguration). You need to understand it before you can understand how SLAAC works.

For example, when two network ports are connected directly with a cable, each automatically generates an IPv6 LLA, such as fe80::dfc2:d2aa:c86f:171e/64 and fe80::da8f:9d5b:57e3:c6a6/64, and each side can ping the other's LLA. On Linux, the ip -6 route command shows the automatically configured LLA route:

fe80::/64 dev eth0 proto kernel metric 1024 pref medium

An IPv6 LLA is derived from the MAC address with a specific algorithm, EUI-64. For example, for a port whose MAC address is 70:07:12:34:56:78, the generated EUI-64 is 7207:12ff:fe34:5678 and the LLA is fe80::7207:12ff:fe34:5678/64 (the EUI-64 with the fe80 prefix). The exact derivation is shown in the figure below:

IPv6 LLA generation process, figure from https://www.networkacademy.io/ccna/ipv6/stateless-address-autoconfiguration-slaac
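
To make the derivation concrete, here is a minimal Python sketch of the EUI-64 computation described above (it assumes a colon-separated MAC string and is only an illustration, not a full RFC 4291 implementation):

def mac_to_lla(mac: str) -> str:
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                                  # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]     # insert ff:fe in the middle
    groups = ["%02x%02x" % (eui64[i], eui64[i + 1]) for i in range(0, 8, 2)]
    return "fe80::" + ":".join(groups) + "/64"

print(mac_to_lla("70:07:12:34:56:78"))                 # fe80::7207:12ff:fe34:5678/64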

Generally, routers do not forward traffic to or from LLAs; they are only used for point-to-point communication on the link.

GUA (Global Unicast Address)

The IPv6 GUA (2000::/3) corresponds to the IPv4 notion of a "public IP". In theory it is globally unique and can be used for communication on the public Internet. A well-configured network should give every device a GUA, to take full advantage of IPv6's peer-to-peer connectivity.

Private Addresses

fc00::/7 is defined as the IPv6 private address range (Unique Local Addresses), analogous to 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16 in IPv4, and is used for communication within a local network. Unlike LLAs, these addresses can be forwarded by routers.

Because IPv6 was designed so that every device in the world can get a GUA, private addresses play a much smaller role in IPv6. When assigning a GUA to every device is not possible (as in some campus networks), handing out IPv6 private addresses on the internal network is a fallback that still gives internal devices access to IPv6.

Multicast

IPv6 multicast addresses (ff00::/8), like IPv4 multicast addresses (224.0.0.0/4), are used for one-to-many communication within a network segment. Both SLAAC and DHCPv6 rely on multicast. Commonly used multicast addresses include:

  • ff02::1: all nodes on the local link;
  • ff02::2: all routers on the local link.

NDP (Neighbor Discovery Protocol)

NDP runs on top of ICMPv6 and plays a role similar to ARP in IPv4: it discovers other nodes on the data link and their IPv6 addresses, determines available routes, and maintains reachability information about available paths and active nodes. SLAAC is built on NDP. The message types involved are:

  1. RS (Router Solicitation) and RA (Router Advertisement): used to configure IPv6 addresses and routes;
  2. NS (Neighbor Solicitation) and NA (Neighbor Advertisement): used to find the MAC addresses of other devices on the link.

SLAAC (Stateless Address Autoconfiguration)

SLAAC is the IPv6 address allocation mechanism defined in RFC 4862, and it is the recommended one. In fact, Android supports only SLAAC for IPv6 address configuration.

SLAAC's defining feature is that it is stateless: no central server is responsible for handing out addresses. The following example walks through the SLAAC process.

Suppose the router's lan0 port is connected to the host's eth0 port, lan0's LLA is fe80::1/64, and eth0's MAC address is 70:07:12:34:56:78. The router also holds the GUA prefix 2001:db8::/64, i.e., all GUAs in this subnet are routed by the upstream router to this router's wan port. SLAAC then proceeds as follows:

  1. eth0 derives the EUI-64 7207:12ff:fe34:5678 and the LLA fe80::7207:12ff:fe34:5678/64 from its MAC address;

  2. The host performs DAD (Duplicate Address Detection) to make sure the LLA is unique on the local link. This is not directly related to address allocation, so it is skipped here; interested readers can look it up;

  3. The host sends an RS message from eth0's LLA. The RS is sent to the multicast address ff02::2, i.e., to all routers on the local link;

  4. The router replies with an RA message addressed to eth0's LLA. The RA contains the prefix 2001:db8::/64, its lifetimes, the MTU, and other information;

  5. The host receives the RA, combines the prefix with its EUI-64 into 2001:db8::7207:12ff:fe34:5678/64, assigns it to eth0, and adds the routes:

    2001:db8::/64 dev eth0 proto ra metric 1024 expires 2591993sec pref medium
    default via fe80::1 dev eth0 proto static metric 1024 onlink pref medium
  6. The host performs DAD again and announces the new address to its neighbors on the link with an NA message.

SLAAC flow, figure from https://www.networkacademy.io/ccna/ipv6/stateless-address-autoconfiguration-slaac

SLAAC looks elegant, but it has one important gap: it cannot convey DNS information, so hosts must obtain DNS servers some other way (usually DHCPv6). Two flag bits in the RA address this:

  • M (Managed Address Configuration): address information can be obtained via DHCPv6;
  • O (Other Configuration): other information (such as DNS) can be obtained via DHCPv6.

The newer RFC 6106 adds RDNSS (Recursive DNS Server) and DNSSL (DNS Search List) options to the RA, so DNS information can be delivered directly. Operating system support for RDNSS is summarized in Comparison of IPv6 support in operating systems. In practice, configuring an IPv4 DNS server (obtained via DHCPv4) is sufficient in the vast majority of cases, so the RDNSS extension does not matter much.

The EUI-64-based SLAAC address configuration above has a problem: the generated address is fixed and predictable, which raises security and privacy concerns. The IPv6 SLAAC privacy extensions defined in RFC 4941 solve this by additionally generating random, periodically rotated addresses during SLAAC. The EUI-64-derived address is kept as well, for inbound connections. With privacy extensions enabled, the IPv6 addresses generated on Linux look like this (from top to bottom: privacy address, EUI-64 GUA, LLA):

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc cake state UP group default qlen 1000
link/ether 70:07:12:34:56:78 brd ff:ff:ff:ff:ff:ff
inet6 2001:db8::dead:beef:aaaa:bbbb/64 scope global temporary dynamic
valid_lft 2591998sec preferred_lft 604798sec
inet6 2001:db8::7207:12ff:fe34:5678/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 2591998sec preferred_lft 604798sec
inet6 fe80::7207:12ff:fe34:5678/64 scope link
valid_lft forever preferred_lft forever

DHCPv6

DHCPv6 works much like DHCPv4 overall: the host sends a multicast message to ff02::1:2 on UDP port 547, and the DHCPv6 server replies with addresses, DNS servers, and other information.

The difference is that DHCPv6 can run in stateful or stateless mode; the distinction is whether addresses are assigned. When used together with SLAAC, the host only needs DNS and other information from DHCPv6, so stateless DHCPv6 is sufficient.

DHCPv6 PD (Prefix Delegation)

PD is a DHCPv6 extension defined in RFC 3633. It is used to distribute IPv6 prefixes within a network.

With the PD extension enabled, the DHCP server delegates the use of an IPv6 subnet prefix (such as 2001:db8::/56) to the requesting host and installs routes so that all addresses under that subnet are routed to that host. The host can then subdivide and allocate the subnet further.
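
As a small illustration of the prefix arithmetic involved, the Python standard library can enumerate the /64 subnets available inside a delegated /56 (the prefix below is just the documentation prefix used throughout this article):

import ipaddress

delegated = ipaddress.ip_network("2001:db8::/56")      # prefix obtained via DHCPv6 PD
lans = list(delegated.subnets(new_prefix=64))          # /64s the router can hand out via SLAAC
print(len(lans))                                       # 256
print(lans[0], lans[1])                                # 2001:db8::/64 2001:db8:0:1::/64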

A typical DHCPv6 PD scenario is residential ISP access: the home gateway router requests an IPv6 prefix from the ISP's DHCP server, and then uses SLAAC to hand out addresses from that prefix inside the home network.

Summary

This article briefly introduced the concepts involved in IPv6 address allocation and explained how SLAAC, DHCPv6, and DHCPv6 PD work. When it comes to simplifying address management, IPv6 can hardly be called a success: multiple coexisting standards, combined in different ways, leave clients with a non-trivial chance of failing to obtain IPv6 correctly.

In practice, the three most common IPv6 allocation setups are:

  • SLAAC only: typical of campus (education) networks. In practice I have run into misconfigured hosts on the LAN sending bogus RAs and breaking the IPv6 configuration of every host on the network. Also, in this mode a router you attach yourself cannot hand out SLAAC GUAs to downstream devices, because the link-local multicast packets that SLAAC relies on are not forwarded by routers (this can be worked around with IPv6 bridging or NAT6, which is beyond the scope of this article).
  • DHCPv6 only: used by some enterprise networks, since DHCPv6 allows centralized management. Its biggest problem is that Android does not support DHCPv6; on other operating systems this mode runs fairly reliably.
  • SLAAC + DHCPv6 PD: the most common setup for residential ISP access. Most home routers are built for it and it works out of the box.


The Problem

Now we have an image representing a graph, as shown in the figure below:

Suppose we already know the category of each pixel: background, node, or edge. How can we extract the graph topology from it and represent the graph by an adjacency matrix?

Challenges in Classical Algorithm

TODO

What about Neural Network?

We can use a simple algorithm to extract the position of each node. Suppose the position of a node is $\mathbf{P}(x,y)$, and there are $N$ nodes in total.

Then, the task is to fill in the $N\times N$ adjacency matrix with $0$ or $1$. As we can see, this can be converted into a binary classification problem.

We can train a neural network $\mathbf{f}$ that takes three inputs: the image $\mathbf{I}$ and the positions of a node pair $\left( \mathbf{P}_ 1, \mathbf{P}_ 2 \right)$. It outputs $O \in \{0,1\}$, indicating whether there is a direct connection between the node pair, i.e.,

$$O=\mathbf{f}(\mathbf{I}, \mathbf{P}_ 1, \mathbf{P}_ 2).$$

The dataset can be synthesized by a simple program, and we can use any classification network (e.g., EfficientNet) as our network architecture.

The problem is how to feed $\left( \mathbf{P}_ 1, \mathbf{P}_ 2 \right)$ into the network. We can add an additional “mask channel” to the image, where the pixels belonging to the two input nodes are marked as 1 and the others as 0. Finally, we feed this 4-channel “image” into the network.

Other Notes

TODO

Preface

There have been many excellent works on LLM serving, mainly focused on improving throughput. Meanwhile, in practical applications, latency is equally important for LLM serving. However, few works currently focus on improving LLM serving latency, especially latency optimization under SLA constraints.

This blog attempts to summarize the basic concepts and problems in this direction, and to suggest some research directions based on an analysis of latency in LLM serving.

Latency Metrics

In LLM serving, we mainly focus on three latency metrics:

  • TBT ($t_ {tbt}$): Time Between Tokens.
  • TTFT ($t_ {ttft}$): Time to First Token.
  • TE2E ($t_ {e2e}$): End-to-end time.

In practice, rather than the average or median latency, we usually consider the latency SLA, which requires that the 50th, 90th, and 99th percentiles of latency fall below certain thresholds.

Where Does the Latency Come From?

As shown in the figure above, the current popular LLM serving systems (such as vLLM, DeepSpeed) adopt an iteration-level scheduling strategy. The processing of each request is divided into the prefilling stage (prompt inference) and the generation stage (auto-regressive token-by-token generation). For systems such as Sarathi-Serve, the prompt is chunked to improve throughput, thus adding a chunked prefilling stage.

The LLM serving system maintains 3 queues to store requests in these 3 states. The scheduler runs in a loop, and in each iteration, it selects requests from these 3 queues with a certain strategy, and combines them into a batch for the inference engine.

In such systems, the latency of a request mainly comes from two sources: queueing latency and inference latency. Assume the time a request waits in the prefilling queue, the chunked-prefilling queue, and the generation queue before being selected by the scheduler is $t_ {qp}$, $t_ {qc}$, and $t_ {qg}$ respectively, and the inference latency of the engine is $t_ {inf}$.
We get:

$$\begin{aligned}
t_ {ttft} &= t_ {qp} + (N_ {chunk} - 1) \cdot t_ {qc} + N_ {chunk} \cdot t_ {inf}, \\
t_ {tbt} &= t_ {qg} + t_ {inf}, \\
t_ {e2e} &= t_ {ttft} + N_{token} \cdot t_ {tbt},
\end{aligned}$$

where $N_ {chunk}$ is the number of chunks of a prefilling request ($N_ {chunk}=1$ means no chunking), and $N_ {token}$ is the total number of tokens generated for the request.
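
As a sanity check of these formulas, the toy calculation below plugs in made-up numbers (all in milliseconds); it only shows how the terms combine:

def ttft(t_qp, t_qc, t_inf, n_chunk):
    return t_qp + (n_chunk - 1) * t_qc + n_chunk * t_inf

def tbt(t_qg, t_inf):
    return t_qg + t_inf

def te2e(t_ttft, t_tbt, n_token):
    return t_ttft + n_token * t_tbt

first = ttft(t_qp=5.0, t_qc=1.0, t_inf=30.0, n_chunk=4)   # 128.0 ms to the first token
per_token = tbt(t_qg=0.5, t_inf=30.0)                     # 30.5 ms between tokens
print(first, te2e(first, per_token, n_token=128))         # 128.0 4032.0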

Obviously, $t_ {inf}$ is not a fixed value: it depends on the composition of the batch. We can denote it as:

$$t_ {inf} = f\left( B_ {p}, B_ {c}, B_ {g}, \mathbf{L}_ {p}, L_ {chunk} \right),$$

where $B_p$, $B_c$, and $B_g$ denote the numbers of non-chunked prefilling requests, chunked prefilling requests, and generation requests in the batch, respectively. The vector $\mathbf{L}_ {p}$ holds the prompt length of each non-chunked prefilling request in the batch, and $L_ {chunk}$ is the chunk size.

How to Improve It?

Based on the above analysis, we can find that reducing latency mainly involves reducing both queue latency and inference latency. In fact, some techniques, such as iteration-level scheduling and chunked prefilling, can be seen as improvements to queue latency.

On the other hand, improvements to inference latency have not received much attention. One reason is that, for inference engines, there is a trade-off between latency and throughput.
Generally speaking, higher batch size means higher throughput, but also higher inference latency. Techniques such as quantization and Paged Attention focus on more efficient memory usage to increase batch size, but inference latency may also increase accordingly (TODO: add an example), which means $t_ {tbt}$ and $t_ {ttft}$ may be increased, and SLA requirements are broken.

Therefore, there is an opportunity to improve inference latency in current LLM serving systems. The target may be an SLA-aware scheduler, which can maximize throughput without breaking SLA requirements. It should be able to dynamically decide the batch size and batch composition instead of just deploying a static prefilling-prioritize or generation-prioritize strategy.

I believe the key to this design is to predict $t_ {inf}$ to provide latency optimization guidance for the scheduler. Prediction based on profiling results may be a simple approach, but a performance model based on GPU computation capability and memory bandwidth might be more general.

Once we can predict $t_ {inf}$, the queueing delays $t_ {qp}$, $t_ {qc}$, and $t_ {qg}$ can also be estimated using tools from queueing theory (e.g., assuming Poisson arrivals; a toy example follows the list below), allowing us to optimize serving for the following scenarios:

  1. When the request arrival rate is less than the maximum throughput: we can appropriately reduce batch size to improve $t_ {tbt}$.
  2. When the request arrival rate is greater than the maximum throughput: we can adjust the batch composition dynamically based on queue length, or drop some requests to avoid starvation.
  3. When the request arrival rate suddenly increases: we can adjust the batch composition to avoid breaking the SLA of $t_ {ttft}$.
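
As a toy illustration of how such queueing formulas could plug in, the sketch below treats the serving system as a single M/M/1 queue (Poisson arrivals, exponential service), which is a strong simplification of a real batching scheduler:

def mm1_queue_wait(arrival_rate, service_rate):
    # expected time a request waits in the queue before service (requires rho < 1)
    rho = arrival_rate / service_rate
    assert rho < 1, "queue is unstable"
    return rho / (service_rate - arrival_rate)

print(mm1_queue_wait(80, 100))   # 0.04 s of expected queueing delay at 80% load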

In summary, this SLA-aware scheduler should provide better results than a static scheduler by considering arrival rate, queue length, and predicted $t_ {inf}$.

Some Meaningful Experiment Result

TODO

Introduction

Quantization is a commonly used acceleration technique in NN inference. The primary computational workloads in NNs come from Convolution, Linear Layers, and Attention, which are implemented by GEMM in the lower level. This blog aims to discuss the principles of quantization from the matrix multiplication perspective and to explain why some quantization methods are impractical. It also aims to review several LLM quantization methods from this perspective.

I define practical quantization as follows:

  1. Operation can still be performed using GEMM after quantization. This requires both mathematical feasibility and hardware support. It is a fundamental requirement for achieving acceleration.
  2. Quantization must lead to actual acceleration. Acceleration can arise from higher INT8 hardware throughput, or from the memory bandwidth saved by smaller memory footprint. Importantly, the benefits of acceleration must outweigh the quantization overhead.

Let’s do some math

Suppose an operator can be expressed in the form of matrix multiplication:
$$\mathbf{Y}=\mathbf{X} \mathbf{W}^\top,$$
where $\mathbf{X} \in \mathbb{R}^{N \times C}$, $\mathbf{Y} \in \mathbb{R}^{N \times D}$, $\mathbf{W} \in \mathbb{R}^{D \times C}$, while their quantized versions are denoted as $\hat{\mathbf{X}}$, $\hat{\mathbf{Y}}$, $\hat{\mathbf{W}}$. Our goal is to ensure that operations can still be performed using GEMM after quantization, i.e.:
$$\hat{\mathbf{Y}}=\hat{\mathbf{X}} \hat{\mathbf{W}}^\top.$$

Let the per-element quantization functions for $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{W}$ be denoted as $p_{nc}(\cdot)$, $q_{nd}(\cdot)$, $r_{dc}(\cdot)$ respectively:
$$\begin{aligned}
\hat{x}_ {nc} &= p_ {nc}(x_{nc}), \\
\hat{y}_ {nd} &= q_ {nd}(y_{nd}), \\
\hat{w}_ {dc} &= r_ {dc}(w_{dc}).
\end{aligned}$$
The corresponding dequantization functions are denoted as $p_ {nc}^{-1}(\cdot)$, $q_ {nd}^{-1}(\cdot)$, $r_ {dc}^{-1}(\cdot)$, i.e.:
$$\begin{aligned}
y_ {nd}
&= \sum_ {c=1}^{C} x_ {nc} w_ {dc}, \\
q_ {nd}^{-1}(\hat{y}_ {nd}) &= \sum_ {c=1}^{C} p_ {nc}^{-1}(\hat{x}_ {nc}) \cdot r_ {dc}^{-1}(\hat{w}_ {dc}).
\end{aligned}$$
The above formulas set the basic constraints that practical quantization should satisfy mathematically.

Some basic quantization methods

With these basic constraints in place, we can now discuss several fundamental quantization methods, including per-element, per-channel, per-token, and per-tensor quantization.

Per-element and Per-channel

In the basic constraint above, the dequantization function $q_ {nd}^{-1}(\cdot)$ on the left-hand side does not depend on $c$. Clearly, if the dequantization functions $p_ {nc}^{-1}(\cdot)$ and $r_ {dc}^{-1}(\cdot)$ on the right-hand side depend on $c$, the constraint is violated. This implies that the following two conditions cannot be satisfied at the same time:

  1. Computation can be done by GEMM.
  2. Different quantization functions can be applied in different channels of $\mathbf{X}$ and $\mathbf{W}$.

In other words, this indicates that per-element and per-channel quantization cannot be accelerated using GEMM. They are impractical.

Per-token and per-tensor

From the above discussion, we know that practical quantization needs to satisfy at least:
$$\begin{aligned}
p_ {n}(\cdot) &= p_ {nc} (\cdot), \quad \forall n, c, \\
r_ {d}(\cdot) &= r_ {dc} (\cdot), \quad \forall d, c.
\end{aligned}$$
That is, the quantization function is same for all channels. Therefore, the basic constraint can be formulated as:
$$q_ {nd}^{-1}(\hat{y}_ {nd}) = \sum_ {c=1}^{C} p_ {n}^{-1}(\hat{x}_ {nc}) \cdot r_ {d}^{-1}(\hat{w}_ {dc}).$$
Thus, we get per-token quantization. If we further assume:
$$\begin{aligned}
p(\cdot) &= p_ {nc} (\cdot), \quad \forall n, c, \\
r(\cdot) &= r_ {dc} (\cdot), \quad \forall d, c.
\end{aligned}$$
That is, the quantization function is same for all elements in both $\mathbf{X}$ and $\mathbf{W}$. Therefore, the basic constraint can be formulated as:
$$q_ {nd}^{-1}(\hat{y}_ {nd}) = q^{-1}(\hat{y}_ {nd}) = \sum_ {c=1}^{C} p^{-1}(\hat{x}_ {nc}) \cdot r^{-1}(\hat{w}_ {dc}).$$
We thus obtain per-tensor quantization. While both of these quantization methods have theoretical feasibility, the practical values of them are still limited by hardware support (as discussed in the next section).

For convenience, the following discussion focuses only on per-token quantization. Per-tensor quantization can be seen as a special case of per-token quantization. The most commonly used quantization method in practice is symmetric uniform quantization, which scales the value range using multiplication, i.e.:
$$\begin{aligned}
\hat{x}_ {nc} &= p_ {n}(x_ {nc}) = p_ n x_ {nc}, \\
\hat{w}_ {dc} &= r_ {d}(w_ {dc}) = r_ d w_ {dc}, \\
\hat{y}_ {nd} &= q_ {nd}(y_ {nd}) = p_ n r_ d y_ {nd}.
\end{aligned}$$

We can formulate per-token symmetric uniform quantization by matrix multiplication:
$$\begin{aligned}
\hat{\mathbf{X}} &= \text{diag}(p_1,\cdots,p_ N)\cdot \mathbf{X} = \begin{pmatrix}
p_ 1 & \cdots & p_ 1 \\
\vdots & \ddots & \vdots \\
p_ N & \cdots & p_ N
\end{pmatrix} \otimes \mathbf{X}, \\
\hat{\mathbf{W}} &= \text{diag}(r_1,\cdots,r_ D)\cdot \mathbf{W} = \begin{pmatrix}
r_ 1 & \cdots & r_ 1 \\
\vdots & \ddots & \vdots \\
r_ D & \cdots & r_ D
\end{pmatrix} \otimes \mathbf{W}, \\
\hat{\mathbf{Y}} &= \text{diag}(p_1,\cdots,p_ N)\cdot \mathbf{Y} \cdot \text{diag}(r_1,\cdots,r_ D) = \begin{pmatrix}
p_ 1 r_ 1 & \cdots & p_ 1 r_ D \\
\vdots & \ddots & \vdots \\
p_ N r_ 1 & \cdots & p_ N r_ D
\end{pmatrix} \otimes \mathbf{Y},
\end{aligned}$$
where $\otimes$ represents element-wise matrix multiplication. It can be observed that both quantization and dequantization can be efficiently implemented using element-wise matrix multiplication with dimension broadcasting. The following figure illustrates the computation process by an example:
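
A quick numerical check of this formulation is given below; the scales are kept in floating point and rounding is omitted, so it only verifies the algebra, not a real INT8 kernel:

import numpy as np

N, C, D = 4, 8, 5
X, W = np.random.randn(N, C), np.random.randn(D, C)
p = 127.0 / np.abs(X).max(axis=1)            # one scale per token (row of X)
r = 127.0 / np.abs(W).max(axis=1)            # one scale per output channel (row of W)

X_hat = X * p[:, None]                       # diag(p) @ X via broadcasting
W_hat = W * r[:, None]                       # diag(r) @ W via broadcasting
Y_hat = X_hat @ W_hat.T                      # "quantized" GEMM
Y = Y_hat / np.outer(p, r)                   # dequantize: divide element (n, d) by p_n * r_d
assert np.allclose(Y, X @ W.T)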

Hardware requirements

Hardware support still needs to be considered when we try to use GEMM for quantized operators. For example, on NVIDIA GPUs, Tensor Cores support FP16 and INT8 matrix multiplication, but not mixed FP16/INT8 matrix multiplication. This means that W8A8 quantization can benefit from Tensor Cores, while W8A16 and W16A8 quantization lack hardware support and may not achieve real acceleration on NVIDIA GPUs. Many W8A16 and W16A8 methods actually dequantize before the GEMM and then compute in FP16; whether these methods actually accelerate anything requires further discussion (see below).

Performance analysis

The above discussion only shows that per-token quantization can leverage GEMM. The following analysis examines whether it can also provide actual acceleration.

We compare the following three setups:

  1. Unquantized, using FP16 for both storage and computation.
  2. W8A8 quantization, with I/O activations stored in FP16. This is the approach used by some works like LLM.int8(). To avoid additional CUDA kernel launch overhead, we assume that quantization and dequantization are fused with GEMM.
  3. W8A16 quantization, internally converting weights to FP16 for computation. Kernel fusion is also applied here.

Without loss of generality, assume the hardware INT8 throughput is $2\times$ that of FP16, so we count one INT8 operation as $1$ normalized OP and one FP16 operation as $2$. We can then list the following table:

| Method | FP16 | W8A8 (FP16 activations I/O) | W8A16 |
| --- | --- | --- | --- |
| GEMM OPs | $2NCD$ | $NCD$ | $2NCD$ |
| GEMM mem I/O | $2(NC+CD+ND)$ | $2NC+CD+2ND$ | $2NC+CD+2ND$ |
| quant/dequant OPs | $0$ | $2NC+4ND$ | $2CD$ |
| quant/dequant mem I/O | $0$ | $2(N+D)$ | $2D$ |
| total OPs | $2NCD$ | $NCD+2NC+4ND$ | $2NCD+2CD$ |
| total mem I/O | $2(NC+CD+ND)$ | $2NC+CD+2ND+2(N+D)$ | $2NC+CD+2ND+2D$ |
| total arithmetic intensity (OPs : mem I/O) | $\cfrac{1}{1/N+1/C+1/D}$ | $\cfrac{1+2/D+4/C}{1/N+2/C+2/D+2/(NC)+2/(CD)}$ | $\cfrac{1+1/N}{1/(2N)+1/C+1/D+1/(NC)}$ |
| total arithmetic intensity (second-order approximation) | $\cfrac{1}{1/N+1/C+1/D}$ | $\cfrac{1}{1/N+2/C+2/D}$ | $\cfrac{1}{1/(2N)+1/C+1/D}$ |

Analyzing the table above, we can draw the following conclusions:

  1. W8A8 quantization (with FP16 activations I/O) reduces the operations by almost half compared to FP16, but it decreases the total arithmetic intensity. Therefore, in memory-bound scenarios, W8A8 quantization may not achieve a $2\times$ throughput improvement (ZeroQuant addresses this issue, as discussed below). But it can still lead to a significant throughput improvement when memory bandwidth is sufficient.
  2. W8A16 quantization keeps roughly the same number of operations as FP16, but it slightly increases the total arithmetic intensity (the increase is larger when $N$ is small). Therefore, it also has practical value in memory-bound scenarios, especially since activations in LLMs are typically harder to quantize than weights.
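
The small script below evaluates the total-OPs and total-memory-I/O rows of the table for one decode-like shape (small $N$, large $C$ and $D$); the shape is arbitrary and only serves to illustrate the trend behind the two conclusions above:

N, C, D = 8, 4096, 4096

cases = {
    "FP16":  (2 * N * C * D,
              2 * (N * C + C * D + N * D)),
    "W8A8":  (N * C * D + 2 * N * C + 4 * N * D,
              2 * N * C + C * D + 2 * N * D + 2 * (N + D)),
    "W8A16": (2 * N * C * D + 2 * C * D,
              2 * N * C + C * D + 2 * N * D + 2 * D),
}
for name, (ops, io) in cases.items():
    print(f"{name:6s} total OPs = {ops:.3e}, total mem I/O = {io:.3e}, intensity = {ops / io:.1f}")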

Some LLM Quantization works

LLM.int8()

LLM.int8() actually employs selective per-token quantization. It stores weights and activations in FP16 and then applies different strategies for different tokens, as illustrated below:

LLM.int8()

  • For tokens suitable for quantization, it applies per-token INT8 quantization to weights and activations, computes results using INT8 GEMM, and then dequantizes them to FP16.
  • For tokens with outliers, it directly computes an FP16 GEMM.

The results from these two parts can be combined to form the final result.

SmoothQuant

As discussed above, per-channel quantization is not practical. For LLM quantization, however, the main difficulty comes from the activations: some channels contain values with much larger magnitudes than the rest, as shown below:

SmoothQuant observed that these outliers occur consistently in specific channels, while outliers are rare in weights (thus easier to quantize). Therefore, it proposes to “balance” the quantization difficulty between activations and weights by introducing a per-channel scaling factor:

SmoothQuant

This “balance” can be formulated as:
$$\begin{aligned}
\mathbf{Y}
&= \mathbf{X}\mathbf{W}^\top \\
&= \mathbf{X} \cdot \text{diag}(s_ 1,\cdots,s_ C) \cdot \text{diag}(s_ 1,\cdots,s_ C)^{-1} \cdot \mathbf{W}^\top \\
& = \left( \mathbf{X} \cdot \text{diag}(s_ 1,\cdots,s_ C) \right) \cdot \left( \mathbf{W}\cdot \text{diag}(s_ 1,\cdots,s_ C)^{-1} \right)^\top.
\end{aligned}$$
By selecting appropriate scaling factors $\text{diag}(s_ 1,\cdots,s_ C)$, we can balance the outlier values in the activations, and then quantize $\mathbf{X} \cdot \text{diag}(s_ 1,\cdots,s_ C)$ and $\mathbf{W}\cdot \text{diag}(s_ 1,\cdots,s_ C)^{-1}$ instead. The following figure gives an example:

SmoothQuant example
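
In the same spirit as the example above, the numpy sketch below chooses $s$ with a migration-strength heuristic along the lines of the SmoothQuant paper, adapted to the convention used here (where $\mathbf{X}$ is multiplied by $\text{diag}(s)$) and with $\alpha = 0.5$ assumed:

import numpy as np

N, C, D = 16, 8, 4
X = np.random.randn(N, C) * np.array([1, 1, 50, 1, 1, 1, 30, 1])   # two outlier channels
W = np.random.randn(D, C)

alpha = 0.5
s = np.abs(W).max(axis=0) ** (1 - alpha) / np.abs(X).max(axis=0) ** alpha
X_s, W_s = X * s, W / s                        # X · diag(s) and W · diag(s)^{-1}

assert np.allclose(X_s @ W_s.T, X @ W.T)       # the product is mathematically unchanged
print(np.abs(X).max(axis=0).round(1))          # per-channel ranges before: outliers dominate
print(np.abs(X_s).max(axis=0).round(1))        # after: ranges are much more balanced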

SmoothQuant is an excellent alternative to per-channel quantization, as demonstrated in the paper by its impressive performance in quantizing LLMs to W8A8.

ZeroQuant

In the above performance analysis of W8A8, we found that using FP16 for activations I/O reduces the overall arithmetic intensity after quantization, which may harm the throughput improvement in memory-bound scenarios. ZeroQuant addresses this issue by fusing the quantization into the previous operator and fusing the dequantization after GEMM, as shown in the figure below.

ZeroQuant

Thus, the activation I/O between operators stays in INT8, which reduces the total memory I/O to $NC+CD+ND+2(N+D)$, restores the arithmetic intensity to the FP16 level, and fully leverages the high throughput of INT8.

Conclusion

This blog provides a matrix multiplication perspective on quantization, indicating some fundamental requirements for practical quantization and explaining why per-channel quantization is impractical. It also discusses several examples of LLM per-token quantization, including LLM.int8(), SmoothQuant, and ZeroQuant.
They are all practical and demonstrate significant acceleration in real-world scenarios.

Preface

This article is a guide, distilled from hands-on production experience, to tuning NFS performance on a 10 Gbps network, with particular attention to workloads that read and write lots of small files (LOSF).

Tuning

Hardware

On the network hardware side, both bandwidth and latency matter.

High bandwidth is a prerequisite for NFS performance; 10 Gbps is the baseline for production use, and faster InfiniBand or RoCE networks can be chosen according to needs and budget.

For lots-of-small-files (LOSF) workloads, latency matters more than bandwidth. Many tuning guides overlook this and only look at sequential throughput; even when they test 4K random I/O, they use the wrong methodology (the correct method is given below).

Latency matters because, if a program accesses small files in an intrinsically serialized way, latency caps the serialized IOPS. A 0.1 ms latency caps serialized IOPS at 10k, while a 1 ms latency caps it at 1k.

Intrinsically serialized access is very common. For example, with home directories on NFS, loading oh-my-zsh or importing Python packages is intrinsically serialized. A 1 ms network latency makes these programs unbearably slow (e.g., import torch takes more than 30 s).
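
To see the effect on your own mount, a crude probe like the one below (the path and file count are placeholders) opens and reads small files strictly one after another, so each iteration pays at least one network round trip; the resulting rate approximates the serialized IOPS ceiling discussed above:

import os, time

MOUNT = "/mnt/nfs/some-dir-with-small-files"   # placeholder path on the NFS mount
files = sorted(os.listdir(MOUNT))[:1000]

start = time.perf_counter()
for name in files:
    with open(os.path.join(MOUNT, name), "rb") as f:
        f.read(4096)
elapsed = time.perf_counter() - start
# 1000 * elapsed converts seconds to milliseconds
print(f"{len(files) / elapsed:.0f} serialized IOPS, {1000 * elapsed / len(files):.2f} ms per file")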

Using decent enterprise-grade switches and a properly designed network topology keeps latency as low as possible. The quality of optical modules and copper (RJ45) transceiver modules can also affect latency dramatically: the 中科光电 copper transceiver module I used previously introduced an extra 0.1 ms of latency, cutting IOPS by 2/3.

Note that although RDMA can reduce latency in theory, in my tests the serialized IOPS of 10 Gbps Ethernet and 100 Gbps InfiniBand did not differ much; plain Ethernet is sufficient on a limited budget.

TODO: jumbo frames

Linux Kernel

The kernel network parameters need to be adjusted for high-speed networks:

# Ref: https://gist.github.com/mizanRahman/40ba603759bfb5153189ccdc9dbbd1e4

# Disable TCP slow start on idle connections
net.ipv4.tcp_slow_start_after_idle = 0

# Increase Linux autotuning TCP buffer limits
# Set max to 16MB for 1GE and 32M (33554432) or 54M (56623104) for 10GE
# Don't set tcp_mem itself! Let the kernel scale it based on RAM.
net.core.rmem_max = 56623104
net.core.wmem_max = 56623104
net.core.rmem_default = 56623104
net.core.wmem_default = 56623104
net.core.optmem_max = 40960
net.ipv4.tcp_rmem = 4096 87380 56623104
net.ipv4.tcp_wmem = 4096 65536 56623104

# TCP Congestion Control
net.ipv4.tcp_congestion_control = bbr
net.core.default_qdisc = cake

Apply these settings on both the server and the client; they can be written to /etc/sysctl.conf to persist.

Server Side

The NFS server thread count can be raised generously; it helps when the server is under heavy load. I simply set it to the server's number of hardware threads. Edit /etc/nfs.conf:

[nfsd]
threads=128

The following NFS server export options are worth adjusting:

  • async: treat synchronous I/O operations as asynchronous. Workloads dominated by synchronous writes gain a lot of performance, but data may be lost if the server crashes, so it is not recommended when data integrity is critical;
  • no_subtree_check: has little impact on performance, but improves reliability in some cases (at the cost of a slight security risk). See [1].

Client Side

Unless there is a specific reason, use the latest NFSv4.2 by default. When NFSv3 uses UDP as the transport, data can be corrupted on high-speed links because of UDP packet sequence-number issues, see [2].

The following NFS client mount options are worth adjusting:

  • proto=rdma: set this when the network supports RDMA;
  • nocto: disable close-to-open cache consistency semantics. By default NFS writes all changes back to the server when a file is closed. Not recommended if strong file consistency across multiple clients is required;
  • ac: enable attribute caching, so the client caches file attributes. Likewise not recommended for clusters with strict consistency requirements;
  • fsc: use FS-Cache to cache data locally; cachefilesd must be configured as well. Oddly, I did not observe data actually being cached locally in my tests, which needs further investigation;
  • nconnect=16: establish 16 TCP connections between the NFS client and server. By default the client opens a single TCP connection and multiplexes all RPCs over it, which in some cases limits sequential throughput. Increasing nconnect (maximum 16) solves this.

In particular, noatime / relatime have no effect on NFS [3]; the NFS client always caches atime updates.

Some guides recommend changing rsize and wsize, but with NFSv4.2 these already negotiate to the maximum of 1048576 by default, so there is no need to change them manually; just verify that the negotiated values are correct.

According to [4], sunrpc.tcp_max_slot_table_entries can affect performance and may be increased (default 2). In my tests, NFS sometimes stalled under sustained small-file workloads at the scale of tens of millions of files; increasing this parameter fixed the problem. Set /etc/modprobe.d/sunrpc.conf:

options sunrpc tcp_slot_table_entries=16384

Occasionally nfsd uses a lot of CPU and performance drops sharply, with a large number of delegreturn RPC calls recorded. According to [5], this can be resolved by disabling fs.leases-enable; set it in /etc/sysctl.conf:

fs.leases-enable = 0

After nfsd restarts for whatever reason, there is by default a 90-second grace period for lock recovery, during which nfsd rejects all open requests; the kernel log shows:

[1073511.138061] NFSD: starting 90-second grace period (net f0000000)

In practice this period can be shortened to reduce the impact of nfsd restarts. Set /etc/default/nfs-kernel-server:

# Options for rpc.svcgssd.
RPCSVCGSSDOPTS="--lease-time 10 --grace-time 10"

Benchmarks

TODO

Summary

TODO

References

[1] https://man.archlinux.org/man/exports.5.en#no_subtree_check

[2] https://man.archlinux.org/man/nfs.5.en#Using_NFS_over_UDP_on_high-speed_links

[3] https://man.archlinux.org/man/nfs.5.en#File_timestamp_maintenance

[4] https://learn.microsoft.com/en-us/azure/azure-netapp-files/performance-linux-concurrency-session-slots

[5] https://docs.gitlab.com/ee/administration/nfs.html#disable-nfs-server-delegation

This blog is a write-up of the paper “ACS: Concurrent Kernel Execution on Irregular, Input-Dependent Computational Graphs“ from arXiv’24.

Motivation

Some workloads (e.g., Simulation Engines for Deep RL, Dynamic DNNs) cannot fully utilize the massive parallelism of GPUs (see Figure 1). The main reason is that these workloads contain lots of small kernels which cannot fully utilize the GPU, and these kernels are not executed concurrently, although most of them are independent and in theory can be executed concurrently.

Figure 1. Achieved Occupancy of simulation engines (up) and dynamic DNN (down)

But there are some challenges to execute these kernels concurrently:

  1. Input-dependent kernel dependencies. For some workloads, the dependencies between kernels are only determined at runtime for each input. Constructing the full computational graph and resolving dependencies before execution introduces high latency (see Figure 2; 47% of overall execution time on average, as the paper reports).

Figure 2. DAG construction time as % of execution time

  2. Irregular kernel dependencies. Some workloads have irregular computational graphs. We can partition the computational graph of the workload into independent streams of kernels, but this requires fine-grained scheduling and synchronization, with large overhead (see Figure 3).

Figure 3. Kernel launch and synchronization overheads

Existing solutions:

  1. CUDA Graph and AMD ATMI. They allow users to specify dependencies between kernels as a DAG and can eliminate the synchronization and kernel launch overhead. But the DAG needs to be constructed in full before execution, which makes them unsuitable for dynamic kernel dependencies (such as dynamic DNNs).

  2. Using events provided by the CUDA stream management API, which allow synchronization between kernels across streams through the cudaStreamWaitEvent API without blocking the host. But this approach still requires deriving the dependencies between all kernels beforehand.

  3. Persistent threads (PT) can eliminate the scheduling and launch overheads, but are only effective when all kernels are homogeneous.

    PT is just like coroutine in some programming languages.

  4. CUDA dynamic parallelism (CDP) and AMD's device enqueue (DE) enable parent kernels to launch child kernels, but only allow data dependencies between one parent and its children (so they cannot be used to synchronize between multiple tasks).

Design

The goal of this paper is to design a framework that enables efficient concurrent execution of GPU kernels with:

  1. lightweight detection of inter-kernel dependencies at runtime,

  2. low overhead kernel scheduling and synchronization.

The key idea is to perform the dependence checking and scheduling within a small window of kernels at runtime similar to out-of-order instruction scheduling.

The authors propose Automatic Concurrent Scheduling (ACS) as the solution. The overall design of ACS-SW is shown in Figure 4. It contains three main functionalities:

Figure 4. ACS-SW Overview

  1. Determining inter-kernel dependencies. By checking for overlaps between read segments and write segments, ACS determines the dependencies between kernels (a toy version of this overlap check is sketched after this list). For a wide range of commonly used kernels (e.g., matrix multiplication, convolution), the read and write segments can easily be inferred from the inputs. For some kernels, however, the accessed memory ranges cannot be determined statically because of potential indirect memory accesses, so the authors simply assume the entire GPU memory may be accessed.

    Memory regions written to/accessed by the kernel

    The authors use a kernel wrapper to perform the dependency detection; get_addresses() is called at kernel launch to populate __read_segments__ and __write_segments__.

    struct ACE_wrapper {
        // list of read, write segments defined as
        // [{start_adr1,size1},{start_adr2,size2}..]
        list __read_segments__;
        list __write_segments__;
        // function which gets called at kernel
        // launch to populate read, write segments
        void get_addresses(
            dim3 blocks, dim3 threads, ...
        );
        // function declaration of the kernel
        static __global__ void kernel(...);
    };
  2. Tracking kernel state at runtime. A kernel in the window can be in one of three states:

    1. Ready: all kernels it depends on have completed execution.
    2. Pending: upstream kernels are still pending or executing.
    3. Executing.

Kernels in the scheduling window with their state and corresponding upstream kernels

  3. Eliminating CPU synchronization overheads. See ACS-HW below for more details.
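
A toy version of the overlap test from item 1 might look like the following; kernels are represented here simply as dicts of (start, size) segments, mirroring the wrapper struct above rather than ACS's actual implementation:

def overlaps(a, b):
    (s1, n1), (s2, n2) = a, b
    return s1 < s2 + n2 and s2 < s1 + n1

def depends_on(k_new, k_old):
    # RAW/WAW: k_new reads or writes memory that k_old writes.
    # WAR: k_new writes memory that k_old reads.
    raw_waw = any(overlaps(seg, w) for seg in k_new["read"] + k_new["write"]
                  for w in k_old["write"])
    war = any(overlaps(w, r) for w in k_new["write"] for r in k_old["read"])
    return raw_waw or war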

ACS has two variants:

  1. ACS-SW: software-only implementation which emulates the out-of-order kernel scheduling mechanism.

  2. ACS-HW: hardware-facilitated implementation which is more efficient as it also alleviates synchronization overheads.

ACS-SW

Window Module

This module determines the inter-kernel dependencies. It is implemented as a separate thread that manages the input FIFO queue and the scheduling window. The kernel state tracking is implemented in the hardware.

Scheduler Module

This module schedules and launches ready kernels for execution. It has a fixed number of CUDA streams, and each stream contains only one kernel at any given time. Threads with empty streams poll the scheduling window for a ready kernel.

ACS-SW: The scheduler module

ACS-HW

ACS-SW incurs kernel synchronization and launch overheads because the scheduler module launches kernels from the CPU. ACS-HW solves these problems with a software-hardware co-design.

ACS-HW Overview

Software side: maintains an input FIFO queue like ACS-SW, and a list of the kernels in the GPU's scheduling window, though this list may be stale.

Hardware-side: the scheduling window and its management are implemented in hardware on the GPU side.

A key novelty in the hardware design is two-stage dependency detection. First, ACS uses software to perform an initial detection based on (possibly stale) kernel information, avoiding frequent synchronization overhead; the hardware then corrects any outdated dependency information. This two-stage approach significantly reduces the hardware complexity.

ACS-HW Scheduler

Evaluation

  1. Baseline: cuDNN implementation (for DNNs) and a jax implementation (for deep RL simulation), both using CUDA streams.
  2. ACS-SW: on real hardware.
  3. ACS-SW-Sim: ACS-SW on the GPU simulator.
  4. ACS-HW: on the GPU simulator.
  5. CUDAGraph.

Deep RL physics simulations: Normalized Speedup

Deep RL physics simulations: Normalized Speedup on GPU simulator

Deep RL physics simulations: Achieved occupancy

Dynamic DNNs: Normalized speedup

Dynamic DNNs: Achieved occupancy

Static DNNs: Normalized speedup

Static DNNs: Achieved occupancy

Comments

Strengths

This paper focuses on the problem of low GPU utilization caused by the serial execution of numerous small CUDA kernels. I believe this paper effectively addresses this problem, particularly with the following innovative points that impress me:

  1. Out-of-order dependency detection and scheduling. Out-of-order (OoO) is a common technique in micro-architecture and software (e.g., hard disk I/O queue) designs. It’s an impressive and innovative idea to introduce OoO into this area to find the dynamic dependencies efficiently.

  2. A good trade-off. When I first read the Introduction section of the paper, I thought the read-write dependency detection might be a difficult task. To my knowledge, there are no reliable static binary memory access analysis techniques (otherwise, segmentation faults wouldn't be a common problem). However, the authors made a good simplification and trade-off regarding this problem. For most common kernels, memory access areas can be inferred from input parameters; for the remaining kernels, it can be assumed that they access the entire memory. Since a few common operators account for most of the execution time, this trade-off leads to significant performance improvements with a relatively low scheduling overhead. This innovation is my favorite aspect of this paper.

  3. Two-stage dependency detection in ACS-HW. While a complete hardware dependency detection approach is theoretically feasible, it could incur significant chip area costs (as we know, the re-order buffer in a microprocessor takes a large area). The authors propose a two-stage, software-hardware co-designed dependency detection, significantly simplifying the hardware design. It is a brilliant idea.

Weaknesses

This paper has some potential weaknesses:

  1. For each type of kernel, we must write a custom get_addresses function in the kernel wrapper. This may limit the adoption of ACS.

  2. Deciding whether kernels should be executed concurrently requires considering more factors than just data dependencies. If there are resource conflicts (e.g., memory bandwidth, shared memory size) between two large kernels, performance may degrade when they co-execute.

Improvements

I propose some potential improvements to this paper:

  1. In response to the first weakness mentioned above, I propose a profiling-rollback strategy to achieve safe automatic dependency detection. This strategy leverages the paging technique commonly used in OS virtual memory management: we can mark a memory page as read-only (or otherwise protected), so that a page fault tells us when a read or write occurs. While I'm unsure whether NVIDIA GPUs provide APIs for users to control page tables, let's assume such APIs exist. Given that many workloads are iterative (e.g., neural network training), we can profile the workload for just one iteration, using the aforementioned paging trick to record the memory access segments of each kernel. Obviously this may introduce some inaccuracies, so we need a rollback strategy to ensure correct program execution. During runtime, we mark the known __write_segments__ as read-write and all other areas as read-only. Upon encountering a page fault, we detect the error and revert to the default strategy (assuming all memory areas may be read and written). With this strategy, we can eliminate the need for a manual get_addresses function and maximize the potential parallelism.

  2. Regarding the second weakness, I suggest adopting the method of GPUPool to determine which kernels are suitable for concurrent execution. A naive solution involves tracking the number of SMs each kernel occupies. When the SMs of a GPU are fully occupied, even if there are kernels in the ready state and available CUDA streams, no new kernels are scheduled.

This blog is a write-up of the paper “GPUPool: A Holistic Approach to Fine-Grained GPU Sharing in the Cloud“ from PACT’22.

Motivation

This paper focuses on the GPU sharing in cloud scenarios.

Currently, existing GPU sharing techniques can be categorized into 2 types:

  • Time-sharing means executing each concurrent VM on a full device in a round-robin fashion. Pros: Simple and mature. Cons: VMs could still under-utilize the hardware within each time slice.

  • Space-sharing: splits a device into partitions and allows multiple workloads to execute on different partitions simultaneously.

Space-sharing can be categorized into 2 types:

  • Coarse-grained assigns disjoint sets of streaming multiprocessors (SMs) and memory channels to concurrent workloads. For example, Nvidia MIG. Pros: offers great performance isolation among tenants of the same GPU. Cons: (i) resource under-utilization within each SM consisting of heterogeneous functional units (e.g., FP32, INT, FP64, Tensor Cores) meant for different workload types. (ii) inefficient memory bandwidth usage caused by the bursty nature of GPU memory traffic.

  • Fine-grained allows different workloads to co-run on the same SMs and request memory bandwidth flexibly, such as CUDA Stream and MPS. Pros: Better hardware utilization.

The key problem of GPU sharing in the data center is performance unpredictability. It involves two key challenges:

  1. Mitigating interference. The amount of performance improvement from fine-grained sharing varies drastically depending on how compatible the concurrent workloads are in terms of resource usage. Also, the interference cannot be statically estimated. So, it is non-trivial to determine compatibility among a large number of incoming jobs in the cluster.

  2. Providing QoS guarantees.

Existing solutions:

  • Software-based: kernel slicing or a persistent thread model. Cons: high scheduling overhead.

  • Hardware-based: integrate sophisticated resource management logic into hardware to allocate resources for concurrent kernels. Cons: expensive and also inflexible.

Common problems of existing solutions:

  1. They do not address interference mitigation at the cluster level.

  2. They do not handle scenarios where incoming jobs must be distributed among multiple GPUs to satisfy QoS constraints.

Figure 1. Simulated system throughput of co-running `parb_spmv` and `rod_hotspot` at various TBs/SM settings

Problems of hardware TB scheduler which hinder the fine-grained sharing:

  1. It always attempts to launch as many thread blocks per SM (TBs/SM) for each kernel as allowed by the execution context storage constraints (e.g., registers, shared memory, thread slots), leaving insufficient resources for concurrent kernels. As shown in Figure 1, if we could set the TBs/SM individually for each kernel, we might achieve higher throughput.

  2. It only dispatches concurrent kernels onto SMs after the earlier-arriving kernel has finished launching all the thread blocks specified by its grid size. In some scenarios this forces an almost serial execution of kernels.

GPU applications in the cloud fall into two main categories: latency-sensitive, and throughput-oriented. Throughput-oriented workloads are good candidates for hardware space-sharing. They have the following characteristics:

  1. Most workloads involve a large variety of kernels with different hardware resource utilization characteristics (e.g., CNN: compute-intensive, batch-norm: memory-intensive).

  2. Active SMs are underutilized in some resources (FP, tensor core, memory bandwidth).

  3. They typically repeatedly execute the same sequence of kernels (e.g., ML).

  4. Relaxed QoS Requirements.

Design

This paper proposed a hardware-software co-designed strategy to solve these challenges.

Hardware

This paper changes the default behavior of CUDA runtime to make it more suitable for fine-grained sharing:

  1. Allows CUDA runtime to program the TBs/SM setting as one of the kernel launch parameters. The value of TBs/SM is selected by the performance predictor.

  2. Make the TB scheduler launch TBs from any concurrent kernels whenever they are running under their TBs/SM quota.

Software

Concept Explanation:

  • Job: a task submitted by user, such as a DNN training task. It may be iterative and contains multiple kernels.
  • Kernel: CUDA kernel.
  • Normalized Progress (NP): $t _ {isolate} / t _ {co-execute}$.

Two key observations:

  1. Co-execution performance of GPU kernels is highly correlated with resource utilization of individual kernels measured when running in isolation.

  2. Once we have predicted which job pairs can co-execute without violating QoS requirements, the scheduling task can be reduced to the classic maximum cardinality matching problem in graph theory.

Figure 2. Overall System Design of GPUPool

Based on these 2 observations, the author proposed GPUPool. Its overall system design is shown in Figure 2. It consists of 4 steps:

  1. Kernel Profiler. GPUPool groups all incoming GPU jobs into a batch for every scheduling window (e.g., 30 seconds). The user provides the application executable and an execution time budget. GPUPool then automatically profiles one iteration of the job in isolation on hardware, collecting performance counter metrics for each kernel.

  2. Co-execution Performance Predictor. This step decides the compatibility of all possible job pairs within the batch using the profiling result. It contains 2 stages:

    1. Kernel-wise Predictors. This stage predicts how well each kernel from one job will co-run with the kernels of the other job. It uses a Gradient Boosting Tree (GBT) model to predict the performance of each kernel when co-running with another kernel (based on the 1st key observation). The model takes the profiling data of the kernels as input and outputs the NP. This prediction is done for each feasible TBs/SM setting.

    2. Job-wise Predictor. It builds an interference matrix (shown in Figure 3) from the NPs predicted (under the optimal TBs/SM setting) in the previous stage, which indicates how much two kernels slow down when co-running. GPUPool then uses this matrix to calculate the co-running time of the two jobs. The authors found that a full calculation may require tens of thousands of iterations, but the result converges to a steady state after a few iterations, so they use an approximation algorithm (shown in Figure 4) that stops the timeline calculation once the accumulated slowdown of each job changes by less than a small delta over the past epoch.

Figure 3. Interference Matrix

Figure 4. Concurrent Application Timeline
  3. Job dispatcher. It decides which job pairs should co-run to maximize system performance while satisfying QoS. The decisions are found by solving a maximum cardinality matching problem: each node represents a job, and an edge connects two jobs if they can co-run without violating the QoS requirements. A graph algorithm then finds a maximum cardinality matching, i.e., the largest subset of edges that share no common end node (a toy sketch of this step follows the list below). Due to the potential unreliability of the performance predictor, GPUPool also adds a safety margin $\delta$ to the edge formulation:

$$E = \left\{ \left( {job}_ i, {job}_ j \right) \mid {job}_ i, {job}_ j \in V\ \text{and}\ {NP}_ {job_ x} > {QoS}_ {job_ x} \times (1 + \delta ),\ x \in \{i, j\} \right\}$$

  4. Execution. The batch of jobs is dispatched to the modified GPU hardware.
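
A toy sketch of the dispatching step is shown below; predict_np(i, j) stands in for the two-stage performance predictor and qos for the per-job QoS targets, both assumed inputs rather than GPUPool's real interfaces, and jobs are assumed to be comparable IDs:

import networkx as nx

def pair_jobs(jobs, predict_np, qos, delta=0.05):
    g = nx.Graph()
    g.add_nodes_from(jobs)
    for i in jobs:
        for j in jobs:
            if i < j:
                np_i, np_j = predict_np(i, j)      # predicted normalized progress of each job
                # connect two jobs only if both still meet QoS with the safety margin delta
                if np_i > qos[i] * (1 + delta) and np_j > qos[j] * (1 + delta):
                    g.add_edge(i, j)
    # maximum cardinality matching: co-run as many pairs as possible; unmatched jobs run alone
    return nx.max_weight_matching(g, maxcardinality=True)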

Evaluations

The paper compares GPUPool against three baseline systems:

  1. No-Sharing.

  2. Coarse: packing the jobs onto as few GPUs as possible using a greedy scheduling algorithm.

  3. Heuristic: pairing up jobs with the highest and lowest bandwidth utilization (profiled offline) from a batch of incoming jobs.

The metric is system throughput $STP=\sum_{i=1}^n \cfrac{t_{isolated}^i}{t_{shared}^i}$, where $t_{isolated}^i$ and $t_{shared}^i$ are the turnaround times of the $i$-th concurrent job when executing in an isolated and a shared environment, respectively. The paper also uses ${QoS}_ {reached}$ to evaluate the QoS fulfillment rate.

Comparison of GPU Sharing Systems

Sorted STP on GPUs

Throughput Normalized to QoS Target

Prediction Accuracy of Different ML Techniques

Comments

Strengths

This paper targets the fine-grained GPU sharing problem in the cloud. I believe this work provides a valuable solution to this problem.

From my perspective, fine-grained GPU sharing presents three key challenges:

  1. Limitations imposed by hardware and CUDA, which make it difficult for programmers to flexibly control kernel execution.

  2. Reliable and low-cost performance prediction for concurrent kernel execution. Establishing an analytical performance prediction model is highly challenging. One naive approach is using real hardware to profile, but due to the $\mathcal{O}(n^2)$ ($n$ representing the number of jobs) time complexity, this method is not scalable to larger clusters.

  3. Efficient algorithms to find appropriate job combinations. If we allow an arbitrary number of jobs to execute concurrently, this becomes an NP-hard problem.

This paper cleverly addresses or bypasses these challenges through the following strategies:

  1. Hardware-software co-design, which involves modifying the hardware to provide a more flexible API for upper-layer applications. While this prevents the authors from testing their method on actual hardware and forces them to perform experiments on a simulator (GPGPU-Sim), I believe such simulations can provide valuable insights for adjustments on real hardware.

  2. Predicting kernel co-execution performance with an ML model. This is a standout aspect of the paper (and my favorite novelty). The authors introduce ML with a good motivation and effectively address a challenging performance modeling problem, bypassing complicated analytical modeling. The ML model also has good interpretability: the top-10 most important metrics (shown in Figure 5) align well with human intuition. Furthermore, in my research experience with deep learning compilers (e.g., TVM), I have seen many papers introduce such ML models for performance prediction. The idea of leveraging ML techniques to bypass complicated modeling problems is highly valuable in systems research, and it is the most important thing I learned from this paper.

  3. Instead of solving the whole NP-hard job combination problem, the authors limit the number of concurrently executing jobs to 2 and consider this simpler case. It is a fantastic trade-off. The simplified problem can be solved by a maximum cardinality matching algorithm; it may not find the optimal combination, but it exchanges a reasonable scheduling overhead for a substantial performance improvement.

Weaknesses

This paper also has some potential weaknesses:

  1. It seems to ignore the situation in which two concurrent jobs have different execution times. For instance, when a longer job and a shorter job are executed together, after the shorter job finishes, GPUPool seems unable to schedule a new job to the GPU; instead, the remaining GPU time is monopolized by the longer job. This could result in lower resource utilization.

  2. The concurrent execution of multiple jobs on a single GPU may also be constrained by GPU memory capacity. A possible improvement is to ask users to indicate the maximum GPU memory usage of their applications and to consider these constraints when constructing the graph.

  3. This paper does not consider jobs that use multiple GPUs, which are quite common in reality. When a job can occupy multiple GPUs, there are some additional constraints:

    1. Inter-GPU connection (e.g., NVLink or InfiniBand) bandwidth is the potential bottleneck, especially for distributed training strategies relying on high GPU interconnect bandwidth, such as Data Parallelism. Improper job scheduling may lead to contention for bandwidth among multiple jobs, or jobs requiring high GPU interconnect bandwidth may run on different nodes.

    2. When a single job leverages multiple GPUs, the workload types on different GPUs may not be the same. For example, in Pipeline Parallelism, different GPUs run different stages of the neural network.

  4. This paper does not clearly take into account the impact of the memory hierarchy on performance, such as shared memory (or it only considers it implicitly through the ML model). Some CUDA kernels are optimized by carefully utilizing SM shared memory, such as Flash Attention. When two such kernels run together, does it lead to shared memory contention? Could it result in runtime errors, or in shared memory spilling into global memory and causing a severe performance decline? The experiments in the paper cannot answer these questions. Also, the profiling metrics selected to train the stage-1 model, listed in Figure 5, do not contain any metrics about shared memory capacity. Another possibility is that the ML model is already good enough to handle this problem. Regardless, the impact of the memory hierarchy on GPU sharing deserves further study.

Figure 5. Metrics Used to Train Stage 1 Prediction Model

Possible Improvements

I have some potential ideas to improve this work:

  1. In response to the first weakness mentioned above, we can extend GPUPool so that it can schedule a new job to the GPU after the shorter job finishes. This can be achieved by a simple modification: keep the running jobs in the incoming window, and if two jobs are still running on the same GPU, also keep the edge between them in the pairing graph. With this modification, when the shorter job finishes, we can re-run the matching algorithm to find a new job to pair with the longer one.

  2. We can extend GPUPool to support multi-GPU jobs. To achieve that, we should consider inter-GPU connection bandwidth. This may include the following modifications:

    1. Ask users to indicate the required inter-GPU bandwidth or connection types (e.g., NVLink/PCIe/Infiniband/Ethernet).

    2. Treat a multi-GPU job as several sub-jobs, each being a single-GPU job with interconnection constraints. Then we can reuse the infrastructure of GPUPool to find co-running opportunities.

    3. Extend the last step, “Execution”, to consider the interconnection constraints, so it can dispatch sub-jobs to nodes that meet the constraints. This may require an efficient graph algorithm to find the job placement, which requires further research.

  3. Sometimes the goal of a data center is not just to improve resource utilization, but also to save energy. Improving resource utilization does not necessarily mean energy saving, because the chip’s speed $S$, power consumption $P$, and frequency $f$ have the following approximate relationship:

$$\begin{aligned}
S &\propto f, \\
P &\propto f^\alpha, \quad \alpha \in [2, 3],
\end{aligned}$$

We can extend the optimization target of GPUPool to power consumption. This can be achieved by adding a power prediction model built with similar methods. Then a multi-objective optimization algorithm can be used to find the best job combination, considering both performance and power consumption.

Motivation

A machine learning cluster needs a secure way to expose services to users and to interconnect servers across the public Internet, which calls for a VPN.

The VPN deployment has to take the following factors into account:

  1. Network topology: choose a suitable topology to keep latency as low as possible;
  2. User management: adding, removing, and authorizing users should be easy;
  3. Simple to use and maintain.

Design

Network Topology

The network topology determines the latency.

The lowest-latency option is obviously a full mesh, i.e., a direct P2P connection between every pair of peers. But the management complexity of this topology is $\mathcal{O}(n^2)$: every newly added peer requires changing the configuration of all other peers, and NAT traversal has to be dealt with, so some automation software is required. I tried Netmaker and Headscale, but neither seemed able to handle the complex network environment on campus correctly; with the symmetric NAT used by various enterprise-grade routers, the probability of successfully establishing P2P connections was very low.

In the end I chose a topology that combines full mesh and hub-and-spoke. Since the number of servers and their IPs rarely change, manually configuring a full-mesh network between the servers is feasible. Meanwhile, a gateway server acts as the hub for user access: users only establish a connection to the gateway server. Since most users actually use the VPN from inside the campus, connecting to the on-campus gateway server and relaying traffic through it does not add much extra latency. This structure balances latency against management complexity, and users can be added, removed, and authorized solely on the gateway server.

Network Topology

Protocol Choice

The popular OpenVPN and IPSec are both good enough, but the newer WireGuard offers unmatched configuration simplicity. On the server side, WireGuard defines a peer and its routes with a few lines of configuration; on the user side, since WireGuard authenticates with key pairs, a single configuration file is enough to join the VPN, with no extra passwords to remember or login steps.

Management

For predictability and stability, I chose manual configuration. The full-mesh network between the servers only needs to be configured once and rarely changes afterwards. User management is handled by a script: when a new user needs to be added, the script generates a key pair, assigns an IP, adds the public key and route information to the gateway server's peer list, and then generates a configuration file containing the private key and the assigned IP, which is sent to the user.
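
The script itself is not reproduced here, but a hedged Python sketch of the flow is shown below; the IP assignment scheme, file paths, and the gateway public-key placeholder are illustrative only, while wg genkey / wg pubkey are the standard WireGuard CLI commands:

import subprocess

def add_user(name: str, host_id: int, gateway_conf="/etc/wireguard/wg0.conf"):
    priv = subprocess.run(["wg", "genkey"], capture_output=True, text=True, check=True).stdout.strip()
    pub = subprocess.run(["wg", "pubkey"], input=priv, capture_output=True, text=True, check=True).stdout.strip()
    ip4, ip6 = f"10.1.0.{host_id}", f"fd01::{host_id:x}"

    # append the new peer to the gateway server's config
    with open(gateway_conf, "a") as f:
        f.write(f"\n[Peer]\n# {name}\nPublicKey = {pub}\n"
                f"AllowedIPs = {ip4}/32, {ip6}/128\nPersistentKeepalive = 25\n")

    # client config handed to the user (gateway endpoint as in the examples below)
    return (f"[Interface]\nPrivateKey = {priv}\nAddress = {ip4}/16, {ip6}/64\n\n"
            f"[Peer]\nPublicKey = <gateway public key>\n"
            f"AllowedIPs = 10.1.0.0/16, fd01::/64\n"
            f"Endpoint = wg.ustcaigroup.xyz:51820\nPersistentKeepalive = 25\n")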

Example peer configuration for a user on the gateway server:

[Peer]
PublicKey = <redacted>
AllowedIPs = 10.1.x.y/32
AllowedIPs = fd01::x:y/128
PersistentKeepalive = 25

Example client configuration for a user:

[Interface]
PrivateKey = <redacted>
Address = 10.1.x.y/16
Address = fd01::x:y/64

[Peer]
PublicKey = <redacted>
AllowedIPs = 10.1.0.0/16 # route all VPN traffic to gateway server
AllowedIPs = fd01::/64
Endpoint = wg.ustcaigroup.xyz:51820 # gateway server is dual stack
# Endpoint = wg.ustcaigroup.xyz:51820 # IPv4
# Endpoint = wg.ustcaigroup.xyz:51820 # IPv6
PersistentKeepalive = 25

Environment

The hardware environment for this article is the Ascend 910B3; the software environment includes CANN 7.0-RC1, PyTorch 1.11.0, and Ascend PyTorch Adapter v5.0.rc3-pytorch1.11.0. Things may differ slightly on other CANN and PyTorch versions.

Registration Process

Adding a custom operator to Ascend PyTorch Adapter

Reference:

Add the npu_add_custom function in torch_npu/csrc/aten/npu_native_functions.yaml:

custom:
- func: npu_add_custom(Tensor x, Tensor y) -> Tensor # the added function

Add an AddCustomKernelNpu.cpp file under torch_npu/csrc/aten/ops/op_api:

#include <torch/csrc/autograd/custom_function.h>

#include "torch_npu/csrc/framework/utils/OpAdapter.h"
#include "torch_npu/csrc/aten/NPUNativeFunctions.h"
#include "torch_npu/csrc/aten/ops/op_api/op_api_common.h"

namespace at_npu {
namespace native {
using torch::autograd::Function;
using torch::autograd::AutogradContext;

at::Tensor NPUNativeFunctions::npu_add_custom(const at::Tensor& x, const at::Tensor& y) {
    at::Tensor result = OpPreparation::ApplyTensor(x); // allocate the output tensor

    // calculate the output result of the NPU
    EXEC_NPU_CMD(aclnnAddCustom, x, y, result);
    return result;
}
} // namespace native
} // namespace at_npu

Then recompile and reinstall torch_npu.

Adding the custom operator implementation to CANN

Reference:

First, define the operator description file add_custom.json:

[
    {
        "op": "AddCustom",
        "language": "cpp",
        "input_desc": [
            {
                "name": "x",
                "param_type": "required",
                "format": [
                    "ND"
                ],
                "type": [
                    "fp16"
                ]
            },
            {
                "name": "y",
                "param_type": "required",
                "format": [
                    "ND"
                ],
                "type": [
                    "fp16"
                ]
            }
        ],
        "output_desc": [
            {
                "name": "z",
                "param_type": "required",
                "format": [
                    "ND"
                ],
                "type": [
                    "fp16"
                ]
            }
        ]
    }
]

Run

msopgen gen -i add_custom.json -c ai_core-Ascend910B3 -f pytorch -out . -lan cpp

to generate the operator project:

AddCustom
├── build.sh
├── cmake
│ ├── config.cmake
│ ├── func.cmake
│ ├── intf.cmake
│ ├── makeself.cmake
│ └── util
├── CMakeLists.txt
├── CMakePresets.json // set ASCEND_CANN_PACKAGE_PATH here
├── framework
├── op_host
│ ├── add_custom_tiling.h // defines the length and tiling-related information
│ ├── add_custom.cpp // host-side implementation of the operator
│ ├── CMakeLists.txt
├── op_kernel
│ ├── CMakeLists.txt
│ ├── add_custom.cpp // kernel-side implementation of the operator
└── scripts

In CMakePresets.json, change ASCEND_CANN_PACKAGE_PATH to the CANN installation path.

The content of op_host/add_custom_tiling.h (a minimal implementation):

#include "register/tilingdata_base.h"

namespace optiling {
BEGIN_TILING_DATA_DEF(AddCustomTilingData)
TILING_DATA_FIELD_DEF(uint32_t, size); // 定义 tensor size
END_TILING_DATA_DEF;

REGISTER_TILING_DATA_CLASS(AddCustom, AddCustomTilingData)
}

In op_host/add_custom.cpp, set the block_dim used when the operator is invoked:

context->SetBlockDim(20); // block_dim of the 910B3

op_kernel/add_custom.cpp contains the actual kernel implementation:


#include "kernel_operator.h"

#ifdef __DAV_C220_VEC__

extern "C" __global__ __aicore__ void add_custom(GM_ADDR x, GM_ADDR y, GM_ADDR z, GM_ADDR workspace, GM_ADDR tiling) {
GET_TILING_DATA(tiling_data, tiling);
uint32_t M = tiling_data.size; // 从 tiling_data 中获取 tensor size

// ...
}

#else

// 重要:CANN 会尝试不同的 ccec 编译参数以推断算子的类型(VEC、CUBE、MIXED),如果不创建一个 stub 函数将会编译失败
extern "C" __global__ __aicore__ void add_custom(GM_ADDR x, GM_ADDR y, GM_ADDR z, GM_ADDR workspace, GM_ADDR tiling) {
pip_barrier(PIPE_ALL);
}

#endif

Build and Deploy

$ bash build.sh
$ ./custom_opp_euleros_aarch64.run

Calling it from PyTorch:

import torch
import torch_npu

# ...

z = torch.npu_add_custom(x, y) # compiled at run time, so the first call waits for the compilation

Registration Internals

TODO

References

TODO
