Environment

The hardware environment for this post is an Ascend 910B3; the software environment includes CANN 7.0-RC1, PyTorch 1.11.0, and Ascend PyTorch Adapter v5.0.rc3-pytorch1.11.0. Details may differ slightly on other CANN and PyTorch versions.

Registration Process

Adding the custom operator to Ascend PyTorch Adapter

Reference:

Add the npu_add_custom function in torch_npu/csrc/aten/npu_native_functions.yaml:

custom:
  - func: npu_add_custom(Tensor x, Tensor y) -> Tensor # the added function

Add an AddCustomKernelNpu.cpp file under torch_npu/csrc/aten/ops/op_api:

#include <torch/csrc/autograd/custom_function.h>

#include "torch_npu/csrc/framework/utils/OpAdapter.h"
#include "torch_npu/csrc/aten/NPUNativeFunctions.h"
#include "torch_npu/csrc/aten/ops/op_api/op_api_common.h"

namespace at_npu {
namespace native {
using torch::autograd::Function;
using torch::autograd::AutogradContext;

at::Tensor NPUNativeFunctions::npu_add_custom(const at::Tensor& x, const at::Tensor& y) {
    at::Tensor result = OpPreparation::ApplyTensor(x); // allocate the output tensor

    // calculate the output result of the NPU
    EXEC_NPU_CMD(aclnnAddCustom, x, y, result);
    return result;
}
} // namespace native
} // namespace at_npu

After that, rebuild and reinstall torch_npu.
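A rough sketch of the rebuild, assuming the adapter is built from its source tree with the bundled build script (the script name, Python version, and wheel file name depend on your checkout):

# inside the Ascend PyTorch Adapter (torch_npu) source tree
bash ci/build.sh --python=3.7                       # assumption: the repo's bundled build script
pip install --force-reinstall dist/torch_npu-*.whl  # install the freshly built wheel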

Adding the custom operator implementation to CANN

Reference:

First, define the operator description file add_custom.json:

[
{
"op": "AddCustom",
"language": "cpp",
"input_desc": [
{
"name": "x",
"param_type": "required",
"format": [
"ND"
],
"type": [
"fp16"
]
},
{
"name": "y",
"param_type": "required",
"format": [
"ND"
],
"type": [
"fp16"
]
}
],
"output_desc": [
{
"name": "z",
"param_type": "required",
"format": [
"ND"
],
"type": [
"fp16"
]
}
]
}
]

Run

msopgen gen -i add_custom.json -c ai_core-Ascend910B3 -f pytorch -out . -lan cpp

to generate the operator project:

AddCustom
├── build.sh
├── cmake
│ ├── config.cmake
│ ├── func.cmake
│ ├── intf.cmake
│ ├── makeself.cmake
│ └── util
├── CMakeLists.txt
├── CMakePresets.json // set ASCEND_CANN_PACKAGE_PATH here
├── framework
├── op_host
│ ├── add_custom_tiling.h // defines the length and tiling-related information
│ ├── add_custom.cpp // host-side implementation of the operator
│ ├── CMakeLists.txt
├── op_kernel
│ ├── CMakeLists.txt
│ ├── add_custom.cpp // kernel-side implementation of the operator
└── scripts

In CMakePresets.json, set ASCEND_CANN_PACKAGE_PATH to the CANN installation path.
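A quick way to locate the field that needs editing (the path below is only a typical default; use your actual CANN install prefix):

grep -n "ASCEND_CANN_PACKAGE_PATH" CMakePresets.json
# then change its "value" to something like /usr/local/Ascend/ascend-toolkit/latest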

The content of op_host/add_custom_tiling.h is as follows (a minimal implementation):

#include "register/tilingdata_base.h"

namespace optiling {
BEGIN_TILING_DATA_DEF(AddCustomTilingData)
TILING_DATA_FIELD_DEF(uint32_t, size); // 定义 tensor size
END_TILING_DATA_DEF;

REGISTER_TILING_DATA_CLASS(AddCustom, AddCustomTilingData)
}

In op_host/add_custom.cpp, set the block_dim used when the operator is launched:

context->SetBlockDim(20); // block_dim for the 910B3

op_kernel/add_custom.cpp holds the actual kernel implementation:


#include "kernel_operator.h"

#ifdef __DAV_C220_VEC__

extern "C" __global__ __aicore__ void add_custom(GM_ADDR x, GM_ADDR y, GM_ADDR z, GM_ADDR workspace, GM_ADDR tiling) {
GET_TILING_DATA(tiling_data, tiling);
uint32_t M = tiling_data.size; // 从 tiling_data 中获取 tensor size

// ...
}

#else

// 重要:CANN 会尝试不同的 ccec 编译参数以推断算子的类型(VEC、CUBE、MIXED),如果不创建一个 stub 函数将会编译失败
extern "C" __global__ __aicore__ void add_custom(GM_ADDR x, GM_ADDR y, GM_ADDR z, GM_ADDR workspace, GM_ADDR tiling) {
pip_barrier(PIPE_ALL);
}

#endif

Build and deploy:

$ bash build.sh
$ ./custom_opp_euleros_aarch64.run

Calling it from PyTorch:

import torch
import torch_npu

# ...

z = torch.npu_add_custom(x, y) # compiled at runtime, so the first call has to wait for compilation

Registration Internals

TODO

References

TODO

This is an unfinished blog post.

Preface

Due to Internet censorship in China (known as the GFW, Great Firewall, 防火长城), many websites (e.g. Google, Twitter) are blocked, and some websites (e.g. GitHub) suffer connectivity issues. In China, the act of circumventing Internet censorship is referred to as 翻墙 (literally, climbing over the wall).

In China, a proxy is essential for free access to the Internet. Despite the various commercial options available, they may not be suitable for everyone. Therefore, I have built a user-friendly and easy-to-maintain proxy system for my research group, as part of my responsibilities as a system administrator.

Target

  1. Easy to use. Team members should only need some simple configuration, and the proxy client should be able to update its configuration automatically.
  2. Stability.
  3. Sufficient traffic, to download large datasets.
  4. Low latency, to provide a good experience for web browsing.
  5. Low cost.
  6. Easy to maintain. Frequent maintenance is unacceptable, and new functionality should require only simple configuration changes.
  7. Concealment. The cat-and-mouse game between the GFW and anti-censorship tools has been escalating. Ten years ago (2013), an OpenVPN client was all you needed to “Across the Great Wall and reach every corner in the world”. Now, you must use much more sophisticated solutions to keep your “unusual” traffic from being detected by the GFW. According to GFW Report, the popular Shadowsocks (a proxy protocol which simply encrypts all traffic using a pre-shared key) has been detected and blocked, and TLS-based proxies also encountered large-scale blocking in Oct 2022. The tools and protocols used must be concealed enough to allow the service to run for a long time.

Available Resources

CERNET

Cloudflare WARP

VPS

Server in USTC

Anti-Censorship Tools

Adopted Solution

Deployment

Problems

Client Initialization

Compatibility

Conclusion

Preface

The TOEFL gave me more anxiety than any exam since the gaokao; it kept most of my 2023 in the dark, and it was also the exam I invested the most time and money in.

I started with a goal of 100 overall and 20 in speaking. Along the way there were countless days of lost confidence, of being drowned in anxiety, of practicing speaking until my tongue tied itself in knots, until on November 3, 2023 I finally saw a score I was satisfied with.

I am writing this both as a summary of the past for myself and in the hope that it helps anyone who happens to read it.

The sittings I took and my scores:

Date Total Reading Listening Speaking Writing Notes
2023.7.22 89 27 24 16 22 before the format change
2023.8.15 89 28 25 17 19 this and later sittings were after the change
2023.9.16 96 29 27 19 21
2023.10.14 96 30 24 19 23
2023.10.28 101 28 27 22 24
MyBest 103 30 27 22 24

Study materials I used:

Reading

For most Chinese students this is the easiest section; any reasonably solid student from a 211-tier university or above can handle it with ease.

Before the exam I only did two practice passages to get used to the pacing, scored 27 on my first attempt, stayed stable afterwards, and got a full score on the fourth. Personally I find TOEFL reading even easier than Jiangsu gaokao or CET-6 reading. Although I memorized a lot of vocabulary before my first sitting, that was mostly for the GRE; TOEFL reading itself poses almost no vocabulary challenge.

A high score is not hard, but a full score still takes a bit of luck. The time I scored 30, the two passages were about "the early Earth's oceans and atmosphere" and "the agricultural revolution and irrigation", both topics I knew very well. Under those conditions the reading section is easy mode.

Listening

TOEFL's peculiar format means the listening, speaking, and writing sections all test your listening ability, but the listening in these three parts is completely different:

  • The listening section itself:
    • Conversation: relatively hard. Everyday dialogue has always been my weak spot; it has the most linking and swallowed sounds, and the pace is fast.
    • Lecture: moderate. It looks long, but the pace is actually slow and the error tolerance is high; if you miss a sentence you can usually infer it from context.
  • Integrated speaking: the hardest listening of all. You need to capture as many details as possible and take thorough notes; with my weak speaking fundamentals it was doubly hard.
  • Integrated writing: the easiest. You first read a passage that familiarizes you with the topic, and the lecture has a rigid structure, clear logic, and a slow pace.

That said, with proper training the listening section is easy to improve and to score high on. I trained intensively for about 20 days, plus roughly another 30 days of scattered practice mixed in with other things.

The most important thing about listening is to work out an approach that suits you. Many study materials stress how to take notes properly while listening, and that is how I trained at first, but after my first sitting I realized it did not suit me: note-taking splits your attention and greatly increases the chance of losing the thread (no longer following the logical relations of the material).

My conclusion: notes are good at recording details, while the brain is good at remembering logic.

The pure listening section does not actually focus on details; it tests your overall grasp of the material. In my later 20 days of dedicated training I abandoned notes entirely, and it worked well. That said, I later found that when the detail density gets high, occasionally jotting something down still helps, mainly to keep you from zoning out; what I wrote down was never actually useful and I never looked at it during the exam. Here, note-taking only reinforces memory; it is not a form of external information storage.

My own listening training method: first pass, do the questions; second pass, listen again; third pass, listen while reading the transcript; then listen several more times until you can hear every detail clearly. During dedicated training each passage took me 20-40 minutes, and I did at least 6 passages a day.

Likewise, topic familiarity strongly affects performance. In the sitting where I first got 27, one lecture told the classic story of "peeling graphene with tape and winning the Nobel Prize". I knew it well and sailed through, but the content was fairly technical, with plenty of physics vocabulary about graphene's layered structure and the principle behind its anisotropic conductivity. Since TOEFL lectures lean toward science and engineering, the useless knowledge picked up from Zhihu and Bilibili while slacking off, or even the popular-science books you read in middle school, can help you in unexpected ways; a broad knowledge base gets you twice the result for half the effort. Conversely, an unfamiliar topic is trouble: I only got 24 in listening in my fourth sitting because a literature lecture came up and I failed to understand most of it.

After the July 2023 format change there is a pitfall in listening: the mid-test break was removed, so fast test-takers may start their speaking section while you are still on listening, which is seriously distracting. Before my second sitting I had trained listening specifically yet still only got 25, precisely because of this.

The way to avoid it: skip all the direction screens quickly, and finish the reading section a couple of minutes early if you can, so that you are the first in the room to start speaking and it is everyone else who gets disturbed by you.

Better that I wrong the world than let the world wrong me.

Speaking

The scores make it obvious this was the part that tormented me most; the last two sittings were taken purely for speaking (a speaking score below 20 is very risky for applications).

I trained intensively on speaking for about 30 days, and I have lost count of the non-dedicated practice days.

For someone with a weak speaking foundation like me, heavy training can get you to around 20; beyond that it comes down to luck and on-the-day performance.

TOEFL speaking is less a speaking test than a grand integrated one. For me personally, the reading and listening demands of the speaking section are even higher than those of the reading and listening sections themselves:

  • The reading in task 2 and task 3 requires speed reading; I feel you cannot manage it below roughly 4 words/s, and there is no chance to roll back if you fail to parse something. The reading section proper, by contrast, can be read at my usual paper-reading pace, and a sentence I did not get can simply be reread.
  • The listening in integrated speaking requires you to record details, whereas in the listening section it is often enough to record the logic. Recording details means relying on notes, and balancing note-taking, taking in new information, and keeping the overall logic is the hardest part.

Independent speaking

Building up reusable material is necessary, but quantity is not the point; I prepared only 10 go-to pieces. What matters is using them fluently and reacting quickly to a prompt with the right piece to plug in. You can drill this with the "golden 80" speaking questions on 学而思考满分.

Prepared material is not a cure-all either: independent speaking inevitably involves a lot of randomness and often requires improvising a story on the spot. In that case it is faster to think it through quickly in Chinese and translate into English (write down a few keywords and string them into sentences as you speak).

Integrated speaking

For me the hardest part of the whole exam; every time I reached it my adrenaline spiked.

Handling integrated speaking is where I spent the most training time. There is no shortcut; you have to develop your own feel and experience. Here is what I found works for me:

  • While reading: tasks 2 and 3 give you 45 s to read, but it is best to scan it in about 15 s, find the key sentences (skip the rest entirely), and copy the key sentences down (not word for word, but complete enough that you can read them off directly without having to compose). The benefit is that during prep time I can quickly read them through once, so the start of my actual answer is both fluent and time-saving.
  • While listening: write down as many details as possible, but filter out what is not important, and note keywords or key sentences for the important parts. At the same time, note-taking must never get in the way of actually taking in the information.
  • While preparing: read out loud what you plan to say (do not rehearse silently; silent rehearsal gives you the illusion that you are already fluent), circle the useful information (or cross out the useless), use arrows to lay out the line you will follow, and where necessary write filler words between keywords to reduce the burden of composing on the fly.
  • While speaking: make fluency the top priority; if time runs short or you get stuck, drop some details. Stammering or repeating a sentence not only lowers your score but also wastes time.

Whatever happens, do not let yourself become overly tense. Excess tension slows your thinking and greatly increases stuttering while you speak. In the sitting where I got 22, I was in a fairly relaxed state during the speaking section.

My personal training method for integrated speaking: do an item normally once, immediately redo it, then read the model answer, then keep repeating it until it comes out very fluently. Each item takes 15-30 minutes this way, and I practiced 10 a day.

Writing

No feeling, all formula. I honestly did not put much time into writing practice; an average English foundation plus the right technique is enough for at least 22.

One thing to watch: do not let typing speed hold you back. I type slowly and make many typos, and this really did hurt me in the first two sittings, though it stopped being a problem once I got more practiced.

Integrated writing

The reading in integrated writing can be read calmly; the time given is enough to read it twice, and no notes are needed. The listening is easy too: the reading has already set up the topic, and the lecture's structure is rigid, its logic clear, its pace slow, so writing down the important details is not difficult.

The thing to avoid is memorizing a rigid template; wasting exam time typing out a template is not worth it, and clear logic with a tidy structure is enough. Spend the time reproducing as many details as possible; gaokao-level vocabulary is fine for language use and is enough for 24.

Writing for an Academic Discussion

The July 2023 revision removed independent writing and replaced it with Writing for an Academic Discussion, with the time shortened to 10 minutes. My writing score in the second sitting was only 19 because I was overconfident and walked in without practicing the new task at all, and as a result completely failed to answer as required.

Later I spent half a day training specifically for this task and basically got the hang of it. In the exam you really only need to read the professor's question (the pile of filler can be skipped), then glance at the two student sample answers just to extract their core points so that your own point does not collide with theirs (no need to read them in full), and then start writing.

My personal template:

From my perspective, <my point>.

Although <copy the point of the sample answer you disagree with>, <briefly state the advantage of my point>.

<Elaborate: examples work, or point out the weaknesses of the view you disagree with; 60-70 words is enough>.

<(Optional, a phrasing I personally like) sometimes note that my method can actually achieve the goal of the method I disagree with even better>.

So, <restate the point>.

Summary

Without accumulating small steps, one cannot travel a thousand miles.

For me personally, the TOEFL made me rethink how I have studied since entering university. My undergraduate courses were either things I was already familiar with or had a foundation in, or things I crammed right before the exam. A language test like the TOEFL has no shortcuts (unless you are a language genius); you have to train bit by bit from day 1, slowly building feel and experience. Along the way the obstacles are not only the questions themselves but, even more, the negative emotions; finding people you trust who are willing to listen and sharing how you feel helps enormously.

Problem

On October 30, 2023, I received a warning message from the data center administrator, informing me that the firewall had detected mining traffic sent from a server managed by me.

The “mining traffic” was a DNS request for bitcoin.sipa.be sent to 223.5.5.5.

Initially, I thought finding the virus process would be a simple task, just like my previous encounter with another mining virus. In that case, the attacker had logged in to the server through a weak SSH password and gained root, possibly by exploiting a privilege escalation vulnerability (the server was running EOL Ubuntu 16.04); a cron job was then set up to run a mining virus.

However, this time the situation was different. I couldn't find any suspicious processes, and there was no unusual GPU usage. Since I hadn't deployed any monitoring to record historical processes and sockets, the investigation had nowhere to start.

On October 31, I received the same warning again. Each time mining traffic is detected, the firewall blocks the server's outbound network, and losing Internet access causes a lot of trouble.

I suspected that someone had fallen victim to a supply chain attack, such as downloading a Python package containing a virus, or cloning code from GitHub and running it without any checks.

The immediate task was to identify which user and which process were responsible.

Solution

While I can't directly determine the user or the process, I can block and log suspicious traffic for further investigation.

This can be done with iptables:

# iptables -N LOGDROP                   # create a new chain
# iptables -A LOGDROP -j LOG --log-uid # log info
# iptables -A LOGDROP -j DROP # drop packet

# iptables -I OUTPUT 1 -p udp -m string --string "bitcoin" --algo bm -j LOGDROP # match string "bitcoin" in udp packet

The --log-uid option enables UID recording in /var/log/kern.log, for example:

IN= OUT=wg0 SRC=10.1.92.3 DST=10.1.2.13 LEN=42 TOS=0x00 PREC=0x00 TTL=64 ID=23294 DF PROTO=UDP SPT=52328 DPT=2333 LEN=22 UID=2109 GID=2109
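With a UID logged, the owning account and its current processes can be looked up (a quick sketch; 2109 is just the UID from the example line above):

getent passwd 2109                        # map the UID to a user name
ps -o pid,ppid,cmd -u "$(id -nu 2109)"    # list that user's processes for inspection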

Result

I am waiting for the next request sent by the virus.

Problem

Every year before the CVPR deadline, GPUs are in short supply and we have to borrow from elsewhere. USTC has BitaHub for on-campus users, but it suffers the same pre-CVPR scarcity, its job-submission usage model is very inconvenient, jobs that occupy multiple GPUs often wait in long queues, and its data management is downright user-hostile.

As the group's server administrator, to make my own life easier before CVPR and avoid repeating the 2021 ordeal of scrambling to juggle resources, I needed to improve the BitaHub experience:

  1. How to hold GPUs long-term and avoid queueing repeatedly (slightly unethical, but we had little choice);
  2. How to read data conveniently from our own servers instead of being forced into BitaHub's user-hostile data management;
  3. How to make the BitaHub GPU experience as close as possible to our own servers, lowering migration cost and making resource scheduling more flexible.

Approach

Jobs on BitaHub run as Docker containers, which gives us the chance to set up whatever environment we want inside the container, provided we can somehow SSH into it.

Some investigation showed that as long as the startup command keeps running, a BitaHub container keeps running and does not release its GPUs. The containers also have network access, and the BitaHub web page even helpfully shows the SSH private key of the root user inside each job's container.

This gives us an opening: run a tunnel program inside the container so that port 22 inside it becomes reachable from outside, and we can log in and occupy the resources long-term. And since the container has network access, we can also mount file systems from other on-campus servers directly.

Solution

The tunnel program I eventually chose is ssh itself, which can create a reverse tunnel:

ssh -i <key_file> -F none -o "StrictHostKeyChecking no" -o "ServerAliveInterval 15" -v -N -R <port>:localhost:22 jump@<jumpserver>

On the jump server, set up a user jump that allows login with a specific private key, then deliver that key into the container somehow (it could be baked into the image, but I chose an easier way: create a BitaHub dataset holding the key and attach that dataset to every job).

The container's startup command is just the command above (to cope with network hiccups, wrap it in a while true loop or use autossh to reconnect automatically; see the sketch below). Once started, it creates a reverse tunnel on port <port> of <jumpserver>, with <port> mapped to port 22 inside the container.
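A minimal sketch of such a startup command with the retry loop (placeholders as in the ssh command above):

while true; do
    ssh -i <key_file> -F none \
        -o "StrictHostKeyChecking no" -o "ServerAliveInterval 15" \
        -N -R <port>:localhost:22 jump@<jumpserver>
    sleep 5    # brief pause before reconnecting after a drop
done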

On <jumpserver>, set GatewayPorts yes in sshd_config so that the reverse tunnel listens on 0.0.0.0 instead of 127.0.0.1. Without this I would have to create an account for everyone on <jumpserver>, or forward each port with iptables, which is far too tedious. Binding to 0.0.0.0 lets the tunnels be reached directly from our existing VPN network.
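With that in place, a group member on the VPN can reach a container directly through the jump server (a sketch; bitahub_job_key is a placeholder for the per-job root key shown on the BitaHub job page):

ssh -i bitahub_job_key -p <port> root@<jumpserver>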

There are many options for mounting a file system; weighing security against convenience, I chose SSHFS. Exposing NFS directly to the public network is too dangerous, and configuring NFS user authentication is too tedious. Moreover, the kernel under BitaHub's containers neither loads the wireguard kmod nor maps /dev/net/tun, so a VPN cannot be used to protect the data. SSHFS simply reuses the existing user authentication, and SSH traffic is also more likely to be let through by whatever firewall the data center may have.

Mount the SSHFS with the following command:

sshfs -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=30,ssh_command='ssh -p <dataserver_port> -i <key_file>' <user>@<dataserver>:/path /path

Postscript

TODO

Problem

Since version 1.25.0, nginx's QUIC support has been merged into mainline; anyone who wants to try it can simply use the officially published nginx Docker images, which is very convenient.

However, nginx on my server uses SNI-based routing, driven by the needs of the new generation of TLS-based proxy protocols such as Shadow TLS and Xray Reality. These protocols cannot let nginx handle the TLS layer on their behalf (unlike earlier protocols that could use gRPC/WebSocket and the like as the transport), yet for the best camouflage it is necessary to use port 443/tcp (the whitelisted sites being imitated normally serve HTTPS only on 443/tcp). Sharing port 443/tcp is therefore a must.

To let SNI routing and QUIC coexist, all that is needed on top of the existing SNI routing configuration is to add listen 443 quic to each server. An example configuration follows.

Configuration

http {

# ...

server {
server_name example.com;

# 443/tcp is already taken by the nginx stream block and cannot be listened on again
# listen 443 ssl http2 reuseport so_keepalive=on;
# listen [::]:443 ssl http2 reuseport so_keepalive=on;

# listen on 443/udp and enable QUIC
# ref: https://nginx.org/en/docs/http/ngx_http_v3_module.html
listen 443 quic reuseport;
listen [::]:443 quic reuseport;

# listen on a unix domain socket to accept connections handed over by the stream block (a local port would also work)
# accept proxy_protocol, otherwise the source address shown in the logs is always unix:
listen unix:/dev/shm/nginx-example.sock ssl http2 proxy_protocol;
set_real_ip_from unix:; # override the source address only for connections coming from the unix domain socket
real_ip_header proxy_protocol;

add_header Alt-Svc 'h3=":443"; ma=86400'; # used to advertise the availability of HTTP/3

# ...
}

server {
server_name foo.example.com;

# multiple domains can share 443/udp
listen 443 quic;
listen [::]:443 quic;

listen unix:/dev/shm/nginx-example-foo.sock ssl http2 proxy_protocol;
set_real_ip_from unix:;
real_ip_header proxy_protocol;

add_header Alt-Svc 'h3=":443"; ma=86400'; # used to advertise the availability of HTTP/3

# ...
}
}

stream {

# ...

# route by TLS SNI
map $ssl_preread_server_name $name {
example.com unix:/dev/shm/nginx-example.sock;
foo.example.com unix:/dev/shm/nginx-example-foo.sock;
learn.microsoft.com 127.0.0.1:8443; # for shadow-tls / xray-reality, etc.
default unix:/dev/shm/nginx-default.sock;
}

server {
# listen on 443/tcp and route by SNI
listen 443 reuseport so_keepalive=on;
listen [::]:443 reuseport so_keepalive=on;
proxy_pass $name;
ssl_preread on;
proxy_protocol on;
}

}

Testing

Mainline curl/wget do not support QUIC yet, so the ymuski/curl-http3 Docker image can be used instead:

$ docker run -it --rm ymuski/curl-http3 curl https://static.monsoon-cs.moe/public/ --http3 -IL

HTTP/3 200
server: nginx/1.25.2
date: Tue, 26 Sep 2023 14:52:29 GMT
content-type: text/html; charset=utf-8
strict-transport-security: max-age=63072000
alt-svc: h3=":443"; ma=86400

References

Problem

The lab has some servers with AMD EPYC 7713 CPUs, purchased because some people in the group run programs with very high CPU load (I have no idea what the load is or why it cannot run on a GPU, and I do not have the energy to sort each case out); AMD processors with their many cores suit this need very well.

However attractive AMD CPUs are, though, they bring an extra problem in a deep-learning lab: numpy and PyTorch installed from Anaconda both use MKL as the BLAS implementation by default, and MKL library functions are the hotspot of most of these CPU-heavy programs, yet MKL checks whether it is running on an Intel CPU and, if not, skips its optimized code paths.

Since this is a deep-learning lab, few people have the HPC background to compile suitable numpy and PyTorch builds themselves, and it is hard to move away from Anaconda, so the dependency on MKL is difficult to remove. What is needed is a solution that ordinary users never have to notice.

Solution

A search engine will turn up a widely circulated fix: set the environment variable MKL_DEBUG_CPU_TYPE=5. This used to work, but it no longer has any effect on MKL 2020 and later.

I eventually found a cleverer solution here.

MKL calls a function mkl_serv_intel_cpu_true() to check whether it is running on an Intel CPU. Supplying a fake mkl_serv_intel_cpu_true() that always returns 1 is enough to fool MKL into believing it is running on an Intel CPU.

To do this we can use Linux's LD_PRELOAD mechanism: the shared library that LD_PRELOAD points to has the highest loading priority, so compiling the desired mkl_serv_intel_cpu_true() into an .so file and pointing LD_PRELOAD at it makes our definition get loaded first.

I have often heard of LD_PRELOAD being used for library-function hijacking attacks; here it finds a rather elegant use.

Implementation

Create mkl_trick.c:

int mkl_serv_intel_cpu_true() {
    return 1;
}

Compile it with gcc -shared -fPIC -o libmkl_trick.so mkl_trick.c and copy the resulting libmkl_trick.so to /usr/local/lib.
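A quick check that the symbol is exported from the library:

nm -D /usr/local/lib/libmkl_trick.so | grep mkl_serv_intel_cpu_true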

Add the following to the shell's global initialization files:

export MKL_DEBUG_CPU_TYPE=5  # for compatibility with older MKL versions
export MKL_ENABLE_INSTRUCTIONS=AVX2 # optional: tell MKL it may use AVX2
export LD_PRELOAD=/usr/local/lib/libmkl_trick.so

Some lab members use Bash and others use Zsh, so both need the change:

  • Bash: create /etc/profile.d/mkl.sh with the content above
  • ZSH: append it to /etc/zsh/zshenv
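To check that the override actually takes effect, a rough before-and-after timing of an MKL-heavy call is enough (a sketch; the matrix size is arbitrary, and the exports above are assumed to be in effect for the second command):

# without the trick: time a large matmul under plain MKL
LD_PRELOAD= python -c "import numpy,time; a=numpy.random.rand(4096,4096); t=time.time(); a@a; print(time.time()-t)"
# with the trick: this should be noticeably faster on the EPYC machines
python -c "import numpy,time; a=numpy.random.rand(4096,4096); t=time.time(); a@a; print(time.time()-t)"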

References

See original publication page for more details.

All my answer files can be browsed here, or you can download the zipped file (5.9 GB).

Requirements

This is a test for candidates who wish to participate in the training class organized by VCB-Studio. Finish as many problems as you can, and then do the following things:

  1. Pack your answers, result files, and necessary attachments into a zip/rar/7z file. The source files we provided and intermediate files from your encoding should not be packed in.
  2. Register a Baidu Net Disk account (https://pan.baidu.com), upload the zipped file and create a sharing link. Whether you like it or not, Baidu Net Disk has been the most effective way to share files within our team since day one. Other sharing methods will NOT be considered.
  3. Send the link via email to [email protected] before Beijing Time (UTC+8) Monday, 23 Jan 2023, 23:59:59. Late submissions will NOT be considered.
  4. Prepare a QQ account. The follow-up training courses will be conducted in the QQ group.

You should independently complete the answers without any public discussion. Any form of plagiarism will NOT be tolerated.

This test has 5 questions. For questions 2 and 3, you can choose ONE of them; if you answer both, we will pick the one with the higher score. The answers should be written in English.

Question1 (15pt)

Please describe yourself as who you are, where do you study, how do you come to know VCB-Studio and why are you interested in this project, etc. Please do not write more than 500 words, or approximately 1 page. (15pt)

Answers are hidden for privacy reasons.

Question2 (30pt)

Scanned pictures (or simply scans) are an important part of BDRips, which are often released as lossless PNG, TIFF format or lossy JPG format. Scans feature high resolution and large size. In the file Q2.7z, two sets of pictures have been provided for you. PNGs are the source scans, and WEBPs are transcoded from PNGs according to VCB-Studio Collation specifications. Your tasks are:

  1. Summarize the format conversion rules of scans in VCB-Studio Collation specifications. (6pt)
  2. Convert the sources to AVIF and JPEG-XL format, with sizes comparable to the WEBPs. (12pt)
  3. Comment on the quality, encoding speed, and compatibility of AVIF and JPEG-XL, and why/why not you may recommend us switching to the new format as the upgrade for WEBP in 2023. (12pt)

You are free to utilize existing tools, but you need to describe clearly where you find the tool and how to use it.

(1) Format conversion rules of scans in VCB-Studio Collation specifications

Choose the format that gives better image quality at the same size, while ensuring compatibility.

(2) Converting test

See Q2/convert.py for my conversion code. The libraries used are Pillow, pillow-avif-plugin, and jxlpy. Pillow is the image processing library I use most often; it supports WEBP but not AVIF or JPEG-XL, so I found two Pillow plugins via Google to add AVIF and JPEG-XL support.

PNG and WEBP Ref are the given images; WEBP Cus, AVIF, and JPEG-XL are my own encodes.

WEBP Cus is encoded by Pillow, which is backed by libwebp. Encoding speed is set to slowest (6), and quality is set to 90 to match the size of the reference WEBP images.

AVIF is encoded by pillow-avif-plugin, which is backed by libavif. Encoding speed is set to slowest (0), and quality is set to 84 to get a size comparable to the reference WEBP images.

JPEG-XL is encoded by jxlpy, which is backed by libjxl. Encoding effort is set to slowest (9), decoding speed is also set to slowest (0), and quality is set to 92 to get a size comparable to the reference WEBP images.
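For reference, roughly equivalent encodes can also be produced with the stock command-line encoders instead of the Pillow plugins. This is only a sketch, not the script I actually used, and flag spellings differ between tool versions:

cwebp -m 6 -q 90 01.png -o 01.webp    # libwebp: -m 6 is the slowest/best method
avifenc -s 0 -q 84 01.png 01.avif     # libavif: -s 0 is the slowest speed; newer builds accept -q for quality
cjxl 01.png 01.jxl -e 9 -q 92         # libjxl: -e 9 is the slowest effort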

The following table shows the result:

Image PNG (size) WEBP Ref (size) WEBP Cus (size/time) AVIF (size/time) JPEG-XL (size/time)
01 26.97 MB 2.95 MB 2.95 MB / 3.36 s 2.77 MB / 37.77 s 2.56 MB / 32.00 s
02 26.25 MB 2.93 MB 2.94 MB / 3.27 s 2.71 MB / 34.87 s 2.48 MB / 33.07 s
03 3.60 MB 0.26 MB 0.26 MB / 0.37 s 0.28 MB / 11.48 s 0.28 MB / 5.12 s
04 21.78 MB 1.03 MB 1.03 MB / 2.06 s 1.32 MB / 29.56 s 1.39 MB / 32.25 s
05 2.65 MB 0.13 MB 0.13 MB / 0.24 s 0.15 MB / 9.29 s 0.18 MB / 4.11 s
06 2.66 MB 0.13 MB 0.13 MB / 0.25 s 0.15 MB / 9.39 s 0.16 MB / 3.81 s
07 24.38 MB 1.71 MB 1.71 MB / 2.25 s 1.67 MB / 27.78 s 1.68 MB / 35.59 s
08 55.52 MB 7.58 MB 7.58 MB / 26.48 s 7.93 MB / 83.44 s 6.36 MB / 72.90 s
09 44.39 MB 2.00 MB 2.00 MB / 3.53 s 1.99 MB / 59.79 s 2.47 MB / 71.73 s
10 41.59 MB 1.21 MB 1.21 MB / 3.11 s 1.16 MB / 59.99 s 1.70 MB / 63.65 s

PS: pillow-avif-plugin encodes with 8 threads (on an i7-11700) and I did not find an option to turn that off; the other encoders use only 1 thread. The jxlpy example suggests that multithreading can be configured, but it did not work for me.

(3) Comparison and comment

Quality comparison:

[Figure: the same cropped region of image 03 as encoded in PNG, WEBP Ref, AVIF, and JPEG-XL]

Above is a cropped part of image 03 for the given encodings. The WEBP image shows severe smearing in dark areas, and an obvious color shift in the red dots at the upper left and lower right. The AVIF image smears less, but its color shift is the same as WEBP's. The JPEG-XL image is the closest to the reference PNG.

Detailed compatibility:

Format Windows macOS Android iOS Chrome Firefox Safari
WEBP ≥10 ≥11 ≥4 ≥14
AVIF ≥10-1903 ≥13 ≥12 ≥16
JPEG-XL

PS: The Windows, macOS, Android, and iOS results were found via Google; browser compatibility information can be found at https://caniuse.com.

Summary:

Format Quality Encoding Speed Compatibility
WEBP worst fast good
AVIF medium slow medium
JPEG-XL best slow bad

Due to the poor compatibility of JPEG-XL, it should not be considered an appropriate option. AVIF offers better image quality than WEBP but is only well supported on new platforms, which will take time for adoption, especially on fragmented Android and Windows. Although WEBP has a huge advantage in encoding speed, I do not think encoding speed is a factor that needs to be considered: even for large images the encoding time is only about 1 minute, and the number of images is not large. Compared with video encoding, this is a completely negligible time overhead.

In summary, I think now is not a suitable time to switch to AVIF or JPEG-XL. But two years from now, it will be time for AVIF to show its strength.

Question3 (30pt)

Recently 32-bit audio tracks have appeared in some of the latest Hi-Res music. Although now we would not see these annoying 32-bit tracks in the Blu-ray, we have to start working on them in advance. In the file Q3.7z, two 32-bit PCM files are provided for you. Your tasks are:

  1. Learn about 32-bit tracks and tell the difference between these two files. (6pt)
  2. Try to convert them to FLAC, ALAC, and WavPack losslessly. (15pt)
  3. Consider various aspects such as compression rate, encoding speed, and playback compatibility and select the format you recommend most for 32-bit audio. (9pt)

You are free to utilize existing tools, but you need to describe clearly where you find the tool and how to use it.

(1)

Using ffprobe to get audio encoding info:

Input #0, wav, from '01.wav':
Duration: 00:03:52.48, bitrate: 6144 kb/s
Stream #0:0: Audio: pcm_s32le ([1][0][0][0] / 0x0001), 96000 Hz, 2 channels, s32, 6144 kb/s
Input #0, wav, from '02.wav':
Duration: 00:07:03.00, bitrate: 6144 kb/s
Stream #0:0: Audio: pcm_f32le ([3][0][0][0] / 0x0003), 96000 Hz, 2 channels, flt, 6144 kb/s
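These outputs come from running ffprobe on each file, along these lines (a sketch; -hide_banner just suppresses the version banner):

for f in 01.wav 02.wav; do ffprobe -hide_banner "$f"; done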

The difference is: 01.wav is encoded by pcm_s32le, and 02.wav is encoded by pcm_f32le.

pcm_s32le means PCM encoded as 32-bit signed integers with little-endian byte ordering, while pcm_f32le means PCM encoded as 32-bit floating point with little-endian byte ordering.

(2)

I first tried to convert them losslessly using FFmpeg. If FFmpeg failed, I used Google to find a suitable codec.

This is the result of my attempt:

Format | 32-bit integer | 32-bit float
FLAC | FFmpeg ❌, flac (from v1.4.0) ✅ | FFmpeg ❌
ALAC | FFmpeg (decoding only), qaac (backed by Apple CoreAudioToolbox) ✅ | FFmpeg ❌
WavPack | FFmpeg ✅ | FFmpeg ✅

The conversion command:

Format | 32-bit integer | 32-bit float
FLAC | flac -o 01.flac 01.wav | -
ALAC | qaac64 -b 32 --alac -i 01.wav -o 01.m4a | -
WavPack | ffmpeg -i 01.wav 01.wv | ffmpeg -i 02.wav 02.wv

The resulting files are Q3/01.flac, Q3/01.m4a, Q3/01.wv and Q3/02.wv.
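To double-check that a conversion is lossless, the decoded PCM of an output can be hashed and compared with the source (a sketch; the sample format is forced so that both hashes are computed over the same raw data):

ffmpeg -v error -i 01.wav -map 0:a -c:a pcm_s32le -f md5 -   # hash of the source PCM
ffmpeg -v error -i 01.wv  -map 0:a -c:a pcm_s32le -f md5 -   # should print the same MD5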

(3)

Encoding speed and compression rate of different encoding methods:

Format | Compression ratio (WAV size / encoded size) | Speed ratio (audio duration / encoding time)
FLAC s32 1.337 128.44
ALAC s32 1.304 69.81
WavPack s32 1.280 121.08
WavPack f32 1.489 109.02

Summary:

FLAC s32 FLAC f32 ALAC s32 ALAC f32 WavPack s32 WavPack f32
Compression rate best medium worst -
Encoding speed very fast fast very fast very fast
Playback compatibility bad (flac only) good (FFmpeg) good (FFmpeg) good (FFmpeg)

Because FFmpeg is the de facto standard multimedia codec library used by most video players, FLAC is not a suitable choice here, since 32-bit FLAC can only be decoded by the flac tool. WavPack has an encoding speed advantage over ALAC, but considering that all three formats are fast in absolute terms (compared with video encoding), this advantage is not worth much. Finally, ALAC compresses better than WavPack, so file size can be saved.

To sum up, I recommend ALAC for encoding 32-bit audio. But if floating-point encoding is required (which is rare), WavPack is the only choice.

Question4 (35pt)

MSU publishes video encoder tests every year, with the latest one here:
https://compression.ru/video/codec_comparison/2021/main_report.html.

For the first time last year, H.266 (VVC) encoders participated in the tests and they performed well in terms of encoding quality in the slow encoding (1 fps) test.

  1. Choose any of the H.266 (VVC) or AV1 encoders in the figure below, and then encode the source file Q4 [E46686C4].m2ts with no more than 2500 Kbps of video bitrate. You’d better use 10bit variants of these encoders, which facilitates the comparison later. In addition, you need to describe clearly where you found the encoder and state the version and parameters you used. If you use H.266 (VVC) encoder, you will get additional 5pt. (10pt+5pt)
  2. We provide an AV1 video file Q4_AV1 [41A7EDDA].mkv, which was encoded via SVT-AV1 10bit encoder without any pre-processing. Comment on the picture quality compared to the source file. When you compare the picture quality, you may want to sample a few frames, attach some screenshots, and comment on the performance of dark scenes and moving scenes. (10pt)
  3. Now compare your own encoding to the given AV1 file in terms of picture quality, encoding speed, and playback compatibility. As a reference, we encoded the above AV1 file at 1.0 fps. (10pt)

(1) VVC encoding

The testing hardware and software environment is:

  • Encoder: VVenC v1.7.0.
  • Compiler: AMD Optimizing C/C++ Compiler 4.0.0.
  • CPU: 2 x AMD EPYC 7713, 128 cores / 256 threads in total.
  • RAM: 16 channel DDR4-3200.
  • OS: Ubuntu 18.04.6.

First, use ffmpeg to convert Q4 [E46686C4].m2ts to raw yuv420p10 video:

ffmpeg -i "Q4 [E46686C4].m2ts" -pix_fmt yuv420p10 Q4_yuv420p10.yuv

The -pix_fmt yuv420p10 parameter tells ffmpeg to output the raw video in yuv420p10 format.

Then, use vvencapp to encode the raw video:

vvencapp --input Q4_yuv420p10.yuv --size 1920x1080 --format yuv420_10 --fps 24000/1001 --preset <preset> --bitrate 2500kbps --output Q4_VVC.vvc

Parameters meaning:

  • --size 1920x1080: indicating the input raw video frame size is 1920x1080.
  • --format yuv420_10: same as yuv420p10 meaning in ffmpeg.
  • --fps 24000/1001: indicating the output video fps is 23.976 (same as original m2ts file).
  • --preset <preset>: a preset combination of VVC encoding parameters. Available options are faster, fast, medium, slow and slower. Detailed settings are listed in https://github.com/fraunhoferhhi/vvenc/blob/master/cfg/randomaccess_*.cfg.
  • --bitrate 2500kbps: controlling the output encoded video bitrate to about 2500kbps.
File Preset Encoding speed (fps)
Q4_VVC_faster.vvc faster 5.762
Q4_VVC_fast.vvc fast 2.156
Q4_VVC_medium.vvc medium 0.557
Q4_VVC_slow.vvc slow 0.177
Q4_VVC_slower.vvc slower 0.058

(2) Comparing source video and reference AV1 encoded video

The video player used is MPV with libvvdec & xHE-AAC support, configured according to https://vcb-s.com/archives/7594.

Dynamic fire against a dark background is a highly challenging scene. Compared to the original video, there are color blocks around the flame in the AV1 video, which is a common problem when the bitrate is insufficient.

Encoding Method Capture File
Original pics/m2ts-flame.png
AV1 pics/av1-flame.png
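Beyond eyeballing screenshots, an objective comparison can be made with ffmpeg's psnr filter, using the m2ts source as the reference (a sketch; assumes an ffmpeg build that can decode both files):

ffmpeg -i "Q4_AV1 [41A7EDDA].mkv" -i "Q4 [E46686C4].m2ts" \
       -lavfi "[0:v][1:v]psnr=stats_file=psnr_av1.log" -f null -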

(3) Comparing custom VVC encoded video and reference AV1 encoded video

Using the same player as in (2). In order to be comparable to the AV1 encode, I chose the VVC video encoded with the medium preset, which has an encoding speed of 0.557 fps.

The VVC-encoded video is much better than the AV1 video in the flame scene. The color blocks are less obvious and closer to the original video.

Encoding Method Capture File
Original pics/m2ts-flame.png
AV1 pics/av1-flame.png
VVC (medium) pics/vvc-flame.png

Question5 (20pt)

When we check an encoded file, we need to locate frames that have been encoded exceptionally awful. We use algorithms like PSNR to evaluate the similarity of each frame in the encoded file to the source file. The result is an array of scores, where the i-th score is tied to the i-th frame. These scores are called raw scores. However, what we are concerned about is the standard score, which is the raw score minus a threshold. A frame with a standard score less than 0 is considered a bad frame. The tasks are:

  1. Find the worst frame, i.e. the one with the lowest standard score among the bad frames, and output its index. If there is more than one worst frame, output the first. If there are no bad frames, output -1. Frames with a standard score of exactly 0 are not considered as bad frames. (10pt)

    Input:
    2 lines. The first line is two integers that represent the number of frames N and the threshold value S. The second line is an array of integers A[N], representing the raw score of each frame.

    For all the data, 1<=N<=200000, 0<S<100, 0<=A[i]<=100

    Output:
    An integer, the index of the worst frame. The index starts from 0. If there is more than one worst frame, output the first. If there are no bad frames, output -1.

    Sample:

    Input
    10 30
    42 31 44 23 21 26 31 41 50 72

    Output
    10
  2. Find a continuous sequence of frames that minimizes the sum of their standard scores and output this minimum value. Full scores will only be given if the time complexity of your algorithm is optimal. (10pt)

    Input:
    The same as (1).

    Output:
    An integer, the minimum sum value.

    Sample:

    Input
    10 30
    42 31 44 23 21 26 31 41 50 72

    Output
    -20

For each sub question, use C/C++/Java/Python/C# to write a console program. Read the input from the standard input and write it to standard output. Do NOT use libraries other than built-in ones (for example, no “import numpy as np”). Submit your source code.

(1) Find the worst frame

The following code is consistent with Q5/q5-1.c:

#include <stdio.h>

int main() {
    int frame_num;
    int threshold;
    scanf("%d%d", &frame_num, &threshold);
    int worst_idx = -1;
    int worst_rate = 101;
    for (int i = 0; i < frame_num; i++) {
        int rate;
        scanf("%d", &rate);
        if (rate < threshold && rate < worst_rate) {
            worst_rate = rate;
            worst_idx = i;
        }
    }
    printf("%d", worst_idx);
    return 0;
}
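A quick way to compile it and feed it the sample input (a sketch; assumes gcc is available):

gcc -O2 -o q5-1 q5-1.c
printf '10 30\n42 31 44 23 21 26 31 41 50 72\n' | ./q5-1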

(2) Find minimum subsequence sum

PS: Due to the ambiguity of the problem, I can't determine whether a sequence of length 0 satisfies the requirement. This determines whether the output should be 0 (indicating that a subsequence of length 0 is selected) or the smallest score (indicating that the sequence length is at least 1) when all the input standard scores are positive. The code I submitted is consistent with the second understanding (sequence length at least 1); if the first understanding (length 0 allowed) is correct, please comment out int min_sum = 101; and uncomment int min_sum = 0;.

The following code is consistent with Q5/q5-2.c:

#include <stdio.h>

int main() {
    int frame_num;
    int threshold;
    scanf("%d%d", &frame_num, &threshold);
    int min_sum = 101; // when all scores > 0, output the minimum
    // int min_sum = 0; // when all scores > 0, output 0
    int sum = 0;
    for (int i = 0; i < frame_num; i++) {
        int rate;
        scanf("%d", &rate);
        rate -= threshold;
        sum += rate;
        if (sum < min_sum) {
            min_sum = sum;
        }
        if (sum > 0) { // a positive running prefix can only hurt later frames, so reset it
            sum = 0;
        }
    }
    printf("%d", min_sum);
    return 0;
}

My first post on blog!
