# [论文分享]Channel Pruning via Automatic Structure Search

authors: Mingbao Lin, Rongrong Ji, et al.

cite: [Channel Pruning via Automatic Structure Search (arXiv:2001.08565v3)](https://arxiv.org/abs/2001.08565)

code: https://github.com/lmbxmu/ABCPruner (official)

# 0、Abstract

In this paper, we propose a new channel pruning method based on artificial bee colony algorithm (ABC), dubbed as ABCPruner, which aims to efficiently find optimal pruned structure, i.e., channel number in each layer, rather than selecting “important” channels as previous works did.

..., we first propose to shrink the combinations where the preserved channels are limited to a specific space, ... . And then, we formulate the search of optimal pruned structure as an optimization problem

and integrate the ABC algorithm to solve it in an automatic manner to lessen human interference.

(ABCPruner is then used to solve this optimization problem automatically.)

# 1、Introduction

Channel pruning targets at removing the entire channel in each layer, which is straightforward but challenging because removing channels in one layer might drastically change the input of the next layer.

(This is only mentioned in passing; no solution is given at this point.)

Most cutting-edge practice implements channel pruning by selecting channels (filters) based on rule-of-thumb designs.

1. The first is to identify the most important filter weights in a pre-trained model, ...

2. The second performs channel pruning based on handcrafted rules to regularize the retraining of a full model followed by pruning and fine-tuning...

The motivation of our ABCPruner is two-fold.

1. First, [Liu et al., 2019b] showed that the essence of channel pruning lies in finding optimal pruned structure, i.e., channel number in each layer, instead of selecting “important” channels.

2. Second, [He et al., 2018b] proved the feasibility of applying automatic methods for controlling hyper-parameters to channel pruning, which requires less human interference.

(The authors now introduce their own method.)

Given a CNN with $$L$$ layers, the number of possible pruned structures is $$\prod _ { j = 1 } ^ { L } c _ { j }$$, where $$L$$ is the number of layers and $$c_j$$ is the channel number of the $$j$$-th layer. This combinatorial overhead is prohibitively large.

To solve this problem, we propose to shrink the combinations by limiting the number of preserved channels to $$\{ 0.1 c _{ j } , 0.2 c _{ j } , \ldots , \alpha c _{ j } \}$$ where the value of α falls in {10%,20%, ...,100%}, which shows that there are 10α feasible solutions for each layer, ...

(The authors begin to describe their method, starting with the part above: shrinking the channel combinations into a restricted space.)
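To make the shrinkage concrete, here is a quick sketch of the combinatorics; the per-layer channel counts are made-up examples, not from the paper:

```python
from math import prod

# Hypothetical per-layer channel counts of a small 4-layer CNN.
channels = [64, 128, 256, 512]

# Unconstrained search: any channel count from 1..c_j per layer,
# giving prod(c_j) possible pruned structures.
full_space = prod(channels)

# ABCPruner-style shrinking: each layer keeps one of
# {0.1*c_j, 0.2*c_j, ..., alpha*c_j}, i.e. 10*alpha choices per layer.
alpha = 0.7  # preserve at most 70% of the channels per layer
choices_per_layer = int(10 * alpha)
shrunk_space = choices_per_layer ** len(channels)

print(full_space)    # 1073741824
print(shrunk_space)  # 2401
```

Even for this tiny network, the shrunk space (7^4 = 2401 structures) is many orders of magnitude smaller than the unconstrained one.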

As shown in Fig.1, we first initialize a structure set, each element of which represents the preserved channel number in each layer.

The filter weights of the full model are randomly selected and assigned to initialize each structure.

We train it for a given number of epochs to measure its fitness, i.e., its accuracy performance in this paper.

Then, ABC is introduced to update the structure set.

Similarly, filter assignment, training and fitness calculation for the updated structures are conducted. We continue the search for some cycles.

Finally, the one with the best fitness is considered as the optimal pruned structure, and its trained weights are reserved as a warm-up for fine-tuning.

1. Initialize a structure set (the percentage-based preserved-channel choices described above).
2. Randomly select filters from the full model and assign them to each structure.
3. Train for a given number of epochs and measure accuracy.
4. Use ABC to update the structure set.
5. Repeat steps 3 and 4.
6. Pick the structure with the best fitness and fine-tune it.
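The pipeline above can be sketched as a short loop. This is a heavily simplified illustration, not the paper's implementation: `fitness` stands in for the real "assign filters, train a few epochs, evaluate accuracy" step, and `abc_update` only resamples one layer instead of running the full ABC machinery:

```python
import random

CHANNELS = [64, 128, 256, 512]   # hypothetical full-model channel counts
ALPHA = 0.7                      # preserve at most 70% of channels per layer
N_STRUCTURES = 5                 # size of the structure set
CYCLES = 10                      # number of search cycles

def sample_structure():
    # Steps 1-2: each layer keeps one of {0.1c, 0.2c, ..., ALPHA*c} channels.
    return [int(c * random.randint(1, int(10 * ALPHA)) / 10) for c in CHANNELS]

def fitness(structure):
    # Step 3 stand-in: the paper randomly assigns filters from the full
    # model, trains a few epochs, and returns accuracy; a dummy score here.
    return random.random()

def abc_update(structure):
    # Step 4 stand-in: real ABC perturbs a structure toward a neighbor;
    # here we simply resample one layer's choice as a placeholder.
    new = structure[:]
    k = random.randrange(len(new))
    new[k] = int(CHANNELS[k] * random.randint(1, int(10 * ALPHA)) / 10)
    return new

random.seed(0)
structures = [sample_structure() for _ in range(N_STRUCTURES)]
scores = [fitness(s) for s in structures]
for _ in range(CYCLES):                       # Step 5: repeat update + evaluate
    for i, s in enumerate(structures):
        cand = abc_update(s)
        f = fitness(cand)
        if f > scores[i]:                     # greedy selection keeps the better one
            structures[i], scores[i] = cand, f
best = structures[scores.index(max(scores))]  # Step 6: best structure -> fine-tune
print(best)
```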

# 2、Related Work

Network Pruning: weight pruning and channel pruning.

AutoML: automatic pruning

# 3、The Proposed ABCPruner

## 3.1 Notations

Given a CNN model $$\mathcal{N}$$ that contains $$L$$ convolutional layers and its filter set $$\mathbf{W}$$, we refer to $$C = (c_1, c_2, \ldots, c_L)$$ as the network structure of $$\mathcal{N}$$, where $$c_j$$ is the channel number of the $$j$$-th layer. Channel pruning aims to remove a portion of the filters in $$\mathbf{W}$$ while keeping a comparable or even better accuracy.

For any pruned model $$\mathcal{N}^{\prime}$$, we denote its structure as $$C^ { \prime } = \left( c _ { 1 } ^ { \prime } , c _ { 2 } ^ { \prime } , \ldots , c _ { L } ^ { \prime } \right)$$, where $$c _ { j } ^ { \prime } \leq c _ { j }$$ is the channel number of the pruned model in the $$j$$-th layer.

## 3.2 Optimal Pruned Structure

Given the training set $$\mathcal { T } _ { \text {train} }$$ and test set $$\mathcal { T } _ { \text {test} }$$, we aim to find the optimal combination of $$C ^ { \prime }$$, such that the pruned model $$\mathcal { N } ^ { \prime }$$ trained/fine-tuned on $$\mathcal { T } _ { \text {train} }$$ obtains the best accuracy. To that effect, we formulate our channel pruning problem as:

$\left( C ^ { \prime } \right) ^ { * } = \underset { C ^ { \prime } } { \arg \max } \operatorname { acc } \left( \mathcal { N } ^ { \prime } \left( C ^ { \prime } , \mathbf { W } ^ { \prime } ; \mathcal { T } _ { \text {train} } \right) ; \mathcal { T } _ { \text {test} } \right)$

$$\text{s.t.} \quad c _{ i } ^ { \prime } \in \left\{ 0.1 c _{ i } , 0.2 c _{ i } , \ldots , \alpha c _{ i } \right\}, \quad i = 1, 2, \ldots, L$$

where $$\mathbf { W } ^ { \prime }$$ is the weights of the pruned model trained/fine-tuned on $$\mathcal { T } _ { \text {train} }$$, and $$acc(\cdot)$$ denotes the accuracy on $$\mathcal { T } _ { \text {test} }$$ for $$\mathcal { N } ^ { \prime }$$ with structure $$C ^ { \prime }$$.

In particular, we initialize a set of $$n$$ pruned structures $$\left\{ C _ { j } ^ { \prime } \right\} _ { j = 1 } ^ { n }$$ with the $$i$$-th element $$c^{\prime}_{ji}$$ of $$C^{\prime}_j$$ randomly sampled from $$\left\{ 0.1 c _{ i } , 0.2 c _{ i } , \ldots , \alpha c _{ i } \right\}$$. Accordingly, we obtain a set of pruned models $$\left\{ \mathcal { N } _ { j } ^ { \prime } \right\} _ { j = 1 } ^ { n }$$ and a set of pruned weights $$\left\{ \mathbf { W } _ { j } ^ { \prime } \right\} _ { j = 1 } ^ { n }$$.

Each pruned structure $$C^{\prime}_j$$ represents a potential solution to the optimization problem.
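The initialization of the structure set can be sketched as follows; the channel counts and set size are illustrative, not from the paper:

```python
import random

def allowed_set(c, alpha=0.7):
    # Feasible preserved-channel counts for a layer with c channels:
    # {0.1c, 0.2c, ..., alpha*c}.
    return [int(c * i / 10) for i in range(1, int(10 * alpha) + 1)]

channels = [16, 32, 64]   # hypothetical full-model structure C
n = 4                     # size of the initial structure set

random.seed(1)
structure_set = [[random.choice(allowed_set(c)) for c in channels]
                 for _ in range(n)]

# Every sampled c'_i stays inside its layer's feasible set.
assert all(s[i] in allowed_set(channels[i])
           for s in structure_set for i in range(len(channels)))
print(structure_set)
```

Each inner list is one candidate $$C^{\prime}_j$$, i.e., one potential solution to the optimization problem above.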

ABC algorithm | [Artificial bee colony algorithm](https://en.wikipedia.org/wiki/Artificial_bee_colony_algorithm)

1、Principle

• Employed bees collect nectar; based on the food-source positions in their memory, they search for other food sources in the neighborhood.
• Employed bees share the positions of the food sources they find with onlooker bees, which choose the better food sources (and, after visiting a source, also search randomly in its neighborhood for new ones).
• If no food source above a quality threshold is found within a limited number of trials, the source is abandoned and the employed bee becomes a scout bee that searches randomly for a new food source.

2、Implementation

2.1 First, initialize the whole colony. The colony size is 2SN: employed bees and onlooker bees are equal in number, SN each, and the number of food sources also equals the number of employed bees, SN. Let $${\displaystyle X_{i}=\{x_{i,1},x_{i,2},\ldots ,x_{i,n}\}}$$ denote the $$i$$-th candidate solution, where $$n$$ is the dimension.

2.2 Each employed bee generates a new position $$V_i$$ from its memorized position $$X_{i}$$ by:

${\displaystyle v_{i,k}=x_{i,k}+\Phi _{i,k}\times (x_{i,k}-x_{j,k})}$

where $$\Phi _{i,k}$$ is a random number in $$[-1, 1]$$ and $$x_{j,k}$$ is the $$k$$-th component of a randomly chosen solution $$X_j$$ ($$j \neq i$$).

2.3 The two positions are then compared: based on the fitness values (a notion borrowed from genetic algorithms) of $$X_i$$ and $$V_i$$, the better one is kept (greedy selection).

2.4 Onlooker bees then choose among the food sources with a probability proportional to fitness, and the globally best food source found so far is recorded. The selection probability is: $${\displaystyle P_{i}={\frac {\mathrm {fit} _{i}}{\sum _{j}{\mathrm {fit} _{j}}}}}$$

2.5 If a food source does not improve within a given number of trials (the limit), it is abandoned and re-initialized as a new random food source.
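Steps 2.1-2.5 can be put together into a minimal, generic ABC search. This is a sketch on a toy continuous objective (minimizing the sphere function), not the paper's channel-pruning implementation; `sn`, `limit`, and `cycles` are illustrative, and fitness is taken as $$1/(1+f)$$ for non-negative $$f$$, a common choice for minimization:

```python
import random

def abc_minimize(f, dim, bounds, sn=10, limit=20, cycles=100):
    """Minimal artificial bee colony search minimizing f over [lo, hi]^dim."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(sn)]  # 2.1
    vals = [f(x) for x in X]
    trials = [0] * sn

    def mutate(i):
        # 2.2: v_{i,k} = x_{i,k} + Phi_{i,k} * (x_{i,k} - x_{j,k}), Phi in [-1, 1]
        j = random.choice([t for t in range(sn) if t != i])
        k = random.randrange(dim)
        v = X[i][:]
        v[k] = min(max(X[i][k] + random.uniform(-1, 1) * (X[i][k] - X[j][k]), lo), hi)
        return v

    def greedy(i, v):
        # 2.3: keep the better of X_i and the candidate V_i
        fv = f(v)
        if fv < vals[i]:
            X[i], vals[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(cycles):
        for i in range(sn):                       # employed-bee phase
            greedy(i, mutate(i))
        fits = [1.0 / (1.0 + v) for v in vals]
        total = sum(fits)
        for _ in range(sn):                       # 2.4: onlookers pick source i
            r, acc = random.uniform(0, total), 0.0  # with P_i = fit_i / sum fit_j
            for i, ft in enumerate(fits):
                acc += ft
                if acc >= r:
                    break
            greedy(i, mutate(i))
        for i in range(sn):                       # 2.5: abandon stale sources
            if trials[i] > limit:
                X[i] = [random.uniform(lo, hi) for _ in range(dim)]
                vals[i], trials[i] = f(X[i]), 0

    best = min(range(sn), key=lambda i: vals[i])
    return X[best], vals[best]

random.seed(0)
x, v = abc_minimize(lambda x: sum(t * t for t in x), dim=3, bounds=(-5, 5))
print(v)  # close to 0
```

In ABCPruner, the role of $$X_i$$ is played by a pruned structure $$C^{\prime}_i$$ and the objective is the accuracy after brief training, maximized rather than minimized.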

# Contributions

1. Introduces the ABC algorithm into channel pruning.
