# QuantEA

## 1. Algorithm Introduction

Quantization based on Evolutionary Algorithm (QuantEA) is an automatic mixed-bit quantization algorithm. It uses an evolutionary strategy to search for the quantization bit width of each layer in a CNN. Taking the automatic quantization and compression of ResNet-20 as an example, the search space consists of the quantization bit width of each layer's convolution weights and the quantization bit width of its activations (for example, 2-bit/4-bit/8-bit). A population P of N individuals is maintained, where each individual corresponds to a compressed network model. A population P' of the same size N is generated through crossover and mutation. Each compressed model is trained and validated, and user-specified metrics on the validation set, such as accuracy, FLOPs, and parameter count, serve as the optimization objectives used to rank and select individuals and update the population.

## 2. Methodology

### 2.1 Search Space

The search space is constructed from the quantization bit width of the weights and the quantization bit width of the activations of each layer of the neural network (for example, 2-bit/4-bit/8-bit). Using ResNet-20 as an example, the first and last layers are not quantized, and the search covers the weight/activation bit widths of the 18 middle layers. With the candidate set of each layer being [2-bit, 4-bit, 8-bit], the total search space size is $3^{18+18} = 3^{36} \approx 1.5\times 10^{17}$.
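For illustration only (this is not Vega's actual codec), an individual can be encoded as one bit-width gene per searchable quantity, which also makes the size of the space easy to verify:

```python
import random

BIT_CANDIDATES = [2, 4, 8]   # per-layer choices for weights and activations
NUM_QUANT_LAYERS = 18        # middle layers of ResNet-20 (first/last excluded)

# One gene per weight bit width plus one per activation bit width.
GENOME_LENGTH = 2 * NUM_QUANT_LAYERS


def random_individual():
    """Sample a random compression code, e.g. [8, 2, 4, ...] of length 36."""
    return [random.choice(BIT_CANDIDATES) for _ in range(GENOME_LENGTH)]


# Total number of distinct codes: 3**36 ≈ 1.5e17, as stated above.
search_space_size = len(BIT_CANDIDATES) ** GENOME_LENGTH
print(f"{search_space_size:.2e}")   # ~1.50e+17
```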

### 2.2 Search Algorithm

The Pareto front is obtained using the NSGA-III multi-objective evolutionary algorithm:

1. Search process: generate the codes of N new compressed models from the population P through evolutionary operations such as crossover and mutation.
2. Evaluation process:
   1. Build the N compressed models from the codes produced by the evolutionary operations.
   2. Run the evaluation to compute all user-defined metrics, such as accuracy, FLOPs, and parameter count, on the validation set.
3. Optimization process: invoke the evolutionary algorithm to update the population P.

Repeat the search, evaluation, and optimization processes to complete the evolutionary bit-width search and obtain the Pareto front. After the search finishes, the models on the Pareto front are trained to obtain their final performance. For details about the NSGA-III algorithm, see the original paper [1]; a simplified sketch of the loop follows.

[1] Deb, Kalyanmoy, and Himanshu Jain. "An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints." *IEEE Transactions on Evolutionary Computation* 18.4 (2014): 577-601.
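To make the cycle concrete, here is a simplified sketch of the search/evaluate/optimize loop, reusing `BIT_CANDIDATES` and `random_individual` from the earlier snippet. The `evaluate` function is a stand-in (Vega trains and validates each compressed model to obtain real metrics), and survivor selection uses a plain domination count rather than NSGA-III's reference-point niching:

```python
import random

def crossover(a, b):
    """One-point crossover of two bit-width codes."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(code, rate=0.1):
    """Resample each gene with probability `rate`."""
    return [random.choice(BIT_CANDIDATES) if random.random() < rate else g
            for g in code]

def evaluate(code):
    """Stand-in for building, training, and validating the compressed model.

    Returns objectives to minimize: (negated accuracy, FLOPs proxy)."""
    accuracy = random.random()   # placeholder for validation accuracy
    flops = sum(code)            # placeholder cost proxy
    return (-accuracy, flops)

def dominates(f, g):
    """True if f is no worse than g in every objective and better in one."""
    return all(x <= y for x, y in zip(f, g)) and any(x < y for x, y in zip(f, g))

N, GENERATIONS = 16, 50
P = [random_individual() for _ in range(N)]
for _ in range(GENERATIONS):
    # 1. Search: produce the offspring population P' via crossover and mutation.
    offspring = [mutate(crossover(*random.sample(P, 2))) for _ in range(N)]
    # 2. Evaluation: score parents and offspring on all objectives.
    pool = P + offspring
    scores = [evaluate(c) for c in pool]
    # 3. Optimization: keep the N least-dominated individuals (NSGA-III
    #    additionally spreads survivors with reference points; omitted here).
    rank = lambda i: sum(dominates(scores[j], scores[i]) for j in range(len(pool)))
    P = [pool[i] for i in sorted(range(len(pool)), key=rank)[:N]]

# Pareto front of the final population: individuals dominated by no one.
scores = [evaluate(c) for c in P]   # re-scored here only because evaluate is a stub
front = [P[i] for i in range(N)
         if not any(dominates(scores[j], scores[i]) for j in range(N) if j != i)]
```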

### 2.3 Advantages

1. The FP32 model can be quantized to low bit widths, reducing computation and storage overhead.
2. The evolutionary algorithm searches for a quantization bit width per layer. Compared with fixed-bit-width quantization such as 8-bit (baseline-w8a8) or 4-bit (baseline-w4a4), the searched models achieve both lower computational cost and higher classification accuracy.
3. The NSGA-III algorithm finds the whole Pareto front in a single run, producing multiple optimal models under different constraints at once.

## 3. User Guide

### 3.1 Search Space Configuration

The quantization bit widths of weights and activations are configured through `bit_candidates` in examples/compression/quant_ea/quant_ea.yml; for example, `[4, 8]` means the search space for each layer is 4-bit/8-bit.

The current example uses the ResNet series as the base network. To apply QuantEA to another network, refer to vega/networks/quant.py and replace the `nn.Conv2d` layers in your network with the quantized convolution layer `QuantConv`.
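A hedged sketch of such a replacement is shown below; it assumes `QuantConv` is importable from vega/networks/quant.py and accepts the same constructor arguments as `nn.Conv2d` (check that file for the actual signature):

```python
import torch.nn as nn

# Hypothetical import path inferred from the file mentioned above;
# confirm against your Vega checkout.
from vega.networks.quant import QuantConv


def quantize_convs(module: nn.Module) -> nn.Module:
    """Recursively replace every nn.Conv2d with QuantConv, in place.

    Assumes QuantConv mirrors nn.Conv2d's constructor arguments;
    adapt the call if the actual signature differs.
    """
    for name, child in module.named_children():
        if isinstance(child, nn.Conv2d):
            setattr(module, name, QuantConv(
                child.in_channels, child.out_channels, child.kernel_size,
                stride=child.stride, padding=child.padding,
                dilation=child.dilation, groups=child.groups,
                bias=child.bias is not None))
        else:
            quantize_convs(child)   # descend into nested submodules
    return module
```

Note that, per Section 2.1, the first and last layers of ResNet-20 are left unquantized, so a real replacement pass would skip them.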

### 3.2 Dataset Configuration

QuantEA can run on either the standard CIFAR-10 dataset or a custom dataset. For details, see the development manual.

The CIFAR-10 dataset is configured as follows.
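The excerpt below is illustrative only; the authoritative keys and values are those in the `dataset` section of quant_ea.yml:

```yaml
dataset:
    type: Cifar10
    common:
        data_path: /cache/datasets/cifar10/   # adjust to your local dataset path
```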

### 3.3 Running Configuration

Configure the parameters for searching and training the quantized models; these correspond to the `nas` and `fully_train` sections of the examples/compression/quant_ea/quant_ea.yml configuration file, as outlined below.
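In outline, the two sections relate as follows (section names are from the text above; the contents of each section are defined in quant_ea.yml):

```yaml
pipeline: [nas, fully_train]

nas:            # search phase: evolve the per-layer quantization bit widths
    # search_algorithm, search_space, dataset, and trainer settings go here

fully_train:    # training phase: fully train the models on the Pareto front
    # trainer and dataset settings for the final training go here
```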

Run the following command in the examples directory:

```bash
vega ./compression/quant_ea/quant_ea.yml
```

The two phases (`nas` and `fully_train`) are executed in sequence: the Pareto front is found during the search phase, and the models on the front are then trained to obtain their final performance.

## 4. Algorithm Output

The following two files are generated in `./tasks/<task id>/output/nas/`.