YOLOv2: Weight quantization per-layer vs per-channel  #82

@kojiwoow

Description

Has anyone tested per-channel quantization on YOLOv2?

The report http://cs231n.stanford.edu/reports/2017/pdfs/808.pdf shows a drop of more than 15% mAP on the YOLOv2 PASCAL VOC 2007 test set when per-layer INT8 quantization is applied to all convolutional layers.

The drop was significant, so we decided to try per-channel quantization for the weights, as described in https://arxiv.org/pdf/1806.08342.pdf#page=30&zoom=100,0,621

For the experiment, we computed and stored the scale factor (l->input_quant_multipler) separately for each channel, and then de-quantized the output activation values using those per-channel scales.
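To make the scheme concrete, here is a minimal NumPy sketch of per-channel symmetric weight quantization and the matching output de-quantization. Function and variable names are illustrative only, not the actual darknet code:

```python
import numpy as np

def quantize_weights_per_channel(w):
    """Symmetric INT8 quantization of conv weights, one scale per output channel.

    w: float32 array of shape (out_channels, in_channels, kh, kw).
    Returns (int8 weights, per-channel scale factors).
    """
    # max-abs over each output channel -> one scale factor per channel
    max_abs = np.abs(w).reshape(w.shape[0], -1).max(axis=1)
    scale = max_abs / 127.0                 # float value of one int step
    scale = np.where(scale == 0, 1.0, scale)  # guard all-zero channels
    w_q = np.clip(np.round(w / scale[:, None, None, None]), -127, 127)
    return w_q.astype(np.int8), scale

def dequantize_outputs(acc_int32, w_scale, x_scale):
    """De-quantize integer conv accumulators back to float.

    acc_int32: (out_channels, H, W) int32 conv results.
    w_scale:   per-channel weight scales; x_scale: scalar input scale.
    """
    return acc_int32.astype(np.float32) * (w_scale[:, None, None] * x_scale)
```

Because the combined factor `w_scale * x_scale` differs per output channel, each channel of the accumulator must be rescaled with its own multiplier; per-layer quantization is the special case where `w_scale` is a single scalar.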

However, compared to per-layer quantization, the improvement was only 0.1% mAP.

I wonder whether our experiment was flawed, or whether others have seen similar results, since the second paper I posted shows a clear gain for per-channel over per-layer quantization on classification networks (ResNet, MobileNet).

The test environment:

  • Symmetric quantization
  • Scale factor of weights: absolute maximum
  • Scale factor of input activation: KL-divergence

Thank you.
