Abstract
Knowledge distillation compacts deep networks by letting a small student network learn from a large teacher network. The accuracy of knowledge distillation has recently benefited from adding residual layers. We propose to reduce the size of the student network even further by recasting multiple residual layers of the teacher network into a single recurrent student layer. We introduce three variants of adding recurrent connections to the student network, and show experimentally on CIFAR-10, Scenes, and MiniPlaces that we can reduce the number of parameters with little loss in accuracy.
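The core idea of recasting a stack of residual layers into one recurrent layer can be sketched in a few lines. The following is a minimal PyTorch illustration, not the authors' implementation: the class names, the channel width, and the number of unrolling steps are hypothetical, and the paper's three recurrent variants are not reproduced here. It shows only the basic mechanism, applying one weight-tied residual block repeatedly in place of several distinct blocks, which is where the parameter reduction comes from.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A basic residual block: y = ReLU(x + F(x))."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.body(x))

class RecurrentStudentLayer(nn.Module):
    """One weight-tied residual block, unrolled `steps` times so it
    stands in for `steps` distinct residual blocks of the teacher."""
    def __init__(self, channels, steps):
        super().__init__()
        self.block = ResidualBlock(channels)  # a single set of weights
        self.steps = steps

    def forward(self, x):
        for _ in range(self.steps):
            x = self.block(x)  # same weights reused at every step
        return x

# Teacher stage: 4 separate residual blocks (hypothetical depth/width).
teacher_stage = nn.Sequential(*[ResidualBlock(64) for _ in range(4)])
# Student stage: 1 recurrent block unrolled 4 times.
student_stage = RecurrentStudentLayer(64, steps=4)

x = torch.randn(2, 64, 32, 32)
assert teacher_stage(x).shape == student_stage(x).shape

# The recurrent student stage has roughly 4x fewer parameters.
n_teacher = sum(p.numel() for p in teacher_stage.parameters())
n_student = sum(p.numel() for p in student_stage.parameters())
print(f"teacher stage: {n_teacher} params, student stage: {n_student} params")
```

Because the recurrent layer keeps the same input and output shape as the residual stack it replaces, it can be dropped into the student network while the distillation loss against the teacher stays unchanged.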
Original language | English |
---|---|
Title of host publication | 2018 25th IEEE International Conference on Image Processing (ICIP) |
Subtitle of host publication | Proceedings |
Place of Publication | Piscataway |
Publisher | IEEE |
Pages | 3393-3397 |
Number of pages | 5 |
ISBN (Electronic) | 978-1-4799-7061-2 |
ISBN (Print) | 978-1-4799-7062-9 |
DOIs | |
Publication status | Published - 2018 |
Event | 25th IEEE International Conference on Image Processing, Athens, Greece, 7 Oct 2018 → 10 Oct 2018 (conference number: 25) |
Conference
Conference | 25th IEEE International Conference on Image Processing |
---|---|
Abbreviated title | ICIP 2018 |
Country/Territory | Greece |
City | Athens |
Period | 7 Oct 2018 → 10 Oct 2018 |
Keywords
- Knowledge distillation
- compacting deep representations for image classification
- recurrent layers