Journal article
Literature Review of Deep Network Compression
Informatics, Volume: 8, Issue: 4, Start page: 77
Swansea University Authors: Xianghua Xie, Mark Jones
PDF | Version of Record
Copyright: © 2021 by the authors. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) Licence.
DOI (Published version): 10.3390/informatics8040077
| Published in: | Informatics |
|---|---|
| ISSN: | 2227-9709 |
| Published: | MDPI AG, 2021 |
| Online Access: | Check full text |
| URI: | https://cronfa.swan.ac.uk/Record/cronfa58687 |
| Abstract: | Deep networks often possess a vast number of parameters, and their significant redundancy in parameterization has become a widely recognized property. This redundancy presents major challenges and restricts many deep learning applications, motivating efforts to reduce model complexity while maintaining strong performance. In this paper, we present an overview of popular methods and review recent works on compressing and accelerating deep neural networks. We consider not only pruning methods but also quantization and low-rank factorization methods. This review also clarifies these major concepts and highlights their characteristics, advantages, and shortcomings. |
| Keywords: | deep learning; neural network pruning; model compression |
| College: | Faculty of Science and Engineering |
| Funders: | This work was supported by the Deanship of Scientific Research, King Khalid University, Kingdom of Saudi Arabia, under research grant number RGP1/207/42. |
| Issue: | 4 |
| Start Page: | 77 |
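
The abstract groups compression techniques into pruning, quantization, and low-rank factorization. As a quick illustration of the first of these, the sketch below applies unstructured magnitude pruning to a single weight matrix; the function name, the NumPy implementation, and the chosen sparsity level are illustrative assumptions, not code from the paper.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the given fraction of weights with the smallest magnitudes."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    # Threshold at the k-th smallest absolute value; weights at or below it
    # are set to zero (ties may prune slightly more than k entries).
    threshold = np.partition(np.abs(weights), k - 1, axis=None)[k - 1]
    return weights * (np.abs(weights) > threshold)

# Example: prune 90% of a random 256x256 weight matrix.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))
W_pruned = magnitude_prune(W, sparsity=0.9)
print(f"non-zero fraction: {np.count_nonzero(W_pruned) / W.size:.3f}")
```

Unstructured pruning like this mainly reduces storage; structured variants that remove whole filters or channels translate more directly into speedups on standard hardware.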