Gradient and Magnitude Based Pruning for Sparse Deep Neural Networks

Kaleab B Belay

[AAAI-22] Undergraduate Consortium
Abstract: Deep Neural Networks have memory and computational demands that often make them difficult to deploy in low-resource environments. Moreover, highly dense networks are over-parameterized and thus prone to overfitting. To address these problems, we introduce a novel algorithm that prunes (sparsifies) weights from the network based on both their magnitudes and their gradients computed on a validation dataset. Unlike existing pruning methods, our method does not require the network to be retrained once initial training is complete. On the CIFAR-10 dataset, our method reduced the number of parameters of MobileNet by a factor of 9X, from 14 million to 1.5 million, with only a 3.8% drop in accuracy.
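As an illustration only, the sketch below shows one way a combined magnitude-and-gradient pruning criterion might be applied in PyTorch. The function name, the scoring rule (|w| · |∂L/∂w| on a validation batch), and the global sparsity threshold are assumptions for the example, not the paper's exact algorithm.

```python
import torch
import torch.nn as nn

def prune_by_magnitude_and_gradient(model, val_inputs, val_targets,
                                    loss_fn, sparsity=0.9):
    """Zero out weights with the lowest |weight| * |gradient| scores.

    NOTE: the scoring rule and sparsity level here are illustrative
    assumptions; the paper's scoring rule may differ.
    """
    # Gradients are taken against a validation batch, as in the abstract.
    model.zero_grad()
    loss = loss_fn(model(val_inputs), val_targets)
    loss.backward()

    # Score every weight tensor and find a single global threshold.
    scores = []
    for p in model.parameters():
        if p.dim() > 1 and p.grad is not None:  # skip biases / norm params
            scores.append((p.abs() * p.grad.abs()).flatten())
    all_scores = torch.cat(scores)
    k = max(1, int(sparsity * all_scores.numel()))
    threshold = all_scores.kthvalue(k).values

    # Apply binary masks in place; pruned weights are set to zero.
    with torch.no_grad():
        for p in model.parameters():
            if p.dim() > 1 and p.grad is not None:
                mask = (p.abs() * p.grad.abs()) > threshold
                p.mul_(mask)
    return model
```

In this sketch, weights that are both small in magnitude and have small validation gradients are removed, since they contribute little to the output and to the validation loss; the masking is applied once, consistent with the claim that no retraining is required after pruning.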

Sessions where this paper appears

  • Poster Session 3

    Fri, February 25 8:45 AM - 10:30 AM (+00:00)
    Blue 4

  • Poster Session 6

    Sat, February 26 8:45 AM - 10:30 AM (+00:00)
    Blue 4