Neural Networks Classify through the Class-Wise Means of Their Representations
Mohamed El Amine Seddik, Mohamed Tamaazousti
[AAAI-22] Main Track
Abstract:
In this paper, based on an asymptotic analysis of the Softmax layer, we show that when training neural networks for classification tasks, the weight vectors corresponding to each class of the Softmax layer tend to converge to the class-wise means computed at the representation layer (for specific choices of the representation activation). We further show some consequences of our findings in the context of transfer learning, essentially by proposing a simple yet effective initialization procedure that significantly accelerates the learning of the Softmax layer weights as the target domain gets closer to the source one. Experiments are notably performed on the MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100 datasets using a standard CNN architecture.
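The initialization procedure described in the abstract amounts to setting each Softmax weight vector to the class-wise mean of the representations computed on the target data. Below is a minimal PyTorch sketch of that idea; the function name, the `feature_extractor`/`classifier` split, and the zeroed bias are illustrative assumptions, not the authors' exact procedure.

```python
import torch

@torch.no_grad()
def init_softmax_from_class_means(feature_extractor, classifier, loader, num_classes):
    """Initialize a final linear (Softmax) layer with class-wise feature means.

    `classifier` is assumed to be an nn.Linear mapping representations to logits;
    `loader` yields (inputs, integer labels) from the target training set.
    """
    device = classifier.weight.device
    dim = classifier.weight.shape[1]
    sums = torch.zeros(num_classes, dim, device=device)
    counts = torch.zeros(num_classes, 1, device=device)
    for x, y in loader:
        feats = feature_extractor(x.to(device))  # representation-layer outputs
        y = y.to(device)
        sums.index_add_(0, y, feats)
        counts.index_add_(0, y, torch.ones(len(y), 1, device=device))
    # Set each class's weight vector to its class-wise mean representation.
    classifier.weight.copy_(sums / counts.clamp(min=1))
    classifier.bias.zero_()
```

In a transfer-learning setting, one would run this once over the target training loader before fine-tuning, which is where the abstract reports the largest acceleration when the target domain is close to the source.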
Sessions where this paper appears
- Poster Session 2: Fri, February 25, 12:45 AM - 2:30 AM (+00:00), Blue 2
- Poster Session 9: Sun, February 27, 8:45 AM - 10:30 AM (+00:00), Blue 2