Studying the Evolution of Neural Activation Patterns During Training of Feed-Forward ReLU Networks

Hartmann, David and Franzen, Daniel and Brodehl, Sebastian (2021) Studying the Evolution of Neural Activation Patterns During Training of Feed-Forward ReLU Networks. Frontiers in Artificial Intelligence, 4. ISSN 2624-8212

pubmed-zip/versions/1/package-entries/frai-04-642374/frai-04-642374.pdf - Published Version

Download (2MB)

Abstract

The ability of deep neural networks to form powerful emergent representations of complex statistical patterns in data is as remarkable as it is imperfectly understood. For deep ReLU networks, these representations are encoded in the mixed discrete–continuous structure of linear weight matrices and non-linear binary activations. Our article develops a new technique for instrumenting such networks to efficiently record activation statistics, such as information content (entropy) and similarity of patterns, in real-world training runs. We then study the evolution of activation patterns during training for networks of different architectures using different training and initialization strategies. As a result, we see general as well as architecture-related behavioral patterns: in particular, most architectures form structure bottom-up, with the exception of highly tuned state-of-the-art architectures and methods (PyramidNet and FixUp), whose layers appear to converge more simultaneously. We also observe intermediate dips in entropy in conventional CNNs that are not visible in residual networks. A reference implementation is provided under a free license.
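The entropy statistic mentioned in the abstract can be illustrated with a minimal sketch: treat each sample's ReLU sign bits as a binary activation pattern and compute the Shannon entropy of the empirical pattern distribution over a batch. This is a hypothetical NumPy illustration of the idea, not the paper's reference implementation, and the function name `activation_pattern_entropy` is our own.

```python
import numpy as np
from collections import Counter

def activation_pattern_entropy(pre_activations):
    """Shannon entropy (in bits) of the empirical distribution of binary
    ReLU activation patterns over a batch.

    pre_activations: array of shape (n_samples, n_units) holding a layer's
    inputs to ReLU; a sample's pattern is its tuple of sign bits
    (unit active vs. inactive).
    """
    patterns = np.asarray(pre_activations) > 0          # binarize
    counts = Counter(map(tuple, patterns))              # pattern frequencies
    n = patterns.shape[0]
    probs = np.array([c / n for c in counts.values()])  # empirical distribution
    return float(-(probs * np.log2(probs)).sum())

# Toy example: a 2-unit layer evaluated on 4 samples.
x = np.array([[ 1.0, -0.5],
              [ 2.0, -0.1],
              [-1.0,  0.3],
              [-0.2,  0.7]])
# Two distinct patterns, each with probability 1/2 -> entropy of 1 bit.
print(activation_pattern_entropy(x))  # 1.0
```

In a real training run, such a statistic would be recorded per layer at regular intervals (e.g. via framework hooks on the pre-activation tensors) so its evolution can be tracked over epochs.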

Item Type: Article
Subjects: Eurolib Press > Multidisciplinary
Depositing User: Managing Editor
Date Deposited: 27 Mar 2023 05:36
Last Modified: 17 May 2024 09:27
URI: http://info.submit4journal.com/id/eprint/841
