Learning Credit Assignment

Poster Description

This poster presents a mean-field theoretical framework for understanding hierarchical credit assignment in deep learning. While modern deep networks achieve remarkable predictive power, their nested nonlinear structure obscures how learning coordinates millions of parameters to reach a decision.

To address this, we propose a mean-field learning model in which an ensemble of sub-networks, rather than a single network, is trained for classification. The analysis reveals that synaptic connections naturally divide into three functional categories (a toy sketch follows the list below):

  1. Essential connections critical for decision making,
  2. Irrelevant connections that can be pruned without performance loss, and
  3. Variable connections that capture nuisance factors and encode task variability.
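As a rough illustration of this picture (not the poster's actual formulation), the sketch below gives each weight of a linear classifier its own Gaussian mean and standard deviation; sampling weights at every training step plays the role of drawing a sub-network from the ensemble, and the fitted per-connection statistics are then used to sort connections into the three categories. The data, architecture, and thresholds are assumptions made purely for illustration.

```python
# Minimal sketch of the ensemble-of-sub-networks idea (illustrative only).
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Synthetic binary classification data: only the first 5 inputs matter.
n, d = 512, 20
X = torch.randn(n, d)
w_true = torch.zeros(d)
w_true[:5] = 2.0
y = (X @ w_true + 0.1 * torch.randn(n) > 0).float()

# Each connection carries its own Gaussian (mean, log-std) parameters.
mu = torch.zeros(d, requires_grad=True)
log_sigma = torch.full((d,), -1.0, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma], lr=0.05)

for step in range(500):
    # Reparameterization: each step samples one sub-network from the weight distribution.
    w = mu + torch.exp(log_sigma) * torch.randn(d)
    loss = F.binary_cross_entropy_with_logits(X @ w, y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sort connections by their fitted statistics (thresholds are arbitrary assumptions).
m = mu.detach()
sigma = torch.exp(log_sigma).detach()
essential  = (m.abs() > 2 * sigma) & (m.abs() > 0.5)   # confidently nonzero mean
irrelevant = (m.abs() < 0.1) & (sigma < 0.3)           # near zero with little spread
variable   = ~(essential | irrelevant)                 # broad distribution: task variability
print("essential:", essential.nonzero().flatten().tolist())
print("irrelevant:", irrelevant.nonzero().flatten().tolist())
print("variable:", variable.nonzero().flatten().tolist())
```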

This framework provides a statistical-physics perspective on the emergence of structure and redundancy in deep learning, offering insight into how complex networks perform distributed credit assignment while still generalizing well.