The Dan MacKinlay stable of variably-well-consider’d enterprises

Figure 1

I want a theory that predicts which features deep nets learn, when they learn them, and why. But neural nets are messy and hard to analyse, so we need some way of simplifying them that still recovers the properties we care about. Deep linear networks (DLNs) are one attempt at that: models that keep depth, nonconvexity, and hierarchical representation formation while remaining analytically tractable. In principle, they let me connect data geometry (singular ...
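To make the "which features, when" claim concrete, here is a minimal sketch of the classic deep linear network result (Saxe et al.'s analysis): train a two-layer linear student on a teacher map, initialized small and aligned with the teacher's singular vectors so the singular modes evolve independently. Each mode is then learned on a timescale roughly inversely proportional to its singular value, so the strongest directions of the data geometry are learned first. All the names here (`W_star`, `half_time`, the specific sizes and learning rate) are my own illustrative choices, not anything from this post.

```python
import numpy as np

rng = np.random.default_rng(0)

# Teacher map whose SVD encodes the "data geometry": well-separated
# singular values, so the learning order is easy to see.
U, _ = np.linalg.qr(rng.standard_normal((8, 8)))
V, _ = np.linalg.qr(rng.standard_normal((8, 8)))
s_true = np.array([4.0, 2.0, 1.0, 0.5, 0.25, 0.1, 0.05, 0.01])
W_star = U @ np.diag(s_true) @ V.T

# Two-layer linear student W2 @ W1, small init aligned with the teacher's
# singular vectors; with this "decoupled" init the modes evolve independently.
eps = 0.05
W1 = eps * V.T
W2 = eps * U
lr = 0.05
half_time = {}  # step at which each mode first reaches half its target value

for step in range(4000):
    E = W_star - W2 @ W1          # residual of the end-to-end map
    dW1 = W2.T @ E                # gradients of 0.5 * ||E||_F^2
    dW2 = E @ W1.T
    W1 += lr * dW1
    W2 += lr * dW2
    # project the learned map onto the teacher's singular modes
    s_learned = np.diag(U.T @ (W2 @ W1) @ V)
    for i, s in enumerate(s_true):
        if i not in half_time and s_learned[i] >= 0.5 * s:
            half_time[i] = step

times = [half_time[i] for i in range(8)]
print(times)  # half-learning times grow as singular values shrink
```

The sigmoidal, mode-by-mode trajectories this produces are the kind of prediction ("which features, when") that motivates studying DLNs at all.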
I’m not sure what this truly means, or that anyone is, but I think it wants to mean something like: quantifying which architectures make it “easier” to learn about the phenomena of interest. This is a practical engineering discipline in NNs, but maybe also interesting to think about in humans.