The Dan MacKinlay stable of variably-well-consider’d enterprises

I want a theory that predicts which features deep nets learn, when they learn them, and why. But neural nets are messy and hard to analyse, so we need some way of simplifying them that still recovers the properties we care about. Deep linear networks (DLNs) are one attempt at that: models that keep depth, nonconvexity, and hierarchical representation formation while remaining analytically tractable. In principle, they let me connect data geometry (singular ...
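The tractability claim can be made concrete with a toy sketch. This is my own minimal setup, not from the post, assuming the classic whitened-input setting from the DLN literature (e.g. Saxe et al.), in which squared-loss regression reduces to gradient descent on the end-to-end product map; the prediction is that singular modes of the target are learned roughly in order of singular value.

```python
import numpy as np

# Toy deep linear network (my own illustrative setup): with whitened inputs,
# training y = W2 @ W1 @ x on squared loss is gradient descent on
# 0.5 * ||W2 @ W1 - W_star||_F^2. DLN analyses predict the singular modes
# of W_star are picked up roughly in order of singular value.
rng = np.random.default_rng(0)

U, _ = np.linalg.qr(rng.standard_normal((4, 4)))
V, _ = np.linalg.qr(rng.standard_normal((4, 4)))
s_target = np.array([3.0, 1.0, 0.3, 0.1])    # well-separated singular values
W_star = U @ np.diag(s_target) @ V.T         # target linear map

W1 = 0.01 * rng.standard_normal((4, 4))      # small init: start near the saddle at 0
W2 = 0.01 * rng.standard_normal((4, 4))

lr = 0.05
for step in range(8000):
    E = W2 @ W1 - W_star                     # error in the end-to-end map
    # simultaneous gradient step on both layers
    W2, W1 = W2 - lr * (E @ W1.T), W1 - lr * (W2.T @ E)
    if step == 200:                          # snapshot early in training
        s_early = np.linalg.svd(W2 @ W1, compute_uv=False)

s_final = np.linalg.svd(W2 @ W1, compute_uv=False)
print("early:", np.round(s_early, 3))  # strong modes well underway, weakest still near 0
print("final:", np.round(s_final, 3))  # all modes approach (3.0, 1.0, 0.3, 0.1)
```

The staggered growth of the singular values is the kind of "when which feature is learned" prediction the post is after, here read straight off the SVD of the product map.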
I’m not sure what this truly means, or that anyone is, but I think it wants to mean something like quantifying architectures that make it “easier” to learn about the phenomena of interest. This is a practical engineering discipline in NNs, but maybe also interesting to think about in humans.
Does quantum computing offer speedups to machine learning? When?

Figure 1: Instrumenting Schrödinger’s cat.

1 Incoming

Learning of neural networks with quantum computers & learning of quantum states with graphical models (video)