Figure 1

We can build automata from neural nets. And they seem to do weird things, like learning languages in a predictable way, which is wildly at odds with our traditional understanding of the difficulty of that task (paging Doctor Chomsky). How can we analyse NNs in terms of computational complexity? What are the useful results in this domain? A minimal sketch of the automaton–NN correspondence follows below.

Related: grammatical inference, memory machines, overparameterization, NN compression, learning automata, NN at scale, explainability…

1 Computation...
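To make the opening claim concrete, here is a minimal sketch (my illustration, not from the original note) of the easy direction of the correspondence: a recurrent cell hand-wired to simulate the two-state parity DFA over binary strings. The hidden state tracks whether the number of 1s seen so far is odd; all names and weights here are assumptions chosen for the example.

```python
# Minimal sketch (illustrative, not from the note): a hand-weighted
# recurrent cell simulating the two-state parity DFA over {0, 1}*.
# The hidden state h in {0, 1} encodes "odd number of 1s so far";
# the update h' = XOR(h, x) is realised by a tiny threshold network.
import numpy as np

def step(h: int, x: int) -> int:
    """One recurrent step computing XOR(h, x) with fixed weights and hard thresholds."""
    # Hidden layer: two threshold units detecting (h OR x) and (h AND x).
    z = np.heaviside(np.array([h + x - 0.5, h + x - 1.5]), 0.0)
    # Output unit: OR minus AND yields XOR.
    return int(np.heaviside(z[0] - z[1] - 0.5, 0.0))

def accepts(bits: str) -> bool:
    """Run the RNN-as-DFA; accept strings containing an odd number of 1s."""
    h = 0
    for b in bits:
        h = step(h, int(b))
    return h == 1

assert accepts("10110")      # three 1s: odd, accepted
assert not accepts("1001")   # two 1s: even, rejected
```

The point of the sketch is only that fixed-weight recurrent updates can exactly implement a DFA transition function; the harder and more interesting direction, extracting an automaton from a trained net, is what the questions above are about.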