I’m not sure what this truly means, or that anyone does, but I think it wants to mean something like quantifying which architectures make it “easier” to learn about the phenomena of interest. This is a practical engineering discipline in NNs, but maybe also interesting to think about in humans.
I just ran into this area while trying to invent something similar myself, only to find I’m years too late. It’s an interesting style of analysis suited to relaxed or approximate causal modelling of interventions. It seems to formalise coarse-graining for causal models. We suspect that the notorious causal inference in LLMs might be built out of such things, or understood in terms of them. On causality in hierarchical systems, A. Geiger, Ibeling, et al. (2024) seems to summarise the state of the art.
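As a toy rendering of the coarse-graining idea (entirely my own sketch, not Geiger et al.’s notation or formalism): a low-level model whose two causes sum into an effect can be abstracted into a high-level model with a single aggregate cause, and the abstraction is consistent when intervening-then-abstracting agrees with abstracting-then-intervening.

```python
# Toy sketch of causal abstraction as coarse-graining (my construction).
# Low-level model: X1, X2 -> Y = X1 + X2.
# High-level model: S -> Y = S. The abstraction map tau sums X1 and X2 into S.

def low_level(x1, x2, intervene=None):
    """Run the low-level causal model, optionally intervening on (X1, X2)."""
    if intervene is not None:
        x1, x2 = intervene
    return {"X1": x1, "X2": x2, "Y": x1 + x2}

def high_level(s, intervene=None):
    """Run the high-level (coarse-grained) causal model."""
    if intervene is not None:
        s = intervene
    return {"S": s, "Y": s}

def tau(low_setting):
    """Abstraction map: collapse (X1, X2) into the single variable S."""
    return low_setting["X1"] + low_setting["X2"]

# Commutation check: intervening at the low level and then abstracting
# should agree with abstracting first and intervening at the high level.
low_out = low_level(1, 2, intervene=(5, 7))                  # do(X1=5, X2=7)
high_out = high_level(0, intervene=tau(low_level(5, 7)))     # do(S=12)
assert low_out["Y"] == high_out["Y"] == 12
```

The point of the assertion is the commuting-diagram condition: interventions on the fine-grained variables map cleanly onto interventions on the coarse-grained one.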
Wherein the internal structure of foundation models is examined, and it is observed that embeddings from different models are mappable onto one another by structure alone, and that they align linearly with human neural activity.
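A minimal sketch of what “mappable by structure alone” could look like, assuming synthetic embeddings rather than any real foundation model (my own illustration, not the post’s experiment): linear CKA quantifies how much representational structure two embedding spaces share without any learned map, and an ordinary least-squares fit then recovers an explicit linear alignment between them.

```python
# Toy illustration: two "models" embed the same items differently; linear CKA
# (Kornblith et al. 2019) measures shared representational structure, and a
# plain least-squares fit recovers a linear map B -> A (paired rows assumed).
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth latent structure shared by both models: n items, d factors.
latent = rng.normal(size=(100, 16))
emb_a = latent @ rng.normal(size=(16, 64))   # model A: 64-dim embeddings
emb_b = latent @ rng.normal(size=(16, 32))   # model B: 32-dim embeddings

def linear_cka(x, y):
    """Linear CKA: similarity of representational structure, no map learned."""
    x = x - x.mean(axis=0)
    y = y - y.mean(axis=0)
    cross = np.linalg.norm(y.T @ x, "fro") ** 2
    return cross / (np.linalg.norm(x.T @ x, "fro") * np.linalg.norm(y.T @ y, "fro"))

print(f"CKA(A, B)     = {linear_cka(emb_a, emb_b):.3f}")  # high: shared structure
print(f"CKA(A, noise) = {linear_cka(emb_a, rng.normal(size=emb_b.shape)):.3f}")

# An explicit linear alignment B -> A via least squares.
w, *_ = np.linalg.lstsq(emb_b, emb_a, rcond=None)
residual = np.linalg.norm(emb_b @ w - emb_a) / np.linalg.norm(emb_a)
print(f"relative error of linear map B->A: {residual:.3f}")  # near zero here
```

Because both embedding sets here are linear images of one shared latent space, the CKA score is high and the linear map is near-exact; real foundation models would only approximate this.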