I’m not sure what this truly means, or that anyone is, but I think it wants to mean something like quantifying which architectures make it “easier” to learn about the phenomena of interest. This is a practical engineering discipline in NNs, but maybe also interesting to think about in humans.
I just ran into this area while trying to invent something similar myself, only to find I’m years too late. It is an interesting style of analysis, suited to relaxed or approximated modelling of causal interventions, and it seems to formalise coarse-graining for causal models. We suspect that the notorious causal inference in LLMs might be built out of such things, or be understood in terms of them.

1 Causality in hierarchical systems

A. Geiger, Ibeling, et al. (2024) seems to summarise the SOTA…
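To make the coarse-graining idea concrete, here is a minimal sketch of interventional consistency, the condition usually used to say a high-level causal model abstracts a low-level one: a state map τ and an intervention map ω such that intervening then abstracting agrees with abstracting then intervening. Everything below (the two toy models, `tau`, `omega`, and the restriction to interventions that set both low-level causes together) is my own illustrative construction, not an implementation from Geiger, Ibeling, et al. (2024).

```python
from itertools import product

# Low-level model: binary causes X1, X2 and an effect Y = max(X1, X2).
def low_level(do=None):
    """Enumerate low-level states under an intervention do = {var: value}."""
    do = do or {}
    states = []
    for x1, x2 in product([0, 1], repeat=2):
        x1 = do.get("X1", x1)
        x2 = do.get("X2", x2)
        y = do.get("Y", max(x1, x2))
        states.append({"X1": x1, "X2": x2, "Y": y})
    return states

# High-level model: a single coarse cause X and the effect Y = X.
def high_level(do=None):
    do = do or {}
    states = []
    for x in [0, 1]:
        x = do.get("X", x)
        y = do.get("Y", x)
        states.append({"X": x, "Y": y})
    return states

def tau(s):
    """Coarse-grain a low-level state into a high-level one."""
    return {"X": max(s["X1"], s["X2"]), "Y": s["Y"]}

def omega(do):
    """Map low-level interventions to high-level ones.

    Only 'aligned' interventions are mapped: those fixing both causes
    at once (or fixing Y). Partial interventions on X1 alone would
    break the abstraction, which is why they are excluded here.
    """
    out = {}
    if "X1" in do and "X2" in do:
        out["X"] = max(do["X1"], do["X2"])
    if "Y" in do:
        out["Y"] = do["Y"]
    return out

# Interventional consistency: the set of abstracted low-level outcomes
# under do equals the set of high-level outcomes under omega(do).
for do in [{}, {"X1": 1, "X2": 1}, {"X1": 0, "X2": 0}, {"Y": 0}]:
    lo = {tuple(sorted(tau(s).items())) for s in low_level(do)}
    hi = {tuple(sorted(s.items())) for s in high_level(omega(do))}
    print(do, lo == hi)  # prints True for every aligned intervention
```

The point of the toy is the commuting square: τ applied after a low-level intervention lands on the same high-level states as ω(do) applied directly, and the abstraction only holds over the restricted intervention set that ω covers.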
Placeholder for notes on what kind of world models reside in neural nets.

1 Incoming

NeurIPS 2023 Tutorial: Language Models meet World Models

2 References

Basu, Grayson, Morrison, et al. 2024. “Understanding Information Storage and Transfer in Multi-Modal Large Language Models.”
Chirimuuta. 2025. “The Prehistory of the Idea That Thinking Is Modelling.” Human Arenas.
Ge, Huang, Zhou, et al. 2024. “WorldGPT: Empowering LLM as Multimodal World Model.” In Proceedings of the …