I don’t know much about this variant of Bayes, but the central idea is that we consider Bayes updating as a coherent betting rule and back everything else out from that. This gets us something like classic Bayes but with an even more austere approach to what probability is. I am interested in this because, following an insight of Susan Wei’s, I note that it might be an interesting way of understanding when foundation models do optimal inference, since most neural networks are best underst...| The Dan MacKinlay stable of variably-well-consider’d enterprises
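A minimal sketch of the "Bayes as coherent betting" idea (a hypothetical illustration, not from the post): updating is multiplication of betting odds by the likelihood ratio, and a bettor whose posted odds deviate from this rule can be Dutch-booked.

```python
# Bayes updating expressed as an odds/betting rule (toy illustration).
# Prior odds on hypothesis H times the Bayes factor gives posterior odds.

def posterior_odds(prior_odds, likelihood_h, likelihood_not_h):
    """Update betting odds on H after observing evidence E."""
    bayes_factor = likelihood_h / likelihood_not_h
    return prior_odds * bayes_factor

def odds_to_prob(odds):
    """Convert odds o = p / (1 - p) back to a probability."""
    return odds / (1.0 + odds)

# Example: prior odds 1:3 on H; the evidence is twice as likely under H.
odds = posterior_odds(prior_odds=1/3, likelihood_h=0.8, likelihood_not_h=0.4)
print(odds_to_prob(odds))  # 0.4
```

The point of the betting framing is that the numbers need no interpretation beyond "odds you would accept"; coherence alone forces the multiplicative update.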
Figure 1 I’m not sure what this truly means, or that anyone is, but I think it wants to mean something like quantifying architectures that make it “easier” to learn about the phenomena of interest. This is a practical engineering discipline in NNs but maybe also interesting to think about in humans.
Configuring machine learning experiments with Fiddle
Figure 1 Jonathan Huggins summarizes: Complexity of Inference in Bayesian Networks. To cover: Sampling from a posterior measure versus calculating it; approximation versus exact computation. Graphical models. What does calculation even mean on arbitrary measure spaces? 1 References Bodlaender, Donselaar, and Kwisthout. 2022. “Parameterized Complexity Results for Bayesian Inference.” Cooper. 1990. “The Computational Complexity of Probabilistic Inference Using Bayesian Belief Networks....
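A toy sketch of what "exact computation" means in the discrete graphical-model case (hypothetical two-node network A → B, not from the post): the posterior is obtained by enumerating every joint assignment consistent with the evidence, which is why exact inference is exponential in the number of variables in general.

```python
# Exact posterior by brute-force enumeration in a tiny Bayesian network
# A -> B with binary variables (toy example).

# P(A) and P(B | A)
p_a = {True: 0.3, False: 0.7}
p_b_given_a = {True:  {True: 0.9, False: 0.1},
               False: {True: 0.2, False: 0.8}}

def joint(a, b):
    """Joint probability of one full assignment."""
    return p_a[a] * p_b_given_a[a][b]

def posterior_a_given_b(b_obs):
    """P(A | B = b_obs) by summing over all assignments (here just A)."""
    unnorm = {a: joint(a, b_obs) for a in (True, False)}
    z = sum(unnorm.values())  # the normalising constant / evidence P(B = b_obs)
    return {a: p / z for a, p in unnorm.items()}

print(posterior_a_given_b(True))
```

With n binary nodes the analogous enumeration sums over 2^n assignments, which is the starting point for the hardness results cited above.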
Normalising flows for PDE learning. Figure 1 Lipman et al. (2023) seems to be the origin point, extended by Kerrigan, Migliorini, and Smyth (2024) to function-valued PDEs. Figure 2: An illustration of our FFM method. The vector field (in black) transforms a noise sample drawn from a Gaussian process with a Matérn kernel (at ) to the function (at ) via solving a function space ODE. By sampling many such , we define a conditional path of measures approximately interpolating between and the f...
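A 1-D toy sketch of the flow-matching mechanic the caption describes (assumptions: linear interpolation path as in the Lipman et al. (2023) setup, a scalar stand-in for the Gaussian-process draw; this is not the function-space FFM method itself). The conditional path x_t = (1 − t)·x0 + t·x1 has conditional vector field u_t(x | x1) = x1 − x0, and Euler-integrating that field carries the noise sample x0 to the target x1.

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.normal()   # stand-in for a noise draw (a GP sample in the real method)
x1 = 2.5            # stand-in for the target function value

def u(t, x, x0=x0, x1=x1):
    # Conditional vector field along the linear path x_t = (1 - t) x0 + t x1.
    return x1 - x0

# Euler integration of the ODE dx/dt = u(t, x) from t = 0 to t = 1.
x, n_steps = x0, 100
for k in range(n_steps):
    x += u(k / n_steps, x) / n_steps

print(abs(x - x1) < 1e-6)  # True: the flow lands on the target
```

In the learned version, a neural network is regressed onto this conditional field averaged over many (x0, x1) pairs; integrating the learned field then transports the whole noise measure rather than a single sample.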
Diffusion models for PDE learning. Figure 1 Slightly confusing terminology, because we are using diffusion models to learn PDEs, but the PDEs themselves are often used to model diffusion processes. Also sometimes the diffusion models that do the modelling aren’t actually diffusive, but are based on Poisson flow generative models. Naming things is hell. 1 Classical diffusion models TBD 2 Poisson Flow generative models These are based on non-diffusive physics but also seem to be used to simu...
Figure 1 Placeholder. Levers for Biological Progress - by Niko McCarty In order for 50-100 years of biological progress to be condensed into 5-10 years of work, we’ll need to get much better at running experiments quickly and also collecting higher-quality datasets. This essay focuses on how we might do both, specifically for the cell. Though my focus in this essay is narrow — I don’t discuss bottlenecks in clinical trials, human disease, or animal testing — I hope others will take o...