Sometimes you just need some elements of the inverse of a sparse matrix. Sometimes you’re working in C++. This is that time.
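(Not from the post itself: the post works in C++, but here is a rough Python/SciPy sketch of the basic idea, with a made-up matrix and a made-up list of wanted entries. If you only need a few entries of the inverse, factorise once and solve for just the columns those entries live in, rather than forming the whole dense inverse. The post's approach is presumably smarter about exploiting sparsity; treat this as the brute-force baseline.)

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Hypothetical small sparse SPD matrix, purely for illustration
A = sp.csc_matrix(np.array([[4.0, 1.0, 0.0],
                            [1.0, 3.0, 1.0],
                            [0.0, 1.0, 2.0]]))

lu = spla.splu(A)            # sparse LU factorisation, done once
wanted = [(0, 2), (1, 1)]    # the entries of inv(A) we actually need

for i, j in wanted:
    e_j = np.zeros(A.shape[0])
    e_j[j] = 1.0
    col_j = lu.solve(e_j)    # j-th column of inv(A)
    print(f"inv(A)[{i},{j}] = {col_j[i]:.6f}")
```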
I was writing a longer thing, but it was too long, so hey. Let’s just do this for a change.
I am, once again, in a bit of a mood. And the only thing that will fix my mood is a good martini and a Laplace approximation. And I’m all out of martinis. To be honest I started writing this post in February 2023, but then got distracted by visas and jobs and all that jazz. But I felt the desire to finish it, so here we are. I wonder how much I will want to re-write. The post started as a pedagogical introduction to Laplace approximations (for reasons I don’t fully remember), but it rapid...
The time has come once more to resume my journey into sparse matrices. There’s been a bit of a pause, mostly because I realised that I didn’t know how to implement the sparse Cholesky factorisation in a JAX-traceable way. But now the time has come. It is time for me to get on top of JAX’s weird control-flow constructs. And, along the way, I’m going to re-do the sparse Cholesky factorisation to make it, well, better. In order to temper expectations, I will tell you that this post does ...
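(A toy illustration of my own, not code from the post, of what “JAX-traceable control flow” tends to look like: ordinary Python loops and slices whose extent depends on traced values won’t trace, so the loop becomes jax.lax.fori_loop and the dynamic-length slice becomes a mask. Here is a forward substitution for a lower-triangular system written that way.)

```python
import jax
import jax.numpy as jnp
from jax import lax

def forward_solve(L, b):
    """Solve L x = b for lower-triangular L, in a jit-able way."""
    n = L.shape[0]  # static shape, so it can be the loop bound

    def body(i, x):
        # L[i, :i] @ x[:i] has a trace-dependent length, so mask instead
        mask = jnp.arange(n) < i
        s = jnp.sum(jnp.where(mask, L[i, :] * x, 0.0))
        return x.at[i].set((b[i] - s) / L[i, i])

    return lax.fori_loop(0, n, body, jnp.zeros_like(b))

L = jnp.array([[2.0, 0.0],
               [1.0, 3.0]])
b = jnp.array([2.0, 7.0])
print(jax.jit(forward_solve)(L, b))   # -> [1. 2.]
```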
Welcome to part six!!! of our ongoing series on making sparse linear algebra differentiable in JAX with the eventual hope to be able to do some cool statistical shit. We are nowhere near done. Last time, we looked at making JAX primitives. We built four of them. Today we are going to implement the corresponding differentiation rules! For three of them. So strap yourselves in. This is gonna be detailed. If you’re interested in the code, the git repo for this post is linked at the bottom an...
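(For a flavour of what a differentiation rule for a primitive involves, here is a made-up toy primitive, not one of the post’s four. The pattern: define the primitive, give it an implementation and an abstract-eval rule, then register a JVP. Because the JVP below is written with ordinary JAX ops, reverse mode also works via linearisation and transposition. Note that in recent JAX versions Primitive is moving to jax.extend.core.)

```python
import jax
from jax import core
from jax.interpreters import ad

# Hypothetical toy primitive: square a value
square_p = core.Primitive("toy_square")

def square(x):
    return square_p.bind(x)

# How to actually compute it
square_p.def_impl(lambda x: x * x)

# Shape/dtype rule, needed so the primitive can be traced (jit, grad, ...)
square_p.def_abstract_eval(lambda x: core.ShapedArray(x.shape, x.dtype))

# The differentiation rule: d(x^2) = 2 x dx
ad.defjvp(square_p, lambda g, x: 2.0 * x * g)

print(jax.grad(square)(3.0))   # -> 6.0
```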
With the generated gradient information, a differentiable physical simulator can make the convergence of the machine learning process one order of magnitude faster than gradient-free algorithms, such as model-free reinforcement learning. | docs.taichi-lang.org
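(A sketch of that claim in JAX rather than Taichi, with made-up numbers: because the simulator is differentiable, gradients flow through the whole rollout, so the launch velocity below can be fitted by plain gradient descent instead of treating the simulation as a black box.)

```python
import jax
import jax.numpy as jnp

def simulate(v0, dt=0.01, steps=200, g=9.81):
    # Euler-integrate a projectile launched from the origin with velocity v0
    def step(carry, _):
        pos, vel = carry
        vel = vel - dt * jnp.array([0.0, g])
        pos = pos + dt * vel
        return (pos, vel), None
    (pos, _), _ = jax.lax.scan(step, (jnp.zeros(2), v0), None, length=steps)
    return pos  # final position

def loss(v0, target=jnp.array([2.0, 0.5])):
    return jnp.sum((simulate(v0) - target) ** 2)

v0 = jnp.array([1.0, 1.0])
for _ in range(50):
    v0 = v0 - 0.1 * jax.grad(loss)(v0)   # gradients straight through the simulator
print(v0, loss(v0))
```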