This is just a fun experiment to answer the question: how can I share a memory-mapped tensor from PyTorch to Numpy, Jax and TensorFlow in…| Terra Incognita
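A minimal sketch of the core trick, assuming the sharing goes through a file-backed NumPy memmap (the file name and shape here are illustrative, not the post's actual code); JAX and TensorFlow can then consume the same NumPy view, though they generally copy unless you go through zero-copy interop such as DLPack:

```python
import numpy as np
import torch

# Illustrative file name and shape; not the post's exact code.
shape = (1024, 1024)
arr = np.memmap("shared.bin", dtype=np.float32, mode="w+", shape=shape)

# torch.from_numpy wraps the same buffer: writes through the tensor
# land in the memory-mapped file, and the NumPy view sees them.
t = torch.from_numpy(arr)
t[0, 0] = 42.0
assert arr[0, 0] == 42.0
```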
I am, once again, in a bit of a mood. And the only thing that will fix my mood is a good martini and a Laplace approximation. And I’m all out of martinis. To be honest, I started writing this post in February 2023, but then got distracted by visas and jobs and all that jazz. But I felt the desire to finish it, so here we are. I wonder how much I will want to re-write. The post started as a pedagogical introduction to Laplace approximations (for reasons I don’t fully remember), but it rapid...| Un garçon pas comme les autres (Bayes)
The time has come once more to resume my journey into sparse matrices. There’s been a bit of a pause, mostly because I realised that I didn’t know how to implement the sparse Cholesky factorisation in a JAX-traceable way. But now the time has come. It is time for me to get on top of JAX’s weird control-flow constructs. And, along the way, I’m going to re-do the sparse Cholesky factorisation to make it, well, better. In order to temper expectations, I will tell you that this post does ...| Un garçon pas comme les autres (Bayes)
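Nothing from the post itself, just a toy example of the kind of construct it means: `lax.while_loop` (alongside `lax.scan` and `lax.cond`) keeps a loop jit-traceable, at the price of a loop state with fixed shapes and dtypes.

```python
import jax
from jax import lax

# Toy loop: sum 0 + 1 + ... + (n - 1) with a traceable while-loop.
def sum_to(n):
    def cond(state):
        i, total = state
        return i < n

    def body(state):
        i, total = state
        return i + 1, total + i

    _, total = lax.while_loop(cond, body, (0, 0))
    return total

print(jax.jit(sum_to)(10))  # 45
```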
Welcome to part six!!! of our ongoing series on making sparse linear algebra differentiable in JAX, with the eventual hope of being able to do some cool statistical shit. We are nowhere near done. Last time, we looked at making JAX primitives. We built four of them. Today we are going to implement the corresponding differentiation rules! For three of them. So strap yourselves in. This is gonna be detailed. If you’re interested in the code, the git repo for this post is linked at the bottom an...| Un garçon pas comme les autres (Bayes)
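For flavour, here is a toy stand-in for what "a primitive plus its differentiation rule" looks like in JAX; the primitive name and rule below are made up for illustration, not the post's sparse-linear-algebra primitives. (In very recent JAX versions `Primitive` lives under `jax.extend.core` rather than `jax.core`.)

```python
import jax
from jax import core
from jax.interpreters import ad

# A made-up primitive: elementwise square.
square_p = core.Primitive("my_square")

def my_square(x):
    return square_p.bind(x)

# Evaluation rule: what the primitive does on concrete values.
square_p.def_impl(lambda x: x * x)

# Abstract evaluation: output shape/dtype from input shape/dtype.
square_p.def_abstract_eval(lambda x: core.ShapedArray(x.shape, x.dtype))

# The differentiation (JVP) rule: d(x^2) = 2 x dx.
ad.defjvp(square_p, lambda g, x: 2.0 * x * g)

print(jax.grad(my_square)(3.0))  # 6.0
```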
This is part five of our ongoing series on implementing differentiable sparse linear algebra in JAX. In some sense this is the last boring post before we get to the derivatives. Was this post going to include the derivatives? It sure was, but then I realised that a different choice was to go to bed so I can get up nice and early in the morning and vote in our election. It goes without saying that before I split the posts, it was more than twice as long and I was nowhere near finished. So probably...| Un garçon pas comme les autres (Bayes)
Just some harmless notes. Like the ones Judi Dench took in that movie.| Un garçon pas comme les autres (Bayes)
This is part three of an ongoing exercise in hubris. Part one is here. Part two is here. The overall aim of this series of posts is to look at how sparse Cholesky factorisations work, how JAX works, and how to marry the two, with the ultimate aim of putting a bit of sparse matrix support into PyMC, which should allow for faster inference in linear mixed models and Gaussian spatial models. And hopefully, if anyone ever gets around to putting the Laplace approximation in, all sorts of GLMMs and non-...| Un garçon pas comme les autres (Bayes)
An IIR filter deconvolving a blurred 2D image in four “recurrent” sequential passes. This post is a follow-up to my post on deconvolution/deblurring of images. In my previous blog p…| Bart Wronski
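As a sketch of what "four recurrent sequential passes" means structurally (the filter coefficient here is a placeholder; the post derives the actual deconvolving filter, which this is not):

```python
import numpy as np

# One first-order IIR pass along axis 1, left to right:
# y[x] = img[x] + a * y[x - 1], computed in place.
def iir_pass(img, a):
    out = img.astype(np.float32).copy()
    for x in range(1, out.shape[1]):
        out[:, x] += a * out[:, x - 1]
    return out

# Four directional passes: both horizontal directions, then both
# vertical directions (via transposes and flips).
def four_pass(img, a):
    img = iir_pass(img, a)                    # left-to-right
    img = iir_pass(img[:, ::-1], a)[:, ::-1]  # right-to-left
    img = iir_pass(img.T, a).T                # top-to-bottom
    img = iir_pass(img[::-1].T, a).T[::-1]    # bottom-to-top
    return img
```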
Me being “progressively stippled.” :) I recently read the “Gaussian Blue Noise” paper by Ahmed et al. and was very impressed by the quality of their results and the rigor o…| Bart Wronski
This post covers a topic slightly different from my usual ones and something I haven’t written much about before – applied elements of probability theory. We will discuss what happens with “n…| Bart Wronski
In this shorter post, I will describe a 2X downsampling filter that I propose as a “safe default” for GPU image processing. It’s been an omission on my side that I have not proposed any specific fi…| Bart Wronski
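Not the filter the post proposes, just the general shape of the operation: a separable low-pass kernel followed by decimation by 2. The [1, 3, 3, 1]/8 tent-like kernel below is an illustrative choice, not the post's recommendation.

```python
import numpy as np
from scipy.signal import convolve2d

# Separable 4-tap low-pass kernel (illustrative, sums to 1).
k = np.array([1.0, 3.0, 3.0, 1.0]) / 8.0
kernel = np.outer(k, k)

def downsample_2x(img):
    # Low-pass first to avoid aliasing, then drop every other sample.
    blurred = convolve2d(img, kernel, mode="same", boundary="symm")
    return blurred[::2, ::2]
```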
JAX is a high-performance library that offers accelerated computing through XLA and just-in-time (JIT) compilation. It also has handy features that enable you to write one codebase that can be applied to batches of data and run on CPU, GPU, or TPU. However, one of its biggest selling…| Machine learning nuggets
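A minimal illustration of those selling points: write the function once for a single example, then `jit`-compile it and `vmap` it over a batch; the same code runs unchanged on CPU, GPU, or TPU.

```python
import jax
import jax.numpy as jnp

# Single-example function: one weight matrix, one input vector.
def predict(w, x):
    return jnp.tanh(w @ x)

# Batch over the second argument only, then JIT-compile the result.
batched = jax.jit(jax.vmap(predict, in_axes=(None, 0)))

w = jnp.ones((3, 4))
xs = jnp.ones((8, 4))          # batch of 8 inputs
print(batched(w, xs).shape)    # (8, 3)
```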
Augmenting transformer language models with sparse access to large memory matrices| machine learning musings
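A hedged sketch of the core idea (names and sizes below are made up, not the post's model): instead of attending densely over the whole memory matrix, score the query against all keys but gather and mix only the top-k entries.

```python
import jax
import jax.numpy as jnp

def sparse_memory_read(query, keys, values, k=32):
    scores = keys @ query                       # (num_slots,)
    top_scores, idx = jax.lax.top_k(scores, k)  # k best-matching slots
    weights = jax.nn.softmax(top_scores)        # attention over k slots only
    return weights @ values[idx]                # (value_dim,)

keys = jnp.ones((4096, 64))
values = jnp.ones((4096, 128))
q = jnp.ones((64,))
print(sparse_memory_read(q, keys, values).shape)  # (128,)
```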
A practical, code-first look at DeepMind's new haiku library.| machine learning musings
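The canonical haiku pattern, in miniature (the model below is made up for illustration): write a function using `hk` modules, then `hk.transform` turns it into a pure `(init, apply)` pair.

```python
import haiku as hk
import jax
import jax.numpy as jnp

# An ordinary function built from haiku modules.
def forward(x):
    mlp = hk.Sequential([hk.Linear(128), jax.nn.relu, hk.Linear(10)])
    return mlp(x)

model = hk.transform(forward)

rng = jax.random.PRNGKey(42)
x = jnp.ones((1, 28 * 28))
params = model.init(rng, x)           # build the parameters
logits = model.apply(params, rng, x)  # pure function of params
print(logits.shape)                   # (1, 10)
```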
Put on your metaphorical safety goggles and start building something weird with JAX.| machine learning musings