Reinforcement Learning formulas cheat sheet | www.gabriel.urdhr.fr
How to quickly use llama.cpp for LLM inference (no GPU needed) | /dev/posts/
How to quickly use vLLM for LLM inference using CPU | /dev/posts/
Overview of neural network distillation | /dev/posts/
Some notes on how transformer-decoder language models work | /dev/posts/
I created a little Shiny application to demonstrate that neural networks are just souped-up linear models: https://lucy.shinyapps.io/neural-net-linear/ The application shows a neural network fit to a dataset with one predictor, x, and one outcome, y. The network has one hidden layer with three activations. You can click a "Play" button to watch the neural network fit over 300 epochs, and you can click on the nodes of the neural network diagram to highlight each of the individual ac...
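The setup the app describes can be sketched in a few lines of NumPy. This is a minimal illustration, not the app's actual code: it assumes tanh activations, a made-up toy dataset, and plain gradient descent. It shows the same idea, that each hidden unit is itself a linear model of x passed through a nonlinearity, and the network's output is just a weighted sum of those three activations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: one predictor x, one outcome y (a noisy nonlinear curve).
# The real app uses its own dataset; this is a stand-in.
x = np.linspace(-2, 2, 200).reshape(-1, 1)
y = np.sin(2 * x) + 0.1 * rng.standard_normal(x.shape)

# One hidden layer with three units (the three "activations" in the app).
W1 = rng.standard_normal((1, 3))   # each column: slope of one linear piece
b1 = rng.standard_normal(3)        # each entry: intercept of one linear piece
W2 = rng.standard_normal((3, 1))   # output weights: how the pieces combine
b2 = np.zeros(1)

def forward(x):
    # Each hidden activation is a linear model of x through tanh;
    # the prediction is a linear combination of those activations.
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2, h

pred, _ = forward(x)
mse_before = float(np.mean((pred - y) ** 2))

lr = 0.05
for _ in range(300):               # 300 epochs, as in the app
    pred, h = forward(x)
    err = pred - y
    # Backpropagation written out by hand for this tiny network.
    gW2 = h.T @ err / len(x)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)   # tanh derivative
    gW1 = x.T @ dh / len(x)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

pred, _ = forward(x)
mse_after = float(np.mean((pred - y) ** 2))
```

After training, the fitted curve is literally `b2 + sum_j W2[j] * tanh(W1[j]*x + b1[j])`: three bent linear models added together, which is the "souped up linear model" point the app makes interactive.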