<?xml version="1.0" encoding="UTF-8" ?>
<?xml-stylesheet type="text/xsl" href="/rss.xsl" media="all"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
<channel>
<title>Roastidio.us Tagged with elixir</title>
<link>https://roastidio.us/tag/2771</link>
<atom:link href="https://roastidio.us/tagged_with/elixir" rel="self" type="application/rss+xml"></atom:link>
<description>Roastidio.us Tagged with elixir</description>
<item>
<title>Dropping Cloudflare for bunny.net | jola.dev</title>
<link>https://jola.dev/posts/dropping-cloudflare</link>
<enclosure type="image/jpeg" length="0" url="https://jola.dev/images/og-image-2b7872671fc7c11e464dac899d8d3068.png?vsn=d"></enclosure>
<guid isPermaLink="false">1M42zLP0OWc6VEZ7PxV89n0bn2eHHFxl7owrmw==</guid>
<pubDate>Tue, 07 Apr 2026 18:13:31 +0000</pubDate>
<description>Dropping Cloudflare and migrating to bunny.net, starting out with my blog.</description>
<content:encoded>&lt;p&gt;
TL;DR: my motivation and experience moving my blog from Cloudflare to bunny.net.&lt;/p&gt;
&lt;p&gt;
I’ve been a long-time Cloudflare user. They offer a solid service that is free for the vast majority of their users, which is very generous. Their infrastructure is massive and their feature set is undeniably impressive. &lt;/p&gt;
&lt;p&gt;
One of my biggest concerns, though, is how easily I could become heavily dependent on a single company that could then decide to cut me off and disable all of my websites for any arbitrary reason. It’s a single point of failure for the internet. Every Cloudflare outage ends up in the news. And the idea of centralizing the internet into a single US corporation feels off to me. Not to mention the various scandals that have surrounded them. So I was open to alternatives.&lt;/p&gt;
&lt;h2&gt;
Bunny.net&lt;/h2&gt;
&lt;p&gt;
&lt;a href=&quot;https://bunny.net?ref=f0l8865b7g&quot;&gt;Bunny.net&lt;/a&gt; (affiliate link because why not, raw link &lt;a href=&quot;https://bunny.net&quot;&gt;here&lt;/a&gt;) is a Slovenian (EU) company that is building up a lot of momentum. Their CDN-related services rival Cloudflare already, and although their PoP network is smaller than Cloudflare’s, they score highly on performance and speed across the globe. It’s a genuinely competitive alternative to Cloudflare.&lt;/p&gt;
&lt;p&gt;
It has the additional benefit of being a European company, and I like the idea of growing and supporting the European tech scene.&lt;/p&gt;
&lt;h2&gt;
What I was moving away from&lt;/h2&gt;
&lt;p&gt;
I’ve been using various Cloudflare services, but focusing on this blog, the first one was Cloudflare as the registrar for the domain name. I did some research on alternative registrars, but I just didn’t find any good European options. The closest I found was INWX, but their lack of free WHOIS privacy made them a non-option. I ended up with Porkbun. They run on Cloudflare infrastructure, but they have better support. So the remaining thing Cloudflare was doing for me was the “Orange Cloud”: automatic caching, origin hiding, and optional protection features.&lt;/p&gt;
&lt;p&gt;
So that’s what we’re moving over! I’m gonna walk you through how to set up the bunny.net CDN for your website, with some sensible defaults.&lt;/p&gt;
&lt;h2&gt;
Step by step&lt;/h2&gt;
&lt;p&gt;
Setting up your bunny.net account is quick, and you get $20 worth of free credits to play around with, valid for 14 days. You don’t need to give them a credit card up front to try things out, but if you do, you get another $30 worth of credits. You do need to confirm your email before you can start setting things up. Once you’re out of the trial, you pay per use, which in most cases comes to cents a month. However, note that bunny.net requires a minimum payment of $1 per month.&lt;/p&gt;
&lt;p&gt;
I guess a cheap price to pay to &lt;em&gt;stop being the product&lt;/em&gt; and start becoming the customer.&lt;/p&gt;
&lt;h3&gt;
Creating your pull zone&lt;/h3&gt;
&lt;p&gt;
The pull zone is the main mechanism for enabling the CDN for your website. You’ll find them under CDN in the left navigation bar. Here’s how to set one up:&lt;/p&gt;
&lt;ol&gt;
  &lt;li&gt;
Fill in the pull zone name. Just make it something meaningful to you, for example the website name.  &lt;/li&gt;
  &lt;li&gt;
For origin type, select Origin URL.  &lt;/li&gt;
  &lt;li&gt;
Fill in your Origin URL. This would be the address for directly accessing your server. In my case, it’s the public IP of my server.   &lt;/li&gt;
  &lt;li&gt;
If you’re running multiple apps on your server, for example using Dokploy, Coolify, or similar self-hosted PaaS tools, you’ll want to pass the Host header as well. Here you put in the domain of your app. In my case, that’s jola.dev.  &lt;/li&gt;
  &lt;li&gt;
For tier, select Standard.  &lt;/li&gt;
  &lt;li&gt;
Finally you can select your pricing zones. Note that some zones are more expensive, so you can choose to disable them. This just means that people in those areas will get redirected to the closest zone you do have enabled.  &lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;
And you’re done with the first part!&lt;/p&gt;
&lt;h3&gt;
Configuring your pull zone&lt;/h3&gt;
&lt;p&gt;
Now that you’ve set up the pull zone, it’s time to hook it up to your website and domain. Go to the pull zone you created. You’ll see a “hostnames” screen. Time to connect things.&lt;/p&gt;
&lt;ol&gt;
  &lt;li&gt;
Under “Add a custom hostname” fill in your website domain name.  &lt;/li&gt;
  &lt;li&gt;
You’ll get a modal with some instructions. Follow them to set up a DNS record that routes your website through the CDN.  &lt;/li&gt;
  &lt;li&gt;
Go to where you manage your domain name and add a CNAME record pointing your domain to the CNAME value given in the modal, something like website.b-cdn.net.  &lt;/li&gt;
  &lt;li&gt;
Once you’ve done that, wait a few minutes to let it propagate, and then click “Verify &amp;amp; Activate SSL”.   &lt;/li&gt;
  &lt;li&gt;
If it says success, you’re done. Your website is now running through the bunny.net CDN, similar to the Cloudflare orange cloud.  &lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
Configuring caching&lt;/h3&gt;
&lt;p&gt;
This is the part where bunny.net will really shine through!&lt;/p&gt;
&lt;p&gt;
If your website is set up to return the appropriate cache headers for each resource, things will just work. Bunny defaults to respecting the cache control headers when pointing a pull zone at an origin site. To verify, go to Caching → General and check that “Respect origin Cache-Control” is set under “Cache expiration time”. Note that if you set &lt;code class=&quot;makeup ok&quot;&gt;no-cache&lt;/code&gt;, bunny will use that and will not cache at the edge.&lt;/p&gt;
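&lt;p&gt;
A quick way to see what your origin is actually sending is to inspect the response headers yourself. For example (using my domain; swap in your own):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;makeup ok&quot;&gt;# -sI fetches only the headers; look for the cache-control line
curl -sI https://jola.dev | grep -i cache-control&lt;/code&gt;&lt;/pre&gt;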
&lt;p&gt;
Alternatively, if you don’t have cache headers set up, and you don’t want to control that yourself, you can instead enable Smart Cache. This will default to caching typically cached resources like images, CSS, JS files etc, while avoiding caching things like HTML pages. This will work for most cases!&lt;/p&gt;
&lt;p&gt;
But I wanted to go &lt;em&gt;faster&lt;/em&gt;. If you’ve read my post about building this website, here’s how I’ve set up my cache headers: I added a new pipeline in the router called &lt;code class=&quot;makeup ok&quot;&gt;public&lt;/code&gt; and added an extra middleware to it. I technically have everything using this pipeline, but leaving the standard &lt;code class=&quot;makeup ok&quot;&gt;browser&lt;/code&gt; pipeline that comes out of the box with Phoenix keeps my options open to add authenticated (uncached) pages in the future. &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;makeup ok&quot;&gt;pipeline :public do
    plug :accepts, [&amp;quot;html&amp;quot;]
    plug :put_root_layout, html: {JolaDevWeb.Layouts, :root}
    plug :put_secure_browser_headers, @secure_headers
    plug :put_cdn_cache_header
  end
  
  defp put_cdn_cache_header(conn, _opts) do
    put_resp_header(conn, &amp;quot;cache-control&amp;quot;, &amp;quot;public, s-maxage=86400, max-age=0&amp;quot;)
  end&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;
You can see the whole router here &lt;a href=&quot;https://github.com/joladev/jola.dev/blob/main/lib/jola_dev_web/router.ex&quot;&gt;https://github.com/joladev/jola.dev/blob/main/lib/jola_dev_web/router.ex&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;
This setup means I even cache the HTML pages, which makes this ridiculously fast. Here’s the landing page response time from various locations, using the &lt;a href=&quot;https://larm.dev/tools/response-time/r/89374810-dbb3-4227-87d1-9a947be29e49&quot;&gt;Larm response time checker tool&lt;/a&gt;:&lt;/p&gt;
&lt;img src=&quot;https://jola.dev/images/joladev-larm-response-time.png&quot; alt=&quot;&quot; title=&quot;&quot;/&gt;
&lt;p&gt;
Because I’m caching the HTML pages, if I publish a new post I do need to purge the pull zone to reset the cached HTML files.&lt;/p&gt;
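&lt;p&gt;
If you want to automate that purge, for example as a step in your deploy script, bunny.net exposes it via their HTTP API. A rough sketch (the endpoint path and the &lt;code class=&quot;makeup ok&quot;&gt;AccessKey&lt;/code&gt; header are from my reading of their API docs, so verify them there; the pull zone ID and key below are placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;makeup ok&quot;&gt;# purge the whole pull zone cache after a deploy
curl -X POST &amp;quot;https://api.bunny.net/pullzone/12345/purgeCache&amp;quot; \
  -H &amp;quot;AccessKey: YOUR_API_KEY&amp;quot;&lt;/code&gt;&lt;/pre&gt;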
&lt;h3&gt;
Setting some sensible defaults&lt;/h3&gt;
&lt;p&gt;
All of these are optional, but nice to have!&lt;/p&gt;
&lt;p&gt;
On your pull zone page, under General → Hostnames, go toggle “Force SSL” on for your domain to ensure that all requests use SSL. SSL/TLS is pretty standard these days, and many TLDs and websites use HSTS to enforce it, but no harm in enabling it here too.&lt;/p&gt;
&lt;p&gt;
DDoS protection comes out of the box, but we can set some other things up. First of all, go to Caching and then Origin Shield in the left menu on your pull zone, and activate Origin Shield. Select the location closest to your origin. This reduces load on your server, as bunny.net will cache everything in the Origin Shield location, and all edge locations will try that location first before hitting your server.&lt;/p&gt;
&lt;p&gt;
Next, go to Caching → General and scroll down. At the bottom of the page you can select Stale Cache: While Origin Offline and While Updating. This means bunny will keep serving cached content, even if it is stale, when it can’t reach your origin, and that it will serve stale content while fetching the latest version. Both are nice-to-haves, nothing you have to enable, but they provide a slightly better experience for your users!&lt;/p&gt;
&lt;p&gt;
Next, let’s set up an Edge rule to redirect any requests to our automatically generated pull zone domain to our actual domain, to avoid confusing crawlers. On your pull zone, in the left menu, click Edge rules. &lt;/p&gt;
&lt;ol&gt;
  &lt;li&gt;
Add edge rule.  &lt;/li&gt;
  &lt;li&gt;
Name it “Default domain redirect”.  &lt;/li&gt;
  &lt;li&gt;
Under actions, select Redirect.  &lt;/li&gt;
  &lt;li&gt;
For URL, input your URL plus the path variable. E.g. for me it’s &lt;code class=&quot;makeup ok&quot;&gt;https://jola.dev{{path}}&lt;/code&gt;.  &lt;/li&gt;
  &lt;li&gt;
Status code: use the default 301.  &lt;/li&gt;
  &lt;li&gt;
For conditions, pick Match any and Request URL Match any.  &lt;/li&gt;
  &lt;li&gt;
Input &lt;code class=&quot;makeup ok&quot;&gt;*://&amp;lt;slug&amp;gt;.b-cdn.net/*&lt;/code&gt; replacing &lt;code class=&quot;makeup ok&quot;&gt;&amp;lt;slug&amp;gt;&lt;/code&gt; with the name given to your pull zone.  &lt;/li&gt;
  &lt;li&gt;
Save edge rule!  &lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;
Now you should be able to go to &lt;code class=&quot;makeup ok&quot;&gt;https://slug.b-cdn.net&lt;/code&gt; for your pull zone and get redirected to your proper domain!&lt;/p&gt;
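&lt;p&gt;
You can verify the redirect from the command line too. A header-only request to the pull zone domain should come back as a 301 with a Location header pointing at your real domain:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;makeup ok&quot;&gt;# -I fetches headers only; expect a 301 and a Location header
curl -I https://slug.b-cdn.net/&lt;/code&gt;&lt;/pre&gt;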
&lt;h2&gt;
Conclusion&lt;/h2&gt;
&lt;p&gt;
This post just covers the very basics of getting set up on bunny.net. I haven’t even scratched the surface of edge rules, cache configuration, the Shield features for security and firewalls, video hosting and streaming, edge scripting and edge distributed containers, and much more.&lt;/p&gt;
&lt;p&gt;
I especially appreciate the great statistics, logs, and metrics you get out of the dashboard. You can even see every single request coming through to help you investigate issues, and clear feedback on what’s getting cached and not. I’m actively moving everything else over and I’m excited for the upcoming S3 compatible storage!&lt;/p&gt;
&lt;p&gt;
You should give &lt;a href=&quot;https://bunny.net?ref=f0l8865b7g&quot;&gt;bunny.net&lt;/a&gt; a try!&lt;/p&gt;</content:encoded>
</item>
<item>
<title>Newres Al Haider</title>
<link>https://www.newresalhaider.com/post/yggdrasil/</link>
<enclosure type="image/jpeg" length="0" url="https://www.newresalhaider.com/post/yggdrasil/featured.png"></enclosure>
<guid isPermaLink="false">9f86ElXSxCWyKIG3n3jqy43jSawpsOe4kIddWQ==</guid>
<pubDate>Mon, 06 Apr 2026 18:15:11 +0000</pubDate>
<description>An introduction to the Ash declarative framework by growing Yggdrasil, the World Tree of Norse mythology.</description>
<content:encoded>&lt;h1&gt;
    Growing Yggdrasil, the World Tree, with Ash
&lt;/h1&gt;
&lt;p&gt;&lt;strong&gt;2026-04-05&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Declarative programming can be a powerful paradigm for organizing software systems. By defining the business processes once, we ensure there is a single source of domain knowledge. From this foundation, we can derive other parts of the system such as API endpoints, database schemas, and even user interfaces. This approach reduces repetition and helps prevent bugs caused by misaligned domain models.&lt;/p&gt;
&lt;p&gt;The &lt;a href=&quot;https://ash-hq.org/&quot;&gt;Ash Framework&lt;/a&gt; seems like an excellent way to see declarative programming in action. Written in &lt;a href=&quot;https://elixir-lang.org/&quot;&gt;Elixir&lt;/a&gt;, it allows you to describe your domain in a consistent and expressive way, from which it can automatically generate data layers, REST or GraphQL APIs, and admin interfaces. With Ash, you define what your application should do, and the framework takes care of how to make it happen. It can derive JSON REST endpoints, handle validation, manage persistence, and provide authorization logic, all from the same declarative definitions.&lt;/p&gt;
&lt;p&gt;I am new to Ash and I tend to learn best by writing things out, so this article is as much for me as it is for you. Rather than trying to understand everything up front, I prefer to get hands-on quickly with a small, self-contained project. We’ll start with the basics here, and if things go well, expand on it in a follow-up or two.&lt;/p&gt;
&lt;p&gt;Given the name Ash, it felt appropriate to build something inspired by the gigantic ash tree of Norse mythology: &lt;a href=&quot;https://en.wikipedia.org/wiki/Yggdrasil&quot;&gt;Yggdrasil, the World Tree&lt;/a&gt;.&lt;/p&gt;
&lt;figure&gt;
    &lt;img src=&quot;https://www.newresalhaider.com/post/yggdrasil/featured.png&quot; alt=&quot;&quot; title=&quot;&quot;/&gt;
    
    &lt;figcaption&gt;
        
        &lt;h4&gt;An image of Yggdrasil, the World Tree, as a cybernetic ash tree.&lt;/h4&gt;
        
        
    &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Yggdrasil is said to connect the Nine Worlds of &lt;a href=&quot;https://en.wikipedia.org/wiki/Norse_cosmology&quot;&gt;Norse cosmology&lt;/a&gt;, though the exact number and nature of these worlds vary between sources. Each world has its own nature, inhabitants, and relationships with the others, making it an ideal metaphor for exploring how Ash models resources, attributes, and relationships. In this project, we will create a domain model for these concepts with Ash and derive a JSON REST API from them.&lt;/p&gt;
&lt;p&gt;The first step is getting started with a basic Ash project, for which we will use the Igniter tool. (I will assume Elixir is already installed; if not, see the &lt;a href=&quot;https://elixir-lang.org/install.html&quot;&gt;Elixir Install page&lt;/a&gt; for instructions.) Igniter handles project setup and code generation, which will help us get started a lot quicker.&lt;/p&gt;
&lt;p&gt;To start off, the following command installs Igniter:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;mix archive.install hex igniter_new&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Once we have Igniter, the next step is creating a new project in the &lt;code&gt;yggdrasil&lt;/code&gt; directory, adding Ash, and moving into it.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;mix igniter.new yggdrasil --install ash &amp;amp;&amp;amp; cd yggdrasil&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;This will land us in a new Elixir project directory with Ash installed. From this seed we will evolve our application to represent the worlds and characters of Norse mythology.&lt;/p&gt;
&lt;p&gt;The newly created project comes with a &lt;code&gt;hello&lt;/code&gt; function in the &lt;code&gt;lib/yggdrasil.ex&lt;/code&gt; module. Let&amp;#39;s try it out in &lt;code&gt;iex&lt;/code&gt;, the Elixir interactive shell, which we can start with:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;iex -S mix&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Running it will bring us into the shell, where we can call the &lt;code&gt;hello&lt;/code&gt; function in the &lt;code&gt;Yggdrasil&lt;/code&gt; module:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;iex(1)&amp;gt; Yggdrasil.hello()
:world&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We now have a hello world, but in Yggdrasil we want to represent the &lt;em&gt;worlds&lt;/em&gt; of Norse mythology that are linked by the tree. The first thing we need for this is a &lt;em&gt;Domain&lt;/em&gt;. This will function as a container for the various concepts, such as the worlds, that we will introduce later.&lt;/p&gt;
&lt;p&gt;For simplicity&amp;#39;s sake, first we will replace the contents of &lt;code&gt;lib/yggdrasil.ex&lt;/code&gt; with the following:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-elixir&quot;&gt;defmodule Yggdrasil do
  @moduledoc &amp;quot;&amp;quot;&amp;quot;
  The Yggdrasil domain — acts as the trunk of the tree
  and organizes all resources like World and Character.
  &amp;quot;&amp;quot;&amp;quot;

  use Ash.Domain

  resources do
    # Resources will be registered here
    resource Yggdrasil.World
  end
end&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We create a file &lt;code&gt;lib/resources/world.ex&lt;/code&gt; with the following contents:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-elixir&quot;&gt;defmodule Yggdrasil.World do
  @moduledoc &amp;quot;&amp;quot;&amp;quot;
  A resource representing a world in Yggdrasil.
  &amp;quot;&amp;quot;&amp;quot;

  use Ash.Resource,
    # in-memory store
    data_layer: Ash.DataLayer.Ets,
    domain: Yggdrasil

  actions do
    create :create do
      accept [:name, :description]
    end

    update :update do
      accept [:description]
    end

    # Provide default actions
    defaults [:read, :destroy]
  end

  attributes do
    # Primary key
    uuid_primary_key :id

    # World name and description
    attribute :name, :string, allow_nil?: false, public?: true
    attribute :description, :string, public?: true
  end
end&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And finally, we register the domain by adding the following line to our configuration (&lt;code&gt;config/config.exs&lt;/code&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-elixir&quot;&gt;config :yggdrasil, :ash_domains, [Yggdrasil]&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With these files in place, we now have a minimal Ash domain containing a single resource: World. Let’s take a moment to unpack what we just created before moving on.&lt;/p&gt;
&lt;p&gt;At the top level, Yggdrasil acts as our domain, the trunk of our system. It brings together all the resources that make up the application and defines how they relate to each other. Right now, our domain only includes one resource, Yggdrasil.World, but we’ll add more later.&lt;/p&gt;
&lt;p&gt;The Yggdrasil.World module itself is declared as a resource. In Ash, a resource is the fundamental building block. It describes a specific type of data and what can be done with it. Instead of writing separate schemas, changesets, and controllers, we declare everything about a resource in one place, and Ash takes care of the details.&lt;/p&gt;
&lt;p&gt;Our World resource uses the Ash.DataLayer.Ets data layer, which stores data in Elixir’s in-memory ETS tables. This setup is fast and simple, making it perfect for early experimentation, though data won’t persist between runs. Later, this can be swapped out for a different data layer to gain full persistence. The argument &lt;code&gt;domain: Yggdrasil&lt;/code&gt; connects the resource back to the domain we just defined so that the framework knows where it belongs.&lt;/p&gt;
&lt;p&gt;Inside the actions block, we declare what operations are available for this resource. The create action accepts a name and a description, the update action accepts only a description, and the &lt;code&gt;defaults [:read, :destroy]&lt;/code&gt; line automatically adds the standard read and destroy (delete) actions. There’s no need to write any manual CRUD logic; Ash generates it for us.&lt;/p&gt;
&lt;p&gt;The attributes block defines the structure of each world. Every world has a UUID primary key (&lt;code&gt;:id&lt;/code&gt;) and two fields, &lt;code&gt;:name&lt;/code&gt; and &lt;code&gt;:description&lt;/code&gt;. The &lt;code&gt;:name&lt;/code&gt; attribute is made required using &lt;code&gt;allow_nil?: false&lt;/code&gt;, ensuring that each world must have one. Both attributes are marked &lt;code&gt;public?: true&lt;/code&gt; so they appear in APIs and outputs.&lt;/p&gt;
&lt;p&gt;Finally, the configuration line we added tells Ash which domains to load when the application starts. Without this, the framework wouldn’t know about our new resource.&lt;/p&gt;
&lt;p&gt;At this point, our small Ash tree has already taken root. We’ve declared the first piece of our domain, and Ash now knows how to create, read, update, and delete worlds. Let’s see that in action next by exploring our resource interactively in iex.&lt;/p&gt;
&lt;p&gt;First, start the interactive shell from your project root:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;iex -S mix&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once inside iex, we want to create our first world, Asgard, the shining realm of the gods, with the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-elixir&quot;&gt;asgard = (
  Yggdrasil.World
  |&amp;gt; Ash.Changeset.for_create(:create, %{
       name: &amp;quot;Asgard&amp;quot;,
       description: &amp;quot;A shining realm of order and power, suspended high above the clouds.&amp;quot;
     })
  |&amp;gt; Ash.create!()
)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;which would return the world as such: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-elixir&quot;&gt;14:29:14.665 [debug] Creating Yggdrasil.World:

Setting %{id: &amp;quot;726b678e-6cb6-4277-b291-85ecfa313d3a&amp;quot;, name: &amp;quot;Asgard&amp;quot;,
 description: &amp;quot;A shining realm of order and power,...}

%Yggdrasil.World{
  id: &amp;quot;726b678e-6cb6-4277-b291-85ecfa313d3a&amp;quot;,
  name: &amp;quot;Asgard&amp;quot;,
  description: &amp;quot;A shining realm of order and power, suspended high above the clouds.&amp;quot;,
  __meta__: #Ecto.Schema.Metadata&amp;lt;:loaded&amp;gt;
}&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;There are multiple things happening here, so let&amp;#39;s unwrap them step by step.&lt;/p&gt;
&lt;p&gt;First we start off with the Ash resource that we defined, &lt;code&gt;Yggdrasil.World&lt;/code&gt;. In Ash, resources describe the structure of our data, including attributes like name and description, as well as the actions that can be performed on them.&lt;/p&gt;
&lt;p&gt;Next we use the pipe operator, &lt;code&gt;|&amp;gt;&lt;/code&gt;, to pass this value to the next function. Elixir’s pipe operator takes the result of the expression on the left and passes it as the first argument to the function on the right.&lt;/p&gt;
&lt;p&gt;For example, instead of writing:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-elixir&quot;&gt;function(value, a, b)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;with the pipe operator we can equivalently write:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-elixir&quot;&gt;value |&amp;gt; function(a, b)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This allows us to write a sequence of operations in a readable, step-by-step style. In our code example, it means &lt;code&gt;Yggdrasil.World&lt;/code&gt; is passed into the &lt;code&gt;Ash.Changeset.for_create&lt;/code&gt; function as its first argument. This function also takes the identifier of our create action, &lt;code&gt;:create&lt;/code&gt;, as well as the map representing Asgard, with its name and description.&lt;/p&gt;
&lt;p&gt;What this function returns is a &lt;code&gt;changeset&lt;/code&gt;, a data structure representing an intended change to a resource in Ash (e.g. creating or updating). This is especially useful when it comes to validation and error checking, as we will see later. For now we take this changeset and pipe it into the function that executes the actual creation: &lt;code&gt;Ash.create!()&lt;/code&gt;.&lt;/p&gt;
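&lt;p&gt;As a side note (a sketch of my own, not something we need for this project): Ash also ships non-raising variants of these functions. Where &lt;code&gt;Ash.create!()&lt;/code&gt; raises on failure, plain &lt;code&gt;Ash.create()&lt;/code&gt; returns a result tuple, which is convenient when you want to handle validation errors yourself:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-elixir&quot;&gt;case Yggdrasil.World
     |&amp;gt; Ash.Changeset.for_create(:create, %{description: &amp;quot;A nameless realm&amp;quot;})
     |&amp;gt; Ash.create() do
  {:ok, world} -&amp;gt;
    world

  {:error, error} -&amp;gt;
    # :name has allow_nil?: false, so this changeset is invalid and we land here
    error
end&lt;/code&gt;&lt;/pre&gt;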
&lt;p&gt;The resulting value is a &lt;code&gt;%Yggdrasil.World{}&lt;/code&gt; struct, which represents the newly created world. Ash also automatically generated a UUID for the id field, which uniquely identifies this world inside the system.&lt;/p&gt;
&lt;p&gt;Before returning the struct, Ash logs the operation it performed. That is why we see the debug output:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-elixir&quot;&gt;[debug] Creating Yggdrasil.World&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The final line is the Elixir struct that was created:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-elixir&quot;&gt;%Yggdrasil.World{
  id: &amp;quot;726b678e-6cb6-4277-b291-85ecfa313d3a&amp;quot;,
  name: &amp;quot;Asgard&amp;quot;,
  description: &amp;quot;A shining realm of order and power, suspended high above the clouds.&amp;quot;,
  __meta__: #Ecto.Schema.Metadata&amp;lt;:loaded&amp;gt;
}&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This struct is also stored in the variable &lt;code&gt;asgard&lt;/code&gt;, so we can reference it later in the session.&lt;/p&gt;
&lt;p&gt;Now that we understand how creating a world works, let’s add another one.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-elixir&quot;&gt;midgard = (
  Yggdrasil.World
  |&amp;gt; Ash.Changeset.for_create(:create, %{
       name: &amp;quot;Midgard&amp;quot;,
       description: &amp;quot;The realm of humans, bound to the earth and everyday struggles.&amp;quot;
     })
  |&amp;gt; Ash.create!()
)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This follows the exact same pattern as before. We build a changeset that describes the creation of Midgard, and then execute it with &lt;code&gt;Ash.create!()&lt;/code&gt;. Much simpler than the mythological creation of &lt;a href=&quot;https://en.wikipedia.org/wiki/Midgard&quot;&gt;Midgard&lt;/a&gt;, which involved the slaying of the giant Ymir.&lt;/p&gt;
&lt;p&gt;Now that we have some worlds, let&amp;#39;s read them using the read action: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-elixir&quot;&gt;worlds = (
  Yggdrasil.World
  |&amp;gt; Ash.Query.for_read(:read)
  |&amp;gt; Ash.read!()
)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;which would give us our list of worlds:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-elixir&quot;&gt;[
  %Yggdrasil.World{
    id: &amp;quot;some-uuid-1&amp;quot;,
    name: &amp;quot;Asgard&amp;quot;,
    description: &amp;quot;A shining realm of order and power, suspended high above the clouds.&amp;quot;
  },
  %Yggdrasil.World{
    id: &amp;quot;some-uuid-2&amp;quot;,
    name: &amp;quot;Midgard&amp;quot;,
    description: &amp;quot;The realm of humans, bound to the earth and everyday struggles.&amp;quot;
  }
]&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As one might expect, we can also do an update call. For example, let’s change the description of Asgard:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-elixir&quot;&gt;asgard = (
  asgard
  |&amp;gt; Ash.Changeset.for_update(:update, %{
       description: &amp;quot;The fortified realm of the Aesir, ruled by Odin.&amp;quot;
     })
  |&amp;gt; Ash.update!()
)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;There are a few things to note here. Instead of starting from the &lt;code&gt;Yggdrasil.World&lt;/code&gt; module, we now start from the existing asgard struct. This is because we are modifying a resource that already exists.&lt;/p&gt;
&lt;p&gt;The function &lt;code&gt;for_update&lt;/code&gt; creates a changeset that describes the intended update. Just like with creation, the changeset itself does not perform the update, it only represents the change we want to make.&lt;/p&gt;
&lt;p&gt;We then pass this changeset into &lt;code&gt;Ash.update!()&lt;/code&gt;, which executes the update. Ash applies the changes, runs any validations, and returns the updated &lt;code&gt;%Yggdrasil.World{}&lt;/code&gt; struct.&lt;/p&gt;
&lt;p&gt;We can verify the change by reading the list of worlds again:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-elixir&quot;&gt;worlds = (
  Yggdrasil.World
  |&amp;gt; Ash.Query.for_read(:read)
  |&amp;gt; Ash.read!()
)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;which would give us a result such as: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-elixir&quot;&gt;[
  %Yggdrasil.World{
    id: &amp;quot;6b62b3ea-b08b-4387-8539-37e645e53026&amp;quot;,
    name: &amp;quot;Midgard&amp;quot;,
    description: &amp;quot;The realm of humans, bound to the earth and everyday struggles.&amp;quot;,
    __meta__: #Ecto.Schema.Metadata&amp;lt;:loaded&amp;gt;
  },
  %Yggdrasil.World{
    id: &amp;quot;d2646509-6c92-4049-a2db-0555612fc365&amp;quot;,
    name: &amp;quot;Asgard&amp;quot;,
    description: &amp;quot;The fortified realm of the Aesir, ruled by Odin.&amp;quot;,
    __meta__: #Ecto.Schema.Metadata&amp;lt;:loaded&amp;gt;
  }
]&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;An interesting thing we could try out is updating the name of a world instead:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-elixir&quot;&gt;asgard2 = (
  asgard
  |&amp;gt; Ash.Changeset.for_update(:update, %{
       name: &amp;quot;Asgard2&amp;quot;
     })
  |&amp;gt; Ash.update!()
)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We get the following error: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-elixir&quot;&gt;** (Ash.Error.Invalid)
Invalid Error

* No such input `name` for action Yggdrasil.World.update

The attribute exists on Yggdrasil.World, but is not accepted by Yggdrasil.World.update

Perhaps you meant to add it to the accept list for Yggdrasil.World.update?


Valid Inputs:

* description&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is because, when we defined the update action in our module, the only attribute we accept is &lt;code&gt;:description&lt;/code&gt;; see the fragment below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-elixir&quot;&gt;update :update do
      accept [:description]
    end&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In other words, while the name attribute exists on the resource, it is not allowed to be modified through the update action. This is a domain modelling decision, and gives us fine-grained control over how our data can change. In this case, we decided that a world’s name is fixed after creation, while its description can evolve over time.&lt;/p&gt;
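&lt;p&gt;If we later decided that names should be editable after all, the change would be as simple as widening the accept list (a hypothetical variant, not part of the resource we are building here):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-elixir&quot;&gt;update :update do
  # now both fields can change through the update action
  accept [:name, :description]
end&lt;/code&gt;&lt;/pre&gt;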
&lt;p&gt;Finally we get to delete, where we destroy Asgard (our Ragnarok action, if you will). We can do this with the following:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-elixir&quot;&gt;Ash.destroy!(asgard)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;Ash.destroy!&lt;/code&gt; takes a resource struct, in this case &lt;code&gt;asgard&lt;/code&gt;, and removes it from the data store. Since we’re using an in-memory ETS store, the deletion happens immediately. The function returns &lt;code&gt;:ok&lt;/code&gt; on success. We can double-check this by requesting our list of worlds again by our usual means:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-elixir&quot;&gt;worlds = (
  Yggdrasil.World
  |&amp;gt; Ash.Query.for_read(:read)
  |&amp;gt; Ash.read!()
)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;which returns only Midgard:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-elixir&quot;&gt;[
  %Yggdrasil.World{
    id: &amp;quot;6b62b3ea-b08b-4387-8539-37e645e53026&amp;quot;,
    name: &amp;quot;Midgard&amp;quot;,
    description: &amp;quot;The realm of humans, bound to the earth and everyday struggles.&amp;quot;,
    __meta__: #Ecto.Schema.Metadata&amp;lt;:loaded&amp;gt;
  }
]&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;At this point, we’ve taken the first steps in modeling our little piece of Yggdrasil. We have a domain, a resource, and a way to create, read, update, and delete worlds, enough to bring about a small Ragnarok.&lt;/p&gt;
&lt;p&gt;Next, we will explore how we can start connecting resources together. After all, the worlds need their heroes and villains to really come alive.&lt;/p&gt;</content:encoded>
</item>
<item>
<title>How Many Paradigms Does It Take to Screw In a Lightbulb?</title>
<link>https://rocket-science.ru/hacking/2026/04/06/paradigms-for-lightbulb</link>
<guid isPermaLink="false">ysRzvH1jN84Qm_Agm_de_LqeXFor5GTkMeR9hw==</guid>
<pubDate>Mon, 06 Apr 2026 12:52:41 +0000</pubDate>
<description>A developer who knows only one programming paradigm resembles a carpenter whose entire toolbox contains a single hammer. Naturally, a hammer will drive a nail with admirable precision. Or a screw, if sufficient enthusiasm is applied. But try to saw or plane a board with that hammer, and it becomes immediately clear—assuming you’ve encountered a saw or a plane at least once in your life—that the instrument has been chosen poorly. So it is with paradigms: knowledge of nothing but imperati...</description>
<content:encoded>&lt;p&gt;A developer who knows only one programming paradigm resembles a carpenter whose entire toolbox contains a single hammer. Naturally, a hammer will drive a nail with admirable precision. Or a screw, if sufficient enthusiasm is applied. But try to saw or plane a board with that hammer, and it becomes immediately clear—assuming you’ve encountered a saw or a plane at least once in your life—that the instrument has been chosen poorly. So it is with paradigms: knowledge of nothing but imperative programming, or nothing but object-oriented design, transforms a developer into a mechanical executor of tasks, incapable of seeing an elegant solution even when it lies on the surface, waiting to be noticed.&lt;/p&gt;&lt;p&gt;The narrowness of a programmer trapped in a single paradigm manifests in everything. They will erect loops where a single higher-order function would suffice. They will breed classes and inheritance where a pure function and composition would have been more than enough. They will attempt to verify the correctness of an algorithm with a debugger and tests instead of proving it formally at the type level. Such a developer resembles a tourist who knows exactly one word of the foreign language and is attempting, with its help, to explain a route across the entire city to a taxi driver. And it’s a small mercy if the word isn’t obscene.&lt;/p&gt;&lt;p&gt;Let us, for a start, walk through the principal paradigms and see what instruments each offers for solving problems. We’ll begin with the most ancient and familiar—the imperative paradigm.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Imperative programming&lt;/strong&gt; is the world of instructions and mutable state. The programmer tells the machine: do this, then that, change this variable, repeat five times. 
A classical example in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;C&lt;/code&gt;:&lt;/p&gt;&lt;div&gt;&lt;div&gt;&lt;pre&gt;&lt;code&gt;int sum = 0;
for (int i = 0; i &amp;lt; 10; i++) {
    sum += i;
}&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;Here we explicitly manage the state of the variable &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sum&lt;/code&gt;, accumulating the result step by step. This is natural for the machine, but tedious for the human. Every step must be spelled out, every mutation tracked. The imperative style serves well when the task reduces to a sequence of actions with side effects: write to a file, update a database, print to the screen. But as soon as the task grows in complexity, the code devolves into a tangle of interrelated variables and conditions.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Procedural programming&lt;/strong&gt; is the imperative approach enriched with structures and functions. We group instructions into procedures to avoid repetition and improve readability. The same example:&lt;/p&gt;&lt;div&gt;&lt;div&gt;&lt;pre&gt;&lt;code&gt;int calculate_sum(int n) {
    int sum = 0;
    for (int i = 0; i &amp;lt; n; i++) {
        sum += i;
    }
    return sum;
}&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;Now the logic is packaged into a function that can be reused. The procedural style dominated the era of Pascal and early &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;C&lt;/code&gt;. It taught programmers to think in modules and structure their code, but it never freed them from the problems of mutable state and side effects.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Object-oriented programming&lt;/strong&gt; (in Gosling’s understanding, not Kay’s) promised to solve all problems at once: encapsulation, inheritance, polymorphism—the three pillars upon which the entire world supposedly rests. Data and methods unite into objects, objects assemble into class hierarchies. 
It sounds splendid, until you begin to examine how the code actually works:&lt;/p&gt;&lt;div&gt;&lt;div&gt;&lt;pre&gt;&lt;code&gt;class Counter {
    private int value = 0;

    public void increment() { value++; }

    public int getValue() { return value; }
}&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;State lives inside the object, convenient methods form the API, full encapsulation achieved. So it would seem, but the state hasn’t gone anywhere—it has merely relocated into a class field. And along with it relocated all the old afflictions: data races in multithreading, the difficulty of testing, the unpredictability of behavior. The object-oriented approach serves well for modeling a domain when you need to describe entities and their interactions. But it transforms into a nightmare when class hierarchies sprawl to dozens of inheritance levels, and half the methods exist solely to pass a call further down the chain.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Functional programming&lt;/strong&gt; looks at the task from an entirely different angle. Here there is no mutable state, no loops, no side effects. There are only functions that receive data and return results. The same summation example in Haskell:&lt;/p&gt;&lt;div&gt;&lt;div&gt;&lt;pre&gt;&lt;code&gt;sum = foldl (+) 0 [0..9]&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;One line instead of five. No loops, no intermediate variables. The function &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;foldl&lt;/code&gt; takes (1) an addition operation, (2) an initial value, and (3) a list, returning the result. The code reads like a mathematical expression, not a sequence of commands. The functional style is particularly well suited for working with collections, for building data-processing pipelines, for parallel computation. When there is no mutable state, there is no need for locks and synchronization. Functions can be safely launched simultaneously on different processor cores. 
Though for the domain of &lt;em&gt;Accounting for a liquor store in the suburbs&lt;/em&gt;—it’s a rather dubious ally.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Logic programming&lt;/strong&gt; overturns one’s very notion of how to write code. Instead of explaining &lt;em&gt;how&lt;/em&gt; to solve a task, the programmer describes &lt;em&gt;what&lt;/em&gt; they want to obtain. The system finds the solution on its own. &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Prolog&lt;/code&gt; is the classical representative of this paradigm:&lt;/p&gt;&lt;div&gt;&lt;div&gt;&lt;pre&gt;&lt;code&gt;parent(tom, bob).
parent(tom, liz).
parent(bob, ann).

grandparent(X, Z) :- parent(X, Y), parent(Y, Z).&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;We described kinship relations and a rule for determining grandparents. Now we can pose the question: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;grandparent(tom, ann)&lt;/code&gt;?—and the system will answer “yes,” having found the path through the facts. Logic programming is indispensable in certain corners of artificial intelligence, expert systems, and task planning. I even &lt;a href=&quot;https://habr.com/ru/articles/885668/&quot;&gt;dragged it into&lt;/a&gt; the consistency validation of finite automata in one of &lt;a href=&quot;https://hexdocs.pm/finitomata&quot;&gt;my libraries&lt;/a&gt;. But an attempt to write a web server in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Prolog&lt;/code&gt; would look rather like an attempt to hammer a mole with a microscope.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Declarative programming&lt;/strong&gt; is a general term for approaches where the programmer describes the desired result rather than the sequence of steps. 
SQL is the textbook example:&lt;/p&gt;&lt;div&gt;&lt;div&gt;&lt;pre&gt;&lt;code&gt;SELECT name
FROM users
WHERE age &amp;gt; 18
ORDER BY name;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;We don’t explain how to traverse the table, how to check the condition, how to sort the result. We simply declare: I want the names of users over eighteen, sorted alphabetically. The database will figure out how to do this efficiently on its own. The declarative style dominates in HTML, CSS (for now—I suspect someone will drag recursion into it before long), and configuration files. It allows one to separate the &lt;em&gt;what&lt;/em&gt; from the &lt;em&gt;how&lt;/em&gt;.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Concatenative programming&lt;/strong&gt; is built on the idea of function composition via a stack. &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Forth&lt;/code&gt; is its most vivid representative:&lt;/p&gt;&lt;div&gt;&lt;div&gt;&lt;pre&gt;&lt;code&gt;: square dup * ;
5 square .&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;The function &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;square&lt;/code&gt; duplicates the top element of the stack and multiplies it by itself. The number 5 is placed on the stack, the function &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;square&lt;/code&gt; is applied, the result is printed. The code reads left to right, in reverse Polish notation: operands first, then the words that consume them. Concatenative languages are compact and efficient, but they demand a particular cast of mind. They remain popular in embedded systems and wherever code size and execution speed are critical.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Reactive programming&lt;/strong&gt; focuses on data streams and the propagation of changes. When a data source changes, all dependent computations update automatically. An example in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;RxJS&lt;/code&gt;:&lt;/p&gt;&lt;div&gt;&lt;div&gt;&lt;pre&gt;&lt;code&gt;const clicks = fromEvent(document, &amp;#39;click&amp;#39;);
const positions = clicks.pipe(map(event =&amp;gt; event.clientX));
positions.subscribe(x =&amp;gt; console.log(x));&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;We create a stream of click events, transform it into a stream of coordinates, and subscribe to changes. Each click automatically produces the coordinate in the output. The reactive style is ideal for interfaces, event handling, and working with asynchronous data sources. It liberates you from callback hell and makes the data flow explicit.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Aspect-oriented programming&lt;/strong&gt; addresses the problem of cross-cutting concerns—logging, caching, access control. 
Instead of smearing these aspects across the entire codebase, they can be described separately:&lt;/p&gt;&lt;div&gt;&lt;div&gt;&lt;pre&gt;&lt;code&gt;@Transactional
@Logged
public void updateUser(User user) {
    repository.save(user);
}&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;The annotations &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;@Transactional&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;@Logged&lt;/code&gt; are aspects. They will be automatically “applied” to the method, wrapping it in a transaction and adding logging. The core code remains clean and comprehensible. The aspect-oriented approach is popular in enterprise development, where cross-cutting concerns permeate the entire system.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Metaprogramming&lt;/strong&gt; is the programming of programs that write programs. Macros in LISP allow code to be generated at compile time:&lt;/p&gt;&lt;div&gt;&lt;div&gt;&lt;pre&gt;&lt;code&gt;(defmacro when (condition &amp;amp;rest body)
  `(if ,condition (progn ,@body)))&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;The macro &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;when&lt;/code&gt; expands into an &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;if&lt;/code&gt; construct with a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;progn&lt;/code&gt; block. Metaprogramming grants extraordinary flexibility, enabling the creation of domain-specific languages right inside the host language. But with great power comes great responsibility: poorly written macros turn code into an unreadable mess. If you want to see what metaprogramming looks like when practiced by a sane person—take any of my libraries, or write your own in &lt;em&gt;Elixir&lt;/em&gt;. 
I know of no other language where macros have been done properly.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Dependently-typed programming&lt;/strong&gt; elevates the type system to a new plane. Types can depend on values, allowing complex invariants to be expressed at the type level.&lt;/p&gt;&lt;div&gt;&lt;div&gt;&lt;pre&gt;&lt;code&gt;data Vec (A : Set) : Nat -&amp;gt; Set where
  []   : Vec A zero
  _::_ : {n : Nat} -&amp;gt; A -&amp;gt; Vec A n -&amp;gt; Vec A (suc n)

append : {A : Set} {m n : Nat} -&amp;gt; Vec A m -&amp;gt; Vec A n -&amp;gt; Vec A (m + n)&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;The type &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Vec A n&lt;/code&gt; is a vector of elements of type &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;A&lt;/code&gt; with length &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;n&lt;/code&gt;. The function &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;append&lt;/code&gt; takes two vectors of lengths &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;m&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;n&lt;/code&gt; and returns a vector of length &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;m + n&lt;/code&gt;. The compiler verifies correctness at the type level. It is impossible to write a function that violates the length invariant. Dependent types are used for the formal verification of critical systems, where an error costs far too much.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Theorem-proving&lt;/strong&gt; as a paradigm is the proof of program correctness by mathematical methods. 
&lt;em&gt;Lean&lt;/em&gt; and &lt;em&gt;Coq&lt;/em&gt; allow one to write not merely code, but proofs that the code does precisely what was intended:&lt;/p&gt;&lt;div&gt;&lt;div&gt;&lt;pre&gt;&lt;code&gt;theorem add_comm (n m : Nat) : n + m = m + n := by
  induction n with
  | zero =&amp;gt; simp [Nat.zero_add, Nat.add_zero]
  | succ n ih =&amp;gt; simp [Nat.succ_add, Nat.add_succ, ih]&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;This is not simply an addition function—it is a proof that addition is commutative. The compiler doesn’t merely check types; it checks the mathematical proof. This approach is employed in cryptography, compilers, and operating systems—domains where the price of an error is measured not in irritated users, but in human lives or millions of dollars in losses.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;The actor model&lt;/strong&gt; views a program as a collection of independent actors that exchange messages. Each actor has its own &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;mailbox&lt;/code&gt;, processes messages sequentially, and can create new actors. &lt;em&gt;Erlang&lt;/em&gt; was built upon this idea:&lt;/p&gt;&lt;div&gt;&lt;div&gt;&lt;pre&gt;&lt;code&gt;-module(counter).
-export([start/0, loop/1]).

start() -&amp;gt;
    spawn(fun() -&amp;gt; loop(0) end).

loop(N) -&amp;gt;
    receive
        {increment, Pid} -&amp;gt;
            Pid ! {value, N + 1},
            loop(N + 1);
        {get, Pid} -&amp;gt;
            Pid ! {value, N},
            loop(N)
    end.&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;The actor &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;counter&lt;/code&gt; receives &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;increment&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;get&lt;/code&gt; messages, modifies its state, and replies. No shared data, no locks. Actors scale horizontally, failures are isolated. 
This model is ideal for distributed systems, where failures are the norm rather than the exception.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Dataflow programming&lt;/strong&gt; describes computation as a graph of data streams. The nodes of the graph are operations, the edges are data flows between them. A change in one node propagates automatically through the graph. &lt;em&gt;LabVIEW&lt;/em&gt; uses visual dataflow programming for hardware control. The approach is intuitive for engineers accustomed to thinking in schematics and diagrams.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Constraint programming&lt;/strong&gt; describes a task as a set of constraints that must be satisfied. The system searches for a solution by enumerating possibilities and pruning the impossible. &lt;em&gt;MiniZinc&lt;/em&gt; is a language for constraint programming:&lt;/p&gt;&lt;div&gt;&lt;div&gt;&lt;pre&gt;&lt;code&gt;var 1..9: x;
var 1..9: y;
constraint x + y = 10;
constraint x * y = 21;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;Two variables, two constraints. The system will find &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;x = 3, y = 7&lt;/code&gt; or &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;x = 7, y = 3&lt;/code&gt;. Constraint programming is applied in planning, scheduling, and resource optimization—wherever a task is formulated as finding a solution under constraints.&lt;/p&gt;&lt;p&gt;Phew.&lt;/p&gt;&lt;p&gt;Now let us pose the question: why does any of this matter to an ordinary developer? The answer is simple and simultaneously non-obvious. &lt;strong&gt;Each paradigm is a way of thinking, an approach to solving problems.&lt;/strong&gt; A programmer who knows only imperative programming will solve every task with loops and conditionals. They will see a list-processing task and write a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;for&lt;/code&gt; loop with intermediate variables. A programmer acquainted with the functional paradigm will write &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;map&lt;/code&gt; or &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;fold&lt;/code&gt;—elegantly, concisely, free of side effects. One who has mastered reactive programming will construct an event-processing pipeline where each stage is explicitly described and easily testable.&lt;/p&gt;&lt;p&gt;Knowledge of different paradigms expands one’s arsenal of tools. You won’t write a web server in &lt;em&gt;Prolog&lt;/em&gt; or prove theorems in &lt;em&gt;JavaScript&lt;/em&gt;. But an understanding of logic programming will help you formulate conditions more precisely and build database queries. Familiarity with dependent types will teach you to think in invariants and express constraints at the type-system level. 
Experience with actors will show you how to build scalable distributed systems without the headaches of synchronization.&lt;/p&gt;&lt;p&gt;In truth, in the modern world all mature languages have long since become multi-paradigm. &lt;em&gt;Scala&lt;/em&gt; combines object-oriented and functional approaches. &lt;em&gt;Rust&lt;/em&gt; adds a powerful ownership and borrowing system to the imperative style. &lt;em&gt;Python&lt;/em&gt; allows one to write procedurally, in an object-oriented fashion, and functionally. &lt;em&gt;F#&lt;/em&gt; unites functional programming with the &lt;em&gt;.NET&lt;/em&gt; ecosystem. &lt;em&gt;Swift&lt;/em&gt; attempts to incorporate elements of all major paradigms at once. A programmer who understands when an aspect is needed (yes, in any language—for instance, I &lt;a href=&quot;https://hexdocs.pm/telemetria&quot;&gt;dragged aspects into&lt;/a&gt; &lt;em&gt;Elixir&lt;/em&gt;) uses the language to its full power. One who knows only a single paradigm writes in any syntax as though it were PHP.&lt;/p&gt;&lt;p&gt;Paradigms are not a religion where you must choose one true faith and wage war on the heretics. They are tools, and a good craftsman knows when to reach for the hammer, when for the saw, and when for the plane. Need to parse something? Take the functional approach with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;map&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;fold&lt;/code&gt;. Build a system with thousands of simultaneous connections? Actors are your choice. Formally prove an algorithm’s correctness? Welcome to &lt;em&gt;Lean&lt;/em&gt; or &lt;em&gt;Agda&lt;/em&gt;. Developing an interface with many interactive elements? Reactive programming will make the code comprehensible.&lt;/p&gt;&lt;p&gt;A programmer trapped in a single paradigm is condemned to solve problems inefficiently. They will drag familiar patterns behind them even when those patterns don’t fit. 
They will write a class where a function would suffice. They will create mutable state where it could be avoided entirely. They will erect a complex hierarchy where composition would have been enough. They resemble a person who knows only one route from home to work and stubbornly waits at the bus stop every day, even though the road has been torn up for a month and the bus now runs on the next street over.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;If a developer claims the badge of mid-level-plus but doesn’t feel at ease in at least the five principal paradigms—they are a pompous fool, and you should show them the door.&lt;/strong&gt;&lt;/p&gt;</content:encoded>
</item>
<item>
<title>Rails on the BEAM</title>
<link>https://intertwingly.net/blog/2026/04/02/Rails-on-the-BEAM.html</link>
<guid isPermaLink="false">fMGmbR7zMWGvMfIgnim4ph5idQqa3KBQCM1_wA==</guid>
<pubDate>Sat, 04 Apr 2026 02:36:04 +0000</pubDate>
<description>Rails on the BEAM</description>
<content:encoded>&lt;header&gt;
&lt;h3&gt;&lt;a href=&quot;https://intertwingly.net/blog/2026/04/02/Rails-on-the-BEAM.html&quot;&gt;Rails on the BEAM&lt;/a&gt;&lt;/h3&gt;
&lt;hr/&gt;&lt;div&gt;&lt;time&gt;2026-04-02T23:21:00Z&lt;/time&gt;&lt;/div&gt;

&lt;/header&gt;
&lt;img src=&quot;https://intertwingly.net/blog/images/elixir.svg&quot; alt=&quot;&quot; title=&quot;&quot;/&gt;

&lt;p&gt;Same blog from the &lt;a href=&quot;https://intertwingly.net/blog/2026/01/28/Twilight-Zone.html&quot;&gt;Twilight Zone post&lt;/a&gt;. Same models, controllers, views. Same Turbo Streams broadcasting. Same Action Cable protocol. But check what&amp;#39;s serving it:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;npx github:ruby2js/juntos --demo blog
cd blog
npx juntos db:prepare
npx juntos up -d sqlite_napi&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Open &lt;a href=&quot;http://localhost:3000&quot;&gt;http://localhost:3000&lt;/a&gt;. Open a second tab. Create an article. Watch it appear in both. That part you&amp;#39;ve seen before.&lt;/p&gt;
&lt;p&gt;Now imagine a bug in a request handler crashes one of the JavaScript runtimes. On Node.js, that takes down the process — connections drop, state is lost, restart from scratch. On the BEAM, the OTP supervisor restarts just that runtime. The other runtimes keep serving. WebSocket connections stay open. Turbo picks up where it left off.&lt;/p&gt;
&lt;h2&gt;What Changed&lt;/h2&gt;
&lt;p&gt;Nothing in the application. The Ruby source is identical. What changed is the target:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Target&lt;/th&gt;
&lt;th&gt;Command&lt;/th&gt;
&lt;th&gt;Database&lt;/th&gt;
&lt;th&gt;Broadcasting&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Browser&lt;/td&gt;
&lt;td&gt;&lt;code&gt;juntos dev -d dexie&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;IndexedDB&lt;/td&gt;
&lt;td&gt;BroadcastChannel&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Node.js&lt;/td&gt;
&lt;td&gt;&lt;code&gt;juntos up -d sqlite&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;SQLite&lt;/td&gt;
&lt;td&gt;WebSocket server&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;BEAM&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;&lt;code&gt;juntos up -d sqlite_napi&lt;/code&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;SQLite or PostgreSQL&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;OTP :pg&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;The browser uses the &lt;a href=&quot;https://developer.mozilla.org/en-US/docs/Web/API/Broadcast_Channel_API&quot;&gt;BroadcastChannel API&lt;/a&gt; for cross-tab sync. Node.js runs a WebSocket server. The BEAM uses Erlang&amp;#39;s &lt;a href=&quot;https://www.erlang.org/doc/apps/kernel/pg.html&quot;&gt;process groups&lt;/a&gt; — distributed by default, no external dependencies.&lt;/p&gt;
&lt;h2&gt;QuickBEAM&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/elixir-volt/quickbeam&quot;&gt;QuickBEAM&lt;/a&gt; is a JavaScript runtime for the Erlang VM, built on &lt;a href=&quot;https://github.com/quickjs-ng/quickjs&quot;&gt;QuickJS-NG&lt;/a&gt; — a lightweight, standards-compliant JavaScript engine. Where Node.js embeds V8 in a C++ process, QuickBEAM embeds QuickJS in an Erlang NIF, giving JavaScript access to the BEAM&amp;#39;s concurrency and fault-tolerance primitives.&lt;/p&gt;
&lt;p&gt;Each QuickBEAM runtime is a lightweight isolate — Elixir can spin up a pool and dispatch requests round-robin across them, each running on its own OS thread. If one crashes, the OTP supervisor restarts it. The others keep serving. This is the same model Erlang uses for telecom switches — let it crash, recover instantly.&lt;/p&gt;
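&lt;p&gt;The restart behaviour described above is plain OTP supervision rather than anything QuickBEAM-specific. A minimal sketch of the idea in Elixir (the &lt;code&gt;RuntimeWorker&lt;/code&gt; module name is a hypothetical placeholder, not QuickBEAM&amp;#39;s actual API):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-elixir&quot;&gt;# Four pooled workers under one supervisor; :one_for_one restarts
# only the child that crashed, leaving the other runtimes serving.
children =
  for i &amp;lt;- 1..4 do
    Supervisor.child_spec({RuntimeWorker, []}, id: {:runtime, i})
  end

Supervisor.start_link(children, strategy: :one_for_one)&lt;/code&gt;&lt;/pre&gt;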
&lt;p&gt;QuickJS isn&amp;#39;t V8 — there&amp;#39;s no JIT, so raw compute is slower. But for a web application that&amp;#39;s mostly I/O (database queries, template rendering, HTTP responses), the difference is negligible. What you gain is a 5MB runtime instead of 45MB, sub-millisecond startup, and the entire OTP ecosystem.&lt;/p&gt;
&lt;h2&gt;The Architecture&lt;/h2&gt;
&lt;p&gt;The application runs inside QuickBEAM — a JavaScript runtime embedded in the Erlang VM. Elixir manages everything around it:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Browser (Turbo, Stimulus, Action Cable client)
    ↕ HTTP + WebSocket
Bandit (Elixir HTTP server)
    ↕ Plug router
QuickBEAM (JavaScript runtime pool)
    ↕ Beam.callSync
Elixir (:pg broadcasts, SQLite NIF, OTP supervision)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The JavaScript application handles request routing, controller logic, view rendering, and model operations — the same code that runs on Node.js. Elixir handles what it&amp;#39;s best at: concurrency, fault tolerance, and distributed messaging.&lt;/p&gt;
&lt;h2&gt;Action Cable, Both Sides&lt;/h2&gt;
&lt;p&gt;The browser runs the real &lt;code&gt;@hotwired/turbo-rails&lt;/code&gt; npm package — the same Action Cable client that Rails uses. The &lt;code&gt;&amp;lt;turbo-cable-stream-source&amp;gt;&lt;/code&gt; custom element connects to &lt;code&gt;/cable&lt;/code&gt; and speaks the Action Cable wire protocol.&lt;/p&gt;
&lt;p&gt;On the server, Elixir implements the other side: WebSocket upgrade via Bandit, subscription management via &lt;code&gt;:pg&lt;/code&gt;, and broadcast delivery in Action Cable&amp;#39;s JSON format. When a model&amp;#39;s &lt;code&gt;broadcasts_to&lt;/code&gt; callback fires, it crosses from JavaScript to Elixir via &lt;code&gt;Beam.callSync(&amp;#39;__broadcast&amp;#39;, channel, html)&lt;/code&gt;, and Elixir fans it out to every subscriber.&lt;/p&gt;
&lt;p&gt;Same protocol. Same custom elements. Same Turbo Stream HTML. The client has no idea it&amp;#39;s talking to Elixir instead of Rails.&lt;/p&gt;
&lt;h2&gt;What the BEAM Adds&lt;/h2&gt;
&lt;p&gt;Things you get for free that would require significant infrastructure on Node.js:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Fault tolerance&lt;/strong&gt; — a runtime crash restarts under OTP supervision, not &lt;code&gt;pm2&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Distributed pub/sub&lt;/strong&gt; — &lt;code&gt;:pg&lt;/code&gt; spans clustered BEAM nodes automatically. Add a node, broadcasts reach it. No Redis, no configuration.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;True parallelism&lt;/strong&gt; — pooled QuickBEAM runtimes on OS threads. Not single-threaded-with-cluster.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Hot upgrades&lt;/strong&gt; — OTP releases support zero-downtime deployment&lt;/li&gt;
&lt;/ul&gt;
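&lt;p&gt;The &lt;code&gt;:pg&lt;/code&gt; primitives behind that pub/sub are small. A hedged sketch of the join-and-fan-out pattern (the group name and message shape are illustrative, not Juntos&amp;#39; actual wire format, and the default &lt;code&gt;:pg&lt;/code&gt; scope is assumed to have been started via &lt;code&gt;:pg.start_link/0&lt;/code&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-elixir&quot;&gt;# A subscriber process joins a named group; :pg tracks membership
# across every connected BEAM node automatically.
:pg.join(:cable_subscribers, self())

# Broadcasting is just sending a message to each current member.
for pid &amp;lt;- :pg.get_members(:cable_subscribers) do
  send(pid, {:broadcast, html})
end&lt;/code&gt;&lt;/pre&gt;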
&lt;p&gt;For production, swap SQLite for PostgreSQL — same app, &lt;code&gt;juntos up -d postgrex&lt;/code&gt;. Database connections are pooled on the Elixir side via Postgrex, and combined with &lt;code&gt;:pg&lt;/code&gt; for broadcasting, you get a fully distributed deployment with no external dependencies beyond Postgres.&lt;/p&gt;
&lt;h2&gt;A Path to Phoenix&lt;/h2&gt;
&lt;p&gt;This isn&amp;#39;t just another deployment target. It&amp;#39;s a migration path.&lt;/p&gt;
&lt;p&gt;Your Rails app runs today inside QuickBEAM. The Elixir scaffold is a thin Plug/Bandit layer. But that layer could be Phoenix. At that point:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Rails controllers that need more performance? Rewrite as Phoenix controllers.&lt;/li&gt;
&lt;li&gt;Models that need distributed state? Move to GenServers.&lt;/li&gt;
&lt;li&gt;Views that need real-time interaction? Swap to LiveView.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;One at a time. The rest keeps running in QuickBEAM. No big-bang rewrite.&lt;/p&gt;
&lt;p&gt;No other migration path offers this. Going from Rails to Phoenix today means starting over. Juntos on BEAM gives you a running app on day one and an incremental path forward.&lt;/p&gt;
&lt;h2&gt;Try It&lt;/h2&gt;
&lt;p&gt;Prerequisites: &lt;a href=&quot;https://nodejs.org/&quot;&gt;Node.js&lt;/a&gt; (18+) and &lt;a href=&quot;https://elixir-lang.org/install.html&quot;&gt;Elixir&lt;/a&gt; (1.18+). That&amp;#39;s all you need.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;npx github:ruby2js/juntos --demo blog
cd blog
npx juntos db:prepare
npx juntos up -d sqlite_napi&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Same code. Same patterns. Different runtime. &lt;a href=&quot;https://github.com/ruby2js/ruby2js&quot;&gt;Source code&lt;/a&gt;. &lt;a href=&quot;https://www.ruby2js.com/docs/juntos/&quot;&gt;Documentation&lt;/a&gt;.&lt;/p&gt;
&lt;hr/&gt;
&lt;p&gt;&lt;em&gt;&lt;a href=&quot;https://www.ruby2js.com/docs/juntos/&quot;&gt;Juntos&lt;/a&gt; is open source: &lt;a href=&quot;https://github.com/ruby2js/ruby2js&quot;&gt;github.com/ruby2js/ruby2js&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;</content:encoded>
</item>
<item>
<title>Our Journey: Building With Generative AI | Revelry</title>
<link>https://revelry.co/insights/artificial-intelligence/building-with-generative-ai/</link>
<enclosure type="image/jpeg" length="0" url="https://revelry.co/wp-content/uploads/2024/01/BLOG-ART-Journey-to-AI-2.2024.jpg"></enclosure>
<guid isPermaLink="false">1PtN8sfBZKQOlgHLVzp7_ByrkZwtb9cDuR6VQQ==</guid>
<pubDate>Mon, 30 Mar 2026 12:23:34 +0000</pubDate>
<description>Blog series by software engineer Daniel Andrews who shares about our product development team&#39;s early experience building with generative AI, including RAG</description>
<content:encoded>&lt;p&gt;Businesses of all sizes and industries are eager to take advantage of generative artificial intelligence (AI), so I’m going to share some details on Revelry’s journey with this emerging technology over the past year. Even if you’re already working in the space, I believe you’ll find something interesting and helpful about our experience. We’ve got a lot of learnings to share, so this will be the first of a series of posts.&lt;/p&gt;



&lt;p&gt;In this first post, I’ll cover our early exploration of generative AI – when we were moving as fast as possible to learn as much as we could. Subsequent posts will get deeper into our learnings around building more complex generative AI systems, in particular diving into how to incorporate Retrieval Augmented Generation (RAG) into software systems (and how you don’t need LangChain to do it).&lt;/p&gt;



&lt;h2&gt;Some Background&lt;/h2&gt;



&lt;p&gt;Here’s a little backstory on Revelry. Since 2013, we’ve been building custom software. Our bread and butter has primarily been web and mobile app development. In the early days, it was all about Rails and Node, but we’ve played around with PHP, .NET, Java, Python, and more.&lt;/p&gt;



&lt;p&gt;We’ve always had a bit of a thing for new and emerging tech. Take React, for instance. Back in 2014, when jQuery and Angular were the big names, we were already building apps with React. And we didn’t stop there – we jumped into React Native pretty early, too. Our first app in React Native hit the App Store when it was just at version 0.21, and now it’s up to 0.73. (By the way, when are we getting a major version update? Looking at you too, LiveView 😉).&lt;/p&gt;



&lt;p&gt;We still work across a variety of tech stacks, but have collectively fallen in love with the elegance, performance, and strong community around &lt;a href=&quot;https://elixir-lang.org/&quot;&gt;Elixir&lt;/a&gt; and Phoenix, which we adopted as our preferred stack around 2018. We were building sophisticated &lt;a href=&quot;https://www.phoenixframework.org/&quot;&gt;Phoenix LiveView&lt;/a&gt; apps before there was even an official LiveView hex package published. (Yes, we were just referencing a commit hash in our mix.exs file — don’t judge.) We have done a lot in the blockchain space too, but I’m definitely not going into that in this article.&lt;/p&gt;



&lt;p&gt;This is all to give you a glimpse into how we at Revelry dive into new technologies. We’re not shy about exploring the bleeding edge, and it’s really paid off. Our early dives into spaces like React and Elixir have made us the experts we are today.&lt;/p&gt;



&lt;h2&gt;Where We Started&lt;/h2&gt;



&lt;p&gt;Thinking back to June 2020, when OpenAI released GPT-3, it’s been nothing short of a rollercoaster ride. We at Revelry, like many software development companies, quickly caught on that this was a game-changer for our industry. Sure, we had a bunch of engineers who were into machine learning, but AI-driven apps weren’t really our main gig. Our partners didn’t ask for them much, and they didn’t seem all that necessary… until GPT-3 came along.&lt;/p&gt;



&lt;p&gt;By Fall 2022, we were all in, diving deep into the world of these large language models (LLMs). The pace at which things have evolved since then is mind-blowing. Back then, things weren’t quite ready for the big stage, but it was obvious this was just the start.&lt;/p&gt;



&lt;p&gt;We saw a golden opportunity to weave generative AI into our tried-and-true software and product delivery processes. This wasn’t about replacing our team, but turbocharging their productivity. Imagine our folks focusing on the creative, problem-solving aspects of product and software design, while AI handles the tedious stuff – like writing user stories, plotting out product roadmaps, or drafting sprint reports. And what if getting up to speed on a new project could be quicker and smoother? If this could work for us, it’d surely catch on elsewhere, right?&lt;/p&gt;



&lt;p&gt;So, we rolled up our sleeves and jumped into the nitty-gritty. It started as a research and development adventure, filled with questions, like:&lt;/p&gt;



&lt;ul&gt;
&lt;li&gt;Just how far can the capabilities of these LLMs go?&lt;/li&gt;



&lt;li&gt;What’s the engineering effort needed to integrate generative AI into our custom software?&lt;/li&gt;



&lt;li&gt;What does it take to set up an AI-powered app in a live environment?&lt;/li&gt;



&lt;li&gt;Can LLMs genuinely enhance our team’s productivity? If so, in what ways?&lt;/li&gt;



&lt;li&gt;Is it possible to create something other engineering teams would want to use as well?&lt;/li&gt;
&lt;/ul&gt;



&lt;h2&gt;Experimentation&lt;/h2&gt;



&lt;p&gt;So, everyone at Revelry began dabbling with ChatGPT, mostly just for kicks. Some of us were crafting Eminem-style raps to add a bit of flair to our company-wide All Hands meetings (We’ve got a different Reveler hosting each week.). Meanwhile, our CEO, &lt;a href=&quot;https://www.linkedin.com/in/gerardramos/&quot;&gt;Gerard Ramos&lt;/a&gt; – or G, as we call him – was tinkering with how ChatGPT could enhance our product delivery process.&lt;/p&gt;



&lt;p&gt;G found out pretty fast – with some clever prompting – that ChatGPT could whip up some solid product roadmaps and user stories, and even spin out working code examples based on those stories. This was more than just cool – it was promising. So, he proposed we start building tools around these use cases. And that’s how the idea for our first proof of concept came about: an AI-powered app to create user stories from just a few inputs. Sure, it wasn’t a game-changer yet, but it was a great starting point – allowing us to dip our toes in the water, while simultaneously boosting our productivity.&lt;/p&gt;



&lt;h3&gt;Our First AI-Powered Toy: StoryBot&lt;/h3&gt;



&lt;p&gt;Enter &lt;a href=&quot;https://github.com/revelrylabs/storybot-ai&quot;&gt;StoryBot&lt;/a&gt;. This little gem was a straightforward CLI tool that we ended up releasing as an open-source NPM package. It’s essentially a single JavaScript file, leveraging LangChain to tap into GPT-3 via OpenAI’s API (This was before GPT-4.). We threw in some tailored prompts, injected the user input, and voilà – it started spitting out decent user stories right in the command line.&lt;/p&gt;



&lt;p&gt;We went a bit further with it after that, letting the user refine their story through chat, still all in the command line. The cherry on top was the ability to export the story as an issue in a GitHub Repo (At Revelry, we not only use GitHub to store our code, but also for issue tracking, project management, and more.). Ultimately, we ended up with &lt;a href=&quot;https://asciinema.org/a/583539&quot;&gt;this StoryBot iteration&lt;/a&gt;:&lt;/p&gt;







&lt;h3&gt;StoryBot Under the Hood&lt;/h3&gt;



&lt;p&gt;Diving into the StoryBot &lt;a href=&quot;https://github.com/revelrylabs/storybot-ai&quot;&gt;repo&lt;/a&gt;, you’ll see the core functionality is in &lt;a href=&quot;https://github.com/revelrylabs/storybot-ai/blob/main/bin/story.js&quot;&gt;one JavaScript file&lt;/a&gt;. This file uses &lt;a href=&quot;https://js.langchain.com/docs/get_started/introduction&quot;&gt;LangChain.js&lt;/a&gt; for communicating with the OpenAI API, generating user stories from command-line inputs. We could have opted for LangChain’s &lt;a href=&quot;https://python.langchain.com/docs/get_started/introduction&quot;&gt;Python library&lt;/a&gt; instead – the two are close to feature parity – but our team works more in JavaScript than Python. At this point, it was all still experimentation, so we opted for the stack we could move fastest in.&lt;/p&gt;



&lt;p&gt;Technically, for such a straightforward use case, direct API calls to OpenAI would have done the trick. However, LangChain offered ease of setup and capabilities beyond just interfacing with an LLM. It’s packed with features for creating AI-powered apps, like &lt;a href=&quot;https://python.langchain.com/docs/modules/data_connection/&quot;&gt;Retrieval&lt;/a&gt;, &lt;a href=&quot;https://python.langchain.com/docs/modules/agents/&quot;&gt;Agents&lt;/a&gt;, and &lt;a href=&quot;https://python.langchain.com/docs/modules/chains&quot;&gt;Chains&lt;/a&gt;, though we didn’t dive deep into any of these for StoryBot.&lt;/p&gt;



&lt;p&gt;LangChain simplifies building AI applications, but it’s not without its complexities and limitations. It abstracts a lot, sometimes obscuring the underlying mechanics, and is currently limited to Python and JavaScript ecosystems. There is now an &lt;a href=&quot;https://github.com/brainlid/langchain&quot;&gt;Elixir implementation&lt;/a&gt; of LangChain, which is exciting because we’re huge Elixir fans, but it isn’t nearly as far along as its Python and JS counterparts. This Elixir library also wasn’t around yet at this point in our journey.&lt;/p&gt;



&lt;h4&gt;Looking a Bit More Into StoryBot Code&lt;/h4&gt;



&lt;p&gt;The first code that actually gets executed when you run &lt;code&gt;npx gen.story&lt;/code&gt; generates the initial prompt:&lt;/p&gt;



&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;const generateInitialPrompt = () =&amp;gt; {
  const { featureText, techStackText, contextText } = parseArgs();

  return `Context: Act as a product manager at a software development company. Write a user story for the &amp;#39;Feature&amp;#39; defined below. Explain in detailed steps how to implement this in a section called &amp;#39;Implementation Notes&amp;#39; at the end of the story. Please make sure that the implementation notes are complete; do not leave any incomplete sentences. ${contextText}

  ${featureText}

  ${techStackText}

  User Story Spec:
    overview:
      &amp;quot;The goal is to convert your response into a GitHub Issue that a software engineer can use to implement the feature. Start your response with a &amp;#39;Background&amp;#39; section, with a few sentences about why this feature is valuable to the application and why we want the user story written. Follow with one or more &amp;#39;Scenarios&amp;#39; containing the relevant Acceptance Criteria (AC). Use markdown format, with subheaders (e.g. &amp;#39;##&amp;#39; ) for each section (i.e. &amp;#39;## Background&amp;#39;, &amp;#39;## Scenario - [Scenario 1]&amp;#39;, &amp;#39;## Implementation Notes&amp;#39;).&amp;quot;,
    scenarios:
    &amp;quot;detailed stories covering the core loop of the feature requested&amp;quot;,
    style:
      &amp;quot;Use BDD / gherkin style to describe the user scenarios, prefacing each line of acceptance criteria (AC) with a markdown checkbox (e.g. &amp;#39;- [ ]&amp;#39;).&amp;quot;,
  }`;
};

...

const prompt = generateInitialPrompt()&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;You can see that the prompt has specific instructions to format the story the way Revelry prefers, which may differ from what other teams want. That said, anyone who wanted to use this with different prompts could easily fork it and change them. The most important part here is that we inject user input into the prompt before we send it to the LLM. In this case, there are three potential user inputs:&lt;/p&gt;



&lt;ul&gt;
&lt;li&gt;The feature in question, which comes in as the third command-line argument (e.g. &lt;code&gt;npx gen.story [feature]&lt;/code&gt;).&lt;/li&gt;



&lt;li&gt;An optional &lt;code&gt;--stack&lt;/code&gt; flag to specify the tech stack the user story will need to be implemented in.&lt;/li&gt;



&lt;li&gt;An optional &lt;code&gt;--context&lt;/code&gt; flag to add additional context around the feature you are writing a user story for.&lt;/li&gt;
&lt;/ul&gt;
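&lt;p&gt;The &lt;code&gt;parseArgs&lt;/code&gt; call at the top of the earlier snippet isn’t shown in the post. Here’s a minimal sketch of how those three inputs could be pulled out of &lt;code&gt;process.argv&lt;/code&gt; – the helper below is our own illustration (names and output format included), not StoryBot’s actual implementation:&lt;/p&gt;

```javascript
// Hypothetical sketch of the parseArgs helper referenced in the snippet above.
// It pulls the feature text plus the optional --stack and --context flags out
// of process.argv. Names and output format are our own illustration, not
// StoryBot's actual implementation.
const parseArgs = (argv) => {
  const args = argv.slice(2); // drop "node" and the script path
  const readFlag = (name) => {
    const i = args.indexOf(name);
    return i === -1 ? "" : args[i + 1] || "";
  };
  const stack = readFlag("--stack");
  const context = readFlag("--context");
  // Everything that is neither a flag nor a flag's value is the feature text.
  const featureWords = args.filter((a, i) => {
    if (a.startsWith("--")) return false;
    const prev = args[i - 1] || "";
    return !prev.startsWith("--");
  });
  const feature = featureWords.join(" ");
  return {
    featureText: feature ? `Feature: ${feature}` : "",
    techStackText: stack ? `Tech stack: ${stack}` : "",
    contextText: context ? `Additional context: ${context}` : "",
  };
};
```

&lt;p&gt;The returned strings then slot straight into the template literal in &lt;code&gt;generateInitialPrompt&lt;/code&gt;, so an omitted flag simply leaves a blank line in the prompt.&lt;/p&gt;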



&lt;p&gt;Next, we take that hydrated prompt and send it to OpenAI using the tools provided by LangChain:&lt;/p&gt;



&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;const model = new OpenAI({
  streaming: true,
  modelName: &amp;quot;gpt-3.5-turbo&amp;quot;,
  callbacks: [
    {
      handleLLMNewToken(token) {
        process.stdout.write(token)
      },
    },
  ],
});
const memory = new BufferMemory()
const chain = new ConversationChain({llm: model, memory: memory})
const {response} = await chain.call({input: prompt})&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;A few things happen at this point: we create a new &lt;code&gt;ConversationChain&lt;/code&gt; object, a wrapper around the LangChain &lt;code&gt;Chain&lt;/code&gt; object that we’ll use to send the prompt to the LLM. We also create a &lt;code&gt;BufferMemory&lt;/code&gt; object, a LangChain &lt;code&gt;Memory&lt;/code&gt; object that stores the conversation history so follow-up messages retain their context.&lt;/p&gt;



&lt;p&gt;Sidenote: If we were just using the OpenAI API directly instead of LangChain, it would be easy to pass the chat history alongside the prompt to the API call. (I’m just clarifying that LangChain isn’t &lt;em&gt;necessary&lt;/em&gt; for this, even though it was very easy to set up.)&lt;/p&gt;
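&lt;p&gt;To make that sidenote concrete, here’s a rough sketch of what the direct-API version could look like: the chat history is just an array of role/content messages that gets resent with every request. The endpoint, model name, and &lt;code&gt;stream&lt;/code&gt; flag follow OpenAI’s public Chat Completions API; the helper itself is hypothetical, not code from StoryBot:&lt;/p&gt;

```javascript
// A rough sketch of the direct-API approach from the sidenote above: keep the
// chat history as a plain array of { role, content } messages and resend it on
// every request. The endpoint, model name, and "stream" flag follow OpenAI's
// Chat Completions API; buildChatRequest itself is a hypothetical helper.
const buildChatRequest = (history, userInput) => {
  const messages = [...history, { role: "user", content: userInput }];
  return {
    url: "https://api.openai.com/v1/chat/completions",
    body: { model: "gpt-3.5-turbo", messages, stream: true },
  };
};

// Usage (the actual HTTP call, e.g. fetch with an Authorization header, is omitted):
const history = [{ role: "assistant", content: "## Background ..." }];
const request = buildChatRequest(history, "Add a scenario covering error states");
```

&lt;p&gt;The point is simply that “memory” here is nothing magical – it’s the same array of messages growing with each turn, whether LangChain manages it or you do.&lt;/p&gt;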



&lt;p&gt;Because we set &lt;code&gt;streaming: true&lt;/code&gt; when we initialized the &lt;code&gt;OpenAI&lt;/code&gt; object, the LLM streams tokens to the callback we set up earlier as they are generated, and the awaited &lt;code&gt;chain.call&lt;/code&gt; resolves once the full response is complete. Since StoryBot is a CLI tool, we’re just outputting to &lt;code&gt;process.stdout&lt;/code&gt; here. If you’re thinking about adapting this for a web app, you’d probably need to figure out how to send JSON responses or stream them to the client. We’ll get more into that later. The main takeaway? It doesn’t take much to start seeing some cool results by plugging user inputs into a well-crafted prompt template and sending it off to GPT-3.&lt;/p&gt;



&lt;p&gt;So at this point, there is a &lt;code&gt;response&lt;/code&gt; containing the generated user story, and the entire story has also been streamed into the terminal, where it can easily be copy-pasted wherever you need it. However, there is no ability to make follow-up refinements yet. I’m not going to go line-by-line through the rest of the final result, but the short version is that after we get the initial generated response back, we pass it to a &lt;a href=&quot;https://github.com/revelrylabs/storybot-ai/blob/main/bin/story.js#L104&quot;&gt;function&lt;/a&gt; that creates a &lt;a href=&quot;https://nodejs.org/api/readline.html#readline&quot;&gt;readline&lt;/a&gt; interface, prompts the user with questions in the terminal, and then takes the user’s response and &lt;a href=&quot;https://github.com/revelrylabs/storybot-ai/blob/main/bin/story.js#L122&quot;&gt;sends it back&lt;/a&gt; as another message to the LLM in the chat history. We also added the ability to &lt;a href=&quot;https://github.com/revelrylabs/storybot-ai/blob/main/bin/story.js#L126C7-L126C23&quot;&gt;export the final result to GitHub&lt;/a&gt; if you have a GitHub API token set.&lt;/p&gt;



&lt;p&gt;That’s it, that’s &lt;a href=&quot;https://github.com/revelrylabs/storybot-ai?tab=readme-ov-file#install-storybot&quot;&gt;StoryBot&lt;/a&gt;!&lt;/p&gt;



&lt;p&gt;If you want to play around with it, you can install it via npm.&lt;/p&gt;



&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;npm install -g storybot-ai&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Fun? Absolutely. Somewhat useful? Sure. But let’s be real – it was an experiment. The thing is, not everyone wants to write user stories via the command line. Plus, every team has its own style for these stories. Our hardcoded prompts were great for us, but might not hit the mark for teams outside of Revelry, especially since we often work in staff augmentation, where teams have their own preferences.&lt;/p&gt;



&lt;p&gt;Once we had a proof of concept, we started to see the potential. We were able to get a lot of mileage out of it, and it was a great way to get started with generative AI. This got a lot of ideas spinning about how we could get &lt;em&gt;better&lt;/em&gt; user stories based on relevant context, which ultimately led us to the next part of our journey: diving deeper into RAG (Retrieval Augmented Generation) application development.&lt;/p&gt;



&lt;hr/&gt;







&lt;p&gt;&lt;em&gt;This is the first post in a series about Revelry’s journey exploring and developing custom software powered by generative AI. The next post will dive into our next experiment: building a chatbot to answer questions about Revelry based on our company playbook. Stay tuned!&lt;/em&gt;&lt;br/&gt;&lt;br/&gt;&lt;em&gt;Until then, here are a few other articles we’ve put out about AI:&lt;br/&gt;&lt;/em&gt;&lt;/p&gt;



&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://revelry.co/insights/artificial-intelligence/llms-large-context-windows/&quot;&gt;Memory Consumption and Limitations in LLMs with Large Context Windows&lt;/a&gt;&lt;/li&gt;



&lt;li&gt;&lt;a href=&quot;https://revelry.co/insights/artificial-intelligence/memory-consumption-and-limitations-in-llms-part-2/&quot;&gt;Memory Consumption and Limitations in LLMs with Large Context Windows, Pt II&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;



&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://revelry.co/insights/artificial-intelligence/comparing-openais-assistants-api-custom-gpts-and-chat-completion-api/&quot;&gt;Comparing OpenAI’s Assistants API, Custom GPTs, and Chat Completion API&lt;/a&gt;&lt;/li&gt;



&lt;li&gt;&lt;a href=&quot;https://revelry.co/insights/artificial-intelligence/creating-an-agent-using-openais-functions-api/?ssp=1&amp;amp;darkschemeovr=1&amp;amp;setlang=en-IN&amp;amp;safesearch=moderate&quot;&gt;Creating an “Agent” Using OpenAI’s Functions API&lt;br/&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
        &lt;hr/&gt;
        &lt;span&gt;&lt;a href=&quot;https://revelry.co/tag/agile/&quot;&gt;Agile&lt;/a&gt; &lt;a href=&quot;https://revelry.co/tag/ai/&quot;&gt;AI&lt;/a&gt; &lt;a href=&quot;https://revelry.co/tag/artificial-intelligence/&quot;&gt;Artificial Intelligence&lt;/a&gt; &lt;a href=&quot;https://revelry.co/tag/daniel-andrews/&quot;&gt;Daniel Andrews&lt;/a&gt; &lt;a href=&quot;https://revelry.co/tag/elixir/&quot;&gt;Elixir&lt;/a&gt; &lt;a href=&quot;https://revelry.co/tag/emerging-tech/&quot;&gt;Emerging Tech&lt;/a&gt; &lt;a href=&quot;https://revelry.co/tag/innovation/&quot;&gt;Innovation&lt;/a&gt; &lt;a href=&quot;https://revelry.co/tag/lean-agile/&quot;&gt;Lean Agile&lt;/a&gt; &lt;a href=&quot;https://revelry.co/tag/machine-learning/&quot;&gt;Machine Learning&lt;/a&gt;&lt;/span&gt;</content:encoded>
</item>
<item>
<title>Behind the Scenes: Replacing a Black-Box Billing Service</title>
<link>https://bitcrowd.dev/replacing-a-black-box-billing-service</link>
<guid isPermaLink="false">yIVQQMsfuCO1pJ7UZp4-WP3NAJF1LhLcufTM-g==</guid>
<pubDate>Fri, 27 Mar 2026 12:04:11 +0000</pubDate>
<description>In a previous blog post, we looked at the process of migrating 600k users to a new billing service for Steady Media. This was the final step of an internal project called “Charger”. Its aim was to replace the off-the-shelf payment platform they had started with, Chargebee, which had become a roadblock. In this post, we will reveal the behind-the-scenes insights into the process that made that final step a success.</description>
<content:encoded>&lt;p&gt;In a &lt;a href=&quot;https://bitcrowd.dev/migrating-600k-users-to-new-billing-service&quot;&gt;previous blog post&lt;/a&gt;, we looked at the process of migrating 600k users to a new billing service for Steady Media. This was the final step of an internal project called &lt;strong&gt;“Charger”&lt;/strong&gt;. Its aim was to replace the off-the-shelf payment platform they had started with, &lt;a href=&quot;https://www.chargebee.com/&quot;&gt;Chargebee&lt;/a&gt;, which had become a roadblock. In this post, we will reveal the behind-the-scenes insights into the process that made that final step a success.&lt;/p&gt;&lt;h2&gt;The Challenge&lt;a href=&quot;https://bitcrowd.dev/replacing-a-black-box-billing-service#the-challenge&quot;&gt;​&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;Steady Media needed to migrate 600,000 users from Chargebee to a new in-house billing system. The complexity was brutal:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Chargebeeʼs billing logic is opaque - we had to reverse-engineer behavior through observation&lt;/li&gt;&lt;li&gt;Historical data had to work in the new system (canʼt rewrite the past, especially not payment transactions!)&lt;/li&gt;&lt;li&gt;Multiple payment gateways (Braintree, GoCardless, PayPal) with different quirks&lt;/li&gt;&lt;li&gt;Complex billing workflows: trial subscriptions, gift subscriptions, mid-term subscription upgrades, subscription cancellations (with or without refunds)&lt;/li&gt;&lt;li&gt;Complex support workflows: issuing refunds, invoice &amp;amp; credit notes document versioning&lt;/li&gt;&lt;li&gt;Legal compliance (invoicing and taxes)&lt;/li&gt;&lt;li&gt;Migration had to happen without downtime or data loss&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Steadyʼs development team asked us to lead the Charger project, so that they could focus on their core application. The tech stack had to be compatible with their team skills: Elixir &amp;amp; Phoenix!&lt;/p&gt;&lt;div&gt;&lt;div&gt;&lt;div&gt;&lt;p&gt;
🤩 Like what you see? Have a look at &lt;a href=&quot;https://bitcrowd.net/en/projects&quot;&gt;the case studies on bitcrowd.net&lt;/a&gt;&lt;/p&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;h2&gt;The Journey&lt;a href=&quot;https://bitcrowd.dev/replacing-a-black-box-billing-service#the-journey&quot;&gt;​&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;bitcrowd built Charger across three iterations:&lt;/p&gt;&lt;h3&gt;2021: Core System&lt;a href=&quot;https://bitcrowd.dev/replacing-a-black-box-billing-service#2021-core-system&quot;&gt;​&lt;/a&gt;&lt;/h3&gt;&lt;p&gt;The first iteration aimed at building the data model and implementing the external communication layer of Charger:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;API layer with authentication between Charger and the Media Makers app&lt;/li&gt;&lt;li&gt;Webhook notifications between Charger and the Media Makers app&lt;/li&gt;&lt;li&gt;Payment providers: integrated Braintree and GoCardless services (API &amp;amp; webhooks)&lt;/li&gt;&lt;li&gt;Multiplexed calls from the Media Makers app to hit Chargebee OR Charger depending on where the user lived&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Once these various components were finalized, we could start the implementation of the core flows: create new subscriptions, bill users and trigger payments, upgrade subscription plans with prorated billing, settle payments, and so on.&lt;/p&gt;&lt;h3&gt;2023: Feature Parity&lt;a href=&quot;https://bitcrowd.dev/replacing-a-black-box-billing-service#2023-feature-parity&quot;&gt;​&lt;/a&gt;&lt;/h3&gt;&lt;p&gt;After the first development cycle ended, new features were added to the Media Makers app that needed to be back-ported to Charger. We added new models and new flows to support trial and gift subscriptions. We took the opportunity to add PayPal as a third payment provider. 
Finally, we also tackled the generation and design of legal billing PDF documents.&lt;/p&gt;&lt;h3&gt;2025: Admin UI, Support Tools, and Migration&lt;a href=&quot;https://bitcrowd.dev/replacing-a-black-box-billing-service#2025-admin-ui-support-tools-and-migration&quot;&gt;​&lt;/a&gt;&lt;/h3&gt;&lt;p&gt;A missing piece in Charger was a simple, lightweight, workable admin UI, that could be useful for Steadyʼs developers, QA engineers, support staff and financial experts. The goal was to minimize the implementation effort, while preserving a clean UX. We opted for &lt;a href=&quot;https://daisyui.com/&quot;&gt;DaisyUI&lt;/a&gt; since we knew the admin UI would use standard components like index tables, description lists, navigation elements etc.&lt;/p&gt;&lt;h4&gt;Powerful search&lt;a href=&quot;https://bitcrowd.dev/replacing-a-black-box-billing-service#powerful-search&quot;&gt;​&lt;/a&gt;&lt;/h4&gt;&lt;p&gt;One strong requirement was to make sure that the index pages had a powerful and performant search as well as sorting and filtering mechanisms, as one of the pain points of Chargebee had been searches that time out once enough data was in the system. We solved this issue by paying attention to the queries&amp;#39; efficiency. Additionally, for cross-table filtering, we implemented a search-typeahead dropdown in order to avoid expensive joins.&lt;/p&gt;&lt;h4&gt;Activity logs&lt;a href=&quot;https://bitcrowd.dev/replacing-a-black-box-billing-service#activity-logs&quot;&gt;​&lt;/a&gt;&lt;/h4&gt;&lt;p&gt;For such a billing product, it was crucial to allow admin users to see a clear changeset history on any resource. Charger comes with an auditing layer, that tracks changes and versions each relevant resource (like payments, invoices...). 
We built an abstraction on top of the auditing layer that lets us plug in any resource and view its version history with a UI resembling a Git diff.&lt;/p&gt;&lt;h4&gt;Support staff tooling&lt;a href=&quot;https://bitcrowd.dev/replacing-a-black-box-billing-service#support-staff-tooling&quot;&gt;​&lt;/a&gt;&lt;/h4&gt;&lt;p&gt;At that stage, we had a working product for developers, QA engineers and financial experts. But Steadyʼs support team still needed tailored features to enable them to solve specific situations. For example, when a userʼs credit card expired and the system could not withdraw money from their account, the user might send the missing amount via a bank transfer. On Steadyʼs bank account, the balance is correct, but Charger does not know about this transaction, and the accounting balance is affected. Similarly, the support team might refund a user via a manual bank transfer. To keep the accounting in check, we implemented a solution to record offline refunds &amp;amp; payments, as well as various manual actions &amp;amp; flows to fix any situation that would not auto-heal.&lt;/p&gt;&lt;div&gt;&lt;div&gt;&lt;div&gt;&lt;p&gt;
💡 Have we got you interested? If you have a question or topic you would like to discuss with us, we would like to hear from you.&lt;a href=&quot;https://cal.eu/bitcrowd/30min&quot;&gt;Book a free call with us&lt;/a&gt;&lt;/p&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;h2&gt;Outcome&lt;a href=&quot;https://bitcrowd.dev/replacing-a-black-box-billing-service#outcome&quot;&gt;​&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;We migrated all 600,000 users to Charger. The lights stayed green. Steady now controls their billing infrastructure and its support team can handle edge cases and requests through the admin UI. The developer team can also leverage the activity logs and powerful search capabilities to debug or resolve errors.&lt;/p&gt;&lt;p&gt;bitcrowd led on self-contained packages: we took topics Steadyʼs team didnʼt have capacity for and delivered them finished. We were able to coordinate with stakeholders (support team, finance, legal) to turn “we need X” into actionable tickets. We applied our standards of excellent documentation and test coverage, high code quality and review, which ensured a smooth handover with Steadyʼs team.&lt;/p&gt;</content:encoded>
</item>
<item>
<title>Migrating 600k users to a new billing service</title>
<link>https://bitcrowd.dev/migrating-600k-users-to-new-billing-service</link>
<guid isPermaLink="false">OOJsEbowS1utV-YB78Bz_ve0rAJ3loq3jak35A==</guid>
<pubDate>Fri, 27 Mar 2026 12:04:11 +0000</pubDate>
<description>In August 2025, we helped our friends at Steady to migrate their 600,000+ users to their own in-house billing system. This was the final step of an internal project called “Charger”. Its aim was to replace the off-the-shelf payment platform they had started with, Chargebee, which had become a roadblock. The work on Charger began already in 2021 in workshops with the Steady team. Together with them, we designed, planned and finally built the internal product that would take over for years ...</description>
<content:encoded>&lt;p&gt;In August 2025, we helped our friends at &lt;strong&gt;&lt;a href=&quot;https://steady.page/en/&quot;&gt;Steady&lt;/a&gt;&lt;/strong&gt; to migrate their 600,000+ users to their own &lt;strong&gt;in-house billing system&lt;/strong&gt;. This was the final step of an internal project called &lt;strong&gt;“Charger”&lt;/strong&gt;. Its aim was to replace the off-the-shelf payment platform they had started with, Chargebee, which had become a roadblock. The work on Charger began already in 2021 in workshops with the Steady team. Together with them, we designed, planned and finally built the internal product that would take over for years later. The process and the implementation happened in multiple iterations over a span of four years.&lt;/p&gt;&lt;p&gt;This blog post explores the strategy that was used to achieve this challenging task.&lt;/p&gt;&lt;h2&gt;The backstory&lt;a href=&quot;https://bitcrowd.dev/migrating-600k-users-to-new-billing-service#the-backstory&quot;&gt;​&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;Steady supports Media Makers in building an audience and driving revenue via subscriptions to newsletters, blogs etc. While most of the tools and features for Media Makers live in their main application, they outsourced the subscriptions &amp;amp; billing management to a third-party service (&lt;a href=&quot;https://www.chargebee.com/&quot;&gt;Chargebee&lt;/a&gt;). 
The Media Makers app was very tightly coupled to Chargebee as it relied on it for core business logic within the app, as well as for accounting.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://bitcrowd.dev/assets/images/steady-and-chargebee-718dd3eb9e01cb110b9e71ccd42807ec.png&quot; alt=&quot;Graph illustrating the relationship between Steady Media Makers app and Chargebee via an API Layer&quot; title=&quot;&quot;/&gt;&lt;/p&gt;&lt;h2&gt;Introducing: Charger, the fraternal twin&lt;a href=&quot;https://bitcrowd.dev/migrating-600k-users-to-new-billing-service#introducing-charger-the-fraternal-twin&quot;&gt;​&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;Although Chargebee was useful initially, it became increasingly painful to work around their standard processes over time. This ultimately led to the development of a custom billing tool.&lt;/p&gt;&lt;p&gt;For this we had to reverse-engineer all of what Chargebee was doing. We started by building the required schemas and exposing the API endpoints and webhook notifications that were needed by the Media Makers app. Any changes in the Media Makers appʼs usage of the API/Webhooks, or to Chargebee itself, needed to be propagated to Charger. &lt;strong&gt;Both billing systems had to behave identically from the outside.&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;The Media Makers app now needed to forward calls to Charger and/or Chargebee. A “multiplexer” abstraction close to the API &amp;amp; Webhook layer was added, so that it was transparent to the appʼs developers &amp;amp; users whether a given call went to Charger or Chargebee.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://bitcrowd.dev/assets/images/introducing-charger-d2bc480de0952bd2d36fbd93bcc61c92.png&quot; alt=&quot;Graph illustrating the new billing system Charger, exposing the same API as Chargebee&quot; title=&quot;&quot;/&gt;&lt;/p&gt;&lt;p&gt;Charger needed to process payments and refunds, with all payment providers supported by the Media Makers app. 
The first flow of the application was created: subscribe to a publication, generate an invoice, create a payment, and settle that payment 🎉!&lt;/p&gt;&lt;div&gt;&lt;div&gt;&lt;div&gt;&lt;p&gt;
🔍 Would you like more detail? We have a &amp;#39;behind the scenes&amp;#39; blog post about Steady! &lt;a href=&quot;https://bitcrowd.dev/replacing-a-black-box-billing-service/&quot;&gt;Have a Look!&lt;/a&gt;&lt;/p&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;h2&gt;A bird watching experience 🔭&lt;a href=&quot;https://bitcrowd.dev/migrating-600k-users-to-new-billing-service#a-bird-watching-experience-&quot;&gt;​&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;At this stage, no real users were migrated to Charger. No real users were even created on Charger! The system just worked™... Subscription statuses, invoices, and payment processing being crucial to the system meant that we had to come up with a transition plan that allowed the developers to observe the behaviour of the system for just a handful of users. If any problem arose, it would need to affect only such a small number of users that their support team could resolve it manually. We handpicked 10 users with rather simple data: a single recent subscription, no cancellations, and various payment providers / currencies to cover as much ground as possible.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://bitcrowd.dev/assets/images/migrating-to-charger-c698090ddb6d60b2e507132ca9323214.png&quot; alt=&quot;Graph illustrating the migration job between the Media Makers app and Charger&quot; title=&quot;&quot;/&gt;&lt;/p&gt;&lt;p&gt;Using our beloved &lt;a href=&quot;https://hexdocs.pm/oban/Oban.html&quot;&gt;Oban&lt;/a&gt; for job processing, we built a migration job that would, for a given user:&lt;/p&gt;&lt;ol&gt;&lt;li&gt;Fetch all their data from Chargebee&lt;/li&gt;&lt;li&gt;Format it to please Chargerʼs data model&lt;/li&gt;&lt;li&gt;Send it to a dedicated endpoint in Charger&lt;/li&gt;&lt;li&gt;Charger creates all the given data for the user (subscriptions, invoices, payments, etc.)&lt;/li&gt;&lt;li&gt;Cancel the userʼs subscriptions on Chargebee (so that they donʼt get billed twice!)&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;Of course, 
&lt;strong&gt;all of this must happen in a transaction on both systems&lt;/strong&gt;, fail gracefully, and report meaningful errors to the devs. We leveraged Obanʼs configuration to optimise the parallelisation and retry behaviour of the jobs, as we could be rate-limited by Chargebee. Eventually we would want to migrate users in batches of 100 to 10,000 at a time, so the migration mechanism had to be robust and self-healing.&lt;/p&gt;&lt;div&gt;&lt;div&gt;&lt;div&gt;&lt;p&gt;
💡 Have we got you interested? If you have a question or topic you would like to discuss with us, we would like to hear from you. &lt;a href=&quot;https://cal.eu/bitcrowd/30min&quot;&gt;Book a free call with us&lt;/a&gt;&lt;/p&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;h2&gt;Silent bugs&lt;a href=&quot;https://bitcrowd.dev/migrating-600k-users-to-new-billing-service#silent-bugs&quot;&gt;​&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;A major challenge in this process was validating the integrity of the migrated data. The Media Makers app and Charger run on different databases, and each has its own representation of what an invoice, a subscription, or a user is. Forgetting to migrate a field, or mapping the wrong timestamp from one system to the other, could happen silently, because part of the migration code lives in the Media Makers app and the rest lives in Charger. The data could be silently missing or corrupted, and we would only realise down the road that all users had been migrated with errors. Luckily, we had access to a Chargebee staging instance with users in all kinds of configurations. This helped us a lot in finding migration issues, by comparing the user data on both systems before and after the migration, and in strengthening our tests to cover all the edge cases we could find. The developers at Steady also added &lt;a href=&quot;https://www.erlang.org/doc/apps/kernel/erpc.html&quot;&gt;ERPC&lt;/a&gt;, which enabled us to run integration tests evaluating the effects of Charger on the Media Makers app.&lt;/p&gt;&lt;h2&gt;Conclusion&lt;a href=&quot;https://bitcrowd.dev/migrating-600k-users-to-new-billing-service#conclusion&quot;&gt;​&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;Once we were confident that our 10 users were rolling, we migrated 10 more users with more complex data. And so on, for weeks, increasing the batch size, and eventually deciding to create &lt;strong&gt;new&lt;/strong&gt; users on Charger based on a &lt;code&gt;rand()&lt;/code&gt; roll. 
This allowed Steady to steer the future of their billing strategy for their users, integrating with other payment providers and simplifying some of the user flows for refunds, etc. While it looks simple on paper, this project required a lot of patience, collaboration, and planning ahead: the stakes were really high, and we had to build up confidence in the new system and in the migration approach.&lt;/p&gt;</content:encoded>
</item>
<item>
<title>Building a blog with Elixir and Phoenix | jola.dev</title>
<link>https://jola.dev/posts/building-a-blog-with-elixir-and-phoenix</link>
<enclosure type="image/jpeg" length="0" url="https://jola.dev/images/og-image-2b7872671fc7c11e464dac899d8d3068.png?vsn=d"></enclosure>
<guid isPermaLink="false">-hibchFwAEJ6inLKS9s3riedDTRQov4u9ijPzw==</guid>
<pubDate>Thu, 26 Mar 2026 19:19:26 +0000</pubDate>
<description>Setting up a website using Elixir and Phoenix, leaning on NimblePublisher for the blog posts.</description>
<content:encoded>&lt;p&gt;
TL;DR: it’s an Elixir app using Phoenix server side rendered pages, with the blog post pages generated from Markdown using NimblePublisher. It’s running on a self-hosted Dokploy instance running on &lt;a href=&quot;https://hetzner.cloud/?ref=SjrsM8GhyYOl&quot;&gt;Hetzner&lt;/a&gt;, with &lt;a href=&quot;https://bunny.net?ref=f0l8865b7g&quot;&gt;bunny.net&lt;/a&gt; as a CDN sitting in front of it.&lt;/p&gt;
&lt;p&gt;
This is a very belated write up of how this blog was put together! There’s nothing terribly original here, but I figure it could come in handy for someone out there as a reference. And the world needs more Elixir content.&lt;/p&gt;
&lt;h2&gt;
Why Phoenix&lt;/h2&gt;
&lt;p&gt;
I have used static site generators before to power my blog (shoutout to &lt;a href=&quot;https://jaspervdj.be/hakyll/&quot;&gt;Hakyll&lt;/a&gt;), but I wanted to open the door for myself to also have little experiments on this site, ones that would require more interactivity than a static site allows. Besides, I just like using Phoenix. Although most of my Phoenix projects use LiveView, this felt like a good place to do things old-school with DeadViews.&lt;/p&gt;
&lt;p&gt;
It also means I get full control of what I’m building. Using a tool someone else created means getting a lot for free, but the moment you step outside the expected path, you have to figure out how to make things work within their tool.&lt;/p&gt;
&lt;p&gt;
So I kept things simple. No Ecto, no DB. Just server-side rendered HTML. It’s blazingly fast, as you can see from this PageSpeed Insights report.&lt;/p&gt;
&lt;img src=&quot;https://jola.dev/images/joladev-speed-test.png&quot; alt=&quot;PageSpeed Insights report for jola.dev&quot; title=&quot;&quot;/&gt;
&lt;h2&gt;
NimblePublisher&lt;/h2&gt;
&lt;p&gt;
My setup closely matches the original Dashbit blog post &lt;a href=&quot;https://dashbit.co/blog/welcome-to-our-blog-how-it-was-made&quot;&gt;Welcome to our blog: how it was made!&lt;/a&gt;, which led to the creation of NimblePublisher.&lt;/p&gt;
&lt;p&gt;
The heart of the blog is the &lt;a href=&quot;https://github.com/dashbitco/nimble_publisher&quot;&gt;NimblePublisher&lt;/a&gt; setup, which consists of a &lt;code class=&quot;makeup ok&quot;&gt;use&lt;/code&gt; block:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;makeup ok&quot;&gt;defmodule JolaDev.Blog do

  use NimblePublisher,
    build: JolaDev.Blog.Post,
    from: Application.app_dir(:jola_dev, &amp;quot;priv/posts/**/*.md&amp;quot;),
    as: :posts,
    html_converter: JolaDev.Blog.MarkdownConverter,
    highlighters: [:makeup_elixir]
...&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;
This will load up all the posts, parse their frontmatter, run them through the Markdown converter, and compile them into module attributes. This means there’s no work left to be done at runtime; it’s all pre-compiled.&lt;/p&gt;
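&lt;p&gt;
For reference, the &lt;code class=&quot;makeup ok&quot;&gt;JolaDev.Blog.Post&lt;/code&gt; build module follows the standard NimblePublisher shape: a struct plus a &lt;code class=&quot;makeup ok&quot;&gt;build/3&lt;/code&gt; callback receiving the file path, the parsed frontmatter attributes, and the converted body. A minimal sketch (the exact fields are assumptions, not copied from the repo):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;makeup ok&quot;&gt;defmodule JolaDev.Blog.Post do
  @enforce_keys [:id, :title, :body, :description, :tags, :date]
  defstruct [:id, :title, :body, :description, :tags, :date]

  # NimblePublisher calls build/3 once per file at compile time
  def build(filename, attrs, body) do
    # priv/posts/2025/08-18-some-post.md -&amp;gt; date + id
    [year, month_day_id] = filename |&amp;gt; Path.rootname() |&amp;gt; Path.split() |&amp;gt; Enum.take(-2)
    [month, day, id] = String.split(month_day_id, &amp;quot;-&amp;quot;, parts: 3)
    date = Date.from_iso8601!(&amp;quot;#{year}-#{month}-#{day}&amp;quot;)
    struct!(__MODULE__, [id: id, date: date, body: body] ++ Map.to_list(attrs))
  end
end&lt;/code&gt;&lt;/pre&gt;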
&lt;p&gt;
Posts are organized by year: &lt;code class=&quot;makeup ok&quot;&gt;priv/posts/2025/08-18-ruthless-prioritization.md&lt;/code&gt;. We get beautiful code block syntax highlighting through &lt;a href=&quot;https://github.com/elixir-makeup/makeup&quot;&gt;Makeup&lt;/a&gt;. The &lt;code class=&quot;makeup ok&quot;&gt;Blog&lt;/code&gt; module also defines a set of helpers for fetching the posts:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;makeup ok&quot;&gt;@posts Enum.sort_by(@posts, &amp;amp; &amp;amp;1.date, {:desc, Date})

# Let&amp;#39;s also get all tags
@tags @posts
      |&amp;gt; Enum.flat_map(&amp;amp; &amp;amp;1.tags)
      |&amp;gt; Enum.uniq()
      |&amp;gt; Enum.sort()

# And finally export them
def all_posts, do: @posts
def all_tags, do: @tags

def posts_by_tag(tag) do
  Enum.filter(all_posts(), fn post -&amp;gt; tag in post.tags end)
end

def find_by_id(id) do
  Enum.find(all_posts(), fn post -&amp;gt; post.id == id end)
end&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;
The only thing that took a bit of figuring out for me was getting Tailwind classes into the output HTML. I’m pretty sure I’ve seen better approaches shared since I wrote this, but this works too. Under &lt;code class=&quot;makeup ok&quot;&gt;earmark_options&lt;/code&gt;, pass:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;makeup ok&quot;&gt;Earmark.Options.make_options!(
  registered_processors: [
    Earmark.TagSpecificProcessors.new([
      {&amp;quot;a&amp;quot;, &amp;amp;Earmark.AstTools.merge_atts_in_node(&amp;amp;1, class: &amp;quot;underline&amp;quot;)},
      {&amp;quot;h1&amp;quot;, &amp;amp;Earmark.AstTools.merge_atts_in_node(&amp;amp;1, class: &amp;quot;text-3xl py-4&amp;quot;)},
      {&amp;quot;h2&amp;quot;, &amp;amp;Earmark.AstTools.merge_atts_in_node(&amp;amp;1, class: &amp;quot;text-2xl py-4&amp;quot;)},
      {&amp;quot;h3&amp;quot;, &amp;amp;Earmark.AstTools.merge_atts_in_node(&amp;amp;1, class: &amp;quot;text-xl py-4&amp;quot;)},
      {&amp;quot;p&amp;quot;, &amp;amp;Earmark.AstTools.merge_atts_in_node(&amp;amp;1, class: &amp;quot;text-md pb-4&amp;quot;)},
      {&amp;quot;code&amp;quot;, &amp;amp;Earmark.AstTools.merge_atts_in_node(&amp;amp;1, class: &amp;quot;&amp;quot;)},
      {&amp;quot;pre&amp;quot;,
       &amp;amp;Earmark.AstTools.merge_atts_in_node(&amp;amp;1,
         class: &amp;quot;mb-4 p-1 py-4 overflow-x-scroll border-y&amp;quot;
       )},
      {&amp;quot;ol&amp;quot;, &amp;amp;Earmark.AstTools.merge_atts_in_node(&amp;amp;1, class: &amp;quot;list-decimal&amp;quot;)},
      {&amp;quot;ul&amp;quot;, &amp;amp;Earmark.AstTools.merge_atts_in_node(&amp;amp;1, class: &amp;quot;list-disc pb-4&amp;quot;)},
      {&amp;quot;blockquote&amp;quot;,
       &amp;amp;Earmark.AstTools.merge_atts_in_node(&amp;amp;1,
         class: &amp;quot;pl-4 border-l-2 mb-4 border-purple-700&amp;quot;
       )}
    ])
  ]
)&lt;/code&gt;&lt;/pre&gt;
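&lt;p&gt;
Those options are consumed by the &lt;code class=&quot;makeup ok&quot;&gt;MarkdownConverter&lt;/code&gt; module named in the &lt;code class=&quot;makeup ok&quot;&gt;use&lt;/code&gt; block. If memory of the NimblePublisher contract serves, the converter is a module implementing &lt;code class=&quot;makeup ok&quot;&gt;convert/4&lt;/code&gt;, roughly like this (a sketch, with the options elided):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;makeup ok&quot;&gt;defmodule JolaDev.Blog.MarkdownConverter do
  # NimblePublisher invokes convert/4 with the file extension, the raw
  # Markdown body, the parsed attributes, and the use-block options
  def convert(&amp;quot;.md&amp;quot;, body, _attrs, _opts) do
    # the Earmark.Options.make_options! call from above goes here
    options = Earmark.Options.make_options!(registered_processors: [...])
    Earmark.as_html!(body, options)
  end
end&lt;/code&gt;&lt;/pre&gt;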
&lt;p&gt;
You probably have your own preferences for how to set up your classes, but this gives you a pattern you can use to ensure that the tags that come out have the appropriate classes.&lt;/p&gt;
&lt;h2&gt;
The Frontend&lt;/h2&gt;
&lt;p&gt;
As mentioned this is all server-side rendered Phoenix templates. It’s using standard Tailwind CSS. It predates DaisyUI and I don’t think there’s a strong reason for me to make the lift of getting it in, although I wouldn’t have minded it being a part of the scaffolding back when I set up the blog!&lt;/p&gt;
&lt;p&gt;
The only JS snippets in here are a mobile menu toggle and the Phoenix topbar. Apart from the Tailwind library, the custom CSS in here is pretty minimal. You get a lot out of the box with a Phoenix project.&lt;/p&gt;
&lt;p&gt;
And of course, dark mode. I know it’s not everyone’s cup of tea, but it is my website after all.&lt;/p&gt;
&lt;h2&gt;
CI&lt;/h2&gt;
&lt;p&gt;
I’ve got GitHub Actions set up to run on every push and PR, with just the basic Elixir quality assurance tools.&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;
&lt;code class=&quot;makeup ok&quot;&gt;mix compile --warnings-as-errors&lt;/code&gt;  &lt;/li&gt;
  &lt;li&gt;
&lt;code class=&quot;makeup ok&quot;&gt;mix format --check-formatted&lt;/code&gt;  &lt;/li&gt;
  &lt;li&gt;
&lt;code class=&quot;makeup ok&quot;&gt;mix credo --strict&lt;/code&gt;  &lt;/li&gt;
  &lt;li&gt;
&lt;code class=&quot;makeup ok&quot;&gt;mix test&lt;/code&gt;  &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
And then I’ve got Dependabot set up as well. I’ve been hearing and thinking a lot about how it creates a lot of noise, but I feel like that’s less of an issue in the Elixir community. Packages tend to not have a lot of dependencies, and so you don’t get the same waves of bumps going out that npm does. And merging them is satisfying.&lt;/p&gt;
&lt;h2&gt;
Deployment&lt;/h2&gt;
&lt;p&gt;
On the hosting side things get a bit more spicy. The repo includes a &lt;a href=&quot;https://github.com/joladev/jola.dev/blob/main/Dockerfile&quot;&gt;multi-stage Dockerfile&lt;/a&gt;, roughly based on the Phoenix recommended example file. This means that most of the dependencies are only pulled in at build time, and the image you get out on the other side is a bit smaller. I’m using Elixir &lt;code class=&quot;makeup ok&quot;&gt;1.18.4&lt;/code&gt;, Erlang &lt;code class=&quot;makeup ok&quot;&gt;28.0.2&lt;/code&gt;, and Debian &lt;code class=&quot;makeup ok&quot;&gt;trixie-20250721-slim&lt;/code&gt; at the time of writing this, but that’s likely to change. There’s something very satisfying about bumping dependencies.&lt;/p&gt;
&lt;p&gt;
And now we’re arriving at &lt;a href=&quot;https://dokploy.com/&quot;&gt;Dokploy&lt;/a&gt;, an open source platform as a service (PaaS) for running apps, basically a self-hosted Heroku. It does everything, automatic builds and deploys from Github updates, built-in Docker Swarm, networking, orchestration of replicas across the cluster, rolling deploys, rollbacks, preview builds, and much more.&lt;/p&gt;
&lt;p&gt;
So my publish flow is basically: create a PR and wait for CI to finish (I could skip this but it’s nice to know I didn’t mess something up). When I merge the PR Dokploy automatically picks that up and triggers a checkout and build of the repo. Once that finishes, it starts a rolling deploy to replace the running replicas. And we’re live. With cached layers on the server, deploys can finish in 30s, zero effort.&lt;/p&gt;
&lt;p&gt;
I run this Dokploy instance on &lt;a href=&quot;https://hetzner.cloud/?ref=SjrsM8GhyYOl&quot;&gt;Hetzner&lt;/a&gt; and my experience has been really positive. The pricing is unbeatable, even with the recent increase, and it’s been rock solid for me. Really, with the Dokploy instance, there’s nothing stopping me from packing up and going somewhere else. Having that kind of freedom is very nice. But I’m more than happy to stick with Hetzner.&lt;/p&gt;
&lt;h2&gt;
The Little Things&lt;/h2&gt;
&lt;p&gt;
I’ve set up a few little conveniences for my app so I’ll share some example code for them here.&lt;/p&gt;
&lt;h3&gt;
RSS&lt;/h3&gt;
&lt;p&gt;
RSS is handled by a plain Phoenix controller; the XML view module behind it looks something like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;makeup ok&quot;&gt;defmodule JolaDevWeb.RssXML do
  use JolaDevWeb, :html

  embed_templates &amp;quot;rss_xml/*&amp;quot;

  def format_rfc822(%Date{} = date) do
    date
    |&amp;gt; DateTime.new!(~T[00:00:00], &amp;quot;Etc/UTC&amp;quot;)
    |&amp;gt; format_rfc822()
  end

  def format_rfc822(%DateTime{} = datetime) do
    Calendar.strftime(datetime, &amp;quot;%a, %d %b %Y %H:%M:%S +0000&amp;quot;)
  end
end&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;
and the corresponding XML:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;makeup ok&quot;&gt;&amp;lt;?xml version=&amp;quot;1.0&amp;quot; encoding=&amp;quot;UTF-8&amp;quot;?&amp;gt;
&amp;lt;rss version=&amp;quot;2.0&amp;quot; xmlns:atom=&amp;quot;http://www.w3.org/2005/Atom&amp;quot; xmlns:content=&amp;quot;http://purl.org/rss/1.0/modules/content/&amp;quot;&amp;gt;
  &amp;lt;channel&amp;gt;
    &amp;lt;title&amp;gt;jola.dev&amp;lt;/title&amp;gt;
    &amp;lt;link&amp;gt;&amp;lt;%= url(~p&amp;quot;/&amp;quot;) %&amp;gt;&amp;lt;/link&amp;gt;
    &amp;lt;description&amp;gt;Blog posts from jola.dev&amp;lt;/description&amp;gt;
    &amp;lt;language&amp;gt;en-us&amp;lt;/language&amp;gt;
    &amp;lt;lastBuildDate&amp;gt;&amp;lt;%= JolaDevWeb.RssXML.format_rfc822(DateTime.utc_now()) %&amp;gt;&amp;lt;/lastBuildDate&amp;gt;
    &amp;lt;atom:link href=&amp;quot;&amp;lt;%= url(~p&amp;quot;/rss.xml&amp;quot;) %&amp;gt;&amp;quot; rel=&amp;quot;self&amp;quot; type=&amp;quot;application/rss+xml&amp;quot; /&amp;gt;

    &amp;lt;%= for post &amp;lt;- @posts do %&amp;gt;
    &amp;lt;item&amp;gt;
      &amp;lt;title&amp;gt;&amp;lt;%= post.title %&amp;gt;&amp;lt;/title&amp;gt;
      &amp;lt;link&amp;gt;&amp;lt;%= url(~p&amp;quot;/posts/#{post.id}&amp;quot;) %&amp;gt;&amp;lt;/link&amp;gt;
      &amp;lt;description&amp;gt;&amp;lt;![CDATA[&amp;lt;%= post.description %&amp;gt;]]&amp;gt;&amp;lt;/description&amp;gt;
      &amp;lt;content:encoded&amp;gt;&amp;lt;![CDATA[&amp;lt;%= post.body %&amp;gt;]]&amp;gt;&amp;lt;/content:encoded&amp;gt;
      &amp;lt;pubDate&amp;gt;&amp;lt;%= JolaDevWeb.RssXML.format_rfc822(post.date) %&amp;gt;&amp;lt;/pubDate&amp;gt;
      &amp;lt;guid isPermaLink=&amp;quot;true&amp;quot;&amp;gt;&amp;lt;%= url(~p&amp;quot;/posts/#{post.id}&amp;quot;) %&amp;gt;&amp;lt;/guid&amp;gt;
      &amp;lt;author&amp;gt;&amp;lt;%= post.author %&amp;gt;&amp;lt;/author&amp;gt;
    &amp;lt;/item&amp;gt;
    &amp;lt;% end %&amp;gt;
  &amp;lt;/channel&amp;gt;
&amp;lt;/rss&amp;gt;&lt;/code&gt;&lt;/pre&gt;
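&lt;p&gt;
For completeness, the controller that serves the feed is tiny. Something along these lines (the module and action names here are my best guess at the pattern, not pulled from the repo):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;makeup ok&quot;&gt;defmodule JolaDevWeb.RssController do
  use JolaDevWeb, :controller

  def index(conn, _params) do
    conn
    # feed readers expect this content type rather than text/html
    |&amp;gt; put_resp_content_type(&amp;quot;application/rss+xml&amp;quot;)
    |&amp;gt; put_view(JolaDevWeb.RssXML)
    |&amp;gt; render(:rss, posts: JolaDev.Blog.all_posts(), layout: false)
  end
end&lt;/code&gt;&lt;/pre&gt;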
&lt;h3&gt;
Sitemap&lt;/h3&gt;
&lt;p&gt;
I was a bit surprised not to find a clean little library for generating the sitemap (this may have changed since I wrote the code!), but I guess the implementation is just going to heavily depend on your setup. Anyway, just sharing this for reference.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;makeup ok&quot;&gt;defmodule JolaDevWeb.SitemapController do
  use JolaDevWeb, :controller

  def index(conn, _params) do
    sitemap = JolaDev.Sitemap.generate()

    conn
    |&amp;gt; put_resp_content_type(&amp;quot;text/xml&amp;quot;)
    |&amp;gt; send_resp(200, sitemap)
  end
end

defmodule JolaDev.Sitemap do
  alias JolaDev.Blog

  @host &amp;quot;https://jola.dev&amp;quot;

  def generate do
    &amp;quot;&amp;quot;&amp;quot;
    &amp;lt;?xml version=&amp;quot;1.0&amp;quot; encoding=&amp;quot;UTF-8&amp;quot;?&amp;gt;
    &amp;lt;urlset xmlns=&amp;quot;http://www.sitemaps.org/schemas/sitemap/0.9&amp;quot;&amp;gt;
    #{generate_static_pages()}#{generate_tag_pages()}#{generate_blog_posts()}
    &amp;lt;/urlset&amp;gt;
    &amp;quot;&amp;quot;&amp;quot;
  end

  defp generate_static_pages do
    pages = [
      %{loc: @host, changefreq: &amp;quot;monthly&amp;quot;, priority: &amp;quot;1.0&amp;quot;},
      %{loc: &amp;quot;#{@host}/about&amp;quot;, changefreq: &amp;quot;monthly&amp;quot;, priority: &amp;quot;0.8&amp;quot;},
      %{loc: &amp;quot;#{@host}/projects&amp;quot;, changefreq: &amp;quot;weekly&amp;quot;, priority: &amp;quot;0.9&amp;quot;},
      %{loc: &amp;quot;#{@host}/talks&amp;quot;, changefreq: &amp;quot;monthly&amp;quot;, priority: &amp;quot;0.7&amp;quot;},
      %{loc: &amp;quot;#{@host}/posts&amp;quot;, changefreq: &amp;quot;weekly&amp;quot;, priority: &amp;quot;0.9&amp;quot;}
    ]

    Enum.map_join(pages, &amp;quot;\n&amp;quot;, &amp;amp;url_entry/1)
  end

  defp generate_tag_pages do
    Blog.all_tags()
    |&amp;gt; Enum.map(fn tag -&amp;gt;
      %{loc: &amp;quot;#{@host}/posts/tag/#{tag}&amp;quot;, changefreq: &amp;quot;weekly&amp;quot;, priority: &amp;quot;0.6&amp;quot;}
    end)
    |&amp;gt; Enum.map_join(&amp;quot;\n&amp;quot;, &amp;amp;url_entry/1)
  end

  defp generate_blog_posts do
    Blog.all_posts()
    |&amp;gt; Enum.map(fn post -&amp;gt;
      %{
        loc: &amp;quot;#{@host}/posts/#{post.id}&amp;quot;,
        lastmod: Date.to_iso8601(post.date),
        changefreq: &amp;quot;monthly&amp;quot;,
        priority: &amp;quot;0.8&amp;quot;
      }
    end)
    |&amp;gt; Enum.map_join(&amp;quot;\n&amp;quot;, &amp;amp;url_entry/1)
  end

  defp url_entry(params) do
    &amp;quot;&amp;quot;&amp;quot;
      &amp;lt;url&amp;gt;
        &amp;lt;loc&amp;gt;#{params.loc}&amp;lt;/loc&amp;gt;
        #{if params[:lastmod], do: &amp;quot;&amp;lt;lastmod&amp;gt;#{params.lastmod}&amp;lt;/lastmod&amp;gt;&amp;quot;, else: &amp;quot;&amp;quot;}
        &amp;lt;changefreq&amp;gt;#{params.changefreq}&amp;lt;/changefreq&amp;gt;
        &amp;lt;priority&amp;gt;#{params.priority}&amp;lt;/priority&amp;gt;
      &amp;lt;/url&amp;gt;
    &amp;quot;&amp;quot;&amp;quot;
  end
end&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;
Blog redirect plug&lt;/h3&gt;
&lt;p&gt;
When I first moved over to this new app I wanted to ensure that I kept my old blog post links alive, so I set up this little plug to rewrite requests to match the new layout.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;makeup ok&quot;&gt;defmodule JolaDevWeb.Plugs.BlogRedirect do
  import Plug.Conn

  def init(_), do: []

  def call(conn, _opts) do
    if conn.host == &amp;quot;blog.jola.dev&amp;quot; do
      ids = JolaDev.Blog.ids()
      path = strip_path(conn.request_path)

      path =
        if path in ids do
          &amp;quot;posts/&amp;quot; &amp;lt;&amp;gt; path
        else
          path
        end

      conn
      |&amp;gt; put_resp_header(&amp;quot;location&amp;quot;, &amp;quot;https://jola.dev/&amp;quot; &amp;lt;&amp;gt; path)
      |&amp;gt; send_resp(:moved_permanently, &amp;quot;&amp;quot;)
      |&amp;gt; halt()
    else
      conn
    end
  end

  defp strip_path(&amp;quot;/&amp;quot; &amp;lt;&amp;gt; path), do: path
  defp strip_path(path), do: path
end&lt;/code&gt;&lt;/pre&gt;
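&lt;p&gt;
The plug then needs to be wired in before the router handles the request, for example in the endpoint (whether it lives there or in a router pipeline is an implementation detail):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;makeup ok&quot;&gt;# in lib/jola_dev_web/endpoint.ex, ahead of the router
plug JolaDevWeb.Plugs.BlogRedirect
plug JolaDevWeb.Router&lt;/code&gt;&lt;/pre&gt;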
&lt;h3&gt;
SEO&lt;/h3&gt;
&lt;p&gt;
I went a bit further on this one. Each page has its own meta description, Open Graph tags, and Twitter Card tags — all driven by assigns passed from the controllers. Blog posts automatically get &lt;code class=&quot;makeup ok&quot;&gt;og:type=&amp;quot;article&amp;quot;&lt;/code&gt; with &lt;code class=&quot;makeup ok&quot;&gt;article:published_time&lt;/code&gt; and &lt;code class=&quot;makeup ok&quot;&gt;article:tag&lt;/code&gt; set from the post metadata. The layout just reads from &lt;code class=&quot;makeup ok&quot;&gt;conn.assigns&lt;/code&gt; with sensible fallbacks, so adding SEO to a new page is just a matter of passing the right assigns. Here’s what the blog-post-specific bits look like in the layout:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;makeup ok&quot;&gt;&amp;lt;meta property=&amp;quot;og:type&amp;quot; content={if(@conn.assigns[:post], do: &amp;quot;article&amp;quot;, else: &amp;quot;website&amp;quot;)} /&amp;gt;
&amp;lt;%= if post = @conn.assigns[:post] do %&amp;gt;
  &amp;lt;meta property=&amp;quot;article:published_time&amp;quot; content={Date.to_iso8601(post.date)} /&amp;gt;
  &amp;lt;meta property=&amp;quot;article:author&amp;quot; content=&amp;quot;https://jola.dev/about&amp;quot; /&amp;gt;
  &amp;lt;%= for tag &amp;lt;- post.tags do %&amp;gt;
    &amp;lt;meta property=&amp;quot;article:tag&amp;quot; content={tag} /&amp;gt;
  &amp;lt;% end %&amp;gt;
&amp;lt;% end %&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;
Same idea for the Twitter Card and description tags — one place in the layout, driven entirely by what the controller passes in.&lt;/p&gt;
&lt;p&gt;
I also added &lt;a href=&quot;https://llmstxt.org/&quot;&gt;&lt;code class=&quot;makeup ok&quot;&gt;llms.txt&lt;/code&gt;&lt;/a&gt; and &lt;code class=&quot;makeup ok&quot;&gt;llms-full.txt&lt;/code&gt; endpoints. This is a newer standard that helps AI systems understand your site. It follows the same pattern as the sitemap: a module that generates the content from &lt;code class=&quot;makeup ok&quot;&gt;Blog.all_posts()&lt;/code&gt;, and a controller that serves it as plain text. Whether it actually matters yet, who knows, but it was trivial to add and I figure it can’t hurt.&lt;/p&gt;
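&lt;p&gt;
Sketching the idea (the module name and the exact output format here are illustrative, not lifted from the actual code):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;makeup ok&quot;&gt;defmodule JolaDevWeb.LlmsTxtController do
  use JolaDevWeb, :controller

  def index(conn, _params) do
    # one markdown-style line per post, generated from the compiled post list
    posts =
      Enum.map_join(JolaDev.Blog.all_posts(), &amp;quot;\n&amp;quot;, fn post -&amp;gt;
        &amp;quot;- [#{post.title}](https://jola.dev/posts/#{post.id}): #{post.description}&amp;quot;
      end)

    conn
    |&amp;gt; put_resp_content_type(&amp;quot;text/plain&amp;quot;)
    |&amp;gt; send_resp(200, &amp;quot;# jola.dev\n\n## Posts\n\n&amp;quot; &amp;lt;&amp;gt; posts)
  end
end&lt;/code&gt;&lt;/pre&gt;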
&lt;h2&gt;
Wrapping Up&lt;/h2&gt;
&lt;p&gt;
This app is intentionally kept simple but powerful. Everything is set up the way I want it and I have a zero effort and very fast pipeline for publishing new posts. If you’re an Elixir dev thinking about a personal site, consider just using Phoenix. Combined with NimblePublisher you’ve got a really powerful and blazing fast blog framework right there.&lt;/p&gt;
&lt;p&gt;
And while you’re at it, why not host it on Hetzner! If you use the &lt;a href=&quot;https://hetzner.cloud/?ref=SjrsM8GhyYOl&quot;&gt;referral link to sign up you get €20 and I get €10&lt;/a&gt;. If you prefer not to use the referral link, here’s a plain link: &lt;a href=&quot;https://www.hetzner.com/cloud/&quot;&gt;https://www.hetzner.com/cloud/&lt;/a&gt;. Also consider joining me in &lt;a href=&quot;https://github.com/sponsors/Dokploy&quot;&gt;sponsoring Dokploy&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;
Source code is available at: &lt;a href=&quot;https://github.com/joladev/jola.dev&quot;&gt;https://github.com/joladev/jola.dev&lt;/a&gt;. Next up I’ll talk about setting up &lt;a href=&quot;https://bunny.net?ref=f0l8865b7g&quot;&gt;bunny.net&lt;/a&gt; and a separate post on Dokploy on Hetzner.&lt;/p&gt;</content:encoded>
</item>
<item>
<title>Software Development in 2026</title>
<link>https://rocket-science.ru/hacking/2026/03/23/software-development-in-2026</link>
<guid isPermaLink="false">wz0KP8VGMC6EKD0YbSz0XdDPfEn0ZTAf7bmz_g==</guid>
<pubDate>Wed, 25 Mar 2026 16:08:30 +0000</pubDate>
<description>Three years ago I wrote a fairly coherent piece on four key developer skills, but a good deal of water has flowed under the bridge since then, and while the theses laid out there remain sound, they need a slight adjustment in light of the boon that has descended upon us in the form of large language models.</description>
<content:encoded>&lt;p&gt;Three years ago I wrote a fairly coherent piece on &lt;a href=&quot;https://rocket-science.ru/hacking/2023/11/03/software-development-in-2023&quot;&gt;four key developer skills&lt;/a&gt;, but a good deal of water has flowed under the bridge since then, and while the theses laid out there remain sound, they need a slight adjustment in light of the boon that has descended upon us in the form of large language models.&lt;/p&gt;&lt;p&gt;I went through all five stages of the inevitable over the course of a year.&lt;/p&gt;&lt;p&gt;① &lt;strong&gt;Denial&lt;/strong&gt;—a year ago I watched my colleagues raving about autocomplete and hallucinations, and even won a few bets—not unlike the one the Adriano Celentano character &lt;a href=&quot;https://youtu.be/GBFt3FF7i2Q&quot;&gt;proposes to his accountant&lt;/a&gt; in that great film.&lt;/p&gt;&lt;p&gt;② &lt;strong&gt;Anger&lt;/strong&gt;—I kept writing code by hand, but part of my job involves cleaning up the messes of conceptually unmoored colleagues (I do a lot of reviews), and the volume of neural slop had crossed every conceivable boundary: even when the code compiled, it looked like Rome from a bird’s-eye view—slovenly fragments of good practices scattered here and there, interspersed with the slums of deeply nested conditionals. I found myself literally rewriting large chunks after the little models, because I take code review seriously and still see it as a tool for teaching apprentices.&lt;/p&gt;&lt;p&gt;③ &lt;strong&gt;Bargaining&lt;/strong&gt;—about seven months ago I tried unleashing a model on an old library of mine that was desperately in need of proper documentation; to my surprise, the documentation turned out coherent, nearly complete, and unquestionably better than nothing. I screwed my eyes shut and asked for tests. Half of them tested the standard library and implementation details—but the other half brought genuine value. 
Like those gruff stubborn men from the joke, I said: “Welllll, damn.” And paid for &lt;a href=&quot;https://www.warp.dev/&quot;&gt;Warp&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;④ &lt;strong&gt;Depression&lt;/strong&gt;—in the first month of use I finished two personal projects that had been gathering dust for years, equipped all my libraries with detailed documentation and the missing tests, played around with creating my own programming language, and even trusted the model to fully solve a take-home assignment for one of the companies that had written to me directly with an offer (the assignment sailed through, but I never would have taken the job anyway—their HR had violated every conceivable rule of decency in headhunting). I wasn’t afraid, of course, that I’d be thrown out and Claude hired in my place—the models won’t reach my level of expertise before I retire—but writing code is one of my most beloved occupations in life, and I felt it being taken away from me.&lt;/p&gt;&lt;p&gt;⑤ &lt;strong&gt;Acceptance&lt;/strong&gt;—it is now the end of March 2026, and I can say with confidence that language models provide me with substantial help in development, without particularly intruding on the part of my life I cherish: they write excellent documentation, reasonably coherent tests, and—under supervision and with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Ctrl&lt;/code&gt;+&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;C&lt;/code&gt; at the ready—can shuffle JSON around. 
Complex code I still write myself, and I’m confident this will continue until my death at the wheel of a sports motorcycle.&lt;/p&gt;&lt;hr/&gt;&lt;p&gt;At the same time, the internet is awash with success stories from every variety of blowhard—from startup founders with three grades of parochial school, armed with enthusiasm, a narrow worldview, and the free tier of &lt;em&gt;ChatGPT&lt;/em&gt;—to bedroom traders whose imaginary profits already permit them to purchase a paper yacht in Bali. In the hands of people far removed from software development, large language models are doomed to two low-yield applications: you can amuse yourself generating memes of piglets drinking mojitos in the Kremlin, or you can reproduce a product that already exists on the market and that, in its niche, you’ll never catch up to.&lt;/p&gt;&lt;p&gt;To use models effectively (you can now also deploy agents, but this changes nothing whatsoever about the substance), you need—at minimum—to understand the principles by which they operate. It took the automobile industry a hundred years to give users the joy of driving a vehicle without ever having opened the bonnet. Aircraft haven’t reached that stage yet. I see no reason to suppose that very advanced autocomplete is capable of repairing itself. And this is besides the fact that language models are, in principle, a dead end in the development of &lt;em&gt;artificial intelligence&lt;/em&gt;. Even Yann LeCun has &lt;a href=&quot;https://amilabs.xyz&quot;&gt;understood this&lt;/a&gt;, though unfortunately it remains unclear whether his own ideas aren’t yet another dead end.&lt;/p&gt;&lt;p&gt;Yeah.&lt;/p&gt;&lt;p&gt;If you need to knock together from scratch a shop for the worthless trinkets your wife makes—a modern model will handle it with flying colours. Clean unpretentious design, convenient addition of new bracelets, photos, payment system. The little model will knock you up a website, a mobile app, and lord knows what else. 
And it will work first try, most likely—because in training it has stared at such dreck in billions of different variations. What everyone finds so astonishing is, in essence, the output of four shell commands: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cp&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;grep&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sed&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;awk&lt;/code&gt;. Last century this technology bore the proud name “snippet.” For cloning already-existing things, then, the model is a perfectly fine assistant. Web studios that build landing pages will probably indeed die (they should never have existed in principle, but that’s a different question).&lt;/p&gt;&lt;p&gt;As for more complex projects—not necessarily more complex &lt;em&gt;per se&lt;/em&gt;, but more &lt;em&gt;unusual&lt;/em&gt;—without an architect to guide it, the model will disgrace itself at the very first hurdle. 
Because a vaguely described result can be achieved in fifteen different ways, and sooner or later the rough-and-ready decisions of a spring balance (a steelyard, not a precision scale) will lead into a swamp from which there’s no escape—because admitting defeat is not our way, and the model will play roulette to the bitter end, like Dostoevsky in Baden-Baden.&lt;/p&gt;&lt;p&gt;I recently wrote about why elaborate prompts and detailed descriptions cannot produce a good result—see part two of &lt;a href=&quot;https://rocket-science.ru/hacking/2026/03/13/artificial-intelligence&quot;&gt;Artificial ‘Intelligence’&lt;/a&gt;—I won’t repeat myself here, but will quote the key thesis:&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;A person capable of breaking a large task down into the minimum number of smaller ones that satisfy the condition of “unambiguous solution” is capable, in today’s reality, of writing a reasonably complex application in a couple of days.&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;What I mean is that the ability to distinguish tasks with branching logic from multi-step syllogisms is more in demand than ever. Draw a mental flowchart of the execution—and if there are &lt;a href=&quot;https://en.wikipedia.org/wiki/Flowchart#Common_symbols&quot;&gt;decision diamonds&lt;/a&gt; in it, you need to work them out for the model explicitly (go left here, go right here, snow on head here, very painful). Better still—break the main task into several, to eliminate those “conditions/decisions” entirely. But this is hardly possible if you have no idea how to write such code yourself.&lt;/p&gt;&lt;hr/&gt;&lt;p&gt;Besides the fools who tried to build a website and succeeded—there are also saboteurs. They are considerably more dangerous in that they give the unprepared reader the impression of being “people in the know.” Did I mention that before paying for a subscription to a cloud model I thoroughly understood exactly how they work? I always do. 
If I get it into my head to drag library &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;xyz&lt;/code&gt; into a project, I start by digging into its source code. When choosing a technology I don’t read adoption success stories and watch benchmarks—I write my own “trinket shop in the Bronx” with it. To use language models in daily work, and actually pay for the privilege, Altman’s feverish ravings and Karpathy’s coquettish napkin-code are not enough for me. I need to understand how it functions under the hood, so as to preemptively avoid disappointment and the collapse of hopes.&lt;/p&gt;&lt;p&gt;In short, those who have learned by instruction to run models and ask them questions pulled from thin air—supposedly testing or even proving something—would frankly be better off keeping quiet. Because everything that has crossed my field of vision resembles hallucinations from the first version of ChatGPT, Joyce’s &lt;em&gt;Ulysses&lt;/em&gt;, Castaneda’s astral flights, and in principle any address at a party congress by the secretary of the Upper Walrusville cell, having gorged himself on fly agaric.&lt;/p&gt;&lt;p&gt;Let me skim glissando—or, as Dovlatov used to say, in dotted lines—across the main points.&lt;/p&gt;&lt;h3&gt;Tests and Benchmarks&lt;/h3&gt;&lt;p&gt;Comparing different models based on sets of cardboard tests and plastic benchmarks is pure, undiluted charlatanism. It’s enough to look at the language summary table at &lt;a href=&quot;https://autocodebench.github.io/&quot;&gt;AutoCodeBench → Experimental Results&lt;/a&gt; (yes, this is sarcasm). Claude Opus 4 fails to hit 50% for TypeScript but clears 80% for Elixir. 
Translating from accountant-speak into plain language (and exaggerating slightly, of course)—if your project is in TypeScript, the model is more of a hindrance; if it’s in Elixir, you can hand it a 1,000-line refactor.&lt;/p&gt;&lt;h3&gt;Context&lt;/h3&gt;&lt;p&gt;RAG in any moderately complex project (or more precisely, its “RA” part) matters hundreds of times more than the model itself. Why do all these Claudes burrow into your computers with their IDEs (and lately—CLIs), do you think?—It’s simple: shipping the entire context to the server every time is expensive and inefficient (and despite the flagship advertising promises, nobody can actually process more than a hundred thousand tokens without noticeable quality loss). So every model sends some kind of “distillate” to the server.&lt;/p&gt;&lt;p&gt;It’s important to understand that any language model operates as a finite state machine: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;start&lt;/code&gt; → &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;context&lt;/code&gt; → &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;query&lt;/code&gt; → &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;response&lt;/code&gt; → &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;stop&lt;/code&gt;. There are no “sessions.” You cannot “launch” a model and converse with it—every request you make starts from a blank slate. Models as such have no “memory”—which is why preserving context is critically important. But unlike Hoffman’s character from &lt;em&gt;Rain Man&lt;/em&gt;, even we cannot memorize six decks of cards—let alone models, with their context window as narrow as the Strait of Hormuz. My slowly-simmering project &lt;a href=&quot;https://hexdocs.pm/ragex/&quot;&gt;Ragex&lt;/a&gt; is an attempt to somehow formalize—and minimize without loss of generality—the context required for processing sizeable codebases. 
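To make the start → context → query → response → stop cycle above concrete, here is a toy sketch (module and function names invented for illustration, not any real client API): the "model" is a pure function of whatever arrives in a single call, so the illusion of a conversation has to live entirely on the caller's side, which accumulates the transcript and resends it every turn.

```elixir
defmodule StatelessModel do
  # A toy stand-in for a model endpoint. It sees only what arrives in
  # this single call: start → context → query → response → stop,
  # then total amnesia.
  def complete(context, query) when is_list(context) do
    "seen #{length(context)} prior turns; answering: #{query}"
  end
end

defmodule Chat do
  # The "session" lives entirely on the client side: every turn
  # appends to the transcript and ships the whole thing again.
  def turn(transcript, query) do
    response = StatelessModel.complete(transcript, query)
    {response, transcript ++ [{query, response}]}
  end
end

{r1, t1} = Chat.turn([], "What is OTP?")
{r2, _t2} = Chat.turn(t1, "And a supervisor?")
IO.puts(r1) # prints "seen 0 prior turns; answering: What is OTP?"
IO.puts(r2) # prints "seen 1 prior turns; answering: And a supervisor?"
```

This is also why curating the context matters so much: the transcript only grows, and nothing on the server side will distill it for you.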
We’ll see whether I manage it—but the fact that I appear to be alone in this on the visible horizon is not exactly inspiring.&lt;/p&gt;&lt;h3&gt;Plans and Reasoning&lt;/h3&gt;&lt;p&gt;When tackling any reasonably non-trivial task, you need to force the model to sketch a plan and ask all the questions that arose for it while creating that plan. It will gladly surrender to you all the internal decision branches from the flowchart. These questions must be answered as clearly as possible—in blunt, clipped phrases that admit no double interpretation. The requirement to supply each phase of the plan with tests and bring all project documentation fully in line with the current state of the codebase must be baked into the general rules.&lt;/p&gt;&lt;p&gt;Each stage calls for a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;git diff&lt;/code&gt; with an informal review. I also always use “thinking” models, and watch those reasoning traces in real time—so that the moment it tries to veer off course (and on non-trivial tasks it will always try)—I can kill the little pest with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Ctrl&lt;/code&gt;+&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;C&lt;/code&gt; and explain why it’s wrong.&lt;/p&gt;&lt;h3&gt;Language&lt;/h3&gt;&lt;p&gt;In my experience, it’s best to use the language in which an answer is easiest to find on the internet. For code—always English; for obscure details of Lorca’s biography—Spanish; for a survey of the sexual services market in Berlin—German.&lt;/p&gt;&lt;p&gt;I have no hard evidence, unfortunately—only a general understanding of how this T9 on steroids operates—but the empirical record is copious. 
My Spanish hovers somewhere around B2–C1 level, so I still try English first; if the result looks underwhelming, I strain my fingers in Spanish—and in the vast majority of cases, I don’t regret it.&lt;/p&gt;&lt;h3&gt;Politeness&lt;/h3&gt;&lt;p&gt;After each successful task completion I say something like “Awesome,” or “Astonishing,” or “Stunning,” or words to that effect. I phrase every request as a request and always add “please.” My behaviour affects carbon emissions about as much as it affects democratic elections in a country of fifty million—but it makes things simpler and more pleasant for me. Besides, as Niels Bohr once said: “Of course I am not superstitious. But they say a horseshoe brings luck even to those who don’t believe in such nonsense.”&lt;/p&gt;</content:encoded>
</item>
<item>
<title>Horizontal Scaling</title>
<link>https://rocket-science.ru/hacking/2023/12/26/horizontal-scaling</link>
<guid isPermaLink="false">h0lE6LRnT3TqEqyaGkLNnn11FcirlP9cJ3Te4A==</guid>
<pubDate>Wed, 25 Mar 2026 16:08:30 +0000</pubDate>
<description>Part four of four key developer skills.</description>
<content:encoded>&lt;p&gt;&lt;em&gt;Part four of &lt;a href=&quot;https://rocket-science.ru/hacking/2023/11/03/software-development-in-2023&quot;&gt;four key developer skills&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;Ability to immediately build a horizontally scalable solution without adding any special code for it in the first version.&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;&lt;img src=&quot;https://rocket-science.ru/img/horizontal-scaling.jpg&quot; alt=&quot;Sometimes it’s better to keep quiet than to speak&quot; title=&quot;&quot;/&gt;&lt;/p&gt;&lt;hr/&gt;&lt;p&gt;This turned out to be the hardest piece in the series, because literally a handful of developers actually understand what “horizontal scaling” is. Shown above is a screenshot of an amusing tweet by Tobi Lütke, which demonstrates either his supreme professionalism in sophistry, or his complete technical incompetence.&lt;/p&gt;&lt;p&gt;Serving ten billion clients does not mean your service is scalable. Buses carry people along Nevsky Prospect, and buses carry people along the Garden Ring; anyone who has completed three grades of parochial logistics school will tell you that speaking of “reliable connections between Leningrad and Moscow” on the basis of this incontrovertible fact is premature.&lt;/p&gt;&lt;p&gt;Even within a single city, the situation can easily spiral out of control. Scaling a bus fleet to match the expanding footprint of a metropolis is not about buying and deploying more buses. I lived in Berlin twenty years ago and could not stop marveling at how well the transport was organized. Judging by today’s passenger reviews, as the city spread outward—BVG did not cope.&lt;/p&gt;&lt;p&gt;I invite the thoughtful reader to pause here and think: so what exactly went wrong with the service’s scaling? 
For the superficial blockheads who can’t be bothered to think through what they’re reading—the answer is: logistics.&lt;/p&gt;&lt;p&gt;Deploying more buses onto the streets of a multi-million-person city doesn’t require much brainpower. But each individual passenger needs exactly one bus: the one that arrives immediately after they step off the previous one. Not an hour later, not a minute too early—within a three-to-five minute transfer window. As long as the bus schedule is built around minimized (and guaranteed) transfer times—we can speak of scaling. If bus A dumps its passengers at the terminal thirty seconds after connecting bus B has already pulled away—scaling has failed.&lt;/p&gt;&lt;p&gt;All right, to hell with Berlin—even with good public transport, living there was impossible. Let’s get back to Tobi.&lt;/p&gt;&lt;p&gt;Each Shopify user requires one persistent connection (a WebSocket) to one server. Adding new users is simply adding servers to the rack. If there are any scaling problems—they’re all clustered around the database, not Rails itself. Rails doesn’t care at all how many total servers are serving users: each server only needs to handle its own load.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;This is not scaling. This is adding isolated capacity. It’s like if the zoom on your phone’s camera did not actually zoom in without unacceptable quality loss, but simply tiled more tiny copies of the image.&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;What the computing industry calls scaling is adding new hardware resources that increase the capacity of a &lt;em&gt;connected&lt;/em&gt; node. Deploying more buses on new streets—no. Changing the schedule so that new buses coordinate with the existing ones—yes.&lt;/p&gt;&lt;p&gt;A chess server, for example, requires no scaling whatsoever. The server overheating?—Put another one next to it; in the end we’re serving disconnected pairs of players anyway. 
A server for the simultaneous slaughter of a billion sweaty nerds, on the other hand—that one needs scaling.&lt;/p&gt;&lt;p&gt;Let me briefly mention one example from personal practice before getting to the point. We process an incoming stream of currency exchange rates, perform some mathematical manipulations on them, and spit out peculiar results. Two hundred-plus currencies—that’s forty thousand pairs—with values arriving on average roughly once a second (in practice, more often). The mathematical manipulations can involve any number of currencies. All of this happens in real time. Because values must be available at any given moment, we cannot simply partition the pairs and split the streams across multiple servers. And a single server physically cannot handle it. Which means every node in the cluster must be able to exchange information with every other node—and the information is not static, so Redis won’t do. This is where the architecture needs the ability to scale horizontally.&lt;/p&gt;&lt;hr/&gt;&lt;p&gt;The preamble has dragged on a bit. I hope I’ve at least clarified the terminology somewhat. So: if in the course of discussing an architecture you’ve concluded that the project will need genuine horizontal scaling—you cannot do without finite state machines. (In general it’s better to build all business logic on FSMs, but in a standalone system you can hobble along without them—in a cluster, there’s no way.) I rarely recommend reference material, but &lt;a href=&quot;https://en.wikipedia.org/wiki/Introduction_to_the_Theory_of_Computation&quot;&gt;Introduction to the Theory of Computation&lt;/a&gt; by Michael Sipser is well worth a look. 
A finite state machine—for all its apparent simplicity—is something so powerful it genuinely astonishes.&lt;/p&gt;&lt;p&gt;Unfortunately, many believe that an FSM is simply a set of states—an extremely dangerous misconception that completely negates the entire body of formal mathematics upon which the power of finite state machines rests.&lt;/p&gt;&lt;p&gt;In any case, if you want to be ready to scale horizontally—build critical processes on finite state machines and make them fully asynchronous. If subsystem A must interact with subsystem B—forget about direct calls. In HTTP terms—&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;201&lt;/code&gt; is good, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;200&lt;/code&gt; is dreadful. Under no circumstances will you ever be able to later convert a request→response call sequence into request→acknowledgement→await response (without the destructive force of a complete refactor).&lt;/p&gt;&lt;p&gt;Asynchronous interactions built on top of FSMs, on the other hand, will make future scaling painless—because in this paradigm it makes absolutely no difference on which node the code handling a request actually runs.&lt;/p&gt;&lt;p&gt;There are languages in which this is easier (Elixir, Erlang), and those in which it’s harder. But in principle it is achievable in any environment. Once you get the hang of it, writing asynchronous code becomes no harder than synchronous. I speak from experience.&lt;/p&gt;&lt;hr/&gt;&lt;p&gt;And finally, the perfect litmus test for determining whether your system is truly distributed or just a few servers standing in corners. If you’ve ever had to decide which letter from the &lt;a href=&quot;https://en.wikipedia.org/wiki/CAP_theorem&quot;&gt;CAP theorem&lt;/a&gt; set—‘C,’ ‘A,’ ‘P’—to sacrifice, you most likely have a genuine scalable cluster. 
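The request→acknowledgement→await-response shape discussed above can be sketched with a bare GenServer (module names invented for illustration; a real FSM discipline on top of this is what libraries in this space provide): the caller gets an immediate acknowledgement, and the result arrives later as a plain message, so it makes no difference on which node the worker actually runs.

```elixir
defmodule AsyncWorker do
  use GenServer

  def start_link(_opts), do: GenServer.start_link(__MODULE__, :idle, name: __MODULE__)

  @impl true
  def init(state), do: {:ok, state}

  # Fire and forget: the cast returns :ok immediately, the
  # acknowledgement of process-land (HTTP 201, not 200).
  def request(payload, reply_to), do: GenServer.cast(__MODULE__, {:work, payload, reply_to})

  @impl true
  def handle_cast({:work, payload, reply_to}, state) do
    # The actual work happens asynchronously; the response travels
    # back as a plain message, whenever it is ready.
    send(reply_to, {:done, String.upcase(payload)})
    {:noreply, state}
  end
end

{:ok, _pid} = AsyncWorker.start_link([])
:ok = AsyncWorker.request("hello", self())
# The caller is free to do anything at all here...
receive do
  {:done, result} -> IO.puts(result) # prints "HELLO"
after
  1_000 -> IO.puts("timed out")
end
```

Converting this into a synchronous `GenServer.call/3` is trivial; going the other way, as the text notes, means a complete refactor.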
If not—forget about horizontal scaling and simply add capacity as the business blooms.&lt;/p&gt;&lt;p&gt;And while I have the opportunity, I can’t resist recommending my own library &lt;a href=&quot;https://hexdocs.pm/finitomata&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Finitomata&lt;/code&gt;&lt;/a&gt;, which I prototyped in &lt;a href=&quot;https://www.idris-lang.org/&quot;&gt;Idris&lt;/a&gt; and which is designed to be completely asynchronous (there is no way to determine from a response whether a state transition succeeded)—it provably prevents the programmer from violating a single law of finite state machine management.&lt;/p&gt;</content:encoded>
</item>
<item>
<title>YAGNIN, but YAGNIL</title>
<link>https://rocket-science.ru/hacking/2023/12/25/yagnin-but-yagnil</link>
<guid isPermaLink="false">eowT76W406F2_m8S3IEHshTfq4RqAFt5nXDUUw==</guid>
<pubDate>Wed, 25 Mar 2026 16:08:30 +0000</pubDate>
<description>Part three of four key developer skills.</description>
<content:encoded>&lt;p&gt;&lt;em&gt;Part three of &lt;a href=&quot;https://rocket-science.ru/hacking/2023/11/03/software-development-in-2023&quot;&gt;four key developer skills&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;Ability to design software such that the first version contains not a single line “in anticipation of future changes,” while future changes don’t touch existing code in any way.&lt;/p&gt;&lt;/blockquote&gt;&lt;hr/&gt;&lt;p&gt;One of the most repulsive “principles” of development to emerge in the last decade can safely be identified as &lt;a href=&quot;https://en.wikipedia.org/wiki/You_aren%27t_gonna_need_it&quot;&gt;YAGNI&lt;/a&gt;. Despite growing out of Donald Knuth’s coquettish but not entirely thoughtless “premature optimization is the root of all evil”—stated half a century ago—the modern interpretation hands idlers, under-qualified practitioners, and outright scoundrels enormous latitude to dodge solving problems correctly.&lt;/p&gt;&lt;p&gt;The correct formulation is right there in the title of this post: “You aren’t gonna need it &lt;em&gt;now&lt;/em&gt;, but you are gonna need it &lt;em&gt;later&lt;/em&gt;.” If that were not so—if “architecting for future requirements / applications turns out net-positive” (in John Carmack’s words) really were needed only rarely—we would never encounter the reaction “this code is easier to rewrite from scratch than to change.” The situation “we’ll need to refactor three modules deep” would never arise when adding a new parameter to a function call. People would never break backward compatibility—since Ron Jeffries’ quote yanked from context (available at the link above) promises us simple addition of new things as needed.&lt;/p&gt;&lt;p&gt;Unfortunately, none of the above works. 
At least not out of the box, in the form in which the programming extremists try to sell it to us.&lt;/p&gt;&lt;p&gt;On the other hand, Knuth, Jeffries, and Carmack are respectable people who have each written heaps of code tested by decades—and most likely wouldn’t talk complete nonsense.&lt;/p&gt;&lt;p&gt;As always, it’s all in the nuances. Immediately implementing every potential feature that might someday pop into the head of a deranged client is obviously a thankless task. You shouldn’t do that, and you can’t guess all those potential ideas right now anyway. But you must be &lt;em&gt;prepared&lt;/em&gt; for them.&lt;/p&gt;&lt;p&gt;What does that mean?—Let me try to explain with an example.&lt;/p&gt;&lt;p&gt;Suppose the task at hand is writing a Mastodon text publisher. A command-line utility, literally. Here—a text file, there—a toot. Called with something like &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;$ post file.txt&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;The first thing to settle: will anyone else be using this? Because if we’re talking about a single-user program that runs once every five years on my machine—you don’t even need to handle errors. It crashes with an exception?—I’ll tweak it and restart, no big deal.&lt;/p&gt;&lt;p&gt;But if we’re talking about a project for actual people—whether a paying client or open-source freebie-hunters—things become somewhat more complex. And it’s not even about bugs: loyal users will forgive bugs, and they’re easy enough to fix. The point is that if the project is going to grow, you need to prepare for that a little. 
Because rewriting everything from scratch with every new requirement is fun, but inefficient.&lt;/p&gt;&lt;p&gt;So where might it grow?&lt;/p&gt;&lt;ul&gt;&lt;li&gt;a new publishing service gets added;&lt;/li&gt;&lt;li&gt;a new text format gets added;&lt;/li&gt;&lt;li&gt;batch processing gets added;&lt;/li&gt;&lt;li&gt;something else, probably—but that’ll do for now.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;It’s time to utter the two magic words: &lt;em&gt;dependency injection&lt;/em&gt;. Everywhere you expect requirements to scale outward, you can give yourself insurance without writing a single superfluous line. Instead of hardcoding it like this:&lt;/p&gt;&lt;div&gt;&lt;div&gt;&lt;pre&gt;&lt;code&gt;@spec publish(String.t()) :: :ok
def publish(text) do
  text
  |&amp;gt; Markdown.format()
  |&amp;gt; Mastodon.publish()
end&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;you can inject dependencies, with a sensible default:&lt;/p&gt;&lt;div&gt;&lt;div&gt;&lt;pre&gt;&lt;code&gt;@spec publish(String.t() | [String.t()], Formatter.t(), Publisher.t()) :: :ok
def publish(texts, formatter \\ Markdown, publisher \\ Mastodon) do
  texts
  |&amp;gt; List.wrap() # accept a single string as well
  |&amp;gt; Enum.each(fn text -&amp;gt;
    text
    |&amp;gt; formatter.format()
    |&amp;gt; publisher.publish()
  end)
end&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;Done. This code is now ready to accept multiple texts at once (as well as a single one), format with anything, and publish anywhere. We’ve covered all three of the potential future requirements that came to mind, without writing a single superfluous line of code (well, about 60 characters were added—apologies).&lt;/p&gt;&lt;p&gt;Of course, we cannot anticipate all of tomorrow’s needs. But some are so obviously staring you in the face that you can handle them without even getting up from your chair. Everyone knows that magic constants are bad. 
So people do this:&lt;/p&gt;&lt;div&gt;&lt;div&gt;&lt;pre&gt;&lt;code&gt;@pi 3.14159265
def pi, do: @pi&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;There’s no need to do it this way, for two reasons: if it’s π, it will never change, and no one will confuse it with the CEO’s full name. But if it’s something that might change tomorrow—say, the maximum number of characters for a repost—&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;@max_symbols 140&lt;/code&gt; won’t help much: you’ll still have to find it and change it directly in the code. This, on the other hand, is good:&lt;/p&gt;&lt;div&gt;&lt;div&gt;&lt;pre&gt;&lt;code&gt;@max_symbols Application.compile_env(:my_app, :max_symbols, 140)
def truncate(text, symbols \\ @max_symbols),
  do: String.slice(text, 0..(symbols - 1))&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;Need to change the default value?—Welcome to the config. Need the ability to truncate at different lengths?—Here’s a parameter, hello.&lt;/p&gt;&lt;p&gt;In fact, this example scales easily to any “grey zone” of architectural change. The architecture should not change, in principle: the &lt;em&gt;parts&lt;/em&gt; should change. The business is young and there are only five clients so far?—Then the function for sending a New Year’s greeting by email (forgive the example) should be ready to parallelize. It’s a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;MailSender&lt;/code&gt; that exports a single function &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;send/2&lt;/code&gt;, which for now accepts a text and a list of email addresses &lt;em&gt;and simply iterates over them sequentially&lt;/em&gt;. When the hundredth client arrives—we’ll rewrite it to use multiple threads, without touching the rest of the code—and that’s it.&lt;/p&gt;&lt;hr/&gt;&lt;p&gt;Strive to ensure that every operation in your code is handled by a separately taken, extremely simple, isolated, and tested module. 
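The MailSender just described might start life as the following sketch (hypothetical module, delivery stubbed out with IO.puts): callers see only `send/2`, so the sequential body inside can later be swapped for `Task.async_stream/3` without touching a single caller.

```elixir
defmodule MailSender do
  # The single exported function; callers never learn how delivery
  # happens, so the sequential body below can later be replaced by
  # Task.async_stream/3 without touching any caller.
  @spec send(String.t(), [String.t()]) :: :ok
  def send(text, emails) do
    Enum.each(emails, fn email -> deliver(text, email) end)
  end

  # Stand-in for a real SMTP call.
  defp deliver(text, email), do: IO.puts("to #{email}: #{text}")
end

:ok = MailSender.send("Happy New Year!", ["a@example.com", "b@example.com"])
```

Nothing here anticipates parallelism, yet nothing stands in its way either: that is the whole point.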
After all, one can spend a lifetime preparing for the day the toilet in one’s bathroom gets clogged, and achieve mastery in the art of unblocking it. Or, when the hour comes, one can simply call a plumber who does only that—but does it well. That is &lt;em&gt;dependency injection&lt;/em&gt;.&lt;/p&gt;</content:encoded>
</item>
<item>
<title>Divide et Impera</title>
<link>https://rocket-science.ru/hacking/2023/11/09/divide-et-impera</link>
<guid isPermaLink="false">VstpXRVUK8QlOrKw29pN69cwqOXxWQlqnQrTxg==</guid>
<pubDate>Wed, 25 Mar 2026 16:08:30 +0000</pubDate>
<description>Part two of four key developer skills.</description>
<content:encoded>&lt;p&gt;&lt;em&gt;Part two of &lt;a href=&quot;https://rocket-science.ru/hacking/2023/11/03/software-development-in-2023&quot;&gt;four key developer skills&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;Ability to break the whole into parts and implement small pieces of a large system in complete isolation from one another.&lt;/p&gt;&lt;/blockquote&gt;&lt;hr/&gt;&lt;p&gt;“Divide and conquer”—a time-tested slogan reflecting the strategy for managing large projects, empires included. In English it’s usually rendered as “divide and rule” or “divide and conquer.” You’ll notice that the second verb varies considerably across time—from “conquer” through “rule” to “govern.” The first, however, is immovable, cast in bronze: &lt;em&gt;divide&lt;/em&gt;. Slash. Cut the excess from the marble block. Gnaw off your own leg to escape the trap. But I digress.&lt;/p&gt;&lt;p&gt;Whenever humanity invents something new, it first tries to master it by brute force, and then applies well-worn tactics. Programming was no different: at first everyone rushed to bash out code, attempting to write “War and Peace,” and then decided to muddy the waters and, like wound-up toys, began inventing all manner of practices, and—forgive the word—patterns.&lt;/p&gt;&lt;p&gt;I never knew what “&lt;a href=&quot;https://en.wikipedia.org/wiki/SOLID&quot;&gt;SOLID&lt;/a&gt;” stands for, because apart from SRP, all the other principles in there are extremely contradictory and generally inapplicable due to their incorrectness. SRP, however, is applicable at all times, and in any application—it ennobles the code, often beyond recognition.&lt;/p&gt;&lt;p&gt;In any course of lectures, in any conference talk, I always say: take scrupulous care to ensure that every piece of your code performs exactly one task, exposes a clear API, and is packaged as a library—not dragged wholesale into the main project. 
Need spline interpolation?—A library, with its own tests, with a single exported function. Some peculiar grouping operation?—A library. A different grouping?—Extend the exported API of the previous library. Logging in the application?—A library, your own wrapper over the standard logger. And so on. Only bare, dry business logic has the right to live in the application itself.&lt;/p&gt;&lt;p&gt;This approach comes with several pleasant side effects at once:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;if a bug is found, it’s far easier to fix in the library than in a gigantic project;&lt;/li&gt;&lt;li&gt;if a colleague needs logging, they’ll use your implementation and the logs will be consistent;&lt;/li&gt;&lt;li&gt;if you want to test calls to the library, it can export &lt;a href=&quot;https://hexdocs.pm/finitomata/Finitomata.ExUnit.html&quot;&gt;convenient helpers&lt;/a&gt; for that purpose;&lt;/li&gt;&lt;li&gt;if the library needs new capabilities, you can cover the new API with &lt;a href=&quot;https://dashbit.co/blog/mocks-and-explicit-contracts&quot;&gt;mocks&lt;/a&gt; in five minutes and continue developing the application;&lt;/li&gt;&lt;li&gt;if the library isn’t coupled to business logic, you can always open-source it and get +5 to testing, +10 to new features, and +50 to karma.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;When I hear that “this feature can’t be started until that ticket over there is closed,” I want to strike the speaker with a keyboard. How are you going to test it, you muppet, if it’s coupled to third-party code? How can you be confident it works at all, if without someone else’s code—it won’t even start? 
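The "mocks and explicit contracts" point from the list above can be sketched with a behaviour and a test double (all names invented for illustration): the business logic depends on the contract, never on a concrete implementation, so it compiles and is testable with no third-party code attached at all.

```elixir
defmodule AppLogger do
  # The explicit contract: anything that wants to log implements this.
  @callback log(String.t()) :: :ok
end

defmodule StdoutLogger do
  @behaviour AppLogger
  @impl true
  def log(message), do: IO.puts(message)
end

defmodule FakeLogger do
  # A test double honouring the same contract: instead of printing,
  # it drops the message into the caller's mailbox for inspection.
  @behaviour AppLogger
  @impl true
  def log(message) do
    send(self(), {:logged, message})
    :ok
  end
end

defmodule Business do
  # Bare business logic; the logger is injected, so this module is
  # testable in complete isolation.
  def greet(name, logger \\ StdoutLogger) do
    logger.log("greeting #{name}")
    "Hello, #{name}!"
  end
end

"Hello, Joe!" = Business.greet("Joe", FakeLogger)
receive do
  {:logged, line} -> IO.puts("captured: #{line}")
end
```

No ticket anywhere needs to be closed before `Business` can be written and tested.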
Why on earth did you go into programming, you numpty, when there are so many wonderful professions around—excavator operator, poet, TikToker.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;If anyone tells you that code cannot be tested without other code, or that this particular function is designed to process data, write to the database, and brew coffee simultaneously—that person is a saboteur and should be shown the door.&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;Any code can be tested in isolation from the project (integration tests are an entirely different matter). Any code doing three things at once can be split into three different places and tested separately, independently of one another. Anyone currently itching to produce a counterexample—would do well to cool down first and try following the advice given above. It’s rather stupid to suddenly make a fool of yourself over such a trifle.&lt;/p&gt;&lt;p&gt;Strive to ensure that every cup in your dinner service can be used on its own. It’s foolish to take out, dirty, and then wash the soup tureen and 24 saucers—just because you had a cup of coffee.&lt;/p&gt;</content:encoded>
</item>
<item>
<title>No Failures Despite Bugs</title>
<link>https://rocket-science.ru/hacking/2023/11/04/no-failures-despite-bugs</link>
<guid isPermaLink="false">ussLTPHdlwgXT4MY7mgftSl8NgsgUTugUtzOrQ==</guid>
<pubDate>Wed, 25 Mar 2026 16:08:30 +0000</pubDate>
<description>Part one of four key developer skills.</description>
<content:encoded>&lt;p&gt;&lt;em&gt;Part one of &lt;a href=&quot;https://rocket-science.ru/hacking/2023/11/03/software-development-in-2023&quot;&gt;four key developer skills&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;Ability to guarantee fault tolerance even in the presence of bugs in the code.&lt;/p&gt;&lt;/blockquote&gt;&lt;hr/&gt;&lt;blockquote&gt;&lt;p&gt;Large systems will probably always be delivered containing a number of errors in the software, nevertheless such systems are expected to behave in a reasonable manner.
~ Joe Armstrong&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;Ever since the concept of “software” came into existence, people have been forced to live with the fact that it doesn’t always work as expected. Whether it’s a cockroach in the system unit, rats gnawing through a cable, a Norwegian construction worker accidentally slicing it with an excavator bucket, or even—terrifying to utter—errors in the software, introduced by a developer.&lt;/p&gt;&lt;p&gt;People have desperately fought against bugs in source code—Donald Knuth even &lt;a href=&quot;https://tex.stackexchange.com/a/113658&quot;&gt;established a reward&lt;/a&gt; for problems found in TeX. Unfortunately, Knuth’s approach scales rather poorly. In other words, we cannot ask him to write all the software in the world.&lt;/p&gt;&lt;p&gt;To minimize the number of bugs reaching users, humanity invented tests, types, code reviews, static analysis, and lord knows what else. But spending a couple of hours simply browsing the internet is enough to observe firsthand: none of it works worth a damn. Well, of course, if you measure KPIs, or whatever they measure, the first derivative shows a reduction in the number of bugs, and the second—the speed of that reduction. But if you just imagine yourself to be an ordinary person—nothing works.&lt;/p&gt;&lt;p&gt;Some bugs are non-critical, though still annoying. Literally just the other day I witnessed a user forget to switch their keyboard layout while entering a password into Gmail, and Google displayed an “Error 403” page. Seems fine: the login failed, there’s a specific error code for that, here’s your grenade. The user was thoroughly stumped by this page, decided they had done something forbidden—and wrote to me in near-panic. 
Grumbling about “user literacy” I leave with distaste to the dimwits who, themselves, think nothing of taking their car to a mechanic, dining at restaurants, and calling an electrician to tighten a tap.&lt;/p&gt;&lt;p&gt;Users will make mistakes—that’s normal. What’s not normal is showing them a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;403&lt;/code&gt; instead of simply redirecting them back to the login page. But all of this, I repeat, is non-critical.&lt;/p&gt;&lt;p&gt;The core problem is that programmers make mistakes too. And they will continue to make mistakes. And no tests-schmests and types-schmypes will save them (us) from this. Sooner or later we will introduce a critical bug. And our goal is to protect ourselves from that proactively.&lt;/p&gt;&lt;p&gt;The human brain is a complex thing, but one fact about it is well-established: it distorts objective reality, most often in the direction most convenient to us (the opposite behavior requires at least outpatient treatment). When I test the behavior of my piece of …ahem… code, I knowingly deceive myself that these two obvious and three non-obvious cases will cover all possible scenarios. Then the tester arrives and tries to pass a lizard as a parameter. The programmer fixes the code. And then the &lt;a href=&quot;https://www.reddit.com/r/Jokes/comments/prdi4x/a_software_tester_walks_into_a_bar/&quot;&gt;product ships to users&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;In every piece of writing longer than a limerick I recommend &lt;a href=&quot;https://en.wikipedia.org/wiki/Property_testing&quot;&gt;Property Based Testing&lt;/a&gt;—I’ll recommend it here too. This technique will help relieve the tester of duty and keep his lizards at home. 
But it won’t protect against a non-obvious bug that manifests, for example, when the clocks are moved forward for daylight saving time.&lt;/p&gt;&lt;p&gt;So what is to be done?—Just listen to what Joe Armstrong was saying more than thirty years ago: bugs in code will always exist. Trying to eliminate them with tests and types is like defending yourself against a downpour by putting a plastic bag on your head: your head stays dry, but you look like an idiot, and everything below the neck is soaked through.&lt;/p&gt;&lt;p&gt;What then? Well, for starters, accept that bugs will always exist. You won’t be able to fix them all (unless you are Donald Knuth, of course). After the “acceptance” stage—you might try reading what Joe goes on to say about this.&lt;/p&gt;&lt;p&gt;And what he says is roughly this: in all unexpected cases—stop execution. Do not try to cover every possible path a situation might take. Handle success and the expected error where that makes sense (for example, on a failed login—redirect to the login form). In all other cases, including a lizard crawling in over the network—stop execution. In Erlang, this principle gave the language its famous slogan “Let it crash!” Which means: if something has gone wrong—fail immediately. Code only the narrow road of correct, expected execution—as narrow as a Formula One track.&lt;/p&gt;&lt;p&gt;Then the bugs you do introduce will also be handled—not all of them, but many.&lt;/p&gt;&lt;p&gt;Now simply restart the execution that led to the failure, with the same input data. Something unforeseen may have happened—a connection limit exceeded, a third-party service not responding, anything at all. This is not the time to investigate; just try again. Yes, it’s like the famous “have you tried turning it off and on again”—and it works sometimes. If something else depended on your piece—restart that too. Automate this restart so you don’t have to write it from scratch every time or copy-paste it from the previous project. 
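The restart-with-the-same-input loop just described, as a deliberately naive sketch (a far cry from a production OTP supervisor): retry a bounded number of times, then log the failure and park the offending input without losing it.

```elixir
defmodule Retrier do
  # Run fun.(input); on any crash, restart with the very same input.
  # Once the attempts run out, log the failure and park the input
  # (returning it to the caller, so nothing is lost).
  def run(fun, input, attempts \\ 3)

  def run(_fun, input, 0) do
    IO.puts("giving up, parking input: #{inspect(input)}")
    {:skipped, input}
  end

  def run(fun, input, attempts) do
    {:ok, fun.(input)}
  rescue
    error ->
      IO.puts("attempt failed (#{Exception.message(error)}), restarting")
      run(fun, input, attempts - 1)
  end
end

# The happy path succeeds on the first try:
{:ok, 4} = Retrier.run(fn x -> x * 2 end, 2)
# A hopeless input is retried, then parked, never swallowed:
{:skipped, :poison} = Retrier.run(fn _ -> raise "boom" end, :poison)
```

In Erlang/Elixir the equivalent of this loop arrives for free from a supervisor's restart strategy; the sketch only shows how little magic is involved.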
If after a number of attempts it still hasn’t worked—dutifully dump the error to the log and skip this particular set of input data (without losing it).&lt;/p&gt;&lt;p&gt;Congratulations—we’ve just reinvented an underspecified, buggy, and slow implementation of half an Erlang supervisor tree. The Kubernetes people went roughly down this road too: they even managed to sell their pitiful copy of OTP to the people who swim exclusively in the mainstream.&lt;/p&gt;&lt;p&gt;With this simple trick you can forever protect yourself against unexpected failures: if a failure is normal, expected behavior of your program—it automatically transforms from a repulsive caterpillar into a beautiful butterfly. The most remarkable thing is that such an ecosystem is, in principle, not terribly difficult to implement even in Go—but the language’s concepts inherently assume violence against the programmer rather than lightening his burden, so I doubt anything like this will ever appear there. Especially given that Kubernetes exists and you can just not bother—simply restart the entire world, cooling down the misplaced enthusiasm of the cache, serving &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;503&lt;/code&gt;s, and generally tormenting your users in every way imaginable; but the entire industry has spent decades working to ensure that users have grown accustomed to being hated and nothing working.&lt;/p&gt;&lt;p&gt;&lt;a href=&quot;https://twitter.com/guieevc/status/1002494428748140544&quot;&gt;90% of all internet traffic is handled by Erlang&lt;/a&gt;—and this is no accident whatsoever.&lt;/p&gt;&lt;p&gt;Allow me to quote one more passage, explaining precisely what problem Joe Armstrong was solving (and solved).&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;At the time, Ericsson built large telephone exchanges that had hundreds of thousands of users, and a key requirement in building these was that they should never go down. 
In other words, they had to be completely fault-tolerant.&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;Like all brilliant solutions in life, this one turned out to be fairly simple. To simplify it even further—one must completely forbid object mutability, implement lightweight processes, and allow them to communicate exclusively through asynchronous messages. Then everything described above simply falls into your lap as a gift: supervisor trees practically require no special implementation—they emerge almost out of the box on their own. But that is perhaps a subject better left for next time.&lt;/p&gt;</content:encoded>
</item>
<item>
<title>Software Development in 2023</title>
<link>https://rocket-science.ru/hacking/2023/11/03/software-development-in-2023</link>
<guid isPermaLink="false">DyBDyHVjpKjGIE3nN2j_xJO236ha5cATgaIPRg==</guid>
<pubDate>Wed, 25 Mar 2026 16:08:30 +0000</pubDate>
<description>Four skills that actually matter in 2023, or: a blanket glorification of Elixir masquerading as a balanced opinion piece.</description>
<content:encoded>&lt;p&gt;I was tempted to call this piece “five myths” to fully merge in ecstasy with the bullshit that has been pouring into the ears of the conscientious reader lately, but it seemed like overkill.&lt;/p&gt;

&lt;p&gt;I’ve been writing code since 1986, when my Euclidean algorithm in Fortran ran on the third attempt from punch cards on an ES-1060. Since then I’ve written in so many different languages that I’ve lost count. For roughly a dozen of them I’ve been paid actual money. All my life I carefully avoided any managerial roles, but CEOs turned out to be perceptive, and after some time I’d find myself with actual human beings reporting directly to me. I wanted to write code and wasn’t particularly dreaming of associating with my own kind.&lt;/p&gt;

&lt;p&gt;The greatest compliment I’ve ever received in my life was a casually dropped phrase by Roma Ivanov, when at some meeting about product development at Yandex (the only time in my career when my job title on the payroll contained the word “manager”), I started going deep into technical implementation details and someone waved their hand: “What does it matter, you won’t be the one implementing it anyway!” Roman snorted and loudly muttered: “Him?—He will.”&lt;/p&gt;

&lt;p&gt;When the HR department at my current firm went berserk and issued a demand—for that salary, this person ought to have their own team—the CTO met me halfway once again, and now I officially head “Aleksei Team,” consisting of one person.&lt;/p&gt;

&lt;hr/&gt;

&lt;p&gt;In the time I’ve been pressing buttons on a keyboard for sustenance, several eras have come and gone. Someone else’s code finally acquired something resembling documentation. Programming languages proliferated—written by clever people, professionals in their field, who mistakenly assumed that a programmer was, by default, not stupid, and should be given as many capabilities as possible. Perl, Ruby, even Python were created to make the process of writing code exciting, and the programmer’s expressive power practically limitless. This substantially lowered the profession’s entry threshold and spawned a crowd of craftsmen who literally achieved results by random poking. When the fraction of dimwits in the profession exceeded any conceivable limit, something had to be done, and the most farsighted ones devised making it as difficult as possible to write non-working code: thus tests appeared, and in their wake—types (in those languages where they’re not needed in the slightest).&lt;/p&gt;

&lt;p&gt;Matz was creating a language in which literally nothing would obstruct the programmer, and that’s exactly why Ruby is adored by its devotees (of which I am one). But this approach works well only on the condition that the developer actually knows what they’re doing. Unfortunately, in the modern world one cannot count on that at all, which is why the coherent and actually quite interesting-in-concept &lt;em&gt;JavaScript&lt;/em&gt; grew heavy with ponderous and pointless &lt;em&gt;TypeScript&lt;/em&gt;. &lt;em&gt;Python&lt;/em&gt;—with those damned annotations that literally affect nothing. &lt;em&gt;Perl&lt;/em&gt; was simply forgotten, because the nearly unlimited power of its syntax demands too significant a cognitive effort. &lt;em&gt;PHP&lt;/em&gt; lives only thanks to Facebook.&lt;/p&gt;

&lt;p&gt;Amid all this obscurantism, people with pretensions pulled &lt;em&gt;Haskell&lt;/em&gt; out of a dusty closet (which I worked with in 2000–2002, and which ultimately ruined our project back then due to the absence of dependent types). It is 2023, and there are still no dependent types, which makes the choice of Haskell rather dubious for any project—given the absence of any pronounced upsides and the enormous number of equally pronounced downsides (which, in principle, the authors never concealed: the language is academic, experimental, “avoid success at all costs,” and so on). Idris, ideologically absolutely correct, turned out to be too complex for dysfunctional poseurs and is essentially being developed by its author alone.&lt;/p&gt;

&lt;p&gt;Also, while we were striding in seven-league boots toward the bright future, the language support teams completely ignored the thoroughly changed paradigm of use. While the community in chorus was hammering type wedges between the tightly fitted logs of code and pursuing one hundred percent test coverage, computers became multi-core and the network became more accessible than hard drives.&lt;/p&gt;

&lt;p&gt;In the modern world, fast algorithms are not needed (because the bottleneck will most likely be elsewhere). Micro-optimizations are not needed (because throwing more memory at the problem is cheaper than debugging an assembly insert). Approaches and practices dragged in from the previous century become meaningless—things like properly naming variables and splitting code into modules.&lt;/p&gt;

&lt;p&gt;On the other hand, people who understand the direction of applied programming’s evolution and can write code that scales in every direction simply by plugging in additional capacity are worth their weight in gold. It suddenly turned out that such people existed even in the era of antediluvian computers; &lt;a href=&quot;https://erlang.org/download/armstrong_thesis_2003.pdf&quot;&gt;Joe Armstrong’s dissertation&lt;/a&gt; contains everything one needs to understand about designing fault-tolerant distributed systems, and &lt;a href=&quot;https://erlang.org&quot;&gt;erlang&lt;/a&gt;—everything required for writing such code.&lt;/p&gt;

&lt;p&gt;I’ve often heard from various people that Erlang’s syntax is …umm… a bit exotic. I strongly disagree with this claim (like all other brilliant decisions, Joe took it from real life—it mirrors the syntax of the English language), but I’ve still chosen &lt;a href=&quot;https://elixir-lang.org&quot;&gt;elixir&lt;/a&gt; for my work—not for the grammar, of course, but for its truly impeccably implemented metaprogramming. Comparable metaprogramming existed only in LISP, but LISP is too cumbersome and its infrastructure is dreadful.&lt;/p&gt;

&lt;p&gt;So, to keep this text from appearing to be blanket praise for Elixir (which it genuinely is, but I’d like to conceal that fact)—let me present the four most important skills, in my not entirely amateur opinion, that are in demand in 2023:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://rocket-science.ru/hacking/2023/11/04/no-failures-despite-bugs&quot;&gt;ability to guarantee fault tolerance even in the presence of bugs in the code&lt;/a&gt;;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://rocket-science.ru/hacking/2023/11/09/divide-et-impera&quot;&gt;ability to break the whole into parts and implement small pieces of a large system in complete isolation from one another&lt;/a&gt;;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://rocket-science.ru/hacking/2023/12/25/yagnin-but-yagnil&quot;&gt;ability to design software such that the first version contains not a single line “in anticipation of future changes,” while future changes don’t touch existing code in any way&lt;/a&gt;;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://rocket-science.ru/hacking/2023/12/26/horizontal-scaling&quot;&gt;ability to immediately build a horizontally scalable solution without adding any special code for it in the first version&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And that’s about the lot. As for all those algorithms, optimizations, memory consumption…—well, it’s better if you can estimate algorithmic complexity and avoid traversing a list back and forth with nested loops. Probably &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;O(n⁵)&lt;/code&gt; is worth avoiding right from the start. But pursuing &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;O(log n)&lt;/code&gt; instead of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;O(n)&lt;/code&gt; is almost never necessary, and if it is—there’s almost certainly a working implementation out there already. “You can’t read the entire table into memory!”—a colleague once told me, in horror. I asked him: “Why?”—“There could be a million records!” At that point there were about five hundred records, and each was under a kilobyte. I waved my hand: “Even a billion—this machine has 120 gigs of RAM; if we hit failures, we’ll add paging.”&lt;/p&gt;
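
&lt;p&gt;Reading “the entire table” really is a one-clause function. A purely illustrative Elixir sketch (an in-memory list stands in for the table):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Illustrative sketch: a paging API whose first version simply returns
# everything. @records stands in for the database table.
defmodule Pages do
  @records Enum.to_list(1..500)

  def get_page(from \\ 0, count \\ -1)
  def get_page(0, -1), do: @records

  def get_page(from, count) when count &amp;gt; 0 do
    @records |&amp;gt; Enum.drop(from) |&amp;gt; Enum.take(count)
  end
end&lt;/code&gt;&lt;/pre&gt;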

&lt;p&gt;There was already a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;get_page(from = 0, count = -1)&lt;/code&gt; function in my code that always returned the entire table. It’s four years old now, and it will be implemented more seriously someday. Around 2037.&lt;/p&gt;</content:encoded>
</item>
<item>
<title>BEAM Metrics in ClickHouse – Andrea Leopardi</title>
<link>https://andrealeopardi.com/posts/beam-metrics-in-clickhouse/</link>
<enclosure type="image/jpeg" length="0" url="https://andrealeopardi.com/posts/beam-metrics-in-clickhouse/cover-image.png"></enclosure>
<guid isPermaLink="false">E8oAHIBgdTcTxrHA4ZzJChfjOfUE1cFXDZeWvA==</guid>
<pubDate>Wed, 18 Mar 2026 16:51:45 +0000</pubDate>
<description>How we are periodically dumping metrics about our most demanding BEAM processes into an easy-to-query ClickHouse table.</description>
<content:encoded>&lt;p&gt;At &lt;a href=&quot;https://knock.app&quot;&gt;Knock&lt;/a&gt;, we&amp;#39;re big into observability. Who isn&amp;#39;t! We&amp;#39;re also huge fans of &lt;a href=&quot;https://clickhouse.com&quot;&gt;ClickHouse&lt;/a&gt;. This post is an overview of how we started using ClickHouse to collect detailed, high-cardinality BEAM metrics for all sorts of reasons.&lt;/p&gt;

&lt;p&gt;We use Datadog for all of our metrics (and monitoring, and logging!). AWS metrics, database metrics, Kubernetes, application—all of it ends up in Datadog. We know how to use the tool and generally find it pleasant. However, you &lt;strong&gt;pay&lt;/strong&gt; for that. It&amp;#39;s not like this is a hot take: Datadog is ✨expensive✨.&lt;/p&gt;
&lt;p&gt;The thing we pay the most for is definitely &lt;a href=&quot;https://docs.datadoghq.com/metrics/custom_metrics/&quot;&gt;custom metrics&lt;/a&gt;, that is, unique combinations of metric name and tag values. A classic example:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://andrealeopardi.com/posts/beam-metrics-in-clickhouse/hand-drawn-metrics.png&quot; alt=&quot;Hand drawing of a &amp;quot;fetched_buckets&amp;quot; metric flowing into two tags, &amp;quot;env:staging&amp;quot; and &amp;quot;env:production&amp;quot;. Each of those then in turn flows into three &amp;quot;account_type:...&amp;quot; tags. At the bottom, all this equals one metric per tag combination. This shows how the cardinality of metrics multiplies by the tags.&quot; title=&quot;&quot;/&gt;&lt;/p&gt;
&lt;p&gt;That&amp;#39;s only a single &lt;em&gt;counter&lt;/em&gt; metric, &lt;code&gt;fetched_buckets&lt;/code&gt;. However, tag cardinality means you have:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Two &lt;code&gt;env&lt;/code&gt; values&lt;/li&gt;
&lt;li&gt;Three &lt;code&gt;account_type&lt;/code&gt; values&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That means 2 × 3 = 6 unique custom metrics! You can easily see how that scales. If you add a &lt;code&gt;backend&lt;/code&gt; tag to determine the storage buckets were fetched from, with say two values, you&amp;#39;re essentially multiplying that &lt;code&gt;6&lt;/code&gt; by yet another &lt;code&gt;2&lt;/code&gt;, landing on &lt;code&gt;12&lt;/code&gt; custom metrics now. Really easy to explode.&lt;/p&gt;
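&lt;p&gt;The arithmetic here is just a product over per-tag cardinalities. A throwaway sketch (tag counts from the example above):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Unique custom metrics = product of each tag&amp;#39;s number of values.
tag_cardinalities = %{env: 2, account_type: 3, backend: 2}

tag_cardinalities
|&amp;gt; Map.values()
|&amp;gt; Enum.product()
#=&amp;gt; 12&lt;/code&gt;&lt;/pre&gt;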
&lt;h2&gt;High Cardinality Metrics&lt;/h2&gt;
&lt;p&gt;It&amp;#39;s not really these &amp;quot;low-cardinality&amp;quot; metrics that cause issues. You&amp;#39;ll end up with thousands of metrics this way, but that&amp;#39;s quite a normal range for any Datadog-like service.&lt;/p&gt;
&lt;p&gt;The problem is &lt;em&gt;high-cardinality&lt;/em&gt; metrics. For example, the Datadog agent running on our Kubernetes nodes attaches a whole slew of tags to metrics, including a &lt;code&gt;kube_node&lt;/code&gt; tag for the cluster node that the metric comes from. If you have many nodes, or scale nodes up and down, &lt;em&gt;every&lt;/em&gt; metric goes up in cardinality.&lt;/p&gt;
&lt;p&gt;For us, one of the main causes of problems was the &lt;code&gt;pod_name&lt;/code&gt; tag, which we use to tag some of our metrics with the Kubernetes pod they come from. Application-level metrics don&amp;#39;t generally benefit from such granularity, but there&amp;#39;s a whole set of metrics that absolutely does: ones about the BEAM. Each pod runs its own Erlang node. Knowing the total number of running processes &lt;em&gt;without&lt;/em&gt; slicing that by Erlang node is meaningless: great, you know you have 800k processes running on your cluster, but you&amp;#39;ll never know if a pod runs amok and makes for a big chunk of that number.&lt;/p&gt;
&lt;p&gt;The dilemma? If we have 100 pods running at any given time, that&amp;#39;s every BEAM metric times 100. The explosion, though, is that this repeats every time we deploy! New versions of the code cause new Docker images, which cause new pod names, which means 100 more unique combinations of each metric that is tagged with &lt;code&gt;pod_name&lt;/code&gt;. Boom.&lt;/p&gt;

&lt;p&gt;Data like this, however, is paramount for a platform team operating a system like this. It&amp;#39;s critical during incidents, to spot trends and patterns, and to understand what our system is doing. We don&amp;#39;t want to give it up. If anything, we want &lt;em&gt;more&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;For example, recently we had to deal with some nodes running hot. BEAM&amp;#39;s runtime inspection capabilities are fantastic. We jumped on the problematic boxes, fired up IEx, and snooped around using &lt;a href=&quot;https://hexdocs.pm/recon/overview.html&quot;&gt;&lt;code&gt;recon&lt;/code&gt;&lt;/a&gt; to our hearts&amp;#39; content. Nothing better than being able to do&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;biggest_offenders = :recon.proc_window(:reductions, 5, _ms = 1000)

# Now just get all the info + stacktrace to get some more context on these processes:
Enum.map(biggest_offenders, fn {pid, _reductions, _some_info} -&amp;gt;
  {pid, Process.info(pid)}
end)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;on a &lt;em&gt;live&lt;/em&gt;, running system and just get a peek at what&amp;#39;s going on in there.&lt;/p&gt;
&lt;p&gt;This is invaluable, but it&amp;#39;s not the solution for &lt;strong&gt;constantly&lt;/strong&gt; monitoring our systems. We have no historical data on, say, the five processes consuming the most memory at any given time.&lt;/p&gt;
&lt;h2&gt;Enter ClickHouse&lt;/h2&gt;
&lt;p&gt;ClickHouse is a &lt;em&gt;really fast&lt;/em&gt;, columnar, OLAP database. It&amp;#39;s a workhorse, and we use it to power product features, like segmentation or audit logs, and analytics. The most recent thing we were able to trivially ship thanks to ClickHouse was &lt;a href=&quot;https://email.info&quot;&gt;email.info&lt;/a&gt;, a breakdown of email provider performance: basically, we had ClickHouse chew through our provider delivery data and spit out analytics from that. It might not be that impressive on the surface, but I was amazed by &lt;strong&gt;how easy&lt;/strong&gt; it was to go from idea to live page.&lt;/p&gt;
&lt;p&gt;Back to our BEAM analytics. My awesome coworker &lt;a href=&quot;https://www.linkedin.com/in/victor-lymar/&quot;&gt;Victor&lt;/a&gt;, who got us all into ClickHouse in the first place, kind of opened a Pandora&amp;#39;s box here: what if we store this high-cardinality, high-volume internal telemetry data &lt;em&gt;in ClickHouse&lt;/em&gt;?&lt;/p&gt;
&lt;p&gt;This makes &lt;em&gt;so much&lt;/em&gt; sense. We can periodically sample the top-&lt;code&gt;n&lt;/code&gt; processes based on some metric (memory usage, reductions, mailbox length) and dump a bunch of information about them in a ClickHouse table. We created a new &lt;code&gt;internal_telemetry&lt;/code&gt; database and got to work. &amp;quot;Got to work&amp;quot; makes it sound way worse than it was: the whole thing was done in a few hours.&lt;/p&gt;
&lt;h3&gt;Table Design&lt;/h3&gt;
&lt;p&gt;We didn&amp;#39;t spend too much time thinking about the table structure here. This is internal data that only &lt;em&gt;we&lt;/em&gt; will be querying, so query performance is not the most important factor.&lt;/p&gt;

&lt;p&gt;This is where we landed for the first iteration:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;CREATE TABLE top_processes (
    metric LowCardinality(String),
    deployment LowCardinality(String),
    pod_name LowCardinality(String),
    timestamp DateTime64(3, &amp;#39;UTC&amp;#39;),
    pid String,
    registered_name String DEFAULT &amp;#39;&amp;#39;,
    current_function String DEFAULT &amp;#39;&amp;#39;,
    initial_call String DEFAULT &amp;#39;&amp;#39;,
    reductions UInt64,
    message_queue_len UInt32,
    memory UInt64,
    label String,
    memory_details Map(String, UInt128)
)
ENGINE = MergeTree
PARTITION BY toMonday(timestamp)
ORDER BY (metric, deployment, pod_name, timestamp)
TTL toDate(timestamp) + toIntervalDay(30)
SETTINGS ttl_only_drop_parts = 1;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Some notes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;metric&lt;/code&gt; is the metric we &amp;quot;found&amp;quot; this process through. For example, &lt;code&gt;metric = &amp;#39;memory&amp;#39;&lt;/code&gt; means that this row represents a process that was one of the top-n processes by memory usage at the time of scanning the system.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;deployment&lt;/code&gt; is our Kubernetes deployment.&lt;/li&gt;
&lt;li&gt;The TTL is there because we have not felt the need for this level of detailed metrics to be retained for months. ClickHouse takes care of dropping partitions older than one month here, so the whole thing doesn&amp;#39;t explode in storage size. The funny thing, though, is this: storage is &lt;em&gt;very&lt;/em&gt; cheap, and we mostly query over short time ranges; we could bump the &lt;code&gt;TTL&lt;/code&gt; to be much longer without paying much more.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;memory_details&lt;/code&gt; is a map so we can easily add to it later on without having to change the schema. This felt like the right compromise.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The rest of the columns should be self-explanatory.&lt;/p&gt;
&lt;p&gt;The ordering key (&lt;code&gt;ORDER BY&lt;/code&gt;) is central to ClickHouse query performance. It lets ClickHouse find data quickly and scan as little as possible. A good rule of thumb is to go with increasing cardinality, which is exactly what we did here. The query patterns we expect are mostly:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;For a single deployment, as it rarely makes sense to look at metrics aggregated over &lt;em&gt;different applications&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Most often for a single pod, to spot things going wrong within a given Erlang node.&lt;/li&gt;
&lt;li&gt;Focused on a window of time, thus the ordering by &lt;code&gt;timestamp&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;An ordering key like this doesn&amp;#39;t mean you cannot &lt;em&gt;skip&lt;/em&gt; parts of it in your queries—it just means doing that results in less-optimized queries and data scans. If we ever find ourselves, say, querying all &lt;code&gt;metric&lt;/code&gt;s across all &lt;code&gt;pod_name&lt;/code&gt;s, we could easily deploy a materialized view that writes this data into a separate table optimized for that query pattern. &lt;strong&gt;Storage is cheap&lt;/strong&gt;.&lt;/p&gt;
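&lt;p&gt;Such a materialized view could look roughly like this (a hypothetical sketch; the second table and view names are made up):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;-- Hypothetical target table, re-sorted for pod-first query patterns.
CREATE TABLE top_processes_by_pod (
    pod_name LowCardinality(String),
    timestamp DateTime64(3, &amp;#39;UTC&amp;#39;),
    metric LowCardinality(String),
    memory UInt64
)
ENGINE = MergeTree
ORDER BY (pod_name, timestamp);

-- The view copies matching columns into the second table on every insert.
CREATE MATERIALIZED VIEW top_processes_by_pod_mv
TO top_processes_by_pod AS
SELECT pod_name, timestamp, metric, memory
FROM top_processes;&lt;/code&gt;&lt;/pre&gt;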
&lt;h3&gt;Periodically Dumping System State&lt;/h3&gt;
&lt;p&gt;Honestly, I was conflicted on whether I should even include this section. Nothing about this is new to anyone doing Erlang/Elixir!&lt;/p&gt;
&lt;p&gt;We&amp;#39;re just using the lovely &lt;a href=&quot;https://github.com/beam-telemetry/telemetry_poller&quot;&gt;&lt;code&gt;telemetry_poller&lt;/code&gt;&lt;/a&gt; to collect and dump system state every few seconds. The child spec for it goes like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;@doc &amp;quot;&amp;quot;&amp;quot;
Returns a child spec for a `telemetry_poller` that periodically
logs top processes.
&amp;quot;&amp;quot;&amp;quot;
def child_spec([] = _opts) do
  Supervisor.child_spec(
    {:telemetry_poller,
     measurements: [{__MODULE__, :measure_and_persist, []}],
     period: to_timeout(second: 5),
     init_delay: to_timeout(second: 5),
     name: :top_processes_poller},
    id: __MODULE__
  )
end&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Nothing fancy, at all. &lt;code&gt;measure_and_persist/0&lt;/code&gt; looks like what you probably expect it to look like:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;@doc &amp;quot;&amp;quot;&amp;quot;
Collects top process data and writes it to ClickHouse.
&amp;quot;&amp;quot;&amp;quot;
def measure_and_persist do
  now = DateTime.utc_now()
  pod_name = System.get_env(&amp;quot;HOSTNAME&amp;quot;, &amp;quot;unknown&amp;quot;)
  deployment = System.get_env(&amp;quot;DEPLOYMENT&amp;quot;, &amp;quot;&amp;lt;redacted&amp;gt;&amp;quot;)

  rows =
    Enum.flat_map(@metrics, fn metric -&amp;gt;
      metric
      |&amp;gt; :recon.proc_count(@top_n)
      |&amp;gt; Enum.map(fn {pid, _value, _recon_info} -&amp;gt;
        build_row(pid, metric, now, pod_name, deployment)
      end)
      |&amp;gt; Enum.reject(&amp;amp;is_nil/1)
    end)

  if rows != [] do
    InternalTelemetryRepo.insert_all(TopProcess, rows)
  end

  :ok
end&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We use &lt;a href=&quot;https://github.com/plausible/ch&quot;&gt;Ch&lt;/a&gt; as our ClickHouse driver, and &lt;a href=&quot;https://github.com/plausible/ecto_ch&quot;&gt;&lt;code&gt;ecto_ch&lt;/code&gt;&lt;/a&gt; to integrate it with Ecto.&lt;/p&gt;
&lt;p&gt;That&amp;#39;s the whole infrastructure for capturing this data, right there. &lt;code&gt;build_row/5&lt;/code&gt; is just a bunch of formatting and &lt;a href=&quot;https://hexdocs.pm/elixir/Process.html#info/2&quot;&gt;&lt;code&gt;Process.info/2&lt;/code&gt;&lt;/a&gt; calls. That&amp;#39;s it!&lt;/p&gt;
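&lt;p&gt;For the curious, a hedged sketch of what a &lt;code&gt;build_row/5&lt;/code&gt; along these lines might look like (all names here are hypothetical; the real implementation is not shown in this post):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Hypothetical sketch; column names follow the ClickHouse table above.
defmodule TopProcessRow do
  @keys [:registered_name, :current_function, :initial_call,
         :reductions, :message_queue_len, :memory]

  def build_row(pid, metric, now, pod_name, deployment) do
    case Process.info(pid, @keys) do
      # The process may have exited between sampling and inspection.
      nil -&amp;gt;
        nil

      info -&amp;gt;
        %{
          metric: to_string(metric),
          deployment: deployment,
          pod_name: pod_name,
          timestamp: now,
          pid: inspect(pid),
          # Unregistered processes report [] here; fall back to an empty string.
          registered_name: if(is_atom(info[:registered_name]), do: to_string(info[:registered_name]), else: &amp;quot;&amp;quot;),
          current_function: inspect(info[:current_function]),
          initial_call: inspect(info[:initial_call]),
          reductions: info[:reductions],
          message_queue_len: info[:message_queue_len],
          memory: info[:memory]
        }
    end
  end
end&lt;/code&gt;&lt;/pre&gt;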
&lt;h3&gt;In the Real World&lt;/h3&gt;
&lt;p&gt;This has been in production for just 10 days. Our top-n is &lt;code&gt;20&lt;/code&gt;: that is, we sample the top-20 processes over three metrics: reductions, message queue length, and memory. We sample every 5 seconds. Our total throughput in this table is:&lt;/p&gt;
&lt;p&gt;$$
\frac{top{\text -}n \times metrics \times pod\_count}{sample\_interval} = \frac{20 \times 3 \times 100}{5s} = 1200 \space rows/s
$$&lt;/p&gt;
&lt;p&gt;Easy peasy for good ol&amp;#39; ClickHouse. Some stats from the live table:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;We have ~607M rows in &lt;code&gt;top_processes&lt;/code&gt;. Spare change for ClickHouse.&lt;/li&gt;
&lt;li&gt;Compressed size of this table in object storage is just 5.64 GB (134.66 GB uncompressed).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Now for the fun part: queries! Let&amp;#39;s just look at a couple of fun queries we were able to issue over the past few days.&lt;/p&gt;
&lt;h4&gt;Frequent Fliers&lt;/h4&gt;
&lt;p&gt;Find the processes that appear most often in the top-n processes by memory, over the past 24 hours. This could help find processes that consistently consume a lot of memory (rather than spiking once in a while).&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;SELECT
    concat(&amp;#39;`&amp;#39;, registered_name, &amp;#39;`&amp;#39;) AS name,
    count() AS appearances,
    formatReadableSize(avg(memory)) AS avg_memory,
    formatReadableSize(max(memory)) AS max_memory
FROM top_processes
WHERE
  metric = &amp;#39;memory&amp;#39;
  AND deployment = {deployment:LowCardinality(String)}
  AND registered_name != &amp;#39;&amp;#39;
  AND timestamp &amp;gt;= now() - INTERVAL 24 HOUR
GROUP BY registered_name
ORDER BY appearances DESC
LIMIT 10
FORMAT MARKDOWN&lt;/code&gt;&lt;/pre&gt;&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;name&lt;/th&gt;&lt;th&gt;appearances&lt;/th&gt;&lt;th&gt;avg_memory&lt;/th&gt;&lt;th&gt;max_memory&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;tzdata_release_updater&lt;/code&gt;&lt;/td&gt;&lt;td&gt;362659&lt;/td&gt;&lt;td&gt;8.25 MiB&lt;/td&gt;&lt;td&gt;8.55 MiB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;Elixir.Sentry.Sources&lt;/code&gt;&lt;/td&gt;&lt;td&gt;362659&lt;/td&gt;&lt;td&gt;13.18 MiB&lt;/td&gt;&lt;td&gt;13.18 MiB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;code_server&lt;/code&gt;&lt;/td&gt;&lt;td&gt;362659&lt;/td&gt;&lt;td&gt;26.52 MiB&lt;/td&gt;&lt;td&gt;51.77 MiB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;application_controller&lt;/code&gt;&lt;/td&gt;&lt;td&gt;362659&lt;/td&gt;&lt;td&gt;10.23 MiB&lt;/td&gt;&lt;td&gt;10.29 MiB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;ldclient_event_process_server_default&lt;/code&gt;&lt;/td&gt;&lt;td&gt;362657&lt;/td&gt;&lt;td&gt;54.69 MiB&lt;/td&gt;&lt;td&gt;283.01 MiB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;Elixir.&amp;lt;redacted&amp;gt;.K8sServer&lt;/code&gt;&lt;/td&gt;&lt;td&gt;352357&lt;/td&gt;&lt;td&gt;10.03 MiB&lt;/td&gt;&lt;td&gt;12.00 MiB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;ldclient_event_server_default&lt;/code&gt;&lt;/td&gt;&lt;td&gt;349761&lt;/td&gt;&lt;td&gt;12.82 MiB&lt;/td&gt;&lt;td&gt;39.39 MiB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;&amp;lt;redacted&amp;gt;&lt;/code&gt;&lt;/td&gt;&lt;td&gt;286100&lt;/td&gt;&lt;td&gt;5.10 MiB&lt;/td&gt;&lt;td&gt;7.88 MiB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;&amp;lt;redacted&amp;gt;&lt;/code&gt;&lt;/td&gt;&lt;td&gt;275572&lt;/td&gt;&lt;td&gt;4.65 MiB&lt;/td&gt;&lt;td&gt;11.58 MiB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;&amp;lt;redacted&amp;gt;&lt;/code&gt;&lt;/td&gt;&lt;td&gt;242255&lt;/td&gt;&lt;td&gt;5.70 MiB&lt;/td&gt;&lt;td&gt;12.80 MiB&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;p&gt;Just to give you an idea: this executed in 0.238s, scanning 7,389,364 rows (516.95 MB of data) 🤯.&lt;/p&gt;
&lt;h4&gt;Maximum Consumer Memory&lt;/h4&gt;
&lt;p&gt;This is useful to spot if there are any pods in a given deployment where the process consuming the most memory is consuming &lt;em&gt;more&lt;/em&gt; than the top memory hog in the other pods.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;SELECT
    toStartOfMinute(timestamp) AS minute,
    pod_name,
    max(memory) peak_memory
FROM top_processes
WHERE metric = &amp;#39;memory&amp;#39;
  AND deployment = {deployment:LowCardinality(String)}
  AND timestamp &amp;gt;= now() - INTERVAL 3 HOUR
GROUP BY minute, pod_name
ORDER BY minute;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;ClickHouse Cloud has decent charting functionality, and this type of query spits out very easy-to-spot-stuff-on charts. Here&amp;#39;s what it looks like, over time, broken down by pod (with no runaway processes):&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://andrealeopardi.com/posts/beam-metrics-in-clickhouse/memory-chart-example.png&quot; alt=&quot;A screenshot of a line chart (in ClickHouse Cloud) showing the above query visualized.&quot; title=&quot;&quot;/&gt;&lt;/p&gt;
&lt;p&gt;Also, I can&amp;#39;t help but share this: even when forcing the query to avoid any caches (&lt;code&gt;SETTINGS use_query_cache = false&lt;/code&gt;), this query returned in 42 &lt;em&gt;milliseconds&lt;/em&gt;, reading 1,131,629 rows (21.51 MB of data). It never gets old.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;I’m aware that using ClickHouse for this type of stuff isn&amp;#39;t groundbreaking. ClickHouse is mostly sold as an &lt;em&gt;analytics database&lt;/em&gt;, so much so that there&amp;#39;s a whole product called &lt;a href=&quot;https://clickhouse.com/clickstack&quot;&gt;ClickStack&lt;/a&gt; to store and query OpenTelemetry data. This is truly ClickHouse&amp;#39;s bread and butter.&lt;/p&gt;
&lt;p&gt;What I described in this post is just what I found to be a perfect use case. BEAM is easy to inspect (or &amp;quot;self&amp;quot;-inspect) at runtime, making this whole thing trivial.&lt;/p&gt;
&lt;p&gt;Just to recap, we talked about:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Why high-cardinality metrics are hard to store in metric products (like Datadog in our case).&lt;/li&gt;
&lt;li&gt;How we overcame this by storing lots of detailed data about the runtime of the BEAM in ClickHouse.&lt;/li&gt;
&lt;li&gt;A couple of example queries we ran that show the potential use cases of something like this.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I did not explicitly close the cost loop here because storage cost with ClickHouse, at least at this scale, is &lt;strong&gt;negligible&lt;/strong&gt;. At the time of writing, ClickHouse charges something like USD$25/TB/mo in &lt;code&gt;us-east-*&lt;/code&gt;. Even if we grow the size of our stored internal metrics 100 times what it is today, we&amp;#39;ll be storing half a terabyte of data and paying less than a single ChatGPT Plus subscription. There are obviously egress and compute costs to factor in, but those are only relevant when we &lt;em&gt;query&lt;/em&gt; the data; with five engineers on the platform team only having to look at this data once in a while, that cost amounts to zero for all intents and purposes.&lt;/p&gt;
&lt;p&gt;I want to give another shout-out to my colleague Victor, who essentially came up with this whole idea.&lt;/p&gt;
&lt;p&gt;Thanks for reading!&lt;/p&gt;</content:encoded>
</item>
<item>
<title>Upgrading Amignosis: Phoenix and Elixir with Claude Code</title>
<link>https://petros.blog/2026/02/15/upgrading-an-elixir-phoenix-app-using-tidewave/</link>
<enclosure type="image/jpeg" length="0" url="https://i0.wp.com/petros.blog/wp-content/uploads/2026/02/image-1.png?fit=1024%2C768&amp;ssl=1"></enclosure>
<guid isPermaLink="false">k8CxLXnR_-H9bLPdHYqYdx1oyABFPUUzzgFtBA==</guid>
<pubDate>Tue, 03 Mar 2026 08:19:26 +0000</pubDate>
<description>Discover how I upgraded my Phoenix LiveView app using Tidewave and Claude Code, overcoming challenges and optimizing the process.</description>
<content:encoded>&lt;p&gt;The &lt;a href=&quot;https://amignosis.com&quot;&gt;website&lt;/a&gt; of my company, Amignosis, is a simple Phoenix LiveView web app. It was on Phoenix 1.7.21 and Elixir 1.18, and I wanted to upgrade it to Phoenix 1.8.3 and Elixir 1.19. So I decided to try &lt;a href=&quot;https://tidewave.ai&quot;&gt;Tidewave&lt;/a&gt; with Claude Code to do it for me.&lt;/p&gt;



&lt;p&gt;Here’s a video showing the process, but read ahead if you want to get the gist of it.&lt;/p&gt;






&lt;p&gt;I asked Claude Code to read the Phoenix CHANGELOGs online. Then I told it that I would feed it the diffs between 1.7.21 and 1.8.3 one by one.&lt;/p&gt;



&lt;p&gt;The diffs came from &lt;a href=&quot;https://phoenixdiff.org/compare/1.7.21%20--binary-id...1.8.3%20--binary-id&quot;&gt;phoenixdiff.org&lt;/a&gt;, a very handy site that shows you what’s changed between Phoenix versions. It compares the output of &lt;code&gt;mix phx.new&lt;/code&gt; for the two versions, as if you had generated both projects by hand.&lt;/p&gt;
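&lt;p&gt;For reference, the same kind of diff can also be rebuilt locally. A rough sketch (it assumes &lt;code&gt;mix&lt;/code&gt; and &lt;code&gt;diff&lt;/code&gt; are installed; the directory and file names are made up for illustration):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hypothetical sketch: rebuild a phoenixdiff-style diff as a local file
for vsn &amp;lt;- [&amp;quot;1.7.21&amp;quot;, &amp;quot;1.8.3&amp;quot;] do
  app = &amp;quot;app_#{String.replace(vsn, &amp;quot;.&amp;quot;, &amp;quot;_&amp;quot;)}&amp;quot;
  # Install the phx.new generator for this version, then generate a bare project
  {_, 0} = System.cmd(&amp;quot;mix&amp;quot;, [&amp;quot;archive.install&amp;quot;, &amp;quot;hex&amp;quot;, &amp;quot;phx_new&amp;quot;, vsn, &amp;quot;--force&amp;quot;])
  {_, 0} = System.cmd(&amp;quot;mix&amp;quot;, [&amp;quot;phx.new&amp;quot;, app, &amp;quot;--no-install&amp;quot;])
end

# One big diff that an agent can read as a plain file
{diff, _} = System.cmd(&amp;quot;diff&amp;quot;, [&amp;quot;-ru&amp;quot;, &amp;quot;app_1_7_21&amp;quot;, &amp;quot;app_1_8_3&amp;quot;])
File.write!(&amp;quot;phoenix_1.7_to_1.8.diff&amp;quot;, diff)
&lt;/code&gt;&lt;/pre&gt;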



&lt;p&gt;The problem with that site is how it presents the diffs: Claude Code can’t read the content as rendered. So I cloned the repository of that project and enhanced it so that I could copy each diff to the clipboard.&lt;/p&gt;



&lt;p&gt;As I mentioned above, my first strategy was to feed it each diff one by one. During the process, I realized a better strategy: first save all the diffs locally, then point Claude Code at them. This way, it has the whole picture.&lt;/p&gt;



&lt;p&gt;What worked well is that Claude Code (via Tidewave) applied the diffs, tweaking them when needed to reference my specific project’s domain names.&lt;/p&gt;



&lt;h3&gt;Hiccups&lt;/h3&gt;



&lt;p&gt;The process worked quite well. I had a few hiccups though.&lt;/p&gt;



&lt;h4&gt;Tidewave got stuck in a loop due to a huge prompt&lt;/h4&gt;



&lt;p&gt;The first one was that at some point, one of the diffs was huge. I pasted it into the Tidewave chat prompt and hit enter. A message appeared saying the prompt was too big to process, and at that point Tidewave got stuck in a loop: no matter how small a message I typed, I got the same error back, and I couldn’t find a way out of that state. Starting a new session meant losing the context, but that’s what I did anyway. The already-applied diffs were on disk, I described from scratch what we were working towards, and it continued without any issues.&lt;/p&gt;



&lt;p&gt;Next time, I will avoid pasting big text and instead point it to a local file with that content, so it can apply it directly.&lt;/p&gt;



&lt;h4&gt;Claude Code decided to skip daisyUI&lt;/h4&gt;



&lt;p&gt;A big change in Phoenix from 1.7.x to 1.8.x was the introduction of the &lt;a href=&quot;https://daisyui.com/?lang=en&quot;&gt;daisyUI&lt;/a&gt; library. It’s not mandatory to use it, but some of the default files generated assume you are using it anyway.&lt;/p&gt;



&lt;p&gt;Because I was feeding the diffs one by one, Claude Code didn’t have the whole context, so it decided to skip daisyUI altogether.&lt;/p&gt;



&lt;p&gt;At some point, I asked it to embrace the introduction of daisyUI. We would handle any styling changes it would cause later. My app is very small (just two pages) so it wouldn’t be that hard.&lt;/p&gt;



&lt;p&gt;After some back and forth, we ended up with a running version. It had the same look and feel as my earlier version.&lt;/p&gt;



&lt;h4&gt;Tidewave couldn’t refresh its shell context&lt;/h4&gt;



&lt;p&gt;This wasn’t a huge deal, but it still created a bit of friction. At some point, as Claude Code was applying the diffs, we started seeing warnings because the project was running on Elixir 1.18 while Phoenix &amp;gt;= 1.8 expects Elixir 1.19.&lt;/p&gt;



&lt;p&gt;I am using &lt;a href=&quot;https://asdf-vm.com&quot;&gt;asdf&lt;/a&gt; to manage Elixir/OTP versions, and my current Tidewave session couldn’t refresh its terminal context to the new version, despite us switching &lt;code&gt;asdf&lt;/code&gt; to the new versions locally. After some back and forth, we managed to get unstuck; I don’t remember whether I started a new chat in Tidewave or not.&lt;/p&gt;



&lt;h3&gt;Conclusion&lt;/h3&gt;



&lt;p&gt;I think next time I can help Claude Code finish this faster. Two strategies should help here:&lt;/p&gt;



&lt;ol&gt;
&lt;li&gt;Don’t leave a project on an older version for too long&lt;/li&gt;



&lt;li&gt;Feed Claude Code &lt;em&gt;all&lt;/em&gt; the diffs at once instead of one by one&lt;/li&gt;
&lt;/ol&gt;








&lt;h3&gt;Here’s what I am doing&lt;/h3&gt;


&lt;p&gt;At &lt;a href=&quot;https://amignosis.com&quot;&gt;Amignosis&lt;/a&gt;, I pour my heart and skill into crafting slowly brewed software, one thoughtful line at a time. I am a craftsman in a world of complexity and low-quality solutions. I am a &lt;a href=&quot;https://amignosis.com/stars&quot;&gt;shoemaker&lt;/a&gt;. I take the time to create simple, timeless software built to last. Check what I am doing &lt;a href=&quot;https://petros.blog/now/&quot;&gt;now&lt;/a&gt; and &lt;a href=&quot;mailto:petros@amignosis.com&quot;&gt;talk to me&lt;/a&gt;.&lt;/p&gt;


</content:encoded>
</item>
<item>
<title>PhoenixPress: Compile-Time SEO for Phoenix Apps</title>
<link>https://variantsystems.io/blog/phoenix-press-open-source/</link>
<guid isPermaLink="false">TYGtwwuwQYahrMhVDGv1w6kMUuZlHr5BPAGwJA==</guid>
<pubDate>Mon, 02 Mar 2026 05:22:31 +0000</pubDate>
<description>We open-sourced PhoenixPress — sitemaps, robots.txt, and RSS feeds for Phoenix, generated at compile time with zero runtime overhead.</description>
<content:encoded>We open-sourced PhoenixPress — sitemaps, robots.txt, and RSS feeds for Phoenix, generated at compile time with zero runtime overhead.</content:encoded>
</item>
<item>
<title>Building a Calendar in Phoenix LiveView</title>
<link>https://variantsystems.io/blog/building-production-calendar-phoenix-liveview/</link>
<guid isPermaLink="false">DRgXPme7ZP_z107HYYZUdnyW7cCpZ4sZYpHf_Q==</guid>
<pubDate>Mon, 02 Mar 2026 05:22:30 +0000</pubDate>
<description>How we built a full-featured calendar in LiveView — data model, recurring appointments, multi-user views, drag-and-drop, and performance.</description>
<content:encoded>How we built a full-featured calendar in LiveView — data model, recurring appointments, multi-user views, drag-and-drop, and performance.</content:encoded>
</item>
<item>
<title>Process-Based Concurrency: Why BEAM and OTP Keep Being Right</title>
<link>https://variantsystems.io/blog/beam-otp-process-concurrency</link>
<enclosure type="image/jpeg" length="0" url="https://variantsystems.io/_astro/beam-otp-process-concurrency.BVJ8LYHB.png"></enclosure>
<guid isPermaLink="false">W5RCqRGdrvsBYr9TT9EHQhIXkWURmorGLS7XKw==</guid>
<pubDate>Mon, 02 Mar 2026 05:22:18 +0000</pubDate>
<description>A first-principles guide to process-based concurrency — what makes BEAM different, how OTP encodes resilience, and why everyone keeps reinventing it.</description>
<content:encoded>&lt;p&gt;&lt;img src=&quot;https://variantsystems.io/_astro/beam-otp-process-concurrency.BVJ8LYHB_Z1yoND8.webp&quot; alt=&quot;Process-based concurrency with BEAM and OTP&quot; title=&quot;&quot;/&gt;&lt;/p&gt;&lt;p&gt;Every few months, someone in the AI or distributed systems space announces a new framework for running concurrent, stateful agents. It has isolated state. Message passing. A supervisor that restarts things when they fail. The BEAM languages communities watch, nod, and go back to work.&lt;/p&gt;&lt;p&gt;This keeps happening because process-based concurrency solves a genuinely hard problem, and the BEAM virtual machine has been solving it since 1986. Not as a library. Not as a pattern you adopt. As the runtime itself.&lt;/p&gt;&lt;p&gt;Dillon Mulroy &lt;a href=&quot;https://x.com/dillon_mulroy/status/2024543078745244080&quot;&gt;put it plainly&lt;/a&gt;:&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://variantsystems.io/_astro/dillon-mulroy-beam-tweet.CGiGgRJV_1NDwak.webp&quot; alt=&quot;Dillon Mulroy tweet: &amp;quot;pretty sure we&amp;amp;#x27;re all just recreating OTP and the BEAM. it&amp;amp;#x27;s actors all the way down.&amp;quot; — 30.9K views, 455 likes&quot; title=&quot;&quot;/&gt;&lt;/p&gt;&lt;p&gt;Thirty thousand people saw that and a lot of them felt it. The Python AI ecosystem is building agent frameworks that independently converge on the same architecture — isolated processes, message passing, supervision hierarchies, fault recovery. The patterns aren’t similar to OTP by coincidence. They’re similar because the problem demands this shape.&lt;/p&gt;&lt;p&gt;This post isn’t the hot take about why Erlang was right. It’s the guide underneath that take. We’ll start from first principles — what concurrency actually means, why shared state breaks everything, and how processes change the game. 
By the end, you’ll understand why OTP’s patterns keep getting reinvented and why the BEAM runtime makes them work in ways other platforms can’t fully replicate.&lt;/p&gt;&lt;p&gt;We write Elixir professionally. Our largest production system — a &lt;a href=&quot;https://variantsystems.io/work/clinic-management-platform&quot;&gt;healthcare SaaS platform&lt;/a&gt; — runs on 80,000+ lines of Elixir handling real-time scheduling, AI-powered clinical documentation, and background job orchestration. This isn’t theoretical for us. But we’ll explain it like we’re explaining it to ourselves when we first encountered it.&lt;/p&gt;&lt;h2&gt;The concurrency problem, stated plainly&lt;/h2&gt;&lt;p&gt;Your program needs to do multiple things at once. Maybe it’s handling thousands of web requests simultaneously. Maybe it’s running AI agents that each maintain their own conversation state. Maybe it’s processing audio transcriptions while serving a real-time dashboard.&lt;/p&gt;&lt;p&gt;The hardware can do this. Modern CPUs have multiple cores. The question is how your programming model lets you use them.&lt;/p&gt;&lt;p&gt;There are two fundamental approaches.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Shared state with locks.&lt;/strong&gt; Multiple threads access the same memory. You prevent corruption with mutexes, semaphores, and locks. This is what most languages do — Java, C++, Go (with goroutines, but shared memory is still the default model), Python (with the GIL making it worse), Rust (with the borrow checker making it safer).&lt;/p&gt;&lt;p&gt;The problem with shared state isn’t that it doesn’t work. It’s that it works until it doesn’t. Race conditions are the hardest bugs to reproduce, the hardest to test for, and the hardest to reason about. The more concurrent your system gets, the more lock contention slows everything down. 
And a single corrupted piece of shared memory can cascade through the entire system.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Isolated state with message passing.&lt;/strong&gt; Each concurrent unit has its own memory. The only way to communicate is by sending messages. No shared memory, no locks, no races.&lt;/p&gt;&lt;p&gt;This is the &lt;a href=&quot;https://en.wikipedia.org/wiki/Actor_model&quot;&gt;actor model&lt;/a&gt;. Carl Hewitt proposed it in 1973. Erlang implemented it as a runtime in 1986. Every few years, the rest of the industry rediscovers it.&lt;/p&gt;&lt;h2&gt;What a “process” means on BEAM&lt;/h2&gt;&lt;p&gt;When BEAM programmers say “process,” they don’t mean an operating system process. OS processes are heavy — megabytes of memory, expensive to create, expensive to context-switch. They don’t mean threads either, which share memory and need synchronization. And they don’t mean green threads or coroutines, which are lighter but still typically share a heap and lack true isolation.&lt;/p&gt;&lt;p&gt;A BEAM process is something different:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;strong&gt;~2KB of memory&lt;/strong&gt; at creation. You can spawn millions of them on a single machine.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Own heap, own stack, own garbage collector.&lt;/strong&gt; When a process is collected, nothing else pauses. No stop-the-world GC events affecting the entire system.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Preemptively scheduled.&lt;/strong&gt; The BEAM scheduler gives each process a budget of approximately 4,000 “reductions” (roughly, function calls) before switching to the next one. No process can hog the CPU. This happens at the VM level — you can’t opt out of it.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Completely isolated.&lt;/strong&gt; A process cannot access another process’s memory. Period. The only way to interact is by sending a message.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;This last point is the one that changes how you think about software. 
In most languages, when something goes wrong in one part of your program, the blast radius is unpredictable. A null pointer in a thread can corrupt shared state that other threads depend on. An unhandled exception in a Node.js async handler can crash the entire process — every connection, every user, everything.&lt;/p&gt;&lt;p&gt;On BEAM, the blast radius of a failure is exactly one process. Always.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;# Spawn a process that will crash
spawn(fn -&amp;gt;
  # This process does some work...
  raise &amp;quot;something went wrong&amp;quot;
  # This process dies. Nothing else is affected.
end)
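
# If another process needs to notice this crash, it can use a monitor, e.g.:
#   {pid, ref} = spawn_monitor(fn -&amp;gt; raise &amp;quot;boom&amp;quot; end)
# and it will then get a {:DOWN, ref, :process, pid, reason} message in its mailbox.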

# This code continues running, unaware and unharmed
IO.puts(&amp;quot;Still here.&amp;quot;)&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This isn’t a try/catch hiding the error. The process that crashed is gone — its memory is reclaimed, its state is released. Everything else keeps running. The question is: who notices, and what happens next?&lt;/p&gt;&lt;h2&gt;Message passing and mailboxes&lt;/h2&gt;&lt;p&gt;If processes can’t share memory, how do they communicate?&lt;/p&gt;&lt;p&gt;Every BEAM process has a mailbox — a queue of incoming messages. You send a message to a process using its process identifier (PID). The message is copied into the recipient’s mailbox. The sender doesn’t wait (it’s asynchronous by default). The recipient processes messages from its mailbox when it’s ready.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;# Process A sends a message to Process B
send(process_b_pid, {:temperature_reading, 23.5, ~U[2026-02-22 10:00:00Z]})
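
# The recipient mailbox depth is observable at any time, e.g.:
# Process.info(process_b_pid, :message_queue_len)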

# Process B receives it when ready
receive do
  {:temperature_reading, temp, timestamp} -&amp;gt;
    IO.puts(&amp;quot;Got #{temp}°C at #{timestamp}&amp;quot;)
end&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;A few things to notice:&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Messages are copied, not shared.&lt;/strong&gt; When you send a message, the data is copied into the recipient’s heap. This sounds expensive, and for very large messages, it can be. But it means there’s zero possibility of two processes modifying the same data. The tradeoff is worth it — you buy correctness by default.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Pattern matching on receive.&lt;/strong&gt; The &lt;code&gt;receive&lt;/code&gt; block uses Elixir’s pattern matching to selectively pull messages from the mailbox. Messages that don’t match stay in the mailbox for later. This means a process can handle different message types in different contexts without any routing logic.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Backpressure is built in.&lt;/strong&gt; If a process receives messages faster than it can handle them, the mailbox grows. This is visible and monitorable. You can inspect any process’s mailbox length, set up alerts, and make architectural decisions about it. Contrast this with thread-based systems where overload manifests as increasing latency, deadlocks, or OOM crashes — symptoms that are harder to diagnose and attribute.&lt;/p&gt;&lt;p&gt;The message-passing model creates a natural architecture. Each process is a self-contained unit with its own state, handling one thing well. 
Processes compose into systems through messages — like microservices, but within a single runtime, with nanosecond message delivery instead of network hops.&lt;/p&gt;&lt;h2&gt;”Let it crash” — resilience as architecture&lt;/h2&gt;&lt;p&gt;This is the most misunderstood concept in the BEAM ecosystem.&lt;/p&gt;&lt;p&gt;“Let it crash” does not mean “ignore errors.” It does not mean “don’t handle edge cases.” It means: &lt;strong&gt;separate the code that does work from the code that handles failure.&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;In most languages, business logic and error recovery are interleaved:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;def process_payment(order):
    try:
        customer = fetch_customer(order.customer_id)
    except DatabaseError:
        logger.error(&amp;quot;DB failed fetching customer&amp;quot;)
        return retry_later(order)
    except CustomerNotFound:
        logger.error(&amp;quot;Customer missing&amp;quot;)
        return mark_order_failed(order)

    try:
        charge = payment_gateway.charge(customer, order.total)
    except PaymentDeclined:
        notify_customer(customer, &amp;quot;Payment declined&amp;quot;)
        return mark_order_failed(order)
    except GatewayTimeout:
        logger.error(&amp;quot;Payment gateway timeout&amp;quot;)
        return retry_later(order)
    except RateLimitError:
        sleep(1)
        return process_payment(order)  # retry

    try:
        send_confirmation(customer, charge)
    except EmailError:
        logger.warning(&amp;quot;Confirmation email failed&amp;quot;)
        # Continue anyway? Or fail? Hard to decide here.

    return mark_order_complete(order)&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Every function call is wrapped in error handling. The happy path — the actual business logic — is buried under defensive code. And every new failure mode adds another branch. The code becomes harder to read, harder to test, and harder to change.&lt;/p&gt;&lt;p&gt;On BEAM, you write the happy path:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;defmodule PaymentProcessor do
  use GenServer

  def handle_call({:process, order}, _from, state) do
    customer = Customers.fetch!(order.customer_id)
    charge = PaymentGateway.charge!(customer, order.total)
    Notifications.send_confirmation!(customer, charge)
    {:reply, :ok, state}
  end
end&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If any of those calls fail, the process crashes. That’s not a bug — it’s the design. A supervisor (which we’ll get to next) is watching this process. It knows what to do when it crashes: restart it, retry the operation, or escalate to a higher-level supervisor.&lt;/p&gt;&lt;p&gt;The business logic is clean because error recovery is a separate concern, handled by a separate process. This isn’t about being reckless. It’s about putting recovery logic where it belongs — in the supervision tree, not tangled into every function.&lt;/p&gt;&lt;p&gt;Here’s the key insight: &lt;strong&gt;the process that crashes loses its state, but that’s fine because you designed for it.&lt;/strong&gt; You put critical state in a database or an ETS table. The process itself is cheap, stateless enough to restart cleanly, and focused entirely on doing its job.&lt;/p&gt;&lt;h2&gt;Supervision trees&lt;/h2&gt;&lt;p&gt;A supervisor is a process whose only job is watching other processes and reacting when they die. Supervisors are organized into trees — a supervisor can supervise other supervisors, creating a hierarchy of recovery strategies.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;defmodule MyApp.Supervisor do
  use Supervisor

  def start_link(opts) do
    Supervisor.start_link(__MODULE__, opts, name: __MODULE__)
  end

  def init(_opts) do
    children = [
      {PaymentProcessor, []},
      {NotificationService, []},
      {MetricsCollector, []}
    ]

    # If any child crashes, restart only that child
    Supervisor.init(children, strategy: :one_for_one)
  end
end&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The &lt;code&gt;:one_for_one&lt;/code&gt; strategy means: if the &lt;code&gt;PaymentProcessor&lt;/code&gt; crashes, restart it. Leave &lt;code&gt;NotificationService&lt;/code&gt; and &lt;code&gt;MetricsCollector&lt;/code&gt; alone. Other strategies exist:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;strong&gt;&lt;code&gt;:one_for_all&lt;/code&gt;&lt;/strong&gt; — if any child crashes, restart all children. Used when children are interdependent and can’t function without each other.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;&lt;code&gt;:rest_for_one&lt;/code&gt;&lt;/strong&gt; — if a child crashes, restart it and all children started after it. Used when later children depend on earlier ones.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Supervisors also enforce intensity limits. You can say “restart this child up to 3 times within 5 seconds — if it keeps crashing after that, terminate this entire subtree and let my parent supervisor decide what to do.” This prevents crash loops from consuming resources indefinitely.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;# Restart up to 3 times in 5 seconds, then give up
Supervisor.init(children, strategy: :one_for_one, max_restarts: 3, max_seconds: 5)&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The supervision tree isn’t just an error-handling mechanism. It’s your application’s architecture diagram. When you look at a well-structured Elixir application, the supervision tree tells you:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;What components exist&lt;/li&gt;&lt;li&gt;What depends on what&lt;/li&gt;&lt;li&gt;What happens when each component fails&lt;/li&gt;&lt;li&gt;How the system degrades under failure&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;This is information that in most codebases lives in documentation (if it exists at all) or in the heads of senior engineers. In an OTP application, it’s encoded in the code itself.&lt;/p&gt;&lt;h2&gt;OTP: patterns, not a framework&lt;/h2&gt;&lt;p&gt;OTP stands for Open Telecom Platform — a name from its Ericsson origins that nobody takes literally anymore. What OTP actually is: a set of battle-tested patterns for building concurrent systems.&lt;/p&gt;&lt;p&gt;The most important ones:&lt;/p&gt;&lt;h3&gt;GenServer — the general-purpose stateful process&lt;/h3&gt;&lt;p&gt;Most processes in an Elixir application are GenServers. A GenServer is a process that:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Holds state&lt;/li&gt;&lt;li&gt;Handles synchronous calls (request → response)&lt;/li&gt;&lt;li&gt;Handles asynchronous casts (fire and forget)&lt;/li&gt;&lt;li&gt;Handles arbitrary messages (system signals, timers, etc.)&lt;/li&gt;&lt;/ul&gt;&lt;pre&gt;&lt;code&gt;defmodule SessionStore do
  use GenServer

  # Client API
  def start_link(user_id) do
    GenServer.start_link(__MODULE__, %{user_id: user_id, messages: []})
  end

  def add_message(pid, message) do
    GenServer.cast(pid, {:add_message, message})
  end

  def get_history(pid) do
    GenServer.call(pid, :get_history)
  end

  # Server callbacks
  def init(state), do: {:ok, state}

  def handle_cast({:add_message, message}, state) do
    {:noreply, %{state | messages: [message | state.messages]}}
  end

  def handle_call(:get_history, _from, state) do
    {:reply, Enum.reverse(state.messages), state}
  end
end&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This is a process that holds a conversation history. You can spawn one per user session. Each one is isolated — its own memory, its own mailbox, its own lifecycle. A thousand concurrent users means a thousand of these processes, each consuming ~2KB plus whatever state they hold. The scheduler handles the rest.&lt;/p&gt;&lt;p&gt;Compare this to the typical approach: a shared data structure (Redis, a database table, or an in-memory map) that all request handlers read from and write to. That works, but now you need to think about cache invalidation, race conditions on writes, connection pooling to your state store, and what happens when the store goes down.&lt;/p&gt;&lt;p&gt;With GenServer, the state &lt;em&gt;is&lt;/em&gt; the process. No external store to manage. No cache to invalidate. The process is the single source of truth for its own state.&lt;/p&gt;&lt;h3&gt;Application — the deployable unit&lt;/h3&gt;&lt;p&gt;An OTP Application is a component that can be started and stopped as a unit. It has its own supervision tree, its own configuration, and its own lifecycle. Your Elixir project is itself an Application, and it depends on other Applications (Phoenix, Ecto, Oban, etc.).&lt;/p&gt;&lt;p&gt;When your application starts, the supervision tree starts from the root. Every process is accounted for. Nothing is floating — every process is supervised, and every supervisor is supervised, all the way up to the application root.&lt;/p&gt;&lt;p&gt;This is in contrast with most web frameworks where you start a server, and then various things happen at import time, module load time, and initialization time in ways that are difficult to reason about. In OTP, the startup order is explicit and hierarchical.&lt;/p&gt;&lt;h2&gt;The runtime: why BEAM can’t be replicated with libraries&lt;/h2&gt;&lt;p&gt;Other languages can implement the actor model as a library. Akka does it for the JVM. 
Asyncio with some discipline can approximate it in Python. But there are runtime-level properties of the BEAM that can’t be replicated without modifying the VM itself.&lt;/p&gt;&lt;h3&gt;Preemptive scheduling&lt;/h3&gt;&lt;p&gt;The BEAM scheduler counts reductions (roughly, function calls) for each process. After approximately 4,000 reductions, the scheduler preempts the process and switches to the next one. The process doesn’t get a choice. It doesn’t need to yield cooperatively.&lt;/p&gt;&lt;p&gt;This means: &lt;strong&gt;no process can starve the system.&lt;/strong&gt; If one process enters an infinite loop, runs an expensive computation, or blocks on a slow operation, every other process continues running normally.&lt;/p&gt;&lt;p&gt;Node.js can’t do this. Its event loop is cooperative — if a callback takes 500ms of CPU time, nothing else runs during those 500ms. Python with asyncio has the same limitation. Go is better (goroutines are preemptively scheduled as of Go 1.14), but goroutines share memory, which reintroduces the class of problems isolation solves.&lt;/p&gt;&lt;h3&gt;Per-process garbage collection&lt;/h3&gt;&lt;p&gt;Each BEAM process has its own heap and its own garbage collector. When a process’s heap needs collection, only that process pauses. Every other process continues executing.&lt;/p&gt;&lt;p&gt;This is a profound difference. In the JVM, Go, Python, or Node.js, garbage collection is a system-wide event. The GC pauses might be short (Go’s GC is excellent), but they affect all running work. For a system handling thousands of concurrent connections, even a 10ms pause affects every single one.&lt;/p&gt;&lt;p&gt;On BEAM, a process’s GC pause affects exactly one connection, one session, one agent. 
And because processes are small (remember, ~2KB), individual collection events are tiny.&lt;/p&gt;&lt;h3&gt;Soft real-time guarantees&lt;/h3&gt;&lt;p&gt;The combination of preemptive scheduling and per-process GC gives the BEAM something unusual: soft real-time guarantees. Not hard real-time — this isn’t an RTOS. But consistent, predictable latency across thousands of concurrent operations.&lt;/p&gt;&lt;p&gt;This is why WhatsApp ran 2 million connections per server on Erlang. Why Discord handles millions of concurrent users with Elixir. Why telecom switches — the original use case — require this level of reliability. And why the BEAM is naturally suited for AI agent systems where thousands of concurrent agents need responsive, isolated execution.&lt;/p&gt;&lt;h3&gt;Hot code swapping&lt;/h3&gt;&lt;p&gt;You can deploy new code to a running BEAM system without stopping it. Running processes continue executing old code until they make a new function call, at which point they transparently switch to the new version. No disconnected WebSockets. No dropped agent sessions. No downtime.&lt;/p&gt;&lt;p&gt;This isn’t theoretical. Ericsson built this because telephone switches can’t go down for deployments. In practice, most Elixir teams use rolling deploys instead. But the capability exists in the runtime, and for systems where connection continuity matters — long-running AI agent sessions, real-time collaborative tools, financial systems — it’s a genuine differentiator.&lt;/p&gt;&lt;h2&gt;Where this shows up today&lt;/h2&gt;&lt;p&gt;The patterns aren’t academic. They’re running production systems right now.&lt;/p&gt;&lt;h3&gt;Phoenix and LiveView&lt;/h3&gt;&lt;p&gt;Phoenix handles HTTP and WebSocket connections as BEAM processes. Each connection is isolated. Phoenix Channels routinely handle 100,000+ concurrent WebSocket connections on a single server. LiveView — server-rendered interactive UIs — maintains a stateful process per connected user. 
That process holds the UI state, handles events, and pushes updates. If a user’s LiveView process crashes, that user sees a reconnection. Nobody else is affected.&lt;/p&gt;&lt;h3&gt;Background job processing&lt;/h3&gt;&lt;p&gt;Oban — the dominant background job library in Elixir — runs jobs as supervised processes. Failed jobs get retried by their supervisor. Job queues have backpressure through process mailboxes. Scheduled work uses OTP timers. The entire system is a supervision tree.&lt;/p&gt;&lt;h3&gt;AI agents&lt;/h3&gt;&lt;p&gt;This is the connection everyone’s making right now. An AI agent is:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;strong&gt;Long-lived&lt;/strong&gt; — maintains conversation state across multiple interactions&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Stateful&lt;/strong&gt; — tracks context, memory, tool results&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Failure-prone&lt;/strong&gt; — LLM API calls time out, rate limit, return malformed JSON&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Concurrent&lt;/strong&gt; — you need to run thousands simultaneously&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;This maps directly to BEAM processes. One process per agent session. State lives in the process. Failures crash the process — a supervisor restarts it. Thousands of concurrent agents are just thousands of 2KB processes on a VM built to handle millions.&lt;/p&gt;&lt;p&gt;The Python ecosystem is building this with asyncio, Pydantic state models, try/except chains, and custom retry logic. It works — with significant engineering effort. But the result is the actor model implemented in userspace on a runtime that wasn’t designed for it. 
The BEAM gives you this at the VM level, with guarantees that can’t be bolted on.&lt;/p&gt;&lt;p&gt;George Guimarães &lt;a href=&quot;https://georgeguimaraes.com/your-agent-orchestrator-is-just-a-bad-clone-of-elixir/&quot;&gt;mapped the correspondence precisely&lt;/a&gt;: isolated state is a process, inter-agent communication is message passing, orchestration is a supervision tree, failure recovery is a supervisor, agent discovery is a process registry, event distribution is process groups. All built into the runtime since the 1990s.&lt;/p&gt;&lt;p&gt;Elixir-native AI tooling is emerging to capitalize on this: &lt;a href=&quot;https://github.com/agentjido/jido&quot;&gt;Jido&lt;/a&gt; for agentic workflows, Bumblebee for running transformer models inside supervision trees, and LangChain bindings with step-mode execution for controlled agent pipelines.&lt;/p&gt;&lt;h3&gt;The 30-second request problem&lt;/h3&gt;&lt;p&gt;Traditional web frameworks (Rails, Django, Express) optimize for requests that complete in milliseconds. AI agent interactions take 5-30 seconds — an LLM call alone can take several seconds, and an agent might chain multiple calls.&lt;/p&gt;&lt;p&gt;Most web servers weren’t built for this. A thread-per-request model with 30-second requests means you need vastly more threads to maintain throughput. Connection pools exhaust quickly. Timeouts cascade.&lt;/p&gt;&lt;p&gt;BEAM was designed for telephone calls — the original long-lived connections. A phone call holds state, runs for minutes, and the system handles millions of them concurrently. 
Replace “phone call” with “AI agent session” and the architecture is identical.&lt;/p&gt;&lt;h2&gt;What can and can’t be replicated&lt;/h2&gt;&lt;p&gt;Let’s be honest about the tradeoffs.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;What other languages can do with effort:&lt;/strong&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Actor model semantics (Akka, asyncio patterns, custom frameworks)&lt;/li&gt;&lt;li&gt;Supervision-like patterns (process managers, health checks, Kubernetes restarts)&lt;/li&gt;&lt;li&gt;Message passing (channels, queues, event buses)&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;A disciplined team can get 70% of what BEAM provides using Python, TypeScript, or Go with the right libraries and architecture. For many applications, that’s enough.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;What requires BEAM’s runtime:&lt;/strong&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;True preemptive scheduling with sub-millisecond fairness&lt;/li&gt;&lt;li&gt;Per-process garbage collection with zero system-wide pauses&lt;/li&gt;&lt;li&gt;Process isolation enforced at the VM level (not by convention)&lt;/li&gt;&lt;li&gt;Hot code swapping without disconnecting active sessions&lt;/li&gt;&lt;li&gt;Millions of lightweight processes on a single node&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;These aren’t features you can add to a runtime. They’re properties of how the runtime is built. The JVM can’t add per-process GC without fundamentally changing its memory model. Node.js can’t add preemptive scheduling without replacing its event loop. Python can’t remove the GIL without… well, they’re working on that.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;What BEAM doesn’t do as well:&lt;/strong&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Raw computational throughput. BEAM is not the fastest VM. For CPU-bound number crunching, the JVM or native code wins. Elixir addresses this with NIFs (native implemented functions) and libraries like Nx for numerical computing.&lt;/li&gt;&lt;li&gt;Ecosystem size. 
Python’s library ecosystem dwarfs Elixir’s, especially in machine learning and data science. This is the real reason most AI frameworks are built in Python — not because Python’s concurrency model is better, but because that’s where PyTorch, Transformers, and the training infrastructure live.&lt;/li&gt;&lt;li&gt;Learning curve. Process-based thinking requires a mental model shift. Developers from imperative backgrounds need time to internalize “let it crash” and stateless function composition. The payoff is real, but the ramp isn’t trivial.&lt;/li&gt;&lt;/ul&gt;&lt;h2&gt;The recurring reinvention&lt;/h2&gt;&lt;p&gt;The pattern keeps repeating because the problem keeps appearing.&lt;/p&gt;&lt;p&gt;In the 1990s, Java’s threading model was supposed to be the answer to concurrent computing. It wasn’t enough. Akka brought the actor model to the JVM in 2009.&lt;/p&gt;&lt;p&gt;In the 2010s, Node.js bet on the event loop and single-threaded async. It worked for I/O-bound web servers. It didn’t work for CPU-bound work or true parallelism. Worker threads were bolted on. Still not enough for isolated, stateful concurrency.&lt;/p&gt;&lt;p&gt;In the 2020s, AI agent frameworks need isolated, supervised, concurrent stateful processes. AutoGen describes itself as an “event-driven actor framework.” LangGraph builds state machines with shared reducers. CrewAI chains task outputs. Each one is building toward something that looks more and more like OTP — but on runtimes that weren’t designed for it.&lt;/p&gt;&lt;p&gt;Erlang’s insight in 1986 was that concurrent, fault-tolerant systems need isolation as a foundational property, not an afterthought. Every runtime that tries to bolt isolation onto a shared-memory model ends up with a system that’s more complex, less reliable, and harder to reason about than one that started with isolation as the default.&lt;/p&gt;&lt;p&gt;The BEAM isn’t the only way to build concurrent systems. But it’s the most coherent one. 
The runtime, the language, the patterns, and the philosophy are all aligned toward the same goal. When the rest of the industry keeps independently arriving at the same architecture, that’s not coincidence. That’s convergence on a correct solution.&lt;/p&gt;&lt;hr/&gt;&lt;p&gt;&lt;em&gt;We build production systems on Elixir and the BEAM — from &lt;a href=&quot;https://variantsystems.io/work/clinic-management-platform&quot;&gt;healthcare platforms&lt;/a&gt; to real-time infrastructure. If you’re evaluating Elixir for a project or need help with an existing BEAM codebase, &lt;a href=&quot;https://variantsystems.io/contact&quot;&gt;let’s talk&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;</content:encoded>
</item>
<item>
<title>Playing with HTML5 Canvas from Elixir</title>
<link>https://lucassifoni.info/blog/canvas-from-elixir/</link>
<guid isPermaLink="false">1AC_HKrKTPycqsDp4eyFNBfFUvFv81YaDYp-qw==</guid>
<pubDate>Thu, 26 Feb 2026 19:56:05 +0000</pubDate>
<description>Creating a server-side drawing API for HTML5 Canvas - using Elixir to generate optimized JavaScript commands for a circular PONG game</description>
<content:encoded>Creating a server-side drawing API for HTML5 Canvas - using Elixir to generate optimized JavaScript commands for a circular PONG game</content:encoded>
</item>
<item>
<title>Phoenix and Tailwind on FreeBSD – Makefile.feld</title>
<link>https://blog.feld.me/posts/2026/02/phoenix-tailwind-freebsd/</link>
<enclosure type="image/jpeg" length="0" url="https://blog.feld.me/static/site_logo_512.png"></enclosure>
<guid isPermaLink="false">NIsYXsl2pGPHDZW2qXnRU6u0PpBIHwbsYxZJSQ==</guid>
<pubDate>Thu, 26 Feb 2026 12:37:31 +0000</pubDate>
<description>Phoenix development on FreeBSD can be a bit of a pain because they integrated TailwindCSS nicely, but it depends on some prebuilt tailwind and esbuild binaries that it auto-fetches for you. Unfortunately, there aren&#39;t any prebuilt binaries for tailwind on FreeBSD so lots of people seem to run into issues …</description>
<content:encoded>&lt;p&gt;Phoenix development on FreeBSD can be a bit of a pain because they integrated TailwindCSS nicely, but it depends on some prebuilt tailwind and esbuild binaries that it auto-fetches for you. Unfortunately, there aren&amp;#39;t any prebuilt binaries for tailwind on FreeBSD so lots of people seem to run into issues. I&amp;#39;ve banged my head about this a lot with various hacky solutions, but so far this one seems the simplest. It also should work if you try to build or deploy on different OSes as long as you have node and npm installed.&lt;/p&gt;
&lt;div&gt;&lt;pre&gt;diff --git a/assets/package.json b/assets/package.json
new file mode 100644
index 0000000..b5813eb
--- /dev/null
+++ b/assets/package.json
@@ -0,0 +1,8 @@
+{
+  &amp;quot;name&amp;quot;: &amp;quot;assets&amp;quot;,
+  &amp;quot;dependencies&amp;quot;: {
+    &amp;quot;daisyui&amp;quot;: &amp;quot;^5.0.16&amp;quot;,
+    &amp;quot;tailwindcss&amp;quot;: &amp;quot;^4.1.3&amp;quot;,
+    &amp;quot;@tailwindcss/cli&amp;quot;: &amp;quot;^4.1.3&amp;quot;
+  }
+}
&lt;/pre&gt;&lt;/div&gt;
&lt;div&gt;&lt;pre&gt;diff --git a/assets/css/app.css b/assets/css/app.css
index ebb15d9..f564733 100644
--- a/assets/css/app.css
+++ b/assets/css/app.css
@@ -13,7 +13,7 @@
 /* daisyUI Tailwind Plugin. You can update this file by fetching the latest version with:
    curl -sLO https://github.com/saadeghi/daisyui/releases/latest/download/daisyui.js
    Make sure to look at the daisyUI changelog: https://daisyui.com/docs/changelog/ */
-@plugin &amp;quot;../vendor/daisyui&amp;quot; {
+@plugin &amp;quot;daisyui&amp;quot; {
   themes: false;
 }

@@ -21,7 +21,7 @@
   curl -sLO https://github.com/saadeghi/daisyui/releases/latest/download/daisyui-theme.js
   We ship with two themes, a light one inspired on Phoenix colors and a dark one inspired
   on Elixir colors. Build your own at: https://daisyui.com/theme-generator/ */
-@plugin &amp;quot;../vendor/daisyui-theme&amp;quot; {
+@plugin &amp;quot;daisyui/theme&amp;quot; {
   name: &amp;quot;dark&amp;quot;;
   default: false;
   prefersdark: true;
@@ -56,7 +56,7 @@
   --noise: 0;
 }

-@plugin &amp;quot;../vendor/daisyui-theme&amp;quot; {
+@plugin &amp;quot;daisyui/theme&amp;quot; {
   name: &amp;quot;light&amp;quot;;
   default: true;
   prefersdark: false;
&lt;/pre&gt;&lt;/div&gt;
&lt;div&gt;&lt;pre&gt;diff --git a/config/config.exs b/config/config.exs
index 331305d..b300fc7 100644
--- a/config/config.exs
+++ b/config/config.exs
@@ -43,14 +43,16 @@ config :esbuild,

 # Configure tailwind (the version is required)
 config :tailwind,
   version: &amp;quot;4.1.12&amp;quot;,
+  version_check: false,
   your_project: [
     args: ~w(
       --input=assets/css/app.css
       --output=priv/static/assets/css/app.css
     ),
     cd: Path.expand(&amp;quot;..&amp;quot;, __DIR__)
-  ]
+  ],
+  path: Path.expand(&amp;quot;../assets/node_modules/.bin/tailwindcss&amp;quot;, __DIR__)
&lt;/pre&gt;&lt;/div&gt;
&lt;div&gt;&lt;pre&gt;diff --git a/mix.exs b/mix.exs
index 2c3e407..1d59701 100644
--- a/mix.exs
+++ b/mix.exs
@@ -81,7 +90,10 @@ defmodule Booter.MixProject do
       &amp;quot;ecto.setup&amp;quot;: [&amp;quot;ecto.create&amp;quot;, &amp;quot;ecto.migrate&amp;quot;, &amp;quot;run priv/repo/seeds.exs&amp;quot;],
       &amp;quot;ecto.reset&amp;quot;: [&amp;quot;ecto.drop&amp;quot;, &amp;quot;ecto.setup&amp;quot;],
       test: [&amp;quot;ecto.create --quiet&amp;quot;, &amp;quot;ecto.migrate --quiet&amp;quot;, &amp;quot;test&amp;quot;],
-      &amp;quot;assets.setup&amp;quot;: [&amp;quot;tailwind.install --if-missing&amp;quot;, &amp;quot;esbuild.install --if-missing&amp;quot;],
+      &amp;quot;assets.setup&amp;quot;: [
+        &amp;quot;cmd --cd assets npm install&amp;quot;,
+        &amp;quot;esbuild.install --if-missing&amp;quot;,
+      ],
       &amp;quot;assets.build&amp;quot;: [&amp;quot;compile&amp;quot;, &amp;quot;tailwind your_project&amp;quot;, &amp;quot;esbuild your_project&amp;quot;],
       &amp;quot;assets.deploy&amp;quot;: [
         &amp;quot;tailwind your_project --minify&amp;quot;,
&lt;/pre&gt;&lt;/div&gt;
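&lt;p&gt;Putting it together, the day-to-day workflow after these changes looks something like this (assuming node and npm are already installed):&lt;/p&gt;
&lt;div&gt;&lt;pre&gt;mix assets.setup    # npm install in assets/, plus esbuild.install --if-missing
mix assets.build    # compile, then run tailwind and esbuild
mix assets.deploy   # minified build for production
&lt;/pre&gt;&lt;/div&gt;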
&lt;p&gt;Apply those changes and &lt;code&gt;mix assets.setup&lt;/code&gt; and friends will do the right thing: tailwind and daisyui will be installed with npm and stored in the &lt;code&gt;assets/&lt;/code&gt; directory, and building and deploying the assets will work without errors.&lt;/p&gt;</content:encoded>
</item>
<item>
<title>An Elixir Adoption Success Story</title>
<link>https://www.thegreatcodeadventure.com/an-elixir-adoption-success-story/</link>
<guid isPermaLink="false">qhQvJI9YGArg9I1RYys_4vpeTt4YHa9ERYO5vg==</guid>
<pubDate>Tue, 24 Feb 2026 18:56:27 +0000</pubDate>
<description>How a team that was new to Elixir over-delivered a big project in just three months.</description>
<content:encoded>&lt;img src=&quot;https://www.thegreatcodeadventure.com/content/images/2021/06/potions-5650263_1280.png&quot; alt=&quot;An Elixir Adoption Success Story&quot; title=&quot;&quot;/&gt;&lt;p&gt;&lt;em&gt;How a team that was new to Elixir over-delivered a big project in just three months.&lt;/em&gt;&lt;/p&gt;&lt;p&gt;Adopting a new language is more than just a technical journey. A language is only the right tool for the job if your engineers can wield it well. So, as a technical leader you don&amp;#39;t just select a language or a framework based on its technical capabilities and how suited it is for the problems you need to solve, you also try to optimize for the skills your team already has, for a robust ecosystem surrounding that language, for strong community support, and so much more. Elixir lays claim to all of these features and I&amp;#39;ve seen teams ramp up on Elixir with astonishing speed, delivering robust, high-value projects to their organizations in record time. Don&amp;#39;t believe me? I&amp;#39;ll walk you through one such success story in which a small team of engineers who were mostly new to Elixir managed to over-deliver on a complex project in just three months. You&amp;#39;ll learn why we chose Elixir, how Elixir enabled us to be successful, and some of the practices and techniques that will help&lt;br/&gt;
your own team adopt Elixir.&lt;/p&gt;&lt;p&gt;&lt;em&gt;This post was sponsored by digital product consultancy &lt;a href=&quot;https://dockyard.com/?ref=thegreatcodeadventure.com&quot;&gt;DockYard&lt;/a&gt; to support the Elixir community and to encourage its members to share their stories.&lt;/em&gt;&lt;/p&gt;&lt;h2&gt;The Project&lt;/h2&gt;&lt;p&gt;&lt;img src=&quot;https://images.unsplash.com/photo-1452860606245-08befc0ff44b?crop=entropy&amp;amp;cs=tinysrgb&amp;amp;fit=max&amp;amp;fm=jpg&amp;amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDV8fHByb2plY3R8ZW58MHx8fHwxNjI0MzgzODY2&amp;amp;ixlib=rb-1.2.1&amp;amp;q=80&amp;amp;w=2000&quot; alt=&quot;An Elixir Adoption Success Story&quot; title=&quot;&quot;/&gt;&lt;br/&gt;&lt;small&gt;Photo by &lt;a href=&quot;https://unsplash.com/@joszczepanska?utm_source=ghost&amp;amp;utm_medium=referral&amp;amp;utm_campaign=api-credit&quot;&gt;Jo Szczepanska&lt;/a&gt; / &lt;a href=&quot;https://unsplash.com/?utm_source=ghost&amp;amp;utm_medium=referral&amp;amp;utm_campaign=api-credit&quot;&gt;Unsplash&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;&lt;p&gt;This Elixir success story centers on a three month engagement with an EdTech company to deliver Flatiron School’s in-browser IDE (Integrated Development Environment) and curriculum management system into their own ecosystem. This meant that we would be porting over some existing codebases in both Ruby and Elixir, as well as building out a new Phoenix application that our partnering EdTech company would need to maintain. 
These applications together represented a non-trivial set of features:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Students can click a button in the web app that forks and clones their own copy of a GitHub repository containing an interactive lesson.&lt;/li&gt;&lt;li&gt;Students can write and run code for that lesson repository directly in their browser in an integrated development environment (IDE).&lt;/li&gt;&lt;li&gt;Teachers can manage curriculum by creating, editing and deleting lessons.&lt;/li&gt;&lt;li&gt;Teachers can manage curriculum for different cohorts of students by copying the master curriculum plan, editing it for their cohorts, and making it available to those cohorts.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;The details of this feature set are not too important. Just understand that we had a &lt;em&gt;lot&lt;/em&gt; to build and only a three month engagement to build it in. And, while we did have some legacy applications to maintain, we chose Elixir for the new application that needed to be built. This decision wasn&amp;#39;t just the right technical fit for the tasks at hand; it&amp;#39;s also the reason &lt;em&gt;why&lt;/em&gt; we were so successful in delivering on this project. 
So, why did we choose Elixir in the first place?&lt;/p&gt;&lt;h2&gt;Why We Chose Elixir&lt;/h2&gt;&lt;p&gt;&lt;img src=&quot;https://images.unsplash.com/photo-1610270066297-7b06341d2b8a?crop=entropy&amp;amp;cs=tinysrgb&amp;amp;fit=max&amp;amp;fm=jpg&amp;amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDF8fHBvdGlvbnxlbnwwfHx8fDE2MjQzODQwMTM&amp;amp;ixlib=rb-1.2.1&amp;amp;q=80&amp;amp;w=2000&quot; alt=&quot;An Elixir Adoption Success Story&quot; title=&quot;&quot;/&gt;&lt;br/&gt;&lt;small&gt;Photo by &lt;a href=&quot;https://unsplash.com/@rokkon?utm_source=ghost&amp;amp;utm_medium=referral&amp;amp;utm_campaign=api-credit&quot;&gt;Jan Ranft&lt;/a&gt; / &lt;a href=&quot;https://unsplash.com/?utm_source=ghost&amp;amp;utm_medium=referral&amp;amp;utm_campaign=api-credit&quot;&gt;Unsplash&lt;/a&gt;&lt;/small&gt;&lt;br/&gt;
We had two viable choices for the new application that needed to be built---Rails and Phoenix. The team to which we were delivering the project had no Elixir experience at all, so a choice to build a Phoenix app represented a choice to adopt Elixir.&lt;/p&gt;&lt;p&gt;The first point in Elixir&amp;#39;s favor was that it was technically the right tool for the job. The application we were building was going to be responsible for the curriculum management responsibilities detailed above. We needed concurrency to handle the high volume of lessons that would be deployed or made available to a given class of students at a point in time, and fault tolerance to gracefully handle communication failure with external systems---the app would serve as a touch point between two other apps as well as the GitHub API. The need for concurrency and fault-tolerance made Elixir an obvious choice for this application, but that alone wasn&amp;#39;t enough to seal the deal.&lt;/p&gt;&lt;p&gt;At The Flatiron School, we had several strong Elixir evangelists who advocated for Elixir. They got other internal team members excited about the prospect of working in this new language, and they demonstrated a strong commitment to teaching and supporting Flatiron engineers, as well as engineers at our partner Ed Tech company, along their Elixir journeys. The enthusiasm and commitment of our Elixir evangelists helped us make a successful case for adopting Elixir to our partner company. It was decided that the new application that our two teams would build together during this three month engagement would be a Phoenix app. 
Before we dive into what we built and how, I want to talk a bit about the teams assigned to this project.&lt;/p&gt;&lt;h2&gt;The Team&lt;/h2&gt;&lt;p&gt;&lt;img src=&quot;https://images.unsplash.com/photo-1475506631979-72412c606f4d?crop=entropy&amp;amp;cs=tinysrgb&amp;amp;fit=max&amp;amp;fm=jpg&amp;amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDEyfHx0ZWFtfGVufDB8fHx8MTYyNDM4Mzk1MQ&amp;amp;ixlib=rb-1.2.1&amp;amp;q=80&amp;amp;w=2000&quot; alt=&quot;An Elixir Adoption Success Story&quot; title=&quot;&quot;/&gt;&lt;br/&gt;&lt;small&gt;Photo by &lt;a href=&quot;https://unsplash.com/@pascalswier16?utm_source=ghost&amp;amp;utm_medium=referral&amp;amp;utm_campaign=api-credit&quot;&gt;Pascal Swier&lt;/a&gt; / &lt;a href=&quot;https://unsplash.com/?utm_source=ghost&amp;amp;utm_medium=referral&amp;amp;utm_campaign=api-credit&quot;&gt;Unsplash&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;&lt;p&gt;The team responsible for this project was a combination of two separate teams---a contingent from The Flatiron School and a contingent from the EdTech company to whom we were delivering the final product. Between these two teams, we had one engineer with over a year of Elixir experience, three with around six months of Elixir experience and three with no Elixir experience at all. All of our engineers with production Elixir experience were on the Flatiron side, whereas the engineers at our partner company (who would be responsible for owning and maintaining these applications after the three month engagement) were all brand new to Elixir and Phoenix.&lt;/p&gt;&lt;p&gt;It&amp;#39;s also important to note that we weren&amp;#39;t just facing the typical adoption challenge of introducing a new language to a group of people, we were also tasked with bringing two new teams of people together for the first time. 
Not only would we have to learn how to program in Elixir, we would also have to learn how to collaborate with a new group of people from an entirely different organization, with its own norms and practices.&lt;/p&gt;&lt;p&gt;With all of these challenges in front of us, we got to work.&lt;/p&gt;&lt;h2&gt;The Journey&lt;/h2&gt;&lt;p&gt;&lt;img src=&quot;https://www.thegreatcodeadventure.com/content/images/2021/06/compass-with-map-on-table-camera-travel.jpeg&quot; alt=&quot;An Elixir Adoption Success Story&quot; title=&quot;&quot;/&gt;&lt;/p&gt;&lt;p&gt;Our three month engagement proceeded in roughly three stages---technical design, establishing norms and practices, and shipping (lots of) Elixir code.&lt;/p&gt;&lt;p&gt;With only three months in which to deliver on our ambitious goals, we began the engagement with a brief design sprint. We used a one-week sprint to identify product and technical requirements and map out the overall design of the new system, including the legacy apps we&amp;#39;d be porting over into a new ecosystem as well as the new app we would be building.&lt;/p&gt;&lt;p&gt;With a technical design in place, we were able to get to work, and we had a lot to do in order to establish both technical and process practices and norms. Elixir&amp;#39;s robust ecosystem and tooling made it easy to establish technical norms around testing and test coverage, releasing and deploying, observability, code quality and more. Meanwhile, we also established practices like daily stand-ups, weekly retros, and lots of pair programming. We&amp;#39;ll dig into the details there and discuss how all of these contributed to our success in a bit.&lt;/p&gt;&lt;p&gt;With our norms and practices in place, we were able to start delivering value fast. About halfway through our three month engagement, we had delivered on our MVP, and by the end of the three months we had knocked off every single feature on our list and then some. 
Ultimately, we over-delivered on the agreed-upon features and left our partner company with a robust ecosystem that was easy to observe and debug and easy to grow and maintain as they took over ownership for the future.&lt;/p&gt;&lt;h2&gt;Our Secrets to Elixir Success&lt;/h2&gt;&lt;p&gt;How exactly were we able to over-deliver on this complex project, while bringing together teams from two entirely different organizations, and leveraging predominantly engineers with little-to-no Elixir experience? Keep reading to find out.&lt;/p&gt;&lt;h3&gt;Elixir Has a Gentle Learning Curve&lt;/h3&gt;&lt;p&gt;&lt;img src=&quot;https://www.thegreatcodeadventure.com/content/images/2021/06/25366514875_69e9d5c980_b.jpeg&quot; alt=&quot;An Elixir Adoption Success Story&quot; title=&quot;&quot;/&gt;&lt;/p&gt;&lt;p&gt;Our team of mostly new Elixir devs ramped up on Elixir with impressive speed, thanks in large part to Elixir&amp;#39;s gentle learning curve. There are a few reasons for this. First, Elixir is highly eloquent---its easy-to-read syntax lowers the barrier to entry for new developers. The syntax will look especially familiar to anyone with a Ruby background, so our Ruby devs already had a leg up when it came to reading existing Elixir code and writing their first Elixir programs.&lt;/p&gt;&lt;p&gt;Elixir&amp;#39;s syntax isn&amp;#39;t the only language feature that contributes to its gentle learning curve. Elixir&amp;#39;s pattern matching functionality, along with features like guard clauses and the pipe operator, makes it easy to write clean and highly testable code. Pattern matching and guard clauses allow developers to implement control flow beautifully, without things like lots of nested &lt;code&gt;if&lt;/code&gt; conditions. Meanwhile, the pipe operator encourages developers to write small, pure, single-purpose functions that are strung together in easy-to-read flows. 
This kind of code has the added benefit of being easy to test, providing developers with a short feedback cycle between writing code and evaluating its behavior. As our developers unlocked the power of these and other Elixir features, they became more and more excited about working with Elixir. That excitement was infectious, and it inspired them and their colleagues to keep learning.&lt;/p&gt;&lt;p&gt;Elixir&amp;#39;s robust documentation also greatly contributed to the speed with which our new Elixir devs were able to learn, and be productive in, Elixir. Elixir treats &lt;a href=&quot;https://hexdocs.pm/elixir/1.12/writing-documentation.html?ref=thegreatcodeadventure.com&quot;&gt;documentation as a first class citizen&lt;/a&gt;, so you&amp;#39;ll find that everything from core language features to popular libraries is well documented. Elixir documentation is written with the &lt;a href=&quot;https://hexdocs.pm/ex_doc/readme.html?ref=thegreatcodeadventure.com&quot;&gt;ExDoc&lt;/a&gt; tool that makes it easy to generate documentation for your project and get it published to &lt;a href=&quot;https://hexdocs.pm/?ref=thegreatcodeadventure.com&quot;&gt;Hex Docs&lt;/a&gt;. The official Elixir docs are written with ExDoc and published to Hex Docs, and the rest of the Elixir world has followed suit in documenting their own libraries.&lt;/p&gt;&lt;p&gt;Official docs aren&amp;#39;t the only resource out there for learning Elixir and working with new libraries. There are also a lot of Elixir community resources. Our team relied on &lt;a href=&quot;https://elixirschool.com/en/?ref=thegreatcodeadventure.com&quot;&gt;Elixir School&lt;/a&gt;, a free, open-source online Elixir curriculum, as well as the growing number of blog posts on Elixir topics. 
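&lt;/p&gt;&lt;p&gt;
To make this concrete, here is a small, hypothetical example (not code from our project) of guard clauses standing in for nested &lt;code&gt;if&lt;/code&gt; conditions, with the pipe operator composing small, pure functions into one readable flow:
&lt;/p&gt;

```elixir
defmodule Grader do
  # Pattern matching + guard clauses as control flow: the first clause
  # whose pattern and guard match is the one that runs.
  def grade(score) when score >= 90, do: :honors
  def grade(score) when score >= 70, do: :pass
  def grade(_score), do: :fail

  # Small single-purpose steps strung together with |>
  def summary(scores) do
    scores
    |> Enum.map(&grade/1)
    |> Enum.frequencies()
  end
end

IO.inspect(Grader.summary([95, 72, 40, 88]))
```

&lt;p&gt;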
&lt;a href=&quot;https://elixirforum.com/?ref=thegreatcodeadventure.com&quot;&gt;Elixir Forum&lt;/a&gt;, a community forum where users can post and answer questions and open discussion threads, also provided critical support to our developers throughout this project.&lt;/p&gt;&lt;p&gt;Elixir&amp;#39;s language features, documentation, and growing community helped our team level up on Elixir in record time and supported them to solve complex challenges throughout the three month period of this project, far beyond the initial learning phase. Elixir&amp;#39;s gentle learning curve isn&amp;#39;t the only reason that our team was so successful in adopting this new language and delivering on some aggressive goals, however.&lt;/p&gt;&lt;h3&gt;Elixir Has a Robust Ecosystem&lt;/h3&gt;&lt;p&gt;&lt;img src=&quot;https://www.thegreatcodeadventure.com/content/images/2021/06/The_lake_ecosystem_-_Flickr_-_askmeaks.jpeg&quot; alt=&quot;An Elixir Adoption Success Story&quot; title=&quot;&quot;/&gt;&lt;/p&gt;&lt;p&gt;While Elixir is still a relatively new language, its ecosystem is maturing fast. You&amp;#39;ll find excellent support for everything from testing your code in development, to building and releasing your code in production, to instrumenting and observing it in the wild.&lt;/p&gt;&lt;h4&gt;Fast and Comprehensive Testing in Elixir&lt;/h4&gt;&lt;p&gt;Elixir&amp;#39;s built-in testing framework, &lt;a href=&quot;https://hexdocs.pm/ex_unit/1.12/ExUnit.html?ref=thegreatcodeadventure.com&quot;&gt;ExUnit&lt;/a&gt;, provides you with everything you need to exercise every pathway through your code. ExUnit even enables you to test asynchronous code flows and code flows that involve message passing between Elixir processes. Unsurprisingly, since ExUnit is written in pure Elixir, your tests will be highly concurrent and super fast. 
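&lt;/p&gt;&lt;p&gt;
A minimal, standalone ExUnit script might look like this (illustrative only; the module and test names are invented). Note &lt;code&gt;async: true&lt;/code&gt;, which lets test modules run concurrently, and &lt;code&gt;assert_receive&lt;/code&gt;, which asserts on message passing between processes:
&lt;/p&gt;

```elixir
# Start ExUnit without auto-running, define a test module, then run.
ExUnit.start(autorun: false)

defmodule WorkerTest do
  use ExUnit.Case, async: true

  test "worker sends its result back to the caller" do
    parent = self()
    spawn(fn -> send(parent, {:result, 2 + 2}) end)
    # Waits (up to the default timeout) for a matching message
    assert_receive {:result, 4}
  end
end

result = ExUnit.run()
IO.inspect(result.failures)
# => 0
```

&lt;p&gt;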
Mocking in your Elixir tests is also made easy thanks to the &lt;a href=&quot;https://hexdocs.pm/mox/Mox.html?ref=thegreatcodeadventure.com&quot;&gt;Mox&lt;/a&gt; library that lets you provide mocks based on the contracts defined in your code. So, while Elixir&amp;#39;s language features encourage you to write highly testable code with small, pure, single-purpose functions, actually writing the tests for those functions is also a delightful experience. All of this adds up to a high percentage of test coverage, which helped our team move fast &lt;em&gt;without&lt;/em&gt; breaking things when building out a new Elixir app and positioning new Elixir devs to maintain an existing one.&lt;/p&gt;&lt;h4&gt;Easy Elixir Releases&lt;/h4&gt;&lt;p&gt;An easy-to-use and comprehensive test framework is far from the only Elixir ecosystem feature worth mentioning. Elixir offers first-class support for releases through the &lt;a href=&quot;https://hexdocs.pm/mix/Mix.Tasks.Release.html?ref=thegreatcodeadventure.com&quot;&gt;Mix release tool&lt;/a&gt;. Releases allow us to compile our code and package it up, along with its runtime, into a single deployable unit. Each self-contained release package has everything it needs to run in the deployed environment, so you &lt;em&gt;don&amp;#39;t&lt;/em&gt; need to get your source code, &lt;em&gt;or&lt;/em&gt; the Erlang VM (Elixir&amp;#39;s runtime) onto production servers. Your team can even configure multiple versions of each light-weight release package, giving you the flexibility to deploy different pieces of your applications to serve different purposes. For example, you might have one release of an application that only runs some background workers, and another that runs the web server. Release building functionality is built directly into Mix, Elixir&amp;#39;s build tool---no need to reach for a complicated third-party dependency. Your team can start building and releasing Elixir apps with native Elixir tooling. 
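&lt;/p&gt;&lt;p&gt;
For example, a &lt;code&gt;mix.exs&lt;/code&gt; might define two releases from the same codebase (a hypothetical sketch; the app and release names are invented):
&lt;/p&gt;

```elixir
# Hypothetical mix.exs fragment: two releases built from one codebase,
# one for the web server and one for background workers only (the
# worker release would boot just the worker processes, e.g. via
# runtime configuration).
defmodule MyApp.MixProject do
  use Mix.Project

  def project do
    [
      app: :my_app,
      version: "0.1.0",
      releases: [
        web: [applications: [my_app: :permanent]],
        worker: [applications: [my_app: :permanent]]
      ]
    ]
  end
end
```

&lt;p&gt;
Running &lt;code&gt;MIX_ENV=prod mix release web&lt;/code&gt; would then build just the web-facing release.
&lt;/p&gt;&lt;p&gt;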
This helps contribute to fast deployment cycles.&lt;/p&gt;&lt;h4&gt;First Class Observability in Elixir&lt;/h4&gt;&lt;p&gt;Once your code is deployed to production, Elixir makes it easy to observe. Elixir treats observability like the first class citizen it is, thanks to the &lt;a href=&quot;https://github.com/beam-telemetry/telemetry?ref=thegreatcodeadventure.com&quot;&gt;Telemetry library&lt;/a&gt;. The Telemetry library leverages Erlang Term Storage (ETS), a robust in-memory store, to dispatch and handle metrics and instrumentations for your code. It has fast become the standard for instrumenting Elixir code, and most Elixir libraries (like the Phoenix web framework and the Ecto database wrapper) use it to emit a standard set of observability metrics. Library authors and app developers alike are encouraged to use the library to do the same, and Telemetry makes it easy to report metrics and events to the destination of your choice, like Datadog or Prometheus. Elixir developers are therefore empowered to design code with observability baked-in right from the beginning. The result is that we gain a high degree of visibility into our applications, from both out-of-the box Telemetry instrumentation in the libraries we use as well as from our own code. Code that we can observe is code that we can debug, so Elixir&amp;#39;s observability tooling contributes to fast bug remediation cycles.&lt;/p&gt;&lt;p&gt;When we&amp;#39;re working in a new language for the first time, we&amp;#39;re bound to introduce a bug or two. Visibility and ease of debugging become even more important here. This is just one more reason that adopting Elixir was such a smooth process for us.&lt;/p&gt;&lt;h4&gt;An Elixir Library For All Your Needs&lt;/h4&gt;&lt;p&gt;Elixir&amp;#39;s increasingly mature ecosystem also means that you&amp;#39;ll find a library for most of the common problems you&amp;#39;ll be tasked with solving. 
During our three month project, we often had the experience of starting the day with a new problem to solve and no idea what technologies Elixir offered to solve it, and ending the day with a relevant library identified and implemented. In one instance, we needed a solution for JWT authentication in Elixir. Two new Elixir devs were deployed to investigate our options and they had the &lt;a href=&quot;https://github.com/joken-elixir/joken?ref=thegreatcodeadventure.com&quot;&gt;Joken&lt;/a&gt; Elixir library for working with JWTs implemented in our app by the end of that same day. This experience was repeated again and again---we would need to solve some common problem, and sure enough we would quickly find an existing Elixir library to help us develop our solution. However, when Elixir&amp;#39;s libraries fall short, you can plug the gaps with pure Erlang. Elixir supports Erlang interop, so dropping down to the Erlang level and leveraging Erlang functions is trivial. In one instance, when we had some encryption need that we couldn&amp;#39;t solve with an existing Elixir library, we were able to write our own Elixir code that used Erlang&amp;#39;s &lt;code&gt;:crypto&lt;/code&gt; module to get the job done.&lt;/p&gt;&lt;h3&gt;Elixir Was The Right Tool For The Job&lt;/h3&gt;&lt;p&gt;&lt;img src=&quot;https://images.unsplash.com/photo-1426927308491-6380b6a9936f?crop=entropy&amp;amp;cs=tinysrgb&amp;amp;fit=max&amp;amp;fm=jpg&amp;amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDJ8fHRvb2x8ZW58MHx8fHwxNjI0MzgzNzU3&amp;amp;ixlib=rb-1.2.1&amp;amp;q=80&amp;amp;w=2000&quot; alt=&quot;An Elixir Adoption Success Story&quot; title=&quot;&quot;/&gt;&lt;br/&gt;&lt;small&gt;Photo by &lt;a href=&quot;https://unsplash.com/@barnimages?utm_source=ghost&amp;amp;utm_medium=referral&amp;amp;utm_campaign=api-credit&quot;&gt;Barn Images&lt;/a&gt; / &lt;a 
href=&quot;https://unsplash.com/?utm_source=ghost&amp;amp;utm_medium=referral&amp;amp;utm_campaign=api-credit&quot;&gt;Unsplash&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;&lt;p&gt;In addition to the language features mentioned earlier that made Elixir easy and fun to learn, Elixir&amp;#39;s concurrency, fault-tolerance and distributed nature made it the perfect fit for solving the specific problems facing our team. We needed a curriculum management application for teachers that could process lots of lesson &amp;quot;deployments&amp;quot; at a given time, and recover from failure when communicating with external systems. And we needed an in-browser IDE for students that could manage lots of load, and handle crashes while still remaining stateful so that students wouldn&amp;#39;t lose their work.&lt;/p&gt;&lt;p&gt;Elixir&amp;#39;s GenServers and Supervision trees were the perfect fit for our curriculum management tool. We were able to build concurrent workflows that enabled teachers to deploy large amounts of content to different groups of students quickly. Elixir also gave us fine-grained control over failure scenarios. We could decide exactly how our program should behave in the event of a deployment failure, whether that failure was caused internally or happened as a result of an error communicating with an external system like the GitHub API. In Elixir, concurrency and fault-tolerance aren&amp;#39;t afterthoughts. Elixir is built on top of Erlang&amp;#39;s OTP (Open Telecom Platform), which provides a number of libraries and conveniences for managing exactly these pieces of functionality. So, building our application to behave in this manner, while not exactly trivial, wasn&amp;#39;t nearly as onerous as it might have been in another language.&lt;/p&gt;&lt;p&gt;Also thanks to Erlang and OTP, we were able to support a distributed, clustered deployment of our in-browser IDE application right out of the box. 
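&lt;/p&gt;&lt;p&gt;
As a rough sketch of the pattern (all module and function names here are hypothetical, not our actual code), a supervised GenServer worker looks something like this:
&lt;/p&gt;

```elixir
defmodule Deployer do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)
  def deploy(lesson), do: GenServer.call(__MODULE__, {:deploy, lesson})

  @impl true
  def init(_opts), do: {:ok, %{}}

  @impl true
  def handle_call({:deploy, lesson}, _from, state) do
    # This is where we would talk to external systems (e.g. the GitHub
    # API). A crash here is isolated to this process, and the
    # supervisor below restarts the worker with a clean state.
    {:reply, {:ok, lesson}, Map.put(state, lesson, :deployed)}
  end
end

{:ok, _sup} = Supervisor.start_link([Deployer], strategy: :one_for_one)
{:ok, "lesson-1"} = Deployer.deploy("lesson-1")
```

&lt;p&gt;
If the worker crashed mid-deployment, the &lt;code&gt;:one_for_one&lt;/code&gt; strategy would restart just that child, leaving its siblings untouched.
&lt;/p&gt;&lt;p&gt;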
We could manage a dedicated set of scalable resources for a cluster of IDE deployments, while still easily sharing state between nodes. Elixir&amp;#39;s first-class support for distribution made it easy for us to handle a scenario in which, for example, an IDE fails in one node, and is restarted in another node without the current student losing any of their work or being aware of any failure or loss of connectivity.&lt;/p&gt;&lt;p&gt;Thanks to Elixir&amp;#39;s powerful concurrency, fault-tolerance, and distribution features, we were able to quickly build robust and critical code pathways that solved for some of the hardest problems in programming, all with a relatively low degree of complexity.&lt;/p&gt;&lt;h3&gt;Elixir Makes it Easy to Teach and Learn&lt;/h3&gt;&lt;p&gt;&lt;img src=&quot;https://images.unsplash.com/photo-1564144006388-615f4f4abb6e?crop=entropy&amp;amp;cs=tinysrgb&amp;amp;fit=max&amp;amp;fm=jpg&amp;amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDEwfHx0ZWFjaHxlbnwwfHx8fDE2MjQzODM3ODQ&amp;amp;ixlib=rb-1.2.1&amp;amp;q=80&amp;amp;w=2000&quot; alt=&quot;An Elixir Adoption Success Story&quot; title=&quot;&quot;/&gt;&lt;br/&gt;&lt;small&gt;Photo by &lt;a href=&quot;https://unsplash.com/@bel2000a?utm_source=ghost&amp;amp;utm_medium=referral&amp;amp;utm_campaign=api-credit&quot;&gt;Belinda Fewings&lt;/a&gt; / &lt;a href=&quot;https://unsplash.com/?utm_source=ghost&amp;amp;utm_medium=referral&amp;amp;utm_campaign=api-credit&quot;&gt;Unsplash&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;&lt;p&gt;Finally, our Elixir success story wouldn&amp;#39;t have been possible without the dedicated and talented engineers attached to this project. Elixir&amp;#39;s gentle learning curve and robust ecosystem and tooling helped create passionate Elixir advocates who were committed to teaching their colleagues, along with new Elixir devs with an infectious excitement for learning. 
Together, this group deployed the following techniques to create a fun and supportive environment that helped us learn quickly and deliver value even faster.&lt;/p&gt;&lt;h4&gt;Lots of Pair Programming&lt;/h4&gt;&lt;p&gt;For this project, we took an aggressive approach to pair programming, pairing up an experienced and a new Elixir dev on almost every feature and task. This also helped us develop relationships and strengthen communication between the two teams (Flatiron engineers and devs from our partner EdTech company) that were coming together to deliver this project. Pairs were able to work quickly to deliver features while strengthening the Elixir skills of our newer Elixir devs.&lt;/p&gt;&lt;h4&gt;Lunch and Learns&lt;/h4&gt;&lt;p&gt;As the team built on their Elixir knowledge and delivered feature after feature, the excitement to share what they were learning grew. So, we scheduled regular &amp;quot;lunch and learn&amp;quot; discussions in which team members shared short presentations on Elixir topics over lunch. Topics included testing best practices, concurrency in Elixir, pattern matching, and more. This practice helped build on the excitement and sense of accomplishment that was already growing within the team and gave engineers an opportunity to go deeper on the new topics they were encountering every day.&lt;/p&gt;&lt;h4&gt;Celebrating Wins&lt;/h4&gt;&lt;p&gt;With so much happening so quickly, it wasn&amp;#39;t hard to celebrate our wins. We blocked time out at the end of our daily stand-ups for pairs to demo their latest progress or for individual team members to share a quick tidbit that they&amp;#39;d learned. For bigger milestones, we held celebratory lunches and other team activities. 
All of this served to keep engagement high, helping us stay motivated to tackle complex problems in a new language.&lt;/p&gt;&lt;h2&gt;Adopt Elixir for Much Success&lt;/h2&gt;&lt;p&gt;&lt;img src=&quot;https://images.unsplash.com/photo-1612038750554-db2fa5d68752?crop=entropy&amp;amp;cs=tinysrgb&amp;amp;fit=max&amp;amp;fm=jpg&amp;amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDI5fHxjZWxlYnJhdGV8ZW58MHx8fHwxNjI0Mzg0MDg2&amp;amp;ixlib=rb-1.2.1&amp;amp;q=80&amp;amp;w=2000&quot; alt=&quot;An Elixir Adoption Success Story&quot; title=&quot;&quot;/&gt;&lt;br/&gt;&lt;small&gt;Photo by &lt;a href=&quot;https://unsplash.com/@pattib?utm_source=ghost&amp;amp;utm_medium=referral&amp;amp;utm_campaign=api-credit&quot;&gt;Patti Black&lt;/a&gt; / &lt;a href=&quot;https://unsplash.com/?utm_source=ghost&amp;amp;utm_medium=referral&amp;amp;utm_campaign=api-credit&quot;&gt;Unsplash&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;&lt;p&gt;Many teams and organizations reach for Elixir to solve complex technical problems well-suited for Elixir&amp;#39;s concurrent, fault-tolerant, and distributed nature. This was certainly one of the motivating factors in our own choice to use Elixir during this three month project. But, as this case study shows, whether or not you need these compelling popular Elixir features, Elixir can empower your team to write clean, testable, well-instrumented code and have fun doing it. The learning curve to ramp up on Elixir is gentle, and individuals and teams can skill up relatively quickly to deliver value to your organization in record time. Armed with the teaching and learning practices described here, Elixir enabled our team to vastly over-deliver on a complicated project in a short period of time. Elixir is an obvious choice for greenfield development, and after we adopted Elixir, we never looked back.&lt;/p&gt;</content:encoded>
</item>
<item>
<title>Eloquent Control Flow + Efficient Time Complexity in Elixir</title>
<link>https://www.thegreatcodeadventure.com/eloquent-control-flow-and-efficient-time-complexity-in-elixir/</link>
<guid isPermaLink="false">gwMPOZfiLNP-5M37Awi3PHdSsqxNrPFSUWhW-w==</guid>
<pubDate>Tue, 24 Feb 2026 18:56:27 +0000</pubDate>
<description>In this post, I break down my Advent of Code Day 1 solution and dive into how you can use recursion, pattern matching and custom guard clauses to implement even complex logic and control flow in an easy-to-reason-about way that also avoids common time complexity pitfalls.</description>
<content:encoded>&lt;p&gt;While tackling the Day 1 challenge from this year&amp;#39;s Advent of Code in Elixir, I was reminded of some of the many ways that Elixir lets us write concise, efficient, and eloquent code. In this post, I break down my solution and dive into how you can use recursion, pattern matching and custom guard clauses to implement even complex logic and control flow in an easy-to-reason-about way that also avoids common time complexity pitfalls.&lt;/p&gt;&lt;h2&gt;Advent of Code Challenge&lt;/h2&gt;&lt;p&gt;&lt;a href=&quot;https://adventofcode.com/2020/about?ref=thegreatcodeadventure.com&quot;&gt;Advent of Code&lt;/a&gt; is...&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;...an Advent calendar of small programming puzzles for a variety of skill sets and skill levels that can be solved in any programming language you like. People use them as a speed contest, interview prep, company training, university coursework, practice problems, or to challenge each other.&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;You can complete Advent of Code challenges in any language and submit your answers to &amp;quot;win&amp;quot; stars :star: :star: :star:&lt;/p&gt;&lt;p&gt;It&amp;#39;s a fun way to play around with a new language that you&amp;#39;re just starting to learn or to refine your skills in a language that you&amp;#39;re already familiar with.&lt;/p&gt;&lt;p&gt;After being &lt;s&gt;pestered&lt;/s&gt; kindly reminded about it by &lt;a href=&quot;https://hostiledeveloper.com/?ref=thegreatcodeadventure.com&quot;&gt;someone who is definitely not annoying&lt;/a&gt;, I decided to try out the Day 1 puzzle in Elixir.&lt;/p&gt;&lt;p&gt;I had a lot of fun putting my solution together and, unsurprisingly, I found that Elixir&amp;#39;s features allowed me to implement even complex logic and control flow in a way that was remarkably concise, as well as efficient with regard to time complexity. 
Keep reading to see how to leverage pattern matching, recursion and custom guard clauses to write some super clean and extremely elegant Elixir code. :soap: :star: :soap:&lt;/p&gt;&lt;h2&gt;The Prompt&lt;/h2&gt;&lt;p&gt;The Day 1 Advent of Code prompt can be simply stated as:&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;Find the two elements in a given list that sum to 2020 and return the product of those two numbers.&lt;/p&gt;&lt;/blockquote&gt;&lt;blockquote&gt;&lt;p&gt;For example, of the following list: &lt;code&gt;[979, 1721, 366, 299, 675, 1456]&lt;/code&gt;,&lt;/p&gt;&lt;/blockquote&gt;&lt;blockquote&gt;&lt;p&gt;&lt;code&gt;1721 + 299 == 2020&lt;/code&gt;, and &lt;code&gt;1721 * 299 = 514579&lt;/code&gt;.&lt;/p&gt;&lt;/blockquote&gt;&lt;blockquote&gt;&lt;p&gt;So, your code should return &lt;code&gt;514579&lt;/code&gt;&lt;/p&gt;&lt;/blockquote&gt;&lt;h2&gt;Attempt 1: Lots of Iterating&lt;/h2&gt;&lt;p&gt;Before writing any code, I attempted to conceptualize my approach. The first conceptualization that jumped out at me was heavily reliant on iteration and went something like this:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Iterate over the list&lt;/li&gt;&lt;li&gt;For each element in the list, iterate over the remaining elements in the list&lt;/li&gt;&lt;li&gt;For each pair of elements, add them up. If the sum equals 2020, stop.&lt;/li&gt;&lt;li&gt;If the sum does not equal 2020, keep going&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;So, taking the example of our list from above, it would look something like this:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Does 979 + 1721 = 2020? Nope!&lt;/li&gt;&lt;li&gt;Does 979 + 366 = 2020? Nope!&lt;/li&gt;&lt;li&gt;Does 979 + 299 = 2020? Nope!&lt;/li&gt;&lt;li&gt;Does 979 + 675 = 2020? Nope!&lt;/li&gt;&lt;li&gt;Does 979 + 1456 = 2020? Nope!&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Okay, move on to the second element in the list...&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Does 1721 + 366 = 2020? 
Nope!&lt;/li&gt;&lt;li&gt;...&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;This approach uses nested iteration and is not very efficient. For each item in the list you have to iterate over the remainder of the list and execute the check to see if the two numbers sum to &lt;code&gt;2020&lt;/code&gt;. In other words, the &lt;em&gt;outer loop&lt;/em&gt; executes N times, once for each element in the list. And every time the outer loop executes, the inner loop executes M times, where M is however many steps it must complete to check the current outer loop element against the remaining list elements. As a result, the &amp;quot;check to see if the two numbers sum to &lt;code&gt;2020&lt;/code&gt;&amp;quot; statement executes a total of N * M times.&lt;/p&gt;&lt;p&gt;So, the number of operations your code has to do will grow quadratically as elements are added to the list. This represents a high degree of time complexity. Thanks to Elixir, we can do better.&lt;/p&gt;&lt;p&gt;We&amp;#39;ll use recursion and pattern matching to avoid the need to perform 💸 💸 💸 expensive nested iterations 💸 💸 💸. Keep reading to find out how!&lt;/p&gt;&lt;h2&gt;Attempt 2: Pattern Matching and Recursion&lt;/h2&gt;&lt;p&gt;Once I recognized the time complexity of the &amp;quot;lots of iterating&amp;quot; approach, I knew I needed to cut down on iterations. Luckily, Elixir provides us a way to pull elements from a list without iterating over that list---pattern matching. 
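&lt;/p&gt;&lt;p&gt;
For contrast only (this is not the approach we&amp;#39;ll build), the &amp;quot;lots of iterating&amp;quot; strategy can be written as a nested comprehension, checking each element against every element after it:
&lt;/p&gt;

```elixir
list = [979, 1721, 366, 299, 675, 1456]

# Nested iteration: for each element `a`, scan everything after it
# for a partner `b` such that a + b == 2020, collecting a * b.
products =
  for {a, i} <- Enum.with_index(list),
      b <- Enum.drop(list, i + 1),
      a + b == 2020,
      do: a * b

IO.inspect(products)
# => [514579]
```

&lt;p&gt;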
In the next section, we&amp;#39;ll use pattern matching and recursion to peel off list elements and perform our &amp;quot;sum to &lt;code&gt;2020&lt;/code&gt;&amp;quot; check on them.&lt;/p&gt;&lt;h2&gt;Efficient Code with Pattern Matching&lt;/h2&gt;&lt;p&gt;First, let&amp;#39;s walk through how Elixir&amp;#39;s pattern matching can be applied to list elements such that we can perform our &amp;quot;check if sum is &lt;code&gt;2020&lt;/code&gt;&amp;quot; statement against &lt;em&gt;all&lt;/em&gt; of the list elements, &lt;em&gt;without&lt;/em&gt; iterating. This will give us the ability to solve our Advent of Code problem with code that is not overly time-complex.&lt;/p&gt;&lt;h3&gt;Pattern Matching List Elements: The Concept&lt;/h3&gt;&lt;p&gt;In order to illustrate how we can use pattern matching in this way, let&amp;#39;s focus on the goal of checking to see if the first element in our list, plus any other element in the list, equals &lt;code&gt;2020&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;With Elixir&amp;#39;s pattern matching, we can separate out list elements into a &amp;quot;head&amp;quot;, i.e. the first element, and a &amp;quot;tail&amp;quot;, i.e. everything after the first element.&lt;/p&gt;&lt;p&gt;Something like this:&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;language-prettyprint&quot;&gt;iex&amp;gt; [head | tail] = [1, 2, 3]
iex&amp;gt; head
1
iex&amp;gt; tail
[2, 3]&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Using this approach, we can match a variable to the first list element, the second list element, and then everything else like this:&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;language-prettyprint&quot;&gt;iex&amp;gt; list = [979, 1721, 366, 299, 675, 1456]
[979, 1721, 366, 299, 675, 1456]
iex&amp;gt; [first | [second | rest] = tail] = list
[979, 1721, 366, 299, 675, 1456]
iex&amp;gt; first
979
iex&amp;gt; second
1721
iex&amp;gt; rest
[366, 299, 675, 1456]
iex&amp;gt; tail
[1721, 366, 299, 675, 1456]&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;In this way, we can establish a variable, &lt;code&gt;first&lt;/code&gt;, set equal to the first list element, and another variable, &lt;code&gt;second&lt;/code&gt;, bound to the value of the second list element.&lt;/p&gt;&lt;p&gt;Then, we can check if the sum of these two numbers equals &lt;code&gt;2020&lt;/code&gt;. If so, great! We&amp;#39;re done.&lt;/p&gt;&lt;p&gt;If &lt;em&gt;not&lt;/em&gt;, we can construct a new list using the &lt;em&gt;same&lt;/em&gt; first element, and the list remainder stored in &lt;code&gt;rest&lt;/code&gt;, cutting out the second element entirely.&lt;/p&gt;&lt;p&gt;Like this:&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;language-prettyprint&quot;&gt;iex&amp;gt; new_list = [first | rest]
[979, 366, 299, 675, 1456]&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Now, we can repeat the step above to see if the first element in the list plus the &lt;em&gt;new&lt;/em&gt; second element in the list equals &lt;code&gt;2020&lt;/code&gt;:&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;language-prettyprint&quot;&gt;iex&amp;gt; [first | [second | rest]] = new_list
[979, 366, 299, 675, 1456]
iex&amp;gt; first
979
iex&amp;gt; second
366&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;From here, we repeat the process. Does &lt;code&gt;first + second == 2020&lt;/code&gt;? If so, great! We&amp;#39;re done. If not...construct a new list using the same first element, and the list remainder stored in &lt;code&gt;rest&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;Eventually, if the first element cannot be added to any other list element to get the sum of &lt;code&gt;2020&lt;/code&gt;, then we end up with a list that contains only one element. We&amp;#39;ll have cut out all the other elements until only the first element remains.&lt;/p&gt;&lt;p&gt;What do we want to do then? We want to revisit the &lt;em&gt;initial starting list&lt;/em&gt; and pick up with the &lt;em&gt;second&lt;/em&gt; element there. The tail of the original list will be a list that &lt;em&gt;starts&lt;/em&gt; with the original list&amp;#39;s second element.&lt;/p&gt;&lt;p&gt;In other words, if our original list read &lt;code&gt;[979, 1721, 366, 299, 675, 1456]&lt;/code&gt;, and we didn&amp;#39;t find any other number added to &lt;code&gt;979&lt;/code&gt; to equal &lt;code&gt;2020&lt;/code&gt;, then the tail of that list should read:&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;language-prettyprint&quot;&gt;[1721, 366, 299, 675, 1456]&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;We already matched a variable, &lt;code&gt;tail&lt;/code&gt; to the original list&amp;#39;s tail above, like this:&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;language-prettyprint&quot;&gt;iex&amp;gt; list = [979, 1721, 366, 299, 675, 1456]
[979, 1721, 366, 299, 675, 1456]
iex&amp;gt; [_head | tail] = list
[979, 1721, 366, 299, 675, 1456]
iex&amp;gt; tail
[1721, 366, 299, 675, 1456]&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;So, using the list stored in the &lt;code&gt;tail&lt;/code&gt; variable, we can simply repeat the process described above.&lt;/p&gt;&lt;p&gt;Now that we have a basic understanding of what we need to do, let&amp;#39;s write some code.&lt;/p&gt;&lt;h3&gt;Pattern Matching List Elements: The Code&lt;/h3&gt;&lt;p&gt;We&amp;#39;ll define a module, &lt;code&gt;Accountant&lt;/code&gt;, that implements one public function, &lt;code&gt;product_of_equals_twenty_twenty&lt;/code&gt;. This function will take in a list and return the product of the two numbers in the list that sum to &lt;code&gt;2020&lt;/code&gt;. That one function is the entire public interface of our module.&lt;/p&gt;&lt;p&gt;Let&amp;#39;s begin implementing it now.&lt;/p&gt;&lt;p&gt;The function head will use pattern matching to pull out the tail of the list and save it for later in a variable, &lt;code&gt;tail&lt;/code&gt;. Then it will pass the list to a helper function that is responsible for stepping through the process we described above. Let&amp;#39;s revisit that process now by taking a closer look at &lt;code&gt;get_two/1&lt;/code&gt;.&lt;/p&gt;&lt;h3&gt;Pattern Matching Function Heads&lt;/h3&gt;&lt;p&gt;We&amp;#39;ll implement a few versions of the &lt;code&gt;get_two/1&lt;/code&gt; function that use pattern matching in the function head to determine how to behave. We&amp;#39;ll also see recursion make a guest appearance.&lt;/p&gt;&lt;p&gt;Let&amp;#39;s examine each of the versions of this function. Some of this code should look familiar from our discussion of pattern matching list elements above. One function version pattern matches the first and second elements of the list and uses a guard clause to check if they sum to &lt;code&gt;2020&lt;/code&gt;. 
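&lt;/p&gt;&lt;p&gt;Sketched out, those versions of &lt;code&gt;get_two/1&lt;/code&gt; might read something like this (a sketch reconstructed from the prose; the clauses are public &lt;code&gt;def&lt;/code&gt;s here so the snippet stands alone, though in the finished module they would be private helpers):&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;language-prettyprint&quot;&gt;defmodule Accountant do
  # The first two elements sum to 2020: return their product.
  def get_two([first | [second | _rest]]) when first + second == 2020 do
    first * second
  end

  # Only one element left: `first` pairs with nothing, so give up.
  def get_two(list) when length(list) == 1, do: nil

  # Otherwise, drop the second element and recurse on the shorter list.
  def get_two([first | [_second | rest]]) do
    get_two([first | rest])
  end
end&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;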
More on guard clauses in a bit.&lt;/p&gt;&lt;p&gt;If the first and second list elements do sum to &lt;code&gt;2020&lt;/code&gt;, then the function body will execute and return the product of the two numbers.&lt;/p&gt;&lt;p&gt;If the first and second list elements do &lt;em&gt;not&lt;/em&gt; sum to &lt;code&gt;2020&lt;/code&gt;, i.e. if our guard clause does not evaluate to &lt;code&gt;true&lt;/code&gt;, then we hit the next version of the function implementation.&lt;/p&gt;&lt;p&gt;Here, we build a &lt;em&gt;new&lt;/em&gt; list constructed from the first list element and the &lt;em&gt;remainder&lt;/em&gt; of that list, minus the second element. That new list is given as an argument to a recursive call to &lt;code&gt;get_two/1&lt;/code&gt;. This will continue until we either hit the guard clause and return the product of the two elements, or until we have removed every element after the first one, resulting in a list with a length of &lt;code&gt;1&lt;/code&gt;. In that case, we will return &lt;code&gt;nil&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;So, we have a function that, when invoked with a given list, will call itself recursively until it finds two elements that sum to &lt;code&gt;2020&lt;/code&gt;--in which case it returns their product--or until there is only one list element left--in which case it returns &lt;code&gt;nil&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;By calling &lt;code&gt;get_two/1&lt;/code&gt; with the list &lt;code&gt;[979, 1721, 366, 299, 675, 1456]&lt;/code&gt;, we will have successfully checked to see if &lt;code&gt;979&lt;/code&gt; plus any other number in the list equals &lt;code&gt;2020&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;This brings us back to the public function, &lt;code&gt;product_of_equals_twenty_twenty/1&lt;/code&gt;. If this first invocation of &lt;code&gt;get_two/1&lt;/code&gt; returns something that is not &lt;code&gt;nil&lt;/code&gt;, then we&amp;#39;re done! We found the product of the two numbers that sum to &lt;code&gt;2020&lt;/code&gt;. 
Let&amp;#39;s implement some logic to that effect now.&lt;/p&gt;&lt;p&gt;If the first &lt;code&gt;get_two/1&lt;/code&gt; invocation returns &lt;code&gt;nil&lt;/code&gt; instead of a number, we need to start the whole process again, this time with the &lt;em&gt;tail end&lt;/em&gt; of our original list.&lt;/p&gt;&lt;p&gt;This will restart the process, this time with a list that reads: &lt;code&gt;[1721, 366, 299, 675, 1456]&lt;/code&gt;. Once again, our code will try to sum each number with the first list element, this time &lt;code&gt;1721&lt;/code&gt;. If it returns some product, then we&amp;#39;re done! If not, we&amp;#39;ll keep invoking &lt;code&gt;product_of_equals_twenty_twenty/1&lt;/code&gt; with the next list tail, and the next, until we find what we&amp;#39;re looking for.&lt;/p&gt;&lt;p&gt;To wrap up this code, we&amp;#39;ll add a function head for &lt;code&gt;product_of_equals_twenty_twenty/1&lt;/code&gt; that can handle being invoked with an empty list--that is what will happen if our list does not contain any two numbers that sum to &lt;code&gt;2020&lt;/code&gt; and we continue to invoke &lt;code&gt;product_of_equals_twenty_twenty/1&lt;/code&gt; until we have a tail that is empty.&lt;/p&gt;&lt;h3&gt;Putting It All Together&lt;/h3&gt;&lt;p&gt;Our code works, and it&amp;#39;s highly efficient. We&amp;#39;re never iterating over the list, never mind iterating over it in a nested fashion. Instead, we are recursively pulling the first and second elements off of a list, and shrinking the list each step of the way.&lt;/p&gt;&lt;p&gt;This looks pretty clean, but I think we can do even better. Anytime I see an &lt;code&gt;if&lt;/code&gt; condition in Elixir, I wonder if I can replace it with recursion and pattern matching. Elixir allows us to combine recursion and pattern matching into an elegant solution for control flow. 
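&lt;/p&gt;&lt;p&gt;As a rough sketch (the clause bodies here follow the description above, so treat the details as assumptions rather than the post&amp;#39;s exact code), the &lt;code&gt;if&lt;/code&gt;-based version might read:&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;language-prettyprint&quot;&gt;defmodule Accountant do
  # Ran out of candidates entirely: no pair sums to 2020.
  def product_of_equals_twenty_twenty([]), do: nil

  def product_of_equals_twenty_twenty([_first | tail] = list) do
    product = get_two(list)

    if product do
      product
    else
      # No partner for the first element; retry with the tail.
      product_of_equals_twenty_twenty(tail)
    end
  end

  # Pair the first element with each later element in turn.
  defp get_two([first | [second | _rest]]) when first + second == 2020, do: first * second
  defp get_two(list) when length(list) == 1, do: nil
  defp get_two([first | [_second | rest]]), do: get_two([first | rest])
end&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;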
Could we implement &lt;code&gt;product_of_equals_twenty_twenty/1&lt;/code&gt; such that it can handle the case of a &amp;quot;found product&amp;quot;? Let&amp;#39;s give it a shot!&lt;/p&gt;&lt;h2&gt;Clean Control Flow with Pattern Matching and Recursion&lt;/h2&gt;&lt;p&gt;We&amp;#39;ll take a similar approach here to the one we used with our &lt;code&gt;get_two/1&lt;/code&gt; implementation. A set of function heads will use pattern matching to determine how to behave. One such function will leverage recursion to continue code flow, while other function heads will determine when the code will stop executing and return. In this way, Elixir pairs recursion and pattern matching to implement control flow--without &lt;code&gt;if&lt;/code&gt; conditions or &lt;code&gt;while&lt;/code&gt; loops.&lt;/p&gt;&lt;p&gt;Let&amp;#39;s take a look.&lt;/p&gt;&lt;p&gt;We&amp;#39;ve changed the arity of &lt;code&gt;product_of_equals_twenty_twenty&lt;/code&gt; to take in two arguments. The second argument will be the product of the two numbers that sum to &lt;code&gt;2020&lt;/code&gt;, and it will default to &lt;code&gt;nil&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;So, when our function is invoked with a list, the &lt;code&gt;product&lt;/code&gt; argument defaults to &lt;code&gt;nil&lt;/code&gt;, and we find ourselves in the first version of our function.&lt;/p&gt;&lt;p&gt;Here, we kick off the process by recursively invoking &lt;code&gt;product_of_equals_twenty_twenty/2&lt;/code&gt; with the &lt;em&gt;tail&lt;/em&gt; of the original list and the result of calling &lt;code&gt;get_two/1&lt;/code&gt; with the original list.&lt;/p&gt;&lt;p&gt;If &lt;code&gt;get_two/1&lt;/code&gt; returns a product that is not &lt;code&gt;nil&lt;/code&gt;, then we&amp;#39;ll find ourselves in the other version of the &lt;code&gt;product_of_equals_twenty_twenty/2&lt;/code&gt; function.&lt;/p&gt;&lt;p&gt;In which case, we break out of our recursive function calls, stop code execution, and return the 
product.&lt;/p&gt;&lt;p&gt;Let&amp;#39;s step through this code in detail.&lt;/p&gt;&lt;ol&gt;&lt;li&gt;The user calls &lt;code&gt;Accountant.product_of_equals_twenty_twenty([979, 1721, 366, 299, 675, 1456])&lt;/code&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;At this point, we hit this function,&lt;/p&gt;&lt;p&gt;where &lt;code&gt;list&lt;/code&gt; evaluates to &lt;code&gt;[979, 1721, 366, 299, 675, 1456]&lt;/code&gt;, &lt;code&gt;tail&lt;/code&gt; is set to &lt;code&gt;[1721, 366, 299, 675, 1456]&lt;/code&gt; and &lt;code&gt;product&lt;/code&gt; is &lt;code&gt;nil&lt;/code&gt;.&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;code&gt;get_two/1&lt;/code&gt; is called with the &lt;code&gt;list&lt;/code&gt;&lt;/li&gt;&lt;li&gt;Since &lt;code&gt;979&lt;/code&gt; is &lt;em&gt;not&lt;/em&gt; one of the numbers that sums to &lt;code&gt;2020&lt;/code&gt;, this call returns &lt;code&gt;nil&lt;/code&gt;.&lt;/li&gt;&lt;li&gt;So, we call &lt;code&gt;product_of_equals_twenty_twenty/2&lt;/code&gt; with &lt;code&gt;tail&lt;/code&gt; and &lt;code&gt;nil&lt;/code&gt;.&lt;/li&gt;&lt;li&gt;This brings us back to this same function body:&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;&lt;em&gt;This time around&lt;/em&gt;, &lt;code&gt;list&lt;/code&gt; is equal to &lt;code&gt;[1721, 366, 299, 675, 1456]&lt;/code&gt;, &lt;code&gt;tail&lt;/code&gt; is &lt;code&gt;[366, 299, 675, 1456]&lt;/code&gt; and &lt;code&gt;product&lt;/code&gt; is still &lt;code&gt;nil&lt;/code&gt;.&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;code&gt;get_two/1&lt;/code&gt; is called with the &lt;code&gt;list&lt;/code&gt;&lt;/li&gt;&lt;li&gt;Since &lt;code&gt;1721&lt;/code&gt; plus &lt;code&gt;299&lt;/code&gt; &lt;em&gt;does&lt;/em&gt; equal &lt;code&gt;2020&lt;/code&gt;, this will return the result of &lt;code&gt;1721 * 299&lt;/code&gt;, which is &lt;code&gt;514_579&lt;/code&gt;.&lt;/li&gt;&lt;li&gt;So, we call &lt;code&gt;product_of_equals_twenty_twenty/2&lt;/code&gt; with &lt;code&gt;tail&lt;/code&gt; and &lt;code&gt;514_579&lt;/code&gt;.&lt;/li&gt;&lt;li&gt;This 
brings us to this other function body:&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;So, we stop recursing, code execution is done, and the product is returned.&lt;/p&gt;&lt;p&gt;Phew! Seems like a fair amount of complexity, but when we put it all together, we see some very clean and concise code.&lt;/p&gt;&lt;p&gt;In just about a dozen lines of code, we&amp;#39;ve implemented an efficient, iteration-free solution--all thanks to the beauty of Elixir&amp;#39;s pattern matching. By pattern matching list elements, we were able to avoid expensive iterations. By using pattern matching in function heads, along with recursion, we were able to implement control flow that didn&amp;#39;t rely on &lt;code&gt;if&lt;/code&gt; conditions or &lt;code&gt;while&lt;/code&gt; loops.&lt;/p&gt;&lt;p&gt;Before we go, we&amp;#39;ll do just a bit more refactoring for readability with the help of custom guard clauses.&lt;/p&gt;&lt;h2&gt;Refactoring with Custom Guard Clauses&lt;/h2&gt;&lt;p&gt;Guard clauses allow us to apply more complex checks to pattern matching function heads. We&amp;#39;re using a number of guard clauses throughout our code, for example:&lt;/p&gt;&lt;p&gt;Here, we&amp;#39;ve implemented a version of the &lt;code&gt;get_two/1&lt;/code&gt; function that will execute &lt;em&gt;if&lt;/em&gt; the length of the provided &lt;code&gt;list&lt;/code&gt; argument is equal to &lt;code&gt;1&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;Guard clauses give us even more fine-grained control over which code to execute under which conditions. This is yet another way that we can handle complex control flow without verbose and hard-to-reason-about &lt;code&gt;if&lt;/code&gt; and nested &lt;code&gt;if&lt;/code&gt; conditions.&lt;/p&gt;&lt;p&gt;Only a certain set of expressions is allowed for use in guard clauses; see the docs &lt;a href=&quot;https://hexdocs.pm/elixir/guards.html?ref=thegreatcodeadventure.com#list-of-allowed-expressions&quot;&gt;here&lt;/a&gt;. 
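&lt;/p&gt;&lt;p&gt;As a preview of the refactor discussed next, a custom guard wrapping the sum check might be sketched like this (the &lt;code&gt;Accountant.Guards&lt;/code&gt; module name and &lt;code&gt;sums_to_2020&lt;/code&gt; guard name are assumptions, and &lt;code&gt;get_two&lt;/code&gt; is shown public so the snippet stands alone):&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;language-prettyprint&quot;&gt;defmodule Accountant.Guards do
  # Name the concept once, then reuse it in any function head.
  defguard sums_to_2020(a, b) when a + b == 2020
end

defmodule Accountant do
  import Accountant.Guards

  def get_two([first | [second | _rest]]) when sums_to_2020(first, second), do: first * second
  def get_two(list) when length(list) == 1, do: nil
  def get_two([first | [_second | rest]]), do: get_two([first | rest])
end&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;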
But, we can define custom guard clauses to wrap up more complex guard logic. The guard clause we wrote to check if two numbers sum to &lt;code&gt;2020&lt;/code&gt; is a great candidate for a custom guard clause. By wrapping up that logic in a custom guard clause, we can name the concept to make it easier to read and reason about.&lt;/p&gt;&lt;p&gt;To define a custom guard clause, we use the &lt;code&gt;defguard&lt;/code&gt; macro. Then we can call that guard clause in any function head, just as we would a built-in guard.&lt;/p&gt;&lt;p&gt;Custom guard clauses give us the ability to implement even complex control flow with pattern matching function heads, while keeping our code readable.&lt;/p&gt;&lt;p&gt;Putting it all together, the refactored module behaves exactly as before, but reads more clearly.&lt;/p&gt;&lt;h2&gt;Elixir Encourages Efficient and Eloquent Code&lt;/h2&gt;&lt;p&gt;By using Elixir&amp;#39;s pattern matching against our list of numbers, we were able to write efficient code that avoided the time complexity of expensive nested iterations.&lt;/p&gt;&lt;p&gt;By using that same pattern matching feature, paired with guard clauses and recursion, we were able to implement control flow in a way that is 🧼 clean 🧼 and 🗣 eloquent 🗣. The code speaks for itself by being readable and easy to reason about. No messy, nested &lt;code&gt;if&lt;/code&gt; conditions to deal with.&lt;/p&gt;&lt;p&gt;This Advent of Code challenge really shows off some of Elixir&amp;#39;s simplest, but most powerful features.&lt;/p&gt;</content:encoded>
</item>
<item>
<title>Testing GenServers with Erlang Trace</title>
<link>https://www.thegreatcodeadventure.com/testing-genservers-with-erlang-trace/</link>
<guid isPermaLink="false">1Xf8y4vjwZkD0tK2nF5paGD_AnJ_pZOgdHIZvw==</guid>
<pubDate>Tue, 24 Feb 2026 18:56:27 +0000</pubDate>
<description>Thanks to lots of Googling and some help from a friend, I learned you can test that a GenServer received a message with the help of Erlang tracing.</description>
<content:encoded>&lt;img src=&quot;https://www.thegreatcodeadventure.com/content/images/2020/08/mailbox.jpg&quot; alt=&quot;Testing GenServers with Erlang Trace&quot; title=&quot;&quot;/&gt;&lt;p&gt;When working on a messaging system with RabbitMQ for my upcoming workshop at ElixirConf, I ran into a common Elixir testing challenge--testing that a GenServer received a message. Thanks to lots of Googling and some help from a friend, I learned you &lt;em&gt;can &lt;/em&gt;test that a GenServer received a message with the help of Erlang tracing. &lt;/p&gt;&lt;h2&gt;The Problem&lt;/h2&gt;&lt;p&gt;ExUnit provides an &lt;a href=&quot;https://hexdocs.pm/ex_unit/ExUnit.Assertions.html?ref=thegreatcodeadventure.com#assert_receive/3&quot;&gt;&lt;code&gt;assert_receive/3&lt;/code&gt;&lt;/a&gt;, but that only allows you to check the mailbox of the current process, i.e. the process running the test. So, how can we check that a GenServer running in our application received a certain message? This is the issue I encountered when testing our RabbitMQ consumer GenServer.&lt;/p&gt;&lt;p&gt;We have a GenServer that runs when the application starts up and consumes messages from a RabbitMQ queue. How can we set an expectation that the consumer does in fact receive and process a given message sent by a publisher to that queue?&lt;/p&gt;&lt;p&gt;We can do exactly that with the help of ExUnit&amp;#39;s &lt;a href=&quot;https://hexdocs.pm/ex_unit/ExUnit.Callbacks.html?ref=thegreatcodeadventure.com#start_supervised/2&quot;&gt;&lt;code&gt;start_supervised/2&lt;/code&gt;&lt;/a&gt; callback and Erlang&amp;#39;s &lt;a href=&quot;http://erlang.org/doc/man/erlang.html?ref=thegreatcodeadventure.com#trace-3&quot;&gt;&lt;code&gt;trace/3&lt;/code&gt;&lt;/a&gt; function.&lt;/p&gt;&lt;h2&gt;Introducing Erlang Trace&lt;/h2&gt;&lt;p&gt;Erlang&amp;#39;s &lt;code&gt;trace/3&lt;/code&gt; function is pretty powerful. It allows us to attach a trace to a specified process. What does this mean? 
If we trace a given process, we are telling Erlang to send a message to a calling process (in our case, the test), whenever the traced process receives a message. Sneaky!&lt;/p&gt;&lt;p&gt;We&amp;#39;ll use ExUnit&amp;#39;s &lt;code&gt;start_supervised/2&lt;/code&gt; function to start our GenServer and capture its PID. Then, we&amp;#39;ll use &lt;code&gt;trace/3&lt;/code&gt; to ensure that whenever the GenServer PID receives a message, our test gets notified. With that in place, we can assert that the &lt;em&gt;test&lt;/em&gt; received a message from the trace, thereby testing that our GenServer received a message.&lt;/p&gt;&lt;p&gt;Let&amp;#39;s do it!&lt;/p&gt;&lt;h3&gt;Step 1: Start the GenServer with &lt;code&gt;start_supervised/2&lt;/code&gt;&lt;/h3&gt;&lt;p&gt;First, we&amp;#39;ll start up our GenServer and capture its PID.&lt;/p&gt;&lt;h3&gt;Step 2: Start the trace&lt;/h3&gt;&lt;p&gt;Next, we&amp;#39;ll use Erlang&amp;#39;s &lt;code&gt;trace/3&lt;/code&gt; function to trace the GenServer PID such that the test process receives a message whenever the GenServer PID does.&lt;/p&gt;&lt;h3&gt;Step 3: Set the Assertion&lt;/h3&gt;&lt;p&gt;Now we&amp;#39;re ready to enact the code that we expect to result in our GenServer receiving a message--publishing to a RabbitMQ queue.&lt;/p&gt;&lt;p&gt;Publishing a message &lt;em&gt;should&lt;/em&gt; cause our consumer GenServer to receive the &lt;code&gt;:basic_consume_ok&lt;/code&gt; message. 
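&lt;/p&gt;&lt;p&gt;Assembled, the three steps might look like this sketch (the &lt;code&gt;Consumer&lt;/code&gt; and &lt;code&gt;Publisher&lt;/code&gt; names are assumptions standing in for the post&amp;#39;s real RabbitMQ modules, and the exact message payload depends on your library):&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;language-prettyprint&quot;&gt;defmodule ConsumerTest do
  use ExUnit.Case

  test &quot;consumer receives the :basic_consume_ok message&quot; do
    # Step 1: start the GenServer under the test supervisor and grab its PID.
    {:ok, pid} = start_supervised(Consumer)

    # Step 2: forward a copy of every message the GenServer receives
    # to this test process.
    :erlang.trace(pid, true, [:receive])

    # Step 3: trigger the message and assert on the forwarded trace copy.
    Publisher.publish(&quot;hello&quot;)
    assert_receive {:trace, ^pid, :receive, {:basic_consume_ok, _meta}}
  end
end&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;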
This will in turn cause a trace message of the form &lt;code&gt;{:trace, pid, :receive, message}&lt;/code&gt; to be sent to the test process that started the trace.&lt;/p&gt;&lt;p&gt;So, we can use ExUnit&amp;#39;s &lt;a href=&quot;https://hexdocs.pm/ex_unit/ExUnit.Assertions.html?ref=thegreatcodeadventure.com#assert_receive/3&quot;&gt;&lt;code&gt;assert_receive/3&lt;/code&gt;&lt;/a&gt; function to assert that the test receives this message.&lt;/p&gt;&lt;p&gt;In this way, we &lt;em&gt;can&lt;/em&gt; in fact test that our GenServer received a certain message.&lt;/p&gt;&lt;h2&gt;Conclusion&lt;/h2&gt;&lt;p&gt;Erlang&amp;#39;s &lt;code&gt;trace/3&lt;/code&gt; function adds a powerful tool to our Elixir testing arsenal. It helps us solve a common testing problem--that of asserting that your GenServers received a certain message. Together with ExUnit&amp;#39;s &lt;code&gt;start_supervised/2&lt;/code&gt; callback and &lt;code&gt;assert_receive/3&lt;/code&gt; function, we were able to write exactly the test we needed for our RabbitMQ messaging system.&lt;/p&gt;&lt;h2&gt;Special Thanks&lt;/h2&gt;&lt;p&gt;Special thanks to &lt;a href=&quot;https://twitter.com/_StevenNunez?ref=thegreatcodeadventure.com&quot;&gt;Steven Nuñez&lt;/a&gt; who turned me on to Erlang&amp;#39;s &lt;code&gt;trace/3&lt;/code&gt; function and who generally gives me all of my ideas. Check out his &lt;a href=&quot;https://hostiledeveloper.com/2020/08/09/connection-pools-and-rabbitmq.html?ref=thegreatcodeadventure.com&quot;&gt;recent post on managing RabbitMQ connections in Elixir with ExRabbitPool&lt;/a&gt; and learn more about working with RabbitMQ and Elixir by signing up for our &lt;a href=&quot;https://2020.elixirconf.com/trainers/3/course?ref=thegreatcodeadventure.com&quot;&gt;ElixirConf 2020 workshop&lt;/a&gt;!&lt;/p&gt;</content:encoded>
</item>
</channel>
</rss>
