It’s coming along. The development branch can already interact with the buttons, and it detects input events from the knobs and the touchscreen. I’m still working out the internal code structure of actions, because each action will need to offer a different interaction mode depending on whether it’s driven from the touchscreen or from a physical button.
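
As a rough sketch of the kind of structure I mean (not the actual code; all the names here are hypothetical, and Rust is just for illustration), one option is to split the per-source behavior into separate traits and let each action advertise which ones it supports:

```rust
// Hypothetical sketch: one way an action could expose different
// interaction modes depending on the input source.

/// How an action behaves when driven from the touchscreen.
trait TouchInteraction {
    fn on_tap(&mut self);
    fn on_drag(&mut self, dx: i32, dy: i32);
}

/// How the same action behaves when bound to a button or knob.
trait ButtonInteraction {
    fn on_press(&mut self);
    fn on_turn(&mut self, delta: i32); // knob rotation steps
}

/// An action opts into whichever modes it supports; the input layer
/// asks for the one matching the event's source.
trait Action {
    fn as_touch(&mut self) -> Option<&mut dyn TouchInteraction> { None }
    fn as_button(&mut self) -> Option<&mut dyn ButtonInteraction> { None }
}

/// Example: a volume action usable from both a knob and the screen.
struct Volume { level: i32 }

impl TouchInteraction for Volume {
    fn on_tap(&mut self) { self.level = 0; }                   // tap to mute
    fn on_drag(&mut self, _dx: i32, dy: i32) { self.level -= dy; }
}

impl ButtonInteraction for Volume {
    fn on_press(&mut self) { self.level = 0; }
    fn on_turn(&mut self, delta: i32) { self.level += delta; }
}

impl Action for Volume {
    fn as_touch(&mut self) -> Option<&mut dyn TouchInteraction> { Some(self) }
    fn as_button(&mut self) -> Option<&mut dyn ButtonInteraction> { Some(self) }
}
```

The appeal of something like this is that the event-dispatch code stays generic: it just asks an `Action` for the interaction mode matching the hardware the event came from, and actions that don’t support a given source simply return `None`.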