- In our last post we explored operators and kernels in TensorFlow Lite, and how the ability to swap out kernels based on available hardware capabilities can lead to dramatic performance improvements during inference. We drew an analogy between operators and instruction set architectures (ISAs), and between kernels and the hardware implementations of instructions in a processor. Just as in traditional computer programs, the sequence of instructions in a model needs to be encoded and distribu… (danielmangum.com)
- The buzz around “edge AI”, which means something slightly different to almost everyone you talk to, is well past fever pitch. Regardless of what edge AI means to you, the common thread is that the hardware performing inference is constrained in one or more dimensions, whether compute, memory, or network bandwidth. Perhaps the most constrained of these platforms are microcontrollers. I have found that, while there is much discourse around “ru… (danielmangum.com)
- LiteRT (short for Lite Runtime), formerly known as TensorFlow Lite, is Google's… (Google AI for Developers)
- Affordable, pre-certified cellular IoT development kit for LTE-M, NB-IoT, GNSS, and Bluetooth LE. Perfect for evaluation and development on the nRF9160 SiP. (www.nordicsemi.com)
- Getting Started Guide (docs.zephyrproject.org)