Below are the steps I used to get Mistral’s “Mixtral” 8x7B Mixture of Experts (MoE) model running locally on my MacBook (with its Apple M2 chip and 24 GB of memory). Here’s a great overview of the model for anyone interested in learning more. Short version: The Mistral “Mixtral” 8x7B 32k model, developed by Mistral AI, is …
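
For a concrete picture of what “running locally” can look like on Apple silicon, here is a minimal sketch using the llama-cpp-python bindings, which run GGUF-quantized models with Metal acceleration. This is an illustration under stated assumptions, not necessarily the exact steps this post goes on to describe: it assumes you have installed the package (`pip install llama-cpp-python`) and already downloaded a quantized Mixtral 8x7B GGUF file; the model path below is a hypothetical placeholder.

```python
# Minimal sketch: load a quantized Mixtral 8x7B GGUF and run one prompt.
# Assumes llama-cpp-python is installed and a GGUF file exists at the
# (hypothetical) path below -- substitute your own downloaded file.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/mixtral-8x7b-instruct.Q4_K_M.gguf",  # hypothetical local path
    n_ctx=4096,       # context window for this session; the model supports up to 32k
    n_gpu_layers=-1,  # offload all layers to the GPU via Metal on Apple silicon
)

output = llm("Explain mixture of experts in one sentence.", max_tokens=128)
print(output["choices"][0]["text"])
```

A 4-bit quantization (Q4_K_M) is a common choice here because the full-precision weights would not fit in 24 GB of unified memory.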