Similar to yesterday’s post on running Mistral’s 8x7B Mixture of Experts (MoE) model, I wanted to document the steps I took to run Mistral’s 7B-Instruct-v0.2 model on a Mac for anyone else interested in playing around with it. Unlike yesterday’s post, though, this 7B Instruct model’s inference speed is about 20 tokens/second on my M2 …| Matt Mazur
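The post itself walks through the actual steps; as a rough illustration of what running this model locally can look like, here is a minimal Python sketch using llama-cpp-python on Apple Silicon. The library choice, model filename, and quantization level are assumptions for illustration, not necessarily what the post uses.

```python
# pip install llama-cpp-python  (builds with Metal support on Apple Silicon)
from llama_cpp import Llama

# Path and Q4_K_M quantization are assumptions; any GGUF build of
# Mistral-7B-Instruct-v0.2 should work here.
llm = Llama(
    model_path="mistral-7b-instruct-v0.2.Q4_K_M.gguf",
    n_gpu_layers=-1,  # offload all layers to the GPU via Metal
)

# Mistral's instruct models expect the [INST] ... [/INST] prompt format.
out = llm("[INST] Write a haiku about autumn. [/INST]", max_tokens=128)
print(out["choices"][0]["text"])
```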
An alternative cause for the Great Stagnation: the cargo cult company| www.shyamsankar.com
Startup Databricks just released DBRX, the most powerful open source large language model yet—eclipsing Meta’s Llama 2.| WIRED
Inflection AI, an AI startup aiming to build more 'personal' AI assistants, has raised $1.3 billion at a $4 billion valuation.| TechCrunch
Sam Altman says the research strategy that birthed ChatGPT is played out and future strides in artificial intelligence will require new ideas.| WIRED