How do the sizes of local LLMs compare to the size of offline Wikipedia downloads?| evanhahn.com
AI Agents Crash Course—Part 14 (with implementation).| Daily Dose of Data Science
AI Agents Crash Course—Part 13 (with implementation).| Daily Dose of Data Science
AI Agents Crash Course—Part 4 (with implementation).| Daily Dose of Data Science
If you like the idea of AI but don't want to share your content or information with a third party, you can always install an LLM on your Apple desktop or laptop. You'll be surprised at how easy it is.| ZDNET
Overview: This guide will walk you through creating a simple chat application in .NET that interacts with a locally hosted AI model. Using the Microsoft.Extensions.AI library, you can communicate with an AI model without relying on cloud services. This provides better privacy, reduced latency, and cost efficiency. Prerequisites: Install .NET 8.0 or a later version. […]| Collabnix
AI Agents Crash Course—Part 2 (with implementation).| Daily Dose of Data Science
This blog demonstrates how to run DeepSeek-R1 for text generation with Ollama, a tool for running LLMs locally. These instructions align with the usage described on the DeepSeek-R1 page at ollama.com. 1. Install Ollama: Ollama currently supports macOS (both Intel and Apple Silicon). Install it using Homebrew: brew install ollama. Confirm the installation by checking […]| Collabnix
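Once the Ollama server is running and a model such as deepseek-r1 has been pulled, Ollama exposes an HTTP API on localhost:11434. A minimal Python sketch, assuming the default port and that the deepseek-r1 tag is available locally; the build_generate_request helper is just for illustration, not part of Ollama itself:

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot text generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> dict:
    # Payload shape for Ollama's /api/generate endpoint;
    # stream=False asks for a single JSON response instead of a stream.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    payload = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# With the server running, a call would look like:
#   print(generate("deepseek-r1", "Why is the sky blue?"))
```

Nothing here is cloud-dependent: the request never leaves the machine, which is the privacy argument these local-LLM guides keep making.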
This article will teach you how to use the Spring AI project to build applications based on different chat models.| Piotr's TechBlog
A deep dive into key components of multimodal systems—CLIP embeddings, multimodal prompting, and tool calling.| Daily Dose of Data Science
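CLIP-style models embed images and text into a single shared vector space, so cross-modal retrieval reduces to nearest-neighbour search by cosine similarity. A toy Python sketch with made-up 3-dimensional vectors (real CLIP embeddings are 512-dimensional or larger; the values below are illustrative only):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot product divided by the product of
    # the two magnitudes; 1.0 means the vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Made-up "embeddings" standing in for CLIP outputs.
image_emb = [0.9, 0.1, 0.0]  # e.g. a photo of a dog
captions = {
    "a photo of a dog": [0.8, 0.2, 0.1],
    "a photo of a car": [0.0, 0.1, 0.9],
}

# The caption whose embedding is closest to the image embedding.
best = max(captions, key=lambda c: cosine(image_emb, captions[c]))
```

The same distance computation underpins image search, zero-shot classification, and the retrieval step in multimodal RAG pipelines.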
Brave Nightly sets a new standard for AI privacy and customization. Connect your preferred AI model, local or remote, directly to Leo in your browser.| Brave
A No BS guide to getting started developing with LLMs. We’ll cover the jargon and terminology and get a model running locally. We’ll also cover the different model formats and how to convert and quantize a model.| GDCorner
Running AI locally on Linux because open source empowers us to do so.| It's FOSS
My experimentation with LLMs on days 1 and 2 of Advent of Code was a bit frustrating. For the day 3 puzzle, I decided to switch models. Previously I had been using the codellama:13b model but wasn’t really happy: I kept arguing with it and it just frustrated me. So let’s try some others. codellama:34b: So I thought maybe the model just wasn’t big enough, so off we went| beny23.github.io
So it is that time of the year again. Advent of Code is back. Yay! This means I get to look at a new language again. This time, why not Kotlin? But as an extra challenge, I thought: why not see how the vaunted LLMs would help? Is AI really the accelerator that would elevate a mere developer to a rockstar ninja (whatever that is)? I have to add that I am a bit of an AI sceptic and keep saying that| beny23.github.io