fast.ai is joining Answer.AI, and we’re announcing a new kind of educational experience, ‘How To Solve It With Code’ | Answer.AI
Earlier this year, I wrote Your AI product needs evals. Many of you asked, “How do I get started with LLM-as-a-judge?” This guide shares what I’ve learned after helping over 30 companies set up their evaluation systems. The problem: AI teams are drowning in data. Ever spend weeks building an AI system, only to realize you have no idea if it’s actually working? You’re not alone. I’ve noticed teams repeat the same mistakes when using LLMs to evaluate AI outputs… | Hamel's Blog
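To make “LLM-as-a-judge” concrete, here is a minimal sketch of what a judge call can look like: a single model call that returns a pass/fail verdict plus a short critique for one output. It assumes the official OpenAI Python client and an API key in the environment; the prompt, model choice, and criteria are illustrative placeholders, not the guide’s own setup.

```python
# Minimal LLM-as-a-judge sketch: a binary pass/fail critique of one model output.
# Assumes the OpenAI Python client (v1.x) and OPENAI_API_KEY set in the environment;
# the judge prompt and criteria below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = """You are reviewing an AI assistant's answer.

Question: {question}
Answer: {answer}

Does the answer correctly and completely address the question?
Reply with exactly one word, PASS or FAIL, followed by a one-sentence critique."""


def judge(question: str, answer: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable judge model works here
        temperature=0,        # keep the judge as deterministic as possible
        messages=[{
            "role": "user",
            "content": JUDGE_PROMPT.format(question=question, answer=answer),
        }],
    )
    return response.choices[0].message.content


print(judge("What is 2 + 2?", "2 + 2 equals 4."))
```

In practice you would run a judge like this over a labelled sample of real traffic and check its verdicts against a human reviewer before trusting it at scale.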
TL;DR: We’ve released rerankers (a while ago now, with no major issues reported since, which is what warrants this blog post!), a low-dependency Python library that provides a unified interface to all commonly used re-ranking models. It’s available on GitHub. In this post, we quickly discuss: why two-stage pipelines are so popular, and how they’re born of various trade-offs; the various methods now commonly used in re-ranking; and rerankers itself, its design philosophy, and how to use it. | Answer.AI
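As a taste of the library, here is a short usage sketch of the two-stage idea: retrieve a handful of candidate documents however you like, then let a re-ranker order them by relevance to the query. It follows the pattern documented in the project’s README (Reranker, .rank(), .top_k()); check the GitHub repo for the current API and supported model names.

```python
# Sketch of the second stage of a two-stage pipeline using rerankers:
# candidates come from any first-stage retriever; the re-ranker re-orders them.
from rerankers import Reranker

# Load a default cross-encoder re-ranker (other model types and names are supported).
ranker = Reranker("cross-encoder")

query = "How do I install the library?"
docs = [
    "rerankers can be installed from PyPI with pip.",
    "Two-stage pipelines pair a fast retriever with a slower, more accurate re-ranker.",
    "The weather today is sunny.",
]

# Score every candidate against the query; results come back best-first.
results = ranker.rank(query=query, docs=docs, doc_ids=[0, 1, 2])
for result in results.top_k(2):
    print(result.score, result.document.text)
```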
We propose that those interested in providing LLM-friendly content add a /llms.txt file to their site. This is a markdown file that provides brief background information and guidance, along with links to markdown files providing more detailed information. | Answer.AI
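For concreteness, here is a small illustrative example of what such a file could look like, following the structure the proposal describes (an H1 title, a blockquote summary, and H2 sections of links); the project name and URLs are made up.

```markdown
# Example Project

> Example Project is a small library for parsing widgets. This file points LLMs
> at concise markdown versions of the most useful documentation.

## Docs

- [Quick start](https://example.com/docs/quickstart.md): installation and a first example
- [API reference](https://example.com/docs/api.md): all public functions and classes

## Optional

- [Changelog](https://example.com/docs/changelog.md): release history, safe to skip when context is tight
```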
Quickly understand inscrutable LLM frameworks by intercepting API calls. | hamel.dev
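One generic way to do this kind of interception in Python, if the framework uses httpx under the hood (as many LLM clients do), is to monkeypatch the HTTP client and log every outgoing request. This is a quick debugging hack rather than the post’s exact setup; a proxy tool such as mitmproxy achieves the same thing without touching code.

```python
# Log every request an LLM framework sends, by wrapping httpx.Client.send.
# Quick debugging hack: covers libraries that make synchronous httpx calls.
import httpx

_original_send = httpx.Client.send


def logging_send(self, request, **kwargs):
    print(f"--> {request.method} {request.url}")
    try:
        body = request.content.decode("utf-8", errors="replace")
    except httpx.RequestNotRead:
        body = "<streaming body not read>"
    if body:
        print(body[:2000])  # truncate very large payloads
    return _original_send(self, request, **kwargs)


httpx.Client.send = logging_send

# Any framework call that goes through httpx from here on prints the exact
# prompts and parameters being sent to the model API.
```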
How to construct domain-specific LLM evaluation systems. | hamel.dev
The top advice I would give my younger self would be to start blogging sooner. Here are some reasons to blog… | Medium