Omar Sanseviero Personal Website | hackerllama
This series aims to demystify embeddings and show you how to use them in your projects. The first blog post taught you how to use and scale up open-source embedding models, how to pick an existing model, and covered current evaluation methods and the state of the ecosystem. This second blog post dives deeper into embeddings and explains the differences between bi-encoders and cross-encoders. Then, we'll dive into retrieving and re-ranking: we'll build a tool to answer questions about 400 AI papers, as in the sketch below.
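As a minimal sketch of the retrieve-and-rerank idea, here is how a bi-encoder and a cross-encoder can be combined with the sentence-transformers library. The model names and the tiny corpus are illustrative placeholders, not the exact setup from the post.

```python
# Retrieve with a bi-encoder, then re-rank the candidates with a cross-encoder.
from sentence_transformers import SentenceTransformer, CrossEncoder, util

corpus = [
    "Attention Is All You Need introduces the transformer architecture.",
    "LoRA fine-tunes large models by learning low-rank weight updates.",
    "CLIP learns joint image-text embeddings with contrastive training.",
]
query = "How does parameter-efficient fine-tuning work?"

# Bi-encoder: embed query and documents independently, retrieve by cosine similarity.
bi_encoder = SentenceTransformer("all-MiniLM-L6-v2")
corpus_emb = bi_encoder.encode(corpus, convert_to_tensor=True)
query_emb = bi_encoder.encode(query, convert_to_tensor=True)
hits = util.semantic_search(query_emb, corpus_emb, top_k=3)[0]

# Cross-encoder: score each (query, candidate) pair jointly and re-rank the hits.
cross_encoder = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
pairs = [(query, corpus[hit["corpus_id"]]) for hit in hits]
scores = cross_encoder.predict(pairs)

for score, (_, passage) in sorted(zip(scores, pairs), reverse=True, key=lambda x: x[0]):
    print(f"{score:.3f}  {passage}")
```

The bi-encoder is cheap enough to embed the whole corpus once, while the cross-encoder only sees the handful of retrieved candidates, which is what makes the two-stage pipeline practical.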
Here are some terms that are useful to know when joining the Local LLM community.
LocalLlama: A Reddit community of practitioners, researchers, and hackers doing all kinds of crazy things with ML models.
LLM: A Large Language Model. Usually a transformer-based model with a lot of parameters, billions or even trillions.
Transformer: A type of neural network architecture that is very good at language tasks. It is the basis for most LLMs.
GPT: A type of transformer that is trained to predict the next token…
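To make the "predict the next token" part of the GPT definition concrete, here is a tiny sketch using the transformers library; GPT-2 is used only because it is a small, widely available GPT-style model.

```python
# Next-token prediction in action: a GPT-style model continues a prompt token by token.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator("Local LLMs let you run models", max_new_tokens=10, num_return_sequences=1)
print(out[0]["generated_text"])
```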
This series aims to demystify embeddings and show you how to use them in your projects. This first blog post will teach you how to use and scale up open-source embedding models. We'll look into the criteria for picking an existing model, current evaluation methods, and the state of the ecosystem. We'll look into three exciting applications (see the sketch after this list):
- Finding the most similar Quora or StackOverflow questions
- Given a huge dataset, finding the most similar items
- Running search embedding models directly in…
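As a minimal sketch of the first application, here is how an open-source embedding model can find the most similar question with cosine similarity. The model name and the example questions are placeholders chosen for illustration.

```python
# Embed a set of questions and find the one closest to a new question.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

questions = [
    "How do I learn Python quickly?",
    "What is the fastest way to pick up Python?",
    "How can I improve my chess openings?",
]
new_question = "What's a good way to start learning Python?"

# Embed everything once, then compare with cosine similarity.
question_emb = model.encode(questions, convert_to_tensor=True)
new_emb = model.encode(new_question, convert_to_tensor=True)
scores = util.cos_sim(new_emb, question_emb)[0]

best = scores.argmax().item()
print(f"Most similar: {questions[best]} (score={scores[best].item():.3f})")
```

The same pattern scales to the "huge dataset" case: the corpus embeddings are computed once and stored, and only the new query needs to be embedded at search time.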