What are the different ways to use and finetune pretrained large language models (LLMs)? The most common ways to use and finetune pretrained LLMs include a feature-based approach, in-context prompting, and updating a subset of the model parameters.| magazine.sebastianraschka.com
Methods and Strategies for Building and Refining Reasoning Models| magazine.sebastianraschka.com
Model Merging, Mixtures of Experts, and Towards Smaller LLMs| magazine.sebastianraschka.com
Modern policy gradient algorithms and their application to language models...| cameronrwolfe.substack.com
Understanding the problem formulation and basic algorithms for RL...| cameronrwolfe.substack.com
Things I Learned From Hundreds of Experiments| magazine.sebastianraschka.com
LoRA is one of the most widely used, parameter-efficient finetuning techniques for training custom LLMs. From saving memory with QLoRA to selecting the optimal LoRA settings, this article provides practical insights for those interested in applying it.| Lightning AI
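The core idea behind LoRA's parameter efficiency can be sketched in a few lines: rather than updating a full weight matrix, you train two small low-rank factors whose product forms the weight update. The sketch below is a minimal illustration in NumPy, not the article's or any library's implementation; all sizes (`d_in`, `d_out`, `r`, `alpha`) are hypothetical.

```python
import numpy as np

# Minimal sketch of the LoRA idea (illustrative, not a library implementation):
# instead of updating a full weight matrix W (d_out x d_in), train two small
# matrices A (r x d_in) and B (d_out x r) with rank r << min(d_in, d_out),
# so the effective weight becomes W + (alpha / r) * B @ A.

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 8, 16  # hypothetical sizes

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                # zero-init, so the update starts at 0

def lora_forward(x):
    # Frozen base path plus the scaled low-rank update.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B initialized to zero, the LoRA model exactly matches the base model.
assert np.allclose(lora_forward(x), W @ x)

# Trainable parameters: r * (d_in + d_out) for LoRA vs d_in * d_out for full
# finetuning -- here 1024 vs 4096, and the gap grows with layer size.
print(r * (d_in + d_out), "vs", d_in * d_out)
```

The zero initialization of `B` is the standard choice because it makes the adapted model start out identical to the pretrained one; training then only has to learn the task-specific delta.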