Phi-4 Mini and Phi-4 Multimodal are Microsoft's latest small language models for chat and multimodal instruction following.
In this article, we summarize the Phi-3 technical report, including the architecture, the dataset curation strategy, benchmarks, and Phi-3's vision capabilities.
Qwen2 VL is a vision-language model that pairs the Qwen2 language decoder with a DFN Vision Transformer as the image encoder.
Unsloth provides memory-efficient and fast inference and training of LLMs, with support for several models such as Meta Llama, Google Gemma, and Phi.
Fine-tuning the Phi 1.5 model on the BBC News Summary dataset for text summarization using Hugging Face Transformers.
Phi 1.5 is a 1.3-billion-parameter LLM from Microsoft that is capable of coding and common-sense reasoning, and is adept at chain-of-thought reasoning.
An instruction-following Jupyter Notebook interface built with a QLoRA fine-tuned Phi 1.5 model and the Hugging Face Transformers library.