LLM Domain Expertise: Fine-Tuning, Merging & Emergence
Explore how Large Language Models (LLMs) such as Llama and Mistral can be adapted to specialized fields like materials science. Learn about fine-tuning techniques, including continued pre-training (CPT), supervised fine-tuning (SFT), and preference optimization with DPO or ORPO, and how SLERP (spherical linear interpolation) model merging can strengthen domain expertise and unlock emergent capabilities, particularly in larger models. Experimental findings illustrate the impact of model scale.
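To make the merging idea concrete, below is a minimal, illustrative sketch of SLERP applied to two flattened weight tensors. It is not the article's actual merge configuration: the `slerp` helper, tensor names, and interpolation factor `t` are assumptions for demonstration, and real merges are typically done layer by layer with dedicated tooling rather than on random arrays.

```python
import numpy as np

def slerp(w1: np.ndarray, w2: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors.

    Interpolates along the great-circle arc between w1 and w2, preserving
    the angular relationship between the two parameter sets instead of
    averaging them linearly.
    """
    v1 = w1 / (np.linalg.norm(w1) + eps)
    v2 = w2 / (np.linalg.norm(w2) + eps)
    dot = np.clip(np.dot(v1, v2), -1.0, 1.0)
    omega = np.arccos(dot)            # angle between the two weight directions
    if omega < eps:                   # nearly parallel: fall back to plain LERP
        return (1.0 - t) * w1 + t * w2
    sin_omega = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / sin_omega) * w1 \
         + (np.sin(t * omega) / sin_omega) * w2

# Hypothetical usage: blend a base model's layer with a domain-tuned variant.
base_layer = np.random.randn(4096)    # stand-in for a flattened weight tensor
tuned_layer = np.random.randn(4096)
merged_layer = slerp(base_layer, tuned_layer, t=0.5)
```

In practice, an intermediate interpolation factor (e.g. t around 0.5) trades off between the base model's general ability and the fine-tuned model's domain knowledge; the sketch only shows the core interpolation step, not a full model merge.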