Fine-tuning large language models (LLMs) on specialized text corpora has become a crucial step in improving their performance on scientific tasks. This paper investigates fine-tuning approaches for LLMs applied to scientific text. We analyze the impact of factors such as the training procedure, model design, and optimization techniques on