
# LLM Fine-Tuning

## Introduction

This document summarizes my learnings and experiences with fine-tuning Large Language Models (LLMs). The goal is a concise resource I can revisit, one that makes fine-tuning techniques for various NLP tasks easier to understand and implement.

Currently, the focus is on transformer-based models like GPT and BERT. However, I plan to expand this to include other architectures and advanced techniques in the future.

## Frameworks

I primarily use the following frameworks for fine-tuning LLMs:

Detailed step-by-step guides for fine-tuning LLMs using these frameworks are available in the following notebooks:
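For orientation before diving into the notebooks, here is a minimal sketch of what a full fine-tuning run with Hugging Face Transformers can look like. The model (`bert-base-uncased`), dataset (IMDB), and hyperparameters are illustrative assumptions, not the exact setup from the notebooks:

```python
# A minimal full fine-tuning sketch with Hugging Face Transformers.
# Assumptions (not from this repo): BERT on the IMDB sentiment dataset,
# with illustrative hyperparameters.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "bert-base-uncased"  # assumed checkpoint; swap in your own
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tokenize the raw text; the Trainer maps the dataset's "label" column to "labels".
dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=1,
    per_device_train_batch_size=8,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    # Small subsets keep the sketch quick to run end to end.
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=tokenized["test"].shuffle(seed=42).select(range(500)),
)
trainer.train()
```

The `Trainer` API handles the training loop, batching, and checkpointing. The same structure applies to larger models, though at that scale memory-saving techniques such as LoRA (below) become important.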

## LoRA (Low-Rank Adaptation)

For lightweight, parameter-efficient fine-tuning, I explore LoRA techniques. See the LoRA Guide for how to implement LoRA with Hugging Face models.
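LoRA freezes the base model's weights and learns a low-rank update ΔW = BA, where B and A have rank r much smaller than the weight matrix's dimensions, so only a small fraction of parameters is trained. Below is a minimal sketch of how this looks with the Hugging Face `peft` library; GPT-2 and the hyperparameters are illustrative assumptions, not taken from the guide:

```python
# A minimal LoRA sketch using Hugging Face PEFT on top of Transformers.
# Assumptions (not from the guide): GPT-2 as the base model and
# illustrative LoRA hyperparameters.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("gpt2")  # assumed checkpoint

config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                        # rank of the low-rank matrices B and A
    lora_alpha=16,              # scaling factor applied to the update B @ A
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's fused QKV attention projection
)

# Wrap the frozen base model; only the small adapter matrices are trainable.
model = get_peft_model(base_model, config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```

The wrapped model can then be trained with the same `Trainer` loop as above; since gradients flow only through the adapter matrices, memory use and checkpoint size drop substantially.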


This repository is continuously updated to include new learnings and frameworks as I progress in my understanding of LLM fine-tuning. Feedback and contributions are welcome!