How to fine-tune AI models for undergraduates and beginners (with free GPU resources).
This repository serves as a record of my learning process and the insights I've gained while studying the fine-tuning of AI models. The aim is to create a comprehensive resource that is easy to revisit and can help others understand and implement fine-tuning techniques. In particular, it aims to help users without a local GPU fine-tune AI models effectively using free GPU resources.
In order to train AI models effectively, a GPU is essential. However, not everyone has access to a local GPU or the resources to rent one.
This section provides an overview of free GPU resources available on platforms like Kaggle and Google Colab, along with tips on how to manage these resources efficiently.
The two most popular platforms that offer free GPU resources are Kaggle and Google Colab.
- You can check my Kaggle file or learn from the original guide.
- You can check my Google Colab file or learn from the original guide.
Although these platforms offer free GPU resources, they come with limitations (time, VRAM, etc.).
It's important to manage these resources efficiently to avoid interruptions during training.
Please check the files below for detailed information.
TL;DR:
- Kaggle:
- 30 hours weekly.
- Fast data/weight uploads and workflows.
- 1x 16 GB VRAM GPU or 2x 15 GB VRAM GPUs (Tesla P100 or 2x T4).
- Colab:
- About 3 hours daily.
- Many example scripts available.
- 1x 16 GB VRAM GPU (T4).
- Effective mix:
- Load data on both platforms.
- Find available online scripts (usually written for Colab).
- Stabilize/debug/estimate training time on Colab.
- Convert the scripts to Kaggle.
- Fine-tune models on Kaggle.
- For large models: Run training per epoch and take advantage of Kaggle's output workflows (MUST READ); see the checkpointing sketch after this list.
- For small models: Use Colab to experiment (hyperparameter tuning, etc.) and fine-tune in one go on Kaggle.
- Download the weights from Kaggle, upload them to Colab, and run inference (optional).
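Running per epoch hinges on saving a checkpoint at the end of every epoch and reloading it at the start of the next session. Below is a minimal PyTorch sketch of that pattern; the model, optimizer, data loader, and loss function are placeholders you would supply from your own script, and `/kaggle/working` is Kaggle's output directory (swap in any path on Colab).

```python
import os
import torch

CKPT_PATH = "/kaggle/working/checkpoint.pt"  # files here are kept as notebook output on Kaggle

def train_with_checkpoints(model, optimizer, train_loader, loss_fn, epochs, device="cuda"):
    """Fine-tune `model`, saving a resumable checkpoint after every epoch."""
    start_epoch = 0
    # Resume if a checkpoint from a previous session is present
    # (e.g. re-attached as an input dataset and copied to /kaggle/working).
    if os.path.exists(CKPT_PATH):
        state = torch.load(CKPT_PATH, map_location=device)
        model.load_state_dict(state["model"])
        optimizer.load_state_dict(state["optimizer"])
        start_epoch = state["epoch"] + 1

    model.to(device)
    for epoch in range(start_epoch, epochs):
        model.train()
        for inputs, targets in train_loader:
            inputs, targets = inputs.to(device), targets.to(device)
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), targets)
            loss.backward()
            optimizer.step()
        # Save after every epoch so a session timeout costs at most one epoch of work.
        torch.save(
            {"model": model.state_dict(), "optimizer": optimizer.state_dict(), "epoch": epoch},
            CKPT_PATH,
        )
```

After the session ends, the saved checkpoint shows up in the notebook's output, and you can attach it as an input to the next run to continue training.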
Another problem besides resource management is that there are not many scripts available for fine-tuning AI models. (In my experience, you can find many scripts for inference, but not for fine-tuning.)
This section provides fine-tuning scripts for several AI tasks.
Please check the Computer vision script folder for detailed information.
TL;DR: I provide scripts for fine-tuning with PyTorch and TensorFlow, and YOLO with Ultralytics.
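For a quick taste of the Ultralytics workflow, here is a minimal fine-tuning sketch (not one of the repo scripts). The `yolov8n.pt` weights and `coco128.yaml` dataset config are stock examples that Ultralytics downloads automatically; replace the YAML with one describing your own dataset.

```python
from ultralytics import YOLO

# Start from pretrained YOLOv8-nano weights (downloaded automatically if missing).
model = YOLO("yolov8n.pt")

# Fine-tune on the dataset described by a YAML file; coco128 is a tiny demo dataset.
model.train(data="coco128.yaml", epochs=10, imgsz=640, batch=16)

# Evaluate the fine-tuned weights on the validation split.
metrics = model.val()
print(metrics)
```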
Please check the LLM script folder for detailed information.
TL;DR: I provide scripts for fine-tuning with Hugging Face.
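For context, a typical Hugging Face fine-tuning run with the Trainer API looks roughly like the sketch below. The model name, dataset, and subset sizes are illustrative choices meant to fit a free 16 GB GPU, not part of the repo scripts.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"   # small model that fits comfortably in 16 GB VRAM
dataset = load_dataset("imdb")           # example text-classification dataset
tokenizer = AutoTokenizer.from_pretrained(model_name)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

args = TrainingArguments(
    output_dir="/kaggle/working/checkpoints",  # Kaggle output dir; use "./checkpoints" on Colab
    per_device_train_batch_size=16,
    num_train_epochs=1,
    save_strategy="epoch",                     # keep a checkpoint per epoch
    logging_steps=100,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),  # small subset for a quick run
    eval_dataset=tokenized["test"].select(range(500)),
)
trainer.train()
```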
Besides the above sections, I also provide other useful information for fine-tuning AI models that I have learned from reading cool papers, books, and blogs.
Please check the Others folder for detailed information.
This repository will continue to grow as I learn more about fine-tuning AI models. Feel free to explore, learn, and contribute! 🚀