From the course: Fine-Tuning for LLMs: from Beginner to Advanced
Demo: Challenges in LoRA
- [Instructor] In this demo, we are going to do some experimentation to see how to tune the rank and the batch size when training LoRA on the T5 model. The start of this notebook is the same as before: we connect to the GPU, we run the same pip installs, we download the same dataset, and we do the same preprocessing of the data. However, because the batch size will change, the TensorFlow dataset has to be recreated inside each iteration of the for loop. Let me show you what I mean. Here we have the same LoRA implementation. First, I implemented a little method to count the parameters, trainable and non-trainable, so you can see what we are doing. Then we're going to try ranks of 1, 4, and 16, and batch sizes of 8, 64, and 128, okay? And for each combination of the two, we're going to apply LoRA only on the final layer. So…
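The notebook itself is not reproduced in this preview, but the experiment loop the instructor describes has roughly the shape of the sketch below. This is a minimal, self-contained approximation, not the course's code: the toy Keras model, the LoRADense layer, and the random data are illustrative assumptions standing in for the T5 model, the course's LoRA implementation, and the preprocessed dataset. Only the structure mirrors the demo: counting trainable vs. non-trainable parameters, sweeping ranks 1, 4, 16 against batch sizes 8, 64, 128, applying the low-rank adapter only on the final layer, and rebuilding the tf.data dataset inside the loop because the batch size is baked into the pipeline.

```python
# Minimal sketch of the rank/batch-size sweep (illustrative, not the notebook's code).
import numpy as np
import tensorflow as tf

def count_parameters(model):
    """Return (trainable, non_trainable) parameter counts for a Keras model."""
    trainable = sum(int(np.prod(v.shape)) for v in model.trainable_variables)
    non_trainable = sum(int(np.prod(v.shape)) for v in model.non_trainable_variables)
    return trainable, non_trainable

class LoRADense(tf.keras.layers.Layer):
    """A frozen dense layer plus a trainable low-rank update: y = x(W + A B)."""
    def __init__(self, units, rank, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.rank = rank

    def build(self, input_shape):
        d = int(input_shape[-1])
        # Frozen "pretrained" weight
        self.w = self.add_weight(shape=(d, self.units), trainable=False, name="w")
        # Trainable low-rank factors
        self.a = self.add_weight(shape=(d, self.rank), trainable=True, name="lora_a")
        self.b = self.add_weight(shape=(self.rank, self.units), trainable=True,
                                 initializer="zeros", name="lora_b")

    def call(self, x):
        return x @ self.w + x @ self.a @ self.b

def build_lora_model(rank):
    """Toy stand-in for the T5 model: LoRA is applied only on the final layer."""
    inputs = tf.keras.Input(shape=(32,))
    h = tf.keras.layers.Dense(64, trainable=False)(inputs)  # frozen backbone
    outputs = LoRADense(10, rank=rank)(h)                    # LoRA on the final layer
    return tf.keras.Model(inputs, outputs)

# Toy data standing in for the preprocessed dataset from earlier in the course.
x = np.random.rand(512, 32).astype("float32")
y = np.random.randint(0, 10, size=(512,))

for rank in [1, 4, 16]:
    for batch_size in [8, 64, 128]:
        # The tf.data dataset is rebuilt inside the loop because the batch size changes.
        train_ds = tf.data.Dataset.from_tensor_slices((x, y)).shuffle(512).batch(batch_size)

        model = build_lora_model(rank)
        trainable, non_trainable = count_parameters(model)
        print(f"rank={rank} batch={batch_size}: "
              f"{trainable:,} trainable / {non_trainable:,} non-trainable params")

        model.compile(optimizer="adam",
                      loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
        model.fit(train_ds, epochs=1, verbose=0)
```

In the real notebook the model and adapter come from the course's LoRA implementation on FLAN-T5, but the design point is the same: the frozen weights dominate the non-trainable count, the adapter's parameter count grows linearly with the rank, and the dataset creation must sit inside the loop so each run uses the intended batch size.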
Contents
- Introduction to PEFT (3m 49s)
- LoRA adapters (6m 57s)
- LoRA in depth: Technical analysis (5m 3s)
- Demo: LoRA fine-tuning on FLAN-T5 (14m 8s)
- Implementing LoRA in LLMs (5m 6s)
- Demo: Challenges in LoRA (6m 28s)
- Solution: Fine-tuning FLAN-T5 for translation (7m 1s)