From the course: Fine-Tuning for LLMs: from Beginner to Advanced


Demo: Challenges in LoRA

- [Instructor] In this demo, we are going to do some experimentation to see how to tune the rank and the batch size when training LoRA on the T5 model. The start of this notebook is the same as before: we connect to the GPU, we run the same pip install, we download the same dataset, and we do the same preprocessing of the data. However, because the batch size will change, the TensorFlow dataset has to be created inside each iteration of the for loop. Let me show you what I mean. Here we have the same LoRA implementation. And here I implemented a little method to count the parameters, the trainable and the non-trainable, so you can see what we are doing. Then we are going to try the ranks 1, 4, and 16, and the batch sizes 8, 64, and 128, okay? And for each combination of the two, we are going to apply LoRA only on the final layer. So…
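The sketch below illustrates the structure of the sweep the instructor describes: looping over every rank/batch-size combination, rebuilding the tf.data pipeline inside the loop because the batch size changes, applying a low-rank (LoRA) update only to the final layer, and counting trainable versus non-trainable parameters. It is a minimal, self-contained stand-in, not the course notebook itself: the toy data, the small frozen backbone, and the names `LoRADense` and `count_parameters` are illustrative assumptions rather than the actual T5 code.

```python
import itertools
import numpy as np
import tensorflow as tf


class LoRADense(tf.keras.layers.Layer):
    """A dense layer whose frozen weights receive a trainable low-rank update."""

    def __init__(self, units, rank, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.rank = rank

    def build(self, input_shape):
        in_dim = int(input_shape[-1])
        # Frozen base weights, standing in for the pre-trained layer.
        self.w = self.add_weight(
            shape=(in_dim, self.units), initializer="glorot_uniform",
            trainable=False, name="w")
        # Trainable low-rank factors: A is (in_dim x r), B is (r x units).
        self.lora_a = self.add_weight(
            shape=(in_dim, self.rank), initializer="random_normal",
            trainable=True, name="lora_a")
        self.lora_b = self.add_weight(
            shape=(self.rank, self.units), initializer="zeros",
            trainable=True, name="lora_b")

    def call(self, inputs):
        # Base projection plus the low-rank correction x @ A @ B.
        return tf.matmul(inputs, self.w) + tf.matmul(
            tf.matmul(inputs, self.lora_a), self.lora_b)


def count_parameters(model):
    """Print trainable vs. non-trainable parameter counts (hypothetical helper)."""
    trainable = sum(int(np.prod(w.shape)) for w in model.trainable_weights)
    non_trainable = sum(int(np.prod(w.shape)) for w in model.non_trainable_weights)
    print(f"  trainable: {trainable:,} | non-trainable: {non_trainable:,}")


# Toy data standing in for the preprocessed dataset from the notebook.
features = np.random.rand(512, 32).astype("float32")
labels = np.random.randint(0, 2, size=(512, 1)).astype("float32")

for rank, batch_size in itertools.product([1, 4, 16], [8, 64, 128]):
    # Because the batch size changes, the tf.data pipeline is rebuilt
    # inside the loop rather than once at the top of the notebook.
    train_ds = (
        tf.data.Dataset.from_tensor_slices((features, labels))
        .shuffle(512)
        .batch(batch_size)
    )

    # Frozen backbone plus a LoRA-adapted final layer, mirroring the idea
    # of applying LoRA only to the last layer of the model.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", trainable=False),
        LoRADense(units=1, rank=rank),
    ])
    model.build(input_shape=(None, 32))
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.BinaryCrossentropy(from_logits=True))

    print(f"rank={rank}, batch_size={batch_size}")
    count_parameters(model)
    model.fit(train_ds, epochs=1, verbose=0)
```

Because the number of trainable parameters grows with the rank while the frozen weights stay fixed, printing the counts for each combination makes it easy to see how small the adapter really is before comparing training behaviour across batch sizes.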
