From the course: Fine-Tuning for LLMs: from Beginner to Advanced


Demo: LoRA fine-tuning on FLAN-T5

- [Instructor] In this demo we get to the exciting part: LoRA fine-tuning. We are finally going to implement LoRA, one of the most advanced and exciting techniques in PEFT, parameter-efficient fine-tuning. As of the time of this recording in 2024, LoRA is less than two years old. This means that what you're going to learn is not only state of the art, but also, as you'll see, a little complex to implement, because Hugging Face, TensorFlow, and PyTorch don't yet offer native LoRA support, nothing as simple as calling something like LoRA.apply. We don't have that yet. That's how close to the cutting edge we are right now. So I hope you are as excited as I am. Let me connect to a GPU, and there we are. As always, first we need to do our pip installs. To do LoRA effectively, the only package we need to add, which is new to us, is tensorflow-addons, which we'll use to add our LoRA adapter. We'll see how we'll use it later. There it is. We can see that…
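To make the idea concrete before the demo code, here is a minimal sketch of what a LoRA adapter looks like in plain TensorFlow/Keras. This is an illustration, not the course's implementation: the class name `LoRADense` and the parameters `rank` and `alpha` are hypothetical, and it uses only core Keras rather than tensorflow-addons. It follows the LoRA paper's recipe: freeze the pretrained weight W and learn a low-rank update, so the effective weight becomes W + (alpha / rank) * A @ B.

```python
import tensorflow as tf


class LoRADense(tf.keras.layers.Layer):
    """Illustrative sketch: wraps a frozen Dense layer and adds a
    trainable low-rank update A @ B (the LoRA adapter)."""

    def __init__(self, base_layer, rank=8, alpha=16, **kwargs):
        super().__init__(**kwargs)
        self.base_layer = base_layer
        self.base_layer.trainable = False  # freeze the pretrained weights
        self.rank = rank
        self.scale = alpha / rank          # scaling from the LoRA paper

    def build(self, input_shape):
        in_dim = int(input_shape[-1])
        out_dim = self.base_layer.units
        # A is initialized with small random values, B with zeros, so the
        # adapter starts as a no-op and training departs smoothly from the
        # base model.
        self.lora_a = self.add_weight(
            name="lora_a", shape=(in_dim, self.rank),
            initializer=tf.keras.initializers.RandomNormal(stddev=0.02),
            trainable=True)
        self.lora_b = self.add_weight(
            name="lora_b", shape=(self.rank, out_dim),
            initializer="zeros", trainable=True)

    def call(self, inputs):
        # Frozen base output plus the scaled low-rank update.
        return self.base_layer(inputs) + self.scale * (
            inputs @ self.lora_a @ self.lora_b)


# Hypothetical usage: only the rank*(in_dim + out_dim) adapter weights train.
base = tf.keras.layers.Dense(64)
lora_layer = LoRADense(base, rank=4)
y = lora_layer(tf.random.normal((2, 16)))  # shape (2, 64)
```

The zero initialization of B is the key design choice: at step zero the wrapped layer behaves exactly like the frozen original, and only the small A and B matrices receive gradients, which is what makes LoRA so parameter-efficient.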
