Demo: LoRA fine-tuning on FLAN-T5 - Hugging Face Tutorial
From the course: Fine-Tuning for LLMs: from Beginner to Advanced
- [Instructor] In this demo, we get to the exciting part: LoRA fine-tuning. We are finally going to implement LoRA, one of the most advanced and exciting techniques in PEFT, parameter-efficient fine-tuning. As of the time of this recording in 2024, LoRA is less than two years old. This means you're going to learn something that is not only state of the art, but also a little complex to implement, because there is no native package support for LoRA in Hugging Face, TensorFlow, or PyTorch; there is nothing like calling LoRA.apply. We don't have that yet. That's how state of the art this is. So I hope you are as excited as I am. Let me connect to a GPU, and there we are. As always, first we need to do our pip installs. To do LoRA effectively, the only new package we need to add is tensorflow-addons, which we'll use to build our LoRA adapter. We'll see how we use it later. There it is. We can see that…
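The instructor's notebook is not shown in this excerpt, so here is a minimal sketch of the core idea in plain TensorFlow/Keras. This is not the course's actual code: the LoRADense layer name, the rank and alpha defaults, and the choice of a Dense base layer are all illustrative assumptions, and the sketch does not use tensorflow-addons. It only shows the LoRA recipe itself: freeze the pretrained weights and learn a low-rank additive update.

import tensorflow as tf

class LoRADense(tf.keras.layers.Layer):
    """Hypothetical LoRA wrapper (not from the course): freezes a Dense
    layer and learns a low-rank update.
    Output = base(x) + (alpha / rank) * x @ A @ B."""

    def __init__(self, base_layer, rank=8, alpha=16, **kwargs):
        super().__init__(**kwargs)
        self.base_layer = base_layer
        self.base_layer.trainable = False  # pretrained weights stay frozen
        self.rank = rank
        self.alpha = alpha

    def build(self, input_shape):
        d_in = int(input_shape[-1])
        d_out = self.base_layer.units
        # A gets a small random init; B starts at zero, so at step 0 the
        # wrapped layer behaves exactly like the original frozen layer.
        self.lora_a = self.add_weight(
            name="lora_a", shape=(d_in, self.rank),
            initializer=tf.keras.initializers.RandomNormal(stddev=0.02))
        self.lora_b = self.add_weight(
            name="lora_b", shape=(self.rank, d_out),
            initializer="zeros")

    def call(self, inputs):
        scale = self.alpha / self.rank
        return self.base_layer(inputs) + scale * ((inputs @ self.lora_a) @ self.lora_b)

# Usage: wrap an existing layer; only lora_a and lora_b are trainable.
base = tf.keras.layers.Dense(512)
adapted = LoRADense(base, rank=4, alpha=8)
y = adapted(tf.random.normal([2, 256]))  # -> shape (2, 512)

Because B is initialized to zeros, training starts from the unmodified pretrained behavior, and only the two small matrices A and B (rank * (d_in + d_out) parameters) are updated, which is what makes LoRA parameter-efficient.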
Contents
- Introduction to PEFT (3m 49s)
- LoRA adapters (6m 57s)
- LoRA in depth: Technical analysis (5m 3s)
- Demo: LoRA fine-tuning on FLAN-T5 (14m 8s)
- Implementing LoRA in LLMs (5m 6s)
- Demo: Challenges in LoRA (6m 28s)
- Solution: Fine-tuning FLAN-T5 for translation (7m 1s)