From the course: Hands-On AI: RAG using LlamaIndex


Using LLMs

- [Instructor] Let's begin our deep dive into the core components of LlamaIndex, starting with how to use an LLM. If you're running this on Codespaces, make sure you select the kernel and connect to the environment we set up. If you're running this on Google Colab, make sure you run this cell to install the appropriate libraries. We'll go ahead and do a couple of imports here and load our dotenv file. This line of code is essentially saying: if the environment variable is present, grab it; if not, we'll be prompted to enter our API key. All right, so let's talk now about using LLMs. When you're building LLM-based applications, one of the first decisions is which LLM to use, and you can actually use more than one if you wish. The LLM is used at different stages of the pipeline: during indexing and during querying. During indexing, we can use it to judge the relevance of…
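The setup the instructor describes — load a .env file, then fall back to an interactive prompt if the API key isn't already in the environment — can be sketched roughly like this. This is a hedged reconstruction, not the course's exact code; the variable name `OPENAI_API_KEY` and the helper `ensure_api_key` are assumptions for illustration.

```python
import os
from getpass import getpass

# If python-dotenv is installed, pull variables from a local .env file.
# (Assumption: the course uses python-dotenv's load_dotenv for this step.)
try:
    from dotenv import load_dotenv
    load_dotenv()
except ImportError:
    pass

def ensure_api_key(var="OPENAI_API_KEY", prompt=getpass):
    """Return the API key: use the environment variable if present,
    otherwise prompt the user and cache the answer in the environment."""
    if var not in os.environ:
        os.environ[var] = prompt(f"Enter your {var}: ")
    return os.environ[var]
```

With the key available via `ensure_api_key()`, downstream LlamaIndex components that read the environment variable can pick it up without any further configuration.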
