From the course: RAG and Fine-Tuning Explained
RAG: Retrieval Augmented Generation
- People tend to treat chat-based LLM systems as a combination of a search engine and a knowledgeable, skilled assistant. They ask the LLM to answer questions or to perform some task, and often combine the two. The trouble is, as I explained earlier, the LLM isn't a knowledge lookup system; it's a language transformer. It doesn't actually know anything. It just has a map of our language complex enough to autocomplete most sentences in a way that is mostly correct, as long as the pattern of the information was readily available and prevalent in its training data. This is a challenge because for us, the users of AI systems, incorrect information presented with 100% confidence reads at best like a hallucination, and at worst like a blatant lie. The good news is we already know how to make the output far more accurate. Remember, context makes all the difference. So if, instead of immediately answering, the AI service were to retrieve some information from a…
Contents
- How LLMs work (2m 8s)
- Context makes all the difference (1m 21s)
- RAG: Retrieval Augmented Generation (1m 46s)
- The RAG flow (1m 30s)
- Embeddings: Helping AI understand data (3m 9s)
- Knowledge graphs (3m 16s)
- Fine-tuning (1m 31s)
- RAFT: RAG with fine-tuning (2m 4s)