Our LLM Bootcamp covers the following topics:
- A gentle introduction to foundation LLMs, vector databases, vector embeddings, semantic search, and orchestration frameworks
- Difference between fine-tuning and RAG (Retrieval Augmented Generation)
- Common design patterns for building an LLM application on enterprise data
- How a single query or inference task is processed with in-context learning
- Role of orchestration frameworks like LangChain in overcoming the context window constraint
- Use cases and limitations of LLM agents
- Role of embeddings and vector databases in semantic retrieval (see the sketch after this list)
- The need for a semantic cache when building LLM applications at scale
- Trade-offs, challenges, and pitfalls faced while building these applications to solve real-world problems
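As a small taste of the semantic-retrieval topic above, here is a minimal sketch of ranking documents against a query by embedding similarity. The model name, example texts, and the use of the sentence-transformers library are illustrative assumptions, not part of the bootcamp material itself.

```python
# Minimal semantic-retrieval sketch: embed a few documents, embed a query,
# and rank the documents by cosine similarity. All names below are
# illustrative assumptions, not bootcamp code.
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

documents = [
    "Fine-tuning adapts a model's weights to a specific dataset.",
    "Retrieval Augmented Generation injects retrieved context into the prompt.",
    "A semantic cache stores responses keyed by the meaning of past queries.",
]

# Small open-source embedding model; normalized vectors make cosine
# similarity a plain dot product.
model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = model.encode(documents, normalize_embeddings=True)

query = "How does RAG differ from fine-tuning?"
query_vec = model.encode([query], normalize_embeddings=True)[0]

# Rank documents from most to least similar to the query.
scores = doc_vecs @ query_vec
for idx in np.argsort(scores)[::-1]:
    print(f"{scores[idx]:.3f}  {documents[idx]}")
```

In production, the in-memory dot product would typically be replaced by a vector database, which is one of the design patterns covered in the bootcamp.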
You can review the course syllabus to see all the topics that will be covered during the bootcamp.