What topics does the LLM Bootcamp cover?

Our LLM Bootcamp will cover the following topics:

  • A gentle introduction to foundation LLMs, vector databases, vector embeddings, semantic search, and orchestration frameworks
  • The difference between fine-tuning and RAG (Retrieval-Augmented Generation)
  • Common design patterns for building an LLM application on enterprise data
  • How a single query or inference task is processed using in-context learning
  • The role of orchestration frameworks like LangChain in overcoming the context-window constraint
  • Use cases and limitations of LLM agents
  • The role of embeddings and vector databases in semantic retrieval
  • The need for a semantic cache when building LLM applications at scale
  • Trade-offs, challenges, and pitfalls faced while building these applications to solve real-world problems
  • The importance of evaluating LLMs using benchmark datasets and key metrics
  • The role of guardrails and observability in ensuring the safety, reliability, and monitoring of LLM applications in production

You can review the course syllabus for the full list of topics covered during the bootcamp.