What you will learn
- Understand the theory behind LLMs and key concepts from LangChain and Hugging Face
- Integrate proprietary LLMs (like OpenAI’s ChatGPT) and open-source models such as Meta’s Llama and Microsoft’s Phi
- Learn about LangChain components, including chains, templates, RAG modules, agents, and tools
- Explore RAG step by step: store and retrieve information with vector stores, and give the model access to documents and web pages
- Implement agents and tools to add features like conducting internet searches and retrieving up-to-date information
- Deploy solutions in a local environment, enabling the use of open-source models without an internet connection
- Build an application that automatically summarizes videos and responds to questions about them
- Develop a complete custom chatbot with memory and create a user-friendly interface using Streamlit
- Create an advanced RAG application to interact with documents and extract relevant information using a chat interface
Requirements
- Programming logic
- Basic Python programming
Description
In this course, you will dive deep into the world of Generative AI with LLMs (Large Language Models), exploring the potential of combining LangChain with Python. You will implement proprietary solutions (like ChatGPT) and modern open-source models like Llama and Phi. Through practical, real-world projects, you’ll develop innovative applications, including a custom virtual assistant and a chatbot that interacts with documents and videos. We’ll explore advanced techniques such as RAG and agents, and use tools like Streamlit to create intuitive interfaces. You’ll learn how to use these technologies for free in Google Colab and also how to run projects locally.
The introduction covers the theory of Large Language Models (LLMs) and their fundamental concepts. Additionally, we’ll explore the Hugging Face ecosystem, which offers modern solutions for Natural Language Processing (NLP). You’ll learn to implement LLMs using both the Hugging Face pipeline and the LangChain library, understanding the advantages of each approach.
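To make the comparison concrete, here is a minimal sketch of both approaches, assuming the transformers and langchain-huggingface packages are installed; the model name is only an illustrative choice and can be swapped for any text-generation model.

```python
# Minimal sketch: a raw Hugging Face pipeline versus the same pipeline
# wrapped for use inside LangChain. The model name is an example only.
from transformers import pipeline
from langchain_huggingface import HuggingFacePipeline

# 1) Plain Hugging Face pipeline
hf_pipe = pipeline(
    "text-generation",
    model="microsoft/Phi-3-mini-4k-instruct",  # example open-source model
    max_new_tokens=100,
)
print(hf_pipe("Explain what an LLM is in one sentence.")[0]["generated_text"])

# 2) The same pipeline exposed as a LangChain LLM
llm = HuggingFacePipeline(pipeline=hf_pipe)
print(llm.invoke("Explain what an LLM is in one sentence."))
```

Wrapping the pipeline as a LangChain LLM is what lets the same model plug into chains, templates, and the other components covered in the next part.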
The second part is focused on mastering LangChain. You’ll learn to access open-source models, like Meta’s Llama and Microsoft’s Phi, as well as proprietary LLMs, like OpenAI’s ChatGPT. We’ll explain model quantization, which reduces memory requirements so that larger models can run on modest hardware. Key LangChain components, such as chains, templates, and tools, will be presented, along with how to use them to develop robust NLP solutions. Prompt engineering techniques will be covered to help you achieve more accurate results. The concept of RAG (Retrieval-Augmented Generation) will be explored, including information storage and retrieval processes. You’ll learn to implement vector stores, understand the importance of embeddings, and use them effectively. We’ll also demonstrate how to use RAG to interact with PDF documents and web pages. Additionally, you’ll have the opportunity to explore integrating agents and tools, such as using LLMs to perform web searches and retrieve recent information. Solutions will be implemented locally, enabling access to open-source models even without an internet connection.
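As a rough illustration of the RAG workflow described above, the sketch below loads a PDF, splits it into chunks, indexes them in a vector store, and retrieves the most relevant passages for a question. It assumes the langchain-community, langchain-huggingface, faiss-cpu, and pypdf packages are installed; the file name and embedding model are placeholders, and import paths vary between LangChain versions.

```python
# Minimal RAG sketch: load, split, embed, index, and retrieve.
from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS

docs = PyPDFLoader("example.pdf").load()                  # load the document
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(docs)                                   # split into chunks

embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"   # example embedding model
)
vectorstore = FAISS.from_documents(chunks, embeddings)    # build the vector store

retriever = vectorstore.as_retriever(search_kwargs={"k": 3})
for doc in retriever.invoke("What is the document about?"):
    print(doc.page_content[:200])                         # most relevant chunks
```

The retrieved chunks would then be passed to the LLM together with the question, so the model answers from the document rather than from memory alone.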
In the project development phase, you’ll learn to create a custom chatbot with an interface and memory for Q&A. You’ll also learn to develop interactive applications using Streamlit, making it easy to build intuitive interfaces. One project involves developing an advanced application using RAG to interact with multiple documents and extract relevant information through a chat interface. Another project will focus on building an application that automatically summarizes videos and answers related questions, resulting in a powerful tool for instant, automated video comprehension.
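For reference, a minimal sketch of a Streamlit chat interface with conversation memory might look like the following. It assumes streamlit and langchain-openai are installed and an OPENAI_API_KEY is set; any other LangChain chat model, including a local open-source one, can be substituted.

```python
# Minimal Streamlit chatbot sketch: the message history is kept in
# st.session_state so the conversation survives Streamlit reruns.
import streamlit as st
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")        # example model name

st.title("Custom chatbot")

if "messages" not in st.session_state:
    st.session_state.messages = []           # conversation memory

for msg in st.session_state.messages:        # replay earlier turns
    st.chat_message(msg["role"]).write(msg["content"])

if prompt := st.chat_input("Ask something"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    st.chat_message("user").write(prompt)

    # pass the whole history so the model sees the conversation context
    answer = llm.invoke(st.session_state.messages).content
    st.session_state.messages.append({"role": "assistant", "content": answer})
    st.chat_message("assistant").write(answer)
```

Storing the history in st.session_state is what gives the chatbot its memory; the app would be launched with `streamlit run app.py`.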
Who this course is for
- Professionals and enthusiasts in the field of artificial intelligence interested in exploring the use of LLMs
- Professionals looking to implement LLMs in their own applications
- Students aiming to gain deeper knowledge in NLP and learn to implement modern solutions
- Professionals from other fields who want to learn how to use language models in real-world applications
- Developers seeking to expand their skills with generative AI
- Researchers interested in exploring advances in LLMs and their practical applications