Course Overview
About the Course
This 40-hour course teaches participants to build generative AI applications using Python. Generative AI involves models that create new content (text, images, audio, etc.). The training is designed for beginners and intermediate learners (developers, data scientists, tech enthusiasts) and covers industry-standard tools. We introduce the Hugging Face Transformers library (pre-trained models for NLP, vision, and multimodal tasks), the OpenAI GPT API (for chat and completion models), and the LangChain framework for chaining LLM calls. The underlying deep learning libraries, TensorFlow and PyTorch, are also covered for training and fine-tuning models. By the end of the course, learners will know how to leverage pre-trained models, customize them on domain data, and deploy end-to-end generative AI solutions.
Course Syllabus
Introduction to Generative AI & Setup (2h): Introduces generative AI concepts and examples (text, images, audio). Students set up the Python environment (installing Transformers, LangChain, etc.) and run a simple model (e.g. a text generator) to see AI in action. Key ideas like prompts, embeddings, tokenization, and the difference between training vs. inference are covered. This module ensures everyone has the baseline Python/ML knowledge needed.
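The prompt-to-tokens-to-ids round trip covered in this module can be illustrated in a few lines. This is a toy word-level scheme for teaching purposes only; real tokenizers (such as those in Hugging Face Transformers) use learned subword vocabularies, and the vocabulary below is invented for the example.

```python
# Toy illustration of tokenization: a prompt is split into tokens, and each
# token is mapped to an integer id the model can consume. Real tokenizers
# use subword vocabularies; this whitespace scheme just shows the idea.

def tokenize(text: str) -> list[str]:
    """Split a prompt into word-level tokens (toy whitespace scheme)."""
    return text.lower().split()

def encode(tokens: list[str], vocab: dict[str, int]) -> list[int]:
    """Map tokens to integer ids; unknown words get id 0 (<unk>)."""
    return [vocab.get(tok, 0) for tok in tokens]

# A tiny made-up vocabulary for the demo.
vocab = {"<unk>": 0, "generative": 1, "ai": 2, "creates": 3, "content": 4}
ids = encode(tokenize("Generative AI creates new content"), vocab)
print(ids)  # → [1, 2, 3, 0, 4] ("new" is out of vocabulary, so it maps to 0)
```

At inference time the model runs on these ids and the generated ids are decoded back into text, which is the reverse of this mapping.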
Hugging Face Transformers and NLP (6h): Deep dive into the Hugging Face Transformers library. Learners use pre-trained models for NLP tasks: generating text, summarizing articles, translating languages, and more. Hands-on labs use transformers pipelines and tokenizers. Students fine-tune a small language model on example data and experiment with the Hugging Face Model Hub. This module emphasizes practical use of state-of-the-art models for text generation and understanding.
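A lab in this module might look like the sketch below, which pairs the Transformers `pipeline` API with a plain-Python chunking helper (long articles must be split to fit a model's context window). The checkpoint name is just an example from the Model Hub, and running the summarizer requires `pip install transformers torch`; the `chunk()` helper runs anywhere.

```python
# Sketch of summarization with the Hugging Face pipeline API.
# The model name is an example checkpoint; any summarization model works.

def chunk(text: str, max_words: int = 400) -> list[str]:
    """Split long input into word windows that fit the model's context size."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def summarize(text: str) -> str:
    """Summarize each chunk with a pre-trained model and join the results."""
    from transformers import pipeline  # heavyweight import kept local
    summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
    parts = summarizer(chunk(text), max_length=60, min_length=10)
    return " ".join(p["summary_text"] for p in parts)

if __name__ == "__main__":
    with open("article.txt") as f:  # any long article text
        print(summarize(f.read()))
```

The same `pipeline()` call with a different task string ("text-generation", "translation", etc.) drives the other labs in this module.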
OpenAI GPT API (4h): Learn to use OpenAI’s GPT models (GPT-3/GPT-4) via the official Python client. The module covers obtaining API keys, constructing prompts, and calling the completion and chat endpoints (e.g. client.chat.completions.create() in the current client library; the older openai.Completion.create() interface is legacy). Coding exercises build a simple text-completion or Q&A application, adjusting parameters like temperature and max_tokens. Very little Python code is needed to get started with the API. Students see how changing prompts and settings affects the creativity vs. accuracy of the output.
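A minimal version of the lab might look like this. The model name and prompt wording are illustrative, an `OPENAI_API_KEY` environment variable is assumed, and the request-building helper is separated out so the tunable parameters are easy to see.

```python
# Sketch of a chat-completion call with the openai Python client (v1 style).
# Requires: pip install openai, plus OPENAI_API_KEY in the environment.

def build_request(question: str, temperature: float = 0.7, max_tokens: int = 150) -> dict:
    """Assemble the request parameters the labs experiment with."""
    return {
        "model": "gpt-4o-mini",  # example model name
        "messages": [
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": question},
        ],
        "temperature": temperature,  # higher -> more creative, less predictable
        "max_tokens": max_tokens,    # upper bound on generated tokens
    }

if __name__ == "__main__":
    from openai import OpenAI  # heavyweight import kept local
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(**build_request("What is RAG?"))
    print(resp.choices[0].message.content)
```

Re-running the same request with `temperature=0.0` versus `temperature=1.2` makes the creativity/accuracy trade-off discussed above directly visible.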
LangChain and LLM Agents (5h): Introduces the LangChain framework for building complex LLM applications. Topics include PromptTemplates, Chains, Agents, and memory. Students build sequences of LLM calls (chains) and simple agent loops (LLM-driven decision-making). Labs include examples like a two-step workflow where one model generates content and another evaluates it. We demonstrate LangChain’s integration with tools and vector stores. By the end, learners can create custom workflows that orchestrate language models in code.
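The chaining idea behind this module can be shown in miniature: each step is a callable, and a chain pipes one step's output into the next. LangChain's Chains and LCEL pipe syntax generalize exactly this pattern to real LLM calls, prompt templates, and memory; the "models" below are toy stand-ins so the sketch runs without any API key.

```python
# A two-step chain in miniature: one step generates content, the next
# evaluates it, mirroring the lab's generate-then-critique workflow.

from typing import Callable

def make_chain(*steps: Callable[[str], str]) -> Callable[[str], str]:
    """Compose steps left-to-right: the output of each feeds the next."""
    def run(text: str) -> str:
        for step in steps:
            text = step(text)
        return text
    return run

# Toy "models" standing in for real LLM calls.
generate = lambda topic: f"Tagline: {topic} made simple."
evaluate = lambda tagline: f"Review of '{tagline}': clear and short."

two_step = make_chain(generate, evaluate)
print(two_step("solar e-bikes"))
```

In LangChain proper, the same shape is written declaratively, e.g. piping a PromptTemplate into a model and an output parser, and agents extend it by letting the model choose which step to run next.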
Retrieval-Augmented Generation (RAG) (4h): Teaches how to augment LLMs with external knowledge. Students generate embeddings from documents and store them in vector databases (FAISS, ChromaDB). Storing embedding vectors in an index means each document is embedded once and reused across queries, rather than recomputed for every request. In labs, participants build a document-based QA system: the system retrieves relevant passages for a given question and feeds them to the LLM as context. This improves answer accuracy by grounding generation in real data.
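The retrieval half of the lab can be sketched end to end in plain Python. Real systems use learned embedding models and a vector store such as FAISS or ChromaDB; the bag-of-words "embedding" below is a deliberately simple stand-in so the example runs anywhere, but the shape (embed documents once, embed the query, rank by cosine similarity, pass the top hits to the LLM as context) is the same.

```python
# Minimal retrieval sketch: embed documents once, then at query time return
# the closest passages to use as LLM context.

import math
import re
from collections import Counter

def embed(text: str, vocab: list[str]) -> list[float]:
    """Bag-of-words vector over a fixed vocabulary, L2-normalized."""
    counts = Counter(re.findall(r"[a-z0-9]+", text.lower()))
    vec = [float(counts[word]) for word in vocab]
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents closest to the query by cosine similarity."""
    vocab = sorted({w for d in docs for w in re.findall(r"[a-z0-9]+", d.lower())})
    index = [(doc, embed(doc, vocab)) for doc in docs]  # embedded once, reused
    qv = embed(query, vocab)
    scored = sorted(index, key=lambda pair: -sum(a * b for a, b in zip(qv, pair[1])))
    return [doc for doc, _ in scored[:k]]

docs = [
    "FAISS is a library for vector similarity search.",
    "Diffusion models generate images from noise.",
    "Embeddings map text to dense vectors.",
]
print(retrieve("How do I search vectors with FAISS?", docs, k=1))
```

In the full QA system, the returned passages are concatenated into the prompt so the model answers from the retrieved text rather than from memory alone.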
Multimodal AI & Image Generation (4h): Covers generative models beyond text. We introduce diffusion models (e.g. Stable Diffusion) and GANs for image generation. Using Hugging Face’s Diffusers and/or API access to DALL·E, students create images from text prompts. Exercises include generating art or photo-realistic scenes, and possibly style transfer. This module highlights how Transformers and diffusers support computer vision tasks. (Optionally, audio or video generation tools can be briefly explored if time permits.)
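An image-generation lab with the Diffusers library might look like the sketch below. The checkpoint name is an example, a GPU is strongly recommended, and the tunable settings are pulled into a helper so students can see what they are experimenting with; none of the values are prescriptive.

```python
# Sketch of text-to-image generation with Hugging Face Diffusers.
# Requires: pip install diffusers torch (and ideally a CUDA GPU).

def generation_settings(prompt: str) -> dict:
    """Settings the image labs experiment with; values are typical defaults."""
    return {
        "prompt": prompt,
        "num_inference_steps": 30,  # more steps -> slower, often finer detail
        "guidance_scale": 7.5,      # how strongly the image follows the prompt
    }

if __name__ == "__main__":
    import torch
    from diffusers import StableDiffusionPipeline  # heavyweight imports kept local

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # example checkpoint
        torch_dtype=torch.float16,
    ).to("cuda")
    image = pipe(**generation_settings("a watercolor fox in a snowy forest")).images[0]
    image.save("fox.png")
```

Varying `guidance_scale` and the number of inference steps is a quick way to explore the quality/speed trade-offs discussed in this module.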
Deep Learning Frameworks: PyTorch & TensorFlow (3h): A focused overview of the two major ML libraries. We cover core concepts: tensors, computational graphs (static vs. dynamic), layers, and training loops. Students build and train a simple neural network (for example, a text classifier or a small CNN). This establishes the foundations needed for custom model work. The material emphasizes each framework’s strengths (e.g. PyTorch’s eager execution, TensorFlow’s production pipelines).
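The anatomy of a training loop can be shown without either framework, using a one-parameter linear model with a hand-derived gradient. PyTorch (via autograd) and TensorFlow (via GradientTape) automate exactly these four steps: forward pass, loss, gradient, update.

```python
# Training-loop anatomy on a tiny model: fit w in pred = w * x to data
# drawn from y = 2x, so the learned weight should approach 2.0.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (x, y) pairs from y = 2x
w = 0.0    # the single model parameter
lr = 0.05  # learning rate

for epoch in range(200):
    for x, y in data:
        pred = w * x                 # forward pass
        loss = (pred - y) ** 2       # squared-error loss
        grad = 2 * (pred - y) * x    # d(loss)/dw, derived by hand
        w -= lr * grad               # gradient-descent update

print(round(w, 3))  # converges to 2.0
```

In PyTorch the same loop replaces the hand-derived `grad` with `loss.backward()` and the manual update with an optimizer step, which is precisely what the module's neural-network lab builds on.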
Fine-Tuning & Custom Model Training (5h): Hands-on training in fine-tuning and customizing models. Using the Hugging Face Trainer or custom code in PyTorch/TensorFlow, learners fine-tune a pre-trained LLM or image model on domain-specific data. We cover transfer learning, training/validation splits, and evaluation metrics (e.g. accuracy for classification or ROUGE for summarization). A lab might have students fine-tune a model for a new text task and measure its performance. Advanced topics like Reinforcement Learning from Human Feedback (RLHF) are introduced conceptually. By module end, students understand how to adapt large models to specialized applications.
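Two support pieces every fine-tuning lab needs, a train/validation split and an evaluation metric, can be written in plain Python as below. In the labs the same roles are typically played by library utilities (e.g. a dataset's `train_test_split` method and metric packages), so this is a conceptual sketch, not the course's required tooling.

```python
# Deterministic train/validation split plus a simple accuracy metric,
# the evaluation scaffolding around any fine-tuning run.

import random

def train_val_split(items: list, val_fraction: float = 0.2, seed: int = 42):
    """Shuffle deterministically, then hold out a validation slice."""
    items = items[:]  # copy so the caller's list is untouched
    random.Random(seed).shuffle(items)
    cut = int(len(items) * (1 - val_fraction))
    return items[:cut], items[cut:]

def accuracy(preds: list, labels: list) -> float:
    """Fraction of predictions matching the reference labels."""
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

train, val = train_val_split(list(range(10)))
print(len(train), len(val))  # → 8 2
print(accuracy(["pos", "neg", "pos"], ["pos", "neg", "neg"]))  # 2 of 3 correct
```

The validation slice is what the fine-tuned model is scored on (accuracy for classification, ROUGE for summarization) to check that it generalizes beyond the training data.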
Deployment & LLM Ops (3h): Focuses on putting models into production. Options include serving models with Python APIs (FastAPI/Flask), deploying on Hugging Face Spaces, or using cloud ML services. We discuss MLOps best practices for LLMs: model versioning, monitoring, and experiment tracking (e.g. using MLflow’s LLM support). Students learn to write a simple web app that calls a model, and how to optimize models for inference (batching, quantization). Ethical and performance considerations (bias, hallucinations, latency) are also covered.
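A serving lab along these lines might wrap a model call behind a FastAPI endpoint, as sketched below. The `generate` function is a stub standing in for a real model call (OpenAI, a local Transformer, etc.), and the route and field names are illustrative; requires `pip install fastapi uvicorn`.

```python
# Minimal model-serving sketch: a FastAPI endpoint wrapping a model stub.
# Serve with: uvicorn "app:build_app" --factory

def generate(prompt: str) -> str:
    """Stand-in for a real model call."""
    return f"echo: {prompt}"

def build_app():
    """Build the FastAPI app lazily so the model stub is testable on its own."""
    from fastapi import FastAPI       # web-framework imports kept local
    from pydantic import BaseModel

    app = FastAPI()

    class Prompt(BaseModel):
        text: str

    @app.post("/generate")
    def handle(req: Prompt) -> dict:
        # In production this is where batching/quantized inference would sit.
        return {"completion": generate(req.text)}

    return app
```

Keeping the model call behind one function makes it straightforward to swap in an optimized (batched or quantized) implementation, and to version and monitor it as discussed above.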
Capstone Project Workshop (4h): Participants apply everything by building a complete generative AI solution. Example projects include: a domain-specific chatbot (LangChain + OpenAI), an automated content generator, or a media-generation app combining text and images. Each project must integrate multiple tools (e.g. a vector store with a language model chain). Teams implement, test, and present their work. This project-based approach boosts engagement and gives real-world skills. Instructors provide guidance and feedback, ensuring that students successfully translate concepts into a working application.
Key Features
- Hands-on, project-based learning: The course emphasizes lab exercises and projects that reinforce real-world skills. Participants build sample AI applications (chatbots, image generators, etc.) in each module.
- Industry-standard tools: Covers Hugging Face Transformers, LangChain, and OpenAI’s GPT API, plus TensorFlow and PyTorch. Learners get practical experience with these frameworks.
- Practical applications: Modules include use-cases like text generation, summarization, RAG (retrieval-augmented generation), and image synthesis. Capstone projects simulate real AI development, improving engagement and workforce readiness.
- Step-by-step progression: Starts with fundamentals (what is generative AI, Python setup) and advances to complex topics (fine-tuning models, deploying APIs, LLM agents). The curriculum builds skills gradually.
- Support for beginners: No deep AI prerequisites assumed. Concepts are explained from first principles, with plenty of code examples, Q&A, and community support.



