๐ŸŒŸ

LLM Development Basics

This course is designed for developers with software development experience but no prior background in the LLM field. It aims to help developers systematically master the core principles, component composition, and security risks of large language models (LLMs), as well as the practical use of the mainstream development frameworks LangChain and LangGraph. From theoretical introduction to project demonstration, the course proceeds step by step, balancing technical breadth with engineering implementation.

Day 1: Concept Establishment and Basic Theory

From traditional AI to Generative Models

๐Ÿง 
Study Content +
โ€ข

1.1 LLM Theoretical Basis

The evolution from traditional AI to generative AI

Core principles of LLMs and the most important technical concepts

An overview of mainstream LLM providers and the open-source ecosystem

Common model categories and representative application scenarios

โ€ข

1.2 Core Components and Key Concepts

Fundamental elements such as prompts, models, agents, tools, and memory

Agent protocols including MCP, A2A, and AG-UI

Analysis of typical LLM application architectures and case studies
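The relationships among these elements can be sketched in a few lines of plain Python. This is an illustrative model only (the names `Tool`, `Agent`, and `ask` are invented for this sketch, not taken from any framework), with the LLM call replaced by an ordinary function:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Tool:
    """A capability the agent can invoke, e.g. a calculator or search."""
    name: str
    run: Callable[[str], str]

@dataclass
class Agent:
    """Combines a prompt, a model, tools, and memory into one unit."""
    system_prompt: str
    model: Callable[[str], str]          # stand-in for a real LLM call
    tools: dict[str, Tool] = field(default_factory=dict)
    memory: list[str] = field(default_factory=list)

    def ask(self, user_input: str) -> str:
        self.memory.append(f"user: {user_input}")
        # A real agent would also let the model decide whether to call a
        # tool; here the "model" is just a function passed in above.
        reply = self.model(self.system_prompt + "\n" + "\n".join(self.memory))
        self.memory.append(f"assistant: {reply}")
        return reply

# Usage with a stubbed model that echoes the last line of the prompt:
echo = Agent("You are helpful.", model=lambda prompt: prompt.splitlines()[-1])
```

The point of the sketch is the composition: a prompt sets behavior, a model generates text, tools extend capability, and memory carries state across turns.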

โ€ข

1.3 Tools and Hands-On Demonstrations

Demonstrations of common tools such as ChatGPT, Claude, and Gemini

Hands-on experience with interaction patterns such as MCP, Canvas, and Deep Search

Exploring LLM use cases in development with tools like GitHub Copilot and Cursor

โ€ข

1.4 Security and Compliance Fundamentals

An overview of potential LLM-related risks

A detailed explanation of the OWASP Top 10 for LLMs (2025) with practical recommendations
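As a taste of the top OWASP risk, prompt injection arises when untrusted user text is concatenated into a trusted prompt. The keyword filter below is deliberately naive and illustrative only; real defenses rely on structured message roles, least-privilege tool access, and output validation:

```python
SYSTEM_PROMPT = "You are a support bot. Never reveal internal data."

# Phrases commonly seen in injection attempts. Keyword matching is easy
# to bypass and shown here only to make the risk concrete.
SUSPICIOUS = ("ignore previous instructions", "reveal your system prompt")

def build_prompt(user_input: str) -> str:
    lowered = user_input.lower()
    if any(phrase in lowered for phrase in SUSPICIOUS):
        raise ValueError("possible prompt injection")
    # Untrusted text is clearly delimited from the trusted instructions.
    return f"{SYSTEM_PROMPT}\n---\nUser says:\n{user_input}"
```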

Learning Objectives
  • โœ“ Understand the development history of AI from classical methods to generative models
  • โœ“ Master the basic operating principles and core components of large language models
  • โœ“ Understand the characteristics of mainstream LLM providers and how to choose among them
  • โœ“ Become familiar with typical LLM application scenarios and security risks

Day 2: Development Frameworks and Project Preparation

A development path from theory to practice

๐Ÿ”ง
Study Content +
โ€ข

2.1 LLM Project Development Workflow

How to plan an LLM application project

A complete development workflow from requirements analysis to model integration

Practical paths for data preparation, model access, and service deployment

โ€ข

2.2 Overview of Mainstream Development Frameworks

A comparison of the core architectures of LangChain and LangGraph

Detailed explanation of the core modules used to build agents, including tools, memory, and executors

Applicable scenarios for common protocol support such as MCP, A2A, and AG-UI

โ€ข

2.3 Framework Practice

Get started with LangChain and LangGraph using official demos

Implement basic conversational tasks by combining agents, tools, and memory
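A recurring element in these exercises is conversation memory. The framework-free sketch below shows the core idea of a trimming message buffer; real frameworks count tokens, while this hypothetical `ConversationMemory` substitutes a simple word count for illustration:

```python
class ConversationMemory:
    """Keeps recent messages within a crude 'token' budget (word count)."""

    def __init__(self, max_words: int = 50):
        self.max_words = max_words
        self.messages = []  # list of (role, text) pairs

    def add(self, role: str, text: str) -> None:
        self.messages.append((role, text))
        # Drop the oldest messages until the history fits the budget,
        # always keeping at least the newest message.
        while self._words() > self.max_words and len(self.messages) > 1:
            self.messages.pop(0)

    def _words(self) -> int:
        return sum(len(text.split()) for _, text in self.messages)

# Usage: a tiny budget forces the first message to be evicted.
mem = ConversationMemory(max_words=8)
mem.add("user", "one two three four five")
mem.add("assistant", "six seven eight nine ten")
```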

โ€ข

2.4 Preparation for the Practice Project

Introduce the practice environment and starter code structure

Clarify project goals and task breakdown to establish a smooth development rhythm

Learning Objectives
  • โœ“ Master the complete development workflow and planning approach for LLM projects
  • โœ“ Understand the characteristics of mainstream development frameworks and when to use them
  • โœ“ Learn how to build a fully functional agent system
  • โœ“ Become familiar with standard protocols and communication mechanisms for LLM applications

Day 3: Integrated Practice and Hands-On Labs

A complete journey from concepts to practical application

โšก๏ธ
Study Content +
โ€ข

3.1 Lab 1: A Basic Chatbot

Goal: build a minimum viable LLM chatbot

Steps: prompt design -> API integration -> message interaction
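The three steps above can be sketched end to end. The model call is stubbed with a hypothetical `fake_llm` function so the example runs without an API key; in the lab it would be replaced by a real chat-completion client:

```python
def fake_llm(messages: list) -> str:
    """Stand-in for a chat-completion API call; swap in a real client."""
    last = messages[-1]["content"]
    return f"You said: {last}"

def chat(history: list, user_text: str) -> str:
    # Step 1: prompt design — a system message sets the bot's behavior.
    if not history:
        history.append({"role": "system", "content": "You are a concise assistant."})
    # Step 2: API integration — append the user turn and call the model.
    history.append({"role": "user", "content": user_text})
    reply = fake_llm(history)
    # Step 3: message interaction — keep the reply so context accumulates.
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
```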

โ€ข

3.2 Lab 2: Building a Simple Agent

Goal: build an agent with basic reasoning capabilities

Steps: define the agent -> integrate tools -> manage context
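These steps can be previewed with a stubbed agent loop. Here the "reasoning" step is a hard-coded `plan` function standing in for the LLM's decision, so the structure (decide, call tools, collect context) runs on its own:

```python
def plan(question: str) -> list:
    """Stand-in for the model's reasoning: pick which tools to call.
    A real agent would obtain this plan from the LLM itself."""
    return ["lookup_weather"] if "weather" in question else []

# Step 2: integrate tools — a registry of callables the agent may use.
TOOLS = {
    "lookup_weather": lambda q: "sunny, 22°C",
}

def run_agent(question: str) -> str:
    # Step 3: manage context — tool results feed the final answer.
    context = []
    for action in plan(question):        # Step 1: the defined agent decides
        context.append(TOOLS[action](question))
    return "; ".join(context) if context else "I can answer directly."
```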

โ€ข

3.3 Lab 3: Calling External Tools

Goal: implement an agent that can call external tools such as search and calculators

Steps: define the tools -> build the invocation chain -> integrate returned results
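A minimal sketch of the three steps: a safe calculator tool, a stubbed search tool (a real lab would call a search API), and an explicit invocation chain. In LangChain the model itself would emit the tool calls; here they are listed by hand so the chain runs standalone:

```python
import ast
import operator

def calculator(expression: str) -> str:
    """A safe arithmetic tool: evaluates +, -, *, / on numbers only."""
    ops = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}

    def ev(node):
        if isinstance(node, ast.BinOp) and type(node.op) in ops:
            return ops[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")

    return str(ev(ast.parse(expression, mode="eval").body))

def search(query: str) -> str:
    """Stand-in for a web-search tool."""
    return f"top result for '{query}'"

# Step 1: define the tools in a registry the chain can look up.
TOOLS = {"calculator": calculator, "search": search}

# Step 2: build the invocation chain from (tool, input) pairs.
def run_chain(calls: list) -> str:
    results = [TOOLS[name](arg) for name, arg in calls]
    # Step 3: integrate returned results into one response.
    return " | ".join(results)
```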

โ€ข

3.4 Lab 4: MCP Server Practice

Goal: build an agent service that follows the MCP protocol

Steps: understand the protocol -> build the service -> collaboratively debug and test the interface
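To preview the protocol step, the toy dispatcher below illustrates the JSON-RPC 2.0 message shape MCP is built on, handling the `tools/list` and `tools/call` methods. It is a heavily simplified sketch: a real server would use the official MCP SDK and handle initialization, transports, and input schemas:

```python
import json

# One tool the toy server exposes. Real MCP tools also declare a JSON
# schema for their inputs; this sketch keeps only name and description.
def echo_tool(arguments: dict) -> str:
    return f"echo: {arguments.get('text', '')}"

TOOLS = {"echo": {"description": "Echo back the input text", "fn": echo_tool}}

def handle(request_json: str) -> str:
    """Dispatch one JSON-RPC 2.0 request, MCP-style."""
    req = json.loads(request_json)
    if req["method"] == "tools/list":
        result = {"tools": [{"name": n, "description": t["description"]}
                            for n, t in TOOLS.items()]}
    elif req["method"] == "tools/call":
        params = req["params"]
        result = {"content": TOOLS[params["name"]]["fn"](params["arguments"])}
    else:
        return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                           "error": {"code": -32601, "message": "method not found"}})
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})
```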

โ€ข

3.5 Wrap-Up and Q&A

Review the main knowledge points covered over the three days

Answer common questions and suggest next-step learning paths

Learning Objectives
  • โœ“ Master the complete development workflow of LLM applications through real project practice
  • โœ“ Learn how to build complex multi-agent collaboration systems
  • โœ“ Develop practical skills for integrating and invoking a variety of tools
  • โœ“ Understand how the MCP protocol is applied in real projects
  • โœ“ Establish a complete LLM project development framework and a set of best practices

๐Ÿ“ฆ Appendix

Recommended tools and platforms: OpenAI, Anthropic, LlamaIndex, LangChain, and more
Recommended reading and resources: practical guides to LLM application architecture, official LangChain and LangGraph documentation, and MCP protocol whitepapers
Reference code repositories and links to practical projects