Maven - AI Engineering Buildcamp: From RAG to Agents
Released 4/2026
By Alexey Grigorev - 15 years of experience, teaching AI and data to 100k+ students
MP4 | Video: h264, 1920x1080 | Audio: AAC, 44.1 KHz, 2 Ch
Genre: eLearning | Language: English | Duration: 15 Lessons ( 5h 22m ) | Size: 1.88 GB
This course takes you from core concepts to production-grade AI systems through hands-on, project-focused modules.
LLMs & RAG
Learn Large Language Models and Retrieval-Augmented Generation. Build conversational agents using the OpenAI SDK and create data-processing pipelines.
Outcome: A RAG pipeline with real data.
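A minimal sketch of the RAG pattern this module builds: retrieve relevant chunks, then assemble a grounded prompt for the LLM. The documents and the keyword-overlap scoring below are illustrative stand-ins, not the course's code; a real pipeline would use embeddings and call the OpenAI SDK with the assembled prompt.

```python
def retrieve(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by how many query words they contain (toy scoring)."""
    q_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:top_k]

def build_prompt(query: str, context: list[str]) -> str:
    """Ask the model to answer only from the retrieved context."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

docs = [
    "FAQ: the course starts in April 2026",
    "Transcripts are pulled from YouTube videos",
    "Grafana dashboards show token usage",
]
context = retrieve("when does the course start", docs)
prompt = build_prompt("When does the course start?", context)
```

The prompt would then go to the chat model; grounding the answer in retrieved text is what distinguishes RAG from plain prompting.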
Agentic Flows + MCP
Add agentic behavior with function calling, using libraries like PydanticAI and the OpenAI Agents SDK. Expose tools via MCP.
Outcome: A capable, tool-using agent.
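The core of function calling can be sketched in a few lines: the model emits a tool name plus JSON-encoded arguments, and the application dispatches to a matching Python function. The tool and its arguments here are hypothetical examples; libraries like PydanticAI automate the schema generation and dispatch.

```python
import json

def get_weather(city: str) -> str:
    """Stand-in for a real API call the agent could make."""
    return f"Sunny in {city}"

# Registry mapping tool names (as advertised to the model) to functions.
TOOLS = {"get_weather": get_weather}

def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call to the matching Python function."""
    fn = TOOLS[tool_call["name"]]
    args = json.loads(tool_call["arguments"])
    return fn(**args)

# A tool call in the shape an LLM emits it: name + JSON arguments.
result = dispatch({"name": "get_weather", "arguments": '{"city": "Berlin"}'})
```

The result is sent back to the model as a tool message, letting it reason over real data before answering.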
Testing & Evaluation
Improve through testing and offline evaluation. Use LLMs as judges to compare approaches. Learn tools like Evidently and LangWatch.
Outcome: A thoroughly tested and evaluated assistant.
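Two of the offline ranking metrics this module uses can be sketched directly: hit rate (is the relevant document in the top-k results?) and MRR (how high is it ranked?). The result lists and relevance labels below are made up for illustration.

```python
def hit_rate(ranked_ids: list[list[str]], relevant: list[str]) -> float:
    """Fraction of queries whose relevant doc appears anywhere in the results."""
    hits = sum(rel in ranked for ranked, rel in zip(ranked_ids, relevant))
    return hits / len(relevant)

def mrr(ranked_ids: list[list[str]], relevant: list[str]) -> float:
    """Mean reciprocal rank: 1/position of the relevant doc, 0 if absent."""
    total = 0.0
    for ranked, rel in zip(ranked_ids, relevant):
        if rel in ranked:
            total += 1 / (ranked.index(rel) + 1)
    return total / len(relevant)

# Three queries; "d1" is the relevant document for each.
results = [["d1", "d2"], ["d3", "d1"], ["d2", "d3"]]
truth = ["d1", "d1", "d1"]
```

Running these over a set of simulated user queries gives a single number per retrieval configuration, which is what makes comparing chunking strategies or prompts tractable.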
Monitoring & Guardrails
Use Grafana, Pydantic Logfire and OpenTelemetry for observability and safety.
Outcome: Real-time monitoring.
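The cost tracking this module covers reduces to accumulating token usage per request and converting it to dollars. The per-1k-token prices below are placeholders, not real OpenAI pricing; in production these counters would be exported via OpenTelemetry or Logfire rather than held in memory.

```python
# Placeholder rates, USD per 1,000 tokens (not real pricing).
PRICE_PER_1K = {"input": 0.005, "output": 0.015}

class UsageTracker:
    """Accumulates token counts and derives a running cost."""

    def __init__(self) -> None:
        self.input_tokens = 0
        self.output_tokens = 0

    def record(self, input_tokens: int, output_tokens: int) -> None:
        self.input_tokens += input_tokens
        self.output_tokens += output_tokens

    @property
    def cost_usd(self) -> float:
        return (self.input_tokens / 1000 * PRICE_PER_1K["input"]
                + self.output_tokens / 1000 * PRICE_PER_1K["output"])

tracker = UsageTracker()
tracker.record(input_tokens=1200, output_tokens=400)
```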
Use Cases
Create two agents: a website generator and a code reviewer. Learn about other use cases.
Capstone
Build an end-to-end AI application with your data.
Outcome: A portfolio-ready project.
Hackathon: Collaborate on real-world problems.
By the end, you will: build, evaluate, and monitor a smart assistant; create deep research and coding agents; and have a polished portfolio project.
What you'll learn
Create your own production-ready AI application in 6 weeks
Build a Fully Functional AI Assistant from Scratch
A conversational AI assistant that answers questions from GitHub repositories, YouTube transcripts, or internal documentation.
Use Retrieval-Augmented Generation and the OpenAI API.
Add Agentic Behavior to Your AI Systems
Build systems that can reason, make decisions, and take actions with function calling.
Use tools like PydanticAI and OpenAI's Agent SDK.
Extend the capabilities of your agent with MCP.
Use Testing and Evaluation to Improve Prompts and Results
Test the application with unit tests and judges.
Learn how to evaluate your application with ranking metrics, simulate user queries, and use LLMs to judge outputs.
Select the best prompt, model, and chunking strategy using a data-driven approach.
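Chunking is one of the knobs being compared here; a toy sliding-window chunker makes the trade-off concrete. Window and overlap sizes are illustrative parameters you would tune by re-running the offline evaluation for each setting.

```python
def chunk_words(text: str, size: int = 5, overlap: int = 2) -> list[str]:
    """Split text into word windows of `size`, stepping by size - overlap."""
    words = text.split()
    step = size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + size]))
        if start + size >= len(words):
            break
    return chunks

chunks = chunk_words("one two three four five six seven", size=5, overlap=2)
```

Larger windows give the model more context per chunk; more overlap reduces the chance a relevant sentence is split across a chunk boundary, at the cost of index size.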
Monitor Your Application
Set up real-world monitoring using Grafana, Pydantic Logfire, Evidently, and LangWatch.
Track costs and token usage in real time.
Add guardrails to prevent misuse of the application.
Build 8+ Projects
FAQ Assistant, YouTube Video Q&A system, Wikipedia search and summary system, and a documentation agent.
An AI coding agent, a deep research agent, and a code evaluator agent.
Two more projects: capstone and hackathon at the end.