TL;DR
- 32+ hours of research-based content that teaches you how LLMs work, not just prompting shortcuts
- Best for: Complete beginners or daily AI users without a systematic approach
- Standout features: Security deep-dive, reasoning models, professional prompt testing
- Main criticism: Basic sections aren’t well-labeled, so experienced users waste time skipping around
- Worth it? Yes, if you want to understand why prompts work and build personalized AI systems. Skip if you already have ML/NLP education.
I’ll be straight with you: this isn’t my first rodeo. I’ve taken multiple prompt engineering courses, and most of them focus on quick wins and shortcuts for specific use cases: prompting for devs, for business, content creation, and so on.
The Prompt Engineering Bootcamp from ZTM challenged that. It’s a massive course with over 30 hours of content (plus additional hours to complete the challenges and projects). Why so intensive? Scott Kerr, the instructor, wants you to understand how Large Language Models actually work under the hood (and he’s very good at that).
Instead of copy-pasting templates, Scott takes you on a journey, from the creation of LLMs up to today. You’ll see how text generation actually happens, which will enable you to craft prompts for any situation (or improve on existing ones).
You’ll learn what currently works, why it works, and how you can adapt as the models change and get more “intelligent”.
What You’ll Actually Learn
The Prompt Engineering Bootcamp has 20+ sections. The structure is logical, and each new section builds on the previous one, which is exactly how I prefer it.
Sections 1-5: The Foundation
This is where you learn how LLMs actually work. We’re talking tokenization, transformer model architecture, training processes, and the difference between base models and assistant models.
You’ll explore weird concepts like the reversal curse and learn why models sometimes give you different answers for the same prompt.
Most interesting: Scott’s exercises, such as “Thinking Like LLMs”, which help you build intuition for how these models process information.
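To make the tokenization idea concrete, here’s a toy greedy subword tokenizer. This is my own simplified sketch, not code from the course: real models use trained Byte Pair Encoding vocabularies with tens of thousands of entries, but the core idea of splitting text into known pieces is the same.

```python
# Toy greedy subword tokenizer -- a simplified sketch of how LLMs split
# text into vocabulary pieces. The vocabulary here is hypothetical;
# real tokenizers learn theirs from data via BPE.
VOCAB = {"pro", "mpt", "prompt", "ing", "eng", "ineer", "engineer", " "}

def tokenize(text: str) -> list[str]:
    """Greedily match the longest vocabulary entry at each position."""
    tokens, i = [], 0
    while i < len(text):
        for length in range(len(text) - i, 0, -1):  # longest match first
            piece = text[i:i + length]
            if piece in VOCAB:
                tokens.append(piece)
                i += length
                break
        else:  # unknown character becomes its own token
            tokens.append(text[i])
            i += 1
    return tokens

print(tokenize("prompt engineering"))
# → ['prompt', ' ', 'engineer', 'ing']
```

Notice that the model never “sees” whole words, which explains quirks like miscounting letters: the input arrives as opaque chunks, not characters.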
Sections 6-10: The Prompting Framework
Here’s where things get practical. Scott introduces a systematic approach to prompt engineering that covers everything from system messages and context to few-shot learning and chain-of-thought prompting.
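The framework’s building blocks can be sketched in code. Below is a minimal, assumption-laden example (mine, not the course’s) of how a system message, few-shot examples, and a chain-of-thought nudge are typically assembled into the message list most chat APIs expect:

```python
def build_messages(system, examples, user_input):
    """Assemble a chat-completion style message list: a system message,
    few-shot examples played back as prior turns, then the real query."""
    messages = [{"role": "system", "content": system}]
    for question, answer in examples:
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": user_input})
    return messages

msgs = build_messages(
    # "Think step by step" is the classic chain-of-thought trigger.
    system="Classify sentiment as positive or negative. Think step by step.",
    examples=[("I loved it", "positive"), ("Waste of money", "negative")],
    user_input="Exceeded my expectations",
)
```

The few-shot examples act as in-context demonstrations: the model imitates the question/answer pattern it sees in the conversation history.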
Section 11: The Dark Side of AI
This section is a goldmine if you’re building anything with AI or using it professionally. Guardrails, jailbreaks, prompt injections, hallucinations—all the weird behavior you’ve probably encountered when working with LLMs is explained here. If you’ve ever wondered “why is ChatGPT acting so strange?”, this section answers that.
Sections 12-14: Advanced Topics
You’ll dive into hyperparameters, ChatGPT’s architecture, multi-modality (images, voice, etc.), function calling, and open-source models. You’ll also learn to use tools like Chatbot Arena to compare different models and figure out which one actually works best for your use case.
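One hyperparameter worth previewing is temperature. As a rough sketch (my own illustration, using the standard softmax-with-temperature formulation rather than any course code), lowering the temperature sharpens the model’s next-token distribution, making outputs more deterministic; raising it flattens the distribution, making outputs more varied:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw logits to probabilities. Lower temperature sharpens
    the distribution; higher temperature flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # hypothetical scores for three candidate tokens
sharp = softmax_with_temperature(logits, temperature=0.2)
flat = softmax_with_temperature(logits, temperature=2.0)
```

With `temperature=0.2` the top token gets nearly all the probability mass; with `temperature=2.0` the three candidates end up much closer together.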
Section 15: Advanced Prompting Techniques
This is where things get really interesting. You’ll learn techniques like Chain of Density prompting, prompt chaining (including programmatic visualization), ReAct prompting, Tree of Thoughts, and XML tags for better structure.
Most interesting bit: the section on emotional stimuli in prompts (yes, really) is extremely fascinating, and it even has research backing why it works.
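Prompt chaining, one of the techniques above, is simple enough to sketch. This is my own stub, not the course’s code: `call_llm` is a placeholder that you’d replace with a real model API call, and the pipeline steps are hypothetical.

```python
# Prompt chaining: feed each step's output into the next step's prompt.
def call_llm(prompt: str) -> str:
    """Stand-in stub for a real LLM API call."""
    return f"<answer to: {prompt!r}>"

def run_chain(steps, initial_input):
    """Each step is a prompt template with an {input} slot that gets
    filled with the previous step's output."""
    output = initial_input
    for template in steps:
        output = call_llm(template.format(input=output))
    return output

result = run_chain(
    ["Extract the key claims from: {input}",
     "Fact-check these claims: {input}",
     "Write a summary based on: {input}"],
    "Some article text...",
)
```

Breaking a task into a chain like this usually beats one giant prompt, because each step gets the model’s full attention on a narrower job.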
Section 16: Reasoning Models
This was one of the most relevant sections for me as I use multiple LLMs. Scott covers the new generation of reasoning models (like o1 from OpenAI), how they differ from standard LLMs, and why they’re better at certain tasks.
You’ll learn about the generator-verifier gap, reinforcement learning from human feedback (RLHF), process reward models, and test-time computation.
Most interesting bit: The deep dive into whether reasoning models are “lying” to you about their thought process.
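Test-time computation and the generator-verifier gap fit together neatly: verifying an answer is often far easier than producing one, so you can spend extra inference compute sampling many candidates and keeping one a verifier accepts. Here’s a best-of-N sketch with stub functions (my illustration, not the course’s implementation):

```python
def best_of_n(sample, verify, n=16):
    """Test-time computation: draw up to n candidate answers and return
    the first one the verifier accepts, or None if all fail."""
    for i in range(n):
        candidate = sample(i)
        if verify(candidate):
            return candidate
    return None

# Toy problem: find x with x*x == 225. The "sampler" is a stub that
# produces a different guess per attempt; the verifier just checks the
# equation -- much cheaper than solving it (the generator-verifier gap).
answer = best_of_n(
    sample=lambda i: 12 + i,
    verify=lambda x: x * x == 225,
)
print(answer)  # → 15
```

Reasoning models push this further by training the generator (often with process reward models) so that the sampled candidates are good in the first place.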
Sections 17-19: Testing and Evaluation
These sections teach you how to actually test your prompts systematically.
You’ll learn about model benchmarks like MMLU, create your own mini-benchmarks, and use PromptFoo (a professional tool) to develop proper prompt tests with human, code, and AI judges. This is crucial if you’re building anything serious with AI.
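The mini-benchmark idea can be sketched in a few lines. This harness is my own minimal illustration (PromptFoo does this far more thoroughly, with human, code, and AI judges): run a model over labelled cases and score exact-match accuracy.

```python
# Minimal mini-benchmark harness: score a model on labelled test cases.
# `stub_model` is a placeholder; swap in a real API call to benchmark
# an actual LLM or to compare two prompt variants.
def run_benchmark(model, cases):
    """Return exact-match accuracy over (prompt, expected) pairs."""
    correct = sum(
        1 for prompt, expected in cases
        if model(prompt).strip().lower() == expected.lower()
    )
    return correct / len(cases)

def stub_model(prompt: str) -> str:
    return "paris" if "France" in prompt else "unknown"

cases = [
    ("Capital of France?", "Paris"),
    ("Capital of Japan?", "Tokyo"),
]
print(run_benchmark(stub_model, cases))  # → 0.5
```

Even a crude harness like this beats eyeballing outputs: change the prompt, rerun, and you get a number telling you whether things actually improved.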
Section 21: AI Research and AGI
The course ends with a look at the bigger picture: mechanistic interpretability, scaling laws, and the quest for Artificial General Intelligence. It also covers the Turing Test with practical examples and discusses current AI trends.
The Projects (AKA Vibe Coding Practice)
In the Prompt Engineering Bootcamp, you’ll work on three game-building projects using AI-assisted coding:
- Snake Game (Section 3)
- Tic Tac Toe with AI opponent (Section 5)
- Flappy Bird (Section 17)
Plus, you’ll create a Career Coach application that shows you how to build practical AI tools with custom instructions and different modes. There’s also an optional appendix on building with AutoGPT (autonomous agents) if you want to explore that rabbit hole.
Here’s my take: I completed two of the three game projects but skipped the last one because I already have experience with vibe coding. But if you’ve never built something with AI, do these projects. They’re genuinely valuable for understanding how to structure prompts for coding tasks and what AI can actually do (versus what you think it can do).
Scott Kerr’s Resource-Based (and Fun) Teaching Approach
Scott doesn’t just teach from personal experience; he brings in supporting resources, especially research papers from leading organizations like NASA, Google, and OpenAI, and navigates the topics through them.
Scott has a gift for taking complex stuff such as transformer architectures, tokenization, and model training, and making it click for complete beginners. He offers depth in every topic, so you’re not left wondering “but how does that actually work?”
He cracks jokes throughout to keep things light, which is essential when you’re dealing with 32+ hours of technical content. As a learner said, “He (Scott) makes the subject really fun and engaging – there’s never a dull moment!” — I agree.
How The Bootcamp Helped Me
I write articles for Class Central, and this bootcamp changed how I approach AI tools daily. Here’s what’s different now:
- Better prompts for content work: I write clearer prompts for proofreading, meta descriptions, and social media content
- Understanding limitations: I know when AI will struggle versus when it’ll nail a task, which saves me tons of time
- Security awareness: I can spot potential issues with AI-generated content (hallucinations) and understand why weird behaviors happen
- Custom instructions > Chat prompting: The biggest shift was realizing I should build custom instructions for recurring tasks instead of re-prompting every single time
- Testing and iteration: I now know how to systematically test prompts instead of guessing whether changes improved anything
P.S. I highly recommend the sections on reasoning models and prompt testing if you use AI professionally. You’ll learn to choose the right model for specific tasks and measure whether your prompts are working.
The Only Things I’d Change
If you’re a daily AI user, some of the initial sections might feel too basic. The course doesn’t clearly mark which sections are skippable for experienced users, so I ended up watching the basic ChatGPT setup videos at 2x speed, manually skimming ahead to make sure I wasn’t missing important details hidden in the beginner content.
Clear labeling would help: something like “Skip this if you already use ChatGPT daily” or “Essential for all learners” on each section.
Also, that 14-15 hours per week estimate? Budget extra time if you actually do the projects and exercises properly. It’s not impossibly demanding, but I’d recommend setting a realistic time commitment.
Who Should Take This Course?
This bootcamp is perfect for:
- Anyone with little to no AI experience (someone in content writing who’s only used ChatGPT a few times)
- People using AI without a system—you’re re-writing prompts from scratch every time, you don’t know how to format outputs properly, and you’re looking for custom instructions
- Developers who want to integrate AI into their workflow beyond basic code generation
- Anyone building AI applications who needs to understand security concerns like prompt injection
It’s still valuable for:
- Experienced AI users who want research-backed techniques to formulate prompts
- People who want to move from “I use ChatGPT sometimes” to “I have a system of personalized agents”
- Anyone curious about the technical side of how LLMs actually work
But skip it if:
- You already have formal education in machine learning and NLP
- You’ve completed multiple advanced AI courses and actively read about prompting
- You need specialized domain knowledge rather than general prompt engineering skills
Bottom Line
If you’re tired of rewriting prompts, struggling to understand why AI tools aren’t producing the output you want, or building AI tools without knowing the security risks, take this Prompt Engineering Bootcamp.
It’s hands-down the most comprehensive prompt engineering course I’ve taken. The research-based approach, practical projects, advanced techniques such as Tree of Thoughts and prompt chaining, and professional testing tools offer sustainable frameworks that actually work, not just now, but long-term.
The bootcamp does have minor issues: it needs clearer labeling and is more time-intensive than advertised (arguably a good thing). But if you want to create advanced prompts, workflows, accurate custom instructions, and personalized agents for yourself or your company, I’d definitely recommend it.
This article was produced by the Class Central Report team in partnership with Zero To Mastery.
The post Write Prompts That Actually Work: ZTM’s Prompt Engineering Bootcamp Review appeared first on The Report by Class Central.