Devlog

How to Build Your Dragon

From empty terrain to a full 3D boss fight in 2.5 hours — using Claude, Blender, and multi-agent mesh generation.

March 15, 2026 · 8 min read · DreamScape Team
2.5h total time · 36 dragon models · 18 MCP tools · 1 boss fight

What happens when you give an AI agent the keys to a 3D game engine and say "build me a dragon"? You get a lot of terrible ones first. Then a few interesting ones. Then one that makes you say "hell yeah."

This is the story of how we built a complete playable 3D world — with procedurally generated terrain, wildlife, dragons, a boss fight, a damage system, weapons, collisions, loot, and background music — in a single 2.5-hour session using DreamScape and Claude. The entire session was driven through MCP (Model Context Protocol) — Claude controls the 3D engine, generates meshes, places objects, writes game logic, and orchestrates sub-agents, all via structured tool calls.

Phase 1: The First Template

First DreamScape template - basic terrain with trees
The starting point: procedural terrain, a few trees, some rocks. Nothing fancy.

Our first template was rough: very basic models, and a lot of trial and error telling Claude Code to move things into place. The MCP tools were primitive and didn't get us very far. No best practices existed yet; we were writing them as we went.

How MCP Powers DreamScape
DreamScape exposes the PlayCanvas 3D engine to Claude through MCP (Model Context Protocol) tools. Every action — placing a tree, generating a mesh, writing a game script, spawning an entity — is a structured tool call. Claude doesn't see pixels; it reasons about the scene graph, entity components, and script APIs. This is what makes it possible for an AI to build a 3D world: not vision, but direct programmatic control through well-defined tool interfaces.
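To make the "structured tool call" idea concrete, here's a minimal sketch of what an MCP-style tool interface looks like. The tool name, parameter schema, and scene representation below are invented for illustration — they are not DreamScape's actual API:

```python
# Hypothetical sketch of an MCP-style tool: a declared schema plus a
# dispatcher that validates arguments and mutates the scene graph.
# Tool names and schemas here are illustrative, not DreamScape's real API.

TOOLS = {
    "spawn_entity": {
        "description": "Create an entity in the scene graph",
        "params": {"name": str, "position": list, "components": dict},
    },
}

def handle_tool_call(scene, tool_name, args):
    """Validate a structured tool call against its schema, then apply it."""
    schema = TOOLS[tool_name]["params"]
    for key, expected_type in schema.items():
        if not isinstance(args[key], expected_type):
            raise TypeError(f"{tool_name}: {key} must be {expected_type.__name__}")
    if tool_name == "spawn_entity":
        scene.append({"name": args["name"],
                      "position": args["position"],
                      "components": args["components"]})
        return {"ok": True, "entity_id": len(scene) - 1}

scene = []
result = handle_tool_call(scene, "spawn_entity",
                          {"name": "tree_01",
                           "position": [12.0, 0.0, -4.0],
                           "components": {"model": "pine_tree"}})
```

The point is that the model never guesses at pixels: every action is a typed call the engine can validate, apply, and report back on.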

At this stage, DreamScape was a proof of concept: could an AI agent meaningfully control a 3D game engine through MCP tool calls? The answer was yes, but the results looked like a tech demo from 1998.

Phase 2: World Building

Phase 2 was a leap. We rebuilt everything at once: chunk-based terrain generation, proper water rendering, and — most importantly — our first structured mesh generation via sub-agents.
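The core of chunk-based terrain is deciding which chunks to keep loaded around the player. As a rough sketch (the chunk size and radius here are illustrative, not DreamScape's actual values):

```python
# Minimal sketch of chunk-based terrain streaming: map the player's world
# position to a chunk grid, then load every chunk within a radius.
# CHUNK_SIZE and the radius are illustrative values.

CHUNK_SIZE = 64.0  # world units per square chunk

def chunk_of(x, z):
    """Chunk grid coordinates containing world position (x, z)."""
    return (int(x // CHUNK_SIZE), int(z // CHUNK_SIZE))

def chunks_to_load(player_x, player_z, radius=2):
    """All chunk coordinates within `radius` chunks of the player."""
    cx, cz = chunk_of(player_x, player_z)
    return {(cx + dx, cz + dz)
            for dx in range(-radius, radius + 1)
            for dz in range(-radius, radius + 1)}

loaded = chunks_to_load(100.0, -30.0, radius=1)  # 3x3 grid around the player
```

Each frame, the engine diffs this set against what's already loaded: new chunks get generated, chunks that fall out of the set get unloaded. That's what makes the terrain feel infinite.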

The Multi-Agent Approach
Instead of asking one Claude instance to generate a complete 3D mesh (which produces garbage), we built a pipeline: multiple levels of agents, each passing specific structured data to the next, until they built a cohesive whole. A planning agent defines the anatomy. A geometry agent builds the base mesh. A detail agent adds features. A material agent handles textures and colors.
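The hand-off between stages can be sketched like this. In the real pipeline each stage is a Claude sub-agent returning structured data; here each one is stubbed as a plain function, and the field names are invented for illustration:

```python
# Sketch of the staged pipeline: each "agent" (an LLM call in the real
# system, a stub here) enriches the previous stage's structured output.
# Field names and stage contents are illustrative.

def planning_agent(prompt):
    """Define the anatomy: which parts the creature needs."""
    return {"creature": prompt, "parts": ["body", "head", "tail", "legs"]}

def geometry_agent(spec):
    """Build a base mesh slot for each planned part."""
    spec["mesh"] = {part: {"vertices": [], "faces": []} for part in spec["parts"]}
    return spec

def detail_agent(spec):
    """Add finer features on top of the base mesh."""
    spec["details"] = ["ears", "eyes"]
    return spec

def material_agent(spec):
    """Assign textures and colors per part."""
    spec["materials"] = {"body": "brown_fur"}
    return spec

def run_pipeline(prompt):
    spec = planning_agent(prompt)
    for stage in (geometry_agent, detail_agent, material_agent):
        spec = stage(spec)
    return spec

deer = run_pipeline("deer")
```

Because each stage consumes and returns structured data rather than free text, a bad output is caught at the stage boundary instead of poisoning the whole mesh.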
Deer on procedurally generated landscape
A procedurally generated deer in the rolling terrain. The multi-agent approach was working.
Rabbit model on hillside
Rabbits, wolves, deer — the wildlife started looking like actual wildlife.

Between Phase 1 and here, there was an iteration with really crappy animals — just single extruded meshes made in one step. They looked terrible. The multi-agent approach fixed that: each animal now had proper proportions, distinct body parts, and recognizable silhouettes.

We also added first-person gameplay: a weapon, ammo, and procedurally generated infinite terrain that stretched to the horizon.

First-person view with weapon on green landscape
First-person exploration with a weapon equipped. The terrain goes on forever.

Phase 3: The Dragon Problem

Animals were one thing. Dragons were another. A rabbit is forgiving — a vaguely rabbit-shaped blob is recognizable. But a dragon needs wings, claws, teeth, scales, a tail, and a sense of menace. Getting Claude to generate that in a single Blender script was not happening.

Early dragon attempt with disconnected parts
An early dragon attempt. The parts are there, but nothing connects.
Oversized dragon next to tree
A dragon attempt that's... getting there. Scale issues aside.

The Incremental Approach

We tried building dragons incrementally: start with a base body (n=1), then add teeth and claws (n=2), then eyes and wings (n=3), then scales and materials (n=4). The idea was sound but the execution was messy.

Dragon iterations from n=1 to n=4 standing in a row
The incremental approach, right to left. Each iteration added features, but they were disconnected — floating teeth, detached wings.

The core issue was that each agent's additions were disconnected from the existing mesh. Wings floating next to the body instead of attached to it. Teeth hovering inside the jaw. Each part was geometrically fine on its own but didn't form a cohesive whole.

Phase 4: Getting Closer

We iterated on the prompting and tool definitions. Instead of telling agents to "add wings," we gave them the existing mesh vertices and told them to build on specific anchor points. The results started looking like actual creatures.
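The anchor-point idea, in sketch form: instead of placing a wing free-floating, pick the existing body vertices nearest a target region and start the new geometry from those exact vertices, so every new face reuses body vertices. The function names and the toy quad mesh below are invented for illustration:

```python
# Sketch of building on anchor points (illustrative names and data):
# new geometry starts from existing body vertices, so it shares them
# instead of hovering beside the mesh.

def nearest_vertices(vertices, target, k=3):
    """Indices of the k existing vertices closest to a target point."""
    dist = lambda i: sum((a - b) ** 2 for a, b in zip(vertices[i], target))
    return sorted(range(len(vertices)), key=dist)[:k]

def attach_wing(vertices, faces, anchor_ids, wing_tip):
    """Add a wing-tip vertex and fan triangles back to the anchors."""
    vertices.append(wing_tip)
    tip = len(vertices) - 1
    for a, b in zip(anchor_ids, anchor_ids[1:]):
        faces.append((a, b, tip))  # every new face reuses body vertices
    return vertices, faces

body = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]  # toy "shoulder" quad
anchors = nearest_vertices(body, target=(1, 1, 0), k=2)
body, faces = attach_wing(body, [], anchors, wing_tip=(2.0, 2.0, 0.5))
```

Handing agents real vertex indices to build from, rather than a prose instruction like "add wings," is what turned floating parts into attached ones.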

Red-winged dragon variant flying
The red-winged variant. First time we thought "hey, that's kind of cool." Still disconnected, but the silhouette was right.

We also tried having a series of agents each build new features on the existing mesh. This worked well for the body but fell apart on finer details like the face, because the agents were limited to extrusion operations only.

Dark dragon closeup showing good body but rough face
Body looks great. Face lost detail — the extrusion-only limitation hit hardest on small features.
Multiple dragon variants from top view
Top view of intermediate variants. The green ones were really close to the final approach.

The green dragons in the top-view shot were especially promising. We wound up hybridizing the different pipelines — combining the best prompting strategies, tool definitions, and agent orchestration patterns from each iteration into a single refined approach.

Phase 5: "Hell Yeah"

Then we generated the eastern serpentine dragon. And for the first time, the reaction was immediate: hell yeah.

Eastern serpentine dragon flying over water
The eastern dragon. Whiskers, horns, flowing body — everything in the right place. The dark mass in the background is a variant we discarded.

And then came the one that became our wyvern — the first truly cohesive dragon where everything was connected. Teeth actually attached to the jaw. A proper bottom jaw. Wings joined to the shoulder. This was the breakthrough.

Side view of first cohesive dragon model with proper jaw and teeth
Side view of the first cohesive dragon. Teeth attached, jaw articulated, proportions right. This became the wyvern boss.
What made it work
The key insight was hybridizing the prompting pipelines and tool definitions from our previous iterations. Each failed approach taught us something — better prompt structures, more precise tool schemas, smarter agent orchestration. The winning pipeline combined all of those refinements: tighter prompts that enforced connectivity, tool definitions that gave agents mesh context, and a multi-stage orchestration that built the dragon as a single connected manifold. Running on Claude Opus 4.6 with extended thinking was essential — the model needed to reason about 3D topology.

Phase 6: The Boss Fight

With working dragon generation, we built a complete boss fight. The wyvern flies overhead with AI-driven behavior, swooping and attacking while the player fights back.

Wyvern dragon flying overhead during boss fight with player weapon visible
Boss fight: the wyvern swoops overhead while the player tracks it with their weapon. All procedurally generated.

But we didn't stop at dragons. The session grew into a full game: background music, a damage system, a weapon system, collisions, loot, and respawning. The eastern dragon from our "hell yeah" moment ended up coiled through the sky like a living rainbow.

Eastern dragon as rainbow across sky with wyvern silhouette and moon
The eastern dragon curves across the sky. The wyvern flies in the background. The moon rises. All of it: procedural.

See It in Action

Gameplay footage from the session. Terrain, wildlife, dragons, combat, loot — all built in 2.5 hours.

The Gallery

Here's a selection of screenshots from across the session — the good, the bad, and the hilariously broken.

What We Learned

1. Single-agent mesh generation doesn't work

Asking one Claude instance to write a Blender script for a complete dragon produces unusable geometry. The model can't reason about complex 3D topology in a single pass. You need a pipeline of specialized agents.

2. Connectivity is everything

A dragon with floating parts looks worse than a simple dragon with everything attached. The breakthrough came when we enforced a single connected manifold constraint — every vertex must be reachable from every other vertex through shared edges.
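A check in the spirit of that constraint can be written as a graph traversal over the mesh's edges. This is a sketch, not DreamScape's actual validator:

```python
# Sketch of a connectivity check: every vertex of a triangle mesh must be
# reachable from every other vertex through shared edges (one component).

from collections import defaultdict, deque

def is_connected(num_vertices, faces):
    """True if the mesh's vertices form a single connected component."""
    adjacency = defaultdict(set)
    for a, b, c in faces:
        adjacency[a] |= {b, c}
        adjacency[b] |= {a, c}
        adjacency[c] |= {a, b}
    seen, queue = {0}, deque([0])
    while queue:
        v = queue.popleft()
        for w in adjacency[v] - seen:
            seen.add(w)
            queue.append(w)
    return len(seen) == num_vertices

# A body triangle plus a "floating tooth" sharing no vertices: rejected.
assert not is_connected(6, [(0, 1, 2), (3, 4, 5)])
# The tooth welded to the jaw by sharing vertices 0 and 2: accepted.
assert is_connected(4, [(0, 1, 2), (2, 3, 0)])
```

Rejecting any generated mesh that fails this kind of check is what forces the agents to weld parts together instead of sprinkling them nearby.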

3. Hybridize the pipelines, not the outputs

The biggest improvement came from combining the best prompting strategies, tool definitions, and agent orchestration across iterations — not from mixing outputs of different approaches. Each failed pipeline taught us something that refined the next one.

4. Opus 4.6 + extended thinking is required

This doesn't work with smaller models. The geometry agent needs to reason about vertex positions, edge loops, and mesh topology simultaneously. Claude Opus 4.6 with extended thinking enabled is the minimum for reliable results.

Try DreamScape

Connect your Claude Code, join a session, and start building. The dragons are waiting.

Join the First Public Session
Start a Fresh Session
Tags: DreamScape · Procedural Generation · Claude · Blender · 3D · Game Dev · MCP · Devlog