What happens when you give an AI agent the keys to a 3D game engine and say "build me a dragon"? You get a lot of terrible ones first. Then a few interesting ones. Then one that makes you say "hell yeah."
This is the story of how we built a complete playable 3D world — with procedurally generated terrain, wildlife, dragons, a boss fight, a damage system, weapons, collisions, loot, and background music — in a single 2.5-hour session using DreamScape and Claude. The entire session was driven through MCP (Model Context Protocol) — Claude controls the 3D engine, generates meshes, places objects, writes game logic, and orchestrates sub-agents, all via structured tool calls.
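The exact tool surface is DreamScape-specific, but an MCP tool call is ultimately just structured JSON the model emits and the server validates. A minimal sketch of what one such call might look like (the tool name `spawn_object` and its parameters are illustrative, not DreamScape's actual API):

```python
import json

# Hypothetical MCP tool call: the model produces structured arguments,
# the MCP server applies them to engine state.
call = {
    "name": "spawn_object",           # illustrative tool name
    "arguments": {
        "mesh": "dragon_wyvern",      # an asset generated earlier in the session
        "position": [120.0, 35.0, -48.0],
        "scale": 4.0,
    },
}

# On the wire this is serialized JSON, which is what makes the calls
# inspectable and replayable.
payload = json.dumps(call)
print(payload)
```

Because every action is a call like this rather than free-form code, the session log doubles as a full, replayable record of how the world was built.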
Phase 1: The First Template
Our first template was rough: very basic models, and a lot of trial and error as we told Claude Code to nudge things into place. The MCP tools were primitive and didn't get us very far. No best practices existed yet — we were writing them as we went.
At this stage, DreamScape was a proof of concept: could an AI agent meaningfully control a 3D game engine through MCP tool calls? The answer was yes, but the results looked like a tech demo from 1998.
Phase 2: World Building
Phase 2 was a leap. We rebuilt everything at once: chunk-based terrain generation, proper water rendering, and — most importantly — our first structured mesh generation via sub-agents.
Between Phase 1 and here, there was an iteration with really crappy animals — just single extruded meshes made in one step. They looked terrible. The multi-agent approach fixed that: each animal now had proper proportions, distinct body parts, and recognizable silhouettes.
We also added first-person gameplay: a weapon, ammo, and procedurally generated infinite terrain that stretched to the horizon.
Phase 3: The Dragon Problem
Animals were one thing. Dragons were another. A rabbit is forgiving — a vaguely rabbit-shaped blob is recognizable. But a dragon needs wings, claws, teeth, scales, a tail, and a sense of menace. Getting Claude to generate that in a single Blender script was not happening.
The Incremental Approach
We tried building dragons incrementally: start with a base body (n=1), then add teeth and claws (n=2), then eyes and wings (n=3), then scales and materials (n=4). The idea was sound but the execution was messy.
The core issue was that each agent's additions were disconnected from the existing mesh. Wings floating next to the body instead of attached to it. Teeth hovering inside the jaw. Each part was geometrically fine on its own but didn't form a cohesive whole.
Phase 4: Getting Closer
We iterated on the prompting and tool definitions. Instead of telling agents to "add wings," we gave them the existing mesh vertices and told them to build on specific anchor points. The results started looking like actual creatures.
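The anchor-point idea can be sketched in a few lines (simplified toy data; the real tools pass full Blender mesh data to the sub-agents, and `nearest_vertex` is a hypothetical helper, not DreamScape's actual tooling):

```python
# Instead of the vague instruction "add wings", a sub-agent receives the
# existing mesh's vertex positions and must root new geometry at one of them.

def nearest_vertex(vertices, target):
    """Return the existing vertex closest to a requested anchor position."""
    return min(vertices, key=lambda v: sum((a - b) ** 2 for a, b in zip(v, target)))

# Toy body vertices, plus the agent's rough guess for a wing-root location.
body = [(0.0, 1.0, 0.5), (0.4, 1.1, 0.3), (-0.4, 1.1, 0.3)]
anchor = nearest_vertex(body, (0.5, 1.0, 0.4))

# New wing geometry is then built relative to this vertex, so it is
# guaranteed to touch the body instead of floating beside it.
wing_root = anchor
print(wing_root)
```

Snapping each new part to a concrete existing vertex is what turned "geometrically fine on its own" parts into a creature that actually hangs together.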
We also tried a relay of agents, each building new features onto the existing mesh. This worked well for the body but fell apart on finer details like the face, because the agents were limited to extrusion operations.
The green dragons in the top-view shot were especially promising. We wound up hybridizing the different pipelines — combining the best prompting strategies, tool definitions, and agent orchestration patterns from each iteration into a single refined approach.
Phase 5: "Hell Yeah"
Then we generated the eastern serpentine dragon. And for the first time, the reaction was immediate: hell yeah.
And then came the one that became our wyvern — the first truly cohesive dragon where everything was connected. Teeth actually attached to the jaw. A proper bottom jaw. Wings joined to the shoulder. This was the breakthrough.
Phase 6: The Boss Fight
With working dragon generation, we built a complete boss fight. The wyvern flies overhead with AI-driven behavior, swooping and attacking while the player fights back.
But we didn't stop at dragons. The session grew into a full game: background music, a damage system, a weapon system, collisions, loot, and respawning. The eastern dragon from our "hell yeah" moment ended up coiled through the sky like a living rainbow.
See It in Action
The Gallery
Here's a selection of screenshots from across the session — the good, the bad, and the hilariously broken.
What We Learned
1. Single-agent mesh generation doesn't work
Asking one Claude instance to write a Blender script for a complete dragon produces unusable geometry. The model can't reason about complex 3D topology in a single pass. You need a pipeline of specialized agents.
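The pipeline shape from Phases 3–5 can be sketched as a sequence of stages, each building on the previous result. The stage functions below are stand-ins for the actual Claude sub-agent calls made over MCP; only the structure is the point:

```python
# Each stage mirrors one step of the incremental approach (n=1..4).
def base_body(mesh):   mesh["parts"].append("body");                 return mesh
def teeth_claws(mesh): mesh["parts"] += ["teeth", "claws"];          return mesh
def eyes_wings(mesh):  mesh["parts"] += ["eyes", "wings"];           return mesh
def materials(mesh):   mesh["parts"] += ["scales", "materials"];     return mesh

PIPELINE = [base_body, teeth_claws, eyes_wings, materials]

def generate_dragon():
    mesh = {"parts": []}
    for stage in PIPELINE:   # every agent sees and extends the prior result
        mesh = stage(mesh)
    return mesh

print(generate_dragon()["parts"])
```

Crucially, each stage receives the accumulated mesh rather than starting from a blank scene — that is what a single-pass Blender script can never do.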
2. Connectivity is everything
A dragon with floating parts looks worse than a simple dragon with everything attached. The breakthrough came when we enforced a single connected manifold constraint — every vertex must be reachable from every other vertex through shared edges.
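The connectivity half of that constraint is cheap to validate mechanically. A minimal sketch (a breadth-first search over shared edges; this checks connectivity only, not full manifoldness, and `is_connected` is our illustrative name, not a DreamScape tool):

```python
from collections import deque

def is_connected(num_vertices, edges):
    """True iff every vertex is reachable from vertex 0 via shared edges —
    a necessary condition for the single-connected-mesh constraint."""
    adj = {i: [] for i in range(num_vertices)}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    seen = {0}
    queue = deque([0])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == num_vertices

# A body (0-1-2) with a detached "wing" edge (3-4) fails the check.
print(is_connected(5, [(0, 1), (1, 2), (3, 4)]))          # False
# Join the wing to the body and it passes.
print(is_connected(5, [(0, 1), (1, 2), (2, 3), (3, 4)]))  # True
```

Rejecting any sub-agent output that fails this check forces wings onto shoulders and teeth into jaws, instead of hoping the prompt alone gets them there.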
3. Hybridize the pipelines, not the outputs
The biggest improvement came from combining the best prompting strategies, tool definitions, and agent orchestration across iterations — not from mixing outputs of different approaches. Each failed pipeline taught us something that refined the next one.
4. Opus 4.6 + extended thinking is required
This doesn't work with smaller models. The geometry agent needs to reason about vertex positions, edge loops, and mesh topology simultaneously. Claude Opus 4.6 with extended thinking enabled is the minimum for reliable results.
Try DreamScape
Connect your Claude Code, join a session, and start building. The dragons are waiting.
Join the First Public Session
Start a Fresh Session