# Pathfinding Performance Benchmark: A* vs Field-Based Navigation
Rigorous benchmarks on standardized grids with verified path validation. All results are reproducible. Benchmark scripts available to evaluation partners.
## Test Conditions
- Grid: 256×256, 8-directional movement, √2 diagonal cost, 20% obstacle density
- Agents: 100, with dynamic obstacle changes between ticks
- Statistical robustness: 20 random seeds per scenario
- Path validation: 100% success rate; all paths verified obstacle-free
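The test conditions above can be made concrete with a short sketch. The function and constant names below (`make_grid`, `MOVES`, `neighbors`) are illustrative, not part of any published benchmark harness; they simply encode the stated setup: an N×N occupancy grid at a given obstacle density, 8-directional movement, unit cardinal cost, and √2 diagonal cost.

```python
import math
import random

def make_grid(size=256, obstacle_density=0.20, seed=0):
    """Generate a size x size occupancy grid; True marks an obstacle."""
    rng = random.Random(seed)
    return [[rng.random() < obstacle_density for _ in range(size)]
            for _ in range(size)]

# 8-directional moves: cardinal steps cost 1, diagonal steps cost sqrt(2).
MOVES = [(dx, dy, math.sqrt(2) if dx and dy else 1.0)
         for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def neighbors(grid, x, y):
    """Yield (nx, ny, step_cost) for in-bounds, passable 8-connected neighbors."""
    size = len(grid)
    for dx, dy, cost in MOVES:
        nx, ny = x + dx, y + dy
        if 0 <= nx < size and 0 <= ny < size and not grid[ny][nx]:
            yield nx, ny, cost
```

Seeding the generator per scenario is what makes the 20-seed runs reproducible.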
## Results
| Scenario | A* Baseline | StrataNav | Speedup |
|---|---|---|---|
| Game tick (100 agents, dynamic) | 111.7ms | 7.06ms | 15× |
| Replanning (1000 queries, query-only) | baseline | 13.49× faster | 13× |
| Replanning (incl. precompute amortized) | baseline | 8.78× faster | 8.8× |
| Batch pathfinding (50 queries, 256²) | baseline | 2× faster | 2× |
| Path success rate | 100% | 100% | parity |
| Path quality vs optimal | optimal | ~1.28× cost | near-optimal |
## Why the Difference Is Architectural, Not Algorithmic
A* performance scales with number of agents × grid complexity × frequency of goal changes; all three factors multiply.
StrataNav performance scales with the number of queries. Grid complexity is absorbed at precompute time, a goal change costs at most one field recompute amortized across every agent sharing that goal, and agent count is irrelevant to per-query cost.
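StrataNav's internals are not public, so the following is only an illustrative sketch of the general flow-field pattern the text describes: one Dijkstra flood from the goal builds a cost field over the whole grid, and each per-agent query then just descends that field with no search at all. The names `compute_field` and `query_path` are hypothetical.

```python
import heapq
import math

def compute_field(grid, goal):
    """Dijkstra flood from the goal: dist[y][x] = cheapest cost to reach goal."""
    size = len(grid)
    dist = [[math.inf] * size for _ in range(size)]
    gx, gy = goal
    dist[gy][gx] = 0.0
    pq = [(0.0, gx, gy)]
    moves = [(dx, dy, math.sqrt(2) if dx and dy else 1.0)
             for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    while pq:
        d, x, y = heapq.heappop(pq)
        if d > dist[y][x]:
            continue  # stale queue entry
        for dx, dy, c in moves:
            nx, ny = x + dx, y + dy
            if 0 <= nx < size and 0 <= ny < size and not grid[ny][nx]:
                nd = d + c
                if nd < dist[ny][nx]:
                    dist[ny][nx] = nd
                    heapq.heappush(pq, (nd, nx, ny))
    return dist

def query_path(dist, start):
    """Per-agent query: greedily descend the field. O(path length), no search."""
    x, y = start
    size = len(dist)
    if math.isinf(dist[y][x]):
        return None  # start cannot reach the goal
    path = [(x, y)]
    while dist[y][x] > 0.0:
        # Step to the in-bounds neighbor with the lowest remaining cost.
        x, y = min(((x + dx, y + dy)
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0)
                    and 0 <= x + dx < size and 0 <= y + dy < size),
                   key=lambda p: dist[p[1]][p[0]])
        path.append((x, y))
    return path
```

This split is the architectural point: `compute_field` runs once per goal and scales with grid size, while `query_path` runs once per agent and scales only with path length.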
This is why the gap widens as systems scale. At 10 agents, the difference is modest. At 100, it is 15×. At 1000, it becomes the difference between a shippable game and a technical impossibility.
## Frame Budget Impact
At 100 agents, A* consumes 111.7ms per tick, nearly seven full 60fps frames (16.6ms each). Studios either cap agent count or accept sub-30fps performance.
StrataNav returns roughly 105ms per tick (111.7ms − 7.06ms) to the studio's frame budget. That is budget for rendering, physics, effects, audio, and AI: everything players actually experience.
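The frame-budget arithmetic above can be checked directly. The variable names here are illustrative; the millisecond figures are the ones from the results table.

```python
FRAME_MS = 1000 / 60      # ~16.67 ms per frame at 60 fps
ASTAR_MS = 111.7          # measured A* cost per tick (from the results table)
STRATANAV_MS = 7.06       # measured StrataNav cost per tick

reclaimed = ASTAR_MS - STRATANAV_MS    # ms returned to the frame budget per tick
frames_consumed = ASTAR_MS / FRAME_MS  # how many full 60fps frames A* eats
budget_left = FRAME_MS - STRATANAV_MS  # ms left in a 60fps frame under StrataNav
```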
## Methodology
Benchmarks run on standardized hardware. Speedup ratios are hardware-independent — they reflect architectural differences, not hardware advantages. A* re-solves every agent from scratch each tick. StrataNav reuses the precomputed field. Both algorithms produce valid, obstacle-free paths to the same destinations.
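A path-validation check of the kind described is straightforward to state precisely. This is a hypothetical sketch, not the benchmark's actual validator: a path is valid if it starts and ends at the right cells, every step is a single 8-directional move, and every visited cell is in bounds and obstacle-free.

```python
def validate_path(grid, path, start, goal):
    """Check a path is continuous (single 8-connected steps) and obstacle-free."""
    if not path or path[0] != start or path[-1] != goal:
        return False
    size = len(grid)
    for (x1, y1), (x2, y2) in zip(path, path[1:]):
        if abs(x2 - x1) > 1 or abs(y2 - y1) > 1 or (x1, y1) == (x2, y2):
            return False  # not a single 8-directional step
    return all(0 <= x < size and 0 <= y < size and not grid[y][x]
               for x, y in path)
```

Running the same check over both algorithms' outputs is what makes the 100% success figures comparable.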
Benchmark scripts are available to evaluation partners for independent verification.