
Architecture Overview

Document: Technical Design – Architecture Overview
Status: Exploratory
Last updated: 2026-01-10


1. Purpose of This Document

This document provides a high-level architecture for Atherion that supports the game’s core pillars:

  • Instanced progression (Everspire)
  • Instanced open-world maps (soft instancing; no “channels” exposed)
  • Third-person action combat that feels responsive
  • No forced maintenance downtime as a player-facing experience
  • SpacetimeDB Maincloud as the initial backend platform
  • One SpacetimeDB database per region (EU/NA/Asia) as a starting point

This is not an implementation spec. It is a shared mental model and a guide for future decisions.


2. Core Architectural Principle

Partition the simulation.
The game is not “one world server.” It is many instances (maps, hubs, dungeon runs), each with its own simulation loop, lifecycle, and scaling profile.

In practical terms:

  • Social hubs can support high population with low simulation needs.
  • Open-world maps run moderate simulation and moderate population.
  • Everspire instances run high-fidelity simulation for small groups.

This partitioning is also what enables:

  • graceful patching / draining
  • scalability
  • blast-radius containment

3. Deployment Topology (Initial)

3.1. Regions

Decision (current):
Operate one region shard per continent: e.g. EU / NA / Asia.

Each region is logically independent:

  • player identity is region-bound (at least initially)
  • economy is region-bound
  • social structures are region-bound

Cross-region play is out of scope for now.


3.2. SpacetimeDB Placement

Decision (current):
Each region uses one SpacetimeDB database on Maincloud (for now).

Rationale:

  • Minimizes operational complexity early.
  • Keeps schema, reducers, and subscriptions centralized.
  • Allows rapid iteration while still modeling instances internally.

Future scale path may split into multiple databases per region (meta vs simulation), but this is explicitly deferred.


4. The Instance Model

4.1. Instance Types

Atherion’s world is composed of instance types:

  • Hub Instances
    • high player count
    • no combat
    • low update frequency (presence, chat, cosmetics)
  • Open-World Map Instances
    • moderate player count
    • exploration + bounties + dynamic events
    • moderate combat fidelity
    • multiple instances per map for capacity
  • Everspire Run Instances
    • low player count (solo, party, strike size)
    • high combat fidelity
    • deterministic encounter state where needed
    • lifecycle managed per entry/exit

Instances are not exposed as “channels” in the UI. Players simply experience the world as more or less crowded, or an activity as available.
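The three instance types above can be summarized as a small profile table. The sketch below is a conceptual Python illustration only (the real backend would be a SpacetimeDB module in Rust or C#), and the `max_players` numbers are placeholders: per Section 9, precise player caps are explicitly undecided.

```python
from dataclasses import dataclass
from enum import Enum, auto

class InstanceType(Enum):
    HUB = auto()
    OPEN_WORLD = auto()
    EVERSPIRE_RUN = auto()

@dataclass(frozen=True)
class InstanceProfile:
    max_players: int      # placeholder capacity target; real caps are undecided (Section 9)
    combat_enabled: bool
    tick_hz: float        # working target from Section 5.1

# Working profiles matching Sections 4.1 and 5.1; all numbers are targets, not final.
PROFILES = {
    InstanceType.HUB:           InstanceProfile(max_players=200, combat_enabled=False, tick_hz=2.0),
    InstanceType.OPEN_WORLD:    InstanceProfile(max_players=50,  combat_enabled=True,  tick_hz=10.0),
    InstanceType.EVERSPIRE_RUN: InstanceProfile(max_players=4,   combat_enabled=True,  tick_hz=20.0),
}
```

Keeping the per-type profile in one table makes the trade-off explicit: population and simulation fidelity move in opposite directions across instance types.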


4.2. Interest Management (High-Level)

Decision (directional):
Clients should only receive state updates for what matters:

  • their instance
  • nearby entities
  • party members
  • relevant objectives (boss, event targets)

Rationale:
Without filtering, update traffic grows as O(n²) in players per instance, since every client would receive every other player’s updates. Interest management keeps this bounded.

Specific filtering mechanisms (distance, party priority, objective priority) are intentionally not specified yet.
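Even though the concrete filtering mechanisms are deferred, the shape of the decision is clear from the list above. The following Python sketch (hypothetical names; a real implementation would live in server-side subscription/filter logic) combines the four relevance criteria: same instance, proximity, party membership, and objective status.

```python
import math
from dataclasses import dataclass

@dataclass
class Entity:
    entity_id: int
    instance_id: int
    x: float
    y: float
    is_objective: bool = False  # e.g. boss or event target

@dataclass
class Viewer:
    instance_id: int
    x: float
    y: float
    party_ids: frozenset  # entity ids of party members

def relevant_entities(viewer, entities, radius=60.0):
    """Return only entities the client should receive updates for:
    same instance AND (nearby OR party member OR objective).
    The radius value is an illustrative placeholder."""
    out = []
    for e in entities:
        if e.instance_id != viewer.instance_id:
            continue  # never leak cross-instance state
        near = math.hypot(e.x - viewer.x, e.y - viewer.y) <= radius
        if near or e.entity_id in viewer.party_ids or e.is_objective:
            out.append(e)
    return out
```

Note the hard gate: instance membership is checked first, so distance, party, and objective rules only ever widen relevance *within* an instance, never across instances.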


5. Simulation and Tick Rates

5.1. Tick Rate as “Instance Scheduling”

Decision (current):
Tick rates are defined per instance type, not globally.

Working targets:

  • Hub: ~2 Hz (presence)
  • Open world: ~10 Hz (combat + movement)
  • Everspire: ~20 Hz (more responsive combat)

Rationale:

  • Instanced content can afford higher fidelity.
  • Open world must handle scale and chaos.
  • Hubs prioritize social density over precision.

Ticking is conceptually implemented via scheduled game-loop reducers per instance.
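The per-type tick targets above translate directly into per-instance scheduling intervals. This is a toy Python illustration of the idea (the actual mechanism would be SpacetimeDB scheduled reducers; the function names here are hypothetical):

```python
# Working tick-rate targets from Section 5.1 (Hz).
TICK_HZ = {"hub": 2.0, "open_world": 10.0, "everspire": 20.0}

def tick_interval_ms(instance_type):
    """Interval between simulation ticks for an instance type."""
    return 1000.0 / TICK_HZ[instance_type]

def schedule_next_tick(instance_type, now_ms):
    """Conceptual 'scheduled game-loop reducer': each tick re-schedules
    itself at its own instance type's cadence, so a hub never pays
    Everspire-level simulation cost."""
    return now_ms + tick_interval_ms(instance_type)
```

The point of the sketch is that cadence is a property of the instance, not of the region: a 2 Hz hub and a 20 Hz Everspire run coexist in the same database without sharing a tick loop.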


5.2. Combat Model Implications

This architecture assumes:

  • client visuals run at high FPS (60+)
  • server simulation runs at lower Hz (10–20)
  • responsiveness is achieved via:
    • timestamped inputs
    • validation windows (e.g., dodge i-frames span multiple ticks)
    • client interpolation/prediction
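The timestamped-input and validation-window ideas can be made concrete with a toy example. The constants below are illustrative assumptions, not decisions from this document; the point is only that a 300 ms i-frame window spans three server ticks at 10 Hz, so hit validation must compare timestamps rather than single-tick flags.

```python
DODGE_IFRAME_MS = 300   # assumption: i-frame duration (spans 3 ticks at 10 Hz)
INPUT_WINDOW_MS = 150   # assumption: how stale a client input may be and still count

def accept_input(client_ts_ms, server_now_ms):
    """Accept a timestamped client input only if it is not too old
    and not claimed to be from the future."""
    age = server_now_ms - client_ts_ms
    return 0 <= age <= INPUT_WINDOW_MS

def is_invulnerable(dodge_start_ms, hit_time_ms):
    """A hit landing inside the i-frame window is ignored, even if the
    dodge and the hit were processed on different server ticks."""
    return dodge_start_ms <= hit_time_ms < dodge_start_ms + DODGE_IFRAME_MS
```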

Exact netcode details are deferred to technical-design/networking/*.


6. Data & Authority Boundaries

6.1. Authoritative Server State

Decision (hard):
Server state is authoritative for anything that matters economically or competitively:

  • loot, rewards
  • crafting results
  • inventory
  • trading / market
  • progression / ranks
  • instance outcomes

Clients send intent, not results.
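“Intent, not results” is easy to state and easy to get wrong, so here is a minimal Python sketch of the pattern (hypothetical names; not the actual API). The client may only name the chest it wants to open; eligibility and the reward roll happen entirely on the server.

```python
import random

def handle_loot_intent(player_id, chest_id, opened_chests, loot_table, rng=random):
    """Server-authoritative loot: the client supplies intent (which chest),
    the server supplies the result (whether it opens, and what drops)."""
    if (player_id, chest_id) in opened_chests:
        return None  # already claimed; the client cannot force a re-roll
    opened_chests.add((player_id, chest_id))
    return rng.choice(loot_table)  # the reward is rolled server-side
```

A client that lies about the outcome has nothing to lie *with*: the message it sends carries no reward field at all.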


6.2. Data Categories

High-level classification:

  • Durable state: characters, inventory, progression, economy
  • Instance state: encounter phases, spawned entities, run progress
  • Ephemeral state: transient combat events, short-lived VFX-like signals

Durable and instance state must be robust against reconnects and patch drains.
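The three categories imply different persistence obligations, which a sketch can make explicit (illustrative Python; the real classification would live in the schema design, which Section 9 defers):

```python
from enum import Enum

class Durability(Enum):
    DURABLE = "durable"      # characters, inventory, progression, economy
    INSTANCE = "instance"    # encounter phases, spawned entities, run progress
    EPHEMERAL = "ephemeral"  # transient combat events, short-lived VFX-like signals

def survives_reconnect(category):
    """Durable and instance state must outlive a client reconnect or a
    patch drain; ephemeral state may simply be dropped."""
    return category in (Durability.DURABLE, Durability.INSTANCE)
```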


7. Patching and Uptime Philosophy

7.1. “No Forced Maintenance Downtime”

Decision (hard constraint):
Patches should not require the entire region to go offline for hours.

Desired player experience:

  • players are notified that a new version is available
  • they may finish their current run / activity
  • they restart when convenient, within a grace period
  • hard cutoffs are rare and short

7.2. Epoch-Based Soft Patching (Concept)

Working model:

  • Introduce a server epoch (or build epoch) concept.
  • New instances start on the new epoch.
  • Existing instances can be marked as draining:
    • no new entries
    • existing runs allowed to finish
    • reconnect grace is allowed for active runs
  • At cutoff:
    • old clients cannot authenticate
    • remaining old-epoch open-world instances are gracefully migrated/ended

This requires:

  • instance lifecycle awareness
  • reconnect-resilient clients
  • additive migrations and backward-compatible server logic (when possible)
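The epoch/draining model above is essentially a small state machine per instance. The Python sketch below illustrates it under assumed names (a real version would be server-side lifecycle logic; `begin_drain` and `can_join` are hypothetical):

```python
from enum import Enum

class Phase(Enum):
    ACTIVE = "active"
    DRAINING = "draining"   # no new entries; existing runs may finish
    CLOSED = "closed"       # past cutoff

class Instance:
    def __init__(self, epoch):
        self.epoch = epoch
        self.phase = Phase.ACTIVE

def can_join(instance, client_epoch, current_epoch):
    """New entries go only into active, current-epoch instances, and only
    from clients already running the current build epoch."""
    return (instance.phase is Phase.ACTIVE
            and instance.epoch == current_epoch
            and client_epoch == current_epoch)

def begin_drain(instances, new_epoch):
    """On deploy: mark every old-epoch instance as draining. New instances
    are created on the new epoch; old runs are allowed to finish."""
    for inst in instances:
        if inst.epoch < new_epoch and inst.phase is Phase.ACTIVE:
            inst.phase = Phase.DRAINING
```

The state machine is what turns “no forced downtime” from a promise into a mechanism: the deploy flips routing for *new* entries while leaving in-flight runs untouched until cutoff.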

Exact mechanics are deferred to technical-design/backend/patching-and-uptime.md.


8. Scaling Strategy (Initial and Future)

8.1. Initial Scaling

The initial approach scales by:

  • instancing maps (multiple open-world instances per map)
  • instancing Everspire runs (one instance per party)
  • limiting effect spam in open-world events (design constraint)

This is the primary scalability lever early.


8.2. Future Scaling Path (Explicitly Deferred)

If/when one DB per region becomes insufficient:

  • split databases by responsibility:
    • Meta DB: accounts, economy, social
    • Sim DBs: map clusters, dungeon runs
  • introduce explicit shard routing and inter-db messaging

This is not required for early prototypes and is intentionally postponed.


9. What We Are Not Solving Yet

To avoid false certainty, the following are out of scope for this document:

  • Exact networking protocol and reconciliation details
  • Precise open-world instance player caps
  • Schema design and reducer signatures
  • Combat role locking (fluid trinity is still a gameplay decision)
  • Cross-region transfers / global economy
  • Anti-cheat beyond “server authoritative outcomes”

These topics will be addressed in follow-up docs once prototypes reveal constraints.


10. Next Documents

This overview should later be expanded into more concrete specs:

  • technical-design/backend/instance-model.md
  • technical-design/backend/tick-rates.md
  • technical-design/backend/patching-and-uptime.md
  • technical-design/networking/client-server-contract.md
  • technical-design/networking/combat-timing.md
  • technical-design/networking/interest-management.md

End of document.

