# Clustering
Run multiple Asobi nodes as one cluster: horizontal scaling for connections and matches, plus automatic failover. Presence, chat, and cross-match messaging are cluster-safe out of the box via `pg`.
Asobi is single-node by design for gameplay. A match lives on one node; the world server's zones live on one node. Clustering is for connection termination, cross-node messaging, and failover — not for live cross-node zone migration. Shard at the app level (e.g. route players by region).
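App-level sharding can be as simple as mapping a player's region to an entry-point node at the gateway. A minimal sketch, assuming a hard-coded region table and node names (a real deployment would read these from config):

```erlang
%% Hypothetical region router -- module name, region keys, and node names
%% are all illustrative, not part of Asobi's API.
-module(region_router).
-export([node_for/1]).

node_for(Region) ->
    Regions = #{eu => 'asobi@10.0.0.1',
                us => 'asobi@10.0.1.1'},
    maps:get(Region, Regions, 'asobi@10.0.0.1').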
## What's cluster-safe
- pg-scoped process groups — presence, chat channels, world/match `whereis` lookups work cross-node.
- Player sessions: a session on node A can send to a match on node B (proxied via a `pg` lookup).
- Storage (Postgres) is shared; everything persistent is consistent across nodes.
- Matchmaker is replicated (one gen_server per node, tickets are in pg; any node can match).
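Cross-node messaging rides on pg membership: resolve the target group, then send to whichever pid turns up, local or remote. A sketch under assumed names — the scope `asobi_pg` and the group key `{match, MatchId}` are illustrative, not Asobi's exact identifiers:

```erlang
%% Hedged sketch: find a match process anywhere in the cluster via pg and
%% message it. Scope and group-key shapes are assumptions.
send_to_match(MatchId, Msg) ->
    case pg:get_members(asobi_pg, {match, MatchId}) of
        [Pid | _] -> Pid ! Msg, ok;   % works whether Pid is local or remote
        []        -> {error, not_found}
    end.
```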
## What isn't
- A match/world process does not migrate between nodes. If the owning node dies, active matches on it are lost (though state persists for post-mortem).
- ETS caches (zone entity snapshots, rate limits) are per-node. Hot paths assume local access.
- Luerl VMs are per-process and per-node — no shared script state across nodes.
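The per-node ETS caveat matters most for rate limits: each node counts independently, so a cluster-wide cap has to be divided by node count or enforced in shared storage instead. Illustrative only — the table name and counter layout below are made up:

```erlang
%% Node-local rate counter: increments on node A are invisible on node B.
%% Table name `local_rate_limits` and the {PlayerId, Count} layout are
%% illustrative, not Asobi internals.
bump(PlayerId) ->
    ets:update_counter(local_rate_limits, PlayerId, {2, 1}, {PlayerId, 0}).
```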
## Forming a cluster
Set a consistent cookie and explicit node names, then connect:
```sh
# node 1
ERLANG_COOKIE=... \
NODE_NAME=asobi@10.0.0.1 \
ghcr.io/widgrensit/asobi_lua:latest

# node 2
ERLANG_COOKIE=... \
NODE_NAME=asobi@10.0.0.2 \
ASOBI_CLUSTER_SEEDS=asobi@10.0.0.1 \
ghcr.io/widgrensit/asobi_lua:latest
```

Or from a running shell:

```erlang
net_adm:ping('asobi@10.0.0.1').
nodes(). %% ['asobi@10.0.0.1']
```

## Service discovery
For Kubernetes or cloud deployments, use libcluster or a similar strategy. Asobi's asobi_cluster module handles the common cases:
```erlang
{asobi, [
    {cluster, #{
        strategy => k8s_dns,
        service => <<"asobi-headless">>,
        basename => <<"asobi">>
    }}
]}
```

## Routing players to nodes
Put a load balancer in front of the cluster with a sticky WebSocket cookie, or hash on player_id at the LB. This keeps a player's session on one node; cross-node calls happen only for matches/worlds the player joins on a different node.
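One way to get stable routing is consistent hashing at the load balancer. A hedged nginx sketch — the upstream addresses, port, `/ws` path, and the idea of carrying `player_id` as a query parameter are all assumptions about your deployment, not Asobi requirements:

```nginx
upstream asobi_nodes {
    # Hash on the (assumed) player_id query parameter so a player
    # consistently lands on the same node.
    hash $arg_player_id consistent;
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

server {
    location /ws {
        proxy_pass http://asobi_nodes;
        # Required for WebSocket upgrade through nginx.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

A sticky cookie works too; hashing just avoids LB-side session state.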
## Deployment
Rolling restarts are safe: drain a node (stop accepting new matches, wait for existing ones to finish), upgrade, rejoin. Sessions on the drained node reconnect to another node when the LB routes them.
## Observability
Cluster-wide metrics surface via telemetry events under [asobi, match, *], [asobi, zone, *], and [asobi, matchmaker, *]. Wire them into Prometheus via telemetry_metrics_prometheus or ship them to any OpenTelemetry collector.
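To consume these events directly, attach a handler with the `telemetry` library. The concrete event name `[asobi, match, stop]` below is an assumption within the documented `[asobi, match, *]` family — adjust it to whatever your build actually emits:

```erlang
%% Sketch: log match events. The exact event name is assumed, not confirmed.
ok = telemetry:attach(
       <<"asobi-match-logger">>,
       [asobi, match, stop],
       fun(_EventName, Measurements, Metadata, _HandlerConfig) ->
           logger:info("match event: ~p ~p", [Measurements, Metadata])
       end,
       undefined).
```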