
World Creation Performance Research (Personal Worlds)

TL;DR

This document investigates personal world initialization time under different concurrency scenarios and evaluates whether protocol/message-level optimizations are worthwhile.

Key takeaways:

  • Baseline: 1 world init completes in ~36s on local host.
  • Baseline concurrency issue: 5 worlds initialized at the same time show cumulative completion delays, ranging from 48s for the first finisher to 187s for the last.
  • Optimization: batching (SetChunkVerifiers) dramatically reduces on-chain message count and gas overhead.
  • After batching: 10 concurrent world inits complete consistently in ~29–37s (avg ~35s).

1) Background and Goal

Personal world initialization includes on-chain calls (via Gno RPC) plus bridge interactions. When multiple worlds initialize concurrently, the system appeared to serialize or bottleneck somewhere, causing later worlds to finish much later than earlier ones.

Goal

  • Measure initialization duration for:
    • 1 world (baseline)
    • 5 worlds in parallel (baseline concurrency behavior)
  • Identify an optimization that reduces bottlenecks (message count / gas / parsing overhead).
  • Re-run concurrency benchmarks after applying the optimization.

2) Test Environment (Baseline Benchmarks)

  • Date: 2026-02-05
  • Execution: host (local machine)
  • Gno RPC: http://localhost:26657
  • Bridge: http://localhost:3020
  • Biome: verdant_hollow
    • https://storage.googleapis.com/archainia-mvp.firebasestorage.app/development%2Fworld%2Fpersonal%2F45a199900102e8c445a199900102e8c4%2Fbiome.json

Metric definition

  • “Duration” = (completedAt - createdAt), i.e. the time from request creation until the world reaches COMPLETED.

3) Experiment A — Baseline Initialization Benchmarks

A-1. Single world initialization

| worldId | status    | createdAt    | completedAt  | Duration |
| ------- | --------- | ------------ | ------------ | -------- |
| 8       | COMPLETED | 08:40:35.245 | 08:41:11.643 | 36s      |

A-2. 5 worlds initialized concurrently

Procedure:

  • Create 5 worlds individually.
  • Send the init request for all 5 worlds at the same time.

| worldId | status    | createdAt    | completedAt  | Duration |
| ------- | --------- | ------------ | ------------ | -------- |
| 12      | COMPLETED | 08:44:50.689 | 08:45:39.563 | 48s      |
| 11      | COMPLETED | 08:44:50.733 | 08:46:18.696 | 87s      |
| 10      | COMPLETED | 08:44:50.711 | 08:46:51.834 | 121s     |
| 9       | COMPLETED | 08:44:50.768 | 08:47:24.948 | 154s     |
| 13      | COMPLETED | 08:44:50.797 | 08:47:58.130 | 187s     |

Observation

  • Even though init requests are sent concurrently, completion times drift significantly.
  • This suggests a bottleneck (or partial serialization) in message processing, RPC throughput, bridge handling, or on-chain execution costs.

4) Experiment B — “Parameter Readline” / Parsing & Gas Research

B-1. Hypothesis

If we avoid strings.Split(str, ",") and instead scan the string directly (handling delimiters as we encounter them), we may reduce gas usage by avoiding extra slice allocations and intermediate strings.
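
A minimal sketch of the scan approach (illustrative only; the real contract parser may differ): each field is emitted via a callback as it is found, so no intermediate `[]string` slice is allocated the way `strings.Split` would.

```go
package main

import "fmt"

// forEachField scans s once, invoking emit for every comma-separated
// field without building the []string that strings.Split would allocate.
func forEachField(s string, emit func(field string)) {
	start := 0
	for i := 0; i < len(s); i++ {
		if s[i] == ',' {
			emit(s[start:i])
			start = i + 1
		}
	}
	emit(s[start:]) // trailing field
}

func main() {
	n := 0
	forEachField("v1,v2,v3", func(f string) { n++ })
	fmt.Println(n) // 3
}
```

The substrings emitted this way share the backing array of the input string, which is what avoids the extra allocations.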

B-2. Implementation idea

  • Implement SetChunkVerifiers using a single-pass scan parser (no strings.Split).
  • Prefer batch processing over repeated individual calls.
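
Combining both ideas, the batch entry point could look like the following sketch. The function name matches the document; the key/value wire format (two parallel comma-separated strings) and the `store` map are assumptions for illustration:

```go
package main

import "fmt"

// store stands in for the contract's chunk-verifier state.
var store = map[string]string{}

// nextField returns the field starting at offset i and the offset just
// past its trailing comma (or past the end of s for the last field).
func nextField(s string, i int) (string, int) {
	j := i
	for j < len(s) && s[j] != ',' {
		j++
	}
	return s[i:j], j + 1
}

// SetChunkVerifiers consumes the paired keys/vals strings in a single
// pass, storing each pair with no intermediate slices. Illustrative
// sketch of the batch message handler.
func SetChunkVerifiers(keys, vals string) int {
	n := 0
	for ki, vi := 0, 0; ki < len(keys) && vi < len(vals); {
		var k, v string
		k, ki = nextField(keys, ki)
		v, vi = nextField(vals, vi)
		store[k] = v
		n++
	}
	return n
}

func main() {
	fmt.Println(SetChunkVerifiers("c0,c1,c2", "a,b,c")) // 3
}
```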

B-3. Measurements

SetChunkVerifiers (batch, single-pass scan parsing)

| Items | Keys length (chars) | Vals length (chars) | Gas used   | Gas / item | Storage delta |
| ----- | ------------------- | ------------------- | ---------- | ---------- | ------------- |
| 3     | 11                  | 194                 | 5,512,197  | 1,837,399  | 1,905 bytes   |
| 10    | 39                  | 649                 | 6,912,890  | 691,289    | 3,263 bytes   |
| 25    | 114                 | 1,624               | 9,956,000  | 398,240    | 6,188 bytes   |
| 50    | 239                 | 3,249               | 15,027,860 | 300,557    | 11,063 bytes  |
| 100   | 489                 | 6,499               | 25,171,592 | 251,716    | 20,815 bytes  |

SetChunkVerifier (single item per call)

| Items | Gas used  | Gas / item |
| ----- | --------- | ---------- |
| 1     | 4,914,970 | 4,914,970  |

B-4. Comparative analysis

For 100 items:

| Approach                                      | Total gas               | Relative |
| --------------------------------------------- | ----------------------- | -------- |
| Batch (SetChunkVerifiers)                     | 25,171,592              | 1.0x     |
| 100 individual calls (SetChunkVerifier × 100) | 491,497,000 (estimated) | 19.5x    |

Result: batching reduces total gas by ~95% (for 100 items), mainly by removing per-message overhead.
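
The figures above follow directly from the B-3 measurements and can be checked with a few lines of arithmetic:

```go
package main

import "fmt"

func main() {
	const perItemGas = 4_914_970 // one SetChunkVerifier call (B-3)
	const batchGas = 25_171_592  // one 100-item SetChunkVerifiers call (B-3)

	individual := 100 * perItemGas // 491,497,000, the estimated total
	ratio := float64(individual) / float64(batchGas)
	saved := 1 - float64(batchGas)/float64(individual)
	fmt.Printf("ratio: %.1fx, saved: %.1f%%\n", ratio, saved*100)
	// → ratio: 19.5x, saved: 94.9%
}
```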

B-5. Gas growth per additional item (batch mode)

| Range    | Added items | Incremental gas per item |
| -------- | ----------- | ------------------------ |
| 3 → 10   | 7           | 200,099                  |
| 10 → 25  | 15          | 202,874                  |
| 25 → 50  | 25          | 202,874                  |
| 50 → 100 | 50          | 202,875                  |

Conclusion: gas grows by roughly 200,000 per additional item, a stable linear relationship.
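
This linearity gives a simple predictive model, total gas ≈ base + slope × items. Both constants below are estimates fitted to the B-5/B-3 data, not protocol constants:

```go
package main

import "fmt"

const slope = 202_875               // ≈ incremental gas per item (B-5)
const base = 25_171_592 - 100*slope // ≈ 4,884,092 fixed per-message overhead

// estimateGas predicts total gas for a batch of the given item count.
func estimateGas(items int) int {
	return base + items*slope
}

func main() {
	fmt.Println(estimateGas(50)) // 15027842, vs measured 15,027,860
}
```

The 50-item prediction lands within ~20 gas of the measured value, which supports the linear-model reading of the data.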

B-6. Outcome

  1. Batching is the dominant win: ~95% gas reduction vs per-item calls.
  2. Single-pass scan parsing removes additional allocation overhead compared to strings.Split.
  3. Predictable scaling: per-item incremental gas is ~200k.
  4. Practical batch size: 50–100 items is efficient in terms of gas/item.

5) Experiment C — Benchmark After Applying Batch Messages

C-1. Change summary

  • Replace individual SetChunkVerifier messages with batch SetChunkVerifiers.
  • Batch size: 100
  • Chunks: 249 total → 3 batch messages (100 + 100 + 49)
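
The 249 → 100 + 100 + 49 layout is plain ceiling division into fixed-size batches, which a small helper makes explicit (a sketch; the real message-building code is not shown here):

```go
package main

import "fmt"

// splitBatches splits n items into batches of at most size items each,
// e.g. 249 chunks with batch size 100 become [100 100 49].
func splitBatches(n, size int) []int {
	var batches []int
	for n > 0 {
		b := size
		if n < size {
			b = n
		}
		batches = append(batches, b)
		n -= b
	}
	return batches
}

func main() {
	fmt.Println(splitBatches(249, 100)) // [100 100 49]
}
```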

C-2. Test environment

  • Date: 2026-02-05
  • Execution: host (local machine)
  • Biome: verdant_hollow (249 chunks)

C-3. 10 worlds initialized concurrently (after batching)

| worldId | status    | Duration |
| ------- | --------- | -------- |
| 14      | COMPLETED | 37s      |
| 15      | COMPLETED | 36s      |
| 16      | COMPLETED | 36s      |
| 17      | COMPLETED | 36s      |
| 18      | COMPLETED | 29s      |
| 19      | COMPLETED | 37s      |
| 20      | COMPLETED | 37s      |
| 21      | COMPLETED | 36s      |
| 22      | COMPLETED | 29s      |
| 23      | COMPLETED | 36s      |

Average: ~35s


6) Before/After Comparison

| Scenario             | Before                     | After             |
| -------------------- | -------------------------- | ----------------- |
| Single world         | 36s                        | (not re-measured) |
| 5 concurrent worlds  | 48–187s (cumulative delay) | (not re-measured) |
| 10 concurrent worlds | (not measured)             | 29–37s (stable)   |

7) Conclusions

  • Applying batch messages eliminates the “cumulative delay” pattern under concurrency (in this environment and biome).
  • With batching enabled, 10 concurrent inits remain close to single-world baseline time (~35s).
  • Message count reduction is substantial: 249 → 3 (≈ 83× fewer messages), which aligns with the observed stabilization.

8) Next Steps

  • Re-run the 5 concurrent worlds benchmark after batching for an apples-to-apples before/after comparison.
  • Add repeated runs and report median / p95 durations instead of single measurements.
  • Validate limits and safety:
    • max batch size vs message size / gas limit
    • failure handling for partial batches
  • Measure on a production-like environment (network latency, node load) to confirm the improvement holds outside localhost.