Still working on Volution, but now it has shifted more towards agent-based modeling within a cellular automaton. Here is a possible high-level implementation, but the design is always changing.
Using Godot, a TileMap corresponds to the Arena and Area2D nodes correspond to agents. The arena is generated by an automaton that locally determines tile changes. There are three types of tiles: unassigned (grey), sources (white), and regions (black). Starting from a small seed or pattern of black and white, the arena expands into the grey over time to fill a valley or hill. Valleys and hills are carved out by different 2D noise functions with variable parameters, truncated above or below zero:
https://auburn.github.io/FastNoiseLite/
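A rough sketch of how this could work, in Python rather than GDScript, with a cheap hash-based stand-in for FastNoiseLite and an assumed neighbor rule for the expansion (the real rule is still undecided):

```python
import random

GREY, WHITE, BLACK = 0, 1, 2  # unassigned, source, region

def noise2d(x, y, seed=0):
    # Deterministic hash-based stand-in for FastNoiseLite; an assumption,
    # not the real library.
    random.seed((x * 73856093) ^ (y * 19349663) ^ seed)
    return random.uniform(-1.0, 1.0)

def terrain_mask(w, h, seed=0):
    # Truncate noise below zero: positive tiles form a "hill" the arena can fill.
    return [[noise2d(x, y, seed) > 0.0 for x in range(w)] for y in range(h)]

def step(arena, mask):
    # Local rule (an assumption): a grey tile inside the mask becomes black if
    # it touches any black tile, or white if it touches a white tile, so the
    # seed pattern expands outward over time. Updates are synchronous: reads
    # come from the old arena, writes go to the copy.
    h, w = len(arena), len(arena[0])
    nxt = [row[:] for row in arena]
    for y in range(h):
        for x in range(w):
            if arena[y][x] != GREY or not mask[y][x]:
                continue
            nbrs = [arena[ny][nx]
                    for ny in (y - 1, y, y + 1) for nx in (x - 1, x, x + 1)
                    if 0 <= ny < h and 0 <= nx < w and (ny, nx) != (y, x)]
            if BLACK in nbrs:
                nxt[y][x] = BLACK
            elif WHITE in nbrs:
                nxt[y][x] = WHITE
    return nxt
```

In Godot this would live in the TileMap's per-turn update, with FastNoiseLite supplying the valley/hill mask directly.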
Agents can move into regions but not sources or unassigned tiles. Agents scale from small populations to large populations by sourcing alpha, following something like a logistic curve: alpha slowly builds population until a population explosion, then tapers off near 255. That is, each agent has a particular color (rgb: 0-255) and transparency (alpha: 0-255), and the source available to convert into rgba flow increases when agentic regions share an edge with source tiles. Higher-alpha agentic regions siphon source at a faster rate. The colors correspond to blue-water, green-organics, and red-inorganics, and their stocks/flows give rise to an economy of trading/arbitrage/imbalance between regions. Each region occupied by an agent has a particular rgba (red, green, blue, alpha) value that can change from turn to turn. The simulator is turn-based: the agents each take their turns, the arena automaton updates, then the agents take their turns again, and so forth.
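The growth and siphon mechanics might look something like this (the rate constants and the linear edge scaling are assumed tuning parameters, not settled design):

```python
def logistic_alpha(alpha, source_edges, rate=0.02):
    # Logistic growth toward 255: slow build-up, population explosion in the
    # middle, then tapering off near the cap. Growth scales with how many
    # edges the region shares with source tiles (assumption).
    growth = rate * source_edges * alpha * (1.0 - alpha / 255.0)
    return min(255.0, alpha + growth)

def siphon_rate(alpha, source_edges, base=1.0):
    # Higher-alpha regions bordering more source tiles siphon faster
    # (assumption: linear in both alpha and the source-edge count).
    return base * (alpha / 255.0) * source_edges
```

Note that alpha 0 is a fixed point of the logistic rule, so a region needs a nonzero seed alpha to grow at all.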
The simulation will probably be more of an open-ended sandbox game that allows players to cooperate or compete across multiple different scales. Dense agentic regions with high alpha move slower, hit harder, and source faster; sparse agentic regions with low alpha move quicker, hit softer, and source slower. Agents have only so much source per turn to spend on actions. Actions vary in required source and usually involve moving color and alpha around (i.e. drawing), with dense agentic regions requiring more source for more impactful actions. Agentic regions can be distributed over a larger area or consolidated into a smaller area, through actions that span more turns the higher the overall alpha of the agent.
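A sketch of the per-turn source budget, with an assumed linear density surcharge on action costs (so high-alpha agents naturally spread impactful actions across turns):

```python
def take_turn(budget, actions, alpha):
    # Spend the per-turn source budget on queued (name, base_cost) actions.
    # Denser (higher-alpha) agents pay a surcharge (assumed linear scaling),
    # so impactful actions may have to wait for a later turn.
    done, remaining = [], []
    for name, base_cost in actions:
        cost = base_cost * (1 + alpha / 255.0)
        if cost <= budget:
            budget -= cost
            done.append(name)
        else:
            remaining.append((name, base_cost))
    return done, remaining, budget
```

A sparse agent (alpha 0) pays base cost; a fully dense agent (alpha 255) pays double, under this assumed scaling.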
Other actions trade regional awareness against attention: the further you see, the less you see around you, and vice versa. As agents distribute, they expand awareness and attention to cover more of the arena, allowing for increasing depth of strategy. Agents can go to war and trade simultaneously if they want (attacking, defending, trading, and sharing information in the same turn, perhaps). A spy or scout can be created as a low-density region disconnected from the agent's center of density, placed in antagonistic and/or friendly regions. Regional awareness pauses the cellular automaton on the respective regions, letting the agent build out their own geometry of high-density centers, or cities, connected as a source-flow network over discrete time.
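One possible shape for the awareness/attention trade-off (the inverse relation between reach and band width is purely an assumption for illustration): a fixed attention budget is split between how far vision reaches and how wide a band of tiles is actually seen at that reach.

```python
def awareness_band(reach, budget=12):
    # The further you see, the less you see around you: with a fixed attention
    # budget, the visible band narrows as reach grows (assumed inverse form).
    width = max(1, budget // reach)
    inner = max(0, reach - width)
    return inner, reach  # tiles at distance in [inner, reach] are visible

def visible(center, tile, reach, budget=12):
    cx, cy = center
    tx, ty = tile
    d = max(abs(tx - cx), abs(ty - cy))  # Chebyshev distance on the grid
    inner, outer = awareness_band(reach, budget)
    return inner <= d <= outer
```

At minimum reach the whole neighborhood is visible; at maximum reach only a thin distant ring is, which is roughly the scout/spy use case.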
Looking for other systems or rules that mesh with these causes/constraints to add more depth. The relative proportions of rgba could lend themselves to an action tree that unlocks with progression, or something like that. Not entirely set on rgb mapping to inorganics, organics, and water.
So, what you basically end up with is a population density map with deep history, doubling as a colored drawing that can be exported as a picture/state each turn, allowing for an exploration of divergent and convergent volutions. AKA a fractal rainline, rainbow, rainring, rainspiral, etc.
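The per-turn export is straightforward; here is a stdlib-only sketch writing plain PPM (a stand-in for Godot's PNG export, with alpha composited over a grey background since PPM has no alpha channel):

```python
def export_ppm(path, grid):
    # Snapshot the arena's per-tile rgba as a plain-text PPM (P3) image.
    # Alpha is composited over a mid-grey background (an arbitrary choice).
    h, w = len(grid), len(grid[0])
    with open(path, "w") as f:
        f.write(f"P3\n{w} {h}\n255\n")
        for row in grid:
            for r, g, b, a in row:
                f.write(" ".join(
                    str(round(c * a / 255 + 128 * (1 - a / 255)))
                    for c in (r, g, b)) + "\n")
```

Calling this once per turn with an incrementing filename yields the picture-per-state history described above.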
Developing an AI is essential for solo experimentation and for off-loading cognitive demands to automated agendas as your agent scales up to require more active decisions per turn.