Daylen's Journey

Where are you and where are you going?
daylen
Posts: 2646
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Daylen's Journey

Post by daylen »

1. I do not really know how to describe this perfectly, but language has increasingly become a proxy in my mind for construction. Mostly I rely on simulation from basic sensory processing units (i.e. features of the sensorium, mostly visual transformations). Simulations run on some finite set of rules. Some simulations are more complete or cover more fields/objects of study, some are more challenging to run, some are simpler to explain, etc. (a trade-off or optimization space emerges). More fit simulations get selected to join the model ecosystem, so to speak. When my mind was more like a grassland with a few subconscious shrubs here and there, the early succession process took hold and started this simulation project. Some time later I started analyzing these transitions, to the point of developing a simple visual model of how my mind works. It has served as a fairly stable tree-like addition to the ecosystem that I can use to gain vantage points over the landscape of developing models or simulations.

2. The Drawing Agency thread highlights the general idea of the core model: viewtopic.php?p=247731#p247731

Here is a simplification that has tended to be the most helpful; it has three stages of complexification: 1) agents and vegents as finite open sets that move or do not move on some manifold (i.e. a sphere or ring); 2) add closed structural sets that cover agents and vegents on the outside, and add points of convergence to the inside of agents and vegents; 3) add open map sets that cover some agents and vegents, and add open frame sets that cover some points inside agents and vegents.

This chart shows a single agent enclosed in a single map within a single structure, which has within it a single frame with a single point. It's an agent since I am not sure the MBTI or PPPP mappings project into vegent-mind space very well:

Image

Now, several different "kinds" of rule sets can be interpreted from this basic picture with the help of group theory (or by presuming causal symmetries). At stage 1, with no outside or inside world, life just is what it is (agents and vegents on a manifold). Stage 2 involves introducing an open or closed causal loop between an outside thing/process/system/universe/etc. and an inside thing/process/system/cell/etc. Stage 3 involves introducing an open or closed causal loop between an outside representation/model/simulation/etc. and an inside representation/model/simulation/etc.

So, the drawing is self-referential and must sorta be raised to the "level" of a "meta-model", which is just a map within a map and a frame within a frame.. it's a bottomless bucket of recursion that might waste a lot of cycles, or at least it used to, but it seems to do a pretty good job now at constraining or freeing any other processing in the brain. Making it easier to steer through the state space.. or does it? I don't really know anything, or maybe I do but I have forgotten.. but sometimes I re-remember in a different forest or desert.

daylen
Posts: 2646
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Daylen's Journey

Post by daylen »

What if instead of a simple arena with complex agent-based modelling there was a complicated, fractal-based arena with somewhat simpler agent-based models embedded within?

Let a civilization be a Koch curve on a polygon of any degree, relaxed to allow polygon insertions of any degree inwards or outwards. So, an octagon with triangular bioregions with square cities with hexagonal districts with triangular agents with pentagonal organs.. for instance. The depth of the fractal updates over time, leading to a dynamically evolving ontology or world. Updates can be locally rolled back or flattened to indicate systemic collapses.
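
Since the prototype later in this thread ends up in Godot, here is a minimal GDScript sketch of one Koch-style refinement pass over a closed polygon. This is only an illustration of the idea: the equilateral-triangle bump and the outward/inward flag are assumptions, and generalizing the insertion to polygons of arbitrary degree is left open.

Code:

# Hypothetical sketch: one Koch-style refinement pass over a closed polygon.
# Each edge keeps its outer thirds and grows a triangular bump on the middle
# third; `outward` flips the bump into or out of the shape (which side counts
# as "out" depends on the polygon's winding order).
func koch_step(points: PackedVector2Array, outward := true) -> PackedVector2Array:
	var result := PackedVector2Array()
	var n := points.size()
	for i in n:
		var a: Vector2 = points[i]
		var b: Vector2 = points[(i + 1) % n] # wrap around: closed loop
		var third := (b - a) / 3.0
		var p1 := a + third
		var p2 := a + 2.0 * third
		var normal := third.orthogonal().normalized()
		if not outward:
			normal = -normal
		var apex := (p1 + p2) / 2.0 + normal * third.length() * sqrt(3.0) / 2.0
		result.append(a)
		result.append(p1)
		result.append(apex)
		result.append(p2)
	return result

Iterating this a few times gives the civilization-scale boundary; swapping the triangle bump for other polygons at different depths would give the octagon-bioregion-city-district nesting described above.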

Part of the curve can be cut out and projected onto a closed polygon, retaining the sides, allowing moving agents or unmoving vegents to be distributed into an arena (the inside of the parent Koch curve). Gents all the way down.. but how can a balance of top-down and bottom-up causation be achieved?

daylen
Posts: 2646
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Daylen's Journey

Post by daylen »

As per the agent model above, a circular agent is an idealized limit: an infinite-edged polygon. A finite representation has N edges and is allowed to fractalate inwards towards the central point, black hole, or garbage collector of the agent, and outwards towards the universal structure beyond the agent. More edges allow for more interaction between what is in and out. Gent bodies invite sub-gents that gradually or punctually dissolve the gent identity. Maps and frames approximate the boundaries across scales of this fractalation process. Or at least this is one way to map/frame it.

daylen
Posts: 2646
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Daylen's Journey

Post by daylen »

daylen wrote:
Mon Aug 07, 2023 9:21 am
Gents all the way down.. but how can a balance of top-down and bottom-up causation be achieved?
Or gents emerge from the bottom and complexify to introduce middle-out causation. In functional terms that are agent-relative, the convergent perception and judgement axes (Ne-Si and Fe-Ti) invert into the divergent perception and judgement axes (Se-Ni and Te-Fi).

This invites the exploration of various n-ary operators between fully connected sections of the overall fractal.

daylen
Posts: 2646
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Daylen's Journey

Post by daylen »

Started thinking about how knots could be brought into the fractal picture since both mathematical knots and Koch curves are closed loops that can be represented easily in 2D. The Koch curves could be cut, entangled, and glued to diverge from the default of the unknot. Then I started thinking about quantum information theory again and considering bringing communication between agents to the forefront. This is on a similar track as moving from agent-based models to fractal-based models that operate in some space other than physical space, where movement and collisions time agent interactions. Knotting on a fractal might be associated with non-aligned communication between agents that are embedded in the fractal. So, one agent's "up" may be another agent's "down" and vice versa, corresponding to a knot or an over/under crossing. Perhaps each level of granularity or zoom into the fractal could represent sets of communicable agents with various alignments. High alignment and low noise between two agents allows for a high density of information transfer. The simulation may get knottier as it volves through discrete states that entangle agents, or become unknotted through disentanglement.

Knots are additive and so are easily constructed, though an arbitrary knot is difficult to tell apart from the unknot or any other knot. Two knots can be proven equivalent by exhibiting a sequence of Reidemeister moves, but proving that two arbitrary knots are different is far harder in general(*). This computational asymmetry might lend itself to a puzzle for the user/guide/player/steward of the simulation to solve or otherwise manage. At this point, I am basically searching a very large space of possible mathematical structures for interesting proofs that can be systematized and associated with some kind of dynamical process of entropy accumulation and complexity flux. With little idea of where it all leads, yet enjoying the endless thought experiments along the way.

(*) Though possible in limited cases using p-colorability, which is invariant under Reidemeister moves.
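
To make the footnote concrete, here is a small GDScript sketch of p-colorability for p = 3 (Fox 3-coloring): every arc of a knot diagram gets a color in {0, 1, 2}, and at each crossing twice the overstrand's color must equal the sum of the two understrand colors mod 3. A diagram counts as 3-colorable only if some valid assignment uses at least two colors. The crossing encoding below is an assumption made for illustration.

Code:

# Hypothetical sketch: brute-force Fox 3-colorability test.
# Each crossing is encoded as [over_arc, under_arc_1, under_arc_2] (arc indices).
# The relation 2*over = under_1 + under_2 (mod 3) must hold at every crossing.
func is_three_colorable(num_arcs: int, crossings: Array) -> bool:
	var total := int(pow(3, num_arcs))
	for code in total:
		# decode `code` into one color per arc (base-3 digits)
		var colors := []
		var c := code
		for i in num_arcs:
			colors.append(c % 3)
			c = c / 3 # integer division
		# reject trivial single-color assignments
		var distinct := {}
		for col in colors:
			distinct[col] = true
		if distinct.size() < 2:
			continue
		# check the crossing relation everywhere
		var ok := true
		for cr in crossings:
			if (2 * colors[cr[0]]) % 3 != (colors[cr[1]] + colors[cr[2]]) % 3:
				ok = false
				break
		if ok:
			return true
	return false

For a standard trefoil diagram (three arcs, crossings [[0, 1, 2], [1, 2, 0], [2, 0, 1]]) this returns true, while any diagram of the unknot returns false, so the invariant separates those two even though it cannot tell all knots apart.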

daylen
Posts: 2646
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Daylen's Journey

Post by daylen »

Still working on Volution but now it has shifted more towards agent-based modeling within a cellular automaton. Here is a possible high-level implementation, but the design is always changing.

Using Godot, a TileMap corresponds to the Arena and Area2D's correspond to agents. The arena is generated through an automaton that locally determines tile changes. The three types of tiles are unassigned (grey), sources (white), and regions (black). Starting with a small seed or pattern of black and white, the arena expands into the grey over time to fill a valley or hill. Valleys and hills are discovered through different 2D noise functions of variable parameters truncated above or below zero: https://auburn.github.io/FastNoiseLite/

Agents can move into regions but not sources or unassigned tiles. Agents scale from small populations to large populations by sourcing alpha, matching something like a logistic curve where alpha slowly builds population until a population explosion, then tapers off near 255. That is, each agent has a particular color (rgb: 0-255) and transparency (alpha: 0-255), and the source available to convert into rgba flow is increased by agentic regions having an edge with source tiles. Higher-alpha agentic regions siphon source at a faster rate. The colors correspond to blue-water, green-organics, and red-inorganics, and the stocks/flows of these emerge into an economy of trading/arbitrage/imbalance between regions. Each region occupied by an agent has a particular rgba (red, green, blue, alpha) value that can change from turn to turn. The simulator is turn-based: the agents each take their turns, followed by the arena-updating automaton, then the agents take turns again, and so forth.
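
A minimal sketch of what that per-turn alpha growth could look like; the growth rate and the source-edge bonus are placeholder assumptions, with 255 as the carrying capacity:

Code:

# Hypothetical sketch: logistic alpha growth per turn, capped at 255.
# Growth is slow near 0 and near 255 and fastest in between; regions touching
# more source tiles siphon faster (the 0.1 bonus per edge is a placeholder).
func update_alpha(alpha: float, source_edges: int, growth_rate := 0.05) -> float:
	var capacity := 255.0
	var growth := growth_rate * alpha * (1.0 - alpha / capacity)
	growth *= 1.0 + 0.1 * source_edges
	return clampf(alpha + growth, 0.0, capacity)

A region would need to start with a small non-zero alpha for anything to grow, and the turn-over-turn trace of this value is the slow build, explosion, and taper described above.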

The simulation will probably be more of an open-ended sandbox game that allows players to cooperate or compete across multiple different scales. Dense agentic regions with high alpha move slower, hit harder, and source faster; whereas sparse agentic regions with low alpha move quicker, hit softer, and source slower. Agents have only so much source per turn to spend on actions. Actions vary in required source and usually involve moving color and alpha around (i.e. drawing). Dense agentic regions require more source for more impactful actions. Agentic regions can be distributed over a larger area or consolidated into a smaller area, requiring actions that span more turns the higher the overall alpha of the agent.

Other actions include regional awareness and attention in trade-off. That is, the further you see the less you see around you and vice versa. As agents distribute, they expand awareness and attention to include more of the arena allowing for increasing depth of strategy. Agents can go to war and trade simultaneously if they want (attacking, defending, trading, and sharing information in the same turn perhaps). A spy or scout can be created as a low-density region disconnected from the agent's center of density and in antagonistic and/or friendly regions. Regional awareness pauses the cellular automata on the respective regions allowing for the agent to build out their own geometry of high-density centers or cities connecting as a source flow network over discrete time.

Looking for other systems or rules that mesh in with these causes/constraints to add more depth. Relative proportions of rgba could lend itself to an action tree that unlocks with progression, or something like that. Not entirely set on rgb mapping to inorganics, organics, and water.

So, what you basically end up with is a population density map with deep history doubling as a colored drawing that can be exported as a picture/state each turn. Allowing for an exploration of divergent and convergent volutions. AKA a fractal rainline, rainbow, rainring, rainspiral, etc.
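
One way the per-turn export could work in Godot; a sketch, with the node setup and file naming assumed rather than settled:

Code:

# Hypothetical sketch: save the rendered arena as a PNG at the end of a turn.
func export_turn(turn_index: int) -> void:
	# wait for the current frame to finish drawing before grabbing it
	await RenderingServer.frame_post_draw
	var img: Image = get_viewport().get_texture().get_image()
	img.save_png("user://volution_turn_%04d.png" % turn_index)

Stacking these images then gives the "deep history" as an animation of the drawing.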

Developing an AI is essential for solo experimentation and for off-loading cognitive demands to automated agendas as your agent scales up to require more active decisions per turn.
Last edited by daylen on Tue Feb 20, 2024 9:02 pm, edited 3 times in total.

daylen
Posts: 2646
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Daylen's Journey

Post by daylen »

Code for arena so far (will probably upload project to github at some point):

Code:

extends TileMap

var time = 0 # seconds since the last automata step
var turn = 0.1 # seconds per automata step
var offset_speed = 5 # camera pan speed (scaled by zoom)
var noise = FastNoiseLite.new()

# initialize
func _ready():
	build(2,3) # regions within sources
	# noise parameters
	noise.seed = 333
	noise.noise_type = 1
	noise.fractal_type = 1

# grow
func _process(delta):
	zoom()
	offset()
	# update
	time += delta
	if time > turn:
		automata()
		time = 0

# tiles { unassigned: -1, source: 0, region: 1 }
# sectors { southeast: (+,+), northeast: (+,-), northwest: (-,-), southwest: (-,+) }

# seed the arena: a (2*region+1)^2 block of region tiles surrounded by source
# tiles out to (2*source+1)^2, mirrored across all four sectors
func build(region: int, source: int):
	var x = 0
	while x <= source:
		var y = 0
		while y <= source:
			if x <= region and y <= region:
				set_cell(0, Vector2i(x,y), 1, Vector2i(0,0))
				set_cell(0, Vector2i(-x,y), 1, Vector2i(0,0))
				set_cell(0, Vector2i(x,-y), 1, Vector2i(0,0))
				set_cell(0, Vector2i(-x,-y), 1, Vector2i(0,0))
			else:
				set_cell(0, Vector2i(x,y), 0, Vector2i(0,0))
				set_cell(0, Vector2i(-x,y), 0, Vector2i(0,0))
				set_cell(0, Vector2i(x,-y), 0, Vector2i(0,0))
				set_cell(0, Vector2i(-x,-y), 0, Vector2i(0,0))
			y += 1
		x += 1

# camera zoom controls
func zoom():
	if Input.is_action_just_released("zoom_in"):
		$Camera.zoom += Vector2(0.1,0.1)
	elif Input.is_action_just_released("zoom_out"):
		if $Camera.zoom.x > 0.15: # keep zoom strictly positive (offset() divides by it)
			$Camera.zoom -= Vector2(0.1,0.1)

func offset():
	var speed = offset_speed/$Camera.zoom.x
	if Input.is_action_pressed("offset_east"):
		$Camera.offset += Vector2(1,0) * speed
	if Input.is_action_pressed("offset_north"):
		$Camera.offset += Vector2(0,-1) * speed
	if Input.is_action_pressed("offset_west"):
		$Camera.offset += Vector2(-1,0) * speed
	if Input.is_action_pressed("offset_south"):
		$Camera.offset += Vector2(0,1) * speed

# cellular automata step: collect all changes first, then apply them, so that
# updates within a step do not affect neighbor counts mid-pass
func automata():
	var updates = []
	for cell in get_used_cells(0):
		var sources = 0
		# grow and count sources
		for neighbor in get_surrounding_cells(cell):
			if get_cell_source_id(0, neighbor) == -1:
				if noise.get_noise_2dv(neighbor) > 0:
					updates.append([neighbor, 0])
			elif get_cell_source_id(0, neighbor) == 0:
				sources += 1
		# source to region (death)
		if get_cell_source_id(0, cell) == 0 and sources == 0:
			updates.append([cell, 1])
		elif get_cell_source_id(0, cell) == 0 and sources == 4:
			updates.append([cell, 1])
		# region to source (birth)
		elif get_cell_source_id(0, cell) == 1 and sources == 2:
			updates.append([cell, 0])
		elif get_cell_source_id(0, cell) == 1 and sources == 3:
			updates.append([cell, 0])
	for update in updates:
		set_cell(0, update[0], update[1], Vector2i(0,0))

Way zoomed-out automata state after many steps; sharp edges indicate more exploration to be done by the automata (agents will add color when integrated):

Image

Will probably end up building an interface for arena setup to experiment early with different 2D noise parameters and automata rules. Only using 4 neighbors instead of 8 like Conway's Game of Life for now. A high degree of dynamism can be achieved with only 4, so I may stick with it.
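
If diagonals ever get added, a Moore-neighborhood helper is a small change; a sketch (get_surrounding_cells only returns the four orthogonal neighbors):

Code:

# Hypothetical sketch: 8-neighbor (Moore) lookup for a Conway-style variant.
func moore_neighbors(cell: Vector2i) -> Array:
	var neighbors := []
	for dx in [-1, 0, 1]:
		for dy in [-1, 0, 1]:
			if dx == 0 and dy == 0:
				continue # skip the cell itself
			neighbors.append(cell + Vector2i(dx, dy))
	return neighbors

The birth/death thresholds in automata() would then need retuning, since each cell would see 8 neighbors instead of 4.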

mountainFrugal
Posts: 1335
Joined: Fri May 07, 2021 2:26 pm

Re: Daylen's Journey

Post by mountainFrugal »

What interesting stuff is going in @daylen's head right now?

daylen
Posts: 2646
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Daylen's Journey

Post by daylen »

My writing is a bit rusty, but this is from a recent email. Just a model for clarity, not a prescription!

Clarity is harder to achieve as the singularity nears. Game-breaking acceleration of technological transformation distorts the field the players play on. For most of human history nothing seemed to change much. In recent human history change has been expected, but at a manageable rate. It is hard to say exactly when the singularity occurs when you are in the pull of it; distortion of the game is to be expected. Another thing to keep in mind is that the singularity is not evenly distributed in space or time. There is a shortening yet deepening lag between the epicenters of technology and everywhere else.

Some of the primary tech centers: Silicon Valley, Seattle, Boston, Austin, Toronto, London, Berlin, Stockholm, Paris, Zurich, Shenzhen, Beijing, Bangalore, Singapore, Tokyo, Tel Aviv, Dubai, Sydney, Melbourne, Cape Town, Johannesburg, Sao Paulo, and Seoul.

The singularity coincides with the poly-crisis and meta-crisis. Poly-crisis being the set of existential threats humanity is currently facing, and the meta-crisis referring to the set of underlying causes that must be addressed to reduce these threats to acceptable levels. As a general rule, threats are increased going into the singularity and decreased going out of it. In the far future, if life exists, it will likely be very hard to exterminate.

1. Scaling transformers into orgs

Transformers can operate on any kind of token imaginable, allowing use wherever there is some kind of order along with some data to infer it from. Agents will entrench themselves in the economy, converging toward (or exceeding) human performance on all measurable tasks.

a. Tokens and transformers
Tokens are just bit-sized information structures that can reference anything with discernible order. Some of the most common token types at the moment are text, audio, image, video, chemicals, and actions. All of which can be used as inputs into a multi-modal model to output any combination thereof. Text-to-text, text-to-video, etc.

Transformers transform tokens, surprise! Their flexibility is unlike any other AI that has come before. They are what is called a connectionist approach, as opposed to the symbolic approaches that mostly came before. During pre-training, the model is trained to compress a large dataset into parameters that act as knowledge about the data. In post-training, the model is tested and oriented towards the creator's vision using reinforcement learning. At inference time, the model is prompted into inferring an appropriate response.

Chain of thought allows transformers to evaluate multiple responses. This slows down inference but can improve performance. Similar to how humans can think for longer on a difficult math, coding, or engineering problem to get a better answer than intuition.

b. Agents that take actions
Action tokens enable models to manipulate the physical world step by step. This spans from moving the mouse on a screen to grasping an egg to driving a car. All being tasks that can be broken down into a set of action steps and encoded into tokens.

Like how models can think for longer with text tokens, models can think for longer about action tokens. This could enable robots to devise their own methodology out in the field or factory to solve novel problems that arise. Allowing for innovation to occur in the realm of atoms as opposed to just bits.

c. Scale
The stack of technology that powers transformers can scale rapidly. A few reasons for this include perfect memory, perfect copiability, and low latency. Once a stack has proven itself in one context, the entire stack can be sliced, replicated, and modified at nearly the speed of light around the globe.

Atoms move a heck of a lot slower than bits, so computers can only be assembled at the rate of resource extraction, transport, and production. Likewise, power plants can only be created so fast, and the computers behind transformers require a lot of juice.

d. Alignment with human values
Post-training rigor is part of the solution to the alignment problem. Another part of the solution being a way to surveil and manage agents in real-time. Organizations could employ a human oversight committee working on-site or remotely aided by alerts and contingency plans in case of the org being steered off-track by agents.

2. Autopoiesis and the singularity

An autonomous factory producing all of the necessary components, robots, and vehicles to gather raw materials and replicate may be considered autopoietic. Keep shrinking machines and eventually they may become embedded into various life forms and beyond.

a. Criteria
Autopoietic systems are bounded in spacetime. They are born with a body and eventually that body disintegrates one way or another. Such systems produce and regulate themselves given simple inputs. Biology makes use of water, oxygen, sunlight, and food as inputs. Computers, although not yet autopoietic, require electricity, cooling, and replaceable parts.

b. Robots, vehicles, and factories
If you consider the entire network of vehicles and factories across earth as a distributed system, then this system is approaching autopoiesis as humans cease involvement. Eventually, factories that produce robots could enable self-production and self-maintenance.

Extreme vertical integration may allow a single factory to produce and maintain itself with the resources it gathers. Being neatly bounded in spacetime, it would be truly autopoietic and could create copies of itself by ramping up production. Add in a bit of variation and evolution takes hold.

Likewise, a robot or vehicle could become autopoietic.

c. Nanobots and micro-factories
As robots and factories miniaturize, eventually they may integrate with biology in such a way that binds their evolution. Nanobots swimming in and between cells to augment or substitute for proteins and cell-like micro-factories to produce more nanobots.

This degree of integration may be required to reduce the risk of cancer to essentially zero. It may also give us super-human capacities like rapid self-healing and enhanced senses.

d. The force
Small things like nanobots will be able to colonize surrounding space faster than big things like organisms. Nanobots may be programmed to go everywhere and prepare for life to follow. Behold, the force! Some organisms with psychic tendencies may be capable of harnessing this force to move objects or erect entire cities just by thinking. Sufficiently advanced technology is indistinguishable from magic. We may forget these powers came from technology in the first place.

3. Volution as a utilitarian framework

The universe appears to be headed in a particular direction. Along the path it may be helpful to consider what keeps us on it.

a. Light cones
Assuming that the speed limit of the universe is light speed, nothing can go beyond its own light cone. At any spacetime position there exist a light cone going into the past and a light cone going into the future. The light cone from the past reaches back to when light first started to emerge and appears to us as the microwave background radiation. The light cone into the future is a constraint on how quickly we can get to anywhere else in the universe. Lower mass stuff can accelerate closer to the speed of light and thus colonize more of the future light cone.

b. Energy conservation and conversion
The total amount of energy available is limited by the future light cone. Furthermore, the quality of that energy decreases over time by converting to less useful forms. Generally speaking, the order of highest to lowest exergy is as follows: 1. nuclear 2. chemical 3. electrical 4. mechanical 5. high thermal 6. radiant 7. low thermal 8. acoustic 9. elastic 10. gravitational 11. ambient thermal

The universe started as a highly exergetic singularity that rapidly expanded into a bubble (or some other geometry) of high exergy sources immersed in high entropy clouds. Eventually atoms stabilized and stars were born. Now every clump of matter/energy is an exergy source slowly dissipating into the nearby heat sink of space. Black holes being very high exergy limited to gravitational energy, stars being high exergy fueled by nuclear to give off radiation energy, and planets being mildly exergetic playgrounds for several types of energetic conversions.

c. Trajectory of known universe
It has been thought that the universe is accelerating due to dark energy, so that in the distant future galaxy clusters would become isolated from each other, limiting the light cone of useable energy even further. Recent studies are casting some doubt on this by considering relativistic differences between dense and sparse areas of the universe. The apparent acceleration may be caused by time dilation, meaning that travelling through the mostly empty parts of the universe is quicker than travelling through the parts filled with galaxies. If true, this would allow life to spread indefinitely, or at least until stars start to burn out and particles start to decay.

d. Complexity
Given these universal constraints we can devise a measure of what we care about. The first component being complexity. There are like a million different definitions of complexity but one way to think about it is as the amount of information required to describe a system completely. This is fairly intuitive for humans as we tend to spend less time describing rocks than each other and are typically overwhelmed when asked to describe society. Society is more complex than humans and humans are more complex than rocks. Rocks are usually thought to be less interesting than societies. Adding another rock to the universe does less for utility than does adding another society.

However, it is useful to distinguish between complexity and complication. Adding complication for the sake of complication is cumbersome and fragile. Rather, complexity emerges from the relationship a system has to the whole. A product or program can be complicated indefinitely but if it doesn't relate to the whole society in an adaptive way, it will eventually perish, adding no complexity.

e. Adaptability
The next component of utility being adaptability. Some organisms can only survive inside other organisms, some can survive in certain regions on earth, some can survive where there is light, some the deep ocean, and so forth. Biology is limited to a particular temperature and pressure range. Too much radiation can destabilize genetic processing. Machines on the other hand typically have a higher tolerance for extreme temperatures, pressures, and radiation levels. Thus, in general machines have more adaptive potential than biology. Though this potential is yet to be fully realized as machines can still be quite fragile in a variety of circumstances.

Higher adaptability allows for a more complete colonization of the light cone. There will always be an upper bound given that you can only get so close to a black hole or star and can only survive so long in the void of space without access to concentrated exergy.

f. Valence
Last but not least, valence is a way to account for positive and negative experiences. It is possible that zero valence machines of high complexity and adaptability could pervade the light cone devoid of life. Though most people would not find much utility in this outcome (or even be around to utilize it :p). Rather there must be an experiencer that cares about what happens.

Valence is a subjective measure of how good or bad an experience is from the perspective of an autopoietic system. A system that can signal high valence authentically is of greater utility than a system that is inauthentic or that authentically signals low valence. Normalizing valence across humans or species is very difficult and probably unnecessary. As people should be treated similarly regardless of their valence. For instance, perhaps it should be up to a negative valence person to signal their desire for non-existence via assisted suicide. Society as a whole maximizes utility by supporting the growth of all persons towards higher positive valence. Admittedly, it may be impossible to know if a system signaling valence other than zero actually has any experience at all (i.e. philosophical zombies). This will be touched on in the next chapter.

g. Utility
Putting it all together: utility = complexity * adaptability * valence

Presuming that complexity and adaptability can be between 0 and 1, and that valence can be between -1 and 1. Thus, if any of complexity, adaptability, or valence is 0, then no utility exists. If valence is negative, then utility is negative, and non-existence is preferred.
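
Read literally, the formula is just a clamped product; a sketch, with the hard part (actually measuring the inputs) waved away:

Code:

# Hypothetical sketch: utility = complexity * adaptability * valence,
# with complexity and adaptability in [0, 1] and valence in [-1, 1].
func utility(complexity: float, adaptability: float, valence: float) -> float:
	var c := clampf(complexity, 0.0, 1.0)
	var a := clampf(adaptability, 0.0, 1.0)
	var v := clampf(valence, -1.0, 1.0)
	return c * a * v

Any factor at zero zeroes the whole product, and a negative valence flips the sign, matching the reading that non-existence is then preferred.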

It is difficult to actually measure utility for our society for instance. Though comparing the extremes of what is possible is a worthwhile exercise.

4. Valence physics

What does biology share with machines? How do they differ?

a. Substrate independent hypothesis
Although I think this hypothesis is ultimately wrong, it is a helpful starting point for speculation. It is the idea that substrate doesn't matter, that carbon-based systems should be treated the same as silicon-based systems of comparable complexity and adaptability. The structure of information processing is all that matters, in essence. The reason I think this is wrong is because information processing is inherent in the structure of substrate. There is a sense in which computation can be abstracted away from the substrate to be constrained by computational laws. Though if quantum effects are considered then these "laws" may be missing something big (or rather a bunch of small differences added up to a big difference).

b. Biological and artificial neural nets
At a high level, biological and artificial neural nets are similar. After all, artificial neural nets were inspired by biological neural nets. Both are composed of neurons in a network that fire and wire together. Ultimately pretty simple, yet they have proven universally capable. Neurons can combine into universal function approximators that can then be connected together to form any imaginable logical circuit to solve any kind of problem.

At a low level, biological neurons obviously have a lot more going on. Brains are packed full of chemical and electrical processing. It is clear that this extra energy is doing something, but it is not quite clear what the overall effects are on thinking.

c. Particle-wave duality
Computers work so reliably by engineering away the particle-wave duality. Transistors are in one state or another and never a combination. Deterministic parts add up to a deterministic machine. Complete predictability and programmability.

On the other hand, a bag of chemicals like the brain cannot be modeled with high fidelity by a bunch of deterministic parts because many of them are small enough to be subject to quantum effects. It is generally thought that these effects flicker in and out of existence very quickly and so may cancel each other out. It may also be the case that certain structures such as microtubules stabilize these effects for long enough to make a major difference in the aggregate. We just don't know yet but are currently piecing together the puzzle.

d. Dry versus wet computing
Generalizing a bit, it may be the case that dry systems like computers can never have non-zero valence no matter their complexity. Perhaps wet (but not too wet) systems where quantum effects can trickle up are the only systems with non-zero valence. This wouldn't exclude silicon-based substrates but would require a completely different architecture. This project may also nullify much of the adaptive potential of dry systems.

Using this principle, it may even be the case that astronomical bodies with liquid or gaseous activity are experiencing. To exclude geologically active planets, stars, and galactic dust we can apply an additional principle: systems with only a single apparent control layer shall not deviate from zero valence. The idea being that autopoietic systems like life appear to nest control loops like cortex -> limbic -> cellular (could be defined more deeply). Planets, stars, and galactic dust have causal pathways, but these can be modeled by 1-loops, controlled by one equation basically. Whereas organisms are modeled more accurately with n loops operating at various scales from environment to body to cell.

Though at a type 1 civilization status, society couples with the earth requiring an extended control loop. At type 2: society couples to a star cloud centralized on the star. Type 3: a galactic cloud centralized on the black hole.

5. Future utility

Various equilibriums are possible between machine and biology. Which should we aim for based upon a volutionary analysis?

a. Oracles
In this trajectory, machines never quite reach the point of being widely automated (below 10x human population). Instead humans or human descendants stay in close proximity to machines used as oracles that can do some action and anticipate what the universe might look like in our light cones. Valence never poses much of an issue here as the machines do not reach autopoiesis.

b. Automata
Going a step further, machines may become autopoietic and capable of independent evolution. Though, human-likes guide this evolution in their favor. Machines are engineered so as to minimize the possibility of non-zero valence. Moral status belongs strictly to human-likes. A master-slave relationship that has the potential to be reversed when the balance of power shifts and cold automata cultivate us for some odd reason.

c. Inter-species federation
If the machines are autopoietic and engineered for non-zero valence (either by the creators or themselves), then this third scenario may result in a trajectory where human-likes and machine-likes are on equal footing in some kind of egalitarian society. Perhaps with pure machine autopoiesis being bridged to pure human autopoiesis by intermediate cyborg autopoiesis.

d. Computronium
Lastly, the fate of biology and machine might completely intertwine into a protocol for turning as much matter as possible into computational substrate. Acting essentially as a scalable superorganism that is completely internally aligned with itself. This could hypothetically be either non-zero valence or zero valence, or some combination.

7Wannabe5
Posts: 10706
Joined: Fri Oct 18, 2013 9:03 am

Re: Daylen's Journey

Post by 7Wannabe5 »

The wet vs. dry computing is an interesting spectrum. I don't think becoming autopoietic necessarily implies independent evolution absent mutation and/or motivation. To the extent that motivation is in alignment with human desires, the process might be roughly analogous to cross-species pollination dependency or artificial insemination of livestock. It seems unlikely that it would ever be in the interest of the human species to promote machine replication motivated and maintained simply by energy acquisition.

daylen
Posts: 2646
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Daylen's Journey

Post by daylen »

It is possible that an autopoietic factory that does not vary at all in hardware could still be capable of reproducing itself on earth and beyond. Though, adding variation lower in the stack would certainly lead to more interesting evolutionary trajectories. The motivation is likely to be in alignment with humans.. at first anyway. Machine evolution could happen much quicker than biological evolution, leading to an alien-like quality before we can say veto! I wish I could say that we will not be dumb enough to construct a bad automata that simply loses track of our true motivations in the midst of short-sighted energy optimization, but it seems plausible. A slightly better optimization being to spread over the galaxy finding potentially habitable exoplanets to geoengineer until we arrive (or even recreate ecosystems from earth in a lab). We are not great in space, but if we can find a way to send colony ships across the stars then this would further smooth the extropic/entropic curve of the universe. Or maybe we should just stay within our solar system for a while and give the rest of the galaxy time to develop on independent paths to potentially cross-fertilize with us millions of years from now.

7Wannabe5
Posts: 10706
Joined: Fri Oct 18, 2013 9:03 am

Re: Daylen's Journey

Post by 7Wannabe5 »

Who is going to give birth to all the human babies necessary to colonize space?

daylen
Posts: 2646
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Daylen's Journey

Post by daylen »

The Amish, the rich, and mechanical wombs apparently.

Though, it may be the case that a societal phase change is underway where cities become less attractive, and a decentralized communal network emerges that reinvigorates humanity while we increase bandwidth between our collective cortex and the cloud to virtualize the economy and crawl the solar system for a few hundred years building up that pop.

7Wannabe5
Posts: 10706
Joined: Fri Oct 18, 2013 9:03 am

Re: Daylen's Journey

Post by 7Wannabe5 »

That sounds good, but I think many of us prefer modular and resilient as well as decentralized. The fifty humans you know in your current watershed, the fifty humans in your family and/or historical/current intimate circle as currently scattered or near, and the fifty humans in your internet span. I suppose the open question being how many of our Dunbar number spots we're willing to open up for AIs? I'm thinking maybe one colleague level spot in my internet span currently. Solar powered garden robot might fill one watershed colleague spot in the future. And I believe that any AI able to provide a reasonable degree of assistance with elder care will be gladly welcomed into many family circles, inclusive of my own, as soon as made available at reasonable price point, which might approximate 1 Jacob/year. Sad but true, this morning my 84 year old mother kept trying to wake up her Alexa by calling out her human caregiver's name which almost rhymes with Alexa repeatedly.

Theoretically, a human is in your Dunbar circle if you would feel comfortable joining them for a beer uninvited if encountered at a bar. So, maybe the concept does not (yet) translate well to AIs? Or it could be something like if you would feel comfortable asking that AI to charge your phone due to second degree relationship. Or if it was an AI refrigerator, the second degree relationship might warrant asking it for a beer, and it would remember what kind of beer you like and keep it stocked in proportion to likelihood of encountering you. If it was really intelligent, it would know that its first degree human was trying to date you, route itself to an AI in your circle, and pre-determine the sort of beer to stock before you even confirm the date. Or it could coordinate customized fermentation of beer-like substance from local weeds with your solar powered garden robot. Yes, I think gossiping about their humans amongst themselves will be an early adaptation.

daylen
Posts: 2646
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Daylen's Journey

Post by daylen »

Yeah, there are obviously a lot of details to flesh out. I presume that modularity would naturally emerge from within and without the watershed, familial, and internet levels of organization. Perhaps supported by a holarchy of DAOs that allow humans to move around relatively fluidly. A move towards liquid democracy with some mechanism to amplify expert voices and incorporate prediction markets. Watersheds specializing to some extent and aggregating into bioregions and so forth. City networks approaching something like a cybernetic collective that expands virtual Dunbar.

7Wannabe5
Posts: 10706
Joined: Fri Oct 18, 2013 9:03 am

Re: Daylen's Journey

Post by 7Wannabe5 »

Yes, and legal status for complex entities such as watersheds and other species to counter that of corporations. I can see this as acting as a driver towards planetary expansion absent much human population pressure. For example, planets terra-formed as historical re-genesis re-enactments of various species-states of Earth. Heck, given endless energy and artificial wombs, AI could even re-inhabit eras of Earth with genetic likenesses of the particular humans alive at the time. It's probably already reasonably possible to choose to give birth to Benjamin Franklin's late born identical twin; the genetic reconstruction would be trivial for AI, but we are as of yet loathe to cross that line.

jacob
Site Admin
Posts: 17116
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77
Contact:

Re: Daylen's Journey

Post by jacob »

7Wannabe5 wrote:
Sat Jan 11, 2025 3:06 pm
Who is going to give birth to all the human babies necessary to colonize space?
The colonizers. Shipping meat robots out of gravity wells is energetically expensive, so new meat robots are best made at the point of use.

Otherwise, metal robots are pretty much superior to meat robots in pretty much every aspect of space travel except "going uh and ah", writing Vogon poetry, and creating angsty space operas.

7Wannabe5
Posts: 10706
Joined: Fri Oct 18, 2013 9:03 am

Re: Daylen's Journey

Post by 7Wannabe5 »

Yes, human females have historically exhibited tendency to increase fertility rate in open frontier situations, but I wonder if this still applies at current level of distributable technology/affluence? By analogy, although the aristocracy of the 18th/19th century was affluent, having many children was an asset due to 1) securing line of inheritance, and 2) forming powerful inter-familiar connections through eventual marriage of children. Still, even given wet-nurses and nannies, educated and affluent women of this milieu would not infrequently attempt some limit on number of pregnancies they chose to endure and/or practice polyamory after providing an heir and a spare. I suppose some form of human head count homesteading grant policy might motivate fertility. Something like each human upon birth being granted X acres and Y AI terraforming robots. Although, matriarchy would likely make more sense in this model since mechanical robots would be well-suited for most traditionally male-assigned work, and it would be less expensive to just ship semen for insemination purposes. So, would pretty much depend upon female demand for male consorts, I suppose.

daylen
Posts: 2646
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Daylen's Journey

Post by daylen »

Some nuances on my grand scifi vision

Cosmology
I may have gotten a little too excited about that time dilation finding. It may not be able to account for dark energy completely. The general idea is that the expansion appeared to start accelerating around the same time that matter started clumping together into a more inhomogeneous structure, while traditional cosmology presumes a homogeneous structure at scale. Perhaps general relativity dilates the voids in-between matter enough to make the expansion merely appear to accelerate, resolving the dark energy question (so-called timescape cosmology). It would be an elegant solution; however, I defer my amateur skills to future physicists.

I always thought it would be pretty interesting if we lived in a closed geometric universe that eventually crunches back in on itself. It would be so boring if galaxy clusters just drifted away from each other forever leaving nothing to see!

Singularity spiral
The autopoietic spiral from factory networks to factories to robots to cells to nanites will run into physical constraints. Hard to say when exactly. Carbon-based biology is the only evidence we have of autopoiesis, and the number of molecular combinations explodes as you go up in scale. While some arbitrary macro-molecular structure may be in accordance with the laws of physics, it very well may not be engineerable, or not efficient to engineer. There are several promising materials such as carbon nanotubes and graphene that could form a backbone of sorts. We are not very good at producing these at scale yet, though we are good enough to reinforce traditional materials economically.

Exergetic Steps
The process of going from wood to coal to oil to nuclear is hard to reverse. Each step opens up economic potential that gets baked in over the decades to support a larger population and standard of living. Renewables serving as poor substitutes for oil in many circumstances, returning less energy per energy invested. Presuming a controlled degrowth is not feasible, including some more nuclear into the mix seems necessary to avoid collapse. This is not an easy feat given the expertise and time required to build and maintain these plants. Innovation in small modular reactors could significantly improve the situation. The economy as a whole does seem to be pretty good at figuring stuff out when there is enough incentive. Either way, a significant portion of the world is likely to suffer from this overshoot when oil prices rise and militaries hoard what is left to protect themselves.

World order
-> 17th century Dutch empire -> 18th, 19th century British empire -> 20th century American empire -> 21st century Chinese empire -> 22nd century African empire?

Perhaps highly autonomous warfare can serve as a proxy for devastating nuclear war. Nuclear war is bad for business so it may be in the best interest of the rising power to demonstrate force through sheer economic output of autonomous robotics, hence Chinese drone light shows :p. China has its own problems but still has phenomenal growth and production capacity. It may be about time for a changing world order. See Ray Dalio's work.

In the future, perhaps Africa could reach a tipping point of infrastructure that stabilizes political systems enough to transform the geography into an economic powerhouse. Africa's vast size, lack of navigable rivers, lack of ports, lack of unification, disease, and so forth have been holding it back from its economic potential. With enough geoengineering this might change and further push the world as a whole closer towards a type one civilization.

Next-gen operating systems
Transformers will likely augment the OS landscape to provide more hands-off options. This has been happening gradually with the internet of things and wearable electronics, but these have been held back by agent competency. As agents flood in over the next year or two, their competency will improve and they will start to take over more and more tasks that were traditionally done by keyboard and mouse.

This will further integrate with VR/AR to provide more immersive and sharable experiences. Overlaying the virtual with the real into a "metaverse" of possible interactions.

Tokenomics
As the types of tokens multiply and transformer-likes become more widespread, the vision of programmable crypto contracts may finally start to gain traction in the mainstream. Instead of expending compute cycles just to mine a highly volatile currency, compute might also go towards training or testing models. Intelligence may become, at least in part, decentralized. Various localities might further take control of the information flowing in and out. Perhaps forming more competent cybernetic boundaries. Also see the network state: https://thenetworkstate.com/

Deep multiculturalism
Green in spiral dynamics does not appear to have reached its pinnacle. A shallow acceptance of cultural differences is pervasive in many areas, though this acceptance tends to run only so deep. The connectivity and relative wealth of the world have decreased group selection pressure, causing productive drift. Though, this attractor towards monoculturalism may be partially responsible for declining birth rates and increased polarization, ironically. A deep multiculturalism might resolve this through an increased acceptance of cults experimenting with core values. Combined with the freedom to move between cults, selection will filter out worse cults in a similar way to how economics filters out worse firms. Better cults will reveal themselves and be looked up to as examples to follow.

Futarchy
Robin Hanson has speculated about a governmental structure that aims for a goal (e.g. going to the stars). This goal, or web of goals, would need to be important enough that the majority of humanity would be willing to make sacrifices for it when culture inevitably drifts off course. Aside from climbing the energy ladder from planets to stars to black holes, existential threat management could help get people on board. Partially unifying east and west.

Negative valence
I mentioned before that perhaps assisted suicide could be an option, though this would likely best be used as a final resort for the terminally ill or rare cases of incurable depression (or cult specific). We are getting better at addressing mental health and will likely continue to improve. Often the environment is to blame, so more freedom to try out new cultures should help. There are also happiness helmets or neuralinks; these seem inevitable at some point.

daylen
Posts: 2646
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Daylen's Journey

Post by daylen »

Cyclic economics
As resources here on earth become more constrained, incentive to recycle effectively will increase. Perhaps leading more towards what is called a product-as-service model where orgs recycle their own products or partner with specialized decomposers. Currently, we are quite bad at recycling but there is much room for improvement. Further into the singularity spiral I would expect recycling to become more efficient as technology starts to take care of itself and adapt to various environments. In addition, maker spaces widely dispersed across watersheds or small towns could combine well with decomposition and recycling efforts.

Planetary resources would become less constrained as more economic activity moves into the surrounding solar system. Finalized products that require significant economies of scale could be dropped into the ocean or sent down space elevators. A Dyson sphere forming around the sun to beam laser energy to vessels, gas giants acting as gas stations, and asteroid belt mining.

Quantum-hardened security
There is still some time, like a decade or so, to figure this one out so I am not too worried. We are making progress on alternative encryption techniques. Let's just hope that P doesn't equal NP and white hats outpace black hats. Legacy systems beware!
