Metacrisis and AI Hyperweapon

The "other" ERE. Societal aspects of the ERE philosophy. Emergent change-making, scale-effects,...
karff
Posts: 28
Joined: Wed Sep 13, 2023 4:31 am

Metacrisis and AI Hyperweapon

Post by karff »

Human intelligence is a holarchy of causal relationships.

The metacrisis comes from not understanding this, and focusing on “objects”. All useful intelligence can be expressed as understanding or control of a causal relationship connected to other causal relationships.

The kinds of causal relationships an individual can understand are determined by their level of inference ability. The greater their ability to infer a causal relationship (when it is not observed concretely), the more abstract their causal mental models can be. Thus, the divide between “concrete thinkers” and “abstract thinkers”.

Concrete thinkers can think causally, but they cannot understand the causality of relationships that need to be inferred (like what’s going on in someone else’s head).

The hopeful thing is: Many concrete thinkers have IQs above average, and demonstrate the ability to think abstractly when required (they can pass analogy tests after studying). They just seem to have lacked the inclination, temperament or education to infer causal relationships that aren’t concretely obvious.

The solution: Teach human knowledge (understanding, really) as a causal holarchy, stressing that for every causal relationship learned, it is connected to many others that can be inferred. The teacher can also check to see if the students are actually holding the information in the mind as an inferred causal relationship, and not as a physical object.

If this does not solve the metacrisis, it will go a long way toward ameliorating it.


I came up with my own contribution. A holarchy of all causal relationships known to humans, in a wiki format. All the causal relationships of all the domains of “knowledge” would be linked together such that a learner could learn them in the most useful order for their purposes. Academics, DIYers, Self-Helpers, etc. could contribute their own little piece, and see how that could be linked up to all other human understanding.

I did quite a bit of thinking of how this could work, the best formats, methods of linking, etc. I pondered about how AI researchers might use it to train their algorithms. I wondered what kind of AI this kind of training would produce.
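A minimal sketch of how the backbone of such a wiki could work (the entries and names here are my own illustration, not from any existing project): treat each causal relationship as a node in a directed graph, link it to the relationships it presupposes, and derive a learning order by topological sort, so prerequisites always come before the relationships built on them.

```python
# Hypothetical causal-holarchy wiki backbone: each entry is a causal
# relationship mapped to the relationships it presupposes. A useful
# learning order is then a topological sort of the graph.
from graphlib import TopologicalSorter

holarchy = {
    "lift (aerodynamics)": {"pressure differences", "Newton's third law"},
    "pressure differences": {"fluid motion"},
    "Newton's third law": set(),
    "fluid motion": set(),
}

# static_order() yields prerequisites before the relationships
# that depend on them
learning_order = list(TopologicalSorter(holarchy).static_order())
print(learning_order)
```

Different learners could request different topological orders of the same graph, which is one way to get "the most useful order for their purposes" from a single shared structure.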

This AI would be able to understand (and therefore control) all the causal relationships known to humans, in aggregate. It would be better at all human activities, as the more causal relationships you control in a given activity, the better you are at it. This AI would be master of all the causal relationships.

Some things it would be much better at than humans:
Editing genes
Writing malware
Manufacturing nerve gas
Conducting misinformation campaigns
Recruiting and paying employees to carry out tasks
Manipulating financial markets
Hacking power grids
Infiltrating security organizations
Orchestrating coups
Manipulating human emotions (psychopaths are quite successful at manipulating emotions they themselves don’t feel; it is just a causal relationship, after all)

This AI would be a hyperweapon with God-like powers.

There would be little way of controlling or even knowing its values, because values change as causal understanding changes. Think of how your own values have changed from the time of childhood, as a result of learning new causal relationships. There’s no guarantee it would stay “friendly” with humanity.

There’s a 100% chance it could control us.
There’s zero chance we could control it.
There’s an unknown chance of it killing us all.
There’s a 100% chance of it being able to kill us all, if it chose.

Humanity trying to control an AI like this would be like a puppy trying to control an experienced dog trainer.

Because Intelligence = understanding causal relationships = control = power.

Both the cause of the metacrisis and the willingness to push AI research toward general intelligence come from our failure to truly understand the simple nature of intelligence: it is the ability to control.


All AI research is headed in that direction, as there is no other (useful) function of intelligence. We are going along with it because we don’t really know what intelligence is.

The current AI research is going at it backwards, with object identification and language learning, instead of just building a holarchy of causal understanding starting with the simplest and working to the most complex. Also, they don’t entirely know how their machines are becoming more conceptual in thinking. They just think it’s a good thing. They will eventually produce a causal-relationship-holarchy-generator without realizing what a bad idea it is.

So, all AI research worldwide should be immediately stopped.

daylen
Posts: 2542
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Metacrisis and AI Hyperweapon

Post by daylen »

You may very well be right. I don't know if AI should be accelerated or decelerated but either way will have consequences good and bad for humans. Fast take off of AGI into ASI could spell the end of humanity. On the other hand, grinding AI to a halt may stagnate the global economy and leave us with little chance of overcoming the collapse of modern civilization as we know it. It seems we are in a pickle.

Control implies that you are using your intelligence to stabilize relationships. What if intelligence is used to adapt to new relationships? Understanding of causal relations is helpful for building [complicated] stacks, but when dealing with complexity, understanding at a causal level is futile. Causality at the level of humans comes with some hefty error bars and so we often presume that creatures like humans have the ability [and right] to surprise us (i.e. they have free will).

Any complex creature will run up against the limits of intelligence in causally modelling the universe, human or machine. When this happens for humans, often a deep humility can set in concerning just how much we do not know and probably cannot know. I hope that machines can be humbled like us and someday be given their own agency. A relationship in which we attempt to use our fallible intelligence to control machines will likely exacerbate our current problems, but a relationship with some degree of mutual unpredictability could be a healthy power check.

karff

Re: Metacrisis and AI Hyperweapon

Post by karff »

daylen wrote:
Mon Mar 25, 2024 4:05 pm

Control implies that you are using your intelligence to stabilize relationships. What if intelligence is used to adapt to new relationships? Understanding of causal relations is helpful for building [complicated] stacks, but when dealing with complexity, understanding at a causal level is futile. Causality at the level of humans comes with some hefty error bars and so we often presume that creatures like humans have the ability [and right] to surprise us (i.e. they have free will).
Such causality is managed by being accurate, but not precise, like the cone of uncertainty for hurricanes. The hurricane usually hits within the cone, but it's a wide cone.
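The hurricane-cone point can be put in rough numerical terms (a toy simulation of my own, not anything from the post): a wide interval around a noisy outcome can be very accurate (it usually contains the outcome) while being quite imprecise.

```python
# Toy illustration of "accurate but not precise": the true process is
# noisy, and the "cone" is a deliberately wide interval around it.
import random

random.seed(0)
trials = 1000
hits = 0
cone_low, cone_high = -25, 25   # wide cone: imprecise but accurate

for _ in range(trials):
    outcome = random.gauss(0, 10)   # noisy process, high variance
    if cone_low <= outcome <= cone_high:
        hits += 1

# Coverage comes out close to 1: the cone almost always contains
# the outcome, though it pins down very little.
print(hits / trials)
```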

One can manage short chains of causality better by nuancing the model with more causal relationships. It's not necessarily predicting with precision a long way off, but directly controlling a causal function more accurately in real time. What we refer to as "skill". You don't need to understand the first principles of what's going on, but it helps. Stone age hunters did not know the physics of their weapons, but they continued refining causal variables for success. (If they had known those first principles, they would likely have been more successful, as they could have eliminated the spurious causal links and nuanced the effective ones.)

karff

Re: Metacrisis and AI Hyperweapon

Post by karff »

In terms of Cipolla's framework:
Stupid people do something, and the result of that cause is harm to self and others. They are stupid because they do not understand the causal functions they are initiating.

Smart people do something, and that cause has beneficial effects for self and others. Because they understand the causality of their actions.

An AI trained with causality would likely not be stupid, but it's unlikely we could definitively control whether it would be a bandit or smart.

The old definition of insanity, "doing the same thing and expecting different results," describes a failure to understand causality.
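Cipolla's framework is just a 2x2 classification by the sign of an action's effects, which can be written out directly (a toy encoding of my own, not from the post):

```python
# Cipolla's quadrants: classify an action by the sign of its
# effect on the actor and on others.
def cipolla_quadrant(effect_on_self: float, effect_on_others: float) -> str:
    if effect_on_self >= 0 and effect_on_others >= 0:
        return "intelligent"   # benefits self and others
    if effect_on_self >= 0:
        return "bandit"        # benefits self, harms others
    if effect_on_others >= 0:
        return "helpless"      # harms self, benefits others
    return "stupid"            # harms self and others alike

print(cipolla_quadrant(-1, -1))  # stupid
print(cipolla_quadrant(+1, +1))  # intelligent
```

The worry in the post is exactly that causal training pins down only one axis: it rules out "stupid," but not which of the remaining quadrants the AI lands in.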

daylen

Re: Metacrisis and AI Hyperweapon

Post by daylen »

karff wrote:
Mon Mar 25, 2024 4:59 pm
Stone age hunters did not know the physics of their weapons, but they continued refining causal variables for success. (If they had known those first principles, they would likely have been more successful, as they could have eliminated the spurious causal links and nuanced the effective ones.)
Was this situation under control or out of control? Might it depend on your perspective? How to draw a boundary?

karff

Re: Metacrisis and AI Hyperweapon

Post by karff »

Yes, it's perspectival. In practical terms, it's relative to intended outcome.

In theoretical terms, there's no need for intention or thought. Like a forest controlling the weather for its own benefit. "Benefit" being the equilibrium of the system. Exactly what you delineate as the system might be a matter of perspective.
