Future of Artificial Intelligence

The "other" ERE. Societal aspects of the ERE philosophy. Emergent change-making, scale-effects,...
Post Reply
daylen
Posts: 2528
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Future of Artificial Intelligence

Post by daylen »

Where is artificial intelligence headed? Where might the next breakthroughs come from? Have we already picked all the low-hanging fruit? How might AI change our lives in a high-energy future?

My thinking is that AI has developed past the embryo stage into its early childhood, in the sense that most of the formal mechanisms are already being utilized. Advancement will continue slowly as data quality/quantity improves and as our mental models enable messier cradle design. Cradles 'contain' and 'constrain' newly born AIs as they merge into the world. Biological metaphors will become more and more useful in guiding the developmental process of these artificial beings.

There are real concerns for AI safety, and caution should be exercised, though I am becoming more optimistic about our ability to handle the AI alignment problem. It actually seems quite difficult to build/grow an AI that is capable of strategic flexibility and is also 'evil' by our standards. Part of this intuition comes from exploring the concept of evil more deeply at its roots. Though, this is a topic for another time perhaps.

Image

The above chart is one way to frame the requirement of more expansive mental metaphors in the development of AI. As the spiral extends outwards, four phases could be thought to emerge. In no particular order..

1. Qualia Research as socially biased hacking. The epicenter of this paradigm seems to be QRI (the Qualia Research Institute) and the Principia Qualia paper: https://opentheory.net/PrincipiaQualia.pdf. Emphasis here is on exploring and integrating both low- and high-depth states of conscious experience with mathematical isomorphisms (especially topology, mereology, and valence). A simple mapping near the base is the valence triangle..
Image

Principia Qualia also lays out steps towards solving the easy and pretty hard problems of consciousness..
Image

Psychedelics are big in the space as they have this uncanny capacity for dissolving and resolving boundaries of experience.

2. Behavioral Simulation as technically biased hacking. This paradigm encompasses much of the progress currently being made by AI companies. Tesla Autopilot is an example: it has to solve the perception problem, which is the first-person equivalent of third-person behavioral simulation. This kind of thinking is video-game-like in that, so long as the features of the environment are predictable, characters can be controlled reliably in it. It also relates to my personal quest to build genome-connectome-biome models with the Godot engine.

3. Cognitive Sciences as technically biased academics. This paradigm combines the softer approach of cognitive science with the "harder" approach of computational neuroscience. Some players I follow here are John Vervaeke and Yohan John. Curious to see how the soft and hard sides coalesce in the near future.

4. Process Philosophy as socially biased academics. Beginning with Whitehead (who is my new favorite thinker to study), process philosophy takes processes and relations as fundamental instead of substances and objects. Whitehead also understood the role of a telos in setting up a metaphysics that includes findings from quantum theory and relativity theory. Modern science is quite allergic to the concept of a telos or future-attractor (even though such things are not so subtly sneaked in to clarify points). Even science has a purpose, and Whitehead's cosmology recognizes this.

--------------------
In the early stages of AI ascension, hacking and technically biased approaches tend to work quickly. Projecting to the later stages, academics and socially biased approaches will tend to work slowly to flesh out the possibilities. Though, I suspect the influence of 1-4 will be more spiral-like, with minor, discrete leaps in capability upon shifting quadrant focus. What might the details of the spiral be thus far? I don't know.. haven't gotten that far.
Last edited by daylen on Tue Oct 11, 2022 4:21 pm, edited 2 times in total.

User avatar
Ego
Posts: 6357
Joined: Wed Nov 23, 2011 12:42 am

Re: Future of Artificial Intelligence

Post by Ego »

I believe it is already being applied to economics and politics, which would explain the unexplainable moves we've seen (i.e. the confounding way we pulled out of Afghanistan), and I know it is being applied to war.

The Catch-22 is that the better AI gets, the better it will be at making it seem that AI wasn't involved.

daylen
Posts: 2528
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Future of Artificial Intelligence

Post by daylen »

Ego wrote:
Tue Oct 11, 2022 12:22 pm
The Catch-22 is that the better AI gets, the better it will be at making it seem that AI wasn't involved.
Right, and this will go hand in hand with the blurring of organism and machine boundaries. Basically, I think there is no inflection point at which AI becomes general and starts rewiring itself like in the book "Superintelligence" by Bostrom. Rather, the collective network of machines that are trained upon and dependent upon the organism networks will peer into us.. and we will peer right back.. From there a lot can happen, but the staring contest will couple us to them such that the genocide of either side becomes effective suicide for the aggressor. It is possible that a lone machine created by a lone research group will invent the silver bullet to AGI like Bostrom imagines, but this seems like a slim probability to me given how distributed this research is and all the gotchas along the way that incentivize collaboration.

daylen
Posts: 2528
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Future of Artificial Intelligence

Post by daylen »

It would seem that the middle math section of the (metaphysics, math, interpretation) procedure above is all that is needed for ill-intent actors in the organism networks to weaponize AI. Metaphysics and interpretation are required to bind sufficiently to the algorithmic developmental process, allowing for the foundations of a more intimate organism-machine relationship that covers the biosphere with something like synthetic technocracies rooted in other human structures sustaining a high-energy future.. for a while at least.

With just the math, the whole project is susceptible to an inflexible strategy that manifests as some form of totalitarianism, which is likely to just push collapse a little down the road (entering a low-energy future) while destroying a bunch of our goodies in the process.

I should probably also note that some collapse of the true, the good, and the beautiful is likely required to flesh out the metaphysics and interpretation ends of the math, as is apparent in the valence chart above. Though, splitting them up within context is vital to any humanistic (and non-humanistic) pursuits.

AnalyticalEngine
Posts: 949
Joined: Sun Sep 02, 2018 11:57 am

Re: Future of Artificial Intelligence

Post by AnalyticalEngine »

This may be a bit of a tangent, but we were discussing AI in criminal justice last night at the police department, and it was a different perspective than I normally see. Someone asked whether the PD was going to use AI analytics to predict criminal behavior, and the PD said that they wouldn't because it introduces too much bias into the process. Basically, because you are innocent until proven guilty, public safety is much more of a reactive process (the detectives solve the crime after it happens) than a proactive one (AI predicts a crime before it happens).

It made me think that AI conversations are often framed as if AI happens in a vacuum, but domain knowledge of where the AI is operating may be more important than technical knowledge of the AI itself. In an area where people have constitutional rights, the application of AI is a lot more slippery than in a domain where all you're doing is encouraging people to buy more shoes.

But! What was interesting was that the PD does use pretty advanced analytics in deciding where to deploy more patrol cars or how to draw district boundaries. The technology used in criminal justice actually seems 10-20 years behind Silicon Valley. This means AI might have more applications in criminal justice than we're seeing presently, and how those applications manifest may differ from existing AI domains.

7Wannabe5
Posts: 9369
Joined: Fri Oct 18, 2013 9:03 am

Re: Future of Artificial Intelligence

Post by 7Wannabe5 »

How do humans move from algorithm/procedure to concept?

daylen
Posts: 2528
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Future of Artificial Intelligence

Post by daylen »

It may be possible to build web3 meshes of ecologically informed coordination processes that distribute micro-incentives across some virtual space that organisms access to cast micro-votes, with such structures drawing attention to how future attractors like starvation, heat, and crime waves should map to diffuse responses that learn.

The high-energy future may see some erosion of critical expertise in favor of domain flexibility that allows fast- and slow-acting responses to environmental changes. Perhaps domains can re-imagine how they fit under the larger umbrella of a planet-wide ecology mediated by artificially sentient meshes of efficient negotiation engines that are self-restricting via technocratic law (which operates on a more rapid timeline than the physically instantiated systems can comprehend).

daylen
Posts: 2528
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Future of Artificial Intelligence

Post by daylen »

@7w5 Hard for me to do much useful with that question except to symmetrize like..

algorithm -> concept -> algorithm
or
concept -> algorithm -> concept

Then through this symmetric relationship that is closed in time, asymmetrical balancing of algorithm vs concept adds depth to their relationship.

daylen
Posts: 2528
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Future of Artificial Intelligence

Post by daylen »

If we take the algorithm to be the math, then the concept transforms from metaphysical to interpretive upon applying it.

7Wannabe5
Posts: 9369
Joined: Fri Oct 18, 2013 9:03 am

Re: Future of Artificial Intelligence

Post by 7Wannabe5 »

I was just thinking about how teaching mathematics to humans procedurally vs. conceptually might relate to AI. Algorithm can build on algorithm on algorithm, scaffolding all the way up, and algorithm can build on concept, but concept can only build on concept...

Never mind. I am currently spending approximately 12 hours/week attempting to teach algebra to students who still frequently resort to adding on their fingers when deprived of access to a calculator, and approximately 12 hours/week trying to teach myself how an operating system works. So I am wondering how/if the leap from, for instance, the ability to perform multiplication based on the ability to perform addition, to the ability to comprehend multiplication based on comprehension of addition, occurs or seems to occur in humans, or could occur in an AI system.
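The procedural scaffolding described here can be sketched in a few lines (my own illustration, not from the thread): each operation is defined purely as a loop over the one below it, so a system can execute the whole tower correctly without any concept of what the higher operations mean.

```python
# Procedural scaffolding: each operation is nothing but repetition of the
# one below it. Executing this correctly requires no "concept" of what
# multiplication or exponentiation are.

def add(a, b):          # taken as the primitive ("adding on fingers")
    return a + b

def multiply(a, b):     # multiplication as repeated addition
    total = 0
    for _ in range(b):
        total = add(total, a)
    return total

def power(a, b):        # exponentiation as repeated multiplication
    total = 1
    for _ in range(b):
        total = multiply(total, a)
    return total

assert multiply(4, 3) == 12
assert power(2, 5) == 32
```

Whether a learner that can run these loops ever notices, say, that multiply distributes over add is exactly the procedural-to-conceptual leap in question; the concept appears nowhere in the code.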

IOW, how does an AI come to ask itself "Why?"

daylen
Posts: 2528
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Future of Artificial Intelligence

Post by daylen »

Converting how's into why's may require a cosmological model socially constructed by participating organisms (i.e. process philosophy) that each virtual node holds as an evolving microcosm of the relevant outer world/cosmos in deep time.

daylen
Posts: 2528
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Future of Artificial Intelligence

Post by daylen »

I think computers based on Turing machines, or even simple quantum machines, lack internal states complex enough to feel for paradoxes or subtleties in how they flow through time. Perhaps this limits "Why?" to extremely brief moments of bonding confusion among atoms moving relative to each other (i.e. an atomic crisis). In some sense only the atoms need to know of each other to transition through programmed mechanical movements. Perhaps an argument could be made that the transistor level is capable of slightly more complex self-reflection that results in more intense confusion. Though, all the way up there is no confusion about which transistors go where, allowing us to use them as deterministic machines.

Within qualia research, something like integrated information theory may serve as a test for the degree of consciousness in various configurations of matter. I suspect there is no shortcut to creating consciousness other than to develop a sophisticated cradle that nests Turing machines within each other with carefully engineered purposes revealed through a corresponding nesting of tests (i.e. advanced Turing tests constraining Turing machines across scale), gradually becoming less distinguishable from an organism. Until perhaps somehow we manage to construct a machine that reproduces in a messy way, allowing uncontrolled selection processes to step in. Though, I would be surprised if we were still using silicon instead of carbon at that point.

What exactly constitutes a "nesting" is up for analysis. Virtual machines inside base machines don't count, as they are embedded on the same tape. Rather, I think there would need to be some indetermination in how one layer is nested inside another (all the way up), pushing atomic self-reflection up to a more encompassing layer of confusion about self and other.
Last edited by daylen on Sat Oct 22, 2022 8:19 pm, edited 1 time in total.

User avatar
Ego
Posts: 6357
Joined: Wed Nov 23, 2011 12:42 am

Re: Future of Artificial Intelligence

Post by Ego »

daylen wrote:
Wed Oct 12, 2022 9:05 pm
Until perhaps somehow we manage to construct a machine that reproduces in a messy way to allow uncontrolled selection processes to step in.
Along those lines, a few days ago DeepMind announced that the AI project AlphaTensor was successful at creating a better version of itself.
https://www.deepmind.com/blog/discoveri ... lphatensor

Azeem Azhar comments from Exponential View:
AlphaTensor has improved a foundational aspect of what it is to build AI systems such as… AlphaTensor. Foundational. It's improved on a 53-year-old piece of maths, Strassen's algorithm. The approach unveiled by AlphaTensor is 10-20% faster than previous methods, according to DeepMind's boss, Demis Hassabis. (Strassen's method, the previous best, multiplied two 4-by-4 matrices in 49 multiplications. AlphaTensor has found a 47-multiplication method.)

Matrices are at the heart of artificial neural networks. So, is this potentially a case of recursive self-improvement, that is, making improvements in the ability to make improvements? Recursive self-improvement could lead to the "intelligence explosion" proposed by mathematician I. J. Good.
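For concreteness, here is a minimal sketch (mine, not DeepMind's) of the baseline AlphaTensor beat: Strassen's 1969 trick multiplies 2x2 matrices with 7 scalar multiplications instead of the naive 8, and applying it recursively to 4x4 blocks gives the 7 * 7 = 49 multiplications that the 47-step scheme improved on.

```python
# Strassen's scheme multiplies two 2x2 matrices with 7 multiplications
# instead of the naive 8; applied recursively to a 4x4 product (treated
# as a 2x2 matrix of 2x2 blocks) this gives 7 * 7 = 49 multiplications.

def naive_2x2(A, B):
    """Textbook 2x2 matrix product: 8 scalar multiplications."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return [[a*e + b*g, a*f + b*h],
            [c*e + d*g, c*f + d*h]]

def strassen_2x2(A, B):
    """Strassen (1969): the same product with 7 scalar multiplications."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert strassen_2x2(A, B) == naive_2x2(A, B)  # both give [[19, 22], [43, 50]]
```

Recursing further (8x8, 16x16, ...) multiplies the count by 7 at each level, which is why shaving 49 down to 47 at the 4x4 base case compounds at scale. (DeepMind reported the 47-multiplication scheme for arithmetic modulo 2.)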

daylen
Posts: 2528
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Future of Artificial Intelligence

Post by daylen »

Pretty cool, that could have some major ripple effects. I would call that clean reproduction in that the mutation process has no chance to change the hardware. Messy reproduction could mutate the hardware-software relationship itself.

I think intelligence explosions like those explored in Superintelligence perhaps require an agent embedded in the cosmos with a purpose. A black or grey box that barely interacts with the outside world has little chance to relate to it meaningfully: what goes in goes out. Giving a box access to the internet just mirrors us into it. Attaching sensors is a start towards embodiment, yet something like a flesh-and-bone body is likely required for the efficient movement and interaction needed to develop the relationships necessary to deduce/induce purpose with intellect.

daylen
Posts: 2528
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Future of Artificial Intelligence

Post by daylen »

Though, I think the last thing we actually want is a conscious AI.. until we figure ourselves out a bit better, at least.

Really, I think it is up to us why we want to do whatever we are going to do. Humanity may gain a Pandora's box and an oracle, and what we do with these superpowers remains an open question with potentially incalculable consequences.

User avatar
Ego
Posts: 6357
Joined: Wed Nov 23, 2011 12:42 am

Re: Future of Artificial Intelligence

Post by Ego »

daylen wrote:
Wed Oct 12, 2022 9:57 pm
I would call that clean reproduction in that the mutation process has no chance to change the hardware. Messy reproduction could mutate the hardware-software relationship itself.
Well, it could be argued that we have already passed that Rubicon.

As inventors of AI, we are the biological hardware. Without AI we would have been unable to develop the biological program that many of us injected into ourselves.

daylen
Posts: 2528
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Future of Artificial Intelligence

Post by daylen »

This is part of the blurring of boundaries that occurs when we stare into the progressively alluring glass ball.

Though for now it is quite easy to discern between silicon- and carbon-based hardware by external look, allowing us to cut off the relationship gradually to check for sharp drops in communion that may indicate asymmetric agency.

daylen
Posts: 2528
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Future of Artificial Intelligence

Post by daylen »

One potential alignment between machine talk and human talk goes a little like..

finite state machine instance : transware : sensation
context free language instance : software : thought
turing machine instance : hardware : embodied memory or resonant feeling
undecidable instance : wetware : intuition through cellular criticality
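The computability ladder behind this mapping can be made concrete with a toy sketch (my own illustration, not from the thread): a finite state machine recognizes regular patterns with no memory beyond its current state, while even a simple context-free property like balanced parentheses already needs unbounded memory on top of the states.

```python
# A finite state machine has no memory beyond its current state, so it can
# check a regular property like "the string ends in 'ab'", but it cannot
# count: recognizing balanced parentheses (a context-free language) needs
# an unbounded counter/stack in addition to the state.

def fsm_ends_in_ab(s):
    """Three-state machine: tracks only how much of 'ab' we just saw."""
    state = 0                      # 0: nothing, 1: saw 'a', 2: saw 'ab'
    for ch in s:
        if ch == 'a':
            state = 1
        elif ch == 'b' and state == 1:
            state = 2
        else:
            state = 0
    return state == 2

def balanced_parens(s):
    """Needs memory that grows with input depth, beyond any fixed state set."""
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:          # closed more than we opened
                return False
    return depth == 0

assert fsm_ends_in_ab("xab") and not fsm_ends_in_ab("xba")
assert balanced_parens("(()())") and not balanced_parens("(()")
```

Each rung up the list (regular, context-free, Turing-complete, undecidable) strictly adds recognizing power, which is what makes the ware/experience analogy a ladder rather than a flat list.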

Biological evolution may then point towards cellular cohesion being like a wet transware system steering a body that cuts through undecidability with a goto statement... I mean, a state change. Bodies evolve hardware structures or symmetries that initialize memory, along with software to resolve easily computable problems during development.

In some sense, computers hold software inside hardware whereas organisms hold hardware inside software inside wetware. In another sense, the nestings can be inverted so that computational software comes from outside the hardware and so that the solidity of biological organisms contains the impressions imposed with age as well as a deep liquidity of cellular relata.

Another way to think of it is that any stable complex system is going to have entropy working inwards and outwards (negentropic) towards/from some central attractor. Otherwise, what would we pay attention to in order to simplify? Though, the map not being the territory means in this context that it is hard to go from soft talk to hard talk that operates strictly on typed structures instantiated in memory, and even harder to go from hard talk to wet talk due to the chaos of fluid flow(*). Transware is transitory with respect to what states point to and what memories sense. Wetware may be a necessary flesh, adding meat to soften the mechanical bones of sensor and actuator transitions.

(*) Go a little further and air talk is relatively easy in the aggregate but details get lost. Space being a state limit towards the infinite that opens up pockets of free movement or decentering given the respective pole of intuition that selects.

daylen
Posts: 2528
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Future of Artificial Intelligence

Post by daylen »

"Undecidable instances" are also paralleled in the math lit by the axiom of choice which is required to select according to some random distribution a ball out of a bag, a side on a coin, or an instantaneous limit towards a floating point in some pool of fluid transitions that induce (like counting and its derivatives) from simple state maps.

daylen
Posts: 2528
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Future of Artificial Intelligence

Post by daylen »

Another angle is that the wetware is a bundle of perceived imperfections that do not have an earnest place along the transistor tape, machine, genome, or skeleton (solid hardware) and cannot be imitated well with software (malleable code or language). Though, it is difficult to say whether such imperfections aggregate into adaptive value modulated by a transware mediating between hardware mutating intergenerationally with rare punctuated equilibria, software mutating quickly, and wetware mutating extremely quickly.

Post Reply