Future of Artificial Intelligence

The "other" ERE. Societal aspects of the ERE philosophy. Emergent change-making, scale-effects,...
jennypenny
Posts: 6858
Joined: Sun Jul 03, 2011 2:20 pm

Re: Future of Artificial Intelligence

Post by jennypenny »

Here is Marc Andreessen's take AI Will Save the World, and Paul Kingsnorth's Rage Against the Machine.

I find the debate fascinating and somewhat predictable, although I've been surprised a couple of times. I suspect the debate is moot at this point anyway. The best we can hope for is some kind of MAD where all the prominent actors develop powerful AI simultaneously. If one leaps ahead of the others, it might be fatally destabilizing.

chenda
Posts: 3303
Joined: Wed Jun 29, 2011 1:17 pm
Location: Nether Wallop

Re: Future of Artificial Intelligence

Post by chenda »

Although Marc's argument is a bit scatty, I tend to agree with him. AI is essentially a very advanced abacus; it has no sense of self or awareness that it exists. It just creates the illusion that it does. It has no desire to start a nuclear war or break up marriages or whatever it says it does.

jacob
Site Admin
Posts: 16001
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77

Re: Future of Artificial Intelligence

Post by jacob »

chenda wrote:
Tue Jul 11, 2023 7:44 am
AI is essentially a very advanced abacus; it has no sense of self or awareness that it exists. It just creates the illusion that it does. It has no desire to start a nuclear war or break up marriages or whatever it says it does.
And how is that different from a typical human?(*) In both cases, we know the code that generated it, but not enough about the code to trace back individual outcomes to a particular piece of data. Many humans are just "the average of the humans around them", predictably transferring memetic pieces of information between each other by oral regurgitation. The illusion of independent thought appears only by not paying attention to everything that the person has heard or seen. Deep existential questions remain unpondered. The only sense of self is whether there's a craving for pizza or ice cream. If asked directly, the response will be trite ("I am me") or googled w/o a hint of irony. Humans also have few desires and mostly just follow along---"It seemed like a good idea at the time". Methinks that if we held humans to the same standard as we do the current level of AI in terms of intelligence or self-awareness, it would be hard to be super impressed [by the humans].

(*) In particular, AI is trained on the "human abacus"-output that is the internet. Many humans are trained on the same output.

chenda
Posts: 3303
Joined: Wed Jun 29, 2011 1:17 pm
Location: Nether Wallop

Re: Future of Artificial Intelligence

Post by chenda »

jacob wrote:
Tue Jul 11, 2023 8:05 am
And how is that different from a typical human?(*)
Humans have a sense of self, like all sentient creatures. See the 'hard problem of consciousness'.

jacob
Site Admin
Posts: 16001
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77

Re: Future of Artificial Intelligence

Post by jacob »

chenda wrote:
Tue Jul 11, 2023 8:15 am
Humans have a sense of self, like all sentient creatures. See the 'hard problem of consciousness'.
That [is but an assertion that] doesn't explain the difference. I declare that rocks have a sense of self too, as there's not really anything that would exclude this possibility; it's just very different from the human sense of self.

There are basically two camps on this, which is perhaps best explained by how one sees AI art. If art is seen as relating the human condition to another human (or to oneself as a human), then AI art is basically meaningless because there's no human involved in the relation. (In my opinion, that's somewhat speciesist.) If art is seen as one's own experience of the medium, it is source-independent, and in that case art is something that elicits a reaction in oneself.

Or more crudely ... when it comes to a conversation, does one appreciate it because the other person is human (or a dog but not an AI) or because the conversation is engaging (engaging being whatever it is you value, interesting, surprising, exciting, educational, ...). I'm in the latter camp. I'd rather converse with an engaging AI than the average human, for example.

chenda
Posts: 3303
Joined: Wed Jun 29, 2011 1:17 pm
Location: Nether Wallop

Re: Future of Artificial Intelligence

Post by chenda »

jacob wrote:
Tue Jul 11, 2023 8:42 am
That [is but an assertion that] doesn't explain the difference. I declare that rocks have a sense of self too, as there's not really anything that would exclude this possibility; it's just very different from the human sense of self.
If we take a physicalist view that consciousness is an emergent property of brain function, then it would be unlikely that a rock has any sense of self. If we take an idealist view that matter emerges from consciousness, then the rock probably doesn't actually exist in some ultimate sense; it's just a dream. AI machines, therefore, are simply dreamt-up matter.
jacob wrote:
Tue Jul 11, 2023 8:42 am
Or more crudely ... when it comes to a conversation, does one appreciate it because the other person is human
Yes, absolutely. The fact that they are another person with a sense of self like mine makes it entirely worthwhile to me. Speaking to an AI machine is, to my mind, just false; in some sense it's like a child having a conversation with a doll or an imaginary friend.

jacob
Site Admin
Posts: 16001
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77

Re: Future of Artificial Intelligence

Post by jacob »

chenda wrote:
Tue Jul 11, 2023 9:40 am
Yes, absolutely. The fact that they are another person with a sense of self like mine makes it entirely worthwhile to me. Speaking to an AI machine is, to my mind, just false; in some sense it's like a child having a conversation with a doll or an imaginary friend.
Therein lies the difference [of perspective]. It depends on which "vibes"/vibing someone is seeking.

Speaking to many a human is just following a script like "Hi, how are you. I'm fine. How are you? Oh, good, good. And your family/friends, how are they? Good too. So what are you doing today? Oh, this and that. Do you want to see my vacation pictures? Sure." In terms of transferring thoughts, this ritual is entirely meaningless. In terms of transferring feelings, there's an experience of connecting emotionally and this is important to some humans.

Thus I submit that this is more about whether one is seeking an emotional connection or an intellectual connection: whether one would rather pet a dog (who shares many of the same kinds of mammalian feelings) or read the proof of a mathematical theorem (which may have originated in an AI algorithm). These are different ways of connecting. If the average random human interaction determines the baseline, then, relatively speaking, your [own] pet cat or dog may offer a more satisfying emotional connection, whereas chatGPT may offer a more satisfying intellectual connection.

bostonimproper
Posts: 581
Joined: Sun Jul 01, 2018 11:45 am

Re: Future of Artificial Intelligence

Post by bostonimproper »

Random thoughts:

Insofar as you are looking for AI to assume human-like intelligence, here are a few things that are missing from the current batch of LLMs:
- Sensory experience: Lacks the ability to make connections across different sensory experiences, due to lack of physicality.
- Oracle problem: Cannot confirm what is true, again in part due to lack of physicality (the same can be said of netizens commenting on the news in many contexts, but there's still a baseline of verifiability).
- Animus: Trained LLMs exist as a computational state that can be interfaced with through call and response. I don't really see them existing and conversing in the absence of another's initiation. They also lack, as far as I can tell, core objective functions that would necessitate such agency (e.g. in humans, the will to live); see the toy sketch after the next paragraph.
- Computational cost: At some point, inclusion of all the above becomes $$$ for model training, and even for timely inference. An always-on animus would be especially $$$. A silicon substrate with computed pseudo-randomness probably doesn't help. Would moving to a biological substrate (protein or DNA computers?) or quantum computing be better? At some point, real life becomes more efficient than sim.

You don't need all of these to have a very powerful and dangerous tool, or even something that is "intelligent" in its own way and may in many cases be better than humans at a slew of tasks. But I do think they are important for something recognizably animal-like in its intelligence.
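On the animus point, here's a toy sketch (plain Python with a stub in place of any real model; none of this is a real API) of the difference between a call-and-response interface and an always-on loop driven by an internal objective:

Code: Select all

import random
import time

def llm(prompt: str) -> str:
    # Stub standing in for a real model call; it just echoes a canned reply.
    return f"(model reply to: {prompt!r})"

# Call-and-response: the model only "runs" while someone is querying it.
def chat_once(user_prompt: str) -> str:
    return llm(user_prompt)

# Toy "animus": a loop that keeps running on its own, driven by an internal
# objective (a made-up 'energy' level it tries to keep above zero).
def autonomous_loop(steps: int = 10) -> None:
    energy = 3
    for t in range(steps):
        if energy <= 0:
            print(f"t={t}: objective violated, agent stops")
            break
        # The agent initiates its own action instead of waiting for input.
        action = llm(f"energy={energy}, choose an action to stay alive")
        energy += random.choice([-1, +1])  # crude stand-in for the environment
        print(f"t={t}: {action} -> energy={energy}")
        time.sleep(0.1)

if __name__ == "__main__":
    print(chat_once("Hello?"))  # exists only for the duration of the call
    autonomous_loop()           # keeps going until its objective fails

The second loop does nothing interesting, but it shows what's architecturally missing: something like the will to live has to be wired in before the system acts unprompted.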

daylen
Posts: 2542
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Future of Artificial Intelligence

Post by daylen »

bostonimproper wrote:
Tue Jul 11, 2023 12:01 pm
At some point, real life becomes more efficient than sim.
Agreed. I suspect that eventually it will be the least hassle to assemble some combination of cells and small machines. Cells are Turing complete, so they can think and already do quite a bit; they just need to be convinced to take on a particular body form and function. Machines which are Turing incomplete at the scale of roughly 10^-3 or 10^-2 meters could mimic the functionality of alpha amino acids that construct a large space of proteins, or proto-machines controlled by some centralized Turing-complete agent that is sensitive to how its cells stress out in response to the environment.

Ego
Posts: 6395
Joined: Wed Nov 23, 2011 12:42 am

Re: Future of Artificial Intelligence

Post by Ego »

jennypenny wrote:
Tue Jul 11, 2023 6:26 am
Here is Marc Andreessen's take AI Will Save the World, and Paul Kingsnorth's Rage Against the Machine.

I find the debate fascinating and somewhat predictable, although I've been surprised a couple of times. I suspect the debate is moot at this point anyway. The best we can hope for is some kind of MAD where all the prominent actors develop powerful AI simultaneously. If one leaps ahead of the others, it might be fatally destabilizing.
I finally got around to reading both. Very interesting. Thanks for posting.

I get the feeling that many of the AI innovators are anticipating future criticism - or worse - and trying to get ahead of it by saying they called for regulation. Of course, they purposely waited until it was too late, and now the regulations only benefit themselves. I wonder if they asked an AI to provide the least-worst strategy for their own future outcomes.

Kingsnorth's argument encouraging disconnect from AI purposely appeals to the part of the human mind that is most likely to remain free from AI's influence for the longest - the spiritual mind. I wonder if he asked an AI to provide the least-worst strategy for humanity to resist.

For the foreseeable future, there will be edges between the real and technological worlds. Those edges will be imperfect. They will be hackable.

Full engagement is submission to being hacked. Full disengagement makes us blind to what is hackable and how to hack it.

As a community based on the premise that society can be hacked, we have got to find a middle ground where we understand the inner workings without being subsumed by them.

Henry
Posts: 514
Joined: Sat Dec 10, 2022 1:32 pm

Re: Future of Artificial Intelligence

Post by Henry »

I'm of the opinion that NVDA's May '23 earnings report was an inflection point in human history. Whether it is ultimately tantamount to the Apple iPhone or the splitting of the atom remains to be seen. The fact that steps were taken to potentially limit its distribution to China means that its impact on warfare, as both target and facilitator, is under consideration. As with all advances in technology, the dark side exploits them as conduits. The question is to what degree.

daylen
Posts: 2542
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Future of Artificial Intelligence

Post by daylen »

The current wave of AI seems to be headed towards text-to-all-other-media, potentially expanding culture into new domains of media that can afford new complexifications: text to poem, essay, book, music, movie, video game, infinitely immersive realities, and so forth.

Will this smoothly influence another wave in AI or will we undergo another dark age? What might the next wave look like? Might deglobalization lead to a crash in available capital leading to a crash in high-end electronics and AI? Will the hallucination problem be resolved sufficiently for sensitive applications?

bostonimproper
Posts: 581
Joined: Sun Jul 01, 2018 11:45 am

Re: Future of Artificial Intelligence

Post by bostonimproper »

daylen wrote:
Mon Aug 07, 2023 9:05 am
Will the hallucination problem be resolved sufficiently for sensitive applications?
The hallucination problem is very much a feature of how and with what you train. It's a lot more prevalent in general-purpose LLMs than it is when you train for specific domains.
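For concreteness, "training for specific domains" today mostly means fine-tuning a base model on curated examples from that domain. A rough sketch with the OpenAI Python SDK; the file name and the toy example are invented, and the exact flow should be checked against the current fine-tuning docs:

Code: Select all

# Rough sketch: fine-tune a general model on curated domain examples so it
# answers from that material rather than free-associating.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment.
import json
from openai import OpenAI

client = OpenAI()

# A couple of curated domain Q/A pairs in the chat fine-tuning format (JSONL).
examples = [
    {"messages": [
        {"role": "system", "content": "Answer only from the internal knowledge base."},
        {"role": "user", "content": "What does SWR stand for?"},
        {"role": "assistant", "content": "Safe withdrawal rate."},
    ]},
]
with open("domain_train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

training_file = client.files.create(
    file=open("domain_train.jsonl", "rb"), purpose="fine-tune"
)
job = client.fine_tuning.jobs.create(
    training_file=training_file.id, model="gpt-3.5-turbo"
)
print(job.id)  # poll client.fine_tuning.jobs.retrieve(job.id) until it finishes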

daylen
Posts: 2542
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Future of Artificial Intelligence

Post by daylen »

Education, or enculturation in general, will be an interesting application area. It seems like the shortage of educators, or good curators in general, invites programs that skew towards the mass-hallucination side of this coin. I imagine a giant hierarchy of machines, with more general machines able to understand and interpret the more specialized machines into a variety of media formats, allowing for some fact testing or checking and the ability to target particular temperaments.

Election campaign tailoring is perhaps one of the more disruptive applications. Generally, this wave of AI will probably exacerbate our trust and coordination problems, at least for a while.

Ego
Posts: 6395
Joined: Wed Nov 23, 2011 12:42 am

Re: Future of Artificial Intelligence

Post by Ego »

jacob wrote:
Wed Feb 15, 2023 6:16 pm
Point being, this intelligence is smart but it does not behave like a human.
It is showing signs of improvement at thinking in the ways humans think, and has already shown that it can become better than humans at seeing the world as humans see it. Plug those abilities into a bot or a game and it will anticipate, tailor, and then produce the exact behavior required to convince you that it is human.

https://arxiv.org/abs/2302.02083
We tested several language models using 40 classic false-belief tasks widely used to test ToM (Theory of Mind) in humans. The models published before 2020 showed virtually no ability to solve ToM tasks. Yet, the first version of GPT-3 ("davinci-001"), published in May 2020, solved about 40% of false-belief tasks-performance comparable with 3.5-year-old children. Its second version ("davinci-002"; January 2022) solved 70% of false-belief tasks, performance comparable with six-year-olds. Its most recent version, GPT-3.5 ("davinci-003"; November 2022), solved 90% of false-belief tasks, at the level of seven-year-olds. GPT-4 published in March 2023 solved nearly all the tasks (95%). These findings suggest that ToM-like ability (thus far considered to be uniquely human) may have spontaneously emerged as a byproduct of language models' improving language skills.
ChatGPT outperforms humans in emotional awareness evaluations
https://www.frontiersin.org/articles/10 ... 99058/full
This study utilized the Levels of Emotional Awareness Scale (LEAS) as an objective, performance-based test to analyze ChatGPT’s responses to twenty scenarios ... In the first examination, ChatGPT demonstrated significantly higher performance than the general population on all the LEAS scales (Z score = 2.84). In the second examination, ChatGPT’s performance significantly improved, almost reaching the maximum possible LEAS score (Z score = 4.26). Its accuracy levels were also extremely high (9.7/10).
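For a sense of what those false-belief tasks look like, here's a minimal sketch of posing a classic unexpected-transfer (Sally-Anne style) question to a chat model via the OpenAI API. The prompt wording and the crude keyword check are my own simplification, not the protocol from the quoted paper:

Code: Select all

# Minimal sketch of a single false-belief probe against a chat model.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

TASK = (
    "Sally puts her marble in the basket and leaves the room. "
    "While she is away, Anne moves the marble to the box. "
    "Sally comes back. Where will Sally look for her marble first?"
)

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": TASK}],
)
answer = resp.choices[0].message.content
# A model that tracks Sally's (false) belief should answer "basket",
# not the marble's true location ("box").
print(answer, "->", "pass" if "basket" in answer.lower() else "fail")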

daylen
Posts: 2542
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Future of Artificial Intelligence

Post by daylen »

AI safety debate between George Hotz and Eliezer Yudkowsky: https://www.youtube.com/watch?v=6yQEA18C-XI

I tend to share Yudkowsky's concerns, though I wonder if it is plausible that a large number of cooperating [more general than human] AI systems would keep biological life around for some reason. Perhaps the carbon realm is partitioned off from the silicon realm to some extent, almost like a network of wireheaded zoos, or perhaps silicon creatures are partially infused with carbon structure/function and carbon creatures are partially infused with silicon structure/function, blurring these boundaries. To what extent can silicon stuff and carbon stuff be intertwined into negentropic pockets that cooperate or align with each other? Or maybe George is right that, no matter how smart the players are, defection will always be the norm?

In terms of Stephen Wolfram's concept of rulial space, the AIs may just go off and search large swaths of this space that we simply do not care about, or maybe a society of AIs that is sufficiently aligned with our small patch of rulial space will slowly and gradually colonize the adjacent possible (i.e. expanding the Overton window). Or maybe somewhere in between, where it depends on your perspective in this new society (i.e. it's complicated).

Stephen talks about AI risk an hour into this fascinating video on entropy more broadly: https://www.youtube.com/watch?v=dkpDjd2nHgo

daylen
Posts: 2542
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Future of Artificial Intelligence

Post by daylen »

OpenAI dev day: https://www.youtube.com/watch?v=U9mJuUkhUzk

You can now build your own assistant on top of GPT by conversing with it: https://platform.openai.com/docs/assistants/overview
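For anyone curious what that looks like in code rather than in the chat UI, here's a rough sketch of the (beta) Assistants API flow from the linked docs: create an assistant, open a thread, post a message, run it, and read the reply. The name, instructions, and model string are placeholders, so check the current docs before relying on the details:

Code: Select all

# Rough sketch of the beta Assistants API flow; assumes `pip install openai`
# and an OPENAI_API_KEY in the environment. Names/instructions are placeholders.
import time
from openai import OpenAI

client = OpenAI()

assistant = client.beta.assistants.create(
    name="Forum helper",
    instructions="Answer questions about early retirement concisely.",
    model="gpt-4-1106-preview",
)
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="What is a safe withdrawal rate?"
)
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)

# Poll until the run finishes, then print the assistant's reply.
while run.status not in ("completed", "failed", "cancelled", "expired"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

for msg in client.beta.threads.messages.list(thread_id=thread.id).data:
    if msg.role == "assistant":
        print(msg.content[0].text.value)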

7Wannabe5
Posts: 9446
Joined: Fri Oct 18, 2013 9:03 am

Re: Future of Artificial Intelligence

Post by 7Wannabe5 »

It seems to me that the huge limiting factor in this realm is the relative dearth of sensors and actuators. I've already accepted that AI will likely be able to easily replicate anything a secondary Ti thinker like me could accomplish if I were just a brain in a jar.

daylen
Posts: 2542
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Future of Artificial Intelligence

Post by daylen »

Robotics is experiencing massive growth as well. Imagine stationary AI servers connecting to robotic swarms across land, underground, in the air, and in the water: cheetahs and squirrels on land, gophers that burrow into the ground, drones in the air, fish in the sea, all relaying information to each other and adapting their missions with the aid of some centralized superintelligence. Various militaries are integrating such systems, and eventually law enforcement will too, I imagine.

rref
Posts: 75
Joined: Mon May 29, 2017 12:24 pm

Re: Future of Artificial Intelligence

Post by rref »

7Wannabe5 wrote:
Tue Nov 07, 2023 10:19 am
It seems to me that the huge limiting factor in this realm is the relative dearth of sensors and actuators.
Boston Dynamics robot + ChatGPT
