Future of Artificial Intelligence

The "other" ERE. Societal aspects of the ERE philosophy. Emergent change-making, scale-effects,...
ducknald_don
Posts: 380
Joined: Thu Dec 17, 2020 12:31 pm
Location: Oxford, UK

Re: Future of Artificial Intelligence

Post by ducknald_don »

Just don't ask it if your partner has been faithful or not:

https://www.tovima.com/society/greek-wo ... at-on-her/

philipreal
Posts: 69
Joined: Thu Sep 12, 2024 8:17 pm

Re: Future of Artificial Intelligence

Post by philipreal »

candide wrote:
Fri May 02, 2025 9:14 am
I am really enjoying working with chatGPT using this prompt:
Probably a good thing to have a prompt like that. Earlier this week there were reports of GPT-4o being absurdly sycophantic, reaching completely silly/scary levels of just enabling whatever the user puts in, which certainly isn't good if you care about truth. See
https://xcancel.com/EnablerGPT/status/1 ... 4391928027 and https://xcancel.com/colin_fraser/status ... 5958983789 for some funny(?) conversations. It has since been rolled back, but it seems to me that OpenAI is in many ways optimizing for user engagement more than for correctness or for creating a positive product. In any case, be wary of fully "trusting" answers you receive.

ducknald_don
Posts: 380
Joined: Thu Dec 17, 2020 12:31 pm
Location: Oxford, UK

Re: Future of Artificial Intelligence

Post by ducknald_don »

In a similar vein, AI could be making you delusional:

https://www.rollingstone.com/culture/cu ... 235330175/

Ego
Posts: 6663
Joined: Wed Nov 23, 2011 12:42 am

Re: Future of Artificial Intelligence

Post by Ego »

Esther Perel on AI Artificial Intimacy.

"The way I began to think of it is, I am living a sort of assisted living, but prematurely. I am being assisted by a host of predictive technologies that are basically saying, ''You don't have to know, I'll know for you. I'll recommend the next song to listen to, who to date, where to eat.'. And you would think that that would make us feel more confident, more at ease because I am neutralizing the unpredictable, the unknown. But the unknown demands that you interact with it on a daily basis.... And when you erase all of this you make people more anxious, more unsure.".

"The rubbing. The living in close proximity with the messiness of another person's... helps you know who you are in the presence of others... and the digitally facilitated connections are lowering our competence in the intimacy between humans.. and makes us less able to be with people who challenge us. '

https://youtu.be/plkTbnN1GUY?

Henry
Posts: 983
Joined: Sat Dec 10, 2022 1:32 pm

Re: Future of Artificial Intelligence

Post by Henry »

Ego wrote:
Mon May 05, 2025 7:59 am
Esther Perel on AI Artificial Intimacy... makes us less able to be with people who challenge us."
If Esther still wants to wait on line at the DMV, that's her fucking business. But as far as I'm concerned, any technology that minimizes asshole interaction is a good technology.

Henry
Posts: 983
Joined: Sat Dec 10, 2022 1:32 pm

Re: Future of Artificial Intelligence

Post by Henry »

To the best of their abilities, the US conducted a forensic study on the contributors to DeepSeek and discovered that the majority were educated in China. This dispelled the notion that the Chinese were using the US educational system as a technological smash and grab. This seems similar to the US/Soviet space race of the 60's. I think this is why people need to put down their spray cans and realize the significance of Elon Musk and his companies, including Tesla, in a much broader context.

jacob
Site Admin
Posts: 16995
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77

Re: Future of Artificial Intelligence

Post by jacob »

https://nymag.com/intelligencer/article ... chool.html

I am increasingly grateful that I did not push for a career as a professor. I don't think I would have been able to go through the pretense. In some sense, AI hasn't changed anything about cheating; it's just dialed the ability up to 11. It took me until becoming a TA in grad school to realize that the average student wasn't really interested in learning anything, despite no longer being involuntarily committed to the "learning" institution.

I had three types of students. Each class always had a couple of students---destined for grad school---who were capable and motivated. There were about the same number of incompetent students who were motivated but ultimately incapable. But 80% of the students, neither motivated nor capable, quickly figured out the game and did what they could to most efficiently "process" their homework. Back then, it didn't involve chatGPT providing the answers. Rather, it typically meant asking one of the top 10% of students if they could borrow their notes or homework.

After some frustrating months and having identified the various types, I finally surrendered to the process. I would spend my time correcting the papers of the good students and the incompetent ones---the two groups who made an effort. For the large remainder, I would lay out their homework side by side and process it in parallel, paying little attention, because it was basically the exact same answers shamelessly copied from the same 1 or 2 people.

Students haven't changed. Only their methods.

Currently the workforce is run by the same few people who actually know what they're doing; that is, they've memorized facts, they can do math in their head, and they can come up with creative solutions based on what they know. And then there is the majority, who are "good at finding answers from other people" but will generally never come up with anything on their own. The last group just went to college to gain a network, an entry ticket to the job market in the form of a "degree"... learning just enough to fake it and talk the talk. It is this group that seems increasingly replaceable. Instead of hiring 10 people to "generate some words" and having one person check their work for "misunderstandings", that person can just buy 10 language generators instead and check that work for "hallucinations", which, given the lack of even trying on the part of some humans, is but a harsher word for "misunderstanding".

Add: I actually tried to reform the system once. The usual form of TA'ing undergrads would be for them to do their homework on their own (ha!) and then for the TA to pick random students to show their (haha!) work to the class on the blackboard, basically doing a presentation. Well, for one class, I tried to do it in reverse. We'd spend the 2-hour class working on next week's homework assignment. The good students immediately went to work, having just been given two additional hours they would otherwise have had to spend at home. The rest just sat there staring desperately at a blank piece of paper. They did not know how to apply even the simplest principles on their own, nor how to even begin to analyze the problem in front of them. Thus, this experiment was a brief one.

As such, when well-meaning commenters suggest that all the professors have to do is revoke laptop privileges and have the students write an essay with paper and pencil, I suspect that this would all too quickly reveal that the emperor already has no clothes.

We should perhaps be more worried about the "future of human learning". Because insofar as AI is but a "language-generating algorithm", I find little reason to believe that the majority of humans are anything more than "language-generating algorithms" as well. Right now, we have the advantage that the LLMs are actually trained on output from people who know how to think on their own. This will not be the case one generation from now.

7Wannabe5
Posts: 10580
Joined: Fri Oct 18, 2013 9:03 am

Re: Future of Artificial Intelligence

Post by 7Wannabe5 »

jacob wrote:Each class always had a couple of students---destined for grad school---who were capable and motivated. There were the same number of incompetent students who were motivated but ultimately incapable. But 80% of the students, neither motivated nor capable
Didn't you occasionally encounter an unmotivated but capable student? For example, somebody who manages to integrate the topics of permaculture and polyamory into a paper assigned to be on the topic of database security?

jacob
Site Admin
Posts: 16995
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77

Re: Future of Artificial Intelligence

Post by jacob »

7Wannabe5 wrote:
Thu May 08, 2025 7:34 am
Didn't you occasionally encounter an unmotivated but capable student?
They'd be counted in the 80%. The problem with "unmotivated but capable" shows up once one gets past the 101 classes. 201 typically requires understanding of the 101 methods, while 301 requires both 201 and 101, and so on. Often (at least in physics), this dependence is lateral too. As such, someone who is supposedly capable but who hasn't been motivated to develop enough foundational knowledge would not be able to bullshit or razzle-dazzle their way through solving an actual problem at the 301 or 401 level, as opposed to just presenting someone else's 401 solution.

Which is what I demonstrated with my little classroom experiment.

Therein lies the problem insofar as education (or culture as a whole) becomes too focused on the performative (talks and presentations as opposed to actual problem solving). It seems that the state of "higher education" is definitely at that point now. This is a problem insofar as the function of "higher education" is to train the human ability to think. Humans using chatGPT et al. to do their thinking for them is rapidly driving humanity in the other direction. We risk ending up with a culture/society that is only interested in having performative conversations and eventually only capable of such, lacking any personally developed store of memorized facts and methods with which to evaluate whether what they're hearing is BS or not... as long as it "sounds good".

7Wannabe5
Posts: 10580
Joined: Fri Oct 18, 2013 9:03 am

Re: Future of Artificial Intelligence

Post by 7Wannabe5 »

jacob wrote: The problem with "unmotivated but capable" shows up once one gets past the 101 classes. 201 typically requires understanding of 101 methods while 301 requires both 201 and 101, and so on. Often (at least in physics)
Yeah, but real life and real-life problems are often more trans-disciplinary and/or require wider competencies. For example, I've had the experience of being in a grad-level Economics class in which I was the only student who had ever actually run a business, and I've also had the experience of being in a grad-level IT class having much less practical experience than my median classmate, but more advanced math and writing skills. It might also prove problematic if somebody were an 801-level expert on satellite and rocket technology but possessed little working knowledge of moral or political philosophy.

jacob
Site Admin
Posts: 16995
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77

Re: Future of Artificial Intelligence

Post by jacob »

7Wannabe5 wrote:
Thu May 08, 2025 9:24 am
Yeah, but real life and real life problems are often more trans-disciplinary and/or more requiring of wide competencies.
What is the end goal of the often quite expensive educational process?

For example, when it comes to essays, is the end goal to increase the student's ability to draw together different facts, thoughts, and opinions into a synthesis or a higher-level abstraction? Or is the goal for the contemporary student to figure out how to turn in a given essay on demand in the most efficient way possible, whether that's asking chatGPT what to write, getting chatGPT to write the whole thing ... or using a slightly more Neanderthal approach: going online and figuring out where and how to pay a professional essay writer $10 to crank out an essay that will "guarantee a B- or your money back".

(One might distinguish between the game (former) and the meta-game (latter) here.)

Because unlike the situation even ten years ago, the fact that a student hands in their homework no longer differentiates between these two approaches. It used to be that if someone hadn't learned anything, their essays would reveal it.

Now, not only "the future of human intelligence" but also [second-order] the future of artificial intelligence (for as long as it derives from human intelligence, as it currently does) depends on which of these two tracks society takes.

One is "making it" and the other is essentially "faking it" or at the very least doing something very differently. Therein lies the difference between the "performative" and the "functional". My take is that a performative society or a performative employee lives on borrowed time. Perhaps more precisely, they're easily replaceable. Indeed, much of AI enterprising seems to be about to which degree anything or anyone with a loose level of insight can be replaced by AI agents.

On a parallel note, if society and societal culture as a whole gets taken over by the performative, it is not sustainable because it is no longer anchored in reality or the ability to work with reality.

In particular, what happens to the "future of artificial intelligence" once it has to train on "performative humans"? Garbage in, garbage out, perhaps?

daylen
Posts: 2634
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Future of Artificial Intelligence

Post by daylen »

It seems like the era of human specialists is giving way to the era of human generalists. Insofar as STEM just becomes more capable of self-maintenance and self-reproduction, humans will be capable to the extent that they can prompt the systems into producing the outcome they desire (which requires a kind of problem solving in itself). Science self-corrects through measurement, technology through successful use, engineering through simulation, and math through automated proof. Math is already running into the limits of human capacity, opening the way to more collaborative setups aided by verification systems that break problems up into many components that can be solved independently.

Some humans will likely continue to dive deep into their area of choice, and perhaps that will be enough to keep STEM from collapse (in conjunction with the feedbacks above). Outside STEM, the arts and humanities seem to be entering a golden age as self-expressive friction is minimized. Perhaps this points us back towards distributed communities interfacing through the virtual with various degrees of hands-on-ness. This seems to be the only way the many can gain and pass on the wisdom necessary to restrict the sand gods.

jacob
Site Admin
Posts: 16995
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77

Re: Future of Artificial Intelligence

Post by jacob »

daylen wrote:
Thu May 08, 2025 9:52 am
It seems like the era of human specialists is giving way to the era of human generalists.
What I'm trying to say is that I don't think this is what is happening.

A specialist who is capable of abstracting (to whatever level) is someone who knows and commands facts&methods in depth so as to develop them in increasingly more abstract ways. Typically a specialist will focus on what makes one problem different from another.

A generalist who is capable of abstracting (to whatever level) is someone who knows facts&methods in width, that is, from two or more fields, so as to draw them together, typically because they have something nonobvious in common. Typically a generalist will focus on what one problem has in common with another.

However, this is not what is happening when a student (or non-student) completely outsources the need to know things, whether in the vertical or in the horizontal. Students skip the vertical by asking chatGPT for an executive summary of the assigned book so they don't have to read it themselves. *POOF* A lot of opportunity to get the neurons in their own brains to fire together, in order to wire together for later creative use, was just wasted.

Then the same student outsources the need to be widely read by simply asking chatGPT what general connections exist. chatGPT dutifully responds according to its training material and the student produces an essay based on that.

What's happening here is that knowledge generation and creativity become a closed loop that can never be escaped. Once there are no genuinely creative humans left, students will basically be regurgitating slight variations on the training set. *Everything will turn into "AI slop"*

Unless ... AI becomes genuinely creative. In that case humans are no longer needed.

But if AI never becomes genuinely creative, our current approach is a very scary trap because it's literally training people to stop thinking on their own.

daylen
Posts: 2634
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Future of Artificial Intelligence

Post by daylen »

I am not yet entirely convinced that if AI doesn't become genuinely creative, then humans will be trained to stop thinking. It seems like thinking is something humans innately do to various degrees, with generational changes in content. Although I am not confident enough about this to alleviate my fears. Not sure what's worse... drowning in n'th-order slop or being swallowed by our silicon successors. :(

delay
Posts: 688
Joined: Fri Dec 16, 2022 9:21 am
Location: Netherlands, EU

Re: Future of Artificial Intelligence

Post by delay »

jacob wrote:
Thu May 08, 2025 9:50 am
how to pay a professional essay writer $10 to crank out an essay that will "guarantee a B- or your money back".
When I studied physics I used to do programming assignments for EE or IT students. I got paid in beer.

Later on, these students (your 80%, I suppose) would get good jobs as business consultants, IT architects or management trainees. Their peak earnings are considerably higher than mine.
jacob wrote:
Thu May 08, 2025 9:50 am
What is the end goal of the often quite expensive educational process?
Perhaps our employers prefer obedient, average, predictable workers over creative specialists? I can see how a set of standard Lego blocks adds up to more than a set of uniquely shaped Lego blocks.

jacob
Site Admin
Posts: 16995
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77

Re: Future of Artificial Intelligence

Post by jacob »

delay wrote:
Thu May 08, 2025 10:33 am
Later on, these students (your 80% I suppose) would get good jobs as business consultants, IT architects or management trainees. Their peak earrnings are considerably higher than mine.
Same here. Our banking system is now in their hands.

7Wannabe5
Posts: 10580
Joined: Fri Oct 18, 2013 9:03 am

Re: Future of Artificial Intelligence

Post by 7Wannabe5 »

jacob wrote:when it comes to essays
AI destroying the value of essays as a means of judging learning (brain wiring) is just one more step along the path of digital information technology destroying the written tradition, in a manner not unlike the destruction of the oral tradition by the invention of the printing press. I think I accepted that this was a done deal one day in the summer of 2009, when I had a conversation over coffee with a friend who worked in publishing.

Actually, AI will soon be able to solve this problem, because you could just wire up a classroom full of students to brain-imaging devices, and then AI could determine which brains have been wired with the learning material by tracking responses to something like a slideshow of relevant questions. Kind of like detecting cancer in a mammogram. Of course, at this juncture, there would also no longer be much purpose to educational institutions or certifications or resumes, because brain scans could just be presented to employers directly. Although it seems like the only advantage offered by a human employee would likely be at the chaotic edge of installed textbook knowledge and sensual/sensory/emotional interaction with the real world.

daylen
Posts: 2634
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Future of Artificial Intelligence

Post by daylen »

Creativity is becoming increasingly measurable in relation to whether new content tends towards collapsing various models as opposed to opening them up to new connections. A striking related example is jailbreaking, where competitions are being held to break models out of their "constitution" of values and instructions. Perhaps many competitions will be continuously held in the near future for creative content of all kinds. Open source does seem to be travelling a path similar to that of operating systems (*nix). What seems likely to me, barring any major setback [in capital, chips, ..], is that the landscape/ecosystem of models will continue to grow and diversify alongside the number of different forces pushing the models across valleys of design and optimization. This could in and of itself lead to tensions between models (especially the heavily distilled and specialized ones). These tensions could be alleviated through some mixture of human prompting, human/model consensus, top-down influences from bigger models, and bottom-up measurement/use/simulation/proof. Perhaps the whole system can be more than the sum of its parts, incentivizing and enabling the parts to move in a high-dimensional space filled with novel pockets of discovery.

daylen
Posts: 2634
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Future of Artificial Intelligence

Post by daylen »

Like how gene-sequencers allowed us to more accurately peer into the lineage of species, meme-sequencers can allow us to more accurately peer into the lineage of cultures. All the artifacts being compressed into an ever larger distributed data center. With an ever lingering collective shadow that might one day outpace the light.

7Wannabe5
Posts: 10580
Joined: Fri Oct 18, 2013 9:03 am

Re: Future of Artificial Intelligence

Post by 7Wannabe5 »

The two could even be interspersed. For example, there are frugality genes and frugality memes.
