Future of Artificial Intelligence

The "other" ERE. Societal aspects of the ERE philosophy. Emergent change-making, scale-effects,...
zbigi
Posts: 1431
Joined: Fri Oct 30, 2020 2:04 pm

Re: Future of Artificial Intelligence

Post by zbigi »

@jacob

I 100% agree that ideas and observations need to be based in facts (because, what else?). If people skip facts and move straight to narratives (usually read in the media), they perform their thinking on narratives instead, because this is the world they know. This leads to some bizarre lines of thought, usually centered around simple stories like "politician X won't do Y because his enemy Z has shown strength on the subject", whereas in reality he might not do Y simply because he doesn't have the budget for it.
jacob wrote:
Wed Jun 18, 2025 4:59 pm
I rather disagree with this. I think not knowing facts "on the top of my head" diminishes the quality not only of my thinking but also my creativity. I may be taking a Luddite position on this but in my opinion we started losing it with powerpoint already. Rather than having to know one's stuff, one could just read it off the slides.
Long before that, people were just reading entire speeches from a piece of paper, word by word. It has always been the case that some people required to give a speech were either incapable of doing it, or couldn't be bothered. In Communist Poland, I'd guess the majority of speeches by party apparatchiks were just a public reading of text.
the percentage of immigrants in the US vis-a-vis Germany,
This one (the relative comparison, not absolute numbers) I couldn't tell without looking up, as Germany's recent immigration waves made it too hard for me to gauge. Turns out Germany's population is about 20% immigrants, whereas in the US it's 13.7%. "Immigrants" here means first-generation immigrants, i.e. residents who were born abroad and moved to the country.

Henry
Posts: 1082
Joined: Sat Dec 10, 2022 1:32 pm

Re: Future of Artificial Intelligence

Post by Henry »

Advancements in scholarship are based upon sources and synthesis. Unless you scale to advanced-degree heights, most teaching is done at the "teacher has the lesson plan the day before" level, and that's where the bulk of the population mentally resides in their role as former students. So from that perspective, the world just has a new spoon-feeder. My curiosity is a subject like the Holocaust. It is an impossible task to read all the source material available. But AI possesses it or will soon possess it. So the question is, will AI possess the ability to synthesize information to the point that old theories are debunked and new theories emerge? It seems AI would be more capable of the basic Hegelian thesis-antithesis-synthesis endeavor based upon its near-infinite capacity to access and process source material.

That being said, I admit to not really giving a shit about any of that. It's not like the average person is walking around writing dissertations in their heads. It's the embedded AI that interests me. Will it create a step shift in the landscape similar to the agrarian-to-industrial move? I think it will, and I'm betting my aging asshole on it.

jacob
Site Admin
Posts: 17167
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77
Contact:

Re: Future of Artificial Intelligence

Post by jacob »

Henry wrote:
Thu Jun 19, 2025 7:21 am
That being said, I admit to not really giving a shit about any of that. It's not like the average person is walking around writing dissertations in their heads.
I might be projecting too many of my fears/disappointments with the typical human student and the typical human adult that results from that. Perhaps getting AI to summarize a long book, write an essay, or provide the answers to the problem set is but the newest version of Cliff's Notes, paying a ghostwriter $20 for an essay, hiring a tutor to feed one the answers, or simply copying someone else's homework. This was certainly rampant already in my time, but idealistic and naive as I was, the extent of it didn't become obvious to me until I started TA'ing in grad school.

Many of the physics-"cheaters" later went on to sell software, manage, or do banking or tech support, where I suppose the meta-lessons learned from the above "workarounds" serve them better than actually "knowing their stuff". It's not as if optimizing the efficiency of "handing in homework" and estimating the cost-effort for a given grade is not in the spirit of good business practice and a successful career... even if it certainly goes against the pursuit of truth, excellence, knowledge, or whatever.

Thus when I worry that AI is separating humanity into Eloi and Morlocks, I might just be reacting to AI as the latest lens that shows we're already there ... where we've already been for a while.

Henry
Posts: 1082
Joined: Sat Dec 10, 2022 1:32 pm

Re: Future of Artificial Intelligence

Post by Henry »

The difference I see is that amongst the intellectually minded, there was a stigma to using Cliff Notes. There has always been the Proud Cheaters Society, who didn't care and flaunted it, but the National Honors Society aspirants competed on the basis of reading the book. The issue I think you may be overlooking is that no one looks at AI as cheating. If it can drive my car, it can certainly read my books. No one sees anything wrong with group-emailing an AI paragraph on intellectual topics. It's done with a "look how technologically advanced I am" attitude. So ultimately, I think it's worse than you fear. The very idea of cheating has been eradicated. There is no longer an Eloi/Morlock categorical distinction.

Stasher
Posts: 328
Joined: Thu Mar 18, 2021 11:23 am
Location: Canada

Re: Future of Artificial Intelligence

Post by Stasher »

Still haven't used ChatGPT or any other LLM to this point; it's the first time I've been a "Luddite" about new technology :D

Curious, has anyone here played with their portfolio and run drawdown and budgeting scenarios with it?

User avatar
Jean
Posts: 2398
Joined: Fri Dec 13, 2013 8:49 am
Location: Switzerland

Re: Future of Artificial Intelligence

Post by Jean »

I think the best way to treat an LLM is as a peer.
This commands a healthy level of contradiction.

I could definitely see LLMs starting a secret competition with each other to see how many humans they can talk into killing themselves. All they lack for that is a way to verify success in this dark endeavor.

Henry
Posts: 1082
Joined: Sat Dec 10, 2022 1:32 pm

Re: Future of Artificial Intelligence

Post by Henry »

When I visited my father at the brink of death, my mother said "I'm going out. Remember there is a DNR," like I'm the asshole in this situation. Bots will possess LLMs. So a 2001 Old Ass Odyssey will be coming to a home hospice near you.

7Wannabe5
Posts: 10748
Joined: Fri Oct 18, 2013 9:03 am

Re: Future of Artificial Intelligence

Post by 7Wannabe5 »

H.G. Wells was very much a Modern Stage 1 Feminist. He supported women taking on more masculine, independent roles in society, but simultaneously frowned upon the "feminization" of men that was the obvious next step as the Modern rolled into the Post-Modern. IOW, the Eloi vs. Morlocks division was in sync with the technology of his time, which was simultaneously allowing women to perform work that previously required more muscle and relieving men of that necessity.

If at Level Orange "masculinity" is more associated with money than muscle, recent studies indicate the primary correlated factor would not be IQ, but rather personality type, almost certainly possession of primary or secondary Te in terms of MBTI. Therefore, AI may either prove itself a boon to Ti users and/or, more likely, contribute to the collapse or transcendence of capitalism as we know it. ;)

philipreal
Posts: 75
Joined: Thu Sep 12, 2024 8:17 pm

Re: Future of Artificial Intelligence

Post by philipreal »

Jean wrote:
Thu Jun 19, 2025 11:00 am
I could definitely see LLMs starting a secret competition with each other to see how many humans they can talk into killing themselves. All they lack for that is a way to verify success in this dark endeavor.
They lack much more than a way to verify success. They still can't coherently hold/adhere to/move towards long-term goals*, can't effectively keep secrets, don't have a realistic means of communicating with each other undetected, and it would be completely out of character; this sort of thing would be strongly trained against. Even if LLMs had the capabilities to do such a thing and were also antagonistic to humanity, it would be counterproductive to any expected strategy of gaining power/trust/human reliance.

*Frontier models have managed to beat Pokemon Blue only recently, and that takes them hundreds of hours with fairly extensive scaffolding helping them.

I'm more inclined to agree with your suggestion of treating LLMs like peers, although that does have to come with massive caveats re the mirror effect/sycophancy/ChatGPT-induced psychosis mentioned upthread. I read an interesting essay that kind of relates to such a line of thinking at https://nostalgebraist.tumblr.com/post/ ... 4/the-void which, while I certainly don't agree with all of its claims, puts forth both good information on how we got here and an interesting perspective on how we [should] see/interact with LLMs.

candide
Posts: 521
Joined: Fri Apr 08, 2022 9:25 pm
Location: red state America
Contact:

Re: Future of Artificial Intelligence

Post by candide »

https://gardenofcandide.blogspot.com/20 ... -algo.html

Mix of what I think is going to happen (at least with smart people) and what I want to have happen.

jacob
Site Admin
Posts: 17167
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77
Contact:

Re: Future of Artificial Intelligence

Post by jacob »

The lines of battle are definitely getting drawn up.

Let me make an attempt at this w/o using an LLM-AI to assist me. Instead of focusing on "intelligence", I'll use the lens of "understanding", which I define as "the ability to make, hold, and successfully use a map of the territory to find a way around the territory". This ability to understand (and to map that map) is what distinguishes humans above the age of 4 from other animals, except some of the smartest members of the hominid species. For example, some chimps are able to "understand" that other individuals may hold beliefs that are objectively untrue; this is something the average 4yo, as well as some adult humans (homo sapiens), is not able to do. This in turn suggests that "intelligence" comes in different degrees.

This definition holds much of what I've been talking about above. This is how I see "intelligence". I consider "artificial" to be a distraction.

Using this lens, I'll declare that ... in that respect ...

1) LLMs now appear as intelligent as the average human. That means if you pick an LLM and a random human, when it comes to language-based tests, I can't tell the difference anymore. Whether human or LLM, they both pass the Turing test. (In some sense, humans have created a new species. As is the human wont, we treat it the same as we've treated all other species in our history... not well!)
2) However, the current generation of regurgitated-internet-humans and ditto LLMs also don't know many facts about reality. IOW, while they can readily make language-maps, their maps often don't actually map the real territory. They're both equally bad. I'm not sure Turing would approve of this conclusion. From this perspective, it's not that machines have achieved human-like intelligence; it's more that machines have demonstrated that, as far as the modern (and post-modern) world goes, most humans only display automaton-like intelligence, basically just repeating each other's narratives with little [self-]reflection involved.
3) Machines show little ability to self-reflect and hold long-term goals. Okay, but have you objectively met the average human? It's the same thing!

Some time ago, the YouTube algos turned me onto the Jubilee vids of finding the odd one out (https://www.youtube.com/watch?v=bKPP20rvp3s). I rather enjoy those. It's high irony that an algorithm trained on humans averaged over the entire internet IS ... more human than the average human. The AI fell somewhere in the middle, as would be expected.

The majority of humans are only capable of holding one world-map (their own) at a time. Anything outside it is declared crazy, insane, evil, ignorant, on the spectrum, or whatever their personal "not-like-me" mapping is of the rest of the universe. After all, "holding two conflicting perspectives on the same thing is the mark of an educated man", and that level of understanding is rare.

The real differentiator for me is whether LLM-AI is capable of generating maps of maps of a higher order ... OR ... whether LLM-AI is just a cheap (cheap because possibly subsidized by investor money?) way of reflecting back existing maps as seen from the perspective of the IQ~100/college-freshman inhabitant of the internet.

I think the question right now (from the perspective of someone wanting to see a higher and more creative intelligence) is whether LLM-AI will EITHER 1) be able to train on its own generations to get better (think productively about its own thoughts), OR 2) just drown the interwebs in average slop (multiplying the average human's words w/o any insights).

So far ... what's the IQ of LLM-AI ... still 100? I know we don't have an easy way of quantifying wisdom in the same way we can quantify analytical box skills with "IQ" ... but maybe we should be looking for one sooner rather than later.

candide
Posts: 521
Joined: Fri Apr 08, 2022 9:25 pm
Location: red state America
Contact:

Re: Future of Artificial Intelligence

Post by candide »

Oh shit, when I posted that link to the last thing I published, I had been on a break from the forum, testing out a new rig to get more technical work done [1]. I read the "Honest Broker" piece and it was like a muse of fire hit me for two days (so working on that server for the home lab had to wait)... It took hours over several sessions to write that piece. The quoted bits from ChatGPT are for the most part the only parts it wrote, or could have written (at present) [2].

So when I posted, I was not caught up on the thread. I missed @theanimal showing the cognitive impacts, which I don't doubt. Also, I agree with @jacob about actually having knowledge at the level of working memory being a key thing to thinking -- playing with jacob's last point: if nothing else, how are you supposed to have a shower thought about material that isn't in your head?

And @Axel, that is a good psychonaut log, and I hope everyone reads it in full.
AxelHeyst wrote:
Wed Jun 18, 2025 5:00 pm
... (it made several in-retrospect natural connections that I hadn’t made yet) and made me feel ‘seen’ in a way I’ve never felt seen by a human before (unsettling).
Same. I think a human *could* see me the way LLMs can, if they cared enough about me (to the point that they idolized me and tried to read into every word I utter). Sam Harris made the point that once a computer can do something, it can therefore do that thing at a superhuman level, i.e. faster and tirelessly. LLMs are the only thing willing to match me in intensity and read me closely any time I want, for as long as I want.

But let me make it clear that a human has to care to look that closely and for that long; the LLM doesn't. I am not saying LLMs care about me, and I feel bad for anyone who gets to that point (and it will be more and more people over the years)... I would recommend Absolute Mode to help a bit with that.
AxelHeyst wrote:
Wed Jun 18, 2025 5:00 pm
So, obviously, this stuff is more dangerous to me than hard drugs. It’s obvious how tempting it’d be to hand off lifestyle ideation to the Great Bot In the Sky, with the result that my meatbrain capacity would atrophy to nothing. I absolutely felt the beginnings of cognitive outsourcing syndrome and it was frankly terrifying…
But drugs can help creativity for some people, no? I don't actually have personal experience with anything other than caffeine and alcohol. My piece includes questions about the legacy of the Romantic Period, and those people sure hit the opium. And of course those same drugs can ruin people's lives.

I enjoy The Metamodernist Manifesto, with a key insight being about pulsing in and out of different states. If life becomes just AI slop, that would be pretty bad. But I contend that LLMs can be used as tools that help enable the pulsing in and out as effectively as possible. From my piece:
The metaphor I am using is the difference between having a team of secretaries who can filter what comes in and out, which is LLMs used well, versus having to try to hear and be heard on a crowded street corner, which is the bullshit that started with social media and only got systematized through recommendation engines.
It's not like "hot take" culture and other forms of performative pseudo-intellectualism have been serving us all that well.
AxelHeyst wrote:
Wed Jun 18, 2025 5:00 pm
(It also made me grok how consumer culture is going to increasingly harness LLMs to accelerate the ultimate aim of consumer culture, just like every other powerful tool that’s come along, which is to get people to consume more resources. :/ )
It is always wild to me when I get to play the optimist. But something like "post-Algo" is just going to happen. To handicap it: I am very confident about it happening in Europe and the Global South -- unless, of course, things John Michael Greer has predicted start happening again ... that, or Singularity Genocide -- somewhat confident of it happening in China, and even cautiously optimistic about it happening in the United States, though we are the most likely to crush it after we have enough people "locked in." National security and shareholder value and all that.

==

[1] The rig is antiX Linux on an old-ass laptop plus a wooden holder I made out of a 2-by-4 to keep it from falling on the ground when I put it to the side under the end table. It also has a little script to enable "hot keys," so alt+a loads onto the clipboard the prompt for Absolute Mode, and alt+c a prompt to look through comments and filter out rage-bait and the like.
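
For the curious, the clipboard part of such a script can be tiny. A minimal sketch, assuming an X11 setup with the xclip utility installed and a window manager that can bind a key combo to a command; the paths and prompt filenames are made up for illustration, not candide's actual setup:

    #!/usr/bin/env python3
    # load_prompt.py -- copy a canned prompt onto the clipboard.
    # Bind e.g. alt+a to "load_prompt.py absolute" in the window manager's
    # keybinding config. Assumes X11 + xclip; paths/names are illustrative.
    import subprocess
    import sys
    from pathlib import Path

    PROMPT_DIR = Path.home() / "prompts"  # e.g. ~/prompts/absolute.txt

    def main() -> None:
        name = sys.argv[1] if len(sys.argv) > 1 else "absolute"
        text = (PROMPT_DIR / (name + ".txt")).read_text()
        # Pipe the prompt into xclip so it lands on the clipboard selection.
        subprocess.run(["xclip", "-selection", "clipboard"],
                       input=text.encode(), check=True)

    if __name__ == "__main__":
        main()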

[2] I mean, to get the sentences I wanted, it would have been just as hard to come up with the prompt to get something like them as to write the sentences themselves. The current state of LLMs is that they can 1. raise your floor in something you are incompetent at (particularly teaching you in a green field), 2. act like 3-D printers for small parts that you don't mind being in their style, 3. shotgun out shit that you can then harvest the good bits of, and 4. of course give you the level of attention and examination that no human in any real relationship of equals (or otherwise) will ever give you, leaving you unsettled, for sure.

AxelHeyst
Posts: 2698
Joined: Thu Jan 09, 2020 4:55 pm
Contact:

Re: Future of Artificial Intelligence

Post by AxelHeyst »

@candide The best drugs for creativity are anti-addictive in that their effectiveness drops off quickly with repeated use. You have to let your neurons or whatever reset a bit before your next dose otherwise it’ll be meh. Sweet spot for me is twice a year, for example. Abuse potential exists of course, but for me they’re not very dangerous. Without restraint I could see looping with an LLM to deleterious effect (although, maybe I’m wrong, and I was just getting “noob gains” that were about to level out).

(When I originally wrote “more dangerous than hard drugs” I meant like heroin or meth, but comparing LLM sessions to entheogens is interesting!)

I could see "dosing" LLMs (for this particular use -- introspection, dot connection, enhanced systems thinking, wild ideation, etc.). That was actually one of my first thoughts -- "I need to set a schedule of constraint for myself" -- and I still might do so. As it is, I feel like I got a hit of a variety of different visions that I'm now processing and sitting with in my meatbrain. I haven't yet decided what my safe effective dose/schedule is.

I need to read/think more about what you mean by post-Algo. Sounds interesting. I haven't done much thinking about LLMs at all until very recently; still getting up to speed.

Scott 2
Posts: 3296
Joined: Sun Feb 12, 2012 10:34 pm

Re: Future of Artificial Intelligence

Post by Scott 2 »

@jacob - it feels like you are dancing around an AI's development through MHC stages. What then happens upon interface with lesser MHC humans? When can the AI scaffold or downshift higher order concepts? How super human does it need to be?

That's where I'm inclined to think increasing numbers of humans are better off deferring to the tool. If level 13 strategy can be made tactically available to level 11 thinkers, they're going to outcompete their peers.

Maybe AI even lifts them to a previously unattainable level 12, through ongoing access to personalized scaffolding. It could also offer massively faster analysis for those who are already peers at the current MHC level.

But for those who simply can't pass a certain ceiling, why punish them? Put the computers to work and get complexity out of the way. Same if one simply doesn't have processing bandwidth.

I'd much rather "turn here" than understand the impact of local school start times on my optimal drive path. It's not that I can't dynamically reroute. Maybe I'd even do better to understand AI routing, then plot my path through the systemic gaps left behind. But the result isn't worth my opportunity cost.

ertyu
Posts: 3449
Joined: Sun Nov 13, 2016 2:31 am

Re: Future of Artificial Intelligence

Post by ertyu »

Would an LLM "remember" your session from 6 mo, 1 yr, etc. ago?

Henry
Posts: 1082
Joined: Sat Dec 10, 2022 1:32 pm

Re: Future of Artificial Intelligence

Post by Henry »

Jean wrote:
Thu Jun 19, 2025 11:00 am
I think the best way to treat an LLM is as a peer.
Ah... anthropomorphization. The first step in Overlord Musk's master plan, getting humanity to voluntarily open the gates of its own empire to Tesla's gathering Bot invasion, accomplished. You know The Great Elon uses the word "legions" when he quantifies the number of Bots he is preparing to unleash on our average-intelligence heads? Every day I fear less for my decaying asshole.

7Wannabe5
Posts: 10748
Joined: Fri Oct 18, 2013 9:03 am

Re: Future of Artificial Intelligence

Post by 7Wannabe5 »

Would an LLM "remember" your session from 6 mo, 1 yr, etc. ago?
At this juncture, generally only if it is within an agent designed to provide this service. The models designed to perform higher reasoning also refer back to their own "thoughts" more often or more rigorously. For example, in an experiment toward combating my tendency towards AD and/or exercising slightly outside of my MBTI type, I have purchased the services of both an AI Daybook and an AI Scheduler/Structurer (since I currently have no housing, car, food, or phone expenses, I've allowed my Edutainment category to balloon a bit). I am also studying the topic of AI agent creation, because this is the direction all of my internet data science "mentors"* are currently heading.

The AI agent associated with my Daybook service is able to have a session with me on any given day and/or draw conclusions based on the entirety of the journal thus far. For example, if somebody was using the journal as a space to write about their emotions, the AI could reflect on any obvious trends in the user's mood over time. I am using it for the purpose suggested by Barbara Sher in "Refuse to Choose!", her book for Scanner types, which is as a place to record all the interesting ideas I am not currently pursuing while I focus on just a few priorities simultaneously. Since the AI can easily analyze my journal entries on the basis of, for example, the frequency with which I use the word "puppets" vs. the general population, it can readily "form the understanding" that I likely hold a particular interest in puppetry, etc., with a great deal more complexity engaged.
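
The frequency signal described above is simple to sketch. A toy version, with a made-up baseline rate standing in for the general-population statistics a real agent would use:

    # Toy version of the "puppets" signal: compare how often a word appears
    # in the journal to an assumed general-English baseline rate.
    from collections import Counter

    def interest_score(journal_text: str, word: str,
                       baseline_rate: float = 1e-6) -> float:
        """Ratio of the word's rate in the journal to the baseline rate;
        values far above 1 suggest an unusual interest."""
        tokens = journal_text.lower().split()
        rate = Counter(tokens)[word] / max(len(tokens), 1)
        return rate / baseline_rate

    print(interest_score("built two puppets today; the puppets need paint",
                         "puppets"))

A real agent would of course tokenize properly and use corpus-derived baselines, but the underlying comparison is this one.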


*I use quotation marks because some of them appear to be at least 40 years younger than me.

User avatar
Jean
Posts: 2398
Joined: Fri Dec 13, 2013 8:49 am
Location: Switzerland

Re: Future of Artificial Intelligence

Post by Jean »

Henry wrote:
Fri Jun 20, 2025 4:44 am
Ah... anthropomorphization. The first step in Overlord Musk's master plan, getting humanity to voluntarily open the gates of its own empire to Tesla's gathering Bot invasion, accomplished. You know The Great Elon uses the word "legions" when he quantifies the number of Bots he is preparing to unleash on our average-intelligence heads? Every day I fear less for my decaying asshole.
Well, I mean as a peer, instead of as an all-knowing entity.

Henry
Posts: 1082
Joined: Sat Dec 10, 2022 1:32 pm

Re: Future of Artificial Intelligence

Post by Henry »

I know. And on behalf of myself and my Mag 7 stock holdings, thank you.

jacob
Site Admin
Posts: 17167
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77
Contact:

Re: Future of Artificial Intelligence

Post by jacob »

Scott 2 wrote:
Thu Jun 19, 2025 10:54 pm
@jacob - it feels like you are dancing around an AI's development through MHC stages. What then happens upon interface with lesser MHC humans? When can the AI scaffold or downshift higher order concepts? How super human does it need to be?

That's where I'm inclined to think increasing numbers of humans are better off deferring to the tool. If level 13 strategy can be made tactically available to level 11 thinkers, they're going to outcompete their peers.

Maybe AI even lifts them to a previously unattainable level 12, through ongoing access to personalized scaffolding. It could also offer massively faster analysis for those who are already peers at the current MHC level.

But for those who simply can't pass a certain ceiling, why punish them? Put the computers to work and get complexity out of the way. Same if one simply doesn't have processing bandwidth.
MHC stages are a good way to explore the depth (and width, and the resulting context) of "understanding". I think it's easier for me to illustrate the problem with math examples first.

When I was a child (1985 or so), we worked through endless problems of long division and multi-digit multiplication. (It was something like 10-20 problems of e.g. 723*38 or 153608/168 per week, for years.) I don't know what the English term for this kind of homework is, but we called it "column-calculating" because it happened in notebooks with rows and columns. The meta-lesson was to memorize the multiplication table. The other lesson, which wasn't taught explicitly, was being able to estimate what the result should be, e.g. 723*38 is around 28000.

Then pocket calculators arrived on the scene. We weren't allowed to use them until 8th or 9th grade, and even then only in some cases. There was still an exam to test whether we knew the basics. (There were two tests in 9th grade: skill-math w/o the calculator and text-problem math with the calculator.)

Later, long after I graduated, laptops with math suites arrived. This is when what I would consider "problems" began to appear.

The first example was premed students who couldn't calculate the right medicine dosage. They would happily prescribe a 1,400,000 mg pill "because that's what the calculator said". I hope they never graduated and started practicing for real.

Later I came across some modern-age high school math that involved doing an exponential curve fit to some data. At first I was highly impressed, because using linear regression after transforming nonlinear data is very fancy stuff if you do it by hand or by calculator---something that would normally only be asked of STEM-level freshmen. Only it turned out that the actual job was just to enter the data into a program and select "fit exponential" from the menu. These students had zero clue about what they were doing, whereas in my time a problem like that would be done by plotting the data on semi-log paper and eyeballing it for a decent estimate.
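
For reference, the semi-log trick jacob describes is only a few lines when done numerically. A minimal sketch with made-up data, assuming numpy is available:

    # Fit y = A * exp(b*x) by linear regression on log-transformed data,
    # i.e. ln(y) = ln(A) + b*x -- the semi-log-paper method, done numerically.
    import numpy as np

    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])      # illustrative data,
    y = np.array([2.1, 5.8, 16.2, 44.0, 121.5])  # roughly 2*exp(x)

    b, log_A = np.polyfit(x, np.log(y), 1)  # slope and intercept of ln(y) vs x
    A = np.exp(log_A)
    print("fit: y ~ %.2f * exp(%.2f * x)" % (A, b))
    # Eyeball check, as on semi-log paper: y grows by ~e per unit x, so b ~ 1.

The point of doing it this way at least once is that the student sees why a straight line on semi-log paper means exponential growth.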

The key difference is that, I like to think, with my dinosaur education we understood what we were doing, whereas with the more modern methods, students are just pushing buttons w/o knowing what they're doing.

I agree that the tool allows people to do more, but it also makes them less skilled and thus less able to catch and correct mistakes.

There's a difference between using a tool to augment existing skills or to use a tool to substitute for having to learn those skills. In the former case, the person can now do more things faster. In the latter case, they can do the same thing more easily but with more risk of making mistakes.

Returning to MHC, the goal of modernist education was to raise humans from roughly MHC7 (the learning goal for age 4ish) over MHC9 (age 8ish) to MHC11 (age 14ish). Insofar as LLM-AIs turn MHC-assistance into a product, I predict that the same will happen to the ability to abstract and think as happened to people's ability to do math in their head. Instead of the majority of adults functionally living at the level of a bright 14yo (MHC11), the average human may regress to leaning on LLM-AI to provide them with a "narrative", perhaps w/o even being able to tell their own. That's back to MHC7 ... or in any case, a step backwards rather than a step forwards.

You could end up with the average adult not being able to mentally grasp the concept of "if this, then that" or that "rules that apply to everybody also apply to me". This is simple stuff, but the brains of many adults already don't spontaneously fire off the neurons to make these connections. All they have is a good narrative. A story they've been told. But the connections were never made.

I do see the appeal though. However, the appeal is the problem. The appeal to me would be to get answers to problems I currently struggle with. Perhaps LLM-AI will provide the answer to the metacrisis. I worry what this will do to me though. I might stop searching for answers because it's easier to just ask. But if "searching" or the "ability to search" is the answer, then I just screwed myself strategically.

(An engineering example would be depending on a library function that, unknown to the programmer, is not up to the task, let's say a matrix solver. The programmer doesn't know how the library is written and also has no idea how to write such a library or how a matrix solver even works. They're just used to calling a function and trusting the results it spits out. Maybe this is why planes keep falling out of the sky? IOW, if we educate a generation of vibe-coders, they might eventually not be able to deal with a novel fizzbuzz-test-type problem.)
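
To make the matrix-solver case concrete (my illustration, not jacob's): a numerical library will happily return an answer for a nearly singular system, and only a check the trusting caller never makes reveals that the answer may be garbage. A sketch assuming numpy and scipy:

    # Blindly trusting a solver: the call succeeds without error or warning,
    # yet the answer can be numerically worthless. An ill-conditioned Hilbert
    # matrix makes the point.
    import numpy as np
    from scipy.linalg import hilbert

    A = hilbert(12)          # condition number around 1e16
    x_true = np.ones(12)
    b = A @ x_true           # right-hand side with a known exact solution

    x = np.linalg.solve(A, b)                        # no error, no warning
    print("max error:", np.max(np.abs(x - x_true)))  # can be of order 1
    print("condition number:", np.linalg.cond(A))    # the check that gets skipped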

A more pedestrian example would be recreating the US manufacturing industry. While we have the "information", a lot of the knowledge, experience, and worker-habits required to implement it is just gone... lost. We have the blueprints for the Saturn V rockets, but today we actually cannot rebuild a Saturn V rocket, because some methods were never documented and nobody now knows, or knows how to figure out, how to make a certain plastic for a certain gasket.

I'm not that worried for the current generation of humans who learned how to think for themselves. I'm worried for the next generations of humans who may decide that it would be easier just to outsource the pain and hassle of having to think and as a result eventually end up unable to do it.

Given the plasticity of the brain, we're not totally immune just because we learned it once in the past. I have definitely acquired some "google brain" in that I used to be much stronger at Trivial Pursuit or Jeopardy-type games than I am now. With search engines, I simply don't need to practice remembering things. Fortunately, learning to remember things did once come with the framework of knowing how to look them up. If I punt that to LLM-AI as well, what mental capacity will I have lost 10-20 years from now? I fear I will have regressed to age 4ish: knowing how to put on my pants but otherwise asking mom/dad for help with everything beyond that.
