jacob wrote:
And likewise, will e.g. "average IQ AI" do anything beyond replacing office workers with half an associate degree with an "IT system" or replace $40,000/year taxi and bus drivers with $40,000/year robo-drivers? Of course if I was a bus driver or investing in AI-bus systems, it would make a difference for me. As a passenger, I don't see much difference.

I think the consensus is that AI is already at the level of a bright college graduate who intermittently takes LSD while on the job. I think you are actually indirectly making the point through your argument, because the upheaval will be based on no longer being able to hold your roles as worker, consumer, investor, etc. as independent from each other, because in each and every role you will be in competition with AI.
Future of Artificial Intelligence
Re: Future of Artificial Intelligence
Well, TSLA is a trillion-dollar company specializing in autonomous transportation, and to my knowledge it hasn't announced a solar-powered fat-tire scooter product. So I think whatever the market is for self-driving vehicles for affluent old Americans, it's greater than the solar-powered fat-tire scooter market, whether it's in Africa or any other place.
And it just seems to me that if you can decrease the cost, danger, and energy consumption of the millions of vehicles that travel our outdated and too-expensive infrastructure, that's better than maintaining the status quo.
Re: Future of Artificial Intelligence
@Henry:
Sure, but that's small potatoes in the possibility window. "It is easier to imagine the end of the world than the end of capitalism." -Fredric Jameson, "The Seeds of Time."
Re: Future of Artificial Intelligence
I'm finding that, with the novelty gone, most of my AI use is lazy information searches. My life doesn't demand critical thought very often, I guess?
I did try tasking it with finding my next credit card churn. ChatGPT did an abysmal job at that. Same with getting me the best price on cat litter, or identifying the best coupon for my latest online purchase. The special-purpose tools were better in all cases.
I didn't trust the AI search on Instacart. I dug around until I could find the category menus instead, because I didn't want to miss any options. I also tend to favor individual Reddit threads over Google AI summaries. By the time I've left ChatGPT, I've abandoned interest in an AI answer.
Re: Future of Artificial Intelligence
Scott 2 wrote:
I did try tasking it with finding my next credit card churn. Chat GPT did an abysmal job at that. Same with getting me the best price on cat litter. Or identifying the best coupon for my latest online purchase. The special purpose tools were better in all cases.

Well, I think most humans with bright-college-graduate-level IQ would also slack off if given such boring assignments. Pretty much like the episode of Seinfeld where Mr. Pitt makes Elaine go sock shopping for him.
Re: Future of Artificial Intelligence
The end of the world, I know where I'm going. The end of capitalism, not sure, but at least I won't have to drive myself.
Re: Future of Artificial Intelligence
As I try to listen to all of Nate Hagens's podcasts, now that @7Wannabe5 turned me onto his great website and content, it seems relevant to share a recent one he just posted from his "Frankly" series of short reflections on current topics.
https://www.thegreatsimplification.com/ ... aces-of-ai
The archetypes of AI relationships with society.
Re: Future of Artificial Intelligence
@Stasher:
I believe Axel Heyst originally introduced Nate Hagens to the forum. I just yak about stuff I like more than he does.
Re: Future of Artificial Intelligence
A problem with Nate Hagens, to me, is his credibility. Last year, he invited a fellow who was making some truly bizarre claims about the war in Ukraine, e.g. that Ukraine is on its last legs and Russia will take Kiev by Dec 2024. He was backing that up with statistics that can only be heard in Russian propaganda. Nate was mostly nodding his head in agreement. The whole episode sounded like some kind of heavy conspiracy-theory podcast - the guest claimed to have a lot of knowledge that contradicts everything the public knows about the war.
Re: Future of Artificial Intelligence
@zbigi:
I think most podcast hosts who try to cast a wide net toward not forming an echo chamber will occasionally have less-than-credible guests. Since I am a long-time reader, and a very-short-time, somewhat skeptical podcast consumer, I almost always immediately research the background, books, and articles written by hosts and guests in order to gauge credibility. I find Nate Hagens's guests to be largely credible, but often varying in terms of their preferred approach to the meta-crisis or related matters. For example, his guests who have an emotional approach to the destruction of nature are quite different from his guests who have a rational approach to national debt levels.
Re: Future of Artificial Intelligence
A joke from Hacker News on AI replacing developers:
For those unaware, COBOL is one of the earliest programming languages, conceived in 1959. It made programming much easier compared to the alternatives existing at the time.

"Great news, boss! We invented this new tool that allows nontechnical people to write code in English! Now anyone can deploy applications, and we don't have to hire all those expensive developers!"
"Wow, show it to me!"
"OK here it is. We call it COBOL."
I think the joke is great because it contains a kernel of truth - ultimately, the tool doesn't matter; you still need technically minded people who are able to explain to the computer the logic that the program needs to follow. Whether you do it in assembly, COBOL, Java, or a million ChatGPT prompts is secondary.
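To make the point concrete with a toy example of my own (the "senior discount" rule below is purely hypothetical, not from any post in this thread): even a one-sentence requirement stated "in English" forces someone to pin down edge cases, and that pinning-down is the technical work, whatever language it lands in.

```python
def ticket_price(age: int, base: float = 10.0) -> float:
    """'Seniors get 30% off' sounds simple in English, but someone still
    has to decide: senior starting at what age? rounded how? can age be
    negative? Those decisions are the real programming, whether they end
    up expressed in COBOL, Java, or an LLM prompt."""
    if age < 0:
        raise ValueError("age cannot be negative")
    # Arbitrary choices made explicit: senior means 65+, price rounds to cents.
    return round(base * 0.7, 2) if age >= 65 else base

print(ticket_price(70))  # 7.0
print(ticket_price(30))  # 10.0
```

The sketch just illustrates the thread's point: the "nontechnical people can code in English" pitch always smuggles in someone who resolves the ambiguity.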
Re: Future of Artificial Intelligence
MIT just released a new study, "Your Brain on ChatGPT."
Project/study website: https://www.brainonllm.com/
They had three groups - one that used LLMs, one that used search engines, and one that was brain-only - engage in writing exercises. They consistently found a significant difference in neural connectivity between the LLM group and the latter two. They found that the vast majority of the ChatGPT group couldn't quote anything from essays they had written just a few minutes earlier. Brain scans showed that writing an essay without assistance (brain-only) led to far more neural activity than assisted writing (with ChatGPT). Later, when the ChatGPT group was asked to write unassisted, they performed worse than others who hadn't used LLMs, which suggests possible deterioration of cognitive ability with continued use. Very intriguing, to say the least.
-
- Site Admin
- Posts: 17137
- Joined: Fri Jun 28, 2013 8:38 pm
- Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77
- Contact:
Re: Future of Artificial Intelligence
Color me not surprised ...
Whereas Google turned "looking up facts" into a product, thus making it easy not to have to remember anything except how to find it, chatGPT is doing the same thing for "thinking", or "connecting the facts". I fear humanity will soon reach a state where the default changes from "it's on the internet so it must be true" to "but my AI says that ..."
It can easily become the case that AI is not so much augmenting human brain capacity as substituting for it. Losing command of basic facts about the world is one thing---and that's already bad enough---but losing the ability to connect those facts and draw conclusions? Yikes!
Re: Future of Artificial Intelligence
The article appeals to me. I want to identify as a "brain-to-LLM" user who will accelerate my learning with better tooling. But maybe that's clinging to an old paradigm, where the "hard" stuff validates my strengths.
Memorizing information HAS become dramatically less valuable. My brain is less reliable than the 24/7 internet, so I tend not to bother. If I retain something through use, that's cool. But 20 years ago I'd really make an effort to know, before I appreciated that retention has a half-life. Unused information isn't durable anyway. Now the skill is in validating sources - knowing what fails the "smell" test.
What if the LLM is better at abstract thought and inference than we are? Maybe the landscape of useful analytical skill shifts? Then the exercises we use to validate personal development need to shift as well. Essays always felt dumb to me. That those trained in the new paradigm suck at the old paradigm isn't groundbreaking. Now the average person becomes a leader who needs to quickly evaluate expert recommendations, create competing initiatives, use accelerated feedback loops to iterate, and ask probing questions that look 3-4 steps ahead.
Admittedly, there's a big difference between automating something you understand and blindly trusting the box. For someone starting their education from scratch with these tools, the potential for ignorance is high.
Re: Future of Artificial Intelligence
I rather disagree with this. I think not knowing facts off the top of my head diminishes the quality not only of my thinking but also of my creativity. I may be taking a Luddite position on this, but in my opinion we started losing it with PowerPoint already. Rather than having to know one's stuff, one could just read it off the slides. At that point, substance was already getting replaced with performance. I grant that this may be a feature or a bug depending on what kind of problems one is trying to solve.
The difference between the two is like the difference between someone who follows the directions of Google Maps and someone who knows how to navigate. The former is faster and more convenient, but when things break down and people get lost, it's clear that only the latter approach actually works, in the sense that only the navigator knows where they are. The former is just following directions.
For example, I don't think I should trust or value the political opinion of someone who doesn't know e.g. the significance of Jun 6th, 1944, or e.g. the percentage of immigrants in the US vis-a-vis Germany, or e.g. the population of Russia or Iran, or the size of the US GDP and the size of the federal budget. These "facts" are pretty easy to look up, but in practice nobody habitually does so. Given this ignorance, a fact-blind mind has no context for understanding ... well, much of anything, unless "they do their research". For example, if someone claims that "this new bill will save the government 37 billion dollars", then without knowing how big the actual budget is, one has no way of knowing whether that's a huge sum or a relative pittance.
Yes, it "possible" to outsource this validation to a trusted source. Lets suppose Politician XYZ claims "37 billion is a lot", then that basically comes down to whether I trust XYZ. Whereas if I know the actual size of the budget, I can apply my own logic to see if the claim is objectively consistent and coherent with other facts. Being able to causally connect statements to a framework of known facts seems to be a far more reliable way of verifying truths than relying on some authority. It also allows me to fill in the blanks.
Another big problem with being fact-weak is the risk of Dunning-Krugering oneself. In order to look things up, one needs to know what to look for. If not, we get confirmation bias. As a result, people pretend to debate each other but in reality none of them know what they're talking about as they're just looking up talking points on the internet. Performance over substance.
In terms of creativity, intuition basically runs on facts and their connections. I don't know about LLMs, but sitting down in front of Google will certainly not result in creative ideas. Creativity seems to happen subconsciously by connecting something I know with something else I know. But if I increasingly know less and less ...
Re: Future of Artificial Intelligence
I recently started playing with chatGPT, partly due to reading some of the conversations Candide was having with it re: introspection.
The tl;dr is that in a few days of feeding it “dots” of visions and ambitions for my life, as well as some context on my personality and way of thinking, it was creating compositions of connected dots phrased as narrative arcs from three, five, ten years into the future. These compositions were *extremely* compelling to me (it made several in-retrospect natural connections that I hadn’t made yet) and made me feel ‘seen’ in a way I’ve never felt seen by a human before (unsettling). I was also impressed by how well it was able to create stuff outside my Overton window - I gave it direction to push the envelope, bonus points for weird/eccentric etc, and it made some very interesting compositions the like of which I’m not sure I’d be able to come up with myself. Really blew me away.
So, obviously, this stuff is more dangerous to me than hard drugs. It’s obvious how tempting it’d be to hand off lifestyle ideation to the Great Bot In the Sky, with the result that my meatbrain capacity would atrophy to nothing. I absolutely felt the beginnings of cognitive outsourcing syndrome and it was frankly terrifying… possibly a tiny foretaste of what certain kinds of cognitive degenerative diseases feel like?
However, it did inspire me to get better at out of bounds ideation myself. In a way it widened my own Overton window in terms of the process of lifestyle ideation - it’s almost like I’ve got a new benchmark in terms of ideation output. Not a lot of people are sharing their dope life vision documents, so there isn’t much to copy in the way of patterns of composition. My experiments with GPT provided a ton of fodder in terms of what my own visions could even look like (the patterns) and I feel like I’m better off for that.
But I definitely need to not become a regular user - this experience confirmed my previously casual position on that.
(It also made me grok how consumer culture is going to increasingly harness LLMs to accelerate the ultimate aim of consumer culture, just like every other powerful tool that’s come along, which is to get people to consume more resources. :/ )
Re: Future of Artificial Intelligence
AxelHeyst wrote: ↑Wed Jun 18, 2025 5:00 pm
So, obviously, this stuff is more dangerous to me than hard drugs. It’s obvious how tempting it’d be to hand off lifestyle ideation to the Great Bot In the Sky, with the result that my meatbrain capacity would atrophy to nothing. I absolutely felt the beginnings of cognitive outsourcing syndrome and it was frankly terrifying… possibly a tiny foretaste of what certain kinds of cognitive degenerative diseases feel like?

There's something called "chatGPT-induced psychosis" in which the AI basically mirrors(*) back whatever ideation the user presents it with, in a way that leads increasingly deep into a rabbit hole. People have lost relationships after coming to believe that they're Neo from the Matrix or that they can literally fly if only they disbelieve the laws of gravity strongly enough. I mean, if you have something or someone who appears to know you very well and basically encourages every tangent of every thought w/o any guardrails, it's not hard to see what could possibly go wrong. It's like the ultimate gaslight psy-op. A DIY kit for building your own bespoke conspiracy in which you are the hero.
(*) The mirror-effect is extremely strong. I'm not an expert on classical psychotherapy, but something-something about Pygmalion falling in love with his own statue.
Re: Future of Artificial Intelligence
FWIW, whenever I feel critical about AI and its use, I ask myself what AI can do for a human that another human can't already do. IOW, what if it was suddenly revealed that there really isn't any AI, but it's all a big hoax in which a gazillion investment dollars were spent on a giant call center in India where a few million people are ready to answer any question?
IF that turned out to be true ... would it change anything? What I'm fishing for here is that if humans are merely replacing other humans with AI but not really gaining any functionality except a cheaper substitute, I should worry less.
Re: Future of Artificial Intelligence
jacob wrote: ↑Wed Jun 18, 2025 5:23 pm
FWIW, whenever I feel critical about AI and its use, I ask myself what it is that AI can do for a human that it's not already the case that a human can do for a human. IOW, what if it was suddenly revealed that there really isn't any AI but it's all a big hoax in which a gazillion investment dollars were spent on a giant call center in India where a few million people are ready to answer any question?
IF that turned out to be true ... would it change anything? What I'm fishing for here is that if humans are merely replacing other humans with AI but not really gaining any functionality except a cheaper substitute, I should worry less.

In case you aren't already aware, this has already happened. London-based company Builder.AI went bankrupt recently after others discovered that their "AI agents" were really just a team of 700 engineers in India. They were backed by Microsoft and the Qatari wealth fund, the former investing $455 million (!!).
https://www.techspot.com/news/108173-bu ... neers.html
Re: Future of Artificial Intelligence
@jacob - the art is in filtering what we retain down to the essentials that support constructive decision-making. Some information is worth knowing. Most can be discarded.
I've been driving to my orthodontist for 3 years. I couldn't tell someone how to get there. It's not relevant. Any energy I might have put into directions or mapping goes to enjoying my latest audiobook. Blindly following GPS gives me bandwidth for other mental processing - upgrading my software. In practice, the GPS doesn't really break. If it did? Dunno. Guess I'd wander till I recognized something.
I'm also skeptical that most humans are capable of true creative inference. Whatever they make up is likely naive and significantly inferior to established experience in a field. Most problems have been encountered and solved. The majority will do better to search and execute. It's humbling (hard, even) to say "I suck at this and should defer."
@Axel - my experience with GPT as a conversation partner is that the novelty wears off. There's a very specific voice. It becomes repetitive. After a while, it feels like the tool is restating the obvious - giving permission to believe what you can already infer or intuitively know. I've found it more useful as a critic than as a source of ideation. If anything, it helps me get over the mindset that I could be even smarter about what I'm doing; instead it moves me toward trying something and seeing what happens.