Scott 2 wrote: ↑Thu Jun 19, 2025 10:54 pm
@jacob - it feels like you are dancing around an AI's development through MHC stages. What then happens upon interface with lesser MHC humans? When can the AI scaffold or downshift higher order concepts? How super human does it need to be?
That's where I'm inclined to think increasing numbers of humans are better off deferring to the tool. If level 13 strategy can be made tactically available to level 11 thinkers, they're going to outcompete peers.
Maybe AI even lifts them to a previously unattainable level 12, through ongoing access to personalized scaffolding. It could also offer massively faster analysis, for those who peer at the current MHC level.
But for those who simply can't pass a certain ceiling, why punish them? Put the computers to work and get complexity out of the way. Same if one simply doesn't have processing bandwidth.
MHC (Model of Hierarchical Complexity) stages are a good way to explore the depth (and width and the resulting context) of "understanding". I think it's easier for me to illustrate the problem with math examples first.
When I was a child (1985 or so), we worked through endless problems of long division and multi-digit multiplication. (It was something like 10-20 problems of e.g. 723*38 or 153608/168 per week for years.) I don't know what the English term for this kind of homework is, but we called it "column-calculating" because it happened in notebooks with rows and columns. The meta-lesson here was to memorize the multiplication table. The other lesson, which was never taught explicitly, was to be able to estimate what the result should be, e.g. that 723*38 is around 28000.
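To make that estimation habit concrete, here's a minimal Python sketch; rounding each factor to one significant figure is my illustration of the habit, not anything we were formally taught:

```python
# The mental estimate: round each factor to one significant figure,
# multiply, and use the result as a sanity check on the exact answer.
exact = 723 * 38             # 27474
estimate = 700 * 40          # 28000
relative_error = abs(exact - estimate) / exact
print(exact, estimate, f"{relative_error:.1%}")   # 27474 28000 1.9%
```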
Then pocket calculators arrived on the scene. We weren't allowed to use them until 8th or 9th grade, and even then only in some cases. There was still an exam to test whether we knew the basics. (There were two tests in 9th grade: skill-math w/o the calculator and text-problem math with the calculator.)
Later, long after I graduated, laptops with math suites arrived. This is when what I would consider "problems" began to appear.
The first example was premed students who couldn't calculate the right medicine dosage. They would happily prescribe a 1,400,000 mg pill, "because that's what the calculator said". I hope they never graduated and started practicing for real.
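What the calculator can't supply is the plausibility check. Here's a hypothetical sketch of what that check looks like in code; the per-kg dose and the 1000 mg cutoff are made up for illustration, not real pharmacology:

```python
# Hypothetical dosage calculation: dose = body weight * dose per kg.
# The plausibility bound is invented for illustration; real limits
# would come from the drug's actual safety data.
def dose_mg(weight_kg, mg_per_kg):
    dose = weight_kg * mg_per_kg
    # The sanity check the students skipped: no single pill
    # plausibly contains more than ~1000 mg of active ingredient.
    if not 0 < dose < 1000:
        raise ValueError(f"implausible dose: {dose} mg")
    return dose

print(dose_mg(70, 5))       # 350 mg, plausible
print(dose_mg(70, 20000))   # raises ValueError: implausible dose: 1400000 mg
```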
Later I came across some modern-age high school math that involved doing an exponential curve fit to some data. At first I was highly impressed, because using linear regression after transforming nonlinear data is very fancy stuff if you do it by hand or by calculator---something that would normally only be asked of STEM-level freshmen. Only it turned out that the actual job was just to enter the data into a program and select "fit exponential" from the menu. These students had zero clue about what they were doing. Whereas in my time, a problem like that would be done by plotting the data on semi-log paper and eyeballing it for a decent estimate.
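For contrast, here's roughly what the by-hand method actually does, as a Python sketch with made-up data (assuming numpy; a real assignment would of course use real measurements):

```python
import numpy as np

# Fitting y = a * exp(b*x) "by hand": take the log of y, which turns
# the curve into the straight line log(y) = log(a) + b*x. A least-
# squares line fit (or semi-log paper and a ruler) then gives a and b.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * np.exp(0.5 * x)                # synthetic data for illustration

b, log_a = np.polyfit(x, np.log(y), 1)   # slope, intercept
a = np.exp(log_a)
print(a, b)                              # recovers a ~ 2.0 and b ~ 0.5
```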
The key difference is that with my dinosaur education, I like to think we understood what we were doing, whereas with the more modern methods, students are just pushing buttons w/o knowing what they're doing.
I agree that the tool allows people to do more, but it also makes them less skilled and thus less able to catch and correct mistakes.
There's a difference between using a tool to augment existing skills and using a tool to substitute for having to learn those skills. In the former case, the person can now do more things faster. In the latter case, they can do the same thing more easily but with more risk of making mistakes.
Returning to MHC, the goal of modernist education was to raise humans from roughly MHC7 (the learning goal for age 4ish) over MHC9 (age 8ish) to MHC11 (age 14ish). Insofar as LLM-AIs turn MHC-assistance into a product, I predict that the same will happen to the ability to abstract and think as happened to people's ability to do math in their head. Instead of the majority of adults functionally living at the level of a bright 14yo (MHC11), the average human may regress, leaning on LLM-AI to provide them with a "narrative", perhaps w/o even being able to tell their own. That's back to MHC7 ... or in any case, it's a step backwards rather than a step forwards.
You could end up with the average adult not being able to mentally grasp the concept of "if this, then that" or that "rules that apply to everybody also apply to me". This is simple stuff, but the brains of many adults already don't spontaneously fire off the neurons to make these connections. All they have is a good narrative. A story they've been told. But the connections were never made.
I do see the appeal, though, and the appeal is exactly the problem. The appeal to me would be to get answers to problems I currently struggle with. Perhaps LLM-AI will provide the answer to the metacrisis. But I worry what this will do to me. I might stop searching for answers because it's easier to just ask. And if "searching" or the "ability to search" is the answer, then I just screwed myself strategically.
(An engineering example would be depending on a library function that, unbeknownst to the programmer, is not up to the task, let's say a matrix solver. The programmer doesn't know how the library is written and also has no idea how to write such a library or how a matrix solver even works. They're just used to calling a function and trusting the results it spits out. Maybe this is why planes keep falling out of the sky? IOW, if we educate a generation of vibe-coders, they might eventually not be able to deal with a novel fizzbuzz-test type problem.)
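To stay with the matrix solver: here's a sketch (Python/numpy, with a nearly-singular matrix made up for illustration) of the minimal self-check someone who understands the math would add, and that the button-pusher doesn't know to ask for:

```python
import numpy as np

# Don't just trust the solver: check the residual ||Ax - b|| and the
# condition number. solve() happily returns an answer even when the
# matrix is so ill-conditioned that the answer is meaningless.
A = np.array([[1.0, 2.0],
              [2.0, 4.0000001]])    # nearly singular, for illustration
b = np.array([3.0, 6.0])

x = np.linalg.solve(A, b)
residual = np.linalg.norm(A @ x - b)
print(x, residual)                  # residual is tiny...
print(np.linalg.cond(A))            # ...but cond(A) ~ 1e8, so tiny noise
                                    # in b swings x by orders of magnitude
```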
A more pedestrian example would be recreating the US manufacturing industry. While we have the "information", a lot of the knowledge, experience, and worker-habits required to implement it is just gone... lost. We have the blueprints for the Saturn V rockets, but today we actually cannot rebuild a Saturn V rocket. The reason is that some methods were never documented, and nobody now knows, or knows how to figure out, how to make a certain plastic for a certain gasket.
I'm not that worried for the current generation of humans who learned how to think for themselves. I'm worried for the next generations of humans who may decide that it's easier just to outsource the pain and hassle of having to think and, as a result, eventually end up unable to do it.
Given the plasticity of the brain, we're not totally immune either, just because we learned it once in the past. I have definitely acquired some "google brain" in that I used to be much stronger at Trivial Pursuit or Jeopardy-type games than I am now. With search engines, I simply don't need to practice remembering things. Fortunately, learning to remember things did come with the framework of knowing how to look them up. If I punt that to LLM-AI as well, what mental capacity will I have lost 10-20 years from now? I fear I will have regressed to age 4ish: knowing how to put on my pants but otherwise asking mom/dad for help with everything beyond that.