daylen wrote: ↑Fri Jan 31, 2020 1:11 pm
0: null
1: heap
2: stacks and queues
3: sorters and finders
4: space and time measures
5: error estimator
Sounds like the CCCCCC model (or many other models, like Dreyfus). It's interesting how you have some of the levels the other way around from the way I did it, and how it ultimately heads in a different direction. This might mean something (beyond making models out of mashed potatoes) and suggest differences in our respective thinking-modes or influences.
My order was:
Copying
Comparing
Compiling
Calculating
Coordinating
Creating
A guiding principle was that the previous level provided necessary but not sufficient ingredients for the next level. I didn't consider the brain-dead nulls, but we agree on the heap~copying. Cells can copy (maybe that's as good a definition of life as any?!). Simple machine language (CPUs) has registers. However, you put stacks and queues second whereas I put comparing, or the <-functionality, second (!= might also be considered, but that's nitpicking). It's interesting because when I made the CCCCCC model, I was not familiar with all the ordered or unordered lists, queues, or hash-sorted splendor that computing has invented. I was thinking of simple filters, like transistors, in which one compares two variables. That's also saying that I consider an ordered list more fundamental than the abstract concept of a list. Hmmm ...
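To make that copy -> compare -> compile chain concrete, here's a minimal sketch (my own illustration, not part of either model; the function names are made up): an ordered list gets compiled out of nothing but a copy primitive and a two-input comparator.

def copy(x):
    # Copying level: reproduce a value, nothing more.
    return x

def compare(a, b):
    # Comparing level: a transistor-like filter that only answers "is a < b?".
    return a < b

def compile_ordered(values):
    # Compiling level: build an ordered list using only copy and compare
    # (a plain insertion sort).
    ordered = []
    for v in values:
        v = copy(v)
        i = 0
        while i < len(ordered) and compare(ordered[i], v):
            i += 1
        ordered.insert(i, v)
    return ordered

print(compile_ordered([70, 1, 42, 7]))  # -> [1, 7, 42, 70]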
Space and time measures suggest to me the concept of a "measuring stick" or a standard. When we say something weighs 70 kg, what do we really mean? Well, we identify "1 kg", which is a standard weight held somewhere in France (with copies around the world). In order to understand what 70 means, we need to compile a list and sort it by comparing numbers: this is how we count 1 2 3 ... 69 70, generating ordinal numbers. We also identify 70, which means we have to copy that French reference weight 70 times and compare each copy to the reference. Then we have to put the 70 kg object on one side of a comparing balance and our 70 reference weights on the other. We are now calculating (following an algorithm of lists, comparison, and copying functions).
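A toy version in code, to pin the walkthrough down (my sketch; the idealized balance and the exact 1.0 reference are illustrative assumptions, not part of the argument):

REFERENCE_KG = 1.0  # the standard weight held somewhere in France

def measure(object_mass_kg):
    pile = []                          # compiling: a counted list of copies
    while sum(pile) < object_mass_kg:  # comparing: which side of the balance is heavier?
        pile.append(REFERENCE_KG)      # copying: one more copy of the reference
    return len(pile)                   # calculating: the ordinal count is the measurement

print(measure(70.0))  # -> 70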
The error estimator is, in my framework, a meta-level comparator. This is basically what allows feedback: to have feedback, you need to compare against a reference copy and change behavior accordingly. It does not necessarily require a list.
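In code, the bare-bones version of that loop might look like this (again my own sketch, with made-up names and an arbitrary gain, not anything from the thread): the error estimator compares the current state to the stored reference, and the difference drives the change in behavior.

def feedback_step(state, reference, gain=0.5):
    error = reference - state    # the comparator: state vs. the reference copy
    return state + gain * error  # change behavior in proportion to the error

state = 0.0
for _ in range(10):
    state = feedback_step(state, reference=70.0)
print(round(state, 2))  # has climbed most of the way to 70.0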
Bateson did a lot of work in that regard. His learning levels are basically about error-feedback with turtles all the way down: errors, errors of errors, errors of errors squared, etc. Like a Keynesian beauty contest, or playing poker (what do they think that I think that they think ...). So to Bateson, error-estimating is the key to the entire structure (the Hegelian constructor) ... at least from the point beyond copying, comparing, and compiling.
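One more level of turtles, sketched the same way (my illustration of the errors-of-errors idea, not Bateson's own formalism; the 1.5 factor and the 0.5 threshold are arbitrary): a second comparator watches whether the first-level error is shrinking, and adapts the gain when it isn't, so the loop learns something about its own learning.

def run(reference=70.0, steps=20):
    state, gain, prev_error = 0.0, 0.1, None
    for _ in range(steps):
        error = reference - state                    # first-order error
        if prev_error is not None and abs(error) > 0.5 * abs(prev_error):
            gain = min(gain * 1.5, 1.0)              # error of the error adjusts the behavior-changer
        state += gain * error
        prev_error = error
    return state

print(round(run(), 2))  # converges on 70.0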
Recall:
http://epubs.surrey.ac.uk/1198/1/fulltext.pdf
TL;DR - There are many roads that lead to Rome ... and we have many ways to describe the ways. But it's not clear (to me) what or where Rome is. IOW, we're still talking more about the means to the destination (a learning process) than about what the destination is.