Daylen's Instinctual Dump

Where are you and where are you going?
Riggerjack
Posts: 3199
Joined: Thu Jul 14, 2011 3:09 am

Re: Daylen's Instinctual Dump

Post by Riggerjack »

Metal would be rare and technology would be very rare.
Consider that metals will not be disposable as they are today, and they will last. So as they become more needed, runs into hot zones will be rich salvage operations. Handworking as skilled labor will keep artifacts in continual reuse. Copper and aluminum will be easily remelted and cast, over and over, with minimal losses.

Also, mineral deposits not currently exploited (covered in ice, or too remote to be worth exploiting) will be by then.

Just a thought.

Jean
Posts: 2384
Joined: Fri Dec 13, 2013 8:49 am
Location: Switzerland

Re: Daylen's Instinctual Dump

Post by Jean »

CKII is a medieval sandbox. You can play it as a grand strategy game, but it's purposefully not balanced, so if you don't enjoy roleplaying (as in living a fake story), you probably won't enjoy it.
Endless Legend is a more classical 4X, just a very good and nice one. I often play it just for its atmosphere.
I have a few hundred hours in both.
I never seriously thought about creating a videogame, because you'd need too big a team. I've always been more drawn toward a tabletop game, because I'd only need an illustrator to team up with.

daylen
Posts: 2646
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Daylen's Instinctual Dump

Post by daylen »

Interesting points all around. I am a bit tired today, so I am chilling and thinking about how to construct a 'Standard Cognition Model' or SCM. Here is what I have so far..

SCM consists of two parts: a descriptive chart and circuits. The chart intersects (T, F, N, S) with a scale (0-5). Here are the labels for now..
T: parsers (tokens and trees)
F: tests (regression and classification)
N: representations (clustering of sensations)
S: sensations

0: null
1: heap
2: stacks and queues
3: sorters and finders
4: space and time measures
5: error estimator

So, T1 is just a crudely organized pile or heap of parsers (unconscious access), T2 is T1 with stacks and queues of parsers, T3 is T2 with sorters and finders, T4 is T3 with space and time measures, and T5 is T4 with an error estimator. All together, T5 can estimate parsing error, from spacetime measures, with sorters and finders, for stacks and queues, in a heap of parsers.
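One very loose way to picture that nesting in code (my own sketch of the idea, not the SCM itself; all class and method names are illustrative): each level wraps the one below it with one more capability, ending with an error estimator on top.

```python
import time

class T1:  # a crudely organized pile (heap) of parsers
    def __init__(self, parsers):
        self.parsers = list(parsers)

class T2(T1):  # adds stacks and queues of parsers
    def __init__(self, parsers):
        super().__init__(parsers)
        self.stack, self.queue = [], []

class T3(T2):  # adds sorters and finders over the heap
    def find(self, name):
        return next((p for p in self.parsers if p.__name__ == name), None)

class T4(T3):  # adds space and time measures
    def timed(self, parser, text):
        start = time.perf_counter()
        result = parser(text)
        return result, time.perf_counter() - start

class T5(T4):  # adds an error estimator using everything below
    def estimate_error(self, parser, samples):
        # Fraction of sample inputs the parser fails on -- a crude estimate.
        failures = sum(1 for s in samples if parser(s) is None)
        return failures / len(samples)

def int_parser(s):
    # Toy parser: succeeds on digit strings, fails (None) otherwise.
    return int(s) if s.isdigit() else None

t5 = T5([int_parser])
assert t5.find("int_parser") is int_parser
assert t5.estimate_error(int_parser, ["1", "x", "22", "y"]) == 0.5
```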

3-5 are based on the common trade-offs between [sorting and searching] and [space complexity, time complexity, and uncertainty]. Motivation for this came from the book "Algorithms to Live By", which describes the trilemma between memory, run time, and uncertainty in computing.
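The sort/search trade-off mentioned there can be shown in a few lines: sorting costs time up front but makes every later search cheaper. A toy illustration (my own, using Python's stdlib `bisect`):

```python
import bisect

data = [37, 4, 18, 92, 55, 61, 7]

# Option A: never sort; every membership test is a linear scan, O(n) each.
def linear_contains(items, x):
    return any(item == x for item in items)

# Option B: pay O(n log n) once to sort, then each search is O(log n).
sorted_data = sorted(data)

def bisect_contains(items, x):
    i = bisect.bisect_left(items, x)
    return i < len(items) and items[i] == x

assert linear_contains(data, 55) and bisect_contains(sorted_data, 55)
assert not linear_contains(data, 8) and not bisect_contains(sorted_data, 8)
```

Which option wins depends on how many searches you expect to do, which is exactly the kind of trade-off the book frames as memory vs. run time vs. uncertainty.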

Extroversion and Introversion can also be intersected for a third dimension discriminating between having and seeking these structures. For instance, Fe3 is a skill that seeks sorters and finders for tests.

The circuits have these structures as elements and serve to model the 16 types somehow. The types in this model are switch cases that describe learning algorithms for the cognitive functions or something like that. After what Jacob said, I realized that I should probably work on my Te, but I do not know how. :lol:

This is obviously extremely abstract right now, and I am not even sure if it makes any sense at all.

daylen
Posts: 2646
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Daylen's Instinctual Dump

Post by daylen »

The descriptions are not really fitting in place with this outline, so I will probably alter it significantly while retaining some elements.

This is largely how my thought process works. Often, I start with the model outline, then work backwards to fill in detail and link to reality. This takes a lot of mental effort, because you must continuously restart from scratch (partially why INTPs are somewhat uncommon). It is a whole lot of fun, though, when you stumble upon an outline where everything just falls into place.

I definitely want to retain and implement circuits, since this is largely what will distinguish SCM from other cognition theories. The primary troubles with the outline above are level 3 (sorters and finders, which conflict with the underlying heap) and extroversion/introversion (which conflicts with the derived circuits). I will try to keep you all updated. Overall, this is a meta experimental log of how INTPs think, with the goal of demystifying it.

-------------

On the topic of developing Te, I need to stop overthinking everything while making an effort to identify objectives. Otherwise, I cannot judge relative trade-offs. To synchronize this with my default tendency to overthink, perhaps it would be best to simultaneously work on a variety of models of varying complexity. This will open up the SCM to a wider audience and make setting objectives easier for me. The first level could be a 1D vector description along with a single circuit, the second level could be 2D, etc. Or, after an atomic 1D model is developed, I could branch off into a tree of models with differing kinds of complexity.

On top of this, a meta Te analysis indicates that this would be in conflict with my current goal of developing coding skills. Therefore this will probably take some time, but eventually, I could develop a Standard Cognition Frame (SCF) with many cognition models and merge them with my newly found coding abilities. Transitioning into a computational frame would allow rapid processing in addition to automatic validation. Much of Ti is debugging, and although I like to think I am fairly good at this, real computers are better.

------------

By the way, I am now offering an editing and debugging service if anyone needs a second pair of eyes. My rate will be negotiable (likely very reasonable.. especially if the topic is interesting). I suppose this is me Te'ing my social capital. :)

The diversity of this forum is awesome, and I should probably start taking advantage of this more while it lasts.
Last edited by daylen on Sat Feb 01, 2020 11:57 am, edited 1 time in total.

jacob
Site Admin
Posts: 17132
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77
Contact:

Re: Daylen's Instinctual Dump

Post by jacob »

daylen wrote:
Fri Jan 31, 2020 1:11 pm
0: null
1: heap
2: stacks and queues
3: sorters and finders
4: space and time measures
5: error estimator
Sounds like the CCCCCC model (or many other models, like Dreyfus). It's interesting how you have some of them the other way around from the way I did it, and then how it's ultimately going in a different direction. This might mean something (beyond making models out of mashed potatoes) and suggest differences in our respective thinking-modes or influences.

My order was:

Copying
Comparing
Compiling
Calculating
Coordinating
Creating

A guiding principle was that the previous level provided required but not sufficient ingredients for the next level. I didn't consider the brain-dead nulls :lol: , but we agree on the heap~copying. Cells can copy (maybe that's as good a definition of life as any?!). Simple machine language (CPUs) has registers. However, you put stacks and queues second, whereas I put comparing, or the <-functionality, second (!= might also be considered, but that's nitpicking). It's interesting because when I made the CCCCCC model, I was not familiar with all the ordered and unordered lists, queues, and hash-sorted splendor that computing has invented. I was thinking of simple filters, like transistors, in which one compares two variables. That's also saying that I consider an ordered list more fundamental than the abstract concept of a list. Hmmm ...

Space and time measures suggest to me the concept of a "measuring stick" or a standard. When we say something weighs 70 kg, what do we really mean? Well, we identify "1 kg", which is a standard weight held somewhere in France (with copies around the world). In order to understand what 70 means, we need to compile a list and sort it by comparing numbers: this is how we count 1 2 3 ... 69 70, generating ordinal numbers. We also identify 70, which means we have to copy that French reference weight 70 times and compare each copy to the reference. Then we have to put the 70 kg object on one side of a comparing balance and our 70 reference weights on the other. We are now calculating (following an algorithm of list, comparison, and copying functions).

Error-estimator is in my framework a meta-level comparator. This is basically what allows feedback. To have feedback you need to compare to a reference copy and change behavior. It does not necessarily require a list.

Bateson did a lot of work in that regard. His learning levels are basically about error-feedback with turtles all the way down. Errors, errors of errors, errors of errors squared, etc. Like a Keynesian beauty contest. Or playing poker (what do they think that I think that they think ... ). So to Bateson error-estimating is the key to the entire structure (Hegelian constructor)... at least from the point beyond copying, comparing, and compiling.

Recall: http://epubs.surrey.ac.uk/1198/1/fulltext.pdf

TL;DR - There are many roads that lead to Rome ... and we have many ways to describe the ways. But it's not clear (to me) what or where Rome is. IOW, we're still talking more about the means to the destination---a learning process---than about what the destination is.

Fish
Posts: 570
Joined: Sun Jun 12, 2016 9:09 am

Re: Daylen's Instinctual Dump

Post by Fish »

jacob wrote:
Sat Feb 01, 2020 11:43 am
"1 kg" which is a standard weight held somewhere in France (with copies around the world).
This doesn’t invalidate the point you were trying to make, but I think you and others here would be interested to know that the kilogram was redefined as of May 2019 and no longer relies on the physical prototype. https://en.wikipedia.org/wiki/Kilogram#Definition

jacob
Site Admin
Posts: 17132
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77
Contact:

Re: Daylen's Instinctual Dump

Post by jacob »

Yeah, I knew that, but it would ruin the pedagogical effect ;-) It's like how the meter was redefined as well going from a stick to an atomic frequency times the speed of light (<- not an easy reference to come by with kitchen utensils!). In "operational practice" this doesn't make the measurement simpler as precision is gained via experimental complexity that definitely requires a lot of calculation. In any case "measuring" definitely requires some way of counting and some way of comparing to a standard.

daylen
Posts: 2646
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Daylen's Instinctual Dump

Post by daylen »

@jacob That post triggers two thoughts.

1. I was thinking the same thing about space and time measures corresponding to standard units that combine into a basis spanning a vector space. It does appear that at least one missing ingredient is a level for comparison operators (like !=, ==, >, <, >=, <=), so I will give that some more thought.

2. I thought about the error estimation being on the meta level, where similar calculations must be iterated first (and so forth for nesting). What I decided to do instead, or in addition (..or is it ultimately the same?), was to include a level where the space and time information, along with lower-level algorithm success data, would be enough to estimate the expected error for whole new calculations that have never been done before. Perhaps this would be done by first estimating a constraint curve in the [memory, run-time, uncertainty] space for the low-level algorithms, then somehow composing and applying it. This is like the coordination step.

Let's see if I can make this more concrete. Suppose you already have implementation data for Bloom filters and brute-force checking (for testing if elements are in a set) that allows you to estimate the constraint curve above. Then you encounter a whole new algorithm that involves a cuckoo filter and use that estimate to predict what the error will be for composite calculations using this new algorithm. Now, obviously some algorithms are more efficient in seemingly every way than others, but what if it is possible to keep track of some other information, or to develop a NN that can predict this using features not yet thought of? I may just be confusing myself.
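For readers who haven't met them: a Bloom filter answers set-membership with a small false-positive rate in exchange for far less memory than storing the set outright. A minimal sketch (my own toy implementation; parameters and hashing scheme are illustrative, not tuned), next to the brute-force baseline:

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k hash positions over an m-bit array."""
    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = 0  # bit array packed into one big int

    def _positions(self, item):
        # Derive k independent-ish bit positions from salted SHA-256 hashes.
        for k in range(self.num_hashes):
            h = hashlib.sha256(f"{k}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.num_bits

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def might_contain(self, item):
        # False => definitely absent; True => probably present.
        return all(self.bits >> p & 1 for p in self._positions(item))

bf = BloomFilter()
exact = set()  # brute-force baseline: exact answers, but stores everything
for word in ["heap", "stack", "queue"]:
    bf.add(word)
    exact.add(word)

assert all(bf.might_contain(w) for w in exact)  # Bloom: no false negatives
assert "parser" not in exact                    # brute force: always exact
```

A cuckoo filter occupies a different point on the same [memory, run-time, uncertainty] constraint curve (it also supports deletion), which is what makes the "predict the error of an unseen algorithm from that curve" idea tempting.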
Last edited by daylen on Sat Feb 01, 2020 2:23 pm, edited 9 times in total.

daylen
Posts: 2646
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Daylen's Instinctual Dump

Post by daylen »

Thinking a bit more.. this could just be equivalent to solving P vs NP in some roundabout way, or it may just be NP-hard and therefore not feasible.

daylen
Posts: 2646
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Daylen's Instinctual Dump

Post by daylen »

I think I know where I am getting confused here. What I described would just be another form of calculation in your model.

daylen
Posts: 2646
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Daylen's Instinctual Dump

Post by daylen »

I have a better idea. We could start with the circuit topologies with the Bateson levels as units to make computational implementation easier. Working on drawings now.
Last edited by daylen on Sat Feb 01, 2020 3:13 pm, edited 1 time in total.

daylen
Posts: 2646
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Daylen's Instinctual Dump

Post by daylen »

[image: circuit topology drawings]

The number of levels in the drawings is arbitrary. What matters is that vertices represent data structures and edges represent transformations. There are many more options, but I intuit that these are the most promising/straightforward.

The bottom figure would imply that there are S+N and F+T structures in addition to S1,S2,S3 .. N1,N2,N3 .. F1,F2,F3 .. T1,T2,T3.. as well as a main loop between S+N and F+T.

In accordance with my previous thoughts on introversion/extroversion, perhaps data structures are introverted and transformations are extroverted (which would actually make them equivalent fundamentally). Another option is that the extroverted versions of each function get their own nest, and introversion corresponds to teaching and extroversion to learning (or the inverse).

jacob
Site Admin
Posts: 17132
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77
Contact:

Re: Daylen's Instinctual Dump

Post by jacob »

(This is a response to the posts before your most recent one.)

I tried [drawing].

Using the basic Bateson feedback mechanism and recursing, I made a mess out of it... all the way down the turtles. I didn't develop any insight beyond "this is a nice principle". Yet, beyond a few levels, it also illustrates the painful difference between human- and god-level---there's just too much to process in practice.

Similar to how it's impossible to effectively grok higher Wheaton or Kegan levels before experiencing them. The structural principles are clear, but the details are lacking! After drawing increasingly convoluted feedback loops from here to infinity, I didn't find, realize, or grok it at a practical or usefully innate level.

OTOH, perhaps it's mainly a matter of key/framework. Maybe my "generator" is weak.

I'm still thinking about your recent posts. Just wanted to interject.

daylen
Posts: 2646
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Daylen's Instinctual Dump

Post by daylen »

What if the goal was not to model how humans think they think, but to develop a computational model that fits communication data in order to give humans insight into how an algorithm thinks they think (an algorithm developed from a whole lot of humans thinking over a long period of time)?

jacob
Site Admin
Posts: 17132
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77
Contact:

Re: Daylen's Instinctual Dump

Post by jacob »

daylen wrote:
Sat Feb 01, 2020 1:13 pm
Let's see if I can make this more concrete. Suppose you already have implementation data for Bloom filters and brute-force checking (for testing if elements are in a set) that allows you to estimate the constraint curve above. Then you encounter a whole new algorithm that involves a cuckoo filter and use that estimate to predict what the error will be for composite calculations using this new algorithm. Now, obviously some algorithms are more efficient in seemingly every way than others, but what if it is possible to keep track of some other information, or to develop a NN that can predict this using features not yet thought of? I may just be confusing myself.
I've sort of convinced myself that the difference between Bloom and Cuckoo, and what allows the next step, is the ability to compare and modify the feedback. Cuckoo has memory (a list). Bloom does not. Sorry, I don't see where this is going ...

The Bateson constructor is simpler in the sense that it only uses the memory and functions of the previous step, so f_n(x, a_n) = f(f_{n-1}(x, a_{n-1})). The hard part is figuring out f() ... and whether a is a constant, a list, or a function.
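One hedged reading of that constructor in code, taking f as a fixed update rule applied level over level and a_n as a per-level parameter (the particular f and the a values are mine, chosen only to make the recursion concrete):

```python
def f(x, a):
    # A toy feedback step: damp x by a fraction a, i.e. each level
    # corrects part of the "error" left over from the level below it.
    return x * (1 - a)

def level_n(x, params):
    # Bateson-style recursion: level n applies f to level n-1's output.
    # params = [a_1, a_2, ..., a_n]; an empty list is the raw input x.
    if not params:
        return x
    return f(level_n(x, params[:-1]), params[-1])

# Three levels of feedback, each removing half the remaining error:
residual = level_n(100.0, [0.5, 0.5, 0.5])
assert abs(residual - 12.5) < 1e-9  # 100 * 0.5**3
```

The hard part jacob names survives intact in the sketch: everything interesting hides inside the choice of f and whether each a_n is a constant, a list, or itself a function of the level below.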

daylen
Posts: 2646
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Daylen's Instinctual Dump

Post by daylen »

jacob wrote:
Sat Feb 01, 2020 3:53 pm
Bloom does not. Sorry, I don't see where this is going ...
Ignore that part, ha. I figured out my confusion there. In my most recent post, I am just wondering if it is a worthy objective to go through the work of developing an algorithm with an overly complex topology that can provide humans feedback on how it thinks they communicate/think as opposed to a model humans can actually use themselves.
Last edited by daylen on Sat Feb 01, 2020 4:06 pm, edited 2 times in total.

daylen
Posts: 2646
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Daylen's Instinctual Dump

Post by daylen »

Also, at any time feel free to ignore me and let me figure it out myself. I am a bit all over the place, and I know this is exhausting for others.

daylen
Posts: 2646
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Daylen's Instinctual Dump

Post by daylen »

That sounded a bit condescending. :(

What I meant is that my thoughts go through phases and during a (Ti<->Ne) phase I just generate a bunch of bullshit that is not likely to be interpreted accurately by others. Sometimes it is best to wait until after a (Si<->Ni) phase has passed to communicate with me because by then I will have corrected my errors. Also note that during a (Ti<->Ne) phase, I reread and edit my posts obsessively and this can add to the confusion.

I am not nearly as good as 7w5 at controlling Fe while brainstorming (or in general). I should probably work on this too. It also did not help that I was amped up on caffeine.

Anyway, I just went on a walk, so I will try to explain what was going through my head a little better. I was essentially ignoring the details of bloom filters and cuckoo filters entirely. I was trying to say that if you have used algorithm A and algorithm B to solve a particular problem, then it may be possible to infer how another algorithm C will perform without using it. Perhaps by using information other than space complexity, time complexity, or uncertainty.

After that I realized that none of that mattered because it would still be at the previous level (calculation) anyway. Then I jumped to a whole new approach that ignored everything I said prior (i.e. working backwards from graphs). At this point, you rightfully noted that I was getting a bit carried away from the details.

To which I responded that perhaps it is possible to develop an algorithm that is not fully understood by humans but that could still provide feedback to them. This is currently a major problem with some learning algorithms (especially deep neural nets), but at the time I was ignoring this bit of information. I was off in cortical land somewhere.

Hopefully, this clears things up a bit.
Last edited by daylen on Sat Feb 01, 2020 11:11 pm, edited 4 times in total.

daylen
Posts: 2646
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Daylen's Instinctual Dump

Post by daylen »

daylen wrote:
Sat Feb 01, 2020 5:22 pm
Sometimes it is best to wait until after a (Si<->Ni) phase has passed to communicate with me because by then I will have corrected my errors.
There must be some level of Fe above this that would not presume that others should change because of my communication flaws. Granted, Fe is a whole lot easier in person.

I am learning a lot from this journal.

daylen
Posts: 2646
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: Daylen's Instinctual Dump

Post by daylen »

Dropping the game creation objective. Too much work for not enough payoff. Call it a Te awakening. :)

I reread the gauging-mastery section in the ERE book, and I have a question about CCCCCC. At what level would you say an agent would purposely inject randomness into their decision-making process? It seems it would be rare for this to occur before the coordination stage, and it becomes a significant element of the creation stage.

It is fascinating just how many algorithms rely on pseudo random number generation to perform well. Mainly this arises in instances where some data source is too large for direct use and must be sampled uniformly or normally. Could this be a defining characteristic of coordination and/or creation?

May also be related to an increased tolerance for ambiguity.
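A classic instance of the "data source too large for direct use" case is reservoir sampling (Vitter's Algorithm R), which keeps a uniform sample of size k from a stream you can only see once. A short sketch:

```python
import random

def reservoir_sample(stream, k, rng=None):
    """Uniform sample of k items from a stream of unknown length."""
    rng = rng or random.Random()
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)  # fill the reservoir first
        else:
            # Replace a reservoir slot with probability k/(i+1),
            # which keeps every item seen so far equally likely.
            j = rng.randrange(i + 1)
            if j < k:
                reservoir[j] = item
    return reservoir

sample = reservoir_sample(range(10_000), 5, random.Random(42))
assert len(sample) == 5
assert all(0 <= x < 10_000 for x in sample)
```

Without the pseudo-random draw there is no way to guarantee a uniform sample in one pass, which fits the idea that deliberately injected randomness is doing real coordinative work rather than being noise.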
