A Model of Cognition

daylen
Posts: 2528
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

A Model of Cognition

Post by daylen »

For a while now I have been trying to understand and describe how people think and act in the world. Here is a framework that I have been working on to do just that. I should be clear that my intention is not to measure human cognition, but to create an internally consistent way of describing it that does not require much memorization on my part. I figured I would post it here in case it gives people insight somehow. Part of my strategy is to use common ideas with visual components to represent different aspects of cognition. Shapes and colors are some of the primary concepts I like to play around with, and they allow me to represent and simulate complex social situations using a relatively small amount of working memory.


Image
This first image presents the shapes I will be working with. A dot corresponds to a representation / language / symbol / abstraction. Dots can be connected to other dots or shapes to form a network. Dots can also be joined to form a line that represents a dimension of difference. A dimension may also have an associated unit of measurement. The ends of a line can be joined to form a circle which represents a system. In my model, systems are defined by their boundary (to distinguish them from agents and maps). A triangle represents an agent that makes decisions and communicates desires (all humans are included and potentially some other organisms). Finally, a square represents a map of some underlying territory. Maps are bounded representations of some larger territory (a map is not the territory mapped).

The bottom of the page goes into a few more details that are elaborated on in the second image. Systems often have cycles or feedback mechanisms. Agents have an awareness that is dependent on how the self distinguishes between signal and noise. The self can also be thought of as the ego or bias. If any side of the triangle is removed, then the agent ceases to exist. This lines up well with my intuition that "ego dissolution" is a temporary moment where time becomes meaningless and you become one with the universe (often associated with spirituality or psychedelics). A map must specify a coordinate system or have landmarks for the concept of position to make sense.


Image
This image looks in more detail at what happens when two of the same shape are considered simultaneously. Two systems may become coupled such that their individual identification is not guaranteed, and the two systems must be understood as a whole before either one can be understood as an individual (there is an analogy in quantum physics where two particles become entangled). This also relates to the concepts of "interaction" and "similarity / difference". The two triangles represent subjective and objective reality; a simple (but not complete) way of distinguishing them is that objective reality can be measured, and other agents can agree on the measure consistently across time. By extension, subjective reality is everything else (like pain, desire, utility, signal/noise). Subjects attend to objects and agree upon form with other agents. Once the form is agreed upon, measures of growth / decay, movement / stagnation, loud / quiet, cold / hot, and so forth can be made. The two squares represent two adjacent maps, where one could be thought of as representing known territory and the other unknown territory. Part of the reason I associate a square with maps is that a square maximizes the ratio of area to perimeter, and the purpose of a map is usually to display as many things as possible while minimizing edge distortion (and preserving the ability to scale). The underlying territory could potentially be unbounded.


Image
The last image relates all of this to the cognitive processes from MBTI. I am more interested in representing thoughts than in typing agents; typing has some statistical validity, but it is less useful to me than being able to visualize an abstraction of how agents act in any particular situation (or to construct working definitions of complex social phenomena). The perceiving functions are paired into (Se,Ni) and (Si,Ne), and the judgement functions are paired into (Te,Fi) and (Ti,Fe). I have defined each of these in my own way as follows. (Se,Ni) is a loop of exchange between many objects (Se) and a single intuition for how those objects interact (Ni). (Si,Ne) is a loop of exchange between many intuitions (Ne) and an impression of what a single object can do (Si). (Te,Fi) is a loop of exchange between choosing a map of the territory (Fi) and a single agreed-upon description of many maps (Te). (Ti,Fe) is a loop of exchange between choosing a description of a map (Ti) and a single map with many agents describing it (Fe). There are also one-word descriptions of each function in the image.

The lower section is something I am still thinking about, but the basic idea is that each introverted function is like a loop of fixed values surrounded by a blob of potential values that is revealed by the extroverted functions. Each angle of difference could represent a different dimension (e.g. how an impression of a hammer is related to all the ways it is actually used by other agents in the territory). The possible interpretations are endless.

I hope this was interesting to someone. Maybe someday I will describe how I use color.
Last edited by daylen on Sat Mar 23, 2019 11:32 pm, edited 3 times in total.

daylen
Posts: 2528
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: A Model of Cognition

Post by daylen »

A map can contain the positions of objects, systems, or agents. Sometimes I like to differentiate between objects and systems as follows: objects have representations that the majority of the agents in the territory agree on, so often they do not need to be defined when conversing in that territory. Systems need more elaboration before a set of agents can converse meaningfully about them. An object, system, or agent may be seen as either an obstacle or a tool in a particular map for a particular agent or set of agreeing agents. A continuous line on a map between an agent and some other position may be called a path. A path could be described by how a set of objects should interact conditionally (Se,Ni or algorithmic thinking). The perceiving function loops do not actually choose paths or descriptions; this is what the judgement function loops are for. My definition of agency may be thought to assume partial free will. In some interpretations, representing a company or city as an agent may be useful, because such systems often behave like organisms. Universalizing agency can lead to moral paradoxes, so such an interpretation should be limited.

The (Fi,Te) loop searches for maps based on the desires of an individual agent; this loop is responsible for the emergence of price in an economic system. The (Fe,Ti) loop searches for descriptions based on the collective desires (or values) of agents in the territory; in some sense, this is an approximation of price. New maps and their descriptions can lead to the discovery / construction of new territories, paths, obstacles, and/or tools. The line between a map and its description can become blurred, so it may be useful to think of a map as the collective perception of agents in a particular situation and a description as any individual perception that can be communicated in other maps (perhaps limited to vocal or written communication). Another interpretation is that a description can be recorded so that agents not in the territory being mapped can gain an impression of that map. So a map is basically a situation or experience that does not perfectly represent the underlying objective reality and cannot be described completely.

I may have contradicted myself, and I have certainly assumed prior definitions based on my own bias. This is fine, because the model is meant to be fluid and evolving, and I may update it as I see fit. The important criterion for me is that it can be used to generate many interpretations of reality. It could be used as a prototype for a more consistent and context-dependent optimization problem, for instance.

daylen
Posts: 2528
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: A Model of Cognition

Post by daylen »

Update: Slightly revised the definition of (Ti,Fe) and fixed a few errors.

Jason

Re: A Model of Cognition

Post by Jason »

I always figured you were up to some type of da Vinci shit in your spare time.

7Wannabe5
Posts: 9370
Joined: Fri Oct 18, 2013 9:03 am

Re: A Model of Cognition

Post by 7Wannabe5 »

Very interesting set of Tinkertoys. A few random thoughts I had: some agents might be non-living, a human is a living system as well as an agent, and homeostasis vs. dynamic equilibrium vs. mechanical control might be relevant.

So, for instance, the set of decisions I can make about my internal body temperature vs. the weather vs. the setting on the thermostat. Also, the difference between resolving to sell a specific equity if it hits $120 vs. resolving to leave my husband if he has another affair: at what juncture does a resolution become a decision from the perspective of other agents?

How would you use your model to analyze an event such as:

One human is wading in a swamp. Another human is in a boat with the first human's dog when he perceives a very large alligator heading towards the first human. He decides to throw the dog in the water to distract the alligator.

daylen
Posts: 2528
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: A Model of Cognition

Post by daylen »

The model is agnostic about what agents are, so this depends on how the user selects from an infinite superposition of maps. Homeostasis is not a bad model in many situations, but it can conflate cities (and other such complex systems) with agents. The model allows this, but I also set forth another criterion that attempts to avoid certain moral paradoxes (*). It appears to me that systems are more likely to be considered agents if they make decisions according to their desires/preferences and attempt to express them. The second part is key, because it discounts interpretations where cities are considered to have agency. It may be argued that a city communicates, but if the majority of the agents in a territory cannot agree on what it is saying, then this is not persistent. Ultimately, persistence is the only measure of existence. A dog may potentially be an agent in some maps, because some agents can communicate reliably with some dogs.

The model is not meant to be a formal system that follows from a set of agreeable axioms (**); therefore, my attempt to analyze your scenario is not exact. Essentially, the principle needed here is superposition.

Image

The picture is a little hard to see; I need to work on taking better pictures. There are four periods in time arranged along the vertical dimension. The first three are in superposition, where the dog is both a system and an agent. I bet you can figure out which friend has the boat :P. In the last image the superposition collapses into a single reality, because the dog is no longer living. The cloud-like figure surrounding everything is the territory (i.e. the swamp). As you can see, the maps are separate at first, when the agents are not yet aware of each other. The two individual maps remain as they approach a shared reference frame, but a single larger map emerges from the two smaller maps, and this larger one takes priority in my interpretation.

After the event, both agents may try to describe what happened to a shared friend. There is likely very little confusion about what happened (Se,Ni), but their impressions may be different. To the dog owner, the dog was more than just a living system that got thrown out of a boat to be consumed by an alligator; the owner was also coupled to the dog in multiple other territories with multiple shared mappings (Si,Ne). When each of these agents describes the map to their shared friend, they may slip in value judgements about what ought to have happened (Fi,Te), but the receptive friend may be using (Fe,Ti) to select from multiple descriptions that may reflect multiple interpretations of how the situation should have played out. Perhaps the agent in the boat made a faulty judgement; for instance, maybe the alligator was immobilized from a cold night and was just trying to warm up until free food landed in front of its nose.


(*) By some measure, a state maintains its own form of homeostasis via a variety of feedback mechanisms; does/should this mean that states are obligated to act a certain way, or that states are culpable for certain actions? Perhaps, but often this form of thinking does not get anywhere. There are a variety of different holistic measures for complex systems (though they are all very similar by necessity); the goal is typically to find something that is constant across time for an isolated system (like energy).

(**) Mathematics already does this quite well.
Last edited by daylen on Mon Mar 25, 2019 11:29 pm, edited 6 times in total.

daylen
Posts: 2528
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: A Model of Cognition

Post by daylen »

I tend to use (Ti,Fe) a lot, so I will use this loop to elaborate a bit on why these cognitive functions need to be coupled together. First off, I do not think that an aspect or function considered in isolation has much meaning. When describing complex systems, functional relationships are required for dynamic interpretation (this is related to kegan5). I spend a decent amount of time observing the (Te,Fi) process, because this gives me an idea of what other humans desire to pay attention to and how they describe the associated maps. This helps when I prioritize certain descriptions over others, because otherwise I would be dependent on my own desires to choose between maps. The problem is that I was predisposed towards not knowing which maps I desire. Spending time on ERE helps me formulate my own description of a map that is less dependent on my own desires. Everyone must use each loop, but specializing in one perceiving loop and one judgement loop can be efficient early on.

A few more notes:
Story-like memory, nostalgia, and metaphor are emergent from (Si,Ne).
Muscle memory, flow, and future awareness are emergent from (Se,Ni).
Prices and morals are emergent from (Fi,Te).
Symbols and ethics are emergent from (Fe,Ti).

Moral frameworks are subjective rule-sets that guide behavior in any territory. Ethical frameworks are objective rule-sets that guide the behavior of a population in a specific territory.
Last edited by daylen on Mon Mar 25, 2019 7:12 pm, edited 1 time in total.

daylen
Posts: 2528
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: A Model of Cognition

Post by daylen »

Here is a chart with all of the types along with their preferred loops. Note that any agent may be acting as any type in some map, but one type may be dominant for some agent across many maps.

Red = (Se,Ni)
Green = (Si,Ne)
Blue = (Ti,Fe)
Gold = (Te,Fi)

Image

Sometimes I like to use large paper rolls, masking tape, and oil pastels when constructing a visual model. After a while, the imagery comes "alive" in my mind. By that I mean the base model hyperlinks to many potential uses until eventually I am left with an impression for its limitations (heavy Si,Ne).

7Wannabe5
Posts: 9370
Joined: Fri Oct 18, 2013 9:03 am

Re: A Model of Cognition

Post by 7Wannabe5 »

@daylen:

I was able to figure out which friend had the boat :D

I must admit that I am recently rather stuck thinking about cognition in terms of the embodied mind.
Image

So, I am kind of wondering whether or not an ISTJ human would be most like an alligator?

daylen
Posts: 2528
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: A Model of Cognition

Post by daylen »

That is an interesting diagram, and it shows some resemblance to how I defined an agent. I wouldn't be surprised if this is just a different interpretation emergent from similar cognitive origins. The autopoietic unit is essentially similarity across time; the other two categories represent time-dependent variables. Either the environment or the cognition could be considered a signal in my model (making the other one noise), and the autopoietic unit is what determines the distinction based on past replication. Whatever form is being replicated may or may not actually matter. In my model, the form must be agreed upon to be objective (this includes what is being reproduced, e.g. genes and memes).

The alligator may have some form of communication with other alligators, but I have very little intuition for how a typical alligator prioritizes loops. My model assumes the form of communication can be recognized between multiple agents before any typing of an individual agent can occur. The perceiving loops may even start to break down logically if no judgement or communication is possible. Perhaps this indicates that the alligator must exchange information with "something else" in order for perception to make sense? If this is the case, then kegan1 is simply not possible unless it represents a system without consciousness. From another angle, kegan1 seems like an attainable state for humans sometimes, but maybe this is just an illusion where the mind makes up a story to fill a gap?

This is paradoxical and fun. :D

Evolutionary theory is basically about the continuous deconstruction and reconstruction of two opposing hierarchies via dialog. One hierarchy is spatial / dynamical / economical / ecological, and the other is temporal / informational / reproductive / genealogical. The dialog, or production of evolutionary models, is just the noise with respect to my framework (what is ignored to create a clean, organized theory). What I find interesting is that many biologists assume that there is a direction to the spatial hierarchy which determines what is being selected for. This is basically a reductionist view where higher-order structures are emergent from the lower levels, but I am not sure this is necessarily true. This is very controversial, but higher-order structure may actually be partially selecting for lower-level structure. What if the universe cannot actually exist as a whole without satisfying some higher-order conditions? This is not testable, and it has been responsible for numerous debates throughout history. It seems that the only consistent view is to not make any assumptions regarding the direction of hierarchical selection. TL;DR Maybe emergence is only part of the story.

This relates to an age-old dichotomy in physics between form and substance, and it also relates to free will versus determinism; it is all a giant, metaphorical cluster-fuck.
Last edited by daylen on Mon Mar 25, 2019 9:26 pm, edited 3 times in total.

7Wannabe5
Posts: 9370
Joined: Fri Oct 18, 2013 9:03 am

Re: A Model of Cognition

Post by 7Wannabe5 »

daylen wrote: The autopoietic unit is essentially similarity across time
True, even though we say the caterpillar turns into a butterfly.
daylen wrote: The alligator may have some form of communication with other alligators, but I have very little intuition for how a typical alligator prioritizes loops. [...] From another angle, kegan1 seems like an attainable state for humans sometimes, but maybe this is just an illusion where the mind makes up a story to fill a gap?
From my perspective, there even exists communication between individual plants and supportive networks of fungi which promote mutually beneficial exchange of minerals and sugars. An alligator does not communicate with your conscious brain, but I would suggest that it certainly communicates with your unconscious brain through the mechanism of your optic sensorium and mammalian prey reaction. Many people who write about spending a good deal of time in the wilderness, even people who are very much not inclined towards "woo-woo" thinking, experience the sensation of more direct communication with other species, as in "the raven told me that a bad storm is coming." One adventurer wrote of having the odd experience of finding himself auto-magically assuming a dominant posture when challenged by a chimpanzee, with no foreknowledge of how or when to assume a dominant posture, so maybe that is a bit like Kegan 1? I have had similar reflections after a few occasions when I have auto-magically screamed.
daylen wrote: What I find interesting is that many biologists assume that there is a direction to the spatial hierarchy which determines what is being selected for.
I don't think any serious 21st century biologists believe this to be true. It's just difficult to explain anything without resorting to metaphors such as "the ladder of life." There are many well-known examples of devolution of species from an innate human values perspective, as in "this tiny blind creature evolved from a much larger creature that had eyes." Sight would seem to qualify as an emergent property, but it is unlikely to persist in a pitch-black environment, even if that environment has a greater total inflow of useful energy than a well-lit environment.

daylen
Posts: 2528
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: A Model of Cognition

Post by daylen »

Here is an update, or rather an upgrade. I changed a few core elements and expanded the framework to include more "applications". Consider this an elaboration of a comment I made earlier: "Try to think on a different time-scale where the memes you are accustomed to will feel cyclic and terminal. Try to engage in a higher-order ebb and flow where the lower-level boundaries dissolve into a boundless topological space from which a multitude of geodesics are possible." This model will probably not make you more money, but it might help you think a little differently. I believe I can explain this upgraded version in a more straightforward manner.

Image
Here is the core. Much of this is shared with the previous version. Abstractions are now linked directly to objects/systems, and selection is implicit. This model is now somewhat detached from MBTI (or maybe not). My primary concerns are clarity and ease of recognition.

Concrete Cycles
Do: objects <-> intuition
Link: perspectives <-> impression

Abstract Cycles
Describe: descriptions <-> system
Categorize: elements <-> category

The second section represents two different streams for each agent: the first is proactive and forms memories by subjecting bias to objective forms, and the second is reactive and focuses attention.

The third section is very similar to an earlier image.

Image
Here is something new that I have semi-consciously been developing on my long walks. This is heavily influenced by math and physics.

A topology describes how a set of points is connected (and, loosely speaking, how it is compacted). I am not going to get into the details, but the basic idea is that two spaces are topologically equivalent if one can be stretched, twisted, crumpled, or bent into the other without tearing or gluing. Essentially, the connections between points are preserved. Topology is fundamental to virtually all of mathematics.
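For anyone who wants the textbook version behind my hand-waving (these are the standard definitions, nothing specific to my model): a topology on a set X is a collection τ of "open" subsets of X such that

$$\emptyset,\ X \in \tau, \qquad \bigcup_{i \in I} U_i \in \tau, \qquad U_1 \cap \cdots \cap U_n \in \tau \quad \text{whenever every } U_i \in \tau,$$

and two spaces are topologically equivalent (homeomorphic) when

$$X \cong Y \iff \text{there exists a bijection } f : X \to Y \text{ with } f \text{ and } f^{-1} \text{ both continuous.}$$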

Imagine that you are going on a walk. Everything is connected into one big blob, but the objects you are paying attention to are holes or gaps in this blob. You cannot see into the trees, so this imposes a boundary on your topological reality. From time to time, your focus will shift and your topology will morph. Perhaps the wind causes the trees to shift and your topology shifts with it. Objects tend to have a curvy boundary, and the outside edge of your topology/territory is represented by a rectangular box. Moment by moment your attention is sampling the objective forms in this territory to infer their distribution. Other agents appear at the center of two symmetric cones; one of the cones looks externally at forms and the other looks internally at bias. Some agents are engaged in a flow where the cones flip to represent a clear path forward and backward.

The objects, agents, and even territories can be further abstracted into points forming a network topology. Does the topology form a small or large world? What is the average path length between two random nodes? What is the degree of clustering?
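Those questions can be made concrete with a small script. Here is a toy sketch (it assumes the networkx library, and the nodes/edges are made up for illustration, not part of the model itself):

import networkx as nx

# Toy abstraction: points are objects, agents, and territories; edges are their connections.
G = nx.Graph()
G.add_edges_from([
    ("wader", "dog"), ("wader", "swamp"), ("boater", "boat"),
    ("boater", "swamp"), ("dog", "boat"), ("alligator", "swamp"),
])

# Small world or large world? Average path length and clustering give a rough answer.
print(nx.average_shortest_path_length(G))  # typical separation between two random nodes
print(nx.average_clustering(G))            # degree of clustering among neighbors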

The distance between two points is not defined yet, but you may assign a metric to your topological space to form a metric space. Any number of metrics are possible, and what you choose may influence how you navigate the territory. If you want to get from A to B, then what space allows you to do that efficiently and reliably? By assigning a metric you can also do calculus by breaking an object down into an infinite number of identical components and integrating over them. If this is unwarranted, then you may use a coarse-grained approximation by breaking the object down into a finite number of components.
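And here is a minimal sketch of what I mean by choosing a metric and by coarse-graining (plain Python; the points and the path are made up for illustration):

import math

a, b = (0.0, 0.0), (3.0, 4.0)            # two positions on some map

def euclidean(p, q):
    # straight-line distance ("as the crow flies")
    return math.hypot(q[0] - p[0], q[1] - p[1])

def taxicab(p, q):
    # grid distance (what you actually walk on city blocks)
    return abs(q[0] - p[0]) + abs(q[1] - p[1])

print(euclidean(a, b), taxicab(a, b))     # 5.0 vs 7.0: same territory, different metric

# Coarse-grained "integration": approximate a curved path by a finite number of straight pieces.
curve = [(t / 10, math.sin(t / 10)) for t in range(11)]
length = sum(euclidean(p, q) for p, q in zip(curve, curve[1:]))
print(length)                             # approaches the true arc length as the pieces shrink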

Image
This page represents a few operations that go beyond topology. Just imagine how these operations can be employed to represent evolutionary processes. By overloading reality with basic geometry you can expand your working memory; music is also a great tool. Using your instincts and senses to encode higher-order abstractions allows for easier transcendence of your local space-time position. Entering a transcendent state can help stretch out your spatial and temporal discount functions. This leads to delayed gratification and an increase in empathy/humility.

On a large enough time-scale, all systems are either replicating their forms or decaying away into parts that emerge into new replicating forms. Existing systems are in a dance of interaction and separation. Smaller replicating forms are being transferred between larger systems to signal competitive or cooperative ends.

This is the dance of existence, and we all have a part to play.
Last edited by daylen on Wed Apr 10, 2019 6:55 am, edited 5 times in total.

daylen
Posts: 2528
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: A Model of Cognition

Post by daylen »

Image
So far I have only partially covered the geometric layer. I plan on adding at least four more layers to fully utilize the human sensorium. I would also like to develop the geometric layer further to include functions, groups, rings, categories, probabilities, manifolds, and so forth. Hopefully this will help people develop their mathematical intuition.

I spelled repetition wrong :|

Image
Any color in color-space can be represented by a linear combination of three primary color vectors. Red, green, and blue are the most common vectors that form a basis for this space. In the electromagnetic spectrum, each color can be associated with a particular wavelength. This leads me to think that color can represent three different subjective values or features to be optimized for on different time-cycles. Red is long-term production, green is mid-term cooperation, and blue is short-term pleasure.

These build on each other:

red + green = yellow = cooperative production
green + blue = cyan = pleasureful cooperation
blue + red = purple = pleasureful production

Production stabilizes relationships; relations extend pleasure. Cooperative production + pleasureful cooperation = firm; firm + pleasureful production = purpose.
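To make the linear-combination idea concrete, here is a tiny sketch (it assumes numpy; the mapping of colors to values is my own speculation, while the vector arithmetic is standard):

import numpy as np

red   = np.array([1.0, 0.0, 0.0])   # long-term production
green = np.array([0.0, 1.0, 0.0])   # mid-term cooperation
blue  = np.array([0.0, 0.0, 1.0])   # short-term pleasure

# Additive mixes of the basis vectors
yellow = red + green                 # cooperative production
cyan   = green + blue                # pleasureful cooperation
purple = blue + red                  # pleasureful production

# Any "value mix" is just a different weighting of the same basis
mix = 0.6 * red + 0.3 * green + 0.1 * blue
print(yellow, cyan, purple, mix)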

Production is the reconstruction of forms, and I consider both concrete and abstract forms to be products. Agents can cooperate with other agents and/or systems.
Last edited by daylen on Tue Apr 09, 2019 9:59 pm, edited 2 times in total.

daylen
Posts: 2528
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: A Model of Cognition

Post by daylen »

Image
Here is a rough attempt at explaining the taste, sound, and feeling layers.

Taste consumes or rejects objects/abstractions. This layer hints at the rate of growth/decay of an agent. It accounts for food and information diets. Smell can be combined with taste due to their high degree of coupling.

Sound represents repetition. The Fourier transform breaks down a signal into its frequency components. I do not know much about music theory, but I plan on learning it.
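As a concrete illustration of that decomposition, here is a short sketch (it assumes numpy; the two-tone signal is made up, while the transform itself is standard):

import numpy as np

rate = 1000                                    # samples per second
t = np.arange(0, 1, 1 / rate)                  # one second of "sound"
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)  # repetition at 5 Hz and 40 Hz

spectrum = np.fft.rfft(signal)                 # Fourier transform: signal -> frequency components
freqs = np.fft.rfftfreq(len(signal), 1 / rate)

# The two strongest components recover the repetition rates that built the signal.
top_two = freqs[np.argsort(np.abs(spectrum))[-2:]]
print(sorted(top_two))                         # approximately [5.0, 40.0]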

I may change my approach to the feeling layer, but this chart is just something that came to mind. I thought that the extroverted dimension could be combined with the (masculine, feminine) X (juvenile, mature) dimensions to represent different "modes" of operation. Later, I may link it to the more traditional "feelings" like anger, sadness, happiness, and so forth.

daylen
Posts: 2528
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: A Model of Cognition

Post by daylen »

This is starting to turn into a journal. I cannot think of a better place, so I am going to continue spewing out content in hopes that I am entertaining someone. :D

Image
Here I present cognitive diagrams. These are a natural extension of how agents leave behind a memory trail when proactive. The width of this trail can change, and it can also be partitioned into "events". The inside of the trail represents intuitions of Minkowski space(*) and the outside represents impressions of Minkowski space that form neural space. Minkowski space is populated with events, and neural space is populated with neural networks. The difference between intuitions and impressions in my model is something like the direct versus indirect perception of objective forms (i.e. did you do something with the objects or did you imagine what the objects would do based on past experience). Intuition is more automatic and is not likely to be "remembered" until you start doing it again.

Neural-nets are represented by fractal-like diagrams of circles attached to the events. Each circle may or may not have descriptions represented by dots. The events can be populated with games (objective rule-sets), players, spectators, tools, and obstacles.

A set can be formed by attaching many impressions to a single parent impression (**). Links/curves can be used to connect impressions representing metaphors. Links connect similar impressions between different events and may be accompanied by an arrow to imply direction.

(*) 3 space dimensions and 1 time dimension.

(**) I called sets categories earlier, but I do not want to overload the term category with the mathematical definition; I will probably use it later.
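For reference, what makes (*) Minkowski space rather than plain 4D Euclidean space is its interval (standard physics; the overall sign convention varies):

$$ds^2 = -c^2\,dt^2 + dx^2 + dy^2 + dz^2$$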

Image
This page has three different instances of cognitive diagrams. The first displays a few different dichotomies. The second is a crude Kegan diagram where intuition is widening at later levels; the top may be used to represent subjective impressions that are selective (biased), and the bottom may be used to represent objective impressions that span the space (unbiased). The third is just a zoomed in version that demonstrates cycles between low and high activity (or low and high memorability).

----------------

Side note: In my previous post, I am not sure that "up" and "down" properly fit to extroversion and introversion. I was thinking along the lines that feeling up is wanting to conquer earth and explore outer-space, and feeling down is wanting to become more synergistic with earth. The terms "up" and "down" may not give this impression to others, though.

intellectualpersuit
Posts: 55
Joined: Wed Feb 13, 2019 11:23 am

Re: A Model of Cognition

Post by intellectualpersuit »

You are entertaining me, and I like where this is going; I am beginning to see utility! I think keeping up the spewing is the right idea. Maybe I will try to use your model to model something, though I probably don't have a good enough understanding yet to be 100% correct unless I am representing a simple thing. I guess that you have always represented phenomena with diagrams; this model is largely a standardization of your modeling.

I thought I had some questions, but going over and over what you wrote and drew, you provided enough information for me to figure it out. I'm sure I'll have questions if I try to use it.

daylen
Posts: 2528
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: A Model of Cognition

Post by daylen »

@intellectualpersuit Glad you like it! Yeah, I have been doing this sort of thing for a few years now. The word standardization fits well.

7Wannabe5
Posts: 9370
Joined: Fri Oct 18, 2013 9:03 am

Re: A Model of Cognition

Post by 7Wannabe5 »

I appreciate your use of the masculine/feminine and juvenile/mature dichotomies, since I am familiar with this simple model, which is kind of a short-hand or file system for Jungian archetypes. In addition to the humors, I would associate respect/authority, appreciation/care, cherishment/vulnerability, and freedom/amusement with the 4 quadrants.

I am intrigued by your color-coded take on production/co-operation/pleasure. It immediately brought to my mind the example of working with my partner on our permaculture project. Why did you choose to make use of the term "firm" rather than "organization?"

Your generalization of "taste" is also very interesting, and IMO quite apt. Have you read "The Hungry Brain?" You might find the author's discussion of the relationship between "palatable" food and brain structures of some use.

daylen
Posts: 2528
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: A Model of Cognition

Post by daylen »

@7 I figured you would help with associations :). I will include them when I update my pictures. I also like organization better than firm; firm was just more intuitive to me at the time. I am always open to suggestions since this is very much a work in progress. I have not read The Hungry Brain but it sounds insightful.

daylen
Posts: 2528
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: A Model of Cognition

Post by daylen »

Image
Here are some alternative forms. The layers can be combined in an endless variety of configurations. Go crazy; get creative.

The first section compresses the entire history of two agents, a physical territory, and an abstract territory. Color is added to represent the value of specific impressions and links. Inside each square lives the intuition which is not readily available for expression. You could say that the edges are more conscious. The thickness of each link represents the strength of metaphorical connection (quantity and quality).

The second section is a form that highlights the past of a specified agent. This is on a grander time-scale, so the events are replaced with eras or periods.

The third section demonstrates a form for cyclic intuition where an agent gets stuck in a selective equilibrium. The inside can display subjective impressions, and the outside can display objective impressions, enabling relations, and/or defense mechanisms. This cyclic pattern would give emergence to a bounded territory in which the agent is trapped. This can happen globally or locally, where only a small aspect of an agent's life is cyclic. Cyclic intuition is not necessarily a bad thing locally, but it does hinder growth in that area.

Image
These diagrams represent flow and signal, which are very similar. The difference is that flow is associated with the taste layer and has a well-defined direction+form, while signal is associated with sound and has a well-defined frequency+intensity.

Image
This last image displays a more concrete example. It is a very general representation of my own cognitive diagram, which reflects the structure of my mind. The humanity territory includes written history. I have done some carpentry/woodworking and gardening in the past, but I plan on developing these areas in much greater depth. Cognition, or rather "meta-cognition", is one of the most pleasureful topics for me to think about. I may also do some writing, but this is a secondary concern for now.
