@bsog -
Part I: I think it's highly unlikely that human brains work in a frequentist manner, in the sense of sampling randomly, remembering all the samples, determining the distribution, and then engaging in some kind of maximum likelihood estimation to calculate a score and confidence interval. It's simply too hard to maintain a production system this way. Think of the memory requirements alone.
Now it can be done on paper, and once done, specialists can stick to memorizing the few key features of the "letter"-distribution, e.g. the mean and the width or whatever ... and then put everything else in that context, perhaps intuitively estimating whether a large "letter"-score is highly unusual.
I suspect that's where many well-educated/intellectualized people are. This would explain why experts are notoriously bad at dealing with Black Swans or out-of-context samples. In short, the traditional specialist expert gets his distribution parameters from outside (via some academic paper or a spreadsheet) and compares personal observations to those.
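To make that concrete, here's a minimal sketch (my own construction, with made-up numbers) of the "parameters from outside" pattern: the specialist memorizes a published mean and width and scores each personal observation against them, never questioning the parameters themselves.

```python
# Hypothetical illustration: a "specialist" who only memorized distribution
# parameters from the literature and scores observations against them.
# All numbers here are invented.
PUBLISHED_MEAN = 100.0   # assumed value from "some academic paper"
PUBLISHED_STD = 15.0     # assumed value

def specialist_score(observation):
    """Z-score of an observation against the memorized parameters."""
    return (observation - PUBLISHED_MEAN) / PUBLISHED_STD

# Routine observations look fine...
print(specialist_score(110))   # ≈ 0.67, "nothing unusual"
# ...but a Black Swan is merely flagged as "many sigmas out"; the memorized
# parameters themselves are never questioned.
print(specialist_score(400))   # 20.0, off the memorized map
```

The point of the sketch is that everything outside the two memorized numbers has been discarded, so an out-of-context sample can only be graded, not understood.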
The Bayesian approach can operate with much less data input. It allows for structure (to impose self-consistency). It naturally forms experiential rather than experimental knowledge.
I think most people are most likely to use some kind of Bayesian approach. The real difference is in how much experiential knowledge is retained and how much self-consistency (e.g. critical thinking) is imposed. Here, I'd say the average answer is: "not much".
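For illustration, a minimal sketch (my construction, not from any source) of how little a Bayesian update needs to store: a conjugate Beta-Bernoulli update keeps just two counters, with no sample history at all.

```python
# Beta-Bernoulli conjugate update: the entire "memory" is two numbers.
def update(alpha, beta, observed_x):
    """Increment the counter for whichever outcome was observed."""
    return (alpha + 1, beta) if observed_x else (alpha, beta + 1)

alpha, beta = 1, 1            # uniform prior: equal a priori probabilities
for obs in [True, True, False, True]:
    alpha, beta = update(alpha, beta, obs)

print(alpha / (alpha + beta))  # posterior mean after just 4 observations: 4/6 ≈ 0.67
```

Contrast this with the frequentist caricature above: no raw samples are retained, yet the estimate sharpens with every observation.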
The simplest reaction pattern is a zeroth-order moving average filter. I see this a lot. In non-statistical terms, this would be a person saying things like "You picked X yesterday, so I expected you'd pick X again today".
It's possible to explain ignorance by combining the moving average filter with a simple (structureless) Bayesian approach. If the filter is of very low order, it does not remember a lot. This means it has to fall back on the principle of equal a priori probabilities. This leads to the "this changes everything" reaction (a type I error) every time a new study comes out (or a snowball is observed in the backyard or the senate). Basically, if the filter has a long memory, a single new observation that's different from the previous ones wouldn't change much, but without memory there's no context, and thus each new observation "changes everything".
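A toy demonstration (my own, with invented numbers) of the memory effect: the same outlier observation completely dominates a memoryless filter but barely moves one with a long window.

```python
# Moving average filter with a configurable memory (window length).
def moving_average(history, window):
    recent = history[-window:]          # only the last `window` samples are remembered
    return sum(recent) / len(recent)

history = [10.0] * 50                   # fifty consistent past observations
history.append(100.0)                   # one new, very different observation

print(moving_average(history, window=1))    # 100.0 -- "this changes everything"
print(moving_average(history, window=50))   # 11.8  -- context absorbs the outlier
```

With window=1 the filter has no context, so every observation is "everything"; with a long window the outlier is just one data point among many.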
No memory/no context is a kind of ignorance (call it layman-ignorance) that leads to type I errors. An instrumental analogy would be a highly imprecise measurement. (The expected variation is so large that any sampled mean is pointless.)
Specialist-ignorance is a much tougher and more nefarious problem because it leads to type II errors. A seemingly precise but very inaccurate instrument. (The expected variation is small but the sampled mean is way out of whack.)
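The two instrument analogies can be simulated directly; this is my own sketch with invented numbers, pitting an unbiased-but-noisy instrument against a precise-but-biased one.

```python
import random

random.seed(0)

TRUE_VALUE = 10.0  # the quantity both "instruments" are trying to measure

def layman_reading():
    # Layman-ignorance: unbiased but hugely imprecise (type I territory).
    return random.gauss(TRUE_VALUE, 20.0)

def specialist_reading():
    # Specialist-ignorance: very precise but badly inaccurate (type II).
    return random.gauss(TRUE_VALUE + 15.0, 0.5)

layman = [layman_reading() for _ in range(1000)]
specialist = [specialist_reading() for _ in range(1000)]

print(sum(layman) / 1000)       # hovers near 10, but any single reading is useless
print(sum(specialist) / 1000)   # near 25: confidently and consistently wrong
```

The specialist's readings agree beautifully with each other, which is exactly what makes the systematic error so hard to detect from inside.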
In the filter/Bayes sense, this comes about from having a network that is limited (in breadth) (though not limited in height like the layman) but has a very long memory.
My favorite example here is an educated denialist using the Beer-Lambert law to show mathematically and in great detail why CO2 is irrelevant as a greenhouse gas. Typically, people making the Beer-Lambert argument have a background in chemistry, laser physics, or some other strong experience with test-tube gases. There's one key aspect/simplifying assumption that holds in test tubes (and lasers) but not in a real-life atmosphere(*).
Absorbed radiation gets reradiated and doesn't leave the system the way it does in the test tube. This leads to an entirely different conclusion.
(*) Furthermore, the math is simple enough to understand using high school calculus, which means that EVERYBODY with a STEM degree still remembers enough to find it convincing. Only someone with an undergraduate+ background in astronomy or climate science will see the problem. I'm not familiar with what they teach meteorologists, but I presume that radiation is not treated dynamically in a weather model.
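For reference, the single-pass argument looks like this. A sketch with an invented absorption coefficient, not real CO2 values: Beer-Lambert attenuation saturates with concentration, which is exactly why it sounds convincing in a test-tube setting.

```python
import math

# Single-pass Beer-Lambert attenuation: I/I0 = exp(-k * c * l).
# k and the concentrations below are invented for illustration only.
def transmitted_fraction(k, concentration, path_length=1.0):
    return math.exp(-k * concentration * path_length)

k = 1.0
for c in [1, 2, 4, 8]:
    # The absorbed fraction creeps toward 1: doubling the concentration
    # absorbs little extra in a single pass -- the "saturation" argument.
    print(c, 1 - transmitted_fraction(k, c))

# The hidden assumption: in a test tube the absorbed energy leaves the
# system. In an atmosphere it is re-radiated (partly back downward, and
# from higher, colder layers), so the single-pass conclusion doesn't
# carry over.
```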
Unlike layman-ignorance, which is a lack of data-memory/experience, specialist-ignorance is a lack of model-understanding.
This means they require different compensations.
A layman needs a lot of data context. Yes, you just read this one article; now read these other gazillion articles and stop paying the most attention to the most recent one you read.
An "expert" is harder to deal with because the following three things need to be communicated ...
1) You're wrong.
2) Not only are you wrong, but I know exactly why you're wrong.
3) You need to add/increase your understanding (take into account additional knowledge) and this will change your mind.
This is deep into Poe's Law + Dunning-Kruger territory because our antagonist thinks he's an expert whereas in reality he's not (DK). Furthermore, he's making what our protagonist thinks are basic mistakes which shouldn't happen with an expert (Poe's Law + underestimating the ignorance of the antagonist). So what we get is that our antagonist is approaching the contention as a debate whereas our protagonist sees it as an educational effort. This leads to lots of frustration.
Part II: It's definitely clear that e.g. it's practically impossible to be a working biologist(*) and a creationist at the same time. This is because using creationism as a foundation for doing work in biology is fractally useless for all practical work: it's useless on every conceivable level of biological research. I find the idea of fractal wrongness to be quite useful in terms of understanding the extent of ignorance. Biology is unique in the sense that the theory of evolution is so fundamental at all levels, from explaining genetics to species to genotype behavior. Conversely, you can do an enormous amount of work in economics despite having assumptions that are fundamentally flawed (like rational expectations), because a lot of the math cancels out idiosyncratic behavior, similar to how statistical physics can derive the macrophysical quantities of pressure and temperature without considering the behavior of every single particle in the gas. As such, classical economics is neither fractally useful nor fractally useless. It's only wrong or right in certain domains. It is when it's applied to the wrong domains that it becomes a type II problem. Of course we see plenty of this.
Do the most alarmed people have the most interconnected understanding?
It depends. For example, in the noughties, peak oil and climate change people still weren't communicating their respective understandings to each other. Climate scientists were projecting growing CO2 emissions into the far future using standard exponential assumptions. From the peak oil perspective, climate change was a complete nonissue (e.g. no consideration of permanent impairment of terminals or refineries from hurricanes). This is a case where ignorance cancels out: not being aware that your fears are alleviated by constraints outside of your domain.
[Footnote, when I joined/cofounded the sustainability nonprofit back in 2009, it was my "vision" to try to bridge these gaps, but unfortunately this was not a vision that many of the others shared, so I left after a couple of years.]
On the other hand, we're seeing a lot of "it's not too late but we have to act now" coming from the science community. This has been going on for years. I suppose the intended effect on politicians is "we're scientists, and we think that if we say it's too late, you politicians won't do anything", whereas the actual effect is the story of the boy who cried wolf. So the result is that the goal posts are moving instead. E.g. we've crossed the 1C threshold, so now we're looking at 2C without even mentioning 1C. Unless political trends change, we'll hit 3C before the end of the 21st century ... but everybody still talks about 2C ... This is a case where mutual ignorance compounds the problem, i.e., each side should be more worried than they actually are ... or appear to be ...
There's often a BIIIIIIG difference between institutions and the people who work there. You can have some very smart people working in some very dumb-sounding institutions. More importantly, institutions are often required to stick to the "party line". Worse, people working at institutions may not be allowed to comment. You can also have some dumb people working for "smart" institutions, by which I mean that the aggregate of a bunch of dumb decisions simply happens to look smart almost randomly. If you can't decide whether an apparent "strategy" is "evil or incompetent", this is probably the case (there wasn't really any well-formed strategy to begin with).
Are well-developed theories all consistent and converging over time?
Theories come in three different flavors:
1) Scientific theories are converging and consistent by construction. There's a bunch of metaphysical reasons for this but I don't think we need to go there.
2) Human theories (individuals) are scattered all over the place and consistent over time, but they never converge. I'm talking about human psychology. Human minds have not changed "significantly" over tens of thousands of years. This is why we can still learn a lot from reading stories that are a few thousand years old, and probably from older ones had more writings existed/been preserved. However, since not all of our minds are the same, it's also something that each and every person must personally learn. It's not like science, where we can all learn the same things because the natural world exists in objective reality. The mind is a problem of subjective reality.
3) Social theories (institutions, herd behavior). These are somewhat consistent but NOT converging. The problem here is that there's too much to learn and that the past won't be repeated in the future. Another problem is that these theories are reflexive. They're being expanded/changed over time and respond to this change.
... so let's deal with them one at a time and then all of them together.
1) There are no opposing theories that are developed to the same degree. At all! This is because of the requirement and possibility of testing. Basically, both theories have to agree with the third factor, which is experimental reality. And if they both agree, then first, they aren't really opposing, and second, they can almost always be shown to be mathematically equivalent, e.g. like the Schrödinger formulation and the Heisenberg formulation were.
(Where/when testing is unavailable, like string theory, you can have several theories that are well-developed and opposing ... but if you can't test, are you still doing science? It's actually a testament to how strong our scientific understanding of the world is that we can only resolve oppositions by spending megadollars on, for example, the LHC.)
2) Ha! All the time. Because my mind isn't the same as your mind.
3) It's my experience that when anyone with a well-developed theory decides to broaden it, they will usually stop opposing for no other reason than that they recognize the limitations. Unless they're in service of some job and bound to represent some particular policy. But in general, when it comes to limited/reflexive theory, broadness tends to limit opposition as people become less attached to their pet theory. "To understand your enemy is to accept him."
Example: The two competing classical economics theories are Austrian and Keynesian. Anyone who only knows one will fiercely defend it against anyone who only knows the other one. Anyone who knows both will either play Devil's Advocate against either one of the former or hopefully recognize another "dual" and instead debate which is more useful for a given situation on the meta-level.
Now all of them together.
1+3) Any social theory that's not rooted in reality will fail in a very predictable manner. This is because reality is a boundary condition (fixed parameter) and social trends are slow. This makes for a linear/nonchaotic prediction. For the same reason a social theory that is rooted in reality will evolve in a predictable manner.
2+3) Without breadth, individual preferences for social theories are VERY MUCH colored by their personal temperament. For example, a person whose neurochemistry makes for a sunny disposition tends to prefer optimistic social theories.
1+2) N/A. This is why science is instrumentalist and not based on personal revelation.
In terms of the fraction of broad&complex voices ... are you counting theories or people holding them? Let's say we have a 1+3 theory opposing a 3 theory. Obviously 1+3 is broader than 3 by construction. Initially, it will therefore only be held by a few. Over time, due to the convergence, it becomes easier and easier to see that 1+3 is true and that 3 is false (presuming that they're opposed). There is usually definite herd behavior here (also a large number, media especially, who just swing whichever way the wind is blowing). The fraction therefore tends to go from ~0% to ~100%, or very low to very high.
The standard progression is very normal:
1) First they ignore you.
2) Then they laugh at you.
3) Then they fight you.
4) Then you win.
(Happened with ERE too.)
The fraction of voices presenting the original complex argument will always be small(+). It will either be discovered independently or only adopted by the few who can hold it (it's a lot of work to grasp complex and broad arguments because the required foundation is so large). It can be magnified by simple repetition/echo-chamber, but what mostly happens once you start "winning" is that people will drill down and pick the subargument compatible with their pet scapegoat. Instead of asking what the fraction is (there's that sneaky frequentism again), it's perhaps more useful to ask whether you have enough self-consistent "data" to have your probabilities fully converged (Bernstein-von Mises), i.e. the/any new information didn't surprise you. Basically, if by talking to more and more people you have reached a state of "I already figured that", you're done. WRT shale, I currently don't know where to go to learn anything new, and it's been like that for a couple of years [for this question]. That's not to say it won't happen. I just don't know what I don't already know. Currently, the only ones hanging on to "the peak is just a temporary blip" are the official oil industry and the permabulls/techno-optimists.
(+) Furthermore, it strangely seems that it's usually the same people again and again.
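On the "fully converged" criterion, here's a minimal sketch (my own construction, with illustrative numbers) of why new data eventually stops being surprising: with a Beta-Bernoulli posterior, the shift caused by one more confirming observation shrinks toward zero as evidence accumulates.

```python
# How much does one more confirming observation move a Beta posterior mean?
def posterior_shift(alpha, beta):
    before = alpha / (alpha + beta)
    after = (alpha + 1) / (alpha + beta + 1)
    return abs(after - before)

print(posterior_shift(3, 2))       # early on: a new data point shifts things noticeably
print(posterior_shift(300, 200))   # after lots of data: essentially no surprise left
```

Once the shift per observation is negligible, talking to more people yields "I already figured that", which is the convergence state described above.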