US shale oil peaks June 2015(?) ...

Intended for constructive conversations. Exhibits of polarizing tribalism will be deleted.
cmonkey
Posts: 1814
Joined: Mon Apr 21, 2014 11:56 am

Re: US shale oil peaks June 2015(?) ...

Post by cmonkey »

Noided wrote: then present a structured argument
I think the Post Carbon Institute has done the best analysis I have seen to date.

Even if it's not this year but in 3 or 5 or 10 years, the analysis is pretty conclusive that it will happen within our lifetimes. Same with crude oil production: production rates have peaked or will peak within our lifetimes. People who understand what peak oil is (the maximum achievable production rate), as opposed to people who only think they understand what it is (no more oil! ever!), are essentially in agreement on this. So who cares when?

Discussing the exact date is pretty much irrelevant given that conclusion. Taking action to minimize the effect it will have on your life should be the focus.

black_son_of_gray
Posts: 505
Joined: Fri Jan 02, 2015 7:39 pm

Re: US shale oil peaks June 2015(?) ...

Post by black_son_of_gray »

@ Jacob: Very interesting tie-in to statistics - outside of the lab, I've never heard anyone talk about this kind of stuff. I understand that the synthetic aspect of combining knowledge/understanding from other systems and areas is a tremendous aid in identifying important information. That said, with respect to eliminating Type II errors, there are some inherent difficulties. It'd be interesting to hear your (or anyone else's) thoughts.

From a purely statistical perspective, you are correct: the more you sample, the lower the probability of making a Type II error. However:
1) Effect size matters: If a terrorist attacks a major US city, the effect is huge, and everyone will know about it just by looking at any ONE news source. If the effect is smaller (or, the more difficult case, it is merely perceived to be small by the general population), then you need to do a lot more sampling. Unfortunately, it is exactly because the effect is small (or thought to be small) that there are often limited sources to sample from. And some of them are probably more 'fringe' sources that may exaggerate in the other direction - conspiracy theories, etc.
2) How do you determine appropriate cutoffs (statistical power)?: I guess if you read 100 articles a day, there is less concern for this - but for the person who doesn't have the time or energy to sample broadly or deeply, what kind of Type II error rate is acceptable? 50%? 20%? 1%? So let's say I read 10 sources of information that are fairly broad and fairly varied about a topic, and nothing really pops out to me - is my only valid conclusion that, while there might be important effects, the size of those effects is probably not huge? That is, I have only garnered the sensitivity to detect huge effects with my small sampling, not smaller effects?
3) What about the problem of induction, where you become more and more confident up until the point that you are completely wrong?
4) There is a huge amount of information out there, because Internet. The signal is in the weeds.
5) At least in the statistical world, it is known that having a high Type II error rate (low statistical power) perversely leads to overestimation of true effect sizes. So by not sampling enough, even if you correctly identify that something is really there, you are likely to think the effect is much bigger than it really is (a quick simulation of this is sketched just below).
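
A minimal simulation of point 5; the true effect, sample size, and alpha below are arbitrary toy choices, not anyone's real data:

```python
# Toy simulation: with low power, the "significant" results overestimate the true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.2          # small true difference (in SD units), arbitrary
n = 20                     # small samples -> low power
sims = 20_000

all_estimates, significant_estimates = [], []
for _ in range(sims):
    x = rng.normal(true_effect, 1.0, n)   # "treatment" readings
    y = rng.normal(0.0, 1.0, n)           # "control" readings
    _, p = stats.ttest_ind(x, y)
    est = x.mean() - y.mean()
    all_estimates.append(est)
    if p < 0.05 and est > 0:
        significant_estimates.append(est)

print(f"approx. power: {len(significant_estimates) / sims:.2f}")                       # roughly 0.1
print(f"mean estimate, all runs: {np.mean(all_estimates):.2f}")                        # ~0.2, unbiased
print(f"mean estimate, significant runs only: {np.mean(significant_estimates):.2f}")   # inflated, ~0.6-0.7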

To everyone else who isn't bizarrely revved up by statistical discussion, my apologies!

jacob
Site Admin
Posts: 15996
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77
Contact:

Re: US shale oil peaks June 2015(?) ...

Post by jacob »

@Noided - The point of the OP was to point out the data for those who already know the structured argument. If you don't know it, you can start here. It's not like it's a secret.

The reason I don't feel like writing it up is that complex and controversial issues like geopolitics, investing, climate change, or epidemiology for that matter almost always turn into a painful case of "Poe's Law meets Dunning-Kruger" when pursued in a dialectic manner. Conversely, I've given you a ton of leads to pursue on your own.

From the Analects:
I do not open up the truth to one who is not eager to get knowledge, nor help out any one who is not anxious to explain himself. When I have presented one corner of a subject to any one, and he cannot from it learn the other three, I do not repeat my lesson.

jacob
Site Admin
Posts: 15996
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77
Contact:

Re: US shale oil peaks June 2015(?) ...

Post by jacob »

@bsog - In the lab (under controlled circumstances) I tend towards frequentism but for real world heuristics I'm much more of a Bayesian (in the filtration sense) with little patience for frequentist wanking :-P . This allows me to cleverly escape most of the issues you point out ;-)

In the Bayesian sense, a type I error obtains when looking at new information ("evidence") (like a big terrorist attack) that should not change one's conclusion (posterior probability) but nevertheless does so. A type I error is an ignorance-error. An overreaction if you will---because you knew too little (of the right thing).

A type II error obtains from looking at new information that should change one's conclusion (like seeing a chasing T-Rex in the rear view mirror), but nevertheless does not. A type II error is a dogmatic-error. An underreaction because you knew too much (of the wrong thing).

Bayesian summary:
Type I error: Overreaction.
Type II error: Underreaction.

There are a bunch of points where I diverge from frequentism.

1) My sampling is NOT random. Frequentism requires that the sampling method is random, that is, we're equally likely to hit anyone in the population. As such, our ignorance should shrink like 1/sqrt(N), where N is the number of pieces of information we process. This is not very efficient. If our sampling is not random but we repeatedly hit the same subpopulation (e.g. we only listen to Fox News or MSNBC), our ignorance remains constant regardless of how many times we sample the same group. However, if our sampling is not random AND we have some idea of the actual population, we can sample systematically and thus get closer to a linear relation, so that our ignorance shrinks like 1/N, which is much better than what the central limit theorem offers, especially as N gets large. TL;DR Broadness is not achieved randomly but deliberately, based on understanding the population (and of course assuming that this understanding is correct).
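
A toy illustration of the sqrt(N)-vs-N point, assuming a made-up smooth "opinion profile" as the population; it compares random draws with an even, deliberate sweep of the same population:

```python
# Toy comparison: random sampling error shrinks ~1/sqrt(N); a deliberate sweep
# of the (made-up) population shrinks ~1/N here.
import numpy as np

rng = np.random.default_rng(1)
f = lambda u: np.exp(u)           # stand-in "opinion profile" over subgroups u in [0, 1)
true_mean = np.e - 1.0            # exact population average of exp(u) on [0, 1]

for n in (10, 100, 1000, 10000):
    # random: i.i.d. draws, averaged over 200 trials to get a stable error figure
    random_err = np.mean([abs(f(rng.random(n)).mean() - true_mean) for _ in range(200)])
    # deliberate: exactly one sample per equally sized subgroup
    sweep_err = abs(f(np.arange(n) / n).mean() - true_mean)
    print(f"N={n:6d}   random error ~ {random_err:.5f}   deliberate error ~ {sweep_err:.5f}")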

2) I don't form a conclusion based on resampling the population but from updating and/or confirming my priors. Those priors have been trained on a lot of previous information. Without this "training", one would start from a point of ignorance and need to assign equal a priori probabilities to all outcomes. That is, being clueless, one would expect new evidence to come out pro or con in 50/50 of the cases. E.g. if you had a coin and you didn't know whether it was loaded or not, it would be fair to assign 50% heads, 50% tails (assume that the pedantic edge landing is out of the question in the name of pedagogy). However, if you already knew that the coin was loaded to the 95%-heads level, your guess would be substantially different AND if you saw a tail event, your posterior probability wouldn't change much.
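
A sketch of that coin example with two hypotheses ("fair" at 50% heads, "loaded" at 95% heads); the prior values are illustrative only:

```python
# Two hypotheses about the coin: "fair" (P(heads)=0.5) and "loaded" (P(heads)=0.95).
def p_loaded_after(prior_loaded, flip):
    """Posterior probability that the coin is loaded after observing one flip."""
    like_loaded = 0.95 if flip == "H" else 0.05
    like_fair = 0.50
    evidence = like_loaded * prior_loaded + like_fair * (1 - prior_loaded)
    return like_loaded * prior_loaded / evidence

print(p_loaded_after(0.50, "T"))   # clueless prior: 0.50 -> ~0.09, one tail "changes everything"
print(p_loaded_after(0.99, "T"))   # trained prior:  0.99 -> ~0.91, the same tail barely moves it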

More practically, suppose we had a murder case with a collection of 300 separate and mutually reinforcing pieces of evidence pointing towards guilty. We would expect most new evidence to support the guilty verdict. However, someone not familiar with all the existing evidence should be inclined to stick with the principle of equal a priori probabilities and expect new evidence to be far less convincing.

The climate change "debate" provides a good example of how all new evidence "strangely" seems to point towards global warming, how most revisions also point in the same direction, and how this confuses many skeptics, who think that new/revised evidence ought to be more equitably distributed.

3) Bayesian updating makes it a lot easier to spot/form patterns in the training data. A collection of strong patterns becomes a theory (in the scientific sense). A good theory is self-consistent. Hence a good theory is a self-consistent explanation of patterns based on a lot of data. This means that an additional piece of data is unlikely to break the theory.

We're very fortunate to have found patterns that seem to persist and be in accordance with our senses. E.g. there is a thing called logic, and what's logical now is also logical tomorrow. The Bayesian belief is that posterior probabilities ultimately converge on the population distribution. Of course that may change, but hopefully it changes slower than the sampling rate---important point! See below ...

Because updating is based on discrete pieces of evidence, each new piece of evidence will almost always contain either a type I or a type II error. However, this also means that any single piece of evidence is unlikely to be catastrophic.

The problem of induction is solved by noting that contradictory data would slowly break the theory if new pieces kept coming up. This is how classical physics was eventually replaced. There were too many new pieces of evidence (the quantization of light, the constancy of the speed of light) that could not be reconciled with the previous theory.

The only problem here is if updating is too slow or stops altogether.

This is of course a widespread problem. Most people stop learning after exiting college.

Also, many are naturally inclined to seek support for their beliefs (the Fox News problem above) rather than seek to either:
A) Confront by random sampling.
B) Confront by deliberate/linear sampling.
C) Confront by opposite sampling (i.e. Popper).

I'd say these behavioral problems are the primary source of type II errors. It's not a problem of unknown-unknowns. It's a problem of unknown-knowns. Stuff a person doesn't know simply because they either didn't study hard enough or are too set in their dogma and only looked to confirm their bias.

The scientific method is, of course, specifically designed to avoid these human weaknesses.

4) The larger and more interconnected the theory, the harder it is to break and the more sure one can therefore be. Bayesian [learning] networks are like eco-systems. The more interconnected and the bigger they are, the more resilient they are, and the harder they are to wipe out.

What happens in a network is that a new piece of information/evidence is not just seen as a context-free piece of data. It affects multiple nodes, which again affect other nodes. Thus it might change some posteriors a lot but leave most others intact. Going back to the 9/11 example, this new evidence didn't change any of the nodes that had to do with understanding asymmetric warfare, the problem with empire, etc. It was old hat to people familiar with the concept of empire. Many Americans still don't see themselves that way (so they have 50/50 probs on those issues) and thus one incident changes a lot.

So each new observation filters through an entire network which is patterned as a theory. Many networks are similar, hence latticework theory.
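
A hand-rolled miniature of this, using inference by enumeration over a made-up chain A -> B -> C plus an unconnected node D; all probabilities are invented for illustration:

```python
# Evidence on C is propagated by brute-force enumeration over the joint distribution.
from itertools import product

p_a = {1: 0.3, 0: 0.7}                                       # prior on A
p_b_given_a = {1: {1: 0.8, 0: 0.2}, 0: {1: 0.1, 0: 0.9}}     # P(B | A)
p_c_given_b = {1: {1: 0.9, 0: 0.1}, 0: {1: 0.2, 0: 0.8}}     # P(C | B)
p_d = {1: 0.5, 0: 0.5}                                       # unconnected node

def joint(a, b, c, d):
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c] * p_d[d]

def posterior(index, c_observed=1):
    """P(variable at `index` = 1 | C = c_observed); variables ordered (A, B, C, D)."""
    num = den = 0.0
    for state in product((0, 1), repeat=4):
        if state[2] != c_observed:
            continue
        p = joint(*state)
        den += p
        if state[index] == 1:
            num += p
    return num / den

print("A:", round(posterior(0), 3))   # prior 0.30 -> ~0.55 (two hops from the evidence)
print("B:", round(posterior(1), 3))   # prior 0.31 -> ~0.67 (directly upstream, moves most)
print("D:", round(posterior(3), 3))   # prior 0.50 -> 0.50  (unconnected, left intact)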

5) Latticework is the presumption that patterns have existence beyond the data. This is something frequentism does not account for. Making that assumption makes it possible to expand the search space and look for unknown-unknowns because they have been found elsewhere. This often works, which says a lot about how the universe works.

6) I think frequentism is inherently weak because it's structurally simplistic. Unlike big data, classical statistics admits no room for structure to "clean"/"see" the data. This is because traditional classical stats was developed to make grand conclusions based on the very small sample sizes that were available. (Note how you need to assume highly idealized distributions for the MLEs ... where do you get those distributions from?! They're mostly just assumed.)

7) It's possible to find the signal in the weeds just like it's possible to find edible/desirable plants in a real patch of weeds. You don't have to process/chew/pay attention to each plant. You don't have to use each and every sample point. If a sample point looks like it doesn't have a signal in it, it can be quickly discarded. Again, there's a pattern to signals and a different pattern to noise.

8) The human mind is eminently capable of holding multiple and even self-contradictory positions at once. This means that it's possible to send the same piece of information through several different networks. This allows us to understand other people. One example is Russian propaganda vs NATO propaganda. First, one deliberately samples both sides (and a neutral side if available). Keep in mind that each side has different a prioris because their inherent dogma is different. You could then have a different network, based on historical conflicts, that is updated based on the subnetworks of Russia and NATO. And so on.

Short idea: Theory-empathy + meta-theory. The combination makes for a stronger theory.

TL;DR: The ability to use patterns in the data; to filter/clean data; to converge by updating; to run multiple filters; and to sample deliberately ... none of which is available to a frequentist ... makes it possible to deal effectively with type I and type II errors.

jacob
Site Admin
Posts: 15996
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77
Contact:

Re: US shale oil peaks June 2015(?) ...

Post by jacob »

@bsog - I think my first answer was far longer than it needed to be. Fundamentally, I see the origin of error as being

Type I: Bad/insufficient/oversimplified sampling/data set
Type II: Bad/insufficient/oversimplified modelling/understanding/thinking

Furthermore, I don't think one excludes the other (no excluded middle here): it's perfectly possible to commit both simultaneously. That they are slightly connected in classical statistics is an artifact of the theory, which I intuitively feel comes back to the central limit theorem (use enough data and you automatically impose a Gaussian structure(*) ... hence sufficient quantity will to some degree substitute for the inherent lack of quality (the structure of the distribution) by generating a Gaussian).

(*) Assume all outcomes are based on the sum of independent and small perturbations.
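
A quick numerical look at that footnote: averaging enough independent draws forces a Gaussian shape on the sample mean even when the raw data are heavily skewed; the distribution and sizes below are arbitrary choices.

```python
# Sample means of a heavily skewed variable (exponential, skewness 2.0) become
# Gaussian-shaped as the averaging size n grows.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
for n in (1, 5, 30, 200):
    means = rng.exponential(scale=1.0, size=(50_000, n)).mean(axis=1)
    # skewness of the mean of n i.i.d. exponentials is 2/sqrt(n), heading toward 0 (Gaussian)
    print(f"n={n:4d}   skewness of the sample mean: {stats.skew(means):.2f}")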

I don't think either one is inherently more difficult than the other. It's just that in our present world, it's a lot easier to access better data than it is to access better thinking. Weirdly though, it's a lot easier to propagate bad thinking than bad data. Imagine, though, if google wasn't an information-search engine but a model-search engine.

Acceptable error rates likely depend on personal tolerance and what specific problem is considered. Many function just fine despite having very high error rates due to the bubble/safe environment we live in, i.e. the cost of being wrong on practically anything is small. In that regard it's interesting to note that human brain size has been shrinking for the past 28,000 years. We're slowly turning into Eloi.

black_son_of_gray
Posts: 505
Joined: Fri Jan 02, 2015 7:39 pm

Re: US shale oil peaks June 2015(?) ...

Post by black_son_of_gray »

@ Jacob - Thanks for the thoughtful reply. Ahh yes, I forgot about Bayes! It really is telling that, while I have taken college statistics classes and work with statistics (fairly simple stuff) more or less daily, the only exposure I've really had to Bayesian statistics is Nate Silver's book, The Signal and the Noise.

In reading your responses, I began to think about how I frame various science-'deniers'* arguments. For example, often the objection raised to global warming or evolution or vaccines will be some single example ("But look at all the snow here!" or "But how do you explain this creature/gap/eyeball/molecular gizmo?" or "But I heard this one kid became autistic immediately after the shot!"), which can be viewed as an appeal to the problem of induction. But, as you point out, that argument really only makes sense using frequentist statistics. The Bayesian approach is intuitive and easy to understand, but perhaps just not the default for many well-educated or otherwise smart individuals. Maybe the most effective approach to combat 'denier' views is to emphasize the Bayesian logic strongly in schools or to preface arguments in a debate with that framework.

Tying this back into the OP topic - given your expertise and nonrandom sampling of various fields like carbon-based fuels / climate change, would you say that the people who are most alarmed about climate change or peak oil or (insert impending global-scale problem here) are generally the people with the most comprehensive, interconnected theories? I remember hearing in one of Dmitry Orlov's talks(?) that he thought the truth was probably somewhere between the most alarmed people and the most apathetic (bad, but neither horrible nor great). IOW, how many people or institutions do you know of that appear to have broad, well-developed theories that differ from or oppose yours? Would a meta-analysis of all the well-developed theories be consistent and converging over time (I would assume so) or scattered all over the place?** Regarding your statement (which I agree with), "it's a lot easier to access better data than it is to access better thinking", what fraction of the voices on shale oil peaking, for example, would you say are coming from broad, interlocking perspectives with better thinking?

*I hate that term. Even though I don't agree with 'deniers', I think it's dangerous to put everyone under the same broad label of 'willfully ignorant' despite there being a lot of diversity of thought. It also puts people on the defensive.
**Consider this a non-random sampling attempt to extract information from you.

jacob
Site Admin
Posts: 15996
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77
Contact:

Re: US shale oil peaks June 2015(?) ...

Post by jacob »

@bsog -

Part I: I think it's highly unlikely that human brains work in a frequentist manner in the sense of sampling randomly, remembering all samples, determining the distribution, and then engaging in some kind of maximum likelihood estimation to calculate some kind of score and confidence interval. It's simply too hard to maintain a production system this way. Think of the memory requirements alone.

Now, it can be done on paper, and once done, specialists can stick to memorizing the few key features of the "letter"-distribution, e.g. the mean and the width or whatever ... and then put everything else in that context, perhaps intuitively estimating whether a large "letter"-score is highly unusual.

I suspect that's where many well-educated/intellectualized people are. This would explain why experts are notoriously bad at dealing with Black Swans or out-of-context samples. In short, the traditional specialist expert will get his distribution parameters from outside (via some academic paper or a spreadsheet) and compare personal observations to that.

The Bayesian approach can operate with much less data input. It allows for structure (to impose self-consistency). It naturally forms experiential rather than experimental knowledge.

I think most people are most likely to use some kind of Bayesian approach. The real difference is in how much experiential knowledge is retained and how much self-consistency (e.g. critical thinking) is imposed. Here, I'd say the average answer is: "not much".

The simplest reaction pattern is a zeroth-order moving average filter. I see this a lot. In non-statistical terms, this is a person saying things like "You picked X yesterday, so I expected you'd pick X again today".

It's possible to explain ignorance by combining the moving average filter and a simple (structureless) Bayesian approach. If the filter is of very low order, it does not remember a lot. This means that it has to follow the principle of equal a prioris. This leads to a "this changes everything" reaction (a type I error) every time a new study comes out (or a snowball is observed in the backyard or the Senate). Basically, if the filter has a long memory, a single new observation that's different from the previous ones won't change much, but without memory there's no context and thus each new observation "changes everything".
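
A sketch of the memory point with a simple window average; the observation values and window lengths are made up:

```python
# Estimate a quantity from the last k observations: 99 consistent readings of 0.8
# followed by a single outlier of 0.1.
import numpy as np

history = [0.8] * 99 + [0.1]

for k in (1, 10, 100):
    print(f"memory of last {k:3d} observations -> estimate {np.mean(history[-k:]):.2f}")
# k=1   -> 0.10  (no memory: the outlier "changes everything")
# k=10  -> 0.73
# k=100 -> 0.79  (long memory: the outlier is just one data point in context)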

No memory/no context is a kind of ignorance (call it layman-ignorance) that leads to type I errors. An instrumental analogy would be a highly imprecise measurement. (The expected variation is so large that any sampled mean is pointless.)

Specialist-ignorance is a much tougher and more nefarious problem because it leads to type II errors. A seemingly precise but very inaccurate instrument. (The expected variation is small but the sampled mean is way out of whack.)

In the filter/Bayes sense, this comes about from having a network that is limited in breadth (though not in height, as the layman's is) but has a very long memory.
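
The instrument analogy from the two previous paragraphs, in code; the true value, bias, and noise levels are invented:

```python
# "Layman" readings: unbiased but so noisy that a handful of them says little.
# "Specialist" readings: tightly clustered but systematically off.
import numpy as np

rng = np.random.default_rng(3)
true_value = 10.0

layman = rng.normal(loc=true_value, scale=8.0, size=5)            # imprecise, accurate on average
specialist = rng.normal(loc=true_value + 4.0, scale=0.2, size=5)  # precise, inaccurate

print("layman:    ", np.round(layman, 1), " mean:", round(float(layman.mean()), 1))
print("specialist:", np.round(specialist, 1), " mean:", round(float(specialist.mean()), 1))
# The specialist's readings agree beautifully with each other, and are all wrong
# in the same direction, which is what makes the error hard to spot from inside.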

My favorite example here is an educated denialist using the Beer-Lambert law to show mathematically and in great detail why CO2 is irrelevant as a greenhouse gas. Typically, people making the Beer-Lambert argument have a background in chemistry, laser physics, or some other strong experience with test-tube gases. There's one key simplifying assumption that holds in test tubes (and lasers) but not in a real-life atmosphere(*): in the atmosphere, absorbed radiation gets re-radiated and doesn't leave the system the way it does in the test tube. This leads to an entirely different conclusion.

(*) Furthermore, the math is simple enough to understand using high school calculus, which means that EVERYBODY with a STEM degree still remembers enough to find it convincing. Only someone with an undergraduate+ background in astronomy or climate science will see the problem. I'm not familiar with what they teach meteorologists, but I presume that radiation is not treated dynamically in a weather model.
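
For concreteness, here is the Beer-Lambert piece of that argument as a sketch; the optical depths are placeholders, not measured values. The law itself is fine; the error is in stopping at single-pass transmission.

```python
# Single-pass Beer-Lambert transmission, T = exp(-tau). Treating the atmosphere
# like a test tube means pretending absorbed radiation simply disappears instead
# of being re-emitted by higher, colder layers.
import math

def direct_transmission(optical_depth):
    """Fraction of radiation passing straight through an absorbing column."""
    return math.exp(-optical_depth)

for tau in (0.5, 2.0, 5.0, 10.0):
    print(f"optical depth {tau:4.1f}: direct transmission {direct_transmission(tau):.4f}")
# Past a tau of a few, adding CO2 barely changes the *direct* transmission (the
# "saturation" intuition). What it does change is the colder altitude from which
# re-emitted radiation finally escapes, which the single-pass law ignores.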

Unlike layman-ignorance, which is a lack of data-memory/experience, specialist-ignorance is a lack of model-understanding.

This means they require different compensations.

A layman needs a lot of data context. Yes, you just read this one article, now read these other gazillion articles and stop paying the most attention to the most recent one you read.

An "expert" is harder to deal with because the following three things need to be communicated ...
1) You're wrong.
2) Not only are you wrong, but I know exactly why you're wrong.
3) You need to add/increase your understanding (take into account additional knowledge) and this will change your mind.

This is deep into Poe's Law + Dunning-Kruger territory because our antagonist thinks he's an expert whereas in reality he's not (DK). Furthermore, he's making what our protagonist thinks are basic mistakes which shouldn't happen with an expert (Poe's Law + underestimating the ignorance of the antagonist). So what we get is that our antagonist is approaching the contention as a debate whereas our protagonist sees it as an educational effort. This leads to lots of frustration.

Part II: It's definitely clear that, e.g., it's practically impossible to be a working biologist(*) and a creationist at the same time. This is because using creationism as a foundation for doing work in biology is fractally useless---it's useless on every conceivable level of biological research. I find the idea of fractal wrongness quite useful in terms of understanding the extent of ignorance. Biology is unique in the sense that the theory of evolution is fundamental at all levels, from explaining genetics to species to genotype behavior. Conversely, you can do an enormous amount of work in economics despite having assumptions that are fundamentally flawed (like rational expectations), because a lot of the math cancels out idiosyncratic behavior, similar to how statistical physics is useful for deriving the macrophysical quantities of pressure and temperature without considering the behavior of every single particle in the gas. As such, classical economics is neither fractally useful nor fractally useless. It's only right or wrong in certain domains. It is when it's applied to the wrong domains that it becomes a type II problem. Of course we see plenty of this.

Do the most alarmed people have the most interconnected understanding?

It depends. For example, in the noughties the peak oil and climate change people still weren't communicating their respective understanding to each other. Climate scientists were projecting growing CO2 emissions into the far future using standard exponential assumptions. From the peak oil perspective, climate change was a complete nonissue (e.g. no consideration of permanent impairment of terminals or refineries from hurricanes). This is a case where ignorance cancels out: not being aware that your fears are alleviated by constraints outside of your domain.

[Footnote, when I joined/cofounded the sustainability nonprofit back in 2009, it was my "vision" to try to bridge these gaps, but unfortunately this was not a vision that many of the others shared, so I left after a couple of years.]

On the other hand, we're seeing a lot of "it's not too late but we have to act now" coming from the science community. This has been going on for years. I suppose the intended effect on politicians is that "we're scientists and we think that if we say that it's too late, you politicians won't do anything" whereas the actual effect is the story of the boy who cried wolf. So the result is that the goal posts are moving instead. E.g. we've crossed the 1C threshold, so now we're looking at 2C without even mentioning 1C. Unless political trends change, we'll hit 3C before the end of the 21st century ... but everybody still talks about 2C ... This is a case where mutual ignorance compounds the problem, i.e., each side should be more worried than they actually are... or appear to be ...

There's often a BIIIIIIG difference between institutions and the people who work there. You can have some very smart people working in some very dumb-sounding institutions. More importantly, institutions are often required to stick to the "party line". Worse, people working at institutions may not be allowed to comment. You can also have some dumb people working for "smart" institutions, by which I mean that the aggregate of a bunch of dumb decisions simply happens to look smart almost randomly. If you can't decide whether an apparent "strategy" is "evil or incompetent", this is probably the case (there wasn't really any well-formed strategy to begin with).

Are well-developed theories all consistent and converging over time?
Theories come in three different flavors:
1) Scientific theories are converging and consistent by construction. There's a bunch of metaphysical reasons for this but I don't think we need to go there.
2) Human theories (individuals) are scattered all over the place and consistent over time, but they never converge. I'm talking about human psychology. Human minds have not changed "significantly" over tens of thousands of years. This is why we can still learn a lot from reading stories that are a few thousand years old (and probably older, had more writings existed/been preserved). However, since not all of our minds are the same, it's also something that each and every person must personally learn. It's not like science, where we can all learn the same things because the natural world exists in objective reality. The mind is a problem of subjective reality.
3) Social theories (institutions, herd behavior). These are somewhat consistent but NOT converging. The problem here is that there's too much to learn and that the past won't be repeated in the future. Another problem is that these theories are reflexive. They're being expanded/changed over time and respond to this change.

... so let's deal with them one at a time and then all of them together.
1) There are no opposing theories that are developed to the same degree. At all! This is because of the requirement and possibility of testing. Basically, both theories have to agree with the third factor, which is experimental reality. And if they both agree, then first, they aren't really opposing, and second, they can almost always be shown to be mathematically equivalent. E.g. the Schrodinger formulation and the Heisenberg formulation were.
(Where/when testing is unavailable, as with string theory, you can have several theories that are well-developed and opposing ... but if you can't test, are you still doing science? It's actually a testament to how strong our scientific understanding of the world is that we can only resolve oppositions by spending megadollars on, for example, the LHC.)
2) Ha! All the time. Because my mind isn't the same as your mind.
3) It's my experience that when anyone with a well-developed theory decides to broaden it, they will usually stop opposing for no other reason than that they recognize the limitations. Unless they're in service of some job and bound to represent some particular policy. But in general, when it comes to limited/reflexive theory, broadness tends to limit opposition as people become less attached to their pet theory. "To understand your enemy is to accept him."

Example: The two competing classical economics theories are Austrian and Keynesian. Anyone who only knows one will fiercely defend it against anyone who only knows the other one. Anyone who knows both will either play Devil's Advocate against either one of the former or hopefully recognize another "dual" and instead debate which is more useful for a given situation on the meta-level.

Now all of them together.

1+3) Any social theory that's not rooted in reality will fail in a very predictable manner. This is because reality is a boundary condition (fixed parameter) and social trends are slow. This makes for a linear/nonchaotic prediction. For the same reason a social theory that is rooted in reality will evolve in a predictable manner.
2+3) Without breadth, individual preferences for social theories are VERY MUCH colored by their personal temperament. For example, a person whose neurochemistry makes for a sunny disposition tends to prefer optimistic social theories.
1+2) N/A. This is why science is instrumentalist and not based on personal revelation.

In terms of the fraction of broad & complex voices ... are you counting theories or the people holding them? Let's say we have a 1+3 theory opposing a 3 theory. Obviously 1+3 is broader than 3 by construction. Initially, it will therefore only be held by a few. Over time, due to the convergence, it becomes easier and easier to see that 1+3 is true and that 3 is false (presuming that they're opposed). There is usually definite herd behavior here (also a large number, the media especially, who just swing whichever way the wind is blowing). The fraction therefore tends to go from ~0% to ~100%, or from very low to very high.

The standard progression is the familiar one:
1) First they ignore you.
2) Then they laugh at you.
3) Then they fight you.
4) Then you win.
(Happened with ERE too.)
The fraction of voices presenting the original complex argument will always be small(+). It will either be discovered independently or only adopted by the few who can hold it (it's a lot of work to grasp complex and broad arguments because the required foundation is so large). It can be magnified by simple repetition/echo-chamber, but what mostly happens once you start "winning" is that people will drill down and pick the subargument compatible with their pet scapegoat. Instead of asking what the fraction is (there's that sneaky frequentism again :-D ), it's perhaps more useful to ask whether you have enough self-consistent "data" to have your probabilities fully converged (Bernstein-von Mises), e.g. the/any new information didn't surprise you. Basically, if by talking to more and more people you have reached a state of "I already figured that", you're done. WRT shale, I currently don't know where to go to learn anything new and it's been like that for a couple of years [for this question]. That's not to say it won't happen; I just don't know what I don't already know. Currently, the only ones hanging on to "the peak is just a temporary blip" are the official oil industry and the permabulls/techno-optimists.

(+) Furthermore, it strangely seems that it's usually the same people again and again.
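
A toy version of that stopping criterion: track how far each new observation moves a Beta posterior's mean; once the per-observation shift is negligible, further sampling is mostly redundant. The data stream here is simulated, not real reading.

```python
import numpy as np

rng = np.random.default_rng(4)
data = rng.random(2000) < 0.7      # simulated stream: ~70% of sources "support" the thesis

heads, tails = 1, 1                # flat Beta(1, 1) prior
prev_mean = 0.5
for i, supports in enumerate(data, start=1):
    heads += int(supports)
    tails += int(not supports)
    mean = heads / (heads + tails)
    if i in (1, 10, 100, 1000, 2000):
        print(f"after {i:5d} observations: belief {mean:.3f}, shift from previous {abs(mean - prev_mean):.5f}")
    prev_mean = mean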

enigmaT120
Posts: 1240
Joined: Thu Feb 12, 2015 2:14 pm
Location: Falls City, OR

Re: US shale oil peaks June 2015(?) ...

Post by enigmaT120 »

Regarding "biologist(*)", what is the asterisk for? I didn't see it as a foot note anywhere, sorry if I'm being blind.

jacob
Site Admin
Posts: 15996
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77
Contact:

Re: US shale oil peaks June 2015(?) ...

Post by jacob »

Ha! That was ultimately rolled into the text. I meant "working biologist" in the sense of a biologist doing work where biological understanding is required to do the work. One could imagine a biologist working for a think tank writing policy that doesn't require much biological understanding. Such a person could be a creationist without being stymied in their work.

jacob
Site Admin
Posts: 15996
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77
Contact:

Re: US shale oil peaks June 2015(?) ...

Post by jacob »

@bsog - The fraction question stumped me because I was trying to quantify qualitative differences (5 apples > 6 oranges). Then I realized that a normal trick for dealing with interlocking arguments is simply to count the arguments as independent votes, assuming that no interlocks exist. Of course a bunch of broad and dependent arguments is generally stronger than independent ones ... but that goes too far into systems theory. In any case, I CAN count the number of arguments to give you some sort of fraction.

These are presented roughly in the order that I heard about them (this won't give you any timing, but if you put them together yourself you can see the timing appear and the case getting stronger and stronger). The first three were enough to not buy into the energy-independence argument. The first six were enough to put the peak within 10 years; the first ten, within 5 years. The last five were enough (for me) to say that this was it, that is, production will never be substantially higher than it is now.

Pro peak:
* Hubbert on shale (backdated reserves)
* EROEI saying shale is barely economical (geology+physics)
* Economy not supporting $100+ oil (economic experience, see 2008)
* QE causes speculative junk (Austrian theory)
* Negative cash flows observed (blogs and "conspiracy sites", or filings)
* Red queen race for discovery vs production (industry, petro-engineers)
* First bankruptcies confirmed (specialty news)
* US government optimism peaks (change in foreign policy confirms abundant [over]supply)
* Shale ground water pollution ("netflix", popular resistance, voters)
* IEA gives a time limit (iea.org, a lagging indicator)
* CA water shortages becomes mainstream (news, voters)
* SA debt problems + new King & cabinet + ME social unrest (blogs, google news)
* Shale quakes (government websites tracking)
* Majors pulling out (industry news)
* Shale legislation/bans (news, public resistance into law)
* Business optimism peaks. (The "Time" front page event)
* QE ends.
* China failing, commodity bust
* Saudi Gambit (oil price drops)
* Negative cash flows (mainstream business news)
* Mass layoffs begin (mainstream news)
* Economy ex shale states recessing

Con peak:
* Industry confidence
* New technology will be invented as it has been in the past.
* Most of the Pros above falling away within the IEA time limit.

Overall, the more of these one ties together, the broader the argument. Note that some can be used against other arguments. E.g. industry confidence implies subsequent oversupply (tragedy of the commons problem) followed by a price drop (econ101), so this could be put in the pro-camp depending on which argument is made.

sky
Posts: 1726
Joined: Tue Jan 04, 2011 2:20 am

Re: US shale oil peaks June 2015(?) ...

Post by sky »

So there is still a role for the Transition movement?

jacob
Site Admin
Posts: 15996
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77
Contact:

Re: US shale oil peaks June 2015(?) ...

Post by jacob »

@sky - Yes, but it's my impression that when it comes to climate change, Transition Towns are mostly focused on avoiding contributing to the problem and possibly preparing for a time when fossil energy might be legislated out of use (if sanity prevails, that will happen within the next 20 years, before fossil fuels run out or even get prohibitively expensive). There's less focus on the increase in economic activity that needs to be diverted towards solving problems caused by climate change; those problems are already set in motion. So while, e.g., a sane conventional economy maintains its real productivity (as efficiencies increase and compensate for input declines), resulting in a declining real income of goods as more real income goes towards "anti-bads" (investing in food reserves, repairing disaster damage, ...), a transition town might find itself left behind if it decreases real productivity (better efficiency, but a choice of fewer inputs) and still has to pay the fixed climate damage costs. Conversely, a non-transition town might simply feel climate costs as a secular stagnation in wages throughout society, nothing different from what blue-collar laborers have experienced for the past 30 years. But for everybody. Or maybe the "rich" will be hit harder. "Rich" = Americans, then Europeans.

That's nothing that can't be fixed with a new TT policy, since the present policy is not fundamentally incompatible with secular reality trends; I just haven't seen any policy in that regard(?)

The answer really depends on how susceptible the location is to climate costs and shocks. A TT in SoCal, AZ, or Spain will be more SOL than a TT in Oregon, Michigan, or Scotland.

black_son_of_gray
Posts: 505
Joined: Fri Jan 02, 2015 7:39 pm

Re: US shale oil peaks June 2015(?) ...

Post by black_son_of_gray »

@ Jacob. This thread has been a really interesting read. With regard to the last comment, itemizing pro peak vs con peak - I guess a more refined version of my previous question (continuing my unabashed frequentist orientation) would be:

How often (ballpark numbers) would you say you personally encounter sources incorporating 1, 5, 10, or 15+ of your listed items, and what are the sources? E.g. I would guess that the average news source = 1, in that they are only reporting on that 1 thing. Maybe some long-form, investigative news sources might mention 2-3 to hash out a more compelling argument. Wonks/bloggers who are routinely knee-deep in the source material and putting the time in to develop breadth and educate themselves in tangentially related topics might be 4+ ... I would imagine the 1s might be something like 95% of all sources, the 2-3s maybe 4%, and the 4+ maybe <1%. With non-random sampling, maybe you can skew that to 80% 1s, 15% 2-3s and 5% 4+? Something like that.

Anyway, I'm going to bow out on further questions in this thread, lest you start charging me with private tutoring fees :P .

jacob
Site Admin
Posts: 15996
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77
Contact:

Re: US shale oil peaks June 2015(?) ...

Post by jacob »

@bsog - I agree with your numbers. I'd also add that the more items an argument ties together, the more work it is to hash out and thus the more valuable it is, for whatever reason, and people won't just give it away for free. Once it becomes worthless, there's no value to be gained by writing it down other than as a history lesson. Some do do that, but it's relatively rare. This is probably the real reason why 4+ is rare. I mean, would you read a 600-page book covering some obscure argument? It wouldn't sell.

For me, I'm mostly just looking for raw material so Ones are fine with me. I can make my own Twos and Threes and Ten+s ... using latticework. This is why I prefer textbooks+news to nonfiction(*) narratives.

Non-random sampling won't boost your numbers, because there's nothing to read or it might not be worthwhile anyway (why read 600 pages if you already know the material on 550 of them from elsewhere?). However, it's easy to non-randomly cover the total area and put the argument together yourself. The width is there. The complexity isn't. This is where latticework helps. E.g. if you understood subprime housing, it was easy to understand subprime energy. The hard part was realizing that subprime was the model and not something else, e.g. the standard business cycle or whatever.

TL;DR - All these complex/broad arguments simply aren't written down and made available for the general public because it's not worthwhile for the person doing it.

(*) Those I've talked to who read a lot (say a lifetime total of 2000+ books) tend to think nonfiction books are a waste of time. Standard lamentation: "I read 300 pages in order to gain the insight of two sentences."

Myakka
Posts: 122
Joined: Thu Sep 13, 2012 3:39 am

Re: US shale oil peaks June 2015(?) ...

Post by Myakka »

@ Jacob - I have enjoyed reading your expounding on the methodologies you use to create holographic models based on cross-disciplinary reading of scientific theory and current news. To create internally self-consistent models of the world based on such a quantity of research certainly says a lot about your openness to a variety of manners of looking at the world.

In my own somewhat more modest quest to model this world we live in, I have come to understand that all of the disciplines of science that constitute the accumulated knowledge of our modern world contain some recurrent assumptions which in large part pre-determine the sorts of answers that are found. One of these assumptions is that (as in a mathematical proof) we can know absolutely everything about the entity we are studying and therefore are able to answer with complete certainty all questions about that entity. This assumption works a lot better in physics and astronomy than it does in understanding humans, society, and ecology – which is why the theories in hard sciences are much more stable over time than the theories in sciences involving living entities. A second assumption of our knowledge is its fascination with gaining control over the entities studied. This act of controlling another without any thought as to needing the other's consent has the effect of rendering the one studied into the position of a slave. It allows someone wielding the knowledge contained within that field to gain control of another in ways that are frequently destructive to that other. In fact, I suspect (although I haven't yet been able to prove it) that that manner of control is always or nearly always destructive.

The alternative to this method of extracting information from the world involves integrating empathy into our pursuit of knowledge. In such a system I don't have the right to know every last private detail about Jacob L.F. I only have the right to know what you choose to share. AND then I don't have the right to use my knowledge of you to control you into doing something I want, but only the right to use that knowledge to make a guess as to what might be okay with you and then to confirm that guess by asking you.

If you apply these two different approaches to knowing and interacting with the world to the environment, the first obviously produces the environmental crisis as we know it today. The second allows the environment to say no to those things we might ask of it that would damage it.

What is your reaction to the idea that all of the fields of knowledge you have spent time understanding might all have the same type II error as I have tried to explain above?

cmonkey
Posts: 1814
Joined: Mon Apr 21, 2014 11:56 am

Re: US shale oil peaks June 2015(?) ...

Post by cmonkey »

This is a fantastic interview regarding what I view as the most important topic in peak oil talk - available net exports.

Current estimates are that China and India will consume all available oil exports on the world market by the year 2032 (17 years from now) given their dramatically increasing consumption vs falling net exports.
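
A back-of-the-envelope sketch of that net-exports squeeze; the starting volumes and rates below are placeholders chosen only to show the mechanics, not the inputs behind the 2032 estimate.

```python
# Year in which two growing importers would absorb all remaining net exports.
# Volumes are in notional Mb/d; all parameters are made up.
def crossing_year(start_year=2015, net_exports=40.0, export_decline=0.02,
                  importer_demand=10.0, demand_growth=0.07):
    year = start_year
    while importer_demand < net_exports:
        net_exports *= 1 - export_decline     # exporters' own consumption eats into exports
        importer_demand *= 1 + demand_growth  # China + India imports keep growing
        year += 1
    return year

print(crossing_year())   # ~2031 with these made-up parameters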

cmonkey
Posts: 1814
Joined: Mon Apr 21, 2014 11:56 am

Re: US shale oil peaks June 2015(?) ...

Post by cmonkey »

Saudi Arabia's crown prince has announced a plan to create a 2 trillion dollar investment fund to move SA into the 'post oil' era. Of course there's no mention that this is probably due to production peaking now or in the near future. The only reasoning given is price volatility and the effect on their budget.

Part of that fund will be an IPO of Aramco.....

enigmaT120
Posts: 1240
Joined: Thu Feb 12, 2015 2:14 pm
Location: Falls City, OR

Re: US shale oil peaks June 2015(?) ...

Post by enigmaT120 »

Didn't Denmark already do that? What took SA so long?

jacob
Site Admin
Posts: 15996
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77
Contact:

Re: US shale oil peaks June 2015(?) ...

Post by jacob »

@enigmaT120 - Denmark doesn't have a sovereign wealth fund. However, they recently privatized their national oil company in an IPO. Norway, on the other hand, has one of the largest sovereign wealth funds in the world.

https://en.wikipedia.org/wiki/List_of_c ... alth_funds

enigmaT120
Posts: 1240
Joined: Thu Feb 12, 2015 2:14 pm
Location: Falls City, OR

Re: US shale oil peaks June 2015(?) ...

Post by enigmaT120 »

Dammit. Stupid excuse for a memory that I have!

Locked