jacob wrote: ↑Thu May 21, 2020 7:51 pm

@iDave - From the perspective of an ex-physical scientist and ex-financial modeller, I think it's fair to say that "all science is based on models but not all models are based on science". For example, suppose I wanted to build a tracker-predictor, for example, a targeting radar for tracking falling objects.

An example of a model that was not based on science would be a Kalman filter ...

jacob,

The first two sentences, while probably not the way I'd say it, I'd basically agree with. Over the last decade or so I started hearing the term "The Science" invoked a lot. When I hear that I tend to assume what is meant is the underlying "truth" that little-s science (to me, an activity, not a thing) seeks to understand. Mathematics is a language we invented to describe how nature/the universe work. Models of the type discussed here are largely applications of the math. It seems that insofar as we can record, share/discuss, and build on science we are completely reliant on math and a little bit of jargon, and in that sense I can see a certain inseparability. Gives me some new topics for navel gazing.

Yes, I've got a working understanding of Kalman filters, though not so sharp now as it was 20 years ago when I spent most of my time slogging through the weeds and my graduate studies were fresh. You probably recall we discussed the topic offline a while back. Kalman was an EE, as am I as far as academic training goes. Luckily my jargon is rusty, so I'll avoid it as much as I can lest I embarrass myself by getting some of it wrong. The way I learned to derive the "Kalman filter equation" started with a very simple signal processing problem (extracting signal from observations corrupted by static white noise) and then generalized to more complicated systems. My concentration was control systems, so from there we took it in a certain direction.

Despite that, and despite not being a radar expert (I can spell it both forwards and backwards, though), I'm a little familiar with the problem of tracking ballistic objects. At least as pertains to this planet, ballistics are pretty well understood, and at the heart of such a Kalman filter is a "model" of a falling object (which afaik would include gravity and drag, btw, so I guess my experience is with the expert level, ha). If we're talking about tracking an asteroid or something like that to predict its impact point, worrying about gravity and drag is pointless because you won't have time to get enough updates for the filter to "converge" (Kalman filters, at least of this type, work in "real time" and don't "look ahead" very far), and likely the object is moving so fast that the cumulative effects of gravity and drag will be minimal. If you're tracking a satellite whose orbit is decaying, those effects might be more germane. What's not known at first encounter is the object's initial state (position and velocity vector, maybe acceleration) and its size/shape (how drag will affect it).
The problem is to sort through inherently noisy and somewhat imprecise radar measurements, and the Kalman filter is a practical computational algorithm to come up with an "optimal"* formulation of the model (calibrate the model you might say). So from my limited perspective I wouldn't classify a Kalman Filter itself as a model, although including a model and predicting ahead incrementally are part of the algorithm. I suppose the model can be as sciencey or unsciencey as the situation warrants.
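To make the "model at the heart of the filter" bit concrete, here's a rough Python sketch of a 1-D falling-object tracker. Everything in it is made up for illustration (the constant-acceleration state model, the noise numbers, the name kalman_track); a real targeting radar would be far more elaborate, but the predict/update structure is the same:

```python
import numpy as np

# Hypothetical sketch: estimate altitude and vertical velocity of a falling
# object from noisy position measurements. All numbers are illustrative.
def kalman_track(measurements, dt=0.1, meas_var=25.0, g=-9.81):
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])         # state transition: constant velocity...
    B = np.array([0.5 * dt**2, dt])    # ...plus gravity as a known input
    H = np.array([[1.0, 0.0]])         # we only measure position
    Q = np.eye(2) * 0.01               # process noise (model uncertainty)
    R = np.array([[meas_var]])         # measurement noise variance

    x = np.array([measurements[0], 0.0])  # initial state guess
    P = np.eye(2) * 100.0                 # large initial uncertainty

    estimates = []
    for z in measurements:
        # Predict: push the state forward using the falling-object model
        x = F @ x + B * g
        P = F @ P @ F.T + Q
        # Update: blend prediction with the noisy measurement
        y = z - H @ x                     # innovation
        S = H @ P @ H.T + R               # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
        x = x + (K @ y).flatten()
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)
```

The predict step is where the "model" lives; the update step is where each noisy measurement gets blended in, weighted by the Kalman gain. That's the incremental, real-time character I mentioned: the filter never looks ahead, it just keeps folding in the next observation.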

One way we commonly see modeling that isn't derived from science is when random variables are substituted for phenomena we can observe but either don't understand well enough to model properly or whose underlying process is prohibitively complex. Sticking with radio waves, antennas pick up "atmospheric noise". We probably have a pretty good understanding of a lot of the processes that produce radio waves across the universe, but detailed modeling of the radio-band emission sources of the entire universe to test via modeling (i.e., by simulation) how well a new GPS receiver design will reject noise and give accurate readings isn't practical. We have lots of measurements of this noise, though, so we can come up with a suitable representation by matching its statistical properties with a random variable. Where it gets more dicey is when we don't have the observation base and use random variables in the hope of covering a wide range of excursion of some unknown. That might be at the heart of the think tank's criticism that 7Wb5 mentioned above.
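As a toy illustration of that substitution (the names and the Gaussian assumption are mine, not any real receiver-simulation code): record the noise, match its statistics, then generate as much synthetic noise as the simulation needs, never modeling the actual emission sources at all:

```python
import numpy as np

# Hypothetical sketch: stand in for "the radio emissions of the universe"
# with a random variable fitted to recorded noise measurements.
def fit_noise_model(measured):
    """Estimate mean and standard deviation from recorded noise samples."""
    return float(np.mean(measured)), float(np.std(measured))

def synthesize_noise(mean, std, n, rng=None):
    """Generate synthetic noise with the measured statistical properties."""
    rng = rng or np.random.default_rng()
    return rng.normal(mean, std, size=n)
```

Here I've assumed the noise is adequately described by its first two moments (i.e., roughly Gaussian); real atmospheric noise can be impulsive, in which case you'd match a heavier-tailed distribution instead. But the principle is the same: the model is justified by the measurements it matches, not by physics.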

Your last sentence: "Fundamentally, models are thus ways of describing reality and scientific models are models that have been subject to the scientific method." sums up what I was getting at more expertly than I did. Which segues to one of my favorite sayings (attributed to Einstein, perhaps apocryphally, I dunno): "As far as the laws of mathematics refer to reality, they are not certain, and as far as they are certain, they do not refer to reality." You could probably substitute "model" for "mathematics" in that statement with no loss of generality. And that's probably what's at the heart of the problem getting hashed out here. Models are just models. Models not (yet) subjected to the scientific method need to be treated with caution. Some of the models I've worked on and with were extremely expensive to develop, took years and input from SMEs across many disciplines, and even more money/effort was spent to verify/validate them. Only then were they employed for prediction. I think the problem here isn't so much the initial rough-cut model; it lies either with those who promulgated the results to laypersons, or with what the laypersons decided to do with them. And it's hard to even use the word "fault," because I like to believe that somewhere below all the noise, people were doing their best to save lives without making the cure more deadly than the disease, as it were.

*To be "optimal" requires some assumptions in the derivation that don't always apply IRL, iirc, but the algorithm still works well enough to be useful.