In the post “What does it take for a quantitative theoretical model to be reliable?” I put up a list of requirements a quantitative theoretical model will have to fulfil to be regarded as reliable. I base the list on what I regard to be principles in modern philosophy of science, hopefully in accordance with the principles of critical rationalism as expressed by Karl Popper. I also base the list on selected principles for measurement, estimation of uncertainty and independent testing as found in international standards – from the International Organization for Standardization (ISO): Quantities and units; Guide to the expression of uncertainty in measurement; General requirements for the competence of testing and calibration laboratories – and from the Bureau International des Poids et Mesures (BIPM): International vocabulary of metrology.

- The theory is about a causal relation between quantities that can be measured
- The measurands are well defined
- The measurands can be quantified within a reasonable uncertainty
- The uncertainty of the measurands has been determined by statistical and / or quantitative analysis
- The functional relationships, the mechanisms, have been explained in a plausible manner
- The functional relationships, the mechanisms, have been expressed in mathematical terms
- The functional relationships between variables and parameters have been combined into a model
- The influencing variables which have significant effect on the accuracy of the model are identified
- The model has been demonstrated to consistently predict outputs within stated uncertainties
- The model has been demonstrated to consistently predict outputs without significant systematic errors
- The model has been tested by an independent party on conditions it has not been adjusted to match

However, I offered no explanations and no arguments in the original post. This post is intended to rectify that shortcoming by offering a reason behind each individual requirement.

**The theory is about a causal relation between quantities that can be measured**

Causality is a fundamental requirement within science. I take for granted that this principle is acceptable as a requirement for a reliable theoretical model. A causal relation between two events exists if the occurrence of the first causes the occurrence of the second. The first event is called the cause and the second event is called the effect. A correlation between two variables does not imply causation. On the other hand, if there is a causal relationship between two variables, they must be correlated.

“Quantity is a property that can exist as a magnitude or multitude. Quantities can be compared in terms of “more,” “less,” or “equal,” or by assigning a numerical value in terms of a unit of measurement.” As all requirements on this list are requirements for a quantitative theoretical model, I find it reasonable to require that the causal relation is between quantities.

It is also a requirement that the quantities can be measured and assigned a magnitude. Obviously, if you cannot measure a quantity independently of the model, how can you ever tell whether the theoretical model is reliable or not?

Finally, it would also be wise to assign the quantity a unit in accordance with ISO 80000 Quantities and units – most of all because you will end up with problems if you don't.

**The measurands are well defined**

The term measurand is meant to cover both input variables and output variables. You will need at least two variables in a causal relationship – one input variable (independent variable) and one output variable (dependent variable). Obviously both these variables – both these measurands – need to be well defined. If one of the variables is not well defined, this will add uncertainty to the measurement. What has been measured? How can you repeat the measurement? How can the measurement be repeated by an independent party? You will also lose the ability to communicate precisely about the causal relationship, and it will become difficult to test and verify the theory.

**The measurands can be quantified within a reasonable uncertainty**

For a measurement to be complete, it must be possible to assign a magnitude, a unit and an uncertainty to the measurands. If you cannot quantify, and assign an uncertainty to, the input variables and the output variables, there is no way you can tell if a prediction based on the causal relationship is within a reasonable uncertainty. Obviously you cannot use the model itself to quantify the input variables or the output variables, and you cannot use the model to quantify the uncertainty. You cannot use outputs from the model to test the model. Even a seriously defective model may seem to work perfectly fine if you test it against its own predictions.

**The uncertainty of the measurands has been determined by statistical and / or quantitative analysis**

It is not sufficient that it is possible to determine the uncertainty of the measurands. It is also required that the uncertainty actually has been determined, and that it has been determined and documented in accordance with an acceptable standard. The Guide to the Expression of Uncertainty in Measurement (GUM) is the most recognized standard for evaluation and expression of uncertainty. I am not aware of any other standard having the same level of international recognition.
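As a concrete illustration, a Type A evaluation in the sense of the GUM can be sketched in a few lines of Python. The readings below are hypothetical; the point is only that the standard uncertainty of the mean of repeated readings is the sample standard deviation divided by the square root of the number of readings.

```python
import math

def type_a_uncertainty(readings):
    """Type A evaluation in the sense of the GUM: the standard
    uncertainty of the mean of n repeated readings is the sample
    standard deviation divided by sqrt(n)."""
    n = len(readings)
    mean = sum(readings) / n
    # Sample variance with Bessel's correction (n - 1 denominator).
    variance = sum((x - mean) ** 2 for x in readings) / (n - 1)
    return mean, math.sqrt(variance / n)

mean, u = type_a_uncertainty([20.1, 19.9, 20.3, 20.0, 20.2])
print(f"measurand = {mean:.2f} ± {u:.2f} (standard uncertainty)")
```

A complete uncertainty budget would of course also include Type B contributions (instrument specifications, calibration certificates and so on), combined as described in the GUM.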

**The functional relationships, the mechanisms, have been explained in a plausible manner**

By statistical analysis you may be able to find a correlation between variables between which there is no causal relationship. This correlation can be used in a theoretical model, and the model can then be used to predict an output which is reasonably close to an independent measurement of the output variable – for a while. The theoretical model seems to have predictive skill without actually having any. Therefore the functional relationships will also have to be explained in a plausible way: the causal relationship will have to be explained, and the explanation will have to rest on already established, reliable quantitative theories.

**The functional relationships, the mechanisms, have been expressed in mathematical terms**

For the functional relationship to be usable, it must be possible to calculate a set of output variables for a set of input variables. Hence, the functional relationship will also have to be expressed in mathematical terms. If it has not been expressed in mathematical terms, it cannot be used in a computation.

**The functional relationships between variables and parameters have been combined into a model**

When the functional relationships in the theory have been expressed in mathematical terms, they will also have to be combined into a model. The model is the usable realization of the theory. The model must be able to calculate a set of output variables for a set of input variables.
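A minimal sketch of what “combined into a model” means in practice, using two hypothetical functional relationships (heating power and duration give energy; energy, mass and specific heat give temperature rise). The relationships and numbers are chosen purely for illustration:

```python
def heat_input(power_w, duration_s):
    # First relationship: energy delivered = power * time.
    return power_w * duration_s

def temperature_rise(energy_j, mass_kg, specific_heat=4186.0):
    # Second relationship: dT = E / (m * c), here with the specific
    # heat capacity of water in J/(kg*K).
    return energy_j / (mass_kg * specific_heat)

def model(power_w, duration_s, mass_kg):
    """The combined model: a set of input variables in,
    a set of output variables out."""
    energy = heat_input(power_w, duration_s)
    return temperature_rise(energy, mass_kg)

# 1000 W applied for 60 s to 1 kg of water: roughly 14.3 K rise.
print(model(1000.0, 60.0, 1.0))
```

The individual relationships are each expressed in mathematical terms; only their combination into a single input-to-output mapping makes them usable as a model.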

**The influencing variables which have significant effect on the accuracy of the model are identified**

Input variables which can be quantified, and which have a significant and systematic effect on the output variables, need to be included in the model. Input variables which have a significant effect on the accuracy, but which cannot be quantified, or whose functional relationships are not known, should at least be identified.

**The model has been demonstrated to consistently predict outputs within stated uncertainties**

If not – the claim about model uncertainty has been falsified.

If the uncertainty has not been stated, the model isn't falsifiable – it is not scientific. How can you rely on a model if it has not been demonstrated to predict outputs within stated uncertainties for a realistic set of input variables? And yes, the uncertainty of the model will have to be stated. How can you possibly decide whether a model is useful or not if the uncertainty of the model has not been quantified?

**The model has been demonstrated to consistently predict outputs without significant systematic errors**

If not, the claim that the model correctly represents the issue at hand has been falsified. If the model, after calibration and adjustment, still predicts outputs with significant systematic errors, there must be something wrong with it.
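A simple bias test can make this requirement concrete. The sketch below flags a significant systematic error when the mean residual exceeds twice its own standard uncertainty; the 2-sigma criterion and the numbers are assumptions made for illustration, not prescribed by any standard:

```python
import math

def systematic_error_check(predicted, measured, u_combined):
    """Flag a significant systematic error if the mean residual
    (bias) exceeds twice its own standard uncertainty."""
    residuals = [p - m for p, m in zip(predicted, measured)]
    n = len(residuals)
    bias = sum(residuals) / n
    # Standard uncertainty of the mean residual.
    u_bias = u_combined / math.sqrt(n)
    return bias, abs(bias) > 2 * u_bias

bias, significant = systematic_error_check(
    predicted=[10.2, 10.1, 10.3, 10.2],
    measured=[10.0, 9.9, 10.1, 10.0],
    u_combined=0.1,
)
print(bias, significant)  # a consistent bias of about +0.2 is flagged
```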

**The model has been tested by an independent party on conditions it has not been adjusted to match**

The first part of this requirement is that the model will have to be tested: the predicted output will have to be compared to an independent measurement (estimation) of the output variable. The independent measurement will also have to be assigned an uncertainty. The predicted output should differ from the independent estimate by less than the combined uncertainty of the estimate and the claimed uncertainty of the theoretical model. Be aware that even in very simple models there can be all kinds of errors. The only way to be sure that no such errors exist is to compare the prediction with an independent measurement of the same output variable.
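The comparison can be sketched as follows, combining the two uncertainties by the root-sum-of-squares rule of the GUM; the coverage factor k = 2 and the numbers are assumed for illustration:

```python
import math

def within_combined_uncertainty(predicted, measured, u_model, u_measurement, k=2):
    """Test whether a prediction agrees with an independent measurement.
    The combined standard uncertainty follows the GUM root-sum-of-squares
    rule; k is a coverage factor (k = 2 is a common choice)."""
    u_combined = math.sqrt(u_model ** 2 + u_measurement ** 2)
    return abs(predicted - measured) <= k * u_combined

# A prediction of 101.3 against an independent measurement of 100.8,
# with claimed model uncertainty 0.4 and measurement uncertainty 0.3:
print(within_combined_uncertainty(101.3, 100.8, 0.4, 0.3))  # True
```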

The second part of this requirement is that the model will have to be tested by an independent party. In line with established practice in our society, it is often required that testing of important models is performed or verified by independent parties. The reason for this is to avoid the errors that can arise when one tests one's own products.

Finally, it is also required that the model is tested on conditions it has not been adjusted to match. Most theoretical models need to be adjusted in various ways to match independent measurements of the output variables – this is also called calibration and adjustment. There is then a risk that the adjustment improves the test results only for the test conditions; the model may have much worse capabilities for conditions it has not been adjusted to match. It is therefore required to test the model on conditions it has not been adjusted to match.

And – of course – if the test results are not within stated capabilities, the model and the claims about it have been falsified. If no capabilities have been stated, the model isn't scientific – it isn't falsifiable.

———

I would also expect that data, methods, models and test results are readily available for scrutiny, and that all information is provided in a way that is consistent with established standards.

———


This is great.

I believe there is another required aspect: there must be variation in each measurand; there must be occasions where values in each are higher / greater or lower / fewer.

These causal claims say, in so many words: if values in the putative causal measurand go up, then values of the putative effect go up (or, in the case of an inverse causal relation, go down).

If I say “there is more precipitation in the winter than the summer,” I have to have seasonal variation, and I need to have variation in precipitation.

This leads to this view: you cannot investigate causal relations with two measurands if one of the measurands is monotonic: staying steady – either remaining the same, or rising steadily, or falling steadily.

If one is monotonic, you have no opportunity to say as one rises the other rises (or falls); because there is no data where each has both a rise and a fall. There is never occasion for the two measures to covary.

I assume “covary” means that each varies, and the two should, to some degree, vary sympathetically. If one does not vary, you logically cannot have a covariance analysis.
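The point can be illustrated numerically; the function and the numbers below are hypothetical, chosen only to show that the Pearson correlation coefficient is undefined when one measurand does not vary:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient; undefined (None) when either
    series has zero variance, since that variance appears in the
    denominator."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    if sx == 0 or sy == 0:
        return None  # no variation in one measurand: r is undefined
    return cov / (sx * sy)

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # essentially 1.0
print(pearson_r([5, 5, 5, 5], [2, 4, 6, 8]))  # None
```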

This is a matter of logic, or reason; it needs to be a statistical assumption for running any covariance analysis. Frankly, I have not seen this in the several statistics books I have to review.

Currently, this is a problem with modeling the putative relation between CO2 (putative cause) and global temp (putative effect). In modern measurement time, CO2 has only been steadily rising. If you do not have this co-vary requirement, then ANY measured trend that has an overall general rise, or decline, should have a notable correlation.

CO2 does “covary” – at the seasonal level – there is a sinusoidal cycle each year, but over years a gradual increase. So, you can test whether CO2 varies by season. But, the planetary CAGW temperature hypothesis is over a longer time frame / longer resolution – of years. Planetary temp rises and falls, but across years, and at that time frame CO2 has only a steady increase – no covariance.


I tried to find the correlation between surface temperature and CO2 once, the correlation I found was not at all convincing.

On the other hand, the temperature of the oceans is monotonically increasing. The oceans are supposed by IPCC to accumulate 93% of the energy absorbed by CO2 in the atmosphere. Without having done the exercise, I imagine that there would be a significant correlation between ocean temperature and CO2, at least if ocean temperature is smoothed over a few years. https://dhf66.wordpress.com/2016/07/15/how-to-estimate-current-global-energy-imbalance-from-nodcs-ocean-temperature-record-2005-2015/

However, the temperature increase of the oceans is lower than the rate of warming hypothesized by IPCC. The current rate of warming of the oceans is equal to the hypothesized cloud feedback alone, which – if it is right – would leave no room for the primary effect.

https://dhf66.wordpress.com/2016/02/09/without-cloud-feedback-there-would-be-no-global-warming/

Hence, judging by current observations, the size of the effect of CO2 that is propounded by IPCC seems to be exaggerated.
