Errors in Measurement
Posted by greg2213 on February 12, 2010
A generally accepted figure for the amount of warming over the last 100-odd years is 0.6 ± 0.2 °C. Yet according to surfacestations.org, many of the stations used to measure temperatures have an error of 2 °C or more.
So how the heck do you get an error bound of ±0.2 when your thermometer has an error of ±2.0?
Apparently, if you throw a bazillion measurements into the mix, the errors tend to cancel each other out. As an extreme analogy, consider this: an atom (or a molecule) is mostly empty space, and subatomic particles move somewhat randomly. Yet that baseball bat is very solid, very predictable, and makes a nice noise when you hit a homer.
So maybe if we had a million (or several million, or maybe 50,000) thermometers scattered all over the world, their errors would tend to cancel (some high, some low) and you could get a small error bound, in the tenths or hundredths of a degree.
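That cancellation claim is easy to check with a quick simulation. This is a hypothetical sketch, not anyone's actual station network: each "thermometer" reads an assumed true temperature plus an independent random error drawn from ±2 °C, and we watch how the error of the *average* shrinks as stations are added.

```python
import random
import statistics

# Hypothetical illustration: each station reads the true value plus an
# independent random error uniformly distributed in [-2.0, +2.0] degrees C.
TRUE_TEMP = 15.0  # assumed true temperature for the simulation

def mean_reading(n_stations, rng):
    """Average of n_stations readings, each with a +/-2 C random error."""
    return statistics.fmean(
        TRUE_TEMP + rng.uniform(-2.0, 2.0) for _ in range(n_stations)
    )

rng = random.Random(42)
for n in (10, 100, 10_000):
    # Repeat the experiment many times and measure how far the mean strays.
    errors = [abs(mean_reading(n, rng) - TRUE_TEMP) for _ in range(1_000)]
    print(f"n={n:>6}: typical error of the mean ~ {statistics.fmean(errors):.3f} C")
```

With purely independent errors the typical error of the mean falls off roughly as 1/√n, which is the textbook version of the "errors cancel" story. The question is whether real station errors behave that way.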
But what if there are only a few hundred? Or a few dozen? And what if your calculation software adds a small error of its own? Or if that software isn't perfectly programmed and adds some other error? You certainly won't get tiny error ranges.
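There's a second catch: cancellation only works for *independent* random errors. A shared systematic error, such as a common siting bias across stations, survives averaging no matter how many stations you add. A minimal sketch (the +1 °C bias is an assumed value, purely for illustration):

```python
import random
import statistics

# Hypothetical sketch: a shared systematic bias does NOT average away.
# Each station reads: true value + common bias + independent random error.
TRUE_TEMP = 15.0   # assumed true temperature
SHARED_BIAS = 1.0  # assumed common siting bias, for illustration only

def biased_mean(n_stations, rng):
    """Average of n_stations readings that all share the same bias."""
    return statistics.fmean(
        TRUE_TEMP + SHARED_BIAS + rng.uniform(-2.0, 2.0)
        for _ in range(n_stations)
    )

rng = random.Random(0)
for n in (30, 300, 30_000):
    print(f"n={n:>6}: mean error = {biased_mean(n, rng) - TRUE_TEMP:+.3f} C")
# The random scatter shrinks as n grows, but the mean error converges
# to the shared bias (+1.0 C here), not to zero.
```

So the law-of-large-numbers argument only buys you anything against the random component of station error; it says nothing about shared biases.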
I ask “Where’s the Beef?” and folks offer Holy Hypothetical Cows
Whenever I’ve raised the issue of precision and accuracy drift in GIStemp, the discussion has ended up with folks offering all sorts of reasons why hypothetically you can get a gazillion bits of precision out of a large average of a bazillion things. Then I point out that we have only, at most, 62 values going into the monthly mean (and that done in 2 steps, with opportunities for error and accuracy drift). And then those values are used for all sorts of other calculations (homogenizing, UHI “correction”, weighting, all sorts of things) before they ever approach the point where they are finally turned into “anomalies”. Even then, the method used does not always compare a station with itself. It is more a “basket of oranges” to a “basket of apples” comparison. (And sometimes there is as little as ONE station forming the “anomaly” for a given GRID box…)
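The two-step monthly mean is one place where precision can drift. Here's a toy illustration of the general point (an assumed procedure, not GIStemp's actual code): average 31 daily minimums and 31 daily maximums in two steps, rounding to tenths of a degree at each intermediate step, and compare with a straight mean of all 62 raw values.

```python
import random
import statistics

# Toy illustration of rounding drift in a two-step mean. The procedure
# and the data are assumptions for demonstration, not GIStemp's code.
rng = random.Random(7)
mins = [rng.uniform(5.0, 10.0) for _ in range(31)]   # 31 daily minimums
maxs = [rng.uniform(15.0, 20.0) for _ in range(31)]  # 31 daily maximums

# Step 1: separate monthly means, each rounded to tenths of a degree.
mean_min = round(statistics.fmean(mins), 1)
mean_max = round(statistics.fmean(maxs), 1)
# Step 2: mean of the two rounded means, rounded again.
two_step = round((mean_min + mean_max) / 2, 1)

# For comparison: the straight mean of all 62 raw values.
direct = statistics.fmean(mins + maxs)
print(f"two-step rounded mean: {two_step}")
print(f"direct mean:           {direct:.3f}")
print(f"difference:            {two_step - direct:+.3f} C")
# Each rounding step can shift the result by up to 0.05 C, so the
# two-step answer can differ from the direct mean by up to 0.1 C.
```

A tenth of a degree of drift per station per month is not much on its own, but it is the same order as the claimed 0.2 °C error bound, and it happens before any homogenizing, UHI adjustment, or gridding.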
Still, the Hypothetical Cow gets trotted out on stage each time the issue is raised. A Hypothetical Cow, we are told, has near infinite accuracy and precision due to the central limit theorem and the law of large numbers (which, in hypothetical land, can even be applied to small groups of real numbers…)
But this article skewers that cow in the rest of the piece, here: Of Hypothetical Cows and Real Program Accuracy
Here are a couple of other posts on the GISS surface record:
- The Surface Temp Record is a Mess
- 1934 Warmer than 1998? Yes, No, Yes, No…
- Jim Hansen, Chief Alarmist of GISS, says, “…the US time series which (US covering less than 2% of the world) is so noisy and has such a large margin of error that no conclusions can be drawn from it at this point.” Keep in mind that the US series is the gold standard, which means the rest of the world’s measurements are in worse shape. The same article points out that current temps as measured by the surface record are not significantly warmer than the 30s and 40s.
Jones says, “The major datasets mostly agree. If some of our critics spent less time criticising us and prepared a dataset of their own, that would be much more constructive.”
How can it be constructive if all criticism of Jones’ (and other warmists’) work is universally derided as “flat earth thinking”? What really needs to happen is for the proponents of warming to be a lot more respectful of the skeptics than they are.
Also, the critics have done a lot of work with the data, and it’s all over the web (Chiefio has a lot of it). Secondly, if Jones’s work were robust, then the criticism wouldn’t be an issue.
Lastly, far more work shows the MWP as at least as warm as today, and perhaps as much as 2 °C (or more) warmer. Very few papers beyond the discredited Hockey Sticks show that it was cooler.
There’s lots of MWP stuff on CO2 Science.