Are non-randomly-sampled historical data representative? (Earth Science Stack Exchange, question 9675, score 5)

Asked by user967 on 2017-02-12; last activity 2017-07-14.

I am a global warming skeptic, and one of my questions concerns the accuracy of historical global temperature records. Since these temperatures were not sampled at random or at gridded point locations, can they be considered an accurate representation of a "global temperature"?

Thoughts/etc:

  • https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3982162/ talks about this in a temporal sense, since using the average of the minimum and maximum daily temperatures at a given location isn't really a great way of determining average temperature. However, it doesn't discuss geographical random sampling/gridding.

  • http://onlinelibrary.wiley.com/doi/10.1002/joc.4580/full notes this problem exists, and takes uniformly gridded measurements, but it's limited to a specific region and time period (1979-2012).

  • I know climate scientists slice the Earth up into grids to avoid clustering bias, but that's not the same thing, and isn't useful if the original readings don't accurately represent the slice/region.

  • I also realize that climate scientists have other measures of global warming, but linear regression of actual temperature measurements seems to be the most used to convince the public, so their accuracy seems important.

  • As a skeptic, I'd also like to know whether, in general, most of the arguments for global warming are statistical in nature (i.e., linear regression on measured variables), or whether the statistical ones are just the most "photogenic" for public consumption. In other words, is the whole non-randomly-sampled/gridded temperature argument a red herring?

EDIT (to clarify question):

To determine the Earth's mean surface temperature, we can employ one of these methods:

  • Measure the Earth's temperature at every point and average. Of course, this is physically impossible, since a point is a 0-dimensional mathematical abstraction, but we can do something close with satellites.

  • Select a large number of random points on the Earth's surface (this random distribution is uniform in longitude but not in latitude; in latitude, the density follows a cosine curve), measure the temperature at each point, and average. In addition to a mean, this gives us a standard deviation, so we can say "we are 95% confident that the Earth's true mean temperature is X plus or minus Y".

  • Take a uniformly spaced grid (non-trivial, since the distance between longitudes varies with latitude), measure the temperature at those points, and average. This is similar to the first approach, but with fewer points. Unless we believe our grid points introduce a bias, this should be as accurate as random sampling.
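The last two methods above can be sketched in a few lines of Python. The temperature field here is a made-up smooth function (warm equator, cold poles), not real data; the point is only to show that uniform-on-the-sphere random sampling (cosine-shaped latitude density) and a cosine-weighted lat/lon grid estimate the same global mean:

```python
import math
import random

def true_field(lat_deg, lon_deg):
    # Hypothetical smooth temperature field (degrees C): warm equator, cold poles.
    return 30.0 * math.cos(math.radians(lat_deg)) - 10.0

def random_sample_mean(n, seed=0):
    # Uniform over the sphere: longitude is uniform, sin(latitude) is uniform,
    # which produces the cosine-shaped latitude density described above.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        lat = math.degrees(math.asin(rng.uniform(-1.0, 1.0)))
        lon = rng.uniform(-180.0, 180.0)
        total += true_field(lat, lon)
    return total / n

def grid_mean(nlat=90, nlon=180):
    # Regular lat/lon grid; each cell is weighted by cos(latitude)
    # so the shrinking cells near the poles are not over-counted.
    total = wsum = 0.0
    for i in range(nlat):
        lat = -90.0 + (i + 0.5) * 180.0 / nlat
        w = math.cos(math.radians(lat))
        for j in range(nlon):
            lon = -180.0 + (j + 0.5) * 360.0 / nlon
            total += w * true_field(lat, lon)
            wsum += w
    return total / wsum

print("random sample estimate:", round(random_sample_mean(100000), 2))
print("weighted grid estimate:", round(grid_mean(), 2))
```

Both estimates converge on the same value, which is the sense in which a properly weighted grid "should be as accurate as random sampling" for a smooth field.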

My problem: temperature measurements in the past were made using NONE of these methods. The points where temperature was measured were not chosen randomly or in a gridded fashion. Therefore, how can they be an accurate measurement of historical temperature, even if we only consider temperature changes?

NOTE: I realize surface temperature isn't the best measure of global warming, since water has a much higher specific heat than land (among other things), but that's my focus for this question.

Answer by John (score 9), answered 2017-02-15, last edited 2017-07-14:

A sample does not need to be random to be valid. Randomness can help, but it is not the most important property; what matters more is that the sample is exhaustive, consistent, and large, especially when dealing with observational studies. Representative is not the same as random. Remember, we are not trying to measure the global temperature to the highest possible precision at a single instant; we are trying to measure the change in temperature over a large span of time. For that, consistency of location matters more than randomization of location.

The fact that the sampling points do not move is essential. We know temperature is affected by regional conditions, so if the samples were re-randomized (moved) with every measurement it would make the result less accurate, not more. Remember what is being measured: the change. Because the sampling points are not moved, the change will be accurate, since the scheme essentially becomes a stratified sample. If I am measuring changes in engine temperature, for instance, I do not want to measure at a different point each time; as long as the points (locations) are consistent, the sample will retain high accuracy. A random sampling would be LESS accurate because it would invite confounding: we know the distribution of temperature across the engine (or globe) is not random, so any shift in location between measurements would introduce confounded data. Almost no science uses a truly random sample; it's just not possible. Consider exhaustive sampling, cluster sampling, stratified sampling, and systematic sampling: all are used more often than true random sampling, and each is more accurate than random sampling in the right circumstances.

Consider an example: say you are trying to measure the temperature change in an engine over time. Where on the engine I attach my sensors does not matter as long as I do not move them, especially if I attach many sensors. I could put thirty sensors all on the left side of the engine and still measure the change in temperature very accurately, compared to moving the sensors between every measurement. Don't fall for the perfect-solution fallacy. Also remember that this is an observational/descriptive study by its very nature.

Each point on the map is more like a repetition; the real independent variable is the time at which the points are sampled, which is either stratified or clustered depending on which study you refer to. Note that multiple sets of data points are also compared: NOAA, BEST, etc. are independent data sets that can be compared against one another, and they show the same pattern.

High and low are used for measurements because that is all that was recorded in the oldest measurements, so changing the format would require throwing out all that data, drastically shortening the record (losing more than half the time span). In this case the accuracy gained from the much larger number of samples outweighs what would be gained by random or gridded locations. Random sampling is rarely possible with historical data, which is why the size and consistency of the data set is so important. The nice thing is that these records are also compared to other sampling methods on other time scales, to see whether they show the same pattern. Scientists working with historical data are aware of its limitations, which is why independent verification is so important.
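The min/max convention can be illustrated numerically. With a made-up asymmetric diurnal cycle (the shape and amplitudes below are invented, not observed values), the midpoint (Tmin + Tmax) / 2 is biased relative to the true 24-hour mean, but because the bias depends only on the shape of the cycle, it cancels when the whole cycle shifts, so the *change* between two days is recovered exactly:

```python
import math

def daily_temps(offset, n=24):
    # Hypothetical asymmetric diurnal cycle around a daily mean 'offset'.
    # The cos(2x) term skews the shape so min and max are not symmetric
    # about the true mean.
    temps = []
    for h in range(n):
        x = 2.0 * math.pi * h / n
        temps.append(offset + 5.0 * math.sin(x) + 2.0 * math.cos(2.0 * x))
    return temps

def minmax_mid(temps):
    # The traditional station statistic: midpoint of daily min and max.
    return (min(temps) + max(temps)) / 2.0

day1 = daily_temps(offset=10.0)
day2 = daily_temps(offset=11.5)  # same cycle shape, shifted up 1.5 degrees

bias = minmax_mid(day1) - sum(day1) / len(day1)   # nonzero: midpoint is biased
change = minmax_mid(day2) - minmax_mid(day1)      # the shift is still exact
print("midpoint bias:", round(bias, 3))
print("estimated change:", round(change, 3))
```

This is the sense in which a consistently recorded but imperfect statistic can still track change accurately, which is the answer's central claim.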

Now consider ice-core data. I was surprised when you said surface temperature was the most used; I see ice-core data far more often, because it records a much longer span of time and records other things (like $CO_{2}$ content) as well. Again, each core is a repetition, and a core can be sampled in a random or stratified way; stratified is the most common because it is more exhaustive within a core. Ice cores are also compared to ice cores from other locations.

Another consideration is cross-comparison, that is, the use of multiple independent forms of measurement: ice core compared to satellite, compared to surface, and so on. Dozens of different forms of measurement and experiment are compared and show the same pattern.

This is probably one of the best overviews of the science I have seen. It is a little old (2013) so if anyone has seen a more recent version I would love to use it instead.

Answer by user7733 (score 2), 2017-03-18:

No, they are not representative. Science is not ideal. You would not start from here if you were designing the observing system today, but it is what you have. When you have imperfect data you must be honest, point out the flaws, and, if possible, calculate or estimate the impact of those flaws on your results and conclusions. Whether clever climatologists have already done so to the necessary degree is beyond me (perhaps not beyond you?). The early IPCC reports seemed to have plenty of "uncertain", "unsure", and "insufficient data" remarks, but these have, of course, decreased in number. The reason is not better historical data (you still have the same old data) but apparently better "treatment" of it. Whether the data have merely been handled or actively tortured is again beyond me. Perhaps not tortured yet. If we wanted to check a prisoner's claim that he was tortured, while the guards insist they "never touched him, gov!", we would look under his clothes at his skin. So have the climatologists show you how they treated their data: take the outer clothes off, and let them explain those bruise marks. Be suspicious of anyone who says it was an accident while playing hockey.
