Note that your loops,
for i in range(len(lat)): for j in range(len(lon)): for m in range(len(lat)): for n in range(len(lon)):
will only generate the indices,
i = 0,…,8; j = 0,…,31; m = 0,…,8; n = 0,…,31
When you use sst5[59:90,i,j] and sst5[59:90,m,n], you are not extracting data at the tropical points in your lat and lon variables, you are just extracting data from a corner of the full global array, i.e., sst5[:, 0:8, 0:31]. You can see this by printing sst5.shape inside the loop, which will show
…(365, 720, 1440) (365, 720, 1440) (365, 720, 1440) (365, 720, 1440)…
That corner is in Antarctica, so there is no SST data there; numpy.corrcoef will only see missing-data values and will try to divide by a zero covariance, producing the NaN values. When that happens, my numpy issues the following warning:
RuntimeWarning: invalid value encountered in true_divide
  c /= stddev[:, None]
You could modify your program to subset the sst5 array before the loops:
sst5_subset = sst5[59:90, 120:510:35, ::45]
and then reference this array in your loops:
mar_05.append(np.corrcoef(sst5_subset[:,i,j], sst5_subset[:,m,n])[0,1])
But, personally, I would get rid of the loops and do this as a whole-array operation:
sst_array = sst5_subset.reshape(sst5_subset.shape[0], -1).T
corr = np.ma.corrcoef(sst_array)
A quick plot of the output for one point of interest: https://i.stack.imgur.com/bzCFL.png
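For reference, here is a self-contained sketch of that whole-array approach, using random data with hypothetical dimensions (31 days × 3 latitudes × 4 longitudes) in place of the real SST subset:

```python
import numpy as np

# Random stand-in for the real SST subset: hypothetical shape
# (31 time steps, 3 latitudes, 4 longitudes).
rng = np.random.default_rng(0)
sst5_subset = rng.normal(size=(31, 3, 4))

# Flatten the two spatial dimensions so that each row of sst_array is the
# time series at one grid point, which is what corrcoef expects.
sst_array = sst5_subset.reshape(sst5_subset.shape[0], -1).T
corr = np.ma.corrcoef(sst_array)

# corr[p, q] is then the correlation between grid points p and q.
print(corr.shape)   # (12, 12)
```

Using np.ma.corrcoef rather than np.corrcoef means any masked (missing) values are handled instead of propagating NaNs.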
Like aircraft dropsonde systems, field experiments using driftsondes involve significant costs and require substantial advance planning. Understanding likely flight paths and obtaining clearances to overfly many countries can take months. If a balloon drifts near a region for which overflight clearance has not been obtained, it has to be cut down.
In addition to that difficulty, general reasons why driftsondes can’t stay aloft permanently include,
- All balloons leak, so they can’t stay aloft indefinitely
- Balloon materials deteriorate, particularly in extreme temperatures
- Instruments need power, and solar panels add to weight and complexity
- Finite number of dropsondes aboard, although that’s not the only type of payload
I notice that (but don’t know much about) the more recent STRATEOLE-2 driftsonde (Haase et al, 2018; https://eos.org/science-updates/around-the-world-in-84-days) can stay aloft for around three months, which means it can circle the equator about three times.
We need to be careful about the sign convention here. You mention that removing clouds produces a forcing of +18 W/m2, which isn't wrong, but we usually define it the other way around: the addition or presence of clouds produces a forcing of -18 W/m2. That's because we think of it as describing how clouds contribute to the system's long-term equilibrium state, such as the pre-industrial state, rather than as a "what if" experiment about clouds disappearing.
Cloud (and other) feedbacks come into play when we consider what happens when something perturbs the system's radiative balance away from that baseline. That could be a change in greenhouse gas or aerosol concentrations, large-scale deforestation, a major volcanic eruption, solar irradiance, etc. That perturbation causes a change in the system state through these feedback mechanisms, which can oppose or reinforce the original radiative perturbation.
In the case of cloud feedbacks, those two opposing components (shortwave and longwave) could each change in either direction in response to a radiative perturbation, so the net effect of cloud feedbacks could also go in either direction. Models disagree a lot about what the cloud net response to greenhouse gas increase is; the CMIP5 ensemble had -0.04±0.53 W/m2/K in response to a CO2 doubling, but individual models can go in either direction.
The WMO definition of a gust at an observing station is,
the maximum of the 3-second running-average wind speed during the observation period.
That's the definition you found. The ECMWF document you linked says,
This duration is shorter than the model time step, so the ECMWF Integrated Forecasting System (IFS) deduces the gust strength at each time step from the time-step-averaged surface stress, surface friction, wind shear and stability.
The details of this calculation can be found in the IFS documentation (S3.9.4) and there's a more readable summary in an old ECMWF newsletter by Bechtold and Bidlot (2009). My interpretation is that this is a parameterization of V_3_X that's calculated every model time step, so X is the atmospheric time step length (e.g., about 90 to 1200 s, depending on model resolution) for the instantaneous output field. The aggregated period maximum output fields that are also available will be simply the maximum V_3_X over that period.
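As a trivial illustration of that last point, the aggregated field is just a maximum over the instantaneous per-step values (made-up numbers here):

```python
# Made-up instantaneous gust diagnostics (m/s), one per model time step.
step_gusts = [12.3, 15.1, 14.8, 18.9, 13.0]

# The aggregated period-maximum output field is simply the maximum of the
# per-time-step gust values over that period.
period_max_gust = max(step_gusts)
print(period_max_gust)   # 18.9
```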
You might want to look at the recent ZECMIP (zero emissions commitment) paper, MacDougall et al (2020), https://doi.org/10.5194/bg-17-2987-2020, along with the schematic from the IPCC AR6 Technical Summary on the right:
The schematic suggests that most of the warming since the pre-industrial period happens while we have net positive emissions, and that post-net-zero warming is a small addition on top of that. To understand this, we need to think about the net radiative imbalance (currently +0.6 W/m2) rather than the radiative forcing (currently +2.2 W/m2). The imbalance is the net result of the radiative forcing and the various feedbacks, some of which are negative (e.g., the Planck response, land and ocean carbon uptake). The Planck response to surface temperature is about -3 W/m2/K, so once net-zero emissions are achieved, the surface only needs to warm a further 0.2 K to erode the 0.6 W/m2 radiative imbalance through this mechanism alone.
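As a back-of-envelope check on that last step, using only the values quoted above:

```python
# Present-day net radiative imbalance and the Planck response, as quoted above.
imbalance = 0.6           # W/m2
planck_response = -3.0    # W/m2 per K of surface warming

# Warming needed for the Planck response alone to erode the imbalance.
extra_warming = -imbalance / planck_response   # ~0.2 K
```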
But there are other feedbacks at play. Once net-zero is reached, the existing land and ocean carbon sinks will continue to take up CO2, so atmospheric CO2 will decline and its contribution to the radiative forcing and radiative imbalance will decline with it. That reduces the work the Planck response has to do to reach radiative balance. The faster the carbon sinks act, the less additional warming is needed before radiative balance is reached.
On the other hand, ocean heat uptake will likely weaken once net-zero emissions are reached, because the current ocean heat sink is partly driven by the ocean-atmosphere temperature gradient. This effectively becomes a positive feedback on surface temperature change. Going back to MacDougall et al (2020), it turns out that these feedbacks are strong and the zero emissions committed surface warming is a subtle balance between these components, giving a best estimate range of -0.3 to +0.3 K (see plots below).
A key thing here is that this is a measure of the atmospheric response, which doesn't have a great deal of thermal inertia. The ocean is still out of balance and accumulating heat, and the carbon cycle is still out of balance, removing CO2 from the atmosphere. Equilibration of other components of the Earth system would take much longer, as you say, but that's not the goal of net-zero.
I recently became aware of this dataset and paper, which I think is what you're looking for:
Tuinenburg et al (2020): Global evaporation to precipitation flows obtained with Lagrangian atmospheric moisture tracking, PANGAEA, https://doi.pangaea.de/10.1594/PANGAEA.912710
Tuinenburg et al (2020): High-resolution global atmospheric moisture connections from evaporation to precipitation, Earth Syst. Sci. Data, 12, 3177-3188, https://doi.org/10.5194/essd-12-3177-2020
From which we get Figure 1: The UTrack atmospheric moisture tracking model. (a) The model tracks evaporation through the atmosphere from source cell to target cell using a Lagrangian moisture tracking scheme forced with ERA5 reanalysis data. (b) An example of an "evaporation footprint" or "evaporationshed" in the UTrack model. (c) An example of a "precipitation footprint" or "precipitationshed":
I have to admit I haven't looked at the data yet, so I don't know how easy it is to use.
The most commonly used period is 30 years, which comes from the idea of climate normals used in NWP. With only 10 years, even the global mean temperature will be affected by a big El Niño or an extended La Niña, which is probably not what you want. If you write a paper using 30 years, reviewers probably won't question it; if you use 20 years, they may ask you to justify it; and if you use 10 years, you're likely to get criticized for it.
Regardless of what period you pick, it's always a good idea to also calculate confidence intervals for your means, which can give you a sanity check on whether differences between periods are really notable. That's sort of one step short of doing formal statistical hypothesis testing.
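A minimal sketch of that sanity check, using a synthetic 30-year series and a normal approximation for the 95% interval (all numbers here are made up):

```python
import numpy as np

# Synthetic 30 years of annual-mean temperatures (deg C, made up).
rng = np.random.default_rng(42)
annual = 14.0 + rng.normal(0.0, 0.15, size=30)

mean = annual.mean()
# Standard error of the mean; 1.96 gives ~95% under a normal approximation.
sem = annual.std(ddof=1) / np.sqrt(annual.size)
ci_low, ci_high = mean - 1.96 * sem, mean + 1.96 * sem

# If another period's mean falls inside (ci_low, ci_high), the difference
# between the two periods may not be notable.
```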
From the IPCC AR6 glossary, for reference:
Climate: Climate in a narrow sense is usually defined as the average weather, or more rigorously, as the statistical description in terms of the mean and variability of relevant quantities over a period of time ranging from months to thousands or millions of years. The classical period for averaging these variables is 30 years, as defined by the World Meteorological Organization (WMO). The relevant quantities are most often surface variables such as temperature, precipitation and wind. Climate in a wider sense is the state, including a statistical description, of the climate system.
and,
Global warming: Global warming refers to the increase in global surface temperature relative to a baseline reference period, averaging over a period sufficient to remove interannual variations (e.g., 20 or 30 years). A common choice for the baseline is 1850-1900 (the earliest period of reliable observations with sufficient geographic coverage), with more modern baselines used depending upon the application.
The yellow arrows show annual carbon fluxes (in PgC/yr) associated with the natural carbon cycle, estimated for the time prior to the industrial era, around 1750. The pink arrows show average anthropogenic fluxes over the 2010-2019 period. I don't think that second sentence is right. My reading is that the pink arrows are the anthropogenic perturbations to the natural carbon cycle, which together give us the observed present-day fluxes. They are not meant to represent processes separate from those marked by the more prominent yellow arrows.
So, total respiration and fire emissions were 111.1 PgC/yr in 1750 and are 111.1+25.6=136.7 PgC/yr in the present day. Similarly, gross photosynthesis was 113 PgC/yr in 1750 and is 113+29=142 PgC/yr in the present day, which represents approximately a 25% enhancement (due mainly to CO2 fertilization). That fits with some text in the Fig 5.12 caption from an earlier draft of the report:
The relative change of Gross photosynthesis since pre-industrial times is estimated as the range of observation-based of 31 ± 3 % (Campbell et al., 2017) and land-model of 19±12% (Sitch et al., 2015) estimates. This is used to estimate the pre-industrial Gross photosynthesis, assuming a present-day range of 116–175 PgC yr-1 (Joiner et al., 2018). The corresponding emissions by Total respiration and fire are those required to match the Net land flux...
Note that most of those pink values on the net fluxes row come from Friedlingstein et al (2020), who describe them as "the overall perturbation of the global carbon cycle caused by anthropogenic activities". Where I think this figure differs from Friedlingstein et al slightly is that it imposes closure of the cumulative 1750 to 2019 carbon fluxes (see column 1 of Table 5.1), which means that the budgetary imbalance of 20 PgC going back to 1850 is assigned to the pre-industrial land and ocean sinks.
Re the oceans, in some ways this is a simpler case than the land. All parts of the ocean are continually exchanging CO2 with the atmosphere: some CO2 molecules dissolve into surface waters (entering through diffusion or bubble entrainment and collapse), and some CO2 molecules outgas from surface waters (leaving through diffusion or bubble cavitation). Figure 5.12 shows estimates of those gross fluxes in 1750 in yellow: 54.0 PgC/yr entering the ocean and 54.6 PgC/yr leaving the ocean, giving a net loss from the ocean to atmosphere of 0.6 PgC/yr.
Anthropogenic emissions have perturbed the system so that there are now gross CO2 fluxes of 54.0+25.5=79.5 PgC/yr entering and 54.6+23.0=77.6 PgC/yr leaving the ocean, giving a net gain of CO2 by the ocean of -0.6+2.5=1.9 PgC/yr. Both of those gross fluxes have increased because the CO2 concentrations in the atmosphere and surface ocean have both increased but, on average, the atmospheric concentration has increased more. This produces a gradient in CO2 concentration across the atmosphere-ocean interface that drives the net flux into the ocean. This gradient is maintained partly by continued fossil fuel emissions into the atmosphere and partly by physical and biological ocean processes that transfer dissolved CO2 down away from the surface into deeper ocean waters.
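The budget arithmetic in that paragraph can be checked directly (flux values from AR6 Fig 5.12, as quoted above):

```python
# Gross air-sea CO2 fluxes (PgC/yr): pre-industrial (yellow) values plus
# the anthropogenic perturbations (pink), as quoted above.
into_ocean_1750, out_of_ocean_1750 = 54.0, 54.6
into_pert, out_pert = 25.5, 23.0

net_1750 = into_ocean_1750 - out_of_ocean_1750   # net outgassing pre-industrially
net_today = (into_ocean_1750 + into_pert) - (out_of_ocean_1750 + out_pert)

print(round(net_1750, 2), round(net_today, 2))   # -0.6 1.9
```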
I'm not sure whether you followed the reference in the report any deeper, but
Hicks et al (1985) on the use of monitored air concentrations to infer dry deposition, NOAA Technical Memorandum ERL ARL-141 (https://repository.library.noaa.gov/view/noaa/19701)
has more on the origins of these diagrams in its Appendix C.
They treat the turbulent and gravitational (settling) deposition fluxes as separable and additive, so the total deposition velocity is
$$v_p = v_d + v_g = -F_p / C$$
where $F_p$ and $C$ are the particle flux and concentration, respectively. They then note that,
This simple procedure introduces a conceptual problem, however, because clearly $v_d$ and $C$ are functions of height, whereas $v_g$ is not.
At this point they posit two equations for $F_p$,
$$ F_p = -\left[ C(z) - C(z_0) \right] r_a^{-1} - C(z)\, v_g $$
$$ F_p = -C(z_0)\, r_{cp}^{-1} - C(z_0)\, v_g $$
These can be combined to eliminate $C(z_0)$, and combining the result with the $v_p$ equation above and rearranging gives their Eq C7,
$$v_p = (r_a + r_{cp} + r_a r_{cp} v_g)^{-1} + v_g$$
The circuit diagram you show at top right is an expression of this equation; it shows how the conventional circuit diagram can be modified to account for turbulent transfer in the presence of gravitational settling. However, they note that,
In practice, the triple product term in Eq. (C.7) is unlikely to be of major significance unless $v_g$ is relatively large. Since the quantity $r_{cp}$ is so uncertain for particles, there appears to be no need to consider the consequences in detail.
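To see the point about the triple-product term, here is Eq C7 evaluated with made-up resistance and settling values (purely illustrative; these numbers are not from the memorandum):

```python
# Hypothetical values: aerodynamic and particle canopy resistances (s/m)
# and a gravitational settling velocity (m/s) for a small particle.
r_a, r_cp, v_g = 50.0, 100.0, 0.001

# Eq C7: v_p = (r_a + r_cp + r_a*r_cp*v_g)^-1 + v_g
triple = r_a * r_cp * v_g   # only 5 s/m here, versus r_a + r_cp = 150 s/m
v_p = 1.0 / (r_a + r_cp + triple) + v_g
```

With these values the triple-product term barely changes the denominator, consistent with their remark that it only matters when $v_g$ is relatively large.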
By NASA - https://web.archive.org/web/20140421050855/http://science-edu.larc.nasa.gov/energy_budget/ quoting Loeb et al., J. Clim 2009 & Trenberth et al, BAMS 2009, Public Domain, https://commons.wikimedia.org/w/index.php?curid=32285340
If we changed something about that system, such as increasing atmospheric CO2, and recalculated the energy flows only, we would find that those flows no longer balance. Each component would have a net gain or loss of energy. At this point we could re-diagnose the states (e.g., atmosphere and surface temperatures) that would bring this system back into balance.
Note there’s no component of time in this description, this is just diagnosing the initial and final states of the system in response to a given change. Under this model we don’t need to know the details of what the surface is made of, we’re just describing it as something that radiates like a black body with a particular emissivity. But if we want to know how the system transitions between those two balance states then, yes, we will need to know the heat capacities of the various components. Components with larger heat capacities will take longer to transition and accumulate more energy for a given level of energy imbalance than components with smaller heat capacities.
Why is this point never mentioned in the theory? It seems to have been completely ignored.
Just taking one example climate model land surface scheme, http://doi.org/10.5194/gmd-4-677-2011, the surface heat capacity is literally the first term of the first equation in that paper.
Similarly, the latest IPCC report (and many of the previous ones, and the references therein) has lots of analysis of this aspect, e.g., Fig TS.13:
IPCC AR6 (2021), Technical Summary (pdf), Figure TS.13 (page TS-150).
Where does the extra energy come from? It cannot be created to suit. It can only come from the energy absorbed by the extra CO2
Ah no, this maybe where your confusion lies. The energy comes from the sun, which is a continual external input of energy to the Earth system. An increase in greenhouse gas concentration causes some of that energy to accumulate in the system, changing the system state, until the state has changed to the point that input and output are back in balance.
Energy is required to increase the temperature of the surface, and more energy is required if any is to be radiated. This can only be funded by the extra energy absorbed by the extra CO2. The extra energy cannot be provided by the Sun, because its supply to the surface was already in balance with output before the addition of CO2.
Well, specifically, increasing the greenhouse gas concentration reduces the atmospheric transmittance to longwave radiation, which reduces the amount of energy leaving the atmosphere to space and increases the energy content and temperature of the atmosphere. The warmer atmosphere emits more longwave radiation both to space (thereby weakening the initial top-of-atmosphere imbalance) and to the surface. The latter changes the surface energy budget through the downwelling longwave term, reducing the net loss of energy from the surface by longwave exchange and increasing in the energy content and temperature of the surface. But while the "back radiation" may be the route through which surface temperature increases, the sun is the origin of all the energy flowing through the diagram at the top.
Some relevant AR6 headline values are:
+1.09 °C = observed GMST and GSAT change between 1850-1900 and 2011-2020
+1.06 °C = observed GMST and GSAT change between 1850-1900 and 2010-2019, the Chapter 7 attribution assessment period
+1.14 °C = simulated GSAT change between 1850-1900 and 2010-2019, the Chapter 7 attribution assessment period
+1.29 °C = simulated GSAT change between 1750 and 2019 attributable to anthropogenic forcing
The +1.06 °C and +1.09 °C values are the same estimate of observed temperature change, except that the latter covers the most recent "current" period available at the time the report was written. Yes, the 0.03 °C difference comes from just a one-year difference in the averaging periods, but that simply reflects the influence of interannual variability on 10-year means. The +1.06 °C value is included in the report purely for comparison with the Chapter 7 work.
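To see why shifting the decade by one year moves the mean only slightly: the two periods share nine years, so the difference reduces to (T_2020 - T_2010)/10. A toy series makes this concrete (the numbers are invented, not real GSAT values):

```python
# Toy annual GSAT anomalies (deg C), warming 0.03 deg C per year.
temps = {year: 0.8 + 0.03 * (year - 2010) for year in range(2010, 2021)}

mean_2010_2019 = sum(temps[y] for y in range(2010, 2020)) / 10
mean_2011_2020 = sum(temps[y] for y in range(2011, 2021)) / 10

# The difference between the two decadal means is (T_2020 - T_2010) / 10,
# so a single anomalous year only enters with a weight of one tenth.
diff = mean_2011_2020 - mean_2010_2019
```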
The +1.06 °C and +1.14 °C values are two different estimates of the same GSAT measure. They are not independent estimates, because the latter is constrained by the GSAT observations as well as other observations and models. The estimates are consistent with each other when you consider their uncertainty bounds but, as Chapter 7 notes on p7-52:
Since the emulated response attempts to constrain multiple lines of evidence (Supplementary Material 7.SM.2), of which GSAT is only one, they should not necessarily be expected to agree precisely.
Observations of GSAT tell us what happened, but they don't tell us why it happened: how much did each forcing agent contribute to the record? In principle we could estimate those contributions by running full ESMs many times, incorporating the uncertainties in the forcing agents, perhaps also constrained by the observations and their uncertainties, trying to see the signal through the noise of internal variability. In practice that's computationally impossible, which is where the emulators come in.
Cross-Chapter Box 7.1
Climate model emulators are simple physically-based models that are used to approximate large-scale climate responses of complex Earth System Models (ESMs). Due to their low computational cost they can populate or span wide uncertainty ranges that ESMs cannot. They need to be calibrated to do this and, once calibrated, they can aid inter-ESM comparisons and act as ESM extrapolation tools to reflect and combine knowledge from ESMs and many other lines of evidence.
The emulators don’t (and don’t attempt to) capture the full, noisy temperature record including interannual variability. Think of them as emulating the underlying forced response of temperature without the obscuring effects of noise.
The difference between emulated values +1.14 °C and +1.29 °C is a mixture of several things: the 1750 versus 1850-1900 baselines (about 0.1 °C), the non-anthropogenic forcings (about -0.02 °C), and using end points rather than end periods (about 0.2 °C). Note that just looking at the end points is normally not recommended in observations and ESMs because a lot of the difference will be internal variability, but that’s not really a problem in these smooth emulations. You just have to be careful what you do with them, i.e., what other values you’re comparing them to.
So, the bottom line is that the best estimate of the warming that has been expressed by the system to date is +1.09 [0.95 to 1.20] °C. From the AR6 SPM headline statements (my emphasis):
A.1.2 Each of the last four decades has been successively warmer than any decade that preceded it since 1850. Global surface temperature8 in the first two decades of the 21st century (2001-2020) was 0.99 [0.84-1.10] °C higher than 1850-1900. Global surface temperature was 1.09 [0.95 to 1.20] °C higher in 2011–2020 than 1850–1900, with larger increases over land (1.59 [1.34 to 1.83] °C) than over the ocean (0.88 [0.68 to 1.01] °C). The estimated increase in global surface temperature since AR5 is principally due to further warming since 2003–2012 (+0.19 [0.16 to 0.22] °C). Additionally, methodological advances and new datasets contributed approximately 0.1ºC to the updated estimate of warming in AR6.
211 of the 67,420 CRU grid boxes outside Antarctica used in WFD were mistakenly designated as land. These were omitted from the WFDEI files, leaving 67,209 land points outside Antarctica plus 27,533 land points within Antarctica (the grid boxes are not equal area).
I've never seen a clear explanation of where they come from, but I recall they have associated elevations of zero. I suspect that some of them are from moored buoys that were mis-characterized as land. For example, the Bay of Biscay box is probably the Gascogne buoy described in Turton and Pethica (2010) and the group to the west is probably Ocean Station PAP:
EC-Earth seems notable for not having a special issue. There is a paper on the EC-Earth3 historical simulations and an EC-Earth3 report, but no obvious coordination. The information is probably scattered across a variety of papers and journals.
It's worth noting that CMIP6 outputs are pushed through various standard benchmarking tools and put on the web, which you might find useful: e.g., https://www.ilamb.org/CMIP6/historical.
Is it possible that the data from these models contribute to CMIP6 without any peer-reviewed evaluation of the outputs?
Yes and no. Pretty much every group will have some form of internal evaluation and sanity check of their model outputs before publishing them to the ESGF. Some groups require a human to approve every single file that gets published. But with each CMIP6 model producing petabytes of data, this internal post-processing, review and publication work flow requires a significant amount of effort by each group (e.g., CESM at NCAR). It's just not practical for all that data to also be formally peer-reviewed just for the purpose of sitting in a database.
In practice though, each science paper that uses those data is a form of peer-review (because none of us should use any data without sanity checking it for ourselves), and those papers will themselves have traditional, formal peer-review before being published. (I've had to contact other modelling groups about oddities in their data I've found while writing a paper and sometimes they have the resources to fix the dataset, sometimes they don't.) So you could say that these datasets have at least three levels of review, and probably more. One of the benefits of making these data on the ESGF available to anyone is that they are scrutinized by far more people, friendly and unfriendly, than happens under traditional peer-review.
The COP meetings happen every year, but most of them are much smaller than COP26 and are mostly concerned with ironing out technical details of existing agreements and laying the groundwork for future agreements. COP3 in Kyoto in 1997 yielded the Kyoto Protocol, and subsequent COP meetings have also served as CMPs (COP acting as a Meeting of the Parties to the Kyoto Protocol) that do things like agree changes to the reporting methods for national greenhouse gas emission inventories. As the Protocol was coming to an end in 2020, COP21 in Paris in 2015 was targeted as the meeting where its replacement would be negotiated and agreed, which was why that meeting was another large one. That meeting yielded the Paris Agreement, and subsequent COP meetings have also served as CMAs (COP acting as a Meeting of the Parties to the Paris Agreement).
Under the Paris Agreement, Parties have to register their emission targets every five years and make them more ambitious than their last targets, which brings us to 2020, the original scheduled date for COP26 before Covid intervened. So COP26 in Glasgow naturally became the venue where the Parties would commit to the next revision of their targets, making it another large meeting.
None of that was dictated by the timetable for the IPCC AR6. Similarly, minutes from early AR6 planning in 2016 show that that report was never intended to be finalized before COP26. From a planning paper (pdf):
a. ... which establishes that the AR6 cycle ending one year after the session at which the final product of the AR6 has been accepted, expected to end in 2022.
In other words, Working Group I, II and III reports would all be available by late 2021, a year after the original COP26 date.
b. The second is that it is unlikely that Working Group I (WG I) would be able to submit to the Panel its contribution to the AR6 before November 2020...
They didn't expect that even the WGI report would be available by COP26.
c. In order to meet the needs of the United Nations Framework Convention on Climate Change (UNFCCC) the Panel would need to consider a Special Report (SR1) on the impacts of global warming of 1.5°C above pre-industrial levels and related global greenhouse gas emission pathways in September 2018.
That was okay because the AR6 cycle would include earlier reports that would inform the needs of the UNFCCC process.
As some commenters above note, the Parties already had the information they needed to make emission commitments - from the IPCC Special Reports on 1.5°C, Land and Ocean, from their own commissioned reports and from the scientific literature that's already published (and which the IPCC reports merely summarize).
When I see such neat bullseyes like that, it's normally a warning sign that someone is trying to do too much with a sparse dataset by interpolating low numbers of data points over large distances. I looked into quite a few of the blobs on your maps and they were all located in remote, sparsely populated regions such as mountains, national parks or extensive forests, often with an airfield or research station nearby where there could be an isolated weather station. There are similar oddities in their wind speed maps, which mainly show blobs centered on towns and cities where weather stations are likely to be located:
Note that over the US the wind speed map is broadly in line with the ECMWF analysis, albeit with lots of artificial blobbiness because of whatever sampling and interpolating they're doing.
I suspect that the surface pressure maps are made even worse because they're surface pressure rather than sea level pressure, so the sparse samples can reflect altitude more than transient weather. That would lead to the major features that you've highlighted being persistent in time.
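A rough feel for the size of that altitude effect, using the ~8 km scale height of the atmosphere (numbers here are schematic):

```python
import math

# Surface pressure falls off roughly exponentially with station altitude.
p0 = 1013.25   # hPa, mean sea-level pressure
H = 8000.0     # m, approximate atmospheric scale height

p_station = p0 * math.exp(-1500.0 / H)   # station at 1500 m, ~840 hPa
```

So a mountain station can sit well over 100 hPa below its sea-level neighbours all the time, dwarfing the few-hPa swings of transient weather and producing exactly the kind of persistent features in an interpolated map.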
The bottom line is that I think the features that you're seeing are artifacts of a bad methodology rather than a physical phenomenon.
What Pa level should I use…
That really depends what you're trying to do with it.
Is this data just missing near the surface?
The pressure level 1 hur data in your ACCESS model plot are missing because the surface pressure in those grid boxes is less than the 962.5 hPa lower bound of that layer. In other words, that atmospheric layer doesn't exist above those grid boxes, so its properties, like temperature and humidity, are not defined. If you look at pressure levels 2, 3, etc., you should notice that less and less of the area is undefined.
To be honest, looking at pressure level 1 isn't very useful for this reason. In some places it'll be undefined, in some places it'll be inside the boundary layer and in some places it'll be outside, making interpretation difficult.
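A sketch of that masking logic, with made-up surface pressures and humidities for three grid boxes:

```python
import numpy as np

# Hypothetical surface pressures (hPa) for three grid boxes; boxes whose
# surface pressure is below the layer's 962.5 hPa lower bound have no
# physical level-1 data, so we mask them.
p_surf = np.array([1013.0, 950.0, 840.0])
hur_raw = np.array([75.0, 80.0, 60.0])   # made-up raw humidity values (%)

hur_level1 = np.ma.masked_where(p_surf < 962.5, hur_raw)
# The second and third boxes end up masked.
```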
If you're looking at a single level this close to the surface, then you probably really want to look at the screen-level humidity variable, hurs. That will be defined everywhere, is more comparable between models, and is usually more relevant when looking at surface-level impacts and applications. The same is true for the temperature variables ta and tas.
This was also a feature of CMIP5, and the CMIP6 data request paper mentions it:
…although near-surface relative humidity values of 140% are in principle realistic at points in space and time, many of the high values in the CMIP5 archive (which represent time and grid-cell means) are probably caused by processing errors. The upper limit is therefore set to 100.001%, classified as "suggested", in contrast to the limit on sea-ice extent, which has…
Juckes et al (2020): The CMIP6 Data Request (DREQ, version 01.00.31), Geosci. Model Dev., 13, 201-224, https://doi.org/10.5194/gmd-13-201-2020
So modelling centres were allowed to clip their data to 100.001%, but presumably many of them chose not to. The paper cites Ruosteenoja et al (2017) for a CMIP5 analysis, and they offer a variety of explanations for these values, falling into three categories.
That paper is open access (I think) and worth a read to get more details on all those points. It's also worth pointing out that CMIP data have lots of odd features, which is unavoidable given the volume of data, the complexity of the post-processing chains and the cost of running the models. Each round gets better, but there will always be some oddities in there. The important thing is to do what you're doing and query anything that doesn't look right, rather than taking it at face value.
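For what it's worth, the "suggested" cap amounts to nothing more than a clip in post-processing, something like:

```python
import numpy as np

# Hypothetical near-surface relative humidity values (%), including the
# kind of >100% artifacts discussed above.
hurs = np.array([55.0, 99.8, 104.2, 140.0])

# Apply the DREQ-style "suggested" upper limit of 100.001%.
hurs_clipped = np.minimum(hurs, 100.001)
# Only the two unphysical values are changed.
```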
The UK research community is dominated by a single family of models known collectively as the Unified Model. The code for these is owned by the UK's national weather and climate modelling center, the Met Office, and used gratis under license by academics. That license gives academics access to repositories of model source code and experiment setups shared across the MO and academic community. Simon mentioned ARCHER2, which is shared by the academics across all disciplines, but the Met Office also provides Monsoon specifically for atmospheric modelling collaborations with the academic community.
Because the MO is an operational forecast center, they put a lot of effort into making sure the code and support software run efficiently and reliably, so what academics have access to is pretty battle hardened. But the academic side also has NCAS-CMS, whose job it is to make sure the model works on machines like ARCHER2 for the whole community. All in all there’s a good level of national support for this model on these machines, and when I send a student on the training course for the model they can be running climate simulations on that hardware within 30 minutes of arriving.
something like the WRF has 1.5 million lines of code. So debugging it or customizing it to a different cluster seems pretty tough
Well, the UM has about 1.3 million lines of code and I’d estimate that I know about 15% of that code very very well (mainly a particular area of science) and the rest of it barely at all. When I encounter bugs they’re almost always because of the thing that I’ve just changed or something closely related in areas of the code I know well. When bugs lead into other parts of the model, it’s usually best to go and ask someone who knows about those areas rather than digging too hard yourself.
Writing and running an individual component model or "parameterization" model for one of these larger models seems manageable... But then how can that same person test his/her model inside of one of the larger climate models?
Yes, new parameterizations are often developed separately from the full climate model before being added to it. But the longer a parameterization is developed in isolation the more likely it is that it will be conceptually or technically incompatible with the full model. The trick is knowing early on that you want to couple it into the larger model later and to design your parameterization accordingly. In my experience, however modular we aspire to make these models, it can still be quite a pain to couple in parameterization code that’s had a well-established, independent life outside of the climate model. In general though, this is where the shared code repositories and an active community are really useful.
Since a lot of that model code is written in fortran, that just exacerbates the portability problem since there is less ability to "abstract" away some of this configuration in objects.
I remember years ago, as a student, a computer scientist friend of mine avoided doing an industry placement at a climate modelling center because he had such a low opinion of their software. "It's so basic and boring", he said, "They just use Fortran!" But those same things that are off-putting to a computing student are beneficial to the largely self-taught programmers (i.e., physical scientists) who are working with these models. Fortran is a fairly straightforward and safe language to learn and use, with relatively few concepts and gotchas. Compare that with OOP paradigms, which are hard to use well without significant training.
But those are more comments on the portability of the programmers than of the programs. As Simon mentions, the difficult bits of getting a climate model running on new hardware (big or small) tend to be handled by the support staff of that hardware rather than the academic researchers themselves.
Slightly problematic is that this estimate is for 2019 relative to 1750, with a total warming of 1.27 °C, whereas the quoted 1.1 °C total warming is probably the observed 2010-2019 value relative to 1850-1900 (see SPM A.1.2 or A.1.3). It's possible the BBC have mixed some inconsistent estimates here or, as I said, they may have got the best bottom line. Note that the Summary for Policymakers (SPM) has another figure, SPM.2, which gives a best estimate of the methane contribution of 0.5 °C relative to the appropriate decades:
As I said, I haven't looked into all the nuances of these different estimates but, yes, methane's contribution to total warming is a few tenths of a °C.
…because both velocities need to be relative to the same reference (the ground)…
So, if all winds disappear (i.e., wind speed = 0 m/s) then all the air is moving with the rotating frame by definition. There’s no meteorology or fluid dynamics invoked here, this just falls out from our chosen coordinate system and definition of “wind”, so it would be the same at 2 m or 25 km altitude.
NB Your definition of air speed changes from "minus" in the first para to "plus" in the last para; I've assumed that this was a typo.
The +2.2 W/m2 is the effective radiative forcing, which is the difference in net top-of-atmosphere radiation in response to a set of changed conditions (e.g., greenhouse gas and aerosol concentrations, land use) after allowing the fast system feedbacks to equilibrate (e.g., tropospheric humidity, stratospheric cooling) but not the slow system feedbacks (ocean heat uptake, land and sea ice changes). That last point matters because relatively little energy is stored in the atmosphere and on land, so it's the slow components that adjust the system state to erode any net radiative imbalance.
But note that this is a theoretical, diagnostic property of the system; the +2.2 W/m2 is not the energy imbalance between pre-industrial times and the present. In reality, the forcing conditions and radiative imbalance developed gradually, and the system responded gradually alongside them, mainly by storing heat in the ocean and raising ocean surface temperatures. This keeps the expressed, measurable imbalance eroded down to much lower values, giving us the +0.6 W/m2 of today.
As for the pre-industrial imbalance, well, to quote Andrews and Forster (2020):
Since heat uptake was not observed in the nineteenth century, climate and energy balance models must be used to infer the pre-industrial energy imbalance. Forster summarized this to be around 0.10 W/m2 …
My experience from climate models is that 0.1 W/m2 is the typical variability in the decadal mean radiation balance in pre-industrial control simulations, so that observed estimate is probably not distinguishable from noise for a system that’s basically in radiative balance. In the absence of any solid evidence to the contrary, we normally use a target of radiative balance and no drift in long term model state when spinning-up pre-industrial model climates.
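As a sketch of that spin-up check, here is a synthetic control-run imbalance series (the 0.1 W/m2 decadal variability figure is the one from my comment above; everything else is invented):

```python
import numpy as np

# Synthetic decadal-mean TOA imbalance for a 500-year control run
# (50 decades), with ~0.1 W/m2 of decadal variability about zero.
rng = np.random.default_rng(1)
imbalance = rng.normal(0.0, 0.1, size=50)   # W/m2

# A spun-up control should show ~zero mean imbalance and ~zero drift;
# the fitted slope is the drift in W/m2 per decade.
drift = np.polyfit(np.arange(imbalance.size), imbalance, 1)[0]
```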
$$\Delta E = 133.26 + 0.044[CO2]$$
It's just a numerical coincidence that the 0.044 looks like the molar mass of CO2. The equation gives 0.044 × 380 ≈ 16.7 W/m2, i.e., the reduction in surface downwelling longwave if you removed all the CO2 from the present-day atmosphere (e.g., see Table 1 of Zhong and Haigh, 2013, Weather, https://doi.org/10.1002/wea.2072; pdf: https://courses.seas.harvard.edu/climate/eli/courses/global-change-debates/sources/co2-saturation/more/zhong-haigh-2013.pdf). Looking at water vapour in the same way exposes some limitations of the model they've built for this tutorial, which is presumably kept simple to teach their students some concepts. Doing for water vapour what was done for CO2 would require removing its 208 W/m2 greenhouse effect (again, see Table 1 of that paper), which is more than the ~150 W/m2 total greenhouse effect in their model. The discrepancy arises because they have too much solar radiation in their surface energy budget, so the diagnosed greenhouse effect doesn't need to do as much work to balance the 288 K surface temperature.
A more realistic model would exclude the 80 W/m2 of solar radiation absorbed by the atmosphere, making the solar flux absorbed by the surface 160 W/m2. The balance is then $160 + \Delta E = 390$, so $\Delta E = 230$ W/m2 and the revised perturbation equation for CO2 is,
$$\Delta E = 214 + 0.044[CO2]$$
A third difficulty is that, unlike CO2, water vapour concentrations are highly variable in space and time, so the equivalent equation will depend strongly on your choice of present-day water concentration. A round figure for the mean H2O concentration is 4,000 ppm, which leads to the equation,
$$\Delta E = 22 + 0.052[H2O]$$
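As a quick consistency check, both revised equations should recover the same ~230 W/m2 at the present-day concentrations used above:

```python
# Revised perturbation equations evaluated at present-day concentrations.
de_co2 = 214 + 0.044 * 380    # [CO2] = 380 ppm
de_h2o = 22 + 0.052 * 4000    # [H2O] = 4000 ppm
# Both come out at ~230 W/m2, matching the surface balance above.
```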
I should emphasize that this model isn't meant for any serious calculation. It's just a tool to get the students used to quantifying parts of the Earth system and how they trade off against each other.
The UK government has already published information on this at the national level, which may be a useful guide for how things are done at the local government level.
The last UK Climate Change Risk Assessment was published in 2017 and the next one is imminent in mid-2021. They follow risk assessment methods set out in things like the HM Treasury Green Book, which, despite the name, is not about environmental assessment specifically (there are Orange, Aqua and Magenta Books too). The Green Book has a shorter, climate-specific supplement that may be a good place to start. In particular, Annex B of the supplement has lots of links to other resources and tools for doing the assessment.
There’s also a local government website that hosts a slightly random selection of plans that various regions have put together over the last few years.
This all looks like a daunting amount of paperwork to a scientist like me, so good luck!
Given that TIR1 records are stored as 10-bit values (i.e., in the range 0 to 1023) and the color scale maxes out at 939, I suspect that they are just plotting the raw count data. This is normally transformed into radiances by applying a linear scale and offset to put the data in physical units.
I had a quick look at the equivalent 10.8 micron image from Meteosat over the Indian Ocean (see below) and I get a similar count range and a similar image when I apply their color scale. I also suspect that they've inverted their data, e.g., plotted count_inv = 1024 - count, to fit with the viewers' expectation that cold clouds appear white.
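A sketch of both operations (the scale, offset and inversion constant here are placeholders, not the real calibration coefficients, which come with the data):

```python
import numpy as np

# Hypothetical raw 10-bit counts (0-1023).
counts = np.array([100, 500, 939])

# Counts are normally converted to radiance with a linear scale and
# offset supplied with the data; these coefficient values are made up.
radiance = 0.1 * counts + 5.0

# Inverting the counts makes cold (low-radiance) clouds plot as white.
counts_inv = 1024 - counts
```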
For as long as I can remember (okay, since 2002) the standard source of this type has been the Web of Science. It covers all sciences (and engineering, social science) so you have to restrict your search with sensible combinations of keywords, categories, and journal names, but it's the closest thing to a neutral database that we have.
I don't find Google Scholar to be a good substitute. It doesn't have a very good signal-to-noise ratio: you'll often get multiple hits to entries of the same article in different databases, but direct links to the actual journal article can appear quite low down on the list. I know this has been a problem for EGU/Copernicus journal articles, which appear much lower down than social network (e.g., ResearchGate) links.
The problem with both of those tools though is the sheer volume of articles. So much is published these days that it's simply not possible for anyone to exhaust the search. My lab often recruits post-docs into Earth science positions from other disciplines (e.g., maths, physics) and knowing where to start or finish with the literature can be intimidating for them. What they need is for some experienced researchers in the field to filter the literature for them, which is why I normally recommend reading recent review articles as a way of gauging the knowledge boundaries.
There are journals dedicated to review articles, e.g.,
and doubtlessly there are more than I've forgotten. Some publishers also curate collections of important articles from across their journals, e.g.,
Read the references you find in those and maybe go one level deeper and then you'll have done your due diligence. I still stumble across seams of papers I've missed in fields that I've worked in for years and it's never been a problem; no one expects you to be exhaustive in your search.
https://www.epa.gov/sites/production/files/2018-03/documents/emission-factors_mar_2018_0.pdf
There is a note in the table,
Air travel factors from 2017 Guidelines to Defra / DECC's GHG Conversion Factors for Company Reporting. Version 1.0, August 2017.
So, with a lovely bit of irony, those EPA air travel numbers are taken from the UK/DECC/Defra, which are available here:
https://www.gov.uk/government/publications/greenhouse-gas-reporting-conversion-factors-2017
Note that they’ve translated the names of the air travel distance categories so they make sense in the US market:
- US short-haul (< 300 mi) = UK domestic (UK to UK)
- US medium-haul (300 - 2300 mi) = UK short-haul (< 3700 km)
- US long-haul (> 2300 mi) = UK long-haul (> 3700 km)
Taking some numbers from US short-haul/UK domestic as an example, the CO2 emissions are,
0.14002 kgCO2e/pass km = 140.02 gCO2e/pass km = 225.3 gCO2e/pass mi
which is just basic units conversion and agrees with your EPA table and the EPA pdf. The same is true for the CO2 emissions in the other distance categories. Note that the EPA are using the numbers for the CO2e emissions without radiative forcing feedbacks from water vapor, contrails, NOx, etc., which is the dark blue portion of the bars in your plot from the BBC. You should be comparing that portion of the emissions, not the full bar including the light blue.
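That conversion is just arithmetic (1 mile = 1.609344 km):

```python
KM_PER_MILE = 1.609344

# UK domestic / US short-haul CO2e factor from the Defra/DECC table
g_per_km = 0.14002 * 1000            # kgCO2e -> gCO2e per passenger km
g_per_mile = g_per_km * KM_PER_MILE  # gCO2e per passenger mile

print(round(g_per_mile, 1))  # 225.3, matching the EPA table
```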
There's a further problem with the CH4 and N2O values in your EPA table. Taking the same example as above but for CH4 emissions,
0.00006 kgCO2e/pass km = 0.06 gCO2e/pass km = 0.097 gCO2e/pass mi = 0.0039 gCH4/pass mi
This is correct in the EPA pdf, but has been mis-transcribed in your EPA table as 3.9 gCH4/pass mi. Oops! someone got confused between grams and kilograms, and they've made the same mistake in the CH4 and N2O entries for all modes of transport. They’ve then used this to calculate the overall GWP column, so their values are oversensitive to CH4 and N2O:
GWP = 225*1 + 3.9*25 + 7.2*298 = 2468.1
The correct calculation is,
GWP = 225*1 + 3.9e-3*25 + 7.2e-3*298 = 227
The bottom line is that I think the GWP column in your EPA table is nonsense and the aircraft emissions factors have the same source (albeit the BBC are probably using more recent UK emission factors). The respective CO2 emissions for cars in the sources I've linked to aren't wildly different (297 gCO2/mi in UK, 343 gCO2/mi in US) and for motorcycles are almost identical (183 gCO2/mi in UK, 189 gCO2/mi in US).
The person must have a PhD in Climatology or Meteorology and have at least one paper directly related to climatology...
Some of the most experienced climate scientists I know (IPCC authors, etc.) have either worked in national labs all their careers without getting a PhD, or got PhDs in other subjects (e.g., mathematics, particle physics, chemistry) before moving into climate research. Your list would need to capture those people too.
It would be a lot of work to categorise PhD theses as “climatology” or “not climatology”. In my experience, universities don’t award PhDs in a particular subject like they do for bachelor degrees, you’re awarded a more general degree of Doctor of Philosophy for your individual thesis. That’s a UK perspective, the details almost certainly differ from country to country. On the other hand, it’s unusual for someone with a climatology-related PhD to have not also published at least one climatology-related journal paper, so a journal search should catch most of these people anyway.
...published in a peer reviewed academic journal of climatology or meteorology.
That’s a reasonable place to start. Originally you mentioned climatology only and I was going to comment that there would be a lot of devil in the detail of separating meteorology from climatology. You also mentioned “accredited” journals, but “ISI listed” is probably a better target. As pointed out by Fred, you’ll have to include the more generalist journals (e.g., Geophysical Research Letters) and the less immediately obvious specialist ones (e.g., Global Biogeochemical Cycles, Global Change Biology). You’ll also need to decide things like whether you’re using lead author only or all authors, and whether they need to be actively publishing on climate (in which case, what’s the cut off date for inactive?).
All these subjective decisions will affect the exact list you end up with, and you’ll always be able to add in another journal and get a few more names, but with diminishing returns. In other words, there isn’t a single definitive list of people.
I’m curious, do you have a rough idea how long you expect this list to be once compiled?
Judging by the weather patterns in recent (i.e., same-day) satellite imagery, it seems unlikely that the Earth is actually producing that cloud cover. The line sits very close to the edge of the disc of Meteosat-8, which is positioned above the equator at 41.5°E. I suspect this is just an artifact of how they stitch together the various images from the geostationary satellites. The satellite zenith angle and the uncertainty in the data are both very large at the disc edge, so most products used for forecasting and research are clipped to exclude that region.
Image from EUMETSAT: https://eumetview.eumetsat.int/mapviewer/?product=EO:EUM:DAT:MSG:CLM-IODC
`inst3_3d_asm_Np`, which is atmospheric variables on pressure levels. I suspect you need to change that drop-down option to `inst1_2d_asm_Nx`, which is surface and near-surface variables. The list of options it shows me is,

- DISPH = zero-plane displacement height
- PS = surface pressure
- QV10M = 10-meter specific humidity
- QV2M = 2-meter specific humidity
- SLP = sea-level pressure
- T10M = 10-meter air temperature
- T2M = 2-meter air temperature <---- here!
- TO3 = total column ozone
- TOX = total column odd oxygen
- TQI = total precipitable ice water
- TQL = total precipitable liquid water
- TQV = total precipitable water vapor
- TROPPB = tropopause pressure based on blended estimate
- TROPPT = tropopause pressure based on thermal estimate
- TROPPV = tropopause pressure based on EPV estimate
- TROPQ = tropopause specific humidity using blended TROPP estimate
- TROPT = tropopause temperature using blended TROPP estimate
- TS = surface skin temperature
- U10M = 10-meter eastward wind
- U2M = 2-meter eastward wind
- U50M = 50-meter eastward wind
- V10M = 10-meter northward wind
- V2M = 2-meter northward wind
- V50M = 50-meter northward wind
**In a nuclear incident the BC emissions would be much larger, would occur over a small area, and would be injected throughout the troposphere.** For example, Robock et al. (2007), the same people you linked to in your question, ran this simulation:
In our standard scenario, we injected 5 Tg of black carbon into a column of grid boxes at 30N, 70E on 15 May. We placed the black carbon in the model layers corresponding to the upper troposphere (300-150 mb). This is a huge perturbation, roughly 50 times the current global atmospheric loading and about 250 times the current daily flux.

They found that the BC entered the stratosphere, where removal processes are much slower than in the troposphere, such that,
E-folding times are 6 y, compared with 1 y for volcanic eruptions and 1 week for tropospheric aerosols.
The BC then absorbs solar radiation and heats the stratosphere, reducing the amount of solar radiation reaching the surface and cooling surface air temperature by more than 1 degree Celsius for about 5 years. Note that the associated $\small\mathsf{CO_2}$ emissions would warm the troposphere, but over a slower time scale and these studies tend to concentrate on the immediate BC cooling effect of a nuclear incident.
As an additional bit of contrast, that study also found that,
When we placed the aerosols in the lower troposphere (907–765 mb), about half of the aerosols were removed within 15 days
This indicates how important it is for the BC emissions to reach the stratosphere in order to get a prolonged surface cooling.
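Those e-folding times translate into very different persistence. Assuming simple exponential decay, $m(t) = m_0 e^{-t/\tau}$, the fraction of aerosol remaining after the roughly 5-year cooling period is:

```python
import math

def fraction_remaining(t_years, tau_years):
    """Fraction of aerosol left after t years, given e-folding time tau."""
    return math.exp(-t_years / tau_years)

print(fraction_remaining(5, 6))       # ≈0.43, stratospheric BC (tau = 6 y)
print(fraction_remaining(5, 1))       # ≈0.007, volcanic aerosol (tau = 1 y)
print(fraction_remaining(5, 1 / 52))  # effectively zero, tropospheric (1 week)
```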
`lat=` and `lon=` don't do what you think they do. When you index `sst5`, they don't set which parts of the axes are active/visible; they just list the latitudes and longitudes that you're interested in, but `sst5` knows nothing about their contents. I'll amend my answer.
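A minimal sketch of the distinction, using made-up array shapes rather than the real (365, 720, 1440) data: to pull out the points you actually want, you have to convert the lat/lon *values* into index positions along the array's axes, e.g. with boolean masks.

```python
import numpy as np

# Dummy stand-ins for the real arrays: 10 time steps on a 1-degree grid
lats = np.arange(-89.5, 90, 1.0)   # coordinate VALUES for axis 1
lons = np.arange(-179.5, 180, 1.0) # coordinate VALUES for axis 2
sst = np.random.rand(10, lats.size, lons.size)

# Boolean masks turn the values of interest into positions along each axis;
# the sst array itself knows nothing about lats/lons
lat_want = (lats >= -5) & (lats <= 5)
lon_want = (lons >= 120) & (lons <= 160)

# Chained boolean indexing subsets one grid axis at a time
tropical = sst[:, lat_want, :][:, :, lon_want]
print(tropical.shape)
```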