My opinion is worthless because I am not an atmospheric scientist. I base my view on the peer-reviewed literature as much as I can. I have absolutely no reason to believe that Bates' two-zone model is more likely to be accurate than the other climate models, which have been rigorously developed, regression tested and fine tuned (and are still imperfect, but improving all the time).
I have asked you many, many times why you think Bates' paper is so much better than all the other peer-reviewed papers out there that put climate sensitivity in the 1.5 to 4.5 degree range, and you haven't given any good answer other than your opinion that temperatures have been lagging predictions (in other words, the 'pause' or 'hiatus' in global warming seems to be evidence, for you, that climate sensitivity is lower than the established range). The 'hiatus' has been discussed plenty of times in the literature, and there are good explanations for how natural variability could have led to a short-term hiatus: the Pacific Decadal Oscillation has been linked to a temporary global cooling forcing (which still resulted in warming, despite natural factors that would otherwise have led to a colder climate).
The PDO can flip from negative to positive which would result in a surge in warming over a short period of time.
The Pacific Decadal Oscillation is one particular culprit: it can move ocean heat content from the surface to below 700m, which cools the atmosphere and the upper-ocean heat content. But these are temporary features that switch between positive and negative phases, and climate models tend to ignore/smooth them, both because they balance each other out over time and because we cannot predict in advance when these oscillations will flip from one state to another.
I 'reject' its content on the basis that it's one paper out of thousands of climate papers, and it's an outlier that gives me no good reason to believe that it is true and the other papers are false, while also raising many red flags that give me good reason to suspect that its findings are not robust.
There are other theories that also explain the 'hiatus' including ocean circulation theories that posit some kind of natural thermostat where ocean circulation changes as temperatures increase and this change serves to moderate global warming, but this branch of theory doesn't have the support of the best climate models so it is not widely accepted.
When Bates gets his 2 zone model properly analysed and verified using independent sources of data and independent teams of scientists, then I will give it more weight, but for me to judge his research myself and conclude that it is more realistic than the established scientific view would be very hubristic.
There might be some plausible elements of his theory, but plausible is a long way from proven. It is plausible that the twin towers were brought down by a controlled demolition (if you are prepared to look only at very biased sources).
You do know that wind power kills far fewer birds than fossil fuels do?
Give me details of why you think it's wrong.
Very demanding. People are allowed to come to their own conclusions.
Pause, hiatus???? -The "science is settled" gang now can't even agree if there was a pause.
Yet the political UNIPCC acknowledges it, and the "scientific community" has published much research, sorry, literature, discussing it; the next minute they're squabbling over whether it even occurred or not.
That's what happens when science descends into trying to extrapolate theories from statistical noise.
It's farcical. But it's keeping people busy.
And yes, we know parts of the scientific community then declared that the missing heat was "hiding" in the oceans, the oceans having suddenly remembered they should be playing ball with the statisticians in white coats.
Farcical science, which was busted by NASA, which declared "NASA Study Finds Earth's Ocean Abyss Has Not Warmed"
Then they changed their tune, along with the measuring apparatus.
Referring to something that the scientific community can't even agree happened sums up the scientific confusion evident in the "debate is over" camp, and is identical to the scientific confusion I mentioned yesterday, evident in NASA's and the UNIPCC's wildly differing attributions of the sun's role in temperatures.
This "science" is in its infancy and is far from settled, no matter how many times someone tells you it is.
I am very worried about this. Because if there is a Dalton-style minimum that somehow exactly cancels out the global warming signal, then we will have a further 20 years of political stagnation and 'debate' about whether or not we should take action, by which time the solar cycle will emerge from its minimum, we will get a huge surge in temperatures, and 20 wasted years will have committed us to a much warmer world.
The most quoted figure I have seen is about a third of a degree C cooler average temperatures during the Maunder Minimum. This doesn't mean that regional variation wasn't larger than that, in the same way that global warming of 1 degree doesn't mean every region warmed by exactly 1 degree: some parts of the world warmed less, while other parts warmed more.
The uncertainty tied to any scientific hypothesis is always the entire range of possible outcomes
But this doesn't mean that every possibility is equally likely. The extremes at both ends are outliers, still possible but unlikely, while confidence increases closest to the mean of the model predictions.
That's what the models are. In fact, every scientific prediction involves constructing a predictive model of nature.
Experts predict based on what the best available models predict.
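The point about outcomes clustering near the ensemble mean can be sketched with a toy model. This is purely illustrative: the "ensemble" below is just random draws from an assumed distribution centred on 3.0 C, not output from any real climate model.

```python
import random
import statistics

# Hypothetical toy ensemble: 1000 simulated "model runs" of climate
# sensitivity, drawn from a normal distribution centred on 3.0 C with a
# spread chosen so most runs land in the canonical 1.5-4.5 C range.
random.seed(42)
runs = [random.gauss(3.0, 0.75) for _ in range(1000)]

mean = statistics.mean(runs)
stdev = statistics.stdev(runs)

# Outcomes near the ensemble mean are common; the extremes are rare.
in_range = sum(1.5 <= r <= 4.5 for r in runs) / len(runs)
near_mean = sum(abs(r - mean) <= stdev for r in runs) / len(runs)

print(f"ensemble mean: {mean:.2f} C")
print(f"fraction inside 1.5-4.5 C: {in_range:.2%}")
print(f"fraction within 1 st.dev of mean: {near_mean:.2%}")
```

Roughly two thirds of the draws sit within one standard deviation of the mean, while the tails beyond the 1.5-4.5 range hold only a few percent — which is what "possible but unlikely" means for the extremes.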
Astronomers can predict eclipses centuries in advance because we have discovered the laws that govern the model, and the elements that can perturb the movement of the planets have mostly been accounted for. The astronomical model of the solar system is extremely robust: the movements of the planets are stable and follow well-understood rules. Models of more chaotic systems are much less certain and are often probabilistic, but that doesn't make them invalid. We know that if a radioactive isotope has a half-life of 1 year, then a year from now half of those isotopes will have decayed, but we cannot say which individual atoms will have decayed.
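The half-life claim is easy to check with a small simulation. A minimal sketch, assuming only that each atom independently has a 50% chance of surviving one half-life:

```python
import random

# With a half-life of 1 year, each atom independently has a 50% chance
# of surviving the year. We cannot say *which* atoms decay, but the
# aggregate count is highly predictable.
random.seed(0)
n_atoms = 100_000
survived = sum(random.random() < 0.5 for _ in range(n_atoms))

fraction = survived / n_atoms
print(f"fraction surviving one half-life: {fraction:.3f}")
```

The surviving fraction comes out within a fraction of a percent of 0.5, even though the fate of any single atom is pure chance — the same sense in which a probabilistic climate model can be reliable in aggregate without predicting individual events.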
Climate models can give a fair representation of how the different climate systems will react to changes in temperature, land use, albedo, CO2 concentrations, ocean circulation, etc. These models are run against known past outcomes, and if they can represent the past well, then we have reason to be confident that they have some predictive value. Is every single variable accounted for properly? No; the models are improving as we get more real-world data to fine-tune them, as well as higher-resolution models and more computing power to include smaller-scale features. An earth-systems predictive model is never going to give a definitive prediction, because there are inherent assumptions about human activity, and 'random' natural events and natural cycles that are irregular or have uncertain genesis criteria. But these are still the best models we have, and this is why I think we should go with their predictions rather than the gut feeling that, because the models are imperfect, one's own hunch is just as predictive.
The 'consensus' on long-term weather forecasting is that it is impossible to predict reliably more than 2 weeks in advance. Anyone who claims to have a long-term forecasting model is promising a lot more than they can deliver. Climate modelling, on the other hand, doesn't try to predict individual weather events or even individual years' worth of weather; the models predict that, on average, different regions are likely to see changes to their climate (and these changes depend on their unique geography).
There are plenty of these predictions that have been coming true, temperatures have been increasing on average, growing seasons have changed, animal and plant species have migrated, wildfire seasons have gotten longer, the arctic sea ice has dramatically declined and is almost certain to be gone in the summertime within the not too distant future, glaciers are retreating etc etc..
None of these predictions are consistent with any other theory other than global warming. The solar cycles certainly don't explain them, all that's left is vague 'natural variability' hand waving from 'the climate is always changing' crowd.
You also can't declare predictions false until you do a careful analysis of what was actually predicted and what has actually transpired. Most of the models predict a range of outcomes. They are compared against past known variables, and as they are run into the future we are able to compare them to the results we are seeing in real time; future iterations of those models can then run with modified variables to account for new information about the earth's response to climate changes.
There have been studies into how accurate the climate models have been, and the results are that climate models have been surprisingly accurate. Especially compared with the predictions made by the 'contrarians' in the past. (quick illustration, it's only 2 minutes long but there are peer reviewed studies that show more detail)
I don't see why it is absurd to link a severe drought with social unrest leading to a civil war. Nobody says it's the only cause, far from it, but it didn't help.
It is absurd to me, to disregard the risk of conflict in the future if water shortages get worse as glaciers melt and rivers run dry. Mass migration is a known cause of conflict, and water shortages have been known to end whole civilisations.
For thousands of years regional climate change has caused massive disturbances to cultures and populations. Now we're looking at altering the global climate so that we could see big changes in regional climates dispersed throughout the entire planet over the course of only a few decades.
Record 2018 snowfall continues increasing snowfall trends showing UN IPCC AR5 report is flawed
Can you quantify this link you've observed?
Rather than just creating it?
It's a bit "Climate Change Ate My Hamster" without it if I'm honest!
Saying "climate change didn't help" is like saying climate change didn't help me win the lotto last week, -it's quite meaningless.
From the abstract:
"This article provides a systematic interrogation of these claims, and finds little merit to them.
Amongst other things it shows that there is no clear and reliable evidence that anthropogenic climate change was a factor in Syria's pre-civil war drought; that this drought did not cause anywhere near the scale of migration that is often alleged; and that there exists no solid evidence that drought migration pressures in Syria contributed to civil war onset.
The Syria case, the article finds, does not support ‘threat multiplier’ views of the impacts of climate change; to the contrary, we conclude, policymakers, commentators and scholars alike should exercise far greater caution when drawing such linkages or when securitising climate change."
It's snowing therefore climate change isn't real......
Do we have to deal with this every year?
The AR5 report cited in this story deals with long-term effects of climate change, so in a world where global average temperatures are 2 or 3 degrees hotter there will probably be less snow; but in the intervening period, warmer air holds more moisture.
The graph linked to in that story refers to winter snow; if you look at the graphs for summer snow you can see a clear collapse in Northern Hemisphere snow cover. (Yes, I know it's not supposed to snow in summer, but there is a clear warming trend in the summertime data, while winter snow cover does not correlate directly with lower temperatures: as long as it's cold enough to snow, warmer air can actually produce more snow than colder air.)
Heavy precipitation events have increased in both summer and winter across the Northern Hemisphere, and when you have polar vortex conditions dragging cold air from the Arctic into contact with warm, humid tropical air, you get the snowmageddon conditions we saw this winter in the USA.
There is still uncertainty about what will happen to Northern Hemisphere weather as the Arctic warms. It hugely affects the atmospheric and oceanic currents, and the IPCC can make an educated guess based on what the models say, but in reality we don't know yet.
The United Nations Intergovernmental Panel on Climate Change (IPCC) are not experts on climate. The IPCC was formed in the late 80s NOT to test the assumption that carbon emissions were driving heat and heat was driving dangerous "climate change" or catastrophic anthropogenic "global warming", as they branded it back then, BUT to broadcast it. It is a politicised forum, pushing out people who are frustrated by the way discussions of findings and theories in its working papers are distilled into political alarms in the summary materials used by politicians and the press. The IPCC's terms of reference are only to report on climate change caused by human activity; other factors that affect climate are excluded.
The scheme works as follows: Working Group 1 is told to provide the scientific basis for human-caused global warming, and the other two groups have to accept its findings; it's from these other groups that the "we're all gonna drown", "go extinct from roasting to death unless we pay money to these special interest groups" alarms come. There is an additional twist: the IPCC releases a policy document for governments first, before the findings of Working Group 1 are released, and those findings must then be changed so there is no contradiction with the policy document.
Also worth noting is that the independent and quite authoritative sounding World Meteorological Organisation (the WMO) is, conveniently, like the IPCC, another UN agency.
Interesting too is that the UN's WMO cherrypicks a rather slim 20 year baseline period for its recent musings.....
"In this WMO statement, the period 1880-1900 has been used as a reference period for pre-industrial conditions allowing early instrumental observations to be used for estimating pre-industrial temperature conditions."
The same models that are not reflecting the current flattened trend and which, after several decades, are still no nearer to narrowing that 1.5-4.5 degrees range of uncertainty.
The PDO has been known about for 40 years and has been reconstructed back several times that. Its effect on temperature trends is well noted and has always shown a flattening on warming in its negative phase. Its fingerprint is in the temperature trend to date. My point is that, despite this well-known factor, the models are still not taking it into account.
See, that's your problem right there. Despite not understanding its content, you reject the paper because it doesn't reach the same conclusion as your 97% consensus. Also because it's open source. Also because you don't like the guy. And also because you haven't found much comment on it, or at least comment strenuously rejecting it that would strengthen your argument.
Your confirmation bias means you'll never be open to any new learnings if they're not saying the same message as you're now programmed to accept as "settled". You say it may be plausible but you'll wait for someone else to determine that. Well, if that's the case you should stop doing a John Gibbons on it and refrain from shooting it down until you either read and understand it or Google more comments on it from someone else. You have one person commenting on it here but you are choosing to ignore it.
I continue to think that the natural variability cooling of the Maunder period has been severely underestimated in this discussion.
Here are some actual numbers based on the CET record which began in 1659.
The first available 30-year average (1659-88) is 8.83 deg C. This drops more or less regularly (1659 just happened to be also the last peak of any significant solar activity until the 18th century) to reach its lowest value of 8.45 C for 1673-1702 (just about dead even with the least solar activity, the resumption of anything noticeable was a weak cycle that peaked in 1705). Almost midway in that period was the ultimate cold winter of 1683-84 but summers were generally not very warm either, the July average for this minimum period is 15.55 deg (the modern average is close to 17 deg).
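The 30-year averaging used throughout these numbers is simple to reproduce. A minimal sketch, assuming a hypothetical dict of annual mean temperatures keyed by year (the real CET series runs from 1659; the values below are toy data, not the actual record):

```python
# Map each complete 30-year window's start year to its mean temperature.
def running_30yr_means(annual):
    years = sorted(annual)
    out = {}
    for start in years:
        window = [annual.get(y) for y in range(start, start + 30)]
        if None not in window:           # only complete windows count
            out[start] = sum(window) / 30.0
    return out

# Toy data: a flat 9.0 C climate with one anomalous cold year, echoing
# how a single severe year like 1739-40 drags down every 30-year
# window that contains it, then drops out of the averages entirely.
annual = {y: 9.0 for y in range(1700, 1790)}
annual[1740] = 6.8
means = running_30yr_means(annual)
print(f"coldest window mean: {min(means.values()):.3f} C")
```

One 2.2-degree-cold year moves each window it sits in by only 2.2/30, about 0.07 degrees, which is why an anomalous winter shows up as a slight dip in the 30-year means rather than a step change.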
There was steady warming after the Maunder sunspot deficit ended (peaks in 1718, 1728, 1737, 1749, 1761, 1769, 1778 and 1787 were generally similar to the range of the active 20th century).
The warmest 30 years came rather quickly, 1710 to 1739 averaged 9.44 degrees. This is only a little cooler than the mid-20th century. The winter of 1739-40 was very cold and its traces can be found in 30-year averages which dropped off slightly until 1740 was no longer in the mix, although only slightly, indicating that 1740 was a very anomalous temporary return to a colder climate (the equivalent of 1962-63 perhaps).
There was no further 30-year average below 9.0 until the Dalton minimum. This occurred from about 1795 to 1835, during which time sunspot activity was generally weak and cycles were longer than usual (gaps of 14, 15 and 13-14 years for the 1801, 1816 and 1829-30 peaks). Between these came 3-5 year intervals of near zero activity. We are currently entering a similar sort of phase, whether it lasts forty years like the Dalton remains to be seen.
The 30-year means from the mid to late 18th century remained above 9.0 generally but showed a long-term cooling trend that is slightly out of phase with solar forcing, possibly more due to increased volcanic activity, or other unknown long-term cycles pushing the CET down slowly to near 9.0 where it often stood before the Dalton started up. Winters in the 1760s were generally quite cold, and 1776, 1784 and 1785 had numerous cold daily records, so this despite high solar activity would be analogous to cold winters in 1947 and 1956. However, between these cold winters were several mild ones and 30-year averages were not really pushed down much until the Dalton began, with a rather spectacular oscillation between very cold and very mild winters in the period 1794-99. Eventually cold took over and there are several rather famous cold winters in the Dalton (1814, 1820 and 1823 in particular). Some summers however were reasonably warm. The coldest 30 years in the Dalton period were 1795 to 1824 (8.99 C) but this was actually just marginally colder than much of the mid to late 18th century during active solar cycles, and seems to be mainly confined to January becoming particularly cold. January 30-year means in the Dalton kept falling slowly towards 2.2 C which was reached by 1809 to 1838, while other months seem to have remained about in the same range as earlier in the period of record, and generally 0.3 or more higher than in the Maunder.
Once January came out of this tailspin, a lot of the mid to late 19th century continued to produce similar CET values except that January returned to more of a modern 3 to 4 CET range. Summers actually cooled slightly after the Dalton and the lowest July average since 1771-1800 came as late as 1838 to 1867 when July averaged only 15.0 degrees. The climate of the mid to late 19th century appears to have been generally cloudier and wetter than in modern times.
A long downturn in solar activity from the mid-1870s to 1914 left no particular impression on the temperature records which continued about the same as in the more active mid-19th century. Several periods around 1875-1904 round off to 9.04 but this is only marginally lower than the slight peak after the Dalton which was 9.28 from 1846 to 1875. While solar activity varied considerably from Dalton to mid-19th and back to late 19th century minimum, temperatures only showed marginal responses. It took something like the Maunder to force temperatures down significantly.
Once solar activity picked up with moderate cycles in 1917, 1928 and 1937, the mean temperatures responded gradually, almost always showing a slight rise through that interval and reaching 9.42 C by 1910-39 (slightly lower than the previous peak exactly 200 years earlier). With even stronger solar activity (peaks in 1947, 1957, 1968, 1979 and 1989 were mostly rated strong to super-strong by Schove using all historical data) the CET value kept rising slowly, the last time it was lower than 1710-39 was 1929-58 although it had broken that mark in 1918-47, despite the arrival of Feb 1947 in the data. By 1938-67 the 30-year average had risen to 9.6 C.
This is often cited as the end of the non-AGW climate period so that further rises could be partially attributed to human activity (this boundary must be considered indistinct as carbon dioxide levels have been on the rise since the early 20th century at least). So since then, we've seen a steady rise, reaching 10.0 for 1981-2010 and currently standing at 10.14 C for 1988-2017.
If we allowed that the climate has stayed naturally invariable since 1938-67 and all of this warming is human caused, it amounts to 0.54 degrees and on an annual basis it is often quoted in the 0.7 to 0.8 degree range. Whether all of that is human caused or not remains contentious. I consider about half of it to be plausible natural variability due to the continued strong solar activity in the 1970s and 1980s, and really the peak of 2001 was not that weak either, but there is evidently some lag between solar regimes and temperature responses (for example, January was still doing its Dalton thing all the way into the strong 1837-38 peak that ended it).
This account generally establishes that the Maunder was a lot more than 0.3 degrees colder than "normal" climate periods, it's too bad we don't have more years before 1659 but it's pretty obvious from the 1660s that the full Maunder cooling did not set in until perhaps 1675 or later, well into the second decade of quiet sun conditions.
But there are many quirks in the data that suggest that we really don't fully understand how three drivers are interacting (solar forcing, natural cycles, and human activity). For example, June wasn't all that cool in the Maunder and then warmed up to modern values in that warmish spell in the early 18th century, and seldom dropped back much below that, then in the modern warming, it hasn't kept pace with other months, staying in the 14-14.5 range while July has warmed through the 16s to reach modern 30-year averages above 17 deg. October on the other hand showed tendencies to cool when other months were warming, and has moved faster ahead than other months recently.
Climate change enthusiasts always have explanations for whatever natural variations occur, but sometimes they sound contrived if not counter-intuitive. In particular, the whole notion of displaced arctic vortices bringing severe cold south in recent winters in North America makes a lot of eyes roll in non-dependent meteorological circles, where it's pretty widely understood that past winters were often very cold and nobody did anything to displace any arctic vortex to get that to happen.
My feeling is that some are trying too hard, we need to get back to basics in this field, either it's warming up, or cooling off, in general terms.
The idea that somewhere it was really warm in the Maunder is supported nowhere in actual data. North America does not have the length of instrumental records, but anecdotal reports suggest that the climate was quite harsh for the earliest settlers and so North America may have also been in a colder climate phase during the Maunder minimum. It seems to have lasted a bit longer too, which is what one might expect if ice margins were further south. The winter of 1717 was certainly about as severe as anything ever recorded in the northeastern U.S., after which the general trend of the mid-18th century seemed to show a similar warming trend to Europe. The period 1780 to 1830 can be considered high variability with harsh winters in general from what we know (which is considerable, some continuous observations exist for the northeastern U.S.). The mid-19th century is clearly a much colder and wetter climate for places such as Toronto than any part of the 20th century. Several consecutive winters around 1870 had 100 inches of snow while that mark nowadays would be considered exceptional.
In my research, I have found indications that the position of the north magnetic pole correlates well with temperature trends, and recently this has moved into a position due north of central Alaska. It would imply a trend towards colder winters in eastern Asia although it would take a return to lower latitudes to beef up that trend. But once the NMP becomes somewhat distant from Europe, it's unclear how greater separations would mean much, at some point one reaches equilibrium in that potential signal and would be looking for an eventual downward trend as the separation passed its maximum.
Anyway, a rather long account, but these are the actual numbers, and within those we find some real indications of how finely balanced our inter-glacial climate is ... anecdotally, I find the near-absence of summer in 1816 the best indicator that we are on a knife's edge where severe volcanic dust or total solar inactivity could stress our natural climate back towards the edges of a glacial return, at least something like the Dryas oscillations that ended the last glacial maximum. Up against that, I really don't know how big a deal AGW might be in the larger picture (and I am sure there are legions of climate scientists who would not even be able to comprehend what I am saying by "larger picture" because they are the whole canvas in their own estimation).
Yep, it's much the same as Gibbons classing himself as a "science communicator".
Nothing could be further from the truth.
The science that doesn't suit the agenda, weirdly enough, doesn't get communicated on his strictly moderated eco-fascism propaganda blog.
Have you figured out what these people's agenda is yet?
MT, you're getting away with murder with your parentheses.
You've a bad dose of the brackets