I am not a climate “scientist.” I don’t have a degree in “climatology.” I’ve never set foot through the doors of NOAA, NASA GISS or HadCRUT. I am not an expert on the climate. Therefore I should not be able to write an article today that will prove 100% correct 10 years in the future, rejecting the climate models built by the “experts” who are backed by billions and billions of dollars in funding. If climate “science” is truly a valid science, an amateur climatologist shouldn’t be able to make better forecasts than the experts. That is a testable hypothesis. If climate “science” is a valid science, and the ground measurements are not being manipulated to get a predetermined answer, then over the next 10 years, both satellite and ground measurements should fall in line with the “expert” IPCC climate model predictions. In my amateur opinion, there is a 0% chance of that happening. Over the next 10 years, the spread between the average IPCC climate model predicted temperature and the UAH Satellite Data will widen. I am 100% certain of that, and this article will cover why. Here is how the situation looks today. The climate models do an awful job forecasting temperatures, and my bet is that over the next 10 years, the gap between reality and prediction is certain to grow.
While I don’t have a background in climatology (whatever that is), I do have a background that includes multivariable modeling, properly applying the scientific method, advanced statistics and mathematics, quantum physics, and both inorganic and organic chemistry. I view not being a climate “scientist” as a benefit more than a hindrance, because I don’t have a dog in this fight; I’m only interested in the truth. I wasn’t taught to think only one way; I draw upon my extensive and diverse educational and professional background to formulate what I consider to be the most reasonable explanations of what is happening. I’m not the only one who sees problems with the existing climate models. Bob Tisdale wrote an excellent 353-page ebook on the subject of why climate models fail. Keeping with the spirit of this blog, however, I’m going to condense the arguments down so that they will fit on a napkin, and can easily be explained over drinks at a cocktail party.
Fails the Stink Test:
The first clue that climate “science” is a fraud is how they speak about certainty. They make outrageous claims as to the confidence levels and certainty of their models. We only have maybe 50% confidence in weather forecasts that go out 1 week, and we are being told the climate models have a 95% certainty level forecasting the climate 100 years in the future. Anyone who has ever built a multivariable forecast model knows those kinds of numbers simply don’t exist. How do I know climate “scientists” don’t truly have the ability to forecast the infinitely complex climate 100 years in the future? Simple: they aren’t working on Wall Street. If climate “scientists” could truly model something as infinitely complex as the global climate, modeling the S&P 500 would be a walk in the park, and climate “scientists” would easily be the wealthiest people in world history. Fact is, we can’t even model the stock market with any certainty, and we certainly can’t model something infinitely more complex with greater certainty. When you see Goldman Sachs advertising to hire climate “scientists” starting at 1,000× a professor’s salary, you can start to believe their claims, but until then, it is simply nonsense.
Real science uses a common language, and it usually involves numbers. In real science you apply the scientific method and, through experimentation, either accept or reject a hypothesis. Climate science does very little that is consistent with classical science. First off, apply the scientific method to the null hypothesis “the temperature change over the past 50 to 150 years (the industrial age) is of natural causes,” and it is not rejected. Don’t take my word for it; simply take any ice core data set and test the temperature variation over the past 50 to 150 years against the temperature variation of the entire Holocene. There is nothing abnormal about the recent temperature variation. Failing to reject the null means game over in any real science.
Second, science talks of confidence levels and certainty. A hypothesis is rejected at the 95% confidence level. Climate Alarmists talk of “consensus” and use ad hominem attacks or worse to silence critics. The 95% confidence level is often mentioned by Climate Alarmists, but it isn’t from some statistical analysis of data, it comes from a survey that claims 95 out of 100 scientists agree man is somehow impacting the climate. BTW, I would have agreed with the survey questions, as would anyone with an ounce of common sense.
Third, climate “science” is a model-based science. There is very little, if any, experimentation and empirical evidence supporting it. The experiments performed to convince the public are complete jokes, and work more to discredit the science than to validate it. Ironically, much of the evidence we do have doesn’t implicate CO2 very well, if at all.
Fourth, if something is understood, it can be modeled. From the top graphic and all the evidence, the IPCC isn’t even close to accurately modeling the climate.
Lastly, real science publishes charts with error bars around them. This is extremely important for sciences that try to boil an entire year’s worth of data into a single point. Climate “science” takes an entire year, where the winters may be sub-zero and the summers may be over 50 degrees C, and represents it as a single point estimate. They then fret about fractions of a degree of variation in data sets that may include unreliable proxies such as tree rings and coral. While many of the official reports may include the error bars, many of the charts published for public consumption do not. I’ve never seen error bars on Al Gore’s famous chart.
Fails Econometrics 101:
Now for a more serious rebuttal of the models. To model something, one has to first start with a hypothesis that includes the most “significant” variables. For instance, if I were to model weight loss, a valid model would almost certainly have to include exercise and caloric intake, a basic input/output model. I would imagine that exercise may explain 40% of the weight loss, and caloric intake (dieting) may explain 40% as well. If I ran the model “weight loss is a function of exercise,” I would get an R-Squared of 0.40, meaning that my simple single-variable model can explain 40% of the variation of weight around its mean. R-Squared is the “explanatory power” of the model. If I ran a regression of “weight loss is a function of exercise and caloric intake,” I would get an R-Squared of 0.80, meaning that 80% of weight loss can be explained by exercise and dieting. The other 20% is explained by factors outside the model, or “exogenous” factors. Possible exogenous factors in this example would be genetics, sex, any illnesses that occurred during the testing period, starting physical condition and possible medications taken.
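The weight loss example above is easy to reproduce. Here is a minimal sketch in Python on synthetic data (the coefficients, sample size and noise level are all invented for illustration), showing how R-Squared rises when the second significant variable is added:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical predictors: weekly exercise hours and daily caloric deficit.
exercise = rng.normal(5, 2, n)
deficit = rng.normal(300, 100, n)

# Synthetic "true" weight loss: both factors matter, plus exogenous noise.
weight_loss = 0.3 * exercise + 0.004 * deficit + rng.normal(0, 0.5, n)

def r_squared(y, X):
    """Fit OLS y ~ X (with intercept) and return R-Squared."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_one = r_squared(weight_loss, exercise.reshape(-1, 1))
r2_two = r_squared(weight_loss, np.column_stack([exercise, deficit]))
print(f"exercise only:             R^2 = {r2_one:.2f}")
print(f"exercise + caloric intake: R^2 = {r2_two:.2f}")
```

With any seed, the two-variable R-Squared will exceed the single-variable one: ordinary least squares never loses explanatory power by adding a regressor, and here the second regressor carries genuine signal.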
Now let’s look at a climate model. This graphic details one of the IPCC models. Click here to view the reference video.
My first thought after studying this model was, “we’ve spent billions of dollars to create a single-variable piece of crap model of the climate?…You have got to be kidding me!!! This counts as ‘settled science?'” If climate “scientists” think this is a good model, they simply don’t understand what a good model is. This kind of model would get any Econometrics 101 student a grade of F—, it is truly that pathetic. My weight loss model had more factors in it than this climate model, and the climate is infinitely more complex. Goldman Sachs’ models for the S&P 500 have hundreds, if not thousands, of variables, and they still do a poor job. To think the climate is controlled by a simple single variable is absurd on a biblical scale. This is what is called an “underspecified” model, on an epic scale. An “underspecified” model fails to include all the significant variables of the system being modeled. This flaw alone guarantees that I will win my bet (except for the chance of a coincident/non-causative event). The following quote highlights just 3 of the countless factors Goldman Sachs includes in its analysis of the stock market.
In a research note out this week, Goldman Sachs’ top strategists predict that stocks will once again disappoint next year. Goldman predicts the S&P 500 will go nowhere in the coming year, ending 2016 at 2,100. The stock market index is already at 2,090. Include dividends and Goldman predicts that stocks will return just 3% in 2016. Stocks are up a measly 1.5% in 2015.
Goldman says the market will hit a headwind of rising interest rates, a strengthening dollar, and stalled profitability.
I’m not the only one to have made this observation. Dr. Curry has reached the same conclusion as well.
“The climate model simulation results for the 21st century reported by the Intergovernmental Panel on Climate Change (IPCC) do not include key elements of climate variability, and hence are not useful as projections for how the 21st century will actually evolve.”
Boil it down for me:
Because CO2 is the most significant of the insignificant greenhouse gasses, the above climate model basically reduces down to “temperature is a LINEAR function of CO2,” or ΔT=f(ΔCO2). The “linear” relationship between CO2 and temperature defined within these models is the criminal motivation for all the suspicious “adjustments” made to the ground measurements, and why I consider the perpetrators of these “adjustments” outright frauds, and this article will defend that claim. If Goldman Sachs ever tried to do what I will explain in this article, their CEO would be wearing a striped shirt and afraid of taking showers.
The Case of the Missing Factors:
The greenhouse gas effect impacts only the radiative transportation of energy, so modeling only the greenhouse gas effect leaves out the other major methods of heat transfer: conduction and convection. Conduction and convection are relatively slow means of transporting heat; radiation travels at the speed of light. Place the tip of an iron rod in a fire, and it will take a period of time for the heat to travel up the rod and burn your hand. That is heat transported through conduction. Place your hand above a gas burner and turn it on. It will take a short period of time for the hot air to rise up and reach your hand. That is heat transportation by convection. Both are relatively slow when compared to the greenhouse gas effect, which absorbs and reradiates photons at the speed of light. Because air density falls with altitude, the probability dynamics favor radiation carrying energy out of the atmosphere. Evidence shows that the greenhouse gas effect may actually aid atmospheric COOLING more than warming. Further evidence that CO2 doesn’t permanently trap heat in the atmosphere, or even effectively retain it for an extended period of time, is the rapid drop in atmospheric temperature after El Niños. Once the oceans stop adding the additional warmth to the atmosphere, the atmospheric temperatures rapidly return to a level relatively consistent with the pre-El Niño levels.
The effects of conduction and convection aren’t the only factors suspiciously excluded from the model. Just look at the above graphic of the climate system. H2O and the sun are mentioned multiple times, yet are not included in the IPCC model. The IPCC model is literally like the above-mentioned weight loss model without exercise and caloric intake. Climate models should be infinitely complex input/output models. The sun is the only material input factor providing energy to the earth. Not including the sun in a climate model is like not including caloric intake in a weight loss model. Water vapor and the oceans should be by far the most significant output factors. H2O and the oceans simply dominate the climate dynamics. Not including the oceans, water vapor, latent heat transportation, clouds, and precipitation is like not including exercise in the weight loss model. If you don’t include exercise and caloric intake in a weight loss model, what do you have left? You are left with the 20% not explained by the two most significant causative variables. What that means is that the IPCC climate models will never have R-Squareds significantly above the level explained by the relatively insignificant variables. What excluding the impact of the sun and water really means is that people who know better are deliberately manipulating models to reach predetermined outcomes. To manipulate the models in this manner proves that the manipulators clearly understand which variables need to be excluded, and which ones need to be included. Do that in any other field and you end up wearing an orange jumpsuit; just ask Bernie Madoff.
The Case of Data Malpractice:
The IPCC models clearly define a LINEAR relationship between temperature and CO2: ΔT=f(ΔCO2). The people who created that model are, to quote Bill O’Reilly, “either liars or morons, and I don’t know which one.” Either way, neither option is good. Once again, the people that made the IPCC models are “experts”; they either know better or should have known better than to make the mistakes they have made. The LINEAR model ΔT=f(ΔCO2) will NEVER, I repeat, NEVER work. It will NEVER work for the same reason a linear model of gravity will never work. Why? BECAUSE THE RELATIONSHIPS AREN’T LINEAR!!! Clearly defined natural laws explain these relationships. Objects fall at 9.8 m/sec^2, not 9.8 m/sec; a linear model will never explain how an object falls. The absorption of infrared radiation between 13 and 18 microns by CO2 (the only defined mechanism by which CO2 can affect the climate) is not linearly related to the CO2 concentration level; it is a logarithmic relationship. The experts that are manipulating these models know that the relationship ΔT=f(ΔCO2) is wrong; they know the physics are all wrong. The experts know the real relationship is ΔT=f(Δlog(CO2)). To prove my point, the program NASA uses called MODTRAN adjusts for the logarithmic relationship, so the manipulators can’t deny knowledge of the relationship. To win this point in a court case, one would only need to run the climate models using Δlog(CO2) instead of ΔCO2 as the independent variable, and I’m 100% confident the R-Squared would increase (assuming, of course, that accurate temperature data is being used). Throw in the sun and water vapor and the R-Squared would increase dramatically, and suddenly the climate “experts” would be having their requests for bail denied. Note how MODTRAN calculates an R-Squared of 0.997 for that relationship. A perfect/highest possible R-Squared is 1.
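The log-versus-linear point above can be demonstrated in a few lines. The sketch below generates a temperature series from the standard logarithmic forcing approximation ΔF = 5.35·ln(C/C0) W/m², then fits both a linear-in-CO2 and a linear-in-log(CO2) model; the sensitivity factor (0.8 K per W/m²) and the noise level are assumptions chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# CO2 concentrations from roughly pre-industrial to a high-end level (ppm).
co2 = np.linspace(280, 800, 100)

# Logarithmic forcing approximation, then an assumed illustrative
# sensitivity of 0.8 K per W/m^2 plus small observation noise.
forcing = 5.35 * np.log(co2 / 280.0)                  # W/m^2
temp = 0.8 * forcing + rng.normal(0, 0.1, co2.size)   # K

def fit_r2(x, y):
    """OLS fit y ~ a + b*x; return R-Squared."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_linear = fit_r2(co2, temp)        # T ~ CO2
r2_log = fit_r2(np.log(co2), temp)   # T ~ log(CO2)
print(f"T ~ CO2:      R^2 = {r2_linear:.3f}")
print(f"T ~ log(CO2): R^2 = {r2_log:.3f}")
```

Because the underlying curve is logarithmic by construction, the log-transformed regressor matches it exactly up to the noise, so its R-Squared is always the higher of the two.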
The Case of Your Lying Eyes:
Everything one needs to prosecute the climate “science” fraud is captured in the top graphic in this article. “A picture paints a thousand words,” and that picture paints the crime story of the century. The results of the IPCC models tell a forensic investigator everything they need to know:
- 100% of the models overstate observed temperatures. This is not a random error, this is a systematic bias.
- The forecasts represent a linear relationship between CO2 and temperature that is not supported by the fundamental physics of the CO2 molecule.
- Satellite temperature measurements, by far the most accurate measurements, show very little relationship between CO2 and temperature.
- Any real science that produces multivariable models also publishes the resulting R-Squared. That is how you determine the validity of the models. R-Squared is the “explanatory power” of the model. The IPCC doesn’t report the R-Squareds of its models, at least not that I’ve seen.
- Simply looking at the above IPCC models, I would estimate the R-Squared to be below 0.20. In a real science, that low a score would result in the rejection of the hypothesis.
- Because the R-Squared of the models is so low, by definition they have either ignored significant variables and/or used the wrong scale for CO2 (log vs non-log). The IPCC models do both.
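The R-Squared check called for in the list above takes only a few lines to compute for any published forecast-versus-observation pair. A minimal sketch follows; the anomaly numbers are invented for illustration, not real data. Note that for a forecast that was not itself fitted to the observations, R-Squared can go negative, meaning the forecast explains less of the variance than a flat line at the observed mean:

```python
import numpy as np

def r_squared(observed, predicted):
    """Fraction of variance in the observations explained by the forecast."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1 - ss_res / ss_tot

# Hypothetical anomaly series (deg C) -- illustrative numbers only.
observed  = [0.10, 0.05, 0.20, 0.15, 0.30, 0.25]
predicted = [0.12, 0.18, 0.25, 0.33, 0.41, 0.50]  # systematically warm-biased

print(f"R^2 = {r_squared(observed, predicted):.2f}")
```

A systematically warm-biased forecast like the hypothetical one above scores below zero on this measure, which is exactly the kind of number that ought to be published alongside every model run.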
OK, Connect the Dots for Me:
To the casual observer, the recent NOAA whistleblower and past Climategate email leaks appear to be complicated much-ado-about-nothing affairs, but to the forensic investigator, they establish the motive for the crime. Like a well-painted mosaic, you can’t see the picture from up close. Each individual infraction doesn’t add up to much, and the climate alarmists can easily explain it away. It is only when all the infractions are taken in their entirety that the crime begins to take shape and becomes clear. The root of the problem is CO2. CO2 has a near-linear pattern of increase.
The linear pattern of CO2 simply doesn’t correlate well with the non-linear pattern of atmospheric temperatures. Between 1980 and 2012, CO2 increased from 335 to 395 ppm, a nearly 20% increase, while temperatures were essentially unchanged. Note how temperatures “spike” during El Niños, and then rapidly fall once the oceans stop providing the extra heat. A linear relationship between CO2 and temperature can’t explain temperature drops, let alone rapid temperature drops, nor can it explain temperature spikes. CO2 changes gradually and always increases; temperatures don’t.
Faced with the above dilemma, what can a climate alarmist do? Clearly, the ΔT=f(ΔCO2) model is not supported by the data. To “fix” this problem, the climate alarmist has only one option: manipulate the data to make the model work. To do this, the climate alarmists have to focus on the least accurate and most accessible data sets. The standardized and transparent satellite temperature constructions of Dr. Christy and Dr. Spencer force the climate alarmists to rely on the highly inaccurate and “adjusted” ground measurement data sets. As Stalin once said, “it isn’t the people who vote that count, it’s the people who count the votes.” The Climategate emails prove the climate alarmists are the Stalinesque vote counters for the ground measurements, and the satellite data is beyond their control.
While I’ve written about this in the past, I’m not the only one who has documented this fraudulent data manipulation.
Over the course of the last few decades, overseers of the 3 main 19th century-to-present global temperature data sets — NOAA, NASA, and HadCRUT — have been successfully transforming the temperature record to the shape dictated by climate models. Namely, there has been a concerted effort to cool down the past — especially the 1920s to 1940s warm period — and to warm up the more recent decades, especially after about 1950. In this way, a trend of steep linear warming emerges that looks similar to the linear shape of anthropogenic CO2 emissions for the 20th and 21st centuries. A better fit between anthropogenic CO2 emissions and surface temperature helps to imply causation, and this ostensible correlation-turned-causation can then be used to justify policy decisions aimed at eliminating fossil fuel energies.
To Catch a Thief:
Criminals succeed because most people with a solid moral foundation simply can’t understand the criminal mind. The climate alarmist relies on the general public not connecting the dots and quickly losing interest regarding the highly complicated and boring topics of data construction and climate modeling. I’m only interested in it because I’m very familiar with the topic of multivariable modeling, and what I saw alarmed me. There is clear intent to deceive the public, as I’ve tried to explain in this article. While the general public would never connect the dots, forensic investigators can, and this article details the approach and methods to do so.
The motive is clear, to maintain the illusion that the model ΔT=f(ΔCO2) is valid, and therefore, is justification for continued funding.
The method is clear, to manipulate the data and model variables to make the model produce the desired outcome.
The criminal intent is established because the data and model variables were in fact manipulated in a manner that would improve the apparent validity of the ΔT=f(ΔCO2) model.
The models clearly have very low R-Squareds, yet the climate alarmists claim they are valid.
The evidence to prove the models are fraudulent is that the difference between the forecast and observed measurements are growing and will continue to grow over time. If the IPCC understands the climate, they should be able to model it.
The other “game-over” piece of evidence is that there is an almost perfect R-Squared of 0.98 between the “adjustments” and atmospheric CO2. The probability of that happening randomly is in the same ballpark as an arthritic drunk monkey typing War and Peace while riding a bike backward. Any educated jury would only need this evidence to convict. The SEC uses this kind of evidence all the time to discover financial fraud. The numbers are simply too perfect, so perfect they make Bernie Madoff look like an amateur.
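The 0.98 figure above is the kind of statistic anyone can recompute once the adjustment tables are in hand. A minimal sketch of the calculation, using invented stand-in series (these are NOT the real adjustment or CO2 numbers):

```python
import numpy as np

# Invented illustrative series -- NOT real adjustment or CO2 data.
co2 = np.array([335, 345, 355, 365, 375, 385, 395], dtype=float)    # ppm
adjustments = np.array([0.00, 0.03, 0.05, 0.09, 0.11, 0.14, 0.18])  # deg C

# Pearson correlation between the two series; squaring it gives the
# R-Squared of a simple linear fit of one series on the other.
r = np.corrcoef(co2, adjustments)[0, 1]
print(f"R^2 = {r**2:.2f}")
```

`np.corrcoef` returns the full correlation matrix; squaring the off-diagonal entry is equivalent to the R-Squared of a single-variable OLS regression between the two series.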
Courtroom Demonstration to Prove the CO2 Emperor Has No Clothes:
From the above explanation, the way to expose this fraud should be obvious. It’s all a numbers game. A computer won’t lie, and R-Squared doesn’t have an agenda. Because it is obvious the IPCC models are fraudulently constructed, it would be easy to build a better climate model right there in the courtroom. All one would need to do is build a common-sense climate model using satellite temperature data and ΔT=f(Δlog(CO2)), combined with data for solar radiation, water vapor, El Niños and La Niñas. The judge and jury would be able to witness before their eyes the R-Squared going from insignificant to significant, and the “consensus” verdict for this “settled science” would be guilty of defrauding and deceiving the public. Pay attention to the temperature and solar charts in this slideshow. The relationship demonstrated in these charts is far greater than that between CO2 and temperature. The R-Squared of some of those charts looks to be 0.70 or higher, easily beating anything the IPCC will ever be able to produce with its ΔT=f(ΔCO2) model. That is all the proof one should need to expose the greatest scientific fraud in history. Even if this approach fails, there are plenty more flaws in the theory that could be addressed. The holes in this “theory” are simply too many, too large and too costly for anyone to truly think this is a valid science.
Another, more technical way to debunk the Climate Alarmists and their models in court would be to simply have a panel of real scientists present the data sets they feel are most appropriate for a climate model. Dr. Willie Soon would present his solar data sets, Dr. Spencer and Dr. Christy would present their satellite data, and someone else would present data for the oceans, El Niños and La Niñas, humidity and precipitation. Data for CO2 and other greenhouse gases would also be provided. All the data sets would then be entered into a multivariable linear regression program like SAS. These programs have a procedure called “stepwise” — PROC STEPWISE was the code I used in SAS — which will run countless regressions searching for the ideal model. The computer will tell you which factors are significant and which ones aren’t, and I assure you, no computer in the world will ever assign a high level of significance to CO2. If the judge uses this unbiased and objective approach to determining the verdict, the Climate Alarmists will leave the courtroom in cuffs. The Climate Alarmists have abused the public’s trust, holding themselves up as the “technological elite” that Eisenhower warned us about, and abusing that undeserved position to push a personal agenda that is contrary to the best interest of the public as a whole.
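For readers without SAS, the stepwise idea is easy to sketch in Python. Below is a minimal greedy forward-selection routine run on synthetic stand-in series (the variable names, coefficients and noise level are all invented for illustration; they are not real climate data):

```python
import numpy as np

def forward_stepwise(y, candidates, threshold=0.01):
    """Greedy forward selection: repeatedly add the candidate that raises
    R-Squared the most; stop when no candidate improves it by `threshold`."""
    def r2(cols):
        X = np.column_stack([np.ones(len(y))] + cols)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

    chosen, cols, best = [], [], 0.0
    remaining = dict(candidates)
    while remaining:
        scores = {name: r2(cols + [x]) for name, x in remaining.items()}
        name = max(scores, key=scores.get)
        if scores[name] - best < threshold:
            break
        best = scores[name]
        cols.append(remaining.pop(name))
        chosen.append(name)
    return chosen, best

# Synthetic stand-in data -- purely illustrative, not real climate series.
rng = np.random.default_rng(2)
n = 300
solar = rng.normal(0, 1, n)
enso = rng.normal(0, 1, n)
co2 = rng.normal(0, 1, n)
temp = 1.0 * solar + 0.8 * enso + 0.05 * co2 + rng.normal(0, 0.3, n)

chosen, r2_final = forward_stepwise(temp, {"solar": solar, "enso": enso, "co2": co2})
print(chosen, round(r2_final, 2))
```

Forward selection adds whichever candidate raises R-Squared the most and stops when no candidate clears the improvement threshold; SAS’s stepwise procedure additionally allows previously entered variables to be dropped again, which this sketch omits for brevity.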
BTW, the MO of the Climate Alarmists is to deny, deflect, deceive, distort, and attack. One favorite tactic is the “appeal to authority,” often in the form of the “Fact Checkers.” These favorite attack dogs are a tainted jury at best.
Be sure to “Like,” “Share,” “Subscribe,” and “Comment.” If you are real ambitious, please forward it on to President Trump.