Comment

Hitchens on Palin: 'A Disgraceful Opportunist and Real Moral Coward'

360
Walter L. Newton 12/15/2009 4:34:56 pm PST

re: #342 Jaerik

But that’s where I’m confused. I’m trying to figure out your argument, not refute it. I’m not sure what I’m “trying.”

You’re saying that the unreliability of the CRU data and models is why there is room for speculation. I’m tentatively agreeing with you for the sake of argument and suggesting we discount ALL CRU data. Let’s completely write them off! No loss to me.

But then I’m pointing out that there are, as you said, “4 to 5 other sources” that have nothing to do with CRU, and that still back up their models. Independent sources that pre-date CRU’s questionable modeling.

Are you saying those 4 or 5 original sources are now all unreliable? Did CRU’s shenanigans with the data somehow go back in time and corrupt the original data?

Did CRU erase their data as I’ve seen you previously claim? Or is it the data from those 4-5 original sources that you’re now retroactively discounting? I’m confused. You’re either being inconsistent or not explaining yourself well, one or the other.

Let’s first deal with your last question: “Did CRU erase their data as I’ve seen you previously claim?” No, they did not erase data.

The HadCRUT3 data set is made up of temperature readings, about 5,000 data elements, laid out in 5-degree by 5-degree grid cells that cover the whole planet. A cell that size is about the size of Nevada (hat tip 6 Degrees). Every month, from January 1850 until now, CRU has tried to put a temperature reading into each cell: one reading per cell, per month.

Of course, it’s impossible to fill every cell every month with a temperature reading; early on there were a lot of places with no way to get one, a lot of open ocean and uninhabited places… but they tried, using many sources, everything from actual readings taken in that cell to tree-ring data.
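To make that structure concrete, here’s a minimal sketch in Python of what a gridded data set like that could look like. This is purely illustrative, not CRU’s actual code; all the names and values are made up.

```python
# Illustrative sketch of a HadCRUT3-style gridded data set.
# Hypothetical code, not CRU's; names and values are invented.

# 5-degree x 5-degree cells covering the globe:
# 180 / 5 = 36 latitude bands, 360 / 5 = 72 longitude bands.
LAT_BANDS = range(36)
LON_BANDS = range(72)

# One entry per (cell, year, month): the temperature reading plus a
# note about where it came from.
grid = {}

def fill_cell(lat_band, lon_band, year, month, temp_c, source):
    """Record one monthly reading for one grid cell."""
    grid[(lat_band, lon_band, year, month)] = {
        "temp_c": temp_c,
        "source": source,  # e.g. "station reading", "ship log", "tree rings"
    }

# Early coverage is sparse: many ocean and uninhabited cells simply
# have no reading for a given month, so those keys stay absent.
fill_cell(20, 35, 1850, 1, 3.2, "station reading")
fill_cell(20, 36, 1850, 1, 2.9, "ship log")
fill_cell(21, 35, 1850, 1, 3.5, "tree rings")
```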

So, here is where the “erased data” claim came from. You start with your first data set (a file, a table, you know, like a spreadsheet) and you fill in as much of the data as you can.

You do research, and you come up with NEW, BETTER or AMENDED data to fill into the cells.

So, you make a SECOND pass on the data set, putting the NEW, BETTER or AMENDED data into it.

But you never made a COPY of the first data set. You simply edited the existing one with the NEW, BETTER or AMENDED data, and never kept a legacy copy that would let you compare the original data against the next set of data points.

And this went on and on over the years, always coming up with NEW, BETTER or AMENDED data.
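In code terms, each pass is a destructive overwrite. Continuing the hypothetical sketch from above:

```python
# Continuing the sketch: start from a cell filled on the first pass.
grid = {(20, 35, 1850, 1): {"temp_c": 3.2, "source": "station reading"}}

def amend_cell(lat_band, lon_band, year, month, temp_c, source):
    """A later pass: overwrite the cell IN PLACE.

    Nothing is saved first, so the old value and its source note are
    gone the moment this runs.
    """
    grid[(lat_band, lon_band, year, month)] = {
        "temp_c": temp_c,
        "source": source,
    }

# A second pass turns up NEW, BETTER or AMENDED data for the cell...
amend_cell(20, 35, 1850, 1, 3.4, "adjusted station data")
# ...and the original 3.2 "station reading" entry no longer exists
# anywhere. There is no legacy copy to compare against, and if the
# source note is entered carelessly, the provenance trail degrades a
# little more with every pass.
```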

What’s the problem? For one, they cannot look at every cell right now, especially the older cells that haven’t had NEW, BETTER or AMENDED data entered in years, and tell you the SOURCE of that temperature reading. Why? Because as they added NEW, BETTER or AMENDED data without keeping legacy copies, the mapping of which data from which source went into which cell became sloppy or non-existent.
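Keeping legacy copies would have avoided that. Here is a rough sketch of what that discipline could look like; again hypothetical, and a real system would more likely log per-cell changes than snapshot everything:

```python
import copy

grid = {(20, 35, 1850, 1): {"temp_c": 3.2, "source": "station reading"}}
history = []  # one snapshot of the whole grid per editing pass

def amend_with_history(lat_band, lon_band, year, month, temp_c, source):
    """Snapshot the data set before editing, so every earlier state,
    and the source note that went with it, survives."""
    history.append(copy.deepcopy(grid))
    grid[(lat_band, lon_band, year, month)] = {
        "temp_c": temp_c,
        "source": source,
    }

# With history, "what did this cell say before the last pass, and
# where did that value come from?" has an answer. Without it, it doesn't.
amend_with_history(20, 35, 1850, 1, 3.4, "adjusted station data")
print(history[0][(20, 35, 1850, 1)])  # -> original reading and its source
```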

Now if you are still with me… let me know and I will continue in another comment…