Saturday, December 27, 2014

Global whacking: Clarifications

Robert Salmon, Storm at Sea (1840). Wikimedia Commons.
Duff writes, in comments, with reference to the previous post:
"Sabine, Feely et al. 2004 modelled the time scale of acidification" [my emphasis]. Ah yes, that titchy little word that means so much!
Modeling, Duff, is how science is done. Copernicus and Galileo did not truck around the solar system with surveying equipment measuring where things were in relation to each other ("Got to the sun yesterday, about 93 million miles from Cracow!"); they stayed home and imagined how it might be structured and what consequences such a structure might have that you could observe from earth.

Having done that, they were able to start looking at the data as test data. There was lots of data already, incidentally, and it was very good—as you know, scientists were able to make completely accurate predictions of eclipses and comet visits and planetary movements and so on in spite of their mistaken geocentrism—but it wasn't sufficient for Galileo's needs, since the geocentrists had worked out rationalizations to cover their model's problems; to clinch the question he needed something different, which turned out to be the observation of a single planet, over a four-month period in 1610, through the telescope he had improved in 1609.

I can imagine the Duffs of the day chuckling: "Ooh, he built his own telescope, did he, and spent a few weeks looking at Venus through it? And that's supposed to outweigh all the observations made over the past 13 centuries and prove his so-called model, according to which when the sun rises it's not really the sun rising but us whirling in the opposite direction at 1,070 miles per hour? Hold onto your hats, gentlemen, I'm feeling seasick already! Must be a very special telescope!"
"While there are earlier data on ocean acidification, going back to 1910 or so and available online from the NOAA, they do not come from long-term time series observations done under controlled conditions at consistent locations," [my emphasis]" Ah, yes, 'controlled conditions', so important, you know!
Experimental control, Duff, is a necessary element of hypothesis testing, to make sure your data are pertinent to the question you're examining and not distorted by some irrelevant variable. I thought everybody learned this in primary school; I'm surprised to see you using "control" with a snicker as if it referred to some kind of "dodgy" manipulation of the facts.

A study of ocean acidification needs to be controlled for place and time. If I take a measurement of the water off Coney Island today and find a pH of 8.1, and you take one off Maidstone on Wednesday and get a pH of 7.9, that does not mean that the ocean has radically acidified this week: it means, rather, nothing at all. Coney Island and Maidstone are different places, in different conditions, and we have no idea how the conditions affect the results.

If we repeat the procedure a million and a half times, in Tahiti and Alaska one week and the Seychelles and the Falklands the next, that doesn't produce any more value than the first comparison, since its value was zero to start with and zero times a million and a half is zero. This, as far as I can see, is what Michael Wallace has done with the NOAA acidification data from 1910 to 1988: dumped it all year-by-year and decade-by-decade into a plotting program and mistaken the inevitable resulting noise for a "finding"—naturally a finding that nothing in particular has happened, because that's what statistical noise looks like: nothing in particular.
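To make that "zero times a million and a half" concrete, here is a minimal sketch in Python with entirely made-up numbers (the site names and baseline values are hypothetical, not NOAA's or Wallace's actual data): every site holds a perfectly constant pH, but each decade a different haphazard handful of sites gets sampled, and the pooled decadal averages still wander up and down with the luck of the draw.

```python
import random
import statistics

random.seed(0)

# Hypothetical baseline pH values: different places really do sit at
# different levels, and none of these sites is trending at all.
site_baselines = {
    "Coney Island": 8.10,
    "Maidstone":    7.90,
    "Tahiti":       8.05,
    "Alaska":       8.15,
    "Seychelles":   8.00,
    "Falklands":    8.20,
}

def pooled_decadal_mean(n_sites=3, n_readings=50):
    """Average one decade's readings, pooled from whichever few sites
    happened to get sampled that decade, ignoring location entirely."""
    sampled = random.sample(list(site_baselines), n_sites)
    readings = [
        random.gauss(site_baselines[random.choice(sampled)], 0.05)
        for _ in range(n_readings)
    ]
    return statistics.mean(readings)

for decade in range(1910, 1990, 10):
    print(decade, round(pooled_decadal_mean(), 2))
```

Fit a trend line through those decadal means and you have a "finding" about which sites happened to be visited when, and nothing whatever about the ocean.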

Wallace's chart of all the measurements from 1910 to 1920 (only four years because World War I) dumped into a single file without regard to where they were taken or whether the sample of places is comparable to the sample for 1920 to 1930, etc.
Screenshot of results from a search of the NOAA database for items relevant to ocean acidification from 1910 to 1914, from which Wallace created the chart above. Note that half of them were taken at unknown locations and none of them were taken over a time span longer than 19 months. The problem isn't that the measurements are inaccurate but that they are not commensurable with the measurements from other decades, nor are the more thoroughly done measurements from later decades commensurable with each other until you get to the JGOFS program in 1988. 
The question remains: "Who determined that the directly measured ocean pH data was not of “sufficient quality” and if it wasn’t, why then did NOAA make the data available on their website as part of other ocean data in their World Ocean Database without a caveat?" 
There is nothing whatever wrong with the NOAA data, and there is no reason why NOAA should have issued them with a "caveat".  I am sure they have all sorts of uses. They are simply not usable for what Wallace wants to do with them, aggregating them into a single database and calling them a "time series". As you see from the illustration above, they cannot form part of a time series because you have no clue how water pH levels changed in any particular place over any particular timespan as long as two years, let alone a century (by the 1960s you find a few studies carried out over terms as long as 8 or 10 years, but most of them are as short-term as those of the 1910s, and there are still tons of readings from unknown locations). It would be like trying to draw a map using a list of place names without geographical coordinates.
"a very large-scale survey" conducted at ONE!!! location since 1988!!! as opposed to millions of measurements taken around the globe over the last hundred years.
No, not at all, Duff, you are reading very carelessly. The very large-scale survey discussed in the 2004 paper was a global inventory built from data collected at 9,618 stations over a five-year period in the 1990s, "which represents the most accurate and comprehensive view of the global ocean inorganic carbon distribution available." And it was a survey of oceanic carbon distribution, as stated, not of pH, so entirely independent of the Aloha Station pH measurements or the century's worth of pH measurements to which you refer.

The carbon distribution was used, in the first place, "to provide an ocean data-constrained global estimate of the cumulative oceanic sink for anthropogenic CO2 for the period from ~1800 to 1994." Starting around 1800 because that is when massive CO2 emissions began with the industrial era. Since it is well established that oceanic carbon sinks, very gradually, over time, it was possible to date the carbon found in the survey, the oldest carbon being that at the lowest depths, and to construct the 2006 model (see below) on that basis.


The Aloha Station data were then used to test the model, on the general lines I was talking about at the top of the post, for the period 1988 to 2010, and it passed, you know. If you don't like the single-station study, you can check out one (without the participation of Feely and Sabine) that covers all seven ongoing time series projects, with similar results.

The evidence, as Ten Bears and Ken_L remind us, goes well beyond Richard Feely, and only gets stronger with time.
"a somewhat long-in-the-tooth graduate student (he got his BS in Plant and Soil Science in 1980)" - isn't that blatant age-ism? 
To me there is something a little odd in a person with a 30-year career as a hydrologist and a substantial list of publications (though not in ocean acidification) billing himself as a "graduate student". I'm almost certainly older than he is, anyway. Does that make me a self-hating geezer? So be it.

Cross-posted at No More Mister Nice Blog.
