UPDATE 1: I've gotten several (5?) comments along the lines of "just toss the whole lot". That's what's known in scientific circles as A Really Bad Idea. Climategate, in and of itself, will not dismantle the AGW industry; there's way too much money at stake (I understand that Gore himself is in a position to make over a billion). All that's going to come from the emails is an ongoing controversy about who said what to whom and what they really meant. In the end, the alarmists will continue to claim that the "science is settled".
The computer programs are the key to finally resolving this. They represent a golden opportunity to actually explore the science, put it out in the open where it should have been from day one, and either scientifically disprove the whole notion, or maybe even discover that there is some merit in what they've been saying. I'm for getting to the bottom of this in a scientific manner.
The question isn't going to go away until it's either proven or disproven using real scientific methods, as opposed to the pseudo-science that's been foisted on the world. We need to drive a stake through the heart of this beast now, while it's stunned and confused. We'll never get another opportunity like this.
I recently found myself wondering how I would handle an analysis of the Global Warming computer models “liberated” from the University of East Anglia and published on the web.
Unlike the emails that were also leaked, there will be no way to claim that they were taken out of context; computer programs don’t hedge or use potentially misleading colloquialisms. On the other hand, it’s the output of these programs, not the casual statements of the senders and receivers of the emails, that has been used as the impetus for climate change legislation that could affect every person on the planet. To date, these programs have never been subjected to the kind of analysis that one would expect for such world-changing research. We now have an opportunity to do just that. This document details what I would do if I were selected to head up the team performing such a review.
This is not a witch hunt; the goal isn’t to prove or disprove scientific malpractice on anyone’s part. The objective of this project should be to bring transparency to the research, not to support any particular point of view. This is a last-chance opportunity to drill through the political outer shell that’s grown around AGW and to expose the science at its center to the kind of open and transparent examination that would be applied to a theory in any other field.
I propose a two-tiered approach: a code analysis stage, performed by programmers, to fully document the inner workings of the programs, followed by a review of the finished analysis by climatologists and statisticians, both pro- and anti-warming. The purpose of the first phase will be to abstract the statistical methods and the mechanics of the simulation from the mechanical aspects of the code. The second phase will attempt to validate those methods and mechanics. I’d also like to propose a third phase: converting the Phase One analysis into an Open Source model, written in a modern, object-oriented language, that can be used as a framework (under a “full disclosure” license) for future research. A fourth phase, aimed at reconstructing the raw data and making the model and its results open and available to the world, is also a possibility.
The work product of the analysis group (Phase 1) should include the following:
1. A diagram of the calling structure of the program(s); this indicates how the various functions within the program interact. It’s roughly similar to a wiring diagram for a home.
2. A set of flow charts for the various functions within the code. This provides reviewers an insight into the sequencing of the operations on the data.
3. A set of “Data Dictionaries” describing the data structures and intermediate files used.
4. A single, unified dataflow diagram describing in mathematical terms the path(s) of input data from input to final output.
5. Output of test runs of the original code using test vectors provided by the phase 2 teams (a test vector is a set of data designed to expose and measure a specific aspect of the system under test).
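To give a feel for how item 1 (the calling structure) might be bootstrapped, here’s a minimal Python sketch that scans Fortran source for SUBROUTINE/FUNCTION definitions and CALL statements and builds a call graph. This is purely illustrative: real Fortran parsing needs a proper tool (it ignores function references without CALL, INTERFACE blocks, and so on), and the routine names in the demo are invented, not taken from the leaked code.

```python
import re
from collections import defaultdict

# Minimal call-graph extractor for Fortran source.
# Sketch only: ignores function references that don't use CALL syntax,
# INTERFACE blocks, statement functions, and continuation lines.
DEF_RE = re.compile(r'^\s*(?:subroutine|function)\s+(\w+)', re.IGNORECASE)
CALL_RE = re.compile(r'\bcall\s+(\w+)', re.IGNORECASE)

def call_graph(source_lines):
    """Map each routine name to the set of routines it CALLs."""
    graph = defaultdict(set)
    current = None                      # routine whose body we are inside
    for line in source_lines:
        m = DEF_RE.match(line)
        if m:
            current = m.group(1).lower()
            graph.setdefault(current, set())
            continue
        if current:
            for callee in CALL_RE.findall(line):
                graph[current].add(callee.lower())
    return dict(graph)

# Invented demo source, standing in for the real archive.
demo = """\
      SUBROUTINE ADJUST(T)
      CALL SMOOTH(T)
      CALL GRIDAVG(T)
      END
      SUBROUTINE SMOOTH(T)
      END
"""
print(call_graph(demo.splitlines()))
```

A pass like this gives the analysts a first-cut "wiring diagram" to verify and annotate by hand, rather than a finished work product.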
The validation process (Phase 2) will use the Phase 1 documents to evaluate the following:
1. Are the methods used to filter or pre-sort data statistically valid? If there are problems, what recommendations can be made to improve them?
2. Are the simulations of the climate mechanisms valid?
a. What assumptions have been made? How can these affect the output?
b. What simplifications have been made? Are they justifiable? Could these affect the output, and if so, how could the model be extended?
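To make the idea of a test vector (mentioned in the Phase 1 work products) concrete for question 1, here’s a sketch of the kind of check the Phase 2 team could run: feed a synthetic series with a *known* trend through a filtering step and see whether the trend survives. The moving-average filter below is a stand-in of my own; during the real review, the filters abstracted from the actual code would be substituted in its place.

```python
import random

# Test-vector sketch: a synthetic temperature series with a KNOWN trend,
# used to check whether a filtering step preserves that trend.
random.seed(42)
TRUE_SLOPE = 0.02                       # degrees per step, chosen by us
series = [TRUE_SLOPE * i + random.gauss(0.0, 0.1) for i in range(200)]

def moving_average(xs, window=5):
    """Centered moving average; stands in for the code's real filter."""
    half = window // 2
    return [sum(xs[i - half:i + half + 1]) / window
            for i in range(half, len(xs) - half)]

def fitted_slope(xs):
    """Ordinary least-squares slope of xs against its index."""
    n = len(xs)
    mx = (n - 1) / 2.0
    my = sum(xs) / n
    num = sum((i - mx) * (x - my) for i, x in enumerate(xs))
    den = sum((i - mx) ** 2 for i in range(n))
    return num / den

raw = fitted_slope(series)
filt = fitted_slope(moving_average(series))
print(f"slope before filter: {raw:.4f}, after: {filt:.4f}")
```

Because the input trend is known in advance, any distortion introduced by the filter shows up as a measurable bias rather than a matter of opinion.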
The final result of this process would be:
1. A list of errors in the original source code, along with their impact on the final output.
2. A list of assumptions and simplifications in the source code, along with an assessment of their impact on the output.
3. Development of one or more datasets based on publicly available and published data, with all proxy data and adjustments noted and justified.
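For item 3, the key property is auditability: every adjustment applied to a published value should carry its own justification. Here’s a minimal Python sketch of one way to structure such a record; the station name and the adjustment reason are invented examples, not real data.

```python
from dataclasses import dataclass, field

# Sketch: every adjustment to a public-data record is logged with its
# justification, so the final dataset can be audited end to end.
@dataclass
class Record:
    station: str
    year: int
    value: float                        # raw published value
    adjustments: list = field(default_factory=list)

    def adjust(self, delta, reason):
        """Apply an adjustment and record why it was made."""
        self.adjustments.append((delta, reason))

    @property
    def adjusted_value(self):
        return self.value + sum(d for d, _ in self.adjustments)

# Invented example record.
r = Record("EXAMPLE-STN-001", 1987, 9.40)
r.adjust(-0.25, "station moved from rooftop to ground level")
print(r.adjusted_value, r.adjustments)
```

With this structure, "all adjustments noted and justified" stops being a documentation chore and becomes a property of the data format itself.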
The final report of the validation team will include all findings by all members; the initial findings will be discussed, and members will be allowed to write follow-ups to their initial findings. Each member of the team will be invited to write a summary of his/her analysis or may work with other members of the team to prepare joint reports. All of the initial findings, amended findings, and summary reports will be made available to the public. There will be no single summarization of the findings (as was done with the IPCC report).
Phase 1 documents, as defined above (call tree, data dictionaries, flow charts, dataflow diagram)
Phase 2 documents:
1. Initial and amended findings.
2. Summary reports
3. Recommendations for future models
4. Recommendations for future datasets
5. Questions and responses passed between the Phase 2 and Phase 1 teams
Having no idea of the size of the code, it’s hard to make any kind of estimate of manpower or schedule (it gets a lot harder when you’re working with volunteers who are contributing what spare time they can from their own lives). Phase One will require three programmers to start, eventually expanding to six or eight; they should have pretty good analytical math skills and at least a passing familiarity with Fortran. They’ll need access to a system with a Fortran compiler, which isn’t all that common these days.
Phase Two will require a number of people with a solid background in modeling and simulation, climatology, and control system theory. In order to ensure that both sides are represented in the analysis, I'd like to see a balance between "warmists" and "skeptics". It might be difficult to find academics willing to cooperate with the analysis of "stolen" code, but the fact that the code was supposed to have been divulged under British FOIA laws might mitigate this timidity.
I suspect that the Phase 1 team will break down into groups according to the list of work products, with the dataflow diagram starting once the other parts are done. This means a team of at least three to begin with, with additional members added once the process has begun to solidify. We’re going to want at least two people working on each section of the documents, in order to provide a peer review mechanism (peer review in the software world has a different meaning from the practice used by the scientific journals). We’ll also want to add someone with a strong numerical methods background to work on the dataflow diagram.
If any bugs are found (for example, a math function that returns an incorrect value under certain circumstances), they’ll be noted in the dataflow diagram, but no attempt will be made to determine the extent of their effect on the outputs.
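As an example of the kind of bug a test vector can expose, here’s a classic from numerical code of that era: a one-pass variance formula that is algebraically correct but numerically unstable, producing garbage when the data sit on a large offset. Both functions below are my own illustration, not code from the leaked archive.

```python
# Sketch of a bug a test vector can expose: the one-pass variance
# formula (sum of squares minus squared sum) suffers catastrophic
# cancellation when values share a large common offset, while the
# two-pass formula stays accurate.
def variance_one_pass(xs):
    n = len(xs)
    s = sum(xs)
    sq = sum(x * x for x in xs)
    return (sq - s * s / n) / n         # unstable for large offsets

def variance_two_pass(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Test vector: a tiny spread riding on a huge offset.
data = [1e9 + 0.1, 1e9 + 0.2, 1e9 + 0.3]
print(variance_one_pass(data), variance_two_pass(data))
```

The true variance here is about 0.0067; the one-pass version loses it entirely in the cancellation. Under the proposed process, a finding like this would be noted in the dataflow diagram and handed to the Phase 2 team, who would judge its downstream impact.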
The final documents will be reviewed by the entire team before being handed off to the Phase Two team. The project manager will have final say on the disposition of any red-lines that come from these reviews; the list of red-lines should be published with the Phase 2 reports, but shouldn’t be made available to the Phase 2 reviewers beforehand; the project manager might end up taking a lot of heat over any perceived bias in the documents turned over to the Phase 2 people.
A “firewall” should be established between Phase One and Phase Two: a “clean room” approach similar to that used by various companies to reverse-engineer the IBM PC BIOS in the early ’80s. This will prevent the validation team from being influenced by the opinions of the code analysts, whether those are based on comments in the code or on coding methods. It will also clear the legal road for creating an Open Source climate model in the proposed third phase: the Phase Three developers would work from a structural model reverse-engineered from the source code, rather than from the source code itself, so there would be no grounds for a copyright infringement claim. (There might be grounds for a claim if any of the methods (“algorithms”) are covered by patents, but since a patent requires full disclosure, I strongly suspect that none have ever been filed.) The rules of the firewall would prohibit the Phase Two reviewers from communicating directly with (or even knowing the identity of) the Phase One analysts until after the final summaries are released.
Unfortunately, I don’t think there’s any way to do the Phase One work package in such a way that the Phase Two reviewers won’t have questions. We’ll have to work out a mechanism for the reviewers to pass questions to the analysts, and for the responses to be “scrubbed” of any potential bias, in order to maintain the integrity of the “clean room”. I think another independent team of analysts should review any responses, and the full text of questions and responses should be included in the final document package.
It’s important to note that this process isn’t intended to provide “an answer”; it’s intentionally designed to provide a platform for airing multiple viewpoints in an open and transparent way.