- "Accumulate knowledge of a discipline through interviews and reading.
- Determine whether critical expertise has yet to be applied in the field.
- Look for bias and mistakenly held assumptions in the research.
- Analyze jargon to uncover differing definitions of key terms.
- Check for classic mistakes using human-error tools.
- Follow the errors as they ripple through underlying assumptions.
- Suggest new avenues for research that emerge from steps one through six."
Or, translated into plainer terms:
- Engage with so-called "experts" and their writings
- Decide whether those "experts" really are experts
- Do those experts have a particular agenda?
- Do the words they use get in the way?
- Are their theories basically built on sand?
- See how their errors beget other errors
- Work out the biggest issues, and continue until you've had enough
At its best, intellectual history throws up dazzling insights: in the hands of a master (such as the extraordinary Anthony Grafton), it can be a virtuoso performance of brain over matter, not unlike a QC's persuasive mastery of his or her brief. Yet at its worst, it can be a sterile exercise in intellectual futility, divorced from the world by its shallow insistence on examining only the participants and their claims, not the validity of the evidence expressed in the ideas, and so ending up in a kind of over-finessed, intricate superficiality.
As an example: even though Grafton's generally excellent book on Leon Battista Alberti shows precisely how Alberti's form and ideas flowed from classical topoi, I think Grafton takes the whole humanist conceit (that if we all wrote as well as Cicero, the world would be a better place) a little too literally - whereas humanism was by and large more like a courtly Latinistic game of patronage - and as a result his book never really engages with Alberti the person.
If we bear this kind of thing in mind, it should be reasonably clear that Rugg's "Verifier Method" looks to verify not evidence qua contents but instead expert opinions qua methodology: a kind of faux legalistic framework, with the investigator as self-appointed armchair judge in his/her own kangaroo court, and with no power or desire to step outside into the real world.
In the case of the Voynich Manuscript (in case you were wondering when I'd ever mention it), I think the Verifier Method falls right at steps (1) and (2). Rugg's conceptual framework has no mechanism for critiquing evidence (in particular the various transcriptions of the text), and what separates experts in such an uncertain field is by and large their conception of what constitutes relevant evidence. Rugg therefore has no intrinsic way of deciding who is (and who is not) an expert, let alone of inferring their agendas (3) or diagnosing any linguistic/semantic difficulties (4).
Essentially, it seems to me that the Verifier Method relies so heavily on the underlying field being regular that it cannot serve as a satisfactory tool for irregular areas of study such as the Voynich Manuscript. But the problem then is that regular fields of study tend not to need exploratory methods such as the Verifier Method to help traverse them.
Finally, I think that "verifying" is such a weak aim for any knowledge methodology as to be virtually useless: as a strategy, all it really tries to elicit is some kind of limp correlation. The "Cardan Grille" nonsense that Rugg concocted to "verify" that the Dee/Kelley hoax hypothesis was "possible" is precisely such a thing: of course the hypothesis was possible - that's why it was a hypothesis, duh. Come on: when dealing with an uncertain field, when would the Verifier Method ever be preferable to Popper's falsificationism, where you collect together plausible hypotheses and actively design experiments to try to kill them? Now that's what I call proper Popper science...