At Genocide Report, we thrive on scientific research. However, just as when shopping for used cars, you have to choose carefully. Most research is sponsored and reflects either (1) the needs of those paying or (2) a desire to say something that flatters or interests the audience, in a process that makes science into entertainment.
Others have published insightful critiques. Consider this criticism of peer review:
At this point we at the BMJ thought that we would change direction dramatically and begin to open up the process. We hoped that increasing the accountability would improve the quality of review. We began by conducting a randomized trial of open review (meaning that the authors but not readers knew the identity of the reviewers) against traditional review. It had no effect on the quality of reviewers’ opinions. They were neither better nor worse. We went ahead and introduced the system routinely on ethical grounds: such important judgements should be open and accountable unless there were compelling reasons why they could not be—and there were not.
Our next step was to conduct a trial of our current open system against a system whereby every document associated with peer review, together with the names of everybody involved, was posted on the BMJ’s website when the paper was published. Once again this intervention had no effect on the quality of the opinion. We thus planned to make posting peer review documents the next stage in opening up our peer review process, but that has not yet happened—partly because the results of the trial have not yet been published and partly because this step required various technical developments.
The final step was, in my mind, to open up the whole process and conduct it in real time on the web in front of the eyes of anybody interested. Peer review would then be transformed from a black box into an open scientific discourse.
In other words, peer review — a closed forum — is less effective than an open forum. This leads us to wonder why anyone would choose peer review unless the goal was to limit criticism. Among peers within a profession, the primary self-interested motive is to advance the profession itself, which makes them less critical of one another.
This is consistent with what one famous survey of published research found: most findings are not reproducible, or otherwise show scientists rejecting non-conforming data in order to find facts to fit a theory rather than a theory to fit all the facts:
The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; when there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.
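The framework in that passage can be sketched numerically. The snippet below uses the standard positive predictive value formula, PPV = (1 − β)R / (R − βR + α), where R is the ratio of true to null relationships probed in a field, α the significance threshold, and β the type II error rate; the specific parameter values are illustrative assumptions, not figures taken from the survey.

```python
# Sketch of the positive predictive value (PPV) framework described above.
# PPV = (1 - beta) * R / (R - beta * R + alpha), where:
#   R     = ratio of true to null relationships probed in a field
#   alpha = type I error rate (significance threshold)
#   power = 1 - beta (probability of detecting a true relationship)
# The parameter values used below are illustrative assumptions.

def ppv(R: float, alpha: float = 0.05, power: float = 0.2) -> float:
    """Probability that a claimed (statistically significant) finding is true."""
    true_positives = power * R          # true relationships correctly detected
    false_positives = alpha * 1.0       # null relationships flagged as significant
    return true_positives / (true_positives + false_positives)

# A speculative field: one true relationship per ten probed, low-powered studies.
print(round(ppv(R=0.1, alpha=0.05, power=0.2), 3))  # → 0.286
```

With these assumed inputs the PPV falls below 0.5, illustrating the quote's claim that a typical positive finding is more likely false than true; raising R or study power pushes the PPV back above one half.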
Peer review functions as an amplifier of bias and self-interest because it limits criticism to those with interests beyond the results themselves. Reviewers want to advance their careers, and in a utilitarian system such as ours they do so by generating interest, so their incentive is to approve research on the basis of its funding or popularity.
It’s something to think about as you read through the myriad studies out there.