Hacker News

One of the worst hydroxychloroquine studies - the now-retracted one in the Lancet claiming that patients receiving the drug were dying at a higher rate, which appears to have used completely fake data and where there was no possible way they could've got the data they purported to have used - was in fact peer reviewed. Peer review really doesn't do as much as people seem to think it does.


Peer review can’t catch fake data. It can assess whether an experiment is described well enough to be repeatable; actually repeating the experiment is what can expose fake data.


Peer review definitely could and should have caught the fake data in this case - there were too many implausible claims in this paper that anyone could verify with a Google search and see could not be right.

In fact, within about a day of publication people started pointing these problems out, with proof - and those people were (supposedly) less qualified than the peer reviewers.

It is true that, with properly faked data, a reviewer cannot be expected to catch it. Nor can p-hacking generally be discovered from an incomplete description.
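To see why p-hacking is invisible in a write-up: if a researcher runs many analyses on pure noise and reports only the best p-value, "significant" results appear far more often than the nominal 5%, and the published description looks identical to an honest study. A minimal simulation sketch (my own illustration, not from the thread - under the null hypothesis, p-values are uniform on [0, 1]):

```python
import random

random.seed(0)

def one_study(num_tests: int, alpha: float = 0.05) -> bool:
    """Under the null hypothesis every p-value is Uniform(0, 1).
    A p-hacking researcher runs num_tests analyses on noise and
    reports only the smallest p-value."""
    best_p = min(random.random() for _ in range(num_tests))
    return best_p < alpha

trials = 10_000
honest = sum(one_study(1) for _ in range(trials)) / trials
hacked = sum(one_study(20) for _ in range(trials)) / trials

print(f"false-positive rate, 1 test per study:   {honest:.2f}")  # ~0.05
print(f"false-positive rate, best of 20 tests:   {hacked:.2f}")  # ~0.64
```

The reader of the paper only ever sees the one reported p-value, which is why the methods description - not the result - is the only place a reviewer could catch this.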

But quite a bit of faked (and even p-hacked) data is done less than competently, in ways a reviewer can and should be able to catch.

And observational studies and meta-analyses are easier to verify still, since the researcher ran no experiment that a checker would need to repeat.
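The point about incompetently faked data can be made concrete. One classic screen - purely illustrative; neither this check nor Benford's law appears in the thread, and it only applies to data spanning several orders of magnitude - is comparing the leading digits of reported numbers against Benford's distribution, since people inventing numbers tend toward a uniform spread:

```python
import math
import random
from collections import Counter

def leading_digit(x: float) -> int:
    """First significant digit of a positive number."""
    while x < 1:
        x *= 10
    while x >= 10:
        x /= 10
    return int(x)

def benford_chi2(values) -> float:
    """Chi-squared distance between observed leading-digit
    frequencies and Benford's law. Larger = more suspicious,
    for data that should span several orders of magnitude."""
    expected = {d: math.log10(1 + 1 / d) for d in range(1, 10)}
    counts = Counter(leading_digit(v) for v in values if v > 0)
    n = sum(counts.values())
    return sum(
        (counts.get(d, 0) - n * p) ** 2 / (n * p)
        for d, p in expected.items()
    )

random.seed(1)
# Log-uniform data (spans 1 to 10,000) follows Benford's law.
natural = [10 ** random.uniform(0, 4) for _ in range(2000)]
# Uniformly "invented" three-digit numbers do not.
invented = [random.uniform(100, 999) for _ in range(2000)]

print(f"natural data:  chi2 = {benford_chi2(natural):.1f}")   # small
print(f"invented data: chi2 = {benford_chi2(invented):.1f}")  # large
```

This is the kind of cheap sanity check a reviewer can run on reported tables without rerunning the experiment - which is the asymmetry the comment above is pointing at.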


So then what? Even if peer review does not solve the problem of bad research completely, it certainly mitigates it better than not doing anything.

How would you solve this problem?


> Even if peer review does not solve the problem of bad research completely, it certainly mitigates it better than not doing anything.

This isn't self-evident. In fact, it's conceivable that peer review could actually lower the quality of published research by delaying paradigm-shifting work from seeing the light of day.


By attributing less weight to the imprimatur of peer review.

It's a sign that something might well hold up, but it's not definitive. I think people know this about their own field, and then immediately forget it when dealing with another field.


How? If someone is going to rely on a paper's conclusions, they should actually read the paper and evaluate its claims - i.e., stop using only paper titles and Cliffs-Notes summaries to support a point.


I think the problem people are getting at is that "peer reviewed" has become a badge of expertise or authority in itself, inflating people's trust in it and weakening skepticism.


The problem is that like-minded "peers" are easily found when churning out papers is rewarded. Quantity beats quality in today's world.



