Why publishing a paper every day is a problem.
[see updates at the end of this article]
I disagree with a fair chunk of Griffiths and colleagues' work theoretically and methodologically. That's science. But Griffithsgate goes beyond that and raises some uncomfortable questions about editorial bias and the very real consequences of careless applied research (see Dorothy Bishop's blog and Tom Chivers's article in UnHerd).
That said, it is tricky to separate procedure from the science, because the rushed nature of the work means it is riddled with contradictions. Like a political party dodging the opposition, a moving target is almost impossible to debate. For example:
That's all just procedural, remember, and comes long before we get to the actual science. I've previously written about the problems of publishing on an industrial scale as it relates to the impacts of technology on people and society. However, this tale all started with a straightforward request to view data based on a data sharing statement.
I like to think that when questions are raised, people should speak up (I suspect most would want to) and discuss what is going on. For example, problems with a recent paper in Psychological Science were swiftly dealt with by the original authors.
Indeed, stuff goes wrong all the time. That's research. It's the response that matters.
To date (15/7/2020) we have heard:
- Nothing from the editor-in-chief of IJMHA or its editorial board. Authors have been allowed to speak on their behalf.
- Nothing from the editor-in-chief or editorial board of JBA.
- Nothing from the publisher of either journal.
- Nothing from Nottingham Trent University.
I'm surprised that others who sit on the editorial boards of either journal (aside from Griffiths) have been so quiet. Personally, I would resign my position if no statement were forthcoming, or if one appeared only because a publisher forced their hand. Doing nothing risks making this appear like it is all business as usual. I would also be curious to know how many papers submitted to IJMHA or JBA that included Griffiths' name have ever been rejected.
One thing I've learnt over the years is that many of the people above, despite being in positions of power to answer these questions, will immediately side-step the issue and not see it as their problem to solve.
Of course, this explains why the system has allowed all this to happen in the first place, despite the fact that every scientist in the land can see the problem.
A new blog from Dorothy Bishop raises even more concerns. Results of the formal investigation promised by the publishers have yet to materialise.
Following the original request, data has now been provided, alongside a correction, for the paper that appeared in IJMHA.
A formal investigation from JBA has yet to materialise.
Information that was previously online about a COVID-19 special issue appears to have disappeared (including all tweets from the Editor-in-Chief).
In the meantime, papers have become (I think) even more erratic, relying largely on newspaper articles and Google search queries.
1. Griffiths has responded to evidence regarding self-plagiarism. The argument encourages readers to consider how re-using text is acceptable if the audience is different. This is a somewhat confusing line of thought, given that you would normally change how you write depending on the audience (e.g., academic vs. interested member of the public). Most of the text recycling flagged by others appears to occur in peer-reviewed outlets, which at the very least requires permission from the copyright holder (usually the publisher) in advance of publication.
This is the problem with writing a paper every day. It's just going to repeat itself.
2. The publisher of JBA (Akadémiai Kiadó) has posted a response on Dorothy Bishop's blog (see the comments section). This is a positive step and, in answer to my earlier question, it would appear that some papers co-authored by Griffiths have been rejected in JBA. They conclude:
"We believe that the data support that our publication process is not biased."
It is something of a shame that the publisher has not shared their data in a way that could illustrate the number of reviews per paper or the number of reviewers who reviewed multiple papers by the same authors. This may have helped put the issue to bed. Dorothy Bishop's blog and analysis, in contrast, provided all the underlying code and data to support her conclusions.
A more formal investigation regarding the specific scientific claims made in Griffiths et al.'s papers is now in the pipeline. Hopefully, this will be conducted by someone who is independent and has no conflicts of interest with the editorial board, journal or the publisher.
3. Griffiths has also responded to issues concerning the publication metrics of JBA that appeared in the same blog.