Computational Reproducibility in Behavior Research Methods

Our paper, “Assessing computational reproducibility in Behavior Research Methods”, appears online today.

Behavior Research Methods (BRM) is a journal dedicated to the methodologies, techniques, and tools used in psychological research.

We recently examined computational reproducibility in BRM after changes to submission policies.

Specifically, a new policy was introduced for all new submissions on January 1st, 2020, and subsequently made public as part of an editorial: “BRM requires the information to be easily available in a repository or in an appendix” (Brysbaert et al., 2021, p. 2).

This project involved a large team that painstakingly sampled a variety of research assets (N=200) before and after those policy changes. 

We double-coded what worked and what didn't. We even timed how long it took to get data, code, or stimulus materials up and running. 
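To give a flavour of what double-coding involves, here is a minimal sketch in Python, assuming two coders independently rate each asset as working or not and that agreement is summarised with Cohen's kappa; the coding scheme and the data below are invented for illustration, not taken from the paper.

coder_a = [1, 0, 1, 1, 0, 1, 1, 0]  # coder A: 1 = asset worked, 0 = it didn't (invented data)
coder_b = [1, 0, 1, 0, 0, 1, 1, 1]  # coder B's independent ratings (invented data)

n = len(coder_a)
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n  # raw agreement
p_a, p_b = sum(coder_a) / n, sum(coder_b) / n
expected = p_a * p_b + (1 - p_a) * (1 - p_b)  # agreement expected by chance
kappa = (observed - expected) / (1 - expected)  # chance-corrected agreement
print(f"observed agreement {observed:.2f}, Cohen's kappa {kappa:.2f}")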

So what did we find? 

The good news is that things are improving. Research assets from articles published after the policy change also decay more slowly; in other words, they remain functional for longer post-publication.
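To make “decay rate” concrete, here is a minimal sketch assuming a simple exponential survival model for whether an asset still runs a given number of years after publication; the rates below are invented for illustration and are not the estimates reported in the paper.

import math

def p_functional(years, annual_decay_rate):
    # Probability an asset still runs `years` after publication,
    # under exponential decay (an assumption, not the paper's model).
    return math.exp(-annual_decay_rate * years)

for years in (1, 3, 5):
    pre = p_functional(years, 0.30)   # faster decay (hypothetical pre-policy rate)
    post = p_functional(years, 0.15)  # slower decay (hypothetical post-policy rate)
    print(f"{years} yr: pre-policy {pre:.0%} vs post-policy {post:.0%}")

Under this toy model, halving the annual decay rate exactly doubles the half-life, i.e. the time by which half the assets have stopped working.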


Figure: Distributions of time (a), completeness (b), and reusability (c) before and after changes to journal policy that aim to improve the computational reproducibility (or usefulness) of research assets for other scientists (Ellis et al., 2024).

However, progress was less than we had hoped. Data-based research assets improved the most, but keeping various software packages and code libraries operational remains a challenge. Maintaining even a simple smartphone app is becoming a full-time occupation!

Based on our results, we provide recommendations that may interest journals already leading the field (e.g., Psychological Science, BRM, and The Journal of Social Psychology) as well as other journals and institutions that are reforming policy or guidelines for staff. 

While one might argue that the resources required to produce work that stays reproducible over the long term are considerable, I don’t think this is a reason not to push for further improvements.

Simply publishing less, with higher-quality resources to support computational reproducibility, is one option (although good data and resource management should be part of all science regardless). Yet disciplines with slower publication rates don't necessarily fare better. For example, management journals have extremely long manuscript turnaround times, but early indications from another project suggest that this doesn’t always translate into improved transparency.

Regardless, there is not yet any agreement on how long the resources associated with a research article should be expected to last. Two years? One year? One REF cycle? This requires further discussion, especially if researchers conducting computationally reproducible work are to be rewarded.

It has been clear since around 2010 that most psychology studies fail to replicate due to p-hacking or other questionable research practices. Notwithstanding outright fraud, statistical and technological knowledge gaps are widespread. Many journals operate as if everything is fine, which reflects a lack of leadership or the confidence to make decisions.

Journals are either engaged, listening, and adapting to long-standing challenges, or sticking their heads in the sand.

Disciplines are equally split. For example, Research England has dropped the mandate for open-access books in its guidance for the next REF. So science will be free to read, but humanities scholarship will remain hidden in expensive books until at least 2029.

Institutions are either moving ahead with change at pace or, at least in the UK, completely tied up with more pressing financial issues. 

The polarisation of journals, disciplines, and institutions is getting worse, not better. 
