Saturday, 22 December 2018

The Role of a Journal Editor

Having navigated the academic publishing system as an author, reviewer and occasional editor, I can confirm that there are inconsistencies at almost every turn*. From the variable quality of reviews to the procedures in place during article or chapter production, even within the same journal, authors and reviewers can have very different experiences.

Previously, I was led to believe that the role of an editor was clear and consistent. Editors read manuscripts, find reviewers, and make key decisions based on reviews and their own expertise. In addition, editors can help guide authors towards what they feel are the most important points that should be addressed following review. This is particularly important when it comes to clarifying the direction of a paper when reviewers express conflicting views.

However, a number of editors and associate editors, who are sometimes paid by journals, don't always act in a way that is helpful or fair to authors and reviewers. In many instances, following peer review, authors receive a 100% stock reply with no indication that an editor has even read the paper. This may not matter if all reviews pull in the same direction, but that rarely happens.

For example, imagine one reviewer is positive and recommends revisions, but another argues convincingly to reject the paper. The default option for 'non-editors' will be to reject the paper following review, without explanation. Alternatively, if two or three reviewers are positive and suggest revisions, the decision will automatically become revise and resubmit, again with no explanation or guidance.

Even a single sentence to confirm that the paper has been read would add something.

The problem is that authors can be left in limbo wondering which comments are more important, especially when reviews are contradictory or when one review is simply of higher quality than another. A paper in this position will typically go back out to the same reviewers and, unless both are happy, it will be rejected (based on the logic outlined above).

This lack of direction can have a negative impact on revised manuscripts as the content becomes muddied when authors feel they must appease every reviewer comment. It is possible to request additional clarification from an editor, but this does not guarantee a reply.

Regardless of the outcome, if there is zero editorial input, the editor simply becomes an administrator rather than a fellow academic who plays an important role in developing the work and journal. This also poses problems for reviewers who then feel that their contributions are not worthy of further comment regardless of the final outcome.

Of course, many journals appoint excellent editors and their input ensures that authors will submit their best work to that outlet again and again. These editors bring a genuine desire to help and a duty of care that improves the work.

On the other hand, those who are new to scientific publishing may be surprised to learn that when you lift the lid, many journals appear to have almost no editorial input. This makes me think that there may be some confusion about what is expected following an editorial appointment.

Perhaps as well as a list of predatory publishers, we also need a list of journals that have 'real' editors. That is, experts who continue to set the bar high by providing a service to the field, which should be welcomed, encouraged, applauded and rewarded.

The role of a journal editor may be changing, but when much of psychology still struggles to ensure that papers, data and associated materials are freely available, I'd rather editors said more, not less.

*Note: This is purely based on my own and colleagues' experiences publishing within psychology, health and computer science.

Tuesday, 22 May 2018

Why measuring screen time is important

Sometimes there is a disconnect between the scientific methods used to investigate a given phenomenon and the language used to describe subsequent results.

When it comes to understanding the impact of technology use on health, wellbeing or anything else for that matter, the gulf is vast. See here for a related discussion. 

This has become even more important as the UK Government is currently conducting an enquiry into the impact of screen time and social media on young people. The outcome could lead to new guidelines for the public, especially parents with young children.



Many academics have already submitted evidence that argues for a cautious approach when developing any new guidance or regulation because the evidence base suggesting any effects (good or bad) remains relatively weak. 

However, other voices have concluded that social media and screen time are a public health issue that needs to be addressed as a matter of urgency.

To me, this view ignores the methodological elephant in the room. 

The Elephant

In order to argue that social media or smartphone usage is indeed a genuine behavioural addiction and/or a societal problem, the data on which that conclusion is drawn will need, at some point, to include measures of behaviour.

Research concerning problematic smartphone or social media usage typically relies on correlations derived from estimates or surveys. These measures have some value, but evidence concerning their validity for detecting problematic behaviours remains mixed.

It's also worth noting that when correlational designs are scaled up (e.g., when samples exceed 10,000 participants), self-reported social media use, for example, explains little about a person's wellbeing.

Collectively, this body of work tells us very little regarding cause and effect. 

This methodological gap is particularly frustrating when the very technology we wish to study has the added advantage of being able to accurately record human-computer interactions in real time. People were quantifying such behaviours in 2011.

In terms of present concerns, longitudinal data of this nature could be linked with other life outcomes, which might include measurements of academic performance, physical and mental health. 

Just to be transparent, I have no axe to grind or conflict of interest, but, like most scientists, I think getting to the truth is important.

This will take time, considerable effort and (some) money. 

The alternative involves destroying public trust in science by wasting taxpayers' money and indoctrinating people with divisive nonsense.

Current Work

With these issues in mind, my research group often consider how we can encourage other researchers to harness behavioural measures from smartphones. We have developed mobile applications to assist researchers, and analysis routines to process subsequent data. 

There is still a long way to go, and the rapid pace of technological development makes this even more challenging. It feels like everything moves faster than science. In the time it has taken me to write this blog post, Google have announced new tools to foster digital well-being.

That said, we recently considered how long researchers might need to collect smartphone usage data in order to understand typical patterns of behaviour. In other words, if I check my phone 30 times today, will I do the same tomorrow, in 2 days, or a week later?

Of course, there is nothing to stop us collecting such data indefinitely from a digital device, but this would be ethically questionable. 

Therefore, the development of norms regarding the type and volume of data required to make reliable inferences about day-to-day behaviour is important for the field to move forward.

We realised it was possible to consider this in more detail by conducting some additional analysis on an existing data set originally reported by our group here in 2015.



Fig 1. Barcode of smartphone use over two weeks.
Black areas indicate times when the phone was in use, and Saturdays are indicated with a red dashed line. Weekday alarm clock times (and snoozing) are clearly evident.
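
For anyone wanting to produce a figure in this style, the short Python sketch below draws a comparable barcode from a log of usage intervals. The data format, start date and variable names are illustrative assumptions, not the pipeline used in the original study.

# Sketch: a 'barcode' of smartphone use, assuming each screen-on event is
# stored as a (start datetime, duration in seconds) pair. Illustrative only.
import datetime as dt
import matplotlib.pyplot as plt

usage_events = [
    (dt.datetime(2014, 3, 1, 7, 30), 300),   # hypothetical events
    (dt.datetime(2014, 3, 1, 12, 15), 12),
    (dt.datetime(2014, 3, 2, 7, 30), 600),
]
study_start = dt.datetime(2014, 3, 1)        # assumed here to be a Saturday

fig, ax = plt.subplots(figsize=(10, 1.5))
for start, duration in usage_events:
    x0 = (start - study_start).total_seconds() / 3600.0    # hours since start
    ax.axvspan(x0, x0 + duration / 3600.0, color="black")  # black = phone in use

for day in (0, 7, 14):                        # mark Saturdays with red dashed lines
    ax.axvline(day * 24, color="red", linestyle="--")

ax.set_xlim(0, 14 * 24)
ax.set_yticks([])
ax.set_xlabel("Hours since start of recording")
plt.show()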

Consistency of Behaviour

Participants had their smartphone usage tracked over a period of 13 days. From these data we were able to calculate the total hours of usage and the number of checks for each day. Checks are defined as periods of usage lasting less than 15 seconds.
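
As an illustration of this step, the sketch below aggregates a hypothetical event log into daily usage hours and daily checks using the same 15-second threshold. The column names and input format are assumptions made for the example, not the study's actual processing code.

# Sketch: derive daily usage hours and 'checks' (< 15 s) from an event log.
# Column names and input format are assumptions for illustration.
import pandas as pd

events = pd.DataFrame({
    "start": pd.to_datetime([
        "2014-03-01 07:30:00", "2014-03-01 07:42:10",
        "2014-03-01 12:15:00", "2014-03-02 09:01:30",
    ]),
    "duration_s": [300, 8, 12, 600],            # seconds of screen-on time
})

events["date"] = events["start"].dt.date
events["is_check"] = events["duration_s"] < 15  # a check lasts under 15 seconds

daily = events.groupby("date").agg(
    hours_use=("duration_s", lambda s: s.sum() / 3600.0),
    n_checks=("is_check", "sum"),
)
print(daily)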

First, we compared the total hours spent using a smartphone between the first and second weeks of data collection. This generated very large correlation coefficients of .81 and .96, respectively (Figures a and b).





An additional analysis revealed that even 48 hours of data collection alone was highly predictive of future behaviour, particularly in relation to checking (usages lasting less than 15 seconds).

Similarly, 5 days of total usage still resulted in remarkably high correlations when compared to a subsequent week. Interestingly, no differences were observed between weekdays and weekends.
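
To make the logic of these comparisons concrete, here is a minimal sketch of how daily totals might be split into windows and correlated across participants. The simulated numbers exist only so the code runs; they are not the study's data or results.

# Sketch: correlate usage across time windows (week 1 vs week 2, and the
# first 48 hours vs the following week). Simulated data for illustration only.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_participants, n_days = 25, 13
base = rng.uniform(1, 6, size=(n_participants, 1))             # stable individual level
daily_hours = base + rng.normal(0, 0.5, size=(n_participants, n_days))

week1 = daily_hours[:, :7].sum(axis=1)       # days 1-7
week2 = daily_hours[:, 7:].sum(axis=1)       # days 8-13
first_48h = daily_hours[:, :2].sum(axis=1)   # days 1-2

r_weeks, _ = pearsonr(week1, week2)
r_48h, _ = pearsonr(first_48h, week2)
print(f"week 1 vs week 2: r = {r_weeks:.2f}")
print(f"first 48 h vs following week: r = {r_48h:.2f}")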

So, how much data do you need to determine typical smartphone usage? Not a lot, as it happens, but this might vary across different samples. We did, however, observe that patterns of usage and checking were more consistent for participants with lower levels of smartphone use overall.

This doesn't completely rule out collecting data for longer periods of time and there are occasions where this might be useful, for example, as part of an intervention to reduce usage or track how these trends might predict subsequent health outcomes. Nevertheless, at this stage we can conclude that even small amounts of usage data can be highly revealing of typical behaviour. 

Finally, we observed that self-reported usage derived from the Mobile Phone Problematic Use Scale (MPPUS) remains unreliable when predicting any of these usage behaviours.

Moving Forward

My lab and others continue to develop and refine similar methods and analysis guidelines. We are now running a number of studies which focus on both the usage of specific applications and the effects of smartphone withdrawal.

Despite results to the contrary, we also remain hopeful that it may be possible to validate some existing self-report scales with patterns and types of usage in the future.

Our hope is that these methods will help answer what have become ever more pressing concerns for society and social science more broadly. That is, what can screen time tell us about people, and how might these behaviours lead to positive or negative life outcomes?

I initially became involved with this area of research because it was methodologically and theoretically interesting; however, it has now also become politically charged, with mixed messages being communicated to the general public.

The discussion has become swamped with celebrities and commentators. Many have good intentions, but their arguments are driven entirely by gut instinct or anecdotal evidence.

Combined with an extremely weak evidence base, these arguments should not drive changes in policy. 

Reference

Wilcockson, T. D. W., Ellis, D. A., & Shaw, H. (Epub ahead of print). Determining typical smartphone usage: What data do we need? Cyberpsychology, Behavior, and Social Networking, 21. https://www.liebertpub.com/doi/10.1089/cyber.2017.0652