Thursday, 10 December 2020

Reflections on Interdisciplinary Research (2013-2020)

I've always found interdisciplinary research rewarding. It also fits with my view that, at least from an applied perspective, no single discipline has all the answers. Probably also says something about my attention span. 

Anyway, I'm going to share three things that early-career interdisciplinary researchers in the UK (or who are thinking about working in the UK) might find useful. Somewhat timely as we come to the end of another REF cycle. For those outside the UK, the REF is used to quantify the quality of research outputs across institutions every 7 years or so. It determines how much funding is allocated to each institution by the UK Government. 

The below is therefore based on my own and colleagues' experiences of conducting interdisciplinary research at research-intensive institutions across the UK.

1. Take time to appreciate disciplinary hierarchies, because they may affect where you can work in the future.

Publication expectations placed on researchers in terms of the REF appear to be somewhat incompatible with an interdisciplinary research agenda. Papers make up the bulk of a REF submission and outputs are submitted to traditional subject panels. Psychology, psychiatry and neuroscience form a single panel, for example. 

Unfortunately, every discipline has its own hierarchy of what might be considered excellent research. Sometimes these hierarchies are based on a list of journals in someone's head that you can guess (e.g., psychology). At the other end of the spectrum, medicine can set high expectations with highly selective journals dominating the landscape. Management schools rely on the ABS list. This ranks dozens of journals from 1 to 4 with a special category for a few select 4* journals... so, em, 1 to 5 then. 

It is unrealistic to suggest that any of these hierarchies are perfect. For example, I believe the intention of the ABS list was to allow outputs to be compared in an area that is naturally interdisciplinary. That makes sense, but the current list is pretty limited and changes infrequently. Most psychologists would be amazed to see which psychology journals are present or absent, let alone how they are ranked. There are no major medical or general science journals (Science, Nature, Lancet and JAMA publications are all missing) and no open-access-only publishers (e.g., PLOS). In fact, open science practices embraced by other fields, including data sharing and pre-registration, are not really a thing (yet). 

With or without such frames of reference, to judge whether a paper is any good, you'll have to read it, or get someone who knows the area to read it and come to a conclusion. 

Believe it or not, this is actually what the REF aims to do. In principle, these hierarchies, including related metrics like impact factors, are not meant to influence REF outcomes. When the results are announced, an individual never finds out how highly their work was rated; institutions are only provided with summary data. Papers can of course be screened by other academics before submission, and many institutions pay outside sources to help prepare submissions.  

Despite all this, getting papers in the ‘right’ places remains core to hiring, probation and promotion decisions in the UK. Having worked in several places, I can confirm the process rarely deviates. 

Researchers who wish to pursue interdisciplinary research have little choice but to publish across multiple fields, meeting both personal research goals and those pre-determined by a primary discipline, in order to ensure long-term success. This places early-career academics under significant pressure as they try to establish themselves as future leaders. 

Attempts to mould findings or methods to a specific journal can come at the expense of conducting the best science possible. In turn, this can reduce the potential impact of work that might be better placed elsewhere (impact being another core part of the REF, incidentally). However, the level and type of impact also varies tremendously between disciplines, and interdisciplinary impact again challenges these norms. 

Yet interdisciplinary research is exactly what UKRI are keen to fund, and it is a core mission of every research-intensive university. 

For example, if I were based in computer science and working with colleagues in medicine, there is no way they or I would be happy to go for a computer science outlet. We could force it, but the process would be disastrous and not appropriate for the work. It would also limit the real-world impact due to differences in readership. Regardless, I need to cover my bases and ensure I definitely pick up a more suitable discipline-specific paper elsewhere. 

So you do what is right by your colleagues and your science, but accept it will increase your workload.

I do not personally resent this work at all, but I occasionally wonder: if someone had told me all this 5 years ago, would I have planned things a little differently? Probably not, but I didn't really anticipate any of the above until fairly recently. I just did stuff I genuinely wanted to do! 

All this gets easier as you build up research networks, which interdisciplinary work requires by default. This provides a host of other benefits including opening doors to new funding streams. Further, if the aim is to move over to another discipline in the future, then none of this should be viewed as a problem.  

But go in with your eyes open. 

2. The process of conducting and publishing research can be radically different between disciplines even for areas that appear analogous.

So, if the above hasn't put you off, the language used to describe the same thing in different disciplines remains challenging. I still struggle a bit when some computer scientists describe running different statistical models as 'experiments'!

In terms of publishing specifically, I like to imagine how different disciplines might behave if they were all at the bar and attempting to order drinks:

Computer science places more value on conference proceedings, rapid turnarounds and getting stuff finished. They want to get drunk quickly and move on to the next pub. 

Medical journals still have incredible copyeditors and illustrators. Medicine arrives well presented and popular. They leave the pub looking immaculate despite drinking 12 pints of high-impact lager.  

Psychology will start a fight at the bar as it is unable to agree on how drinks should be measured. After another crisis of confidence, they settle for tap water, which tastes different from last time. 

Management will think carefully, attempt to calm the situation and spend a long time at the bar studying their own menu (called the ABS list). They will then order what they believe to be a 4* drink. No other discipline has ever seen the menu or heard of the drink in question.  

On a serious note, establishing a common problem can help break down barriers when working as part of an interdisciplinary team. Even when the above becomes confusing, getting back to the task in hand encourages people to articulate how their perspective or method can help. But this doesn't mean it shouldn't be challenged. This is one of the reasons interdisciplinary research takes longer. Working through disagreements or misunderstandings can take a long time, but the end result is almost always stronger. 

Again, probably should have realised this 5 years ago. 

3. Have faith in good mentors and others who are helping to shape what excellent science looks like. The landscape is changing.

REF panels now have interdisciplinary members, although I'm less clear on how this will work in practice. While this appears to have had little effect on the selection of papers submitted by departments, it is an encouraging sign. 

Perhaps more importantly, PhD students are often part of doctoral training centres (like this one) and are already being trained as part of an interdisciplinary enterprise. Masters programmes are heading in a similar direction.

At the same time, some senior academics are encouraging a variety of sensible changes or launching initiatives that will improve science more generally (e.g., the Psychological Science Accelerator). All this means increased transparency and larger numbers of authors from different disciplines working together. Even if existing reward structures haven't quite caught up with reality, the direction of travel is probably irreversible. I suspect current PhD students and post-docs will drive this change further. 

Conversely, universities might need to acknowledge that other senior academics have come through a very different system, one driven by the lone genius model of academia. This tends to prioritise individual success over the advancement of science. For good or bad, what the majority think makes a good scientist is changing, and I can see some tearing in the space-time continuum when listening to advice from some professors nearing retirement on what ECRs should be prioritising. There is no point pretending that what worked for the last generation of professors will be the path for the next. 

Despite pressures placed on institutions, I am a firm believer in allowing researchers to do things they genuinely care about and trusting them to do excellent science. Ensuring this happens makes the things we get grumpy about as academics easier to accept. 

Case in point, while I am throwing out some words of caution that go against what I would like in an ideal world, my intention is not to discourage others from embarking on an interdisciplinary career. 

Quite the opposite.

Sunday, 11 October 2020

Cyberpsychology, Behavior, and Social Networking now charging fees for new submissions

Charging $50 to submit a manuscript. 

This is another baffling move from a journal that was already heading in the wrong direction for three reasons:


(i) Editorials are all over the place. See a recent letter in response to one from last year. 

(ii) The editor-in-chief is not an editor. They don't handle manuscripts or communicate with reviewers. Many reviews are definitely not worth $50 (good, bad or neutral)! 

(iii) An increasing number of papers have obvious issues with basic reporting, which the journal clearly doesn't care about. 

In some areas of finance and accounting, it is common to have submission fees, which are sometimes refunded following acceptance. However, this is not the norm for psychology (nor should it be). 

It looks like the publisher has implemented this on the quiet (kudos to Lee Hadlington for noticing), presumably in an attempt to handle submission loads. 

The number of submissions shouldn't come as a huge surprise. 

First, cyberpsychology is a growth area, and this is a positive sign. Second, and more troubling, there is literally nothing of note on what the journal wants or doesn't want. This will lead to a large number of manuscripts being submitted that are not in line with the scope or aims of the journal. 

In order to manage this problem, a sensible editorial might provide clarification on issues facing the journal and outline a vision for papers that would be competitive for publication in the future. 

But there is no vision because there is no editor.

At the same time, non-transparent decision making alongside poor communication with reviewers and authors has become a growing problem for both this journal and related publications. This simply serves to alienate everyone who gives the journal a reason to exist in the first place. Exasperated reviewers will refuse to give their time for free and so manuscripts move even more slowly through the system. 

It's a vicious cycle.

I wonder if members of the editorial board were consulted about these changes? Assuming they weren't, I would urge them to consider their position carefully. 

For me, this is the final straw. I will no longer be reviewing manuscripts or submitting work to the journal. 

There are a growing number of excellent alternatives. 


Update 12/10/2020

At least three members of the editorial board have now resigned.  


Wednesday, 15 July 2020

Why publishing a paper every day is a problem.

[see updates at the end of this article]

I disagree with a fair chunk of Griffiths and co's work theoretically and methodologically. That's science. But Griffithsgate goes beyond that and raises some uncomfortable questions about editorial bias and the very real consequences of careless applied research (see Dorothy Bishop's blog and Tom Chivers' article in UnHerd).

In saying that, it is tricky to separate procedure from the science, because the rushed nature of the work means it is riddled with contradictions. Like a political party trying to avoid the opposition, it presents a moving target that is almost impossible to debate. For example:  
That's all just procedural, remember, and long before getting to the actual science. I've previously written about the problems of publishing on an industrial scale as it relates to the impacts of technology on people and society. However, this tale all started with a straightforward request to view data based on a data sharing statement.

I like to think that when questions are raised, people should speak (I suspect most would want to) and discuss what is going on. For example, problems with a recent paper in Psychological Science were swiftly dealt with by the original authors.

Indeed, stuff goes wrong all the time. That's research. It's the response that matters. 

To date (15/7/2020) we have heard: 
  • Nothing from the editor-in-chief of IJMHA or editorial board. Authors have been allowed to speak on their behalf.
  • Nothing from editor-in-chief or editorial board of JBA.
  • Nothing from the publisher of either journal.
  • Nothing from Nottingham Trent University.
I'm surprised that others who sit on the editorial boards of either journal (aside from Griffiths) have been so quiet. Personally, I would resign my position if no statement is forthcoming or if their hand is forced by a publisher. Doing nothing risks making all this appear to be business as usual. I would also be curious to know how many papers submitted to IJMHA or JBA that included Griffiths' name have ever been rejected. 

One thing I've learnt over the years is that many of the above people who could answer these questions, and who are in positions of power to do so, will immediately side-step the issue and not see it as their problem to solve. 

Of course, this explains why the system has allowed all this to happen in the first place, despite the fact that every scientist in the land can see the problem. 

Updates:

[7/12/2020]

A new blog post from Dorothy Bishop raises even more concerns. Results from the formal investigation promised by the publishers have yet to materialise. 

[18/9/2020]: 

Following the original request, data have now been provided, alongside a correction to the paper that appeared in IJMHA. 

[8/9/2020]: 

The formal investigation from JBA has yet to materialise. 

Information that was previously online about a COVID-19 special issue appears to have disappeared (including all tweets from the Editor-in-chief).

In the meantime, papers have become (I think) even more erratic, relying largely on newspaper articles and Google search queries.  

[21/7/2020]

1. Griffiths has responded to evidence regarding self-plagiarism. The argument encourages readers to consider how re-using text is OK if the audience is different. That is a somewhat confusing line of thought, given that you would normally change how you write depending on the audience (e.g., academic vs. interested member of the public). Most of the text recycling flagged by others appears to occur in peer-reviewed outlets, which at the very least requires permission from the copyright holder (usually the publisher) in advance of publication. 

This is the problem with writing a paper every day. It's just going to repeat itself. 

[20/7/2020] 

1. The publisher of JBA (Akadémiai Kiadó) have posted a response on Dorothy Bishop's blog (see the comments section). This is a positive step, and in answer to my earlier question, it would appear that some papers co-authored by Griffiths have been rejected by JBA. They conclude:

We believe that the data support that our publication process is not biased.

It is something of a shame that the publisher has not shared their data in a way that could illustrate the number of reviews per paper or the number of reviewers who reviewed multiple papers by the same authors. This may have helped put the issue to bed. Dorothy Bishop's blog and analysis, in contrast, provided all the underlying code and data to support her conclusions. 

A more formal investigation regarding the specific scientific claims made in Griffiths et al.'s papers is now in the pipeline. Hopefully, this will be conducted by someone who is independent and has no conflicts of interest with the editorial board, journal or publisher. 

2. Griffiths has also responded to issues concerning publication metrics of JBA that appeared in the same blog. 

Monday, 25 May 2020

To review or not to review, that is the question.


I tweeted about a dilemma earlier this week. It's a familiar tale. 

In the last few months, I've reviewed 3 papers for the same journal. I am also a co-author of a paper that is under review at the same outlet.  

To be completely transparent, our paper has been reviewed as far as I can tell, but it has now spent more time on a desk than with reviewers. A colleague emailed politely in February, asking if the paper would be sent out for review after it had sat for a month with no activity. I emailed again earlier this month asking for an update on our paper. The journal office claims to forward emails on, but we have received zero response from any editor. 

What do you do?



The problem with not reviewing is that I am not helping authors who deserve to have their paper reviewed in a timely fashion. On a side note, the very same journal also has a habit of giving reviewers a set number of days to complete a review and then cancelling the review before the due date. 

And then they wonder why they can't get reviewers!

Going against the grain, I have accepted the review request (and now I've blogged about it as well - sorry). 

Anyway... I should really let this go, but the injustice of it annoys me. My co-authors deserve something from the whole process rather than a wall of silence. It also rubs salt in the wound when papers co-authored by editors appear to sail through the review process. 

Honestly, what message does that send to the community who serve your journal? 

I grow weary of editors who are clearly asleep at the wheel. This affects every author and reviewer, and I have written before about the distinction between editors who rightly want to give something back to the academic community and those who are using the role as a line on their CV without lifting a finger.  

Anyway... I should really let this go. But before I do...

I decided to explore whether the publisher had ever purchased the domain name of the journal in question. I naively assumed that publishers or editors might check on this for journals that have been around for a while, especially where technology is a core focus. 

They haven't.

I am pleased to say that I am now the proud owner of the domain name associated with this specific journal (.com naturally). 

If you want to guess which journal I've been talking about, try going to that URL. 

A correct guess will point straight to my personal website. 

However, I should warn you that after getting carried away, I now own a variety of domain names for journal titles that specialise in technology and psychology.

Stay safe.

Thursday, 7 May 2020

Smartphones within Psychological Science: It's on

Writing academic books has become somewhat less fashionable in psychology, but I’ve always been encouraged to do things I genuinely want to do rather than be completely guided by the REF, TEF or KEF etc. That advice has always stuck with me. 

The book is now almost finished pending some minor edits and a bit of copyediting. I am aware that a few early versions of the manuscript have gone out to some folk who might say something nice for the back cover.

Pretty much all the content is new and, I hope, as up-to-date as a book can be. Some of it naturally pulls ideas from a handful of recent papers. On a side note, it's been an interesting experience to wrestle permissions from publishers so I can re-use portions of text or figures from my own papers!

Publication is pencilled in for later this year (update September 2020: you can buy it now), but in the meantime here are three general things that have stuck with me throughout the course of putting it together. 

1. I'll start with the positive. Mobile technology is letting psychologists make exciting advances in almost every area of the discipline. Cognition, personality and social psychology are the real highlights, but this often extends beyond psychology and demonstrates clear benefits of interdisciplinary research. I've read so many papers where I find myself muttering 'I wish I had thought of that'. 

2. On the other hand, theoretical and methodological misalignment remains a grand challenge. For example, smartphone interactions that might support positive health interventions and those that could drive negative outcomes remain distanced: they are considered by completely different sets of researchers under different theoretical frameworks, despite obvious overlaps in the application of the technology. From a methodological perspective, the gulf is arguably even wider. Research that considers how smartphones might limit cognitive functioning, for example, appears to be separated from groups who have developed apps that can test cognitive functioning!

3. Psychology has a tendency to obsess about why new technology is harmful and then struggles to be involved productively when it becomes a key component of everyday life. This cycle then loops. Genuine harms are very real and include issues that pertain to unequal access, cyberbullying, misinformation and security vulnerabilities, but these are not specific to smartphones. They are universal and are as relevant to software developers as they are to behavioural scientists.  

I offer a few suggestions on how these problems might be mitigated in the future, although several recent pre-prints with smarter colleagues already feel like they are light years ahead. 

Kaye, L. K., Orben, A., Ellis, D. A., Hunter, S. C. and Houghton, S. (2020). The conceptual and methodological mayhem of “screen time”. International Journal of Environmental Research and Public Health, 17(10), 3661. https://doi.org/10.3390/ijerph17103661

Satchell, L., Fido, D., Harper, C. A., Shaw, H., Davidson, B. I., Ellis, D. A., Lancaster, G. L. J. and Pavetich, M. (in press). Development of an Offline-Friend Addiction Questionnaire (O-FAQ): Are most people really social addicts? Behavior Research Methods. https://doi.org/10.3758/s13428-020-01462-9

Shaw, H., Ellis, D. A., Geyer, K., Davidson, B. I., Ziegler, F. V. and Smith, A. (in press). Quantifying smartphone ‘use’: Choice of measurement impacts relationships between ‘usage’ and health. Technology, Mind, and Behavior.

Davidson, B. I., Ellis, D. A., Bowman, N. D., Liveley, G., Shaw, H., Przybylski, A. K. and Levine, M. (2019, October 7). Avoiding irrelevance: The manifestation and impacts of technophobia in psychological science. https://doi.org/10.31234/osf.io/b9f4p

Davidson, B. I., Shaw, H. and Ellis, D. A. (2020, March 1). Fuzzy constructs in assessment: The overlap between mental health and technology ‘use’. https://doi.org/10.31234/osf.io/6durk

* * *
However, one big problem remains.

A small, but vocal number of researchers in the area of technology effects have a track record of peddling utter bullshit. Anyone reading this far will have already generated their own list. 

Those last few sentences didn’t make it into the book I’m afraid, but the mantra of ‘technology is the biggest public health problem ever because because’ is being very publicly exposed. 

It is not simply the result of highlighting poor research practices, but something more profound. I wish I had seen it before.

Many have simply ceased to be scientists.

When scientists disagree or are provided with evidence that contradicts an existing theory or viewpoint, it might be disappointing, but it should advance the field. We are all in a privileged position. We get paid to think carefully and hopefully learn something along the way.

Over the last 24 months, two response patterns have emerged from those who are challenged. Rather than engage, discuss or collaborate in an adversarial fashion, you get the following responses.

Pattern (a) involves ignoring everything and saying the same thing - alone or with other authors. Often on an industrial scale. It’s the equivalent of a child putting their fingers in their ears and shouting louder to compensate.

Pattern (b) is a form of token-based engagement, but somehow manages to do (a) at the same time. Quite a remarkable skill, but also not very scientific. I continue to be amazed at the volume of stuff that gets past peer review, although based on publication dates, it's obvious that some work hasn't been reviewed at all.

As a result, screen time or 'smartphone addiction' debates are often no longer discussing theory A versus theory B.

To borrow a line from a colleague, psychology finally has its very own version of flat-earthers.

Ironically, social media has done more to expose these issues, which is just as well because it sure as hell isn’t journal editors, the BPS or the APA. 

Social media can perpetuate misinformation, but it can also expose it for what it is. In the vast majority of cases, any effects of mass communication technology on people and society are unlikely to be uniform. 

That last sentence did make it into the book. 

Thursday, 5 September 2019

Technology Addiction Claptrap


We recently ran a little study to see what happens when you prevent a small group of students from using their smartphone for 24 hours. Participants were instructed to place their smartphone in a secure evidence bag. So what happened?

Well not much really, they missed their smartphone. 

As part of this study, we also asked participants to complete the Smartphone 'Addiction' Inventory (SPAI). Interestingly, a few participants who dropped out later in the study had fractionally higher SPAI scores. This may indicate that smartphone ‘addicts’ were unable to fully participate and so discontinued, thus affecting our findings. However, this is unlikely given that smartphone addiction scales do not align well with objective behaviour. It is worth noting that while this small number of participants were slightly more anxious at time 1, they were also, on average, a bit happier.

It's increasingly difficult to know exactly what these 'addiction' assessments measure. They don't correlate with the types of behaviour you might expect from an addiction (e.g., rapid smartphone checking). They perhaps have more to do with how much someone enjoys or thinks about their smartphone. This alone does not support the notion of any addiction, a point we could have pushed further in our discussion. For example, SPAI scores in our sample correlated very highly with smartphone craving at every time point, but not with mood or anxiety. I suspect our sample was too small to detect these smaller effects, which have routinely appeared in larger samples.
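
To put that last point in perspective (this is a back-of-the-envelope illustration, not an analysis from the study itself), a standard power calculation based on the Fisher z approximation shows how many participants are needed to reliably detect a modest correlation. The effect size, alpha and power below are illustrative assumptions rather than figures from our data.

```python
import math
from scipy.stats import norm

def n_for_correlation(r, alpha=0.05, power=0.80):
    # Approximate sample size needed to detect a correlation r
    # (two-tailed), using the Fisher z transformation:
    # n = ((z_alpha + z_beta) / atanh(r))^2 + 3
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return math.ceil(((z_alpha + z_beta) / math.atanh(r)) ** 2 + 3)

# A 'smaller' effect of r = .20 already requires ~194 participants,
# far more than a small 24-hour deprivation study typically recruits.
print(n_for_correlation(0.20))
```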


Of course, these scales could have nothing to do with technology at all, and a stronger replication of our study would include a second group to examine whether giving up any valued personal possession generates similar effects (or a lack thereof). But I think you can already guess what the outcome of such a study might be.


I remember being surprised to learn that those who argue for the existence of addictive tendencies towards smartphones haven't attempted this sort of study before. In closing, my challenge for those who continue to argue for addictive tendencies associated with technology use is to start collecting (and maybe even sharing) data that will support their existence. 

A scale with a clinical-sounding name plucked out of the air won't suffice. The odd case study where a person has apparently been self-referred to a clinic with untried and untested methods is not going to cut it either. Experimental work without a control group is also a long-standing problem.

If I wanted to make arguments about the impact of mobile technology on people and society more generally, I might start with a survey of general practitioners, who are at the frontline of healthcare provision in the UK. Do they see negative issues in the population associated with specific technology use? 

I suspect if they do, then it will have more to do with a lack of physical activity and poor diet, the latter of which may well be a consequence of spending too much time in front of a screen. However, having been fortunate enough to work with a number of general practitioners on related research projects, I suspect they would confirm that social deprivation is the single biggest issue for the people they see on a regular basis. Not technology 'addiction'.

We are all guilty sometimes of forgetting the bigger picture (myself included), but the above might serve as a reminder for what should be at the forefront of any new policy. 


One final thought. I am currently working my way through a variety of technology related literature as part of a forthcoming book (shameless plug). 

It's amazing to behold how much time psychological science has spent trying to convince itself that smartphones are damaging health, cognition and social communication. An identical web can be woven from research concerning video games and even the internet itself. The notion of 'addiction' is woven into almost every narrative when psychologists should really be referring to habitual behaviour, which of course sounds a lot less clinical or dangerous. 

I completely take the point that psychological science has a duty to understand, and where possible, predict or mitigate future problems that new technologies could bring. 

But a sense of balance is required. 

As a consequence, far less attention has been devoted to developing new methods following the widespread adoption of such technology. Meanwhile, many other fields (e.g., medicine and computer science) are methodologically moving out of sight.

Social psychology also has much more to offer when it comes to discussions around ethics and morality regarding new and future technologies, but this will become increasingly difficult unless the discipline positively engages with new technology in the first place. 

Ironically, when new technology or methods are used in a way that benefits research directly, we often find that any effects on health and social interaction largely vanish.   

Thursday, 14 February 2019

Replicating habitual smartphone behaviours: 2009-2018

We recently collected more smartphone usage data to test whether pen-and-paper scales could predict behaviour (they didn't). However, in the process we managed to replicate some of our previous results from 2015.

Specifically, the average number of smartphone pick-ups per day remains remarkably similar across both samples despite using different software and smartphone operating systems to quantify these behaviours.

These results therefore cast some doubt over the idea that Android and iPhone users differ in their usage behaviours (we previously observed some demographic and personality differences between these two groups).


Mean number of pick-ups, 2015 sample: 84.68 (SD = 55.23).

Mean number of pick-ups, 2018 sample: 85.44 (SD = 53.34).

It's worth remembering that our results in 2015 were already comparable with data collected by others in 2009!

The idea that people are using their phones more doesn't really hold up to scrutiny. 

In terms of total hours of usage, the two samples did differ somewhat: the more youthful 2015 sample averaged 5.05 hours a day (SD = 2.73), while by 2018 this had dropped to 3.90 hours a day (SD = 1.99).
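
For anyone wanting to sanity-check a difference like this from the reported summary statistics alone, a Welch's t-test can be computed without the raw data. The sample sizes below are placeholders for illustration (the actual ns are in the underlying papers), so treat this as a sketch of the method rather than a re-analysis.

```python
from scipy.stats import ttest_ind_from_stats

# Reported summary statistics: total hours of smartphone use per day
mean_2015, sd_2015 = 5.05, 2.73
mean_2018, sd_2018 = 3.90, 1.99

# Hypothetical sample sizes, for illustration only;
# substitute the actual ns from the published studies.
n_2015, n_2018 = 100, 100

# Welch's t-test (unequal variances) from summary statistics
t, p = ttest_ind_from_stats(mean_2015, sd_2015, n_2015,
                            mean_2018, sd_2018, n_2018,
                            equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
```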

Finally, while we can't be sure, it looks like Apple might be recording and storing usage data via a feature of the operating system very similar to one that is freely available on Android devices.