Thursday, 5 September 2019

Technology Addiction Claptrap

We recently ran a little study to see what happens when you prevent a small group of students from using their smartphone for 24 hours. Participants were instructed to place their smartphone in a secure evidence bag. So what happened?

Well, not much really: they missed their smartphones.

As part of this study, we also asked participants to complete the Smartphone 'Addiction' Inventory (SPAI). Interestingly, a few participants who dropped out later in the study had fractionally higher SPAI scores. This may indicate that smartphone 'addicts' were unable to fully participate in the study and so discontinued, thus affecting our findings. However, this is unlikely given that smartphone addiction scales do not correlate well with objective behaviour. It is worth noting that while this small number of participants was slightly more anxious at time 1, they were also, on average, a bit happier.

It's increasingly difficult to know exactly what these 'addiction' assessments measure. They don't correlate with the types of behaviour you might expect from an addiction (e.g., rapid smartphone checking). They perhaps have more to do with how much someone enjoys or thinks about their smartphone. This alone does not support the notion of any addiction - a point that we could have pushed further in our discussion. For example, SPAI scores in our sample correlated very highly with smartphone craving at every time point, but not with mood or anxiety. I suspect our sample was too small to detect these smaller effects, which have routinely appeared in larger samples.

Of course, these scales could have nothing to do with technology at all, and a stronger replication of our study would include a second group to examine how giving up any valued personal possession might generate similar effects (or a lack thereof). But I think you can already guess what the outcome of such a study might be.

I remember being surprised to learn that those who argue for the existence of addictive tendencies of smartphones haven't attempted this sort of study before. In closing, my challenge for those who continue to argue for addictive tendencies associated with technology use is to start collecting (and maybe even sharing) data that will support their existence.

A scale with a clinical-sounding name plucked out of the air won't suffice. The odd case study where a person has apparently self-referred to a clinic with untried and untested methods is also not going to cut it. Experimental work without a control group is another long-standing problem.

If I wanted to make arguments about the impact of mobile technology on people and society more generally, I might start with a survey of general practitioners, who are at the frontline of healthcare provision in the UK. Do they see negative issues in the population associated with specific technology use?

I suspect if they do, then it will have more to do with a lack of physical activity and poor diet, the latter of which may well be a consequence of spending too much time in front of a screen. However, having been fortunate enough to work with a number of general practitioners on related research projects, I suspect they would confirm that social deprivation is the single biggest issue for the people they see on a regular basis. Not technology 'addiction'.

We are all guilty sometimes of forgetting the bigger picture (myself included), but the above might serve as a reminder of what should be at the forefront of policy makers' minds.

One final thought. I am currently working my way through a variety of technology related literature as part of a forthcoming book (shameless plug). 

It's amazing to behold how much time psychological science has spent trying to convince itself that smartphones are damaging health, cognition and social communication. An identical web can be woven from research concerning video games and even the internet itself. The notion of 'addiction' runs through almost every narrative when psychologists should really be referring to habitual behaviour, which of course sounds a lot less clinical or dangerous.

I completely take the point that psychological science has a duty to understand and, where possible, predict or mitigate future problems that new technologies could bring.

But a sense of balance is required. 

As a consequence of this fixation, far less attention has been devoted to developing new methods following the widespread adoption of such technology. Meanwhile, many other fields (e.g., medicine and computer science) are methodologically moving out of sight.

Social psychology also has much more to offer when it comes to discussions around issues of ethics and morality regarding new and future technologies, but this will become increasingly difficult unless the discipline positively engages with new technology in the first place.

Ironically, when new technology or methods are used in a way that benefits research directly, we often find that any effects on health and social interaction largely vanish.

Thursday, 14 February 2019

Replicating habitual smartphone behaviours: 2009-2018

We recently collected more smartphone usage data to test if pen and paper scales could predict behaviour (they didn't). However, in the process we managed to replicate some of our previous results from 2015.

Specifically, the average number of smartphone pick-ups per day remains remarkably similar across both samples despite using different software and smartphone operating systems to quantify these behaviours.

These results therefore cast some doubt over the idea that Android and iPhone users differ in their usage behaviours (we previously observed some demographic and personality differences between these two groups).

Mean number of pick-ups from 2015 sample: 84.68 (SD=55.23).

Mean number of pick-ups from 2018 sample: 85.44 (SD=53.34).
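For anyone who wants to put a number on "remarkably similar": using the reported means and standard deviations, the standardised difference between the two samples can be sketched as below. Note that the sample sizes here are placeholders for illustration, not the actual Ns from either study.

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    # Standardised mean difference using the pooled standard deviation
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    return (mean2 - mean1) / pooled_sd

# Reported pick-up means and SDs; n=30 per sample is a placeholder.
d = cohens_d(84.68, 55.23, 30, 85.44, 53.34, 30)
print(round(d, 3))  # far below the conventional 'small' effect threshold of 0.2
```

Whatever the true sample sizes, a mean difference of under one pick-up against standard deviations of around 55 is a negligible effect.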

It's worth remembering that our results in 2015 were already comparable with data collected by others in 2009!

The idea that people are using their phones more doesn't really hold up to scrutiny. 

Total hours of usage did differ somewhat between the two samples, with the more youthful 2015 sample averaging 5.05 hours a day (SD=2.73). Fast forward to 2018 and this dropped to 3.9 hours (SD=1.99).

Finally, while we can't be sure, it looks like Apple might be recording and storing usage data via a feature of the operating system very similar to one that is freely available within Android devices.

Saturday, 22 December 2018

The Role of a Journal Editor

Having navigated the academic publishing system as an author, reviewer and occasional editor, I can confirm that there are inconsistencies at almost every turn*. From the variable quality of reviews to the procedures in place during article or chapter production, even within the same journal, authors and reviewers can have very different experiences.

Previously, I was led to believe that the role of an editor was clear and consistent. Editors read manuscripts, find reviewers, and make key decisions based on reviews and their own expertise. In addition, editors can help guide authors towards what they feel are the most important points that should be addressed following review. This is particularly important when it comes to clarifying the direction of a paper when reviewers express conflicting views.

However, a number of editors and associate editors, who are sometimes paid by journals, don't always act in a way that is helpful or fair to authors and reviewers. In many instances, following peer review, authors receive a 100% stock reply with no indication that an editor has even read the paper. This may not matter if all reviews pull in the same direction, but that rarely happens.

For example, imagine one reviewer is positive and recommends revisions, but another argues convincingly to reject the paper. The default option for 'non-editors' will be to reject the paper following review without explanation. Alternatively, if 2 or 3 reviewers are positive and suggest revisions, the decision will automatically become revise and resubmit, again with no explanation or guidance.

Even a single sentence to confirm that the paper has been read would add something.

The problem is that authors can be left in limbo wondering which comments are more important, especially when reviews are contradictory or one review is simply of a higher quality than another. A paper in this position will typically go back out to the same reviewers and, unless both are happy, the paper will be rejected (based on the logic outlined above).

This lack of direction can have a negative impact on revised manuscripts as the content becomes muddied when authors feel they must appease every reviewer comment. It is possible to request additional clarification from an editor, but this does not guarantee a reply.

Regardless of the outcome, if there is zero editorial input, the editor simply becomes an administrator rather than a fellow academic who plays an important role in developing the work and journal. This also poses problems for reviewers who then feel that their contributions are not worthy of further comment regardless of the final outcome.

Of course, many journals appoint excellent editors and their input ensures that authors will submit their best work to that outlet again and again. These editors have a genuine desire and duty of care that improves the work.

On the other hand, those who are new to scientific publishing may be surprised to learn that, when you lift the lid, many journals appear to have almost no editorial input. This makes me think that there may be some confusion about what is expected following an editorial appointment.

Perhaps as well as a list of predatory publishers, we also need a list of journals that have 'real' editors. That is, experts who continue to set the bar high by providing a service to the field, which should be welcomed, encouraged, applauded and rewarded.

The role of a journal editor may be changing, but when much of psychology still struggles to ensure that papers, data and associated materials are freely available, I'd rather editors said more, not less.

*Note: This is purely based on my own and colleagues' experiences publishing within psychology, health and computer science.

Tuesday, 22 May 2018

Why measuring screen time is important

Sometimes there is a disconnect between the scientific methods used to investigate a given phenomenon and the language used to describe subsequent results.

When it comes to understanding the impact of technology use on health, wellbeing or anything else for that matter, the gulf is vast. See here for a related discussion. 

This has become even more important as the UK Government is currently conducting an inquiry into the impact of screen time and social media on young people. The outcome could lead to new guidelines for the public, especially parents of young children.

Many academics have already submitted evidence that argues for a cautious approach when developing any new guidance or regulation because the evidence base suggesting any effects (good or bad) remains relatively weak. 

However, other voices have concluded that social media and screen time are a public health issue that needs to be addressed as a matter of urgency.

To me, this view ignores the methodological elephant in the room. 

The Elephant

In order to argue that social media or smartphone usage is indeed a genuine behavioural addiction and/or a societal problem, the data on which that conclusion is drawn will need, at some point, to include measures of behaviour.

Research concerning problematic smartphone or social media usage typically relies on correlations derived from estimates or surveys. These measures have some value, but evidence concerning their validity to detect problematic behaviours remains mixed. 

It's also worth noting that when correlational designs are scaled up (e.g., when Ns exceed 10,000), self-reported social media use, for example, explains very little about a person's wellbeing.

Collectively, this body of work tells us very little regarding cause and effect. 

This methodological gap is particularly frustrating when the very technology we wish to study has the added advantage of being able to accurately record human-computer interactions in real time. People were quantifying such behaviours in 2011.
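To make this concrete, here is a minimal sketch of how session durations can be derived from a screen-on/screen-off event log. The log format below is invented for illustration and is not that of any particular logging app.

```python
from datetime import datetime

# Hypothetical screen-on/off event log, as a phone OS might record it.
events = [
    ("2018-03-01 09:00:00", "on"),
    ("2018-03-01 09:00:08", "off"),   # an 8-second "check"
    ("2018-03-01 12:30:00", "on"),
    ("2018-03-01 12:45:00", "off"),   # a 15-minute session
]

def session_durations(events):
    """Pair each screen-on with the following screen-off; return seconds per session."""
    durations = []
    start = None
    for timestamp, state in events:
        t = datetime.strptime(timestamp, "%Y-%m-%d %H:%M:%S")
        if state == "on":
            start = t
        elif start is not None:
            durations.append((t - start).total_seconds())
            start = None
    return durations

print(session_durations(events))  # [8.0, 900.0]
```

From records like these, daily totals, checking frequency and longitudinal trends fall out with a few lines of aggregation, with no reliance on participants' estimates.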

In terms of present concerns, longitudinal data of this nature could be linked with other life outcomes, which might include measurements of academic performance, physical and mental health. 

Just to be transparent, I have no axe to grind or conflict of interest, but, like most scientists, I think getting to the truth is important.

This will take time, considerable effort and (some) money. 

The alternative involves destroying public trust in science by wasting taxpayers' money and indoctrinating people with divisive nonsense.

Current Work

With these issues in mind, my research group often consider how we can encourage other researchers to harness behavioural measures from smartphones. We have developed mobile applications to assist researchers, and analysis routines to process subsequent data. 

There is still a long way to go, and the rapid pace of technological development makes this even more challenging. It feels like everything moves faster than science. In the time it has taken me to write this blog post, Google have announced new tools to foster digital well-being.

That said, we recently considered how long researchers might need to collect smartphone usage data in order to understand typical patterns of behaviour. In other words, if I check my phone 30 times today, will I do the same tomorrow, in 2 days, or a week later?

Of course, there is nothing to stop us collecting such data indefinitely from a digital device, but this would be ethically questionable. 

Therefore, the development of norms regarding the type and volume of data required to make reliable inferences about day-to-day behaviour is important for the field to move forward.

We realised it was possible to consider this in more detail by conducting some additional analysis on an existing data set originally reported by our group in 2015.

Fig 1. Barcode of smartphone use over two weeks.
Black areas indicate times where the phone was in use and Saturdays are indicated with a red dashed line. Weekday alarm clock times (and snoozing) are clearly evident.

Consistency of Behaviour

Participants had their smartphone usage tracked over a period of 13 days. From this data we were able to calculate the number of total hours usage and the number of checks for each day. Checks are defined as periods of usage lasting less than 15 seconds.

First, we compared the total hours spent using a smartphone between the first and second week of data collection. This generated very large correlation coefficients of .81 and .96 respectively (Figures a and b). 

An additional analysis revealed that even 48 hours of data collection alone was highly predictive of future behaviour particularly in relation to checking (usages lasting less than 15 seconds). 

Similarly, 5 days of total usage still resulted in remarkably high correlations when compared to a subsequent week. Interestingly, no differences were observed between weekdays and weekends.
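As a rough illustration of this kind of week-to-week comparison, the sketch below correlates per-person daily check counts from two consecutive weeks. The numbers are invented for illustration, and a plain Pearson correlation stands in for whatever the published analysis actually used.

```python
import math

def pearson_r(x, y):
    """Plain Pearson correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical mean daily checks per participant: week one vs week two.
week1_checks = [40, 85, 120, 55, 90]
week2_checks = [44, 80, 115, 60, 95]
print(round(pearson_r(week1_checks, week2_checks), 2))  # high, in the region of the coefficients above
```

The point is simply that when individuals keep their rank order from one week to the next, even wobbly day-to-day counts produce very large correlations.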

So, how much data do you need to determine typical smartphone usage? Not a lot, as it happens, but this might vary across different samples. We did, however, observe that patterns of usage and checking were more consistent for participants with lower levels of smartphone use overall.

This doesn't completely rule out collecting data for longer periods of time and there are occasions where this might be useful, for example, as part of an intervention to reduce usage or track how these trends might predict subsequent health outcomes. Nevertheless, at this stage we can conclude that even small amounts of usage data can be highly revealing of typical behaviour. 

Finally, we observed that self-reported usage derived from the Mobile Phone Problematic Use Scale (MPPUS) remains unreliable when predicting any of these usage behaviours.

Moving Forward

My lab and others continue to develop and refine similar methods and analysis guidelines. We are now running a number of studies which focus on both specific application usage and the effects of smartphone withdrawal.

Despite results to the contrary, we also remain hopeful that it may be possible to validate some existing self-report scales with patterns and types of usage in the future.

Our hope is that these methods will help answer what have become ever more pressing concerns for society and social science more broadly. That is, what can screen time tell us about people, and how might these behaviours lead to positive or negative life outcomes?

I initially became involved with this area of research because it was methodologically and theoretically interesting; however, it has now also become politically charged, with mixed messages being communicated to the general public.

The discussion has become swamped with celebrities and commentators. Many have good intentions, but their arguments are driven entirely by gut instinct or anecdotal evidence.

Combined with an extremely weak evidence base, these arguments should not drive changes in policy. 


Wilcockson, T. D. W., Ellis, D. A., & Shaw, H. (Epub ahead of print). Determining typical smartphone usage: What data do we need? Cyberpsychology, Behavior, and Social Networking, 21.

Sunday, 5 November 2017

Altmetric Page Finder HTML

Altmetric collects and collates disparate information to provide a single, visually engaging and informative view of the online activity surrounding scholarly content. This can be useful for both individual researchers and institutions who want to understand where work might be having an impact beyond citations by other academics.

However, I have spoken to a few folk recently about accessing their own (and others') Altmetric scores when they don't have access to a specific link or an Altmetric paper number. The easiest way to do this is via a short snippet of HTML that includes the DOI of any paper.

Here is the HTML (DOI number is in bold):

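The snippet itself did not survive this archived copy, but assuming Altmetric's standard badge embed it looks something like the following; the `data-doi` value is a placeholder, not a real paper:

```html
<!-- Load Altmetric's badge script once per page -->
<script type="text/javascript" src="https://d1bxh8uas1mnw7.cloudfront.net/assets/embed.js"></script>
<!-- The badge itself; replace the placeholder DOI with the paper's DOI -->
<div class="altmetric-embed" data-badge-type="donut" data-doi="10.1234/placeholder"></div>
```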
Which generates the following badge and link:

This also allows for some interesting customisation options by modifying variables (e.g. type and/or mentions).

Altmetric actually provides a longer-winded version of this tutorial on their own website, which can generate the code from a few drop-down menus!

Monday, 4 September 2017

Apple Watch Series 2: A short review

UPDATE 4/09/2018 - The hardware failed after 18 months. Interestingly enough, a colleague's watch also failed around the same time.

Unfortunately, from a build quality perspective, and like many other consumer wearables, they are simply not built to last.

And so ends my Apple Watch journey.


I've owned an Apple Watch Series 2 for about 8 months now and given that I have tested and written about a fair few activity trackers in the last 18-24 months, it seemed appropriate to share a few thoughts.

Apple Watch Series 2 - I didn't own the Series 1 so can't really compare the two from a personal perspective. However, the Series 2 is waterproof and has improved battery life over the first version.


In terms of fitness trackers, it is up there with some of the best when it comes to tracking running, cycling and swimming.

When running the watch reports total time, average pace, heart rate and distance travelled.

Apple's own activity application is impressive (or depressing, depending on how you look at it) because it separates active from non-active calorie burn. Most of the energy we use simply keeps us alive on a day-to-day basis, and as a result this serves as a regular reminder of how little extra energy is actually expended through exercise. These calculations are based on heart rate and movement/distance covered.

The activity tracker application. The pink circle is movement, green represents minutes of exercise, blue is time spent standing.

Personally, I found the heart rate readings to be pretty accurate after comparing them with a Polar heart rate strap, which sits around the chest. For accurate readings, however, the watch does need to sit tight on the wrist, and sports like squash or football might pose a problem if it's not securely held in place. That said, some people have struggled to get accurate readings regardless.

Beyond Health

Apple Pay is cool and being able to pay for things just by tapping your wrist is handy. While contactless payments aren't universal and max out at £30, I am now comfortable to leave my wallet in the office at lunchtime.

Battery life is a lot better than expected. Over a weekend, provided the watch is turned off overnight, it can easily last from Friday afternoon until Sunday evening. It does use a lot more power when further away from the iPhone (at the time of writing you need an iPhone 5S in order to use the Apple Watch), and when taking exercise. For example, runs are location tracked via GPS and heart rate data is updated more regularly. This functionality can be turned off, but to me that sort of defeats the purpose of owning an Apple Watch.

Siri, Apple's voice assistant, is on board, but it hasn't ever been particularly useful and struggles a bit with my Scottish accent. It only really gets used for setting timers when cooking.

For what it is, the Apple Watch remains an excellent fitness tracker, but there is a distinct lack of applications out there to do other things. iTunes can also be controlled via the Remote app, but no applications tap into other smart home devices. Even within the fitness domain, Strava, while popular, is remarkably similar to Apple's own activity application. The main difference here is that Apple's activity application lacks a social media aspect.

My heart rate during a squash match using Apple Watch, captured by the Heartwatch application (though I made the graphic myself).

Given the amount of data generated by the Apple Watch, some more applications to visualise this would be welcome. One good example of this is Heartwatch, which has a ton of extra features that can track changes in activity and heart rate.

Beyond health, this data remains valuable for consumers and I'm surprised that more algorithms haven't been created that could improve the user experience. For example, it would seem sensible if rules could be set up so that if someone is obviously engaged in sport, an incoming call is sent straight to voicemail; if not, the call is fed to the watch as normal. Similarly, location could be used to activate certain features. In other words, there is a long way to go until these devices become as smart as the manufacturers like to claim.
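Such a rule needn't be complicated. The sketch below is entirely invented, not anything Apple actually exposes, but it shows the shape of the logic:

```python
def route_call(workout_active: bool, heart_rate: int) -> str:
    """Illustrative rule: divert incoming calls while the wearer is clearly exercising."""
    # Either an explicit workout session or an elevated heart rate counts as 'busy'.
    if workout_active or heart_rate > 140:
        return "voicemail"
    return "watch"

print(route_call(workout_active=True, heart_rate=95))   # -> voicemail
print(route_call(workout_active=False, heart_rate=70))  # -> watch
```

A real implementation would obviously need access to the watch's activity state, but the decision itself is trivial, which is rather the point.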


I guess the question people might want answered here is should I buy one?

If you want a really good fitness or sports tracker then yes, beyond that it's a bit of a brick.

In terms of an Apple product that goes beyond a fitness tracker, I am not quite convinced it is there yet. Perhaps I am being impatient, and it is worth remembering that the original iPhone didn't really pick up the baton until the second or third iteration. Like an early iPhone, the watch requires some refinement if it is ever to become an essential purchase.