Self-Care Tips to Keep You Sane: Reading for Pleasure

At the end of January I wrote about the importance of academic self-care for PhD students; I didn’t delve too far into the specifics of what I do in my downtime, and a lot of people asked for more detail. ‘It’s hard to switch off’ and ‘I find it hard to relax’ were the two phrases I encountered most frequently, so I thought I’d introduce a series of posts with more information, and recommendations, on how to give yourself a break during the inevitable stressful periods that come with doing a PhD.

This is the second installment in this ‘self-care tips to keep you sane’ series, and this week I’m talking about reading for pleasure. I read a lot, and I think it’s really important to make time to read exciting and imaginative stories that are in no way linked to your work. It gives you a chance to switch off.

A few years ago I joined my local library – it’s free and you get to keep books for a month, perfect for those of us on a budget! I’d really recommend checking out your local library – and make sure to venture further afield than your university library, otherwise you’ll end up picking up books related to your studies, which defeats the point.

I feel like I’m really making progress with my PhD, and this month I decided to treat myself. I don’t often re-read books so I don’t often buy books – they end up on shelves forever and I’d much rather borrow or swap books with other people to prevent me from building up a ridiculous collection of books I’ll never read again. I also really like surprises. Enter, Moth Box.

Moth Box is a small business set up by YouTuber Mercedes Mills. It’s a book postal service with a difference: you don’t know what books you’re getting, and each is from an independent publisher. Each box is £20, and for that you get two books fitting that month’s theme – May’s was novels – neatly wrapped in tissue paper, and a bookmark that features a quote from each of the books.

The two books I got in my box were Ties by Domenico Starnone and Star-Shot by Mary-Ann Constantine. I hadn’t heard of either, but they both sound like stories I’ll enjoy. Overall I’m really happy with my May Moth Box and can see myself ordering again in the future; great value, brilliant books I wouldn’t otherwise have found, and beautifully packaged too.

Struggling to find time to read? I often do too. Lately I’ve been turning the TV off and closing my laptop an hour and a half before I plan to go to bed; that gives me time to read for an hour or so before sorting my life out for the next day (ironing clothes, making lunch etc etc) and getting ready for bed. It’s been such a lovely way to switch off at the end of a busy day, and I think it’s made my sleep quality improve too.

‘I Just Don’t Want to be a Human Guinea Pig’ – Why Taking Part in Trials Isn’t What You Think

When I tell people that I’m doing a PhD in clinical trials methodology I’m usually greeted with one of two responses: ‘Oh right, so you’re still a student?’ or ‘Oh my god, trials? To test drugs and stuff?’. The ‘test drugs and stuff’ response isn’t usually framed in a positive light either; eventually these conversations result in mumbles of ‘human guinea pig’. So, as today is International Clinical Trials Day, I thought I’d take some time to write about why taking part in trials does not make you a human guinea pig.

Public perception of research, and a few figures

  • Estimates of the percentage of people who say they think it’s important for the NHS to participate in research vary, but are largely very high – a 2012 poll (OnePoll) gave a figure of 87% and a similar poll conducted by Ipsos MORI gave a figure of 97%
  • Only 7% of people said they’d never take part in research

So why am I having so many encounters with people using the words ‘human guinea pig’ when these positive results suggest the public are in support of research?

I think it’s the perceived risk of taking part in a trial. The word ‘trial’ doesn’t exactly reassure you that you’re signing up for something that’s unlikely to harm you – it raises questions, reinforces uncertainty, and screams ‘risk’.

This is somewhat true; we do trials because we don’t know the answer. Which treatment is ‘better’? Which is most cost-effective? Which will improve quality of life rather than simply length of life? Should we avoid surgery and just go for medical management? Is the short-term stint in hospital required for surgery better than a long-term physiotherapy plan? These are questions that staff working in our NHS have to ask themselves every day, whenever they see a patient. If there is no evidence for them to base their decisions on, then we really should be working to answer those questions in an ethical and efficient way.

Without evidence, people are exposed to harm, and the NHS is not providing the best possible care to the public. Without trials, the world of health and social care would never, ever progress.

Clinical trials aren’t always about testing new drugs

The trials unit at Aberdeen University, CHaRT (Centre for Healthcare Randomised Trials), specialises in pragmatic trials of discrete non-drug technologies or complex interventions. What does that mean? In short, usually not drugs, and hardly ever new drugs. Non-drug technologies are just that: it could be a type of scan, a device, a diagnostic tool – the list goes on. Complex interventions are a bit trickier; the Medical Research Council defines these as interventions with several interacting components. Again the variety in this category is huge: it could be an abdominal massage like in Glasgow Caledonian’s AMBER trial, cognitive behavioural therapy, or interventions aimed at groups or communities of people – basically, complex interventions are just that: complex. They’re more difficult to evaluate but very useful nevertheless.

So, if you take part in a clinical trial you will not necessarily be taking a new drug. There are lots of trials, particularly publicly funded trials, that aim to find out which of two existing interventions is most useful. By existing interventions, I mean things that are already being used in standard care – we just aren’t sure which one works best. An example would be CHaRT’s C-GALL trial, which aims to find the most effective treatment for gallstones: is it better to remove the gallbladder altogether, or to go down the route of medical management? Both of these approaches are used in the NHS today, and we genuinely don’t know which is best.

What have trials done for us?

Trials are absolutely central to our healthcare system; they impact each of us all the time – without us even realising.

On a personal note, someone close to me took part in a clinical trial a few years ago. The intervention proved to be successful and they’re still regularly receiving that treatment for free because they took part in the trial for it. That’s life-changing not only for the person taking part in the trial, but their family and friends too.

Taking a step back, Cassandra Jardine was a journalist for the Telegraph who died in 2012 from lung cancer. After her diagnosis she wrote extensively about her illness, winning the Lung Cancer Journalism Award in 2011. She took part in a trial of a lung cancer vaccine that aimed to extend her life; she knew her illness would kill her, but she wanted to do something good to contribute to the advancement of medicine, and to see if she could hang on for an additional few months. I’d really recommend you read her piece here.

Eventually she came to the conclusion that she was in the placebo group (something she describes in her article as ‘extremely rare’), but despite that, she did benefit from the trial. She said:

‘Most persuasive of all is the evidence that patients on clinical trials do better than the norm because they are monitored more closely. Instead of quarterly X-rays, I have CT scans and monthly blood tests.’

Whether or not a trial provides a direct benefit to you or a loved one, you’ll still be benefiting indirectly.

Trials have influenced clinical practice. Beta blockers and aspirin following acute myocardial infarction, calcium antagonists following non-Q-wave myocardial infarction, aspirin and heparin following unstable angina, hypertension control and lipid lowering to reduce coronary heart disease mortality… the list goes on.

Our National Health Service is admired by people around the world, and rightly so – we build in the need to evaluate interventions, we allocate public money to funding these trials, and then we change practice to ensure more people have the chance of benefiting, or fewer people are exposed to harm. If you are ever approached about taking part in a clinical trial, I urge you to give that researcher a chance. Let them talk you through the trial, weigh up the potential risks and benefits, and make an informed decision based on your own circumstances and feelings.

Take a look at #ICTD2017 and #WhyWeDoResearch to find out more about trials, taking part in research, and why research is so important.

4th International Clinical Trials Methodology Conference and 38th Annual Meeting of the Society for Clinical Trials – Liverpool, May 2017

This week I left the grey skies of Aberdeen in favour of… Liverpool. Nowhere hugely exotic, but the weather was absolutely beautiful for the 3 days I was there.

Anyway, more about why I was there. Sunday to Wednesday saw almost 1,000 delegates congregate in Liverpool for the joint 4th International Clinical Trials Methodology Conference (ICTMC) and 38th Annual Meeting of the Society for Clinical Trials (SCT). Two and a half days of people interested in trials, tackling subjects like data-sharing, registry-based trials, recruitment and retention, patient and public involvement with research, qualitative research, funding, publishing, and a tonne of other subjects besides.

I’ve been to one ICTMC before, in Glasgow in 2015, but this was a much bigger version because it was joint with the American SCT annual meeting. The days were jam packed and I came home with a notebook full of ideas. Really I think it’ll take a few weeks for me to process everything properly and start to formulate my own ideas for future research based on the priorities demonstrated at the conference.

Anyway, just a short blog post from me this week – with 3 days out of the office I’m a bit behind on my to do list! As with the SWAT workshop that we had in Aberdeen in March, I’ve consolidated the majority of my notes into a sort of mind map/cartoony page of doodles. I find that this really helps me to get to grips with what’s been talked about, and ensures that I don’t leave all my notes held captive in a notebook at the back of my desk drawers.

I shared these on Twitter earlier in the week and got a really positive response, so I thought I’d upload them here too.

What to do When You Don’t Feel Like Writing

A few weeks ago I wrote a blog post about the good things about freelancing whilst doing a PhD. On that post, Jennie from A Muddled Student asked how I got used to writing when I didn’t feel like it, so I thought it’d be a good idea to write up the techniques and methods I’ve used to make sure I get my writing tasks completed on time.

When you feel like writing, don’t stop

This one seems obvious but I didn’t use to do it, so maybe it’s worth mentioning. When you’re in the mood to write, keep writing: get ahead with tasks, write blog posts, pieces of text about what you do, summaries of journal articles, and so on. Just keep writing. I find that on one day where I’m in a good place to write, I can get really far ahead with freelance work (I work on a 3-month calendar so I know what content I need to write for weeks ahead). Not only that, if you write summaries of journal articles, experiences you’ve had or pieces of text about what you do, you can always use that text later down the line. Having existing blocks of text also removes that fear of the blank page that you might get when you’re not in the mood to write.

Make realistic to do lists

I navigate my entire life with the help of lists. Whether it’s things to do, what to read, shows to watch, podcasts to listen to, or tasks at work. Write lists for each day, tasks to be achieved over the week, and future deadlines. Make these to do lists realistic, and get into a routine of completing each task on them before you leave the office each day.

Freewrite

I was first introduced to freewriting when I attended a scientific writing course with Allan Gaw during the first year of my PhD. Freewriting is a practice that helps you get over writer’s block, increases the flow of ideas, and helps you connect themes and topics together in your writing.

With freewriting, you set a timer and put your pen to paper (I really recommend doing this with a real pen and a notebook/piece of paper – the process isn’t as beneficial when you’re typing or scribbling on an iPad etc). Until your timer goes off, you don’t stop writing. A word of warning – it’s much, much harder than you think it will be.

If you want to have a go at freewriting, I’d recommend you start with a 1-minute timed write, and then work up, minute by minute, until you reach 10 minutes. Don’t think about spelling and grammar, and if you can’t think of anything to write, simply write ‘I cannot think of anything to write’. Just keep going. Eventually your thoughts will come back and your words will begin to flow again.

On the writing course I went on, we had a few different freewriting tasks that acted as a good introduction:

  1. 1-minute timed write – write a story and include the words ‘princess’, ‘frog’ and ‘California’
  2. 2-minute timed write – write about your research area, what you do, why you like it, what made you focus on this specific area

After these tasks you can then begin to make your freewriting more focussed. For example, if you need to write a conference abstract, focus on that with a 5-minute timed write, and then work to edit and craft the text you’ve come up with.

At the beginning of my PhD/freelancing balance, I only worked with lists. It worked to a certain extent, but if I wasn’t in the mood to write I’d find myself writing right up until the deadline, and not enjoying the process as a result. After I was introduced to freewriting I used that for a while, and now I find it much easier to write when I need to, rather than when I really want to.

What tips and tricks have you picked up to help you write even when you’re not in the mood to? Leave comments below and share your ideas!

#365papers April Update

In my first post on this blog, I set myself 3 PhD-related goals for 2017. One of those goals was to read more widely, and more frequently, and I decided that doing the #365papers challenge would be a good way to do that.

This month’s reading has been good! After a slow March, I was right back into reading regularly and broadly. I chose to read a lot of these papers as I’m starting to write the literature review for my thesis (i.e. my least favourite thing to write, probably ever), so I wanted some relatively general pieces and some more focused work looking at specific aspects of recruitment to trials. I’m also slightly freaked out by the fact that it’s now the end of April and we’re going into summer – where has this year gone?! Time to step it up a gear and get this lit review written!

April’s reading:

  1. Statistics and ethics in medical research: III How large a sample
  2. Factors associated with clinical research recruitment in a pediatric academic medical center – a web-based survey
  3. False hopes and best data: Consent to research and the therapeutic misconception
  4. Influence of clinical communication on patients’ decision making on participation in clinical trials
  5. Sharing interim trial results by the Data Safety Monitoring Board with those responsible for the trial’s conduct and progress: a narrative review
  6. Agreement of treatment effects for mortality from routinely collected data and subsequent randomized trials: meta-epidemiological survey
  7. Why should I do research? Is it a waste of time?
  8. Avoidable waste in the production and reporting of research evidence
  9. An unfinished trip through uncertainties
  10. Patients’ consent preferences for research uses of information in electronic medical records: interview and survey data
  11. Time to publication for results of clinical trials
  12. Reasons for non-recruitment of eligible patients to a randomised controlled trial of secondary prevention after intracerebral haemorrhage: observational study
  13. Increasing value and reducing waste in biomedical research regulation and management
  14. The Guinea Pig Syndrome: Improving clinical trial participation among thoracic patients
  15. Accrual to cancer clinical trials: Directions from the research literature
  16. Barriers to participation in clinical trials of cancer: a meta-analysis and systematic review of patient-reported factors
  17. Why patients enroll in clinical trials: Physicians play a key role
  18. Recruitment and retention of participants in randomised controlled trials: a review of trials funded and published by the United Kingdom Health Technology Assessment Programme
  19. Strategies designed to help healthcare professionals to recruit participants to research studies
  20. Effective recruitment strategies in primary care research: a systematic review
  21. Mexican-American perspectives on participation in clinical trials: a qualitative study
  22. Medical research: missing patients
  23. Barriers to recruiting underrepresented populations to cancer clinical trials: a systematic review
  24. A nudge toward participation: improving clinical trial enrolment with behavioral economics
  25. The costs of conducting clinical research
  26. Prospective preference assessment: a method to enhance the ethics and efficiency of randomised controlled trials
  27. Clinicians’ views and experiences of offering two alternative consent pathways for participation in a preterm intrapartum trial: a qualitative study
  28. Lay perspectives: advantages for health research
  29. Random allocation or allocation at random? Patients’ perspectives of participation in a randomised controlled trial
  30. Lay public’s understanding of equipoise and randomisation in randomised controlled trials

ROBINS-I: My Thoughts and Experience

I’ve been meaning to write this post for a few months, so I’ll warn you – it’s going to be long. Since I realised I’d be using the ROBINS-I (Risk Of Bias In Non-randomised Studies – of Interventions) tool for my systematic review I have been searching for people, blog posts, and snippets of experience via Twitter to tell me how people have found this tool, and I didn’t find much – probably because the tool is so new. I did find a brilliant blog post from the Methods in Evidence Synthesis Salon (University of Bristol), which explains why we need a risk-of-bias tool for non-randomised studies, and gives an overview of the domains of bias that are assessed. I’d recommend reading that post before you carry on with this one if you’re new to ROBINS-I – you can find it here. The lack of experiential posts got me thinking though – if I was looking for advice/guidance/stories of experience, surely others would appreciate that too? So, given that I couldn’t find a whole lot of detail, here’s my two cents on ROBINS-I.

In this post, I’d like to give an overview of my experiences of using ROBINS-I (I will stress again that these are my experiences, if you’re going to use the tool for wildly different subject matters then my thoughts may not translate, and even if you use the tool for very similar studies you may still disagree – it’s all good), and then some ideas on what I think the tool does really well, and what it could do better.

First impressions

As a very new systematic reviewer (i.e. this was my first) I had no pre-conceived thoughts or views on ROBINS-I; it was just another thing for me to learn how to do – just like protocol writing, abstract screening and data extraction had been. Saying that, I’ll admit that I was a bit hesitant when I first downloaded the ROBINS-I PDF. I had expected the tool to be 2 or 3 pages at most, but this was 22 pages. I then looked immediately for the guidance document, which was a further 53 pages. To reinforce – that’s a lot of pages to get your head around. Looking further at the tool itself calmed me down a bit; the domains were nicely split and well defined, and the tool itself contains a lot of guidance within it.

Using ROBINS-I

Right, using ROBINS-I. As I said, my first impressions were a mix of hesitation and ‘I’m sure this will be fine’, but when I went into the first meeting with my Supervisor to talk ROBINS-I tactics, I went right back to hesitant. He’d never used the tool either, and we were both sat with the guidance document, a print-out of the tool, and a huge stack of studies to assess – all covered in scribbles and highlighted areas that were fighting for our attention. My thought process went something like this: ‘Yep, bit nervous about this now – how long is this going to take? Will I ever get all this done before I have to submit my thesis? This is probably going to kill me.’

I’ll say upfront, the whole process of using the ROBINS-I tool to assess risk-of-bias for 103 included studies was not as much of a nightmare as I thought it would be. We (my Supervisor and I) did all of the risk-of-bias assessments together – and when I say together, I mean we sat in a room and talked through the entire assessment process. The decision to do the assessments in a pair was made for a few reasons: 1) neither of us had used the tool before, so it was good to talk through each domain, challenge each other and then reach agreement; 2) the time it would have taken for each of us to do risk-of-bias assessments individually and then meet up to discuss discrepancies would have meant the process took at least double the amount of time, and with 103 studies that wasn’t workable; 3) honestly, I was a bit nervous.

The first study we assessed took a relatively long time. I gave my Supervisor an outline of the study, he looked over the completed data extraction form, and we talked through any flaws we could see in the study design. After that we went through the ROBINS-I tool domain by domain, making sure to refer to the guidance when we needed clarification. I also made notes throughout this process, which was invaluable when trying to ensure consistency between assessments. I’ll give you an example: if we pulled one study down to moderate risk-of-bias in the ‘classification of interventions’ domain, I’d write down why. That would ensure that the next time we saw the same flaw in a different study, we’d be sure to pull it down to moderate too.
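Those notes were just handwritten, but the idea behind them is simple enough to sketch in code. This is a minimal, purely hypothetical illustration (none of the names or flaw descriptions below come from the tool or from my review) of keeping a running log so that the same flaw always gets the same domain-level judgement:

```python
# Hypothetical sketch of a consistency log for risk-of-bias assessments:
# the first time we rule on a flaw, record the domain, the judgement and why;
# later studies with the same flaw can then be given the same judgement.

consistency_log = {}  # flaw description -> (domain, judgement, rationale)

def record_decision(flaw, domain, judgement, rationale):
    """Store the judgement we settled on for a particular flaw."""
    consistency_log[flaw] = (domain, judgement, rationale)

def look_up(flaw):
    """Return an earlier ruling on this flaw, if we've seen it before."""
    return consistency_log.get(flaw)

# Example entry (invented for illustration only)
record_decision(
    flaw="interventions classified retrospectively from routine records",
    domain="classification of interventions",
    judgement="moderate",
    rationale="classification could plausibly be influenced by knowledge of outcomes",
)

print(look_up("interventions classified retrospectively from routine records"))
```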

Once we were happy with the first assessment the second took less time, and the third less still.

After about 10 assessments it was clear that the studies we were looking at were falling down in similar places, and I made a sort of crib sheet (example on the right). This was how a typical study came out for us; obviously not all of them did, but it was a good way to build a loose structure. Things sped up after that. We’d arrange to meet for one or two hours at a time every week, and we got through the assessments much quicker than we first anticipated. When they were all done my Supervisor provided baked goods in celebration, I think that helped.

Advice for future users

  • Do your risk-of-bias assessments in pairs if possible
  • Write everything down – yep, that’s everything underlined and in bold – if you don’t do this you’ll be really angry at yourself later
  • Make a loose crib sheet after you’ve got to grips with the assessment process, tweak it until you’re happy, and then apply to the rest of your studies
  • Invest in highlighter pens, and lots of them – highlighting specific parts of your documents will ensure you don’t forget where there are flaws in the study design, and you can highlight the tool itself so you can see your usual ‘path’ through it

What ROBINS-I is really good at

The studies that we were looking at were not of particularly good quality. We were very open right from the protocol stage (screenshot on the left) that the collated data may remain at low or very low quality. That made me panic a bit; what was I going to do with a big pile of poor-quality studies?! ROBINS-I provided a way to distinguish between the poor-quality studies and the not-so-poor-quality ones. Using the tool helped us to create a quality gradient within the pile, which (thankfully) prevented me from hating the process of writing the review up. I say that now – I’ve only just started writing the results, so there’s still time yet.

The length of the tool wasn’t a big deal for me. As I said earlier, at the beginning of the process it really intimidated me, but the judgements you need to make are heavily guided within the tool itself, even without the separate guidance document. There are lots of ‘if you answered yes to X, go to Y’ instructions, which means you never answer every question within the 22 pages, and the entire process speeds up considerably because you don’t need to keep checking what each question/judgement means in minute detail – the tool holds a lot of information itself.

What changes I think ROBINS-I would benefit from

  • It’s longer than it needs to be

Let’s tackle the obvious thing first. The tool is really long, and whilst the guidance contained within it is good and it’s relatively easy to navigate, the process of doing an assessment could be very time-consuming. My review has only one outcome that we could apply ROBINS-I to, but for reviews of non-randomised clinical studies, especially those that involve multiple outcomes, this is going to take an age.

  • What’s the difference between ‘yes’ and ‘probably yes’, ‘no’ and ‘probably no’, and ‘probably yes’ and ‘probably no’?

I know that the judgements you make throughout each and every domain in the tool are subjective, but the nuances between these responses make them even more subjective, which I’m not sure is a good thing. In the older version of the risk-of-bias tool for randomised controlled trials, the responses were simply ‘yes’, ‘no’ and ‘unclear’. That seems like an easier route to ensuring consistency between reviewers. As well as that, the ‘probably yes’ and ‘yes’ responses, like ‘probably no’ and ‘no’, tend to result in the same judgement for that domain anyway, so I’m unsure what these subtleties are adding to the judgement itself.

Some clarification on the need for these additional degrees of judgement would be great; if they’re not adding much to the final judgement outcome then they could either be taken out, or at least if people know the finer judgements don’t have a huge impact, they won’t agonise over their decision-making.

  • When should you complete the optional question, ‘What is the predicted direction of bias due to selection of participants into the study?’ and how?

This one is a weird one for me – how do you make that judgement, and what is it adding to the process? Personally, I don’t think I’d feel comfortable saying that the direction of bias could be characterised as favouring the experimental arm or the comparator. In my (perhaps incorrect – feel free to discuss!) view, the fact that the study is at risk of bias means just that: it’s too difficult to tell what the direction of that bias is, and it ends up being another gut judgement that you can argue either way.

  • Is one ‘serious’ really the same as four ‘serious’ judgements?

This was my main problem with the tool. If an overall risk-of-bias judgement using ROBINS-I comes out at ‘serious’, that means the study is judged to be at serious risk of bias in at least one domain, but not at critical risk of bias in any domain. That means one ‘serious’ domain and four ‘serious’ domains equate to the same overall judgement. When I was thinking about this I decided to look at it in a completely out-of-context example: imagine you’re a child who gets a detention once over the course of an entire school term – if instead you get detention five times in the space of one week, is the punishment or judgement from your parents/teachers the same? I wouldn’t have thought so. I got detention once (and it really was only once in my entire school career); my parents weren’t very happy, but it wasn’t something that they were particularly worried about. If I’d come home with detention every week though, I’m pretty sure I’d have been grounded. See what I mean?
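To make that concrete, here’s a minimal sketch of the rule as I understand it: the overall rating is driven by the single worst domain-level judgement, so the number of ‘serious’ domains disappears from the final label. This is my own illustration, not code from the ROBINS-I authors, and the domain ratings are invented:

```python
# Sketch of how ROBINS-I's overall judgement collapses domain-level ratings:
# the overall rating is simply the worst judgement across domains,
# so how many domains are 'serious' makes no difference to the final label.

SEVERITY = {"low": 0, "moderate": 1, "serious": 2, "critical": 3}

def overall_judgement(domain_judgements):
    """Return the worst (most severe) judgement across all domains."""
    return max(domain_judgements, key=lambda j: SEVERITY[j])

# Invented examples: seven domain-level ratings per study
study_one_serious = ["low", "low", "serious", "moderate", "low", "low", "low"]
study_four_serious = ["serious", "serious", "serious", "serious", "low", "low", "moderate"]

print(overall_judgement(study_one_serious))    # 'serious'
print(overall_judgement(study_four_serious))   # 'serious' – identical overall rating
print(study_one_serious.count("serious"),      # 1
      study_four_serious.count("serious"))     # 4 – the detail the overall label hides
```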

This is more tricky because all of my studies started with a ‘serious’ judgement in the confounding domain, meaning they had no chance of redemption. We knew they were all going to be at serious risk-of-bias due to confounding from the type of studies they were, so it was the other domains that allowed us to see which studies were truly of poor quality.

Have you used the ROBINS-I tool yet? What did you think? I’d really like to hear your thoughts on it, and I’m happy to answer any questions you have on my experiences. When a new tool comes out it’s always a bit tricky to navigate, and I think speaking to others and listening to their thoughts and experiences is invaluable. Leave a comment and let’s get talking.