#365papers September Update

In my first post on this blog, I set myself 3 PhD-related goals for 2017. One of those goals was to read more widely, and more frequently, and I decided that doing the #365papers challenge would be a good way to do that.

I ended last month’s #365papers update by saying ‘hopefully September’s reading won’t be quite so late as August’s was…’ – and here I am 13 days late. September was a really busy month and though I was reading, it was snippets and abstracts and posters from conferences, rather than entire papers. I’ve now caught up – and I’m determined to make sure that October’s update is back on track time-wise!

This month’s reading has been a big mix of things because I’m working on my literature review and also getting involved with some new projects. I’ve really enjoyed this month’s reading – when I had time to do it, at least – so hopefully there are some interesting papers in this list for others too.

September’s reading:

  1. The ethics of underpowered clinical trials
  2. The ethics of underpowered clinical trials
  3. Informing clinical trial participants about study results
  4. Women’s views and experiences of two alternative consent pathways for participation in a preterm intrapartum trial: A qualitative study
  5. Recruiting patients as partners in health research: a qualitative descriptive study
  6. Identifying additional studies for a systematic review of retention strategies in randomised controlled trials: making contact with trials units and trial methodologists
  7. Methods for obtaining unpublished data
  8. Clinical features of Parkinson’s disease patients are associated with therapeutic misconception and willingness to participate in clinical trials
  9. Health research participants are not receiving research results: a collaborative solution is needed
  10. Health research participants’ preferences for receiving research results
  11. Why is therapeutic misconception so prevalent?
  12. Recommendations for the return of research results to study participants and guardians: a report from the children’s oncology group
  13. Oncology physician and nurse practices and attitudes regarding offering clinical trial results to study participants
  14. Search for unpublished data by systematic reviewers: an audit
  15. Patient and public involvement in data collection for health services research: a descriptive study
  16. Health researchers’ attitudes towards public involvement in health research
  17. Patients’ and clinicians’ research priorities
  18. Public involvement at the design stage of primary health research: a narrative review of case examples
  19. The impact of patient and public involvement on UK NHS health care: a systematic review
  20. Involving South Asian patients in clinical trials
  21. No longer research about us without us: a researcher’s reflection on rights and inclusive research in Ireland
  22. Willingness to participate in pragmatic dialysis trials: the importance of physician decisional autonomy and consent approach
  23. How important is patient recruitment in performing clinical trials?
  24. Recruiting hard-to-reach subjects: is it worth the effort?
  25. Fundamental dilemmas of the randomised clinical trial process: results of a survey of the 1,737 Eastern Cooperative Oncology Group investigators
  26. The research-treatment distinction: A problematic approach for determining which activities should have ethical oversight
  27. Leaving therapy to chance
  28. Use of altered informed consent in pragmatic clinical research
  29. A framework for analysis of research risks and benefits to participants in standard of care pragmatic clinical trials
  30. Public engagement on global health challenges

Health Advice Overload: What Should We Believe?

This article, written by Francesca Baker, featured in the September issue of Balance magazine. I spoke to Francesca whilst she was writing this piece, and she has included a few quotes from me, so I’ve republished it here with permission. Make sure you take a look at her blog and Twitter account to keep track of future articles from Francesca – she writes across a diverse range of topics.


The sheer volume of data available when trying to decide what’s good and bad for your health is overwhelming. So how do you know what to believe?

This is a world with easy access to a huge amount of information. Just about everything you could possibly want to know is available at the touch of a button, from what to eat or how much exercise to do, to the best way to raise a child, where to invest your money or who to vote for.

You want to make informed decisions and you’ve never had more information at your fingertips. Trouble is, it can actually make life really confusing.

If you’ve ever been unclear about whether butter is actually good or bad for you, tried to ascertain if the antioxidants in wine outweigh the hangovers, or ‘hacked’ your sleep to achieve a solid eight hours only to discover that seven hours is, in fact, what you should be aiming for, you’re not alone.

A 2014 study in the Journal of Health Communication: International Perspectives examined the effects of conflicting media information about fish, coffee, red wine and supplements.

The report raised ‘concern that exposure to contradictory health information may have adverse effects on cognition and behaviours.’ The more information people were exposed to, the higher the level of confusion they reported – which in turn led them to make the wrong decisions.

Not to mention that evidence changes all the time, as more scientific discoveries are made. It’s difficult to believe that smoking was once deemed ‘healthy’ and 1950s adverts for cigarettes featured doctors encouraging the public to smoke.

In fact, in 1980, there were only seven dietary guidelines which Americans were encouraged to follow; by 2005, that had swelled to more than 40.

It’s not about quantity of information – the abundance of evidence can be empowering – but much depends on our ability to scrutinise its quality and how useful it is.

THE RANKING OF EVIDENCE

According to scientists Mark Petticrew and Helen Roberts in a study published in the BMJ, there is a ‘hierarchy of evidence’. They outline seven different levels of study, ranking them based on effectiveness, process, salience, safety, acceptability, cost-effectiveness, appropriateness and satisfaction. At the top – the most rigorous and accurate – are systematic reviews and randomised controlled trials, followed by cohort studies, observational studies and surveys, through to testimonials and case studies.

You see, it’s not only the type of evidence that matters, but where it comes from.

Dietary guidelines are drawn up by governments who also want to keep food manufacturers in business. Studies aren’t cheap to run and are often funded by parties with a vested interest in a positive outcome for their products.

The American Diabetes Association, for example, is one of many health groups which get funding from fizzy drink manufacturers – Time magazine reported last year that between them, Coca Cola and Pepsi gave money to 96 health groups in the US.

A study of 206 pieces of research examining the ‘Relationship between Funding Source and Conclusion among Nutrition-Related Scientific Articles’ found that those sponsored by a food or drink manufacturer were four to eight times more likely to show positive health effects from consuming those products.

Often health claims or scientific breakthroughs are reported in the media without context. Heidi Gardner, PhD researcher and science communicator, believes ‘poor quality science is easily disseminated broadly and good quality science gets minimal coverage because researchers are open about the limitations of the work they’ve done.’ We want conclusive answers, and for science to provide them, but ‘that just isn’t possible with decent quality research – the best we get is ‘yes or no for now’.’

Helen West and Rosie Saunt of the Rooted Project – a scheme they co-founded when they ‘became tired of the nutri-b****cks filling our social media feeds’ – stress the importance of looking at the whole body of evidence, rather than only the evidence that supports your personal beliefs. They see big problems in the health and wellness industry, where qualifications are not regulated in certain fields, but recognise that the public are starting to understand the importance of evidence-based nutrition and are ‘demanding credibility from the industry’.

THE SIGNAL OR THE NOISE

Rob Briner is scientific director at The Center for Evidence-Based Management. His advice is to think widely and deeply. ‘It is essential to get evidence from a range of different sources… because using information from just one source means it is more likely we will be using information that is either limited or biased, or both. The second thing we need to remember is to judge the quality of the information or evidence we obtain.’

It has never been easier to share your thoughts with the world via the internet. Technology means anyone can have a voice. While there are enormous advantages to this, it’s difficult to separate real expertise or verifiable news from opinion – to ‘hear the signal through the noise’, as Rob puts it. We’re also human: we tend to believe things because other people do, and we fall prey to confirmation bias, searching out information consistent with beliefs we already hold.

Dr Joseph Reddington is director of EqualityTime, a charity using critical thinking to solve social problems. He’s also active in London’s ‘Quantified Self’ movement which is based on daily self-tracking and says technology offers a chance to become your own expert. ‘Being able to fact check in real time empowers normal people with just enough truth to fight back,’ he says.

As a yoga teacher specialising in prenatal and baby yoga, Hayley Slatter aims to help individuals find their own sense of wellbeing. Even with experience as a physio, a Master’s in neuropsychology and additional qualifications in yoga and Pilates, she finds the field overwhelming. The sheer number of yoga teachers demonstrates the versatility of a yoga practice, but when you add endless articles, so-called expert bloggers and what Hayley calls ‘Instayogis’ showing the benefits of particular poses, classes and even nutrition, it’s difficult to know which practice is right for you. ‘I believe the common theme through all these yoga types is that a true practice requires a degree of self-awareness,’ she says.

Heidi Gardner agrees: ‘people tend to tune out of their own bodies in favour of trying to find evidence for what they should or shouldn’t be doing.’ She has been working with a nutritionist since ‘feeling overwhelmed’ by all the healthy living ‘evidence’ she was faced with. ‘I was relying on claims I’d seen to tell me what was healthy,’ she says. Stepping back from soaking up all that ‘evidence’ has made her happier and more relaxed – and probably healthier, too.

So, how do we gain that self-awareness? We might have to accept that there is no single answer. We’re all human, and after we’ve read widely and deeply, asked critical questions and considered all the evidence, sometimes the only thing to do is take a deep breath and jump into what feels right for you.

FIND YOUR BALANCE:

HOW YOU CAN APPLY EVIDENCE TO YOUR OWN LIFE

Search for the best available evidence. As well as a degree of quantity, you need quality. Who wrote it? What do they have to gain? What is their experience?

Play the ‘why’ game and approach what you read and hear with a dose of healthy scepticism. ‘Asking “why?” repeatedly and focusing on making better-informed and not perfect decisions’ is important, says Rob Briner of The Center for Evidence-Based Management.

IS IT IMPORTANT?

Use Petticrew and Roberts’ idea of salience, or how important something is. Basically, does what you’re investigating even matter? And why? Do we actually care about taking 10,000 steps a day, or whether we have 35 or 40 grams of protein? In the grand scheme of our lives, how much does it really matter?

THREE WISE THOUGHTS 

The Rooted Project has three questions to ask:

‘Is the claim based on a person’s story, or a scientific study?’ If it’s an anecdote, you can be pretty certain it’s not a fact and probably not applicable to the whole population.

‘Is the newspaper headline referring to one study or multiple?’ Single studies do not change public health advice.

‘Is it a human or animal study?’ You are not a mouse, rat or monkey. You can’t extrapolate data from animals to humans.

LISTEN TO YOUR HEART

Remember that intuition is itself a form of evidence. If your gut is telling you something, then you should listen to it. The more practised we become at doing this, the more we will learn to trust our own instincts and develop self-awareness.

Introducing: Science On A Postcard

Alongside my PhD, I’m a freelance copywriter, currently working with 5 clients on a weekly basis. For some reason I decided that I just wasn’t busy enough, so I’ve also set up a little Etsy shop where I’m selling science postcards and prints. That little Etsy shop is called ‘Science On A Postcard’, and it’s got its own Instagram page too.

How did Science On A Postcard start?

When I was younger I didn’t think I’d be a scientist – I planned on studying graphic design. I still like to doodle, and even throughout my science life I’ve injected creativity; I find drawing really relaxing. At the start of the year I started drawing, scanning drawings into my laptop, and then messing about with them using Adobe Illustrator. Fast forward to a month or two ago, and I found myself really, really wanting to do this more regularly. Nothing career-changing – I just found it a really enjoyable way to get some head-space whilst communicating science at the same time. I bought an iPad Pro and Apple Pencil, and decided I should probably do something to encourage myself to keep up this little doodling habit I’d built – and Science On A Postcard was suddenly a thing!
In fact, I was at London City Airport waiting for my flight back to Aberdeen after the ABSW’s Science Journalism Summer School when I set up the Instagram account and started making things a bit more solid.

Currently I have two prints/postcards designed and listed in my shop. I opened the shop last night and had a whole 2 sales before I went to bed – super exciting! I’m not looking to turn this into a job or a substantial money-maker; I just really like doodling science. I also really love getting mail, so this seemed like a cute little hobby to keep going in my spare time.

Over the next few months I want to design more prints/postcards that show other types of scientists – qualitative researchers, chemists, geologists, geneticists, planetary scientists, and so on. Eventually I’m hoping there’ll be a whole range of postcards that can be used to briefly show what different scientific careers are like.

To see more from Science On A Postcard visit the Instagram page, or take a look at the Etsy shop. If you have ideas for science careers that you’d like to see presented in postcard format, please do get in touch and let me know!

Breaking Down The Silos – Global Evidence Summit 2017

I’m currently in Cape Town for the Global Evidence Summit; no doubt there will be a few blog posts over the coming days/weeks about what I’m up to over here, what I’m learning, and what it’s like to travel so far from home as part of my PhD. I wanted to write this post whilst it’s fresh in my mind – so it’s a little more research focussed than the exciting travel pictures I hope to bring you soon!

One of the main reasons I wanted to come to the Global Evidence Summit was for one of the themed days. Each of the conference days starts with a plenary, and then there are threaded sessions that allow you to explore a subject in more depth throughout the rest of the day. After a bit of a nightmare with flight delays and missed connections, I missed the first day of the conference. The second day was always going to be my highlight though; day 2 focussed on the ‘Evidence Ecosystem’ and on improving the way the entire ecosystem of evidence generation works together, so that evidence can lead to improved patient care.

The plenary session was absolutely brilliant, and certainly did not disappoint.

BREAKING DOWN THE SILOS: Digital and trustworthy evidence ecosystem

This plenary will set out to understand how explicit links between actors are needed – and now possible – to close the loop between new evidence and improved care, through a culture for sharing evidence combined with advances in methods and technology/platforms for digitally structured data.

The session featured talks from Chris Mavergames, Karen Barnes, Greg Ogrinc and Jonathan Sharples, and gave a really brilliant overview of what the evidence ecosystem is, the actors within it, and how it all links together. The cycle below gives you an idea of what I’m talking about (image taken from the MAGIC Project website).

My work, which focusses on improving the efficiency of clinical trials, fits into the ‘Produce evidence’ section at 9 o’clock on the cycle above. As passionate as I am about improving the production of primary research evidence, it is of absolutely no use whatsoever if the rest of the ecosystem doesn’t function properly. So, we need to improve the way we generate primary research evidence, but also ensure that that evidence is synthesised and reviewed effectively and efficiently. In turn, that information can be disseminated to clinicians and then to patients, the evidence can be implemented, and we can evaluate it and use it to improve practice. The cycle then begins again.

This plenary session then led to threaded sessions throughout the rest of the day – the first of which I was given the opportunity to co-chair. This was my first experience of co-chairing anything, so I was a bit nervous, but mainly really excited to listen to the speakers’ presentations and then be able to take part in the discussion that followed.

Threaded session 1 of Thursday was titled ‘The inefficiency of isolation: Why evidence providers and evidence synthesisers can break out of their silos’, and focussed on the journey from producing evidence to synthesising evidence.

The 4 talks in this session were:

  1. The problems of poor and siloed primary research – a funder’s view (Matt Westmore).
  2. New ways to access primary research data (Ida Sim).
  3. Data journeys from studies to accelerated evidence synthesis (Anna Noel-Storr).
  4. Connecting primary research and synthesis in education – experiences of operating in a linked system (Jonathan Sharples).

Honestly, these talks linked together really beautifully, and gave me lots to think about in terms of what I can do to make sure my research slots into the wider evidence ecosystem in a more cohesive way.

Matt gave a funder’s perspective on the problem of disconnected research, and explained what the National Institute for Health Research (NIHR) in the UK is doing to combat these problems. He gave Trial Forge a shout out too, which is always welcome!

Ida’s talk showcased Vivli and explained why it’s so important to share clinical trial meta-data to ensure that we’re not duplicating effort. I’d never heard of Vivli before I started researching the speakers on the panel, so this was a really interesting session.

Anna gave a fantastic talk on the journey that data takes from studies all the way through to evidence syntheses – the image below shows a slide she used to explain what evidence synthesis is used for. I thought it was a really good way to communicate the concept, so I’ve included it here.

Anna is also heavily involved with Cochrane Crowd – a platform that allows volunteers to help categorise evidence so that evidence syntheses can be produced more efficiently. It’s a brilliant platform, and one that I’ve contributed to and will continue to contribute to (probably when I have more time post-PhD though!).

Jonathan then impressed us all with his experience of doing research in the UK education sector. Education is clearly an entirely different beast from healthcare, but the work that Jonathan and the rest of the team at the Education Endowment Foundation have done really is astounding. I think there’ll be lots of healthcare researchers dissecting their work in an effort to translate some of their successes into the world of evidence-based healthcare.

I’m not going to go into detail about the rest of the threaded sessions because I’ll be here all day, but as expected they were great. The way that this conference has covered different topics and themes has been so useful, but also totally overwhelming – there is just so much going on! If you’d like to know more about the other threaded sessions, and the Global Evidence Summit as a whole, take a look here.

Why Clinical Trials Should Be At The Forefront of Public Science Knowledge

I originally wrote this post as a guest feature on ‘An Anxious Scientist’. The piece was originally published at the beginning of August, and I’ve republished it here with permission from Rebecca, who runs An Anxious Scientist. Make sure you take a look at her blog for brilliant posts explaining complex science concepts in engaging ways, showcasing scientists in all fields, and of course sharing some of Rebecca’s own PhD experiences too.


Public engagement with science is not a new concept, but with the rise in social media usage and the pressure on scientists to prove the impact of their work, the world of science communication is advancing at a rapid rate. Many early career researchers now contribute to blogs, Instagram and Twitter profiles with the aim of disseminating their research, breaking down stereotypes, and ultimately getting the public excited about science. The opportunities that science communication opens up for both academics and public audiences are huge. It’s difficult to see a downside: academics work to improve the way they communicate, and the public finds out more about the research that’s going on around them – or, in some cases, with them.

The diversity of fields covered by science communicators is vast – but is there room for everyone?

I’ll say up front that I think good quality science communication from any field of research is a good thing; but as a clinical trials methodologist, clinical trials in the public sphere of scientific knowledge hold a different level of importance for me. That’s not to say that other types of science are not important, just that trials are a topic I really feel the public could benefit from knowing about.

My work focusses on improving the way we do clinical trials – in particular, how we recruit participants into clinical trials efficiently. Efficiency here could mean lots of things – cheaper, faster, less burden on patients, less administrative work – I’m interested in making the process better, whatever ‘better’ means.

Each trial has statisticians who process the huge amounts of data that trials generate, but well before results start coming in, these statisticians are charged with calculating how many people need to take part in the trial for the results to be robust. This is important because trials that recruit too few participants can produce unreliable results. Current estimates suggest that around 45% of trials globally don’t recruit enough people.
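
To make that concrete, here’s a minimal sketch of the kind of sample size calculation trial statisticians run before recruitment starts. All of the numbers (the event rates, the 5% significance level, the 80% power) are invented for illustration, not taken from any real trial:

```python
# A minimal sketch of a pre-trial sample size calculation for comparing
# two proportions. All numbers here are invented for illustration.
from scipy.stats import norm

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Approximate participants needed per arm (normal approximation,
    two-sided test of two proportions)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # 1.96 for a 5% significance level
    z_beta = norm.ppf(power)           # 0.84 for 80% power
    p_bar = (p1 + p2) / 2              # pooled proportion
    return ((z_alpha + z_beta) ** 2 * 2 * p_bar * (1 - p_bar)
            / (p1 - p2) ** 2)

# Hypothetical trial: 40% of control patients improve, and we want to
# detect an increase to 55% in the treatment arm.
print(round(n_per_arm(0.40, 0.55)))  # ~174 per arm, ~348 in total
```

The smaller the difference you want to detect reliably, the more participants you need – which is exactly why falling short of the recruitment target matters.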

Clinical trials are the types of studies that we want our healthcare system to be based on. Trials are able to differentiate between an intervention causing an outcome, and an intervention being correlated with an outcome. In simple terms, they can answer questions like ‘does taking a paracetamol get rid of my headache, or would my headache have disappeared without it?’
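
If you want to see that distinction in numbers, the toy simulation below (every figure is invented) shows how a hidden factor – here, underlying health – can make a treatment with no effect at all look beneficial in observational data, while random assignment removes the distortion:

```python
# Toy simulation of correlation vs causation – every number is invented.
# The 'treatment' does nothing; only underlying health affects recovery.
import random

random.seed(1)

people = []
for _ in range(100_000):
    healthy = random.random() < 0.5
    people.append({
        "healthy": healthy,
        # Observational world: healthier people are more likely to
        # choose the treatment (70% vs 30%).
        "obs_treated": random.random() < (0.7 if healthy else 0.3),
        # Trial world: a coin flip decides who gets the treatment.
        "rct_treated": random.random() < 0.5,
        # Recovery depends on health alone (80% vs 40%).
        "recovered": random.random() < (0.8 if healthy else 0.4),
    })

def rate(group):
    return round(sum(p["recovered"] for p in group) / len(group), 2)

for label in ("obs_treated", "rct_treated"):
    treated = [p for p in people if p[label]]
    untreated = [p for p in people if not p[label]]
    print(label, rate(treated), "vs", rate(untreated))

# obs_treated ~0.68 vs ~0.52 – the useless treatment looks effective
# rct_treated ~0.60 vs ~0.60 – randomisation reveals the truth
```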

Understanding the strengths and limitations of trials, and being able to unravel what features differentiate a reliable trial from an unreliable one, would empower the public.

Take the example of the Alzheimer’s drug LMTX that caused these headlines in July 2016:

With those headlines in mind, take a look at these articles that are about that exact same drug, LMTX:

In this case, newspapers with high readership figures and easy access to the public told of a drug that would halt Alzheimer’s disease – and the public could be forgiven for thinking that the problem of Alzheimer’s was now solved. Scientific media and news outlets with smaller readerships provided a more balanced view of the trial that tested LMTX.

Surely this means newspapers should be reporting better, rather than putting the onus on the public?

News outlets like The Sun, The Daily Mail and The Times are not scientific experts; their reporting on health research could be discussed in another article entirely! What I do think is important is that the public feel equipped to critique these sensationalised pieces in order to get to the root of the story – the facts.

All of the articles state that 891 people were enrolled in the trial, and that the majority were also taking treatments that have already been approved to help relieve Alzheimer’s symptoms. 15% (144) of the 891 people were taking only the trial drug (LMTX) or a placebo, and it was in this group that the researchers noticed a difference. All of the articles provide that information – it’s the headline that is swaying the public’s thoughts on the results.

Given what I mentioned earlier about the importance of recruiting the correct number of participants, the results of this work are immediately thrown into doubt. If the trial’s statisticians calculated that 891 people were needed to find a clinically meaningful difference between patients taking the experimental drug and those taking other drugs, then what does a difference found in a group of 144 patients tell us? Put bluntly, very little. These trial results do not offer a definitive answer to the question of whether LMTX could prevent cognitive decline in Alzheimer’s patients.
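
As a rough back-of-the-envelope illustration (this assumes a simple two-arm comparison and conventional error rates, not the actual LMTX analysis): if a trial is sized at roughly 891 people to give 80% power against its target effect, a 144-person subgroup has only a fraction of that power for the same effect.

```python
# Rough illustration only – the effect size is backed out from the
# trial's size and conventional error rates, not from real LMTX data.
from math import sqrt
from scipy.stats import norm

def power_two_sample(d, n_per_arm, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test for a
    standardised effect size d."""
    z_alpha = norm.ppf(1 - alpha / 2)
    return norm.cdf(d * sqrt(n_per_arm / 2) - z_alpha)

# What effect could ~891 people (~445 per arm) detect with 80% power?
d = (norm.ppf(0.975) + norm.ppf(0.80)) / sqrt(445 / 2)

print(round(power_two_sample(d, 445), 2))  # 0.8 – the full trial
print(round(power_two_sample(d, 72), 2))   # ~0.2 – the 144-person subgroup
```

In other words, even if the effect in the subgroup were real, a group that small would miss it most of the time – and differences that do show up in small groups are more likely to be flukes.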

As we can’t control what headlines are plastered over the front page, it’s important that we empower, educate, and answer questions from the public about trials so that they can make these judgements themselves.

So, what’s the solution? Whilst the science communication world advances, I feel we are focussing too much on the discoveries themselves and not enough on the methods we use to make them. Adding a level of transparency and openness about the flaws in scientific methods would go further to empower the public. It would begin to break down the barriers that years of science have built between scientists and the public – science may have the answers, but we need to be open and honest about the methods we use to get those answers.

If you’re a science communicator, why not challenge yourself to explain the limitations of your work, rather than simply its strengths?

#365papers August Update

In my first post on this blog, I set myself 3 PhD-related goals for 2017. One of those goals was to read more widely, and more frequently, and I decided that doing the #365papers challenge would be a good way to do that.

July’s post for #365papers was too cocky – I finished July ahead of schedule and then skipped off on holiday. August’s reading was not so good. It’s currently Saturday 9th September and I have only just caught up with August’s reading, so there’s still a bit of catching up to do from the start of September!

This month, when I did get round to reading, I concentrated on qualitative studies; I was doing my own qualitative analysis through August and it’s nice to get an idea of how different people write up and look at their own studies. I also managed to have a really good look at the literature on user-testing and think-aloud protocols. On September 10th I’m off to Cape Town for a research trip – I’ll be going to the Global Evidence Summit (blog post(s) to follow for more info!), and then I’m staying in Cape Town to meet with clinical trialists based with the South African Medical Research Council. These trialists will be user-testing evidence-presentation formats – this work makes up part of my PhD project, so I’ll do a more in-depth blog post another time. Anyway, hopefully September’s reading won’t be quite so late as August’s was…

August’s reading:

  1. Supporting positive experiences and sustained participation in clinical trials: looking beyond information provision
  2. How many interviews are enough? An experiment with data saturation and variability
  3. Barriers to the conduct of randomised clinical trials within all disease areas
  4. ‘We knew it was a totally at random thing’: parents’ experiences of being part of a neonatal trial
  5. What are funders doing to minimise waste in research?
  6. J Guy Scadding and the move from alternation to randomisation
  7. UK publicly funded Clinical Trials Units supported a controlled access approach to share individual participant data but highlighted concerns
  8. Why do we need evidence-based methods in Cochrane?
  9. Receiving a summary of the results of a trial: qualitative study of participants’ views
  10. The rights of patients in research
  11. Using routinely recorded data in the UK to assess outcomes in a randomised controlled trial: The Trials of Access
  12. The impact of active stakeholder involvement on recruitment, retention and engagement of schools, children and their families in the cluster randomised controlled trial of the Healthy Lifestyles Programme (HeLP): a school-based intervention to prevent obesity
  13. Evaluating the efficiency of targeted designs for randomised clinical trials
  14. Improving clinical trial efficiency: thinking outside the box
  15. Stratified randomisation for clinical trials
  16. Factors associated with online media attention to research: a cohort study of articles evaluating cancer treatments
  17. Feasibility of a randomised single-blind crossover trial to assess the effects of the second-generation slow-release dopamine agonists pramipexole and ropinirole on cued recall memory in idiopathic mild or moderate Parkinson’s disease without cognitive impairment
  18. Improving the process of research ethics review
  19. Retrospectively registered trials: The Editors’ dilemma
  20. Cancer Research UK: Taking a broad view of research impact
  21. Getting access to what goes on in people’s heads?: reflections on the think-aloud technique
  22. A description of think aloud method and protocol analysis
  23. How to study thinking in everyday life: Contrasting think-aloud protocols with descriptions and explanations of thinking
  24. Think-aloud technique and protocol analysis in clinical decision-making research
  25. The use of think-aloud methods in qualitative research, an introduction to think-aloud methods
  26. User-centred design
  27. Interpreting the evidence: choosing between randomised and non-randomised studies
  28. The use and abuse of multiple outcomes in randomised controlled depression trials
  29. The unpredictability paradox: review of empirical comparisons of randomised and non-randomised clinical trials
  30. Are randomized clinical trials good for us (in the short term)? Evidence for a “trial effect”
  31. The ethics of underpowered clinical trials (Reply – Janosky)