Thesis Update – T-Minus 12 Months

I’m going into my final year as a PhD student; it’s 1 year until I hand in my thesis. I’ve been working on my project for 2 years. On one hand it feels like I’ve been working in my department and with my current colleagues for much longer, but on the other it feels like I’ve been here for 5 minutes. Having 1 year to go until hand-in has made this PhD thing a lot more real. That sounds stupid – of course it’s real; I’ve been turning up to work for 2 years and learning more and more about clinical trials methodology, but starting to write the thesis is making all of that settle in.

I thought I’d do a few blog posts to track my progress with thesis writing. Primarily I think this will be a nice thing to look back on after the whole process is over, though I also hope these posts are helpful to those who aren’t yet at the writing up stage, or those who are writing up alongside me.

So, how far have I got?

Structure

About 6 months ago I wrote out a skeleton structure – this included chapter titles, headings, and notes/pieces of text that I had from other documents.

For me, this process has been invaluable. I feel much more relaxed with this skeleton structure than I did without it; I know what I need to do, and what text needs to go where. It’s as if I’ve created a template of how I’ll write the thesis in the end, and that’s very comforting when you’ve got the task of writing such a huge document ahead of you.

Literature Review

The literature review is the thing I’ve been dreading most about the thesis. I read a lot, and I feel well informed about my topic and the wider literature around it, but the task of demonstrating that feels both daunting and, honestly, kind of boring. I’ve put the literature review off for long enough now, and I’m aiming to make a decent dent in it over the summer months. So far I’ve worked with the Information Specialist who’s based in my Unit to create a search strategy, and I’ve got all the sections and headings sketched out. Now it’s a case of screening the results of that search (~4,000 abstracts!), putting the relevant results under the right headings, and then knitting everything together. Sounds simple, right? Probably not. I think this will be the bit of the thesis that takes the longest, largely because I keep trying to avoid it.

Systematic Review

The best thing my supervisors did when I was planning my systematic review was encourage me to get the protocol published – I would 100% recommend you do this if you can. It meant I had to really think about what I was going to do, and keep a written record of when, why and how each decision was made throughout the process. I published my protocol this time last year, which also gave me a huge confidence boost, and a much-needed win in the middle of the PhD – a period that often gets lost.

Now I’ve made a decent dent in the writing up of my results. I’ve sent the first draft of my systematic review chapter (without discussion) to my primary supervisor for him to have a look over, so hopefully over the next few months I’ll be able to edit that and then write the discussion for it. After that I’ll have a chapter done and tied up, and I can then work on reformatting and editing that chapter to generate the final systematic review paper before my PhD is finished.

Side note – zooming out of really big Word documents always makes me realise how much I’ve actually done, so I do this pretty often to ensure I don’t get lost within the pages of edits and text I’m finding tricky to write.

Qualitative Study

My qualitative study is the thing I’m most nervous about writing up – I’ve never written up qualitative findings before, so I’m thinking it’s going to be a case of write, edit, edit, edit, edit, edit, write some more, edit again, etc. That said, I think once I get to grips with this chapter I’ll really enjoy writing it; I need to get over the initial hurdle first and allow myself to write some rubbish without my pride getting in the way. The qualitative component of my PhD project is the thing I enjoyed doing the most; I loved being able to get away from my desk and go out to speak to people. It made the reasoning behind my project much more real, and confirmed to me (again…) that my work is valuable. As a PhD student you sometimes need those confirmations, and qualitative research allowed me to see how my work will make a difference to people once it’s complete, published and disseminated.

Aims for the next 6 months
  • Literature review – get all of the abstracts screened and the relevant references sorted into the headings I’ve already sketched out, and write at least 2,000 words (whether these make it into the final thesis is irrelevant – I just need words to start with; I can edit and tweak that text once I’ve got a starting point).
  • Systematic review – get this finished and make a start on pulling a paper together out of it.
  • Qualitative study – seek out some training on writing up qualitative findings (I suspect this will be useful in terms of a confidence boost, and will force me to start writing), and then make a start.
  • I am also on the lookout for a PhD writing retreat around December/January time. The prospect of moving away from my desk, inbox and phone is strange to think about, but I think I could make some real progress with thesis writing towards the end of 2017/start of 2018. That will then set me up well for the final 6-month push before hand-in.

Have any of you started writing your thesis yet? If you’ve got any tips or resources that you’ve found helpful, please pass them on!

Evidence Live Day 2 – 22nd June 2017

As I said yesterday, this week I’ve been in Oxford at the Evidence Live conference. Thankfully, Oxford cooled down today and I’m much, much less grouchy as a result. Yesterday’s blog post got a really good response and people seemed to appreciate my write-up of day 1, so here’s day 2 for you.

Breakfast session – The REWARD alliance and the EQUATOR network: promoting increased value of research

Iain Chalmers and Doug Altman

Weirdly, breakfast sessions at conferences are usually my favourite talks, despite the fact I’m very much not a morning person. The people who attend always seem extra passionate, they’re engaged with discussion, and there are usually croissants, which helps too. This session from Iain Chalmers and Doug Altman was, predictably, brilliant. I already knew about the REWARD Alliance and the EQUATOR Network before attending, but this fleshed out my thoughts on both initiatives. It also demonstrated that even Doug Altman can struggle to get funding sometimes – the EQUATOR Network doesn’t have grant funding currently. The discussion after Iain’s talk was particularly interesting; during his talk he said that ethics committees are actually being unethical in the way they approve research, yet they don’t make up a big proportion of the people making a noise about research waste. An audience comment from an ethics committee chair followed, and he made a very good point – as researchers we often go into conversations with ethics committees with our guards up. We’re ready for a fight before the first punch has been thrown, and that automatically puts the committee on the back foot before we’ve started. Rather than trying to invent ways to circumvent the bureaucracy we so often encounter, we need to work with ethics committees to change the system, so that we can improve the quality of research and make that process easier for researchers.

Keynote session – Better data, reduced waste in research and public engagement to transform patient care

Chaired by Tessa Richards, presentations from Trish Groves, Simon Denegri and James Munro

Yet another brilliant keynote session; I think this might have been my favourite session of the entire conference.

Tessa Richards started off by talking about patient perspectives, involvement, and how patients are pushing the research agenda where we are dropping the ball. She signposted some useful resources – www.disruptivewomen.net and the #WeAreNotWaiting and #patientsincluded movements – and highlighted the lack of patient voices at Evidence Live this year. Hopefully the organisers will hear Tessa’s call and ensure we have patient representation woven throughout next year’s programme. Trish Groves then took to the floor and discussed how we can increase value in health research and make it truly able to improve patient care. The answer? Patient involvement. She also highlighted the role of patient reviewers within the BMJ’s publication and review process, which I thought was brilliant.

Next up, Simon Denegri. He started his talk with a series of emojis, so that was me immediately on board. Anyway, yet another fantastic talk that focussed on patients, patient involvement, and the funding gap between research planning and research conduct. We shouldn’t be involving patients throughout every stage of our research work; it can be a waste of their time. To get the most value out of patient insight, and the most efficient use of patient time, we need to work with patients to see where they want to be involved and where they feel they can make the most difference to the research project. Simon also had a great analogy for the tokenistic patient involvement we so often see – a Ford Escort is still a Ford Escort even if you add a spoiler, tint the windows and lower the suspension; to maintain any street-cred at all, you really just need to rebuild.

The session finished with a talk from James Munro from Care Opinion, a website that allows patients to give anonymous feedback about the health services they interact with across the UK. Uniquely, the platform is also linked up with healthcare professionals, which means concerns, complaints and so on can be resolved efficiently. James gave a few examples throughout his talk, one of the simplest being a patient who could not hear when the nurse called her name in the clinic – the seats faced the wall and the patient was deaf in one ear, so she couldn’t figure out where the noise was coming from. In just 2 days contact had been made with the clinic involved, and a plan made to turn the seats around to ensure patients don’t run into this problem again. The service isn’t just focussed on clinical problems; it gives patients a voice about any aspect of their interactions with health services, and something as simple as turning the chairs to face a different direction could make the process of visiting a GP so much more comfortable. James also highlighted the need to keep an eye on the ‘small data’ in a world where big data seems to be promising us so much. Big data certainly has the potential to be great, but small data gives meaning. My personal favourite patient quote that James referenced in this talk? ‘Thank you for fixing my brain, it’s chuffing great.’

Parallel session – Clinical trials

Chaired by Jeffrey Aronson, presentations from Amy Rogers, Penny Reynolds, Heidi Gardner (yep, me!), Patrick van Rheenen and Ignacio Atal

A really interesting session that covered a broad range of topics related to clinical trials. Highlights included Amy Rogers from the University of Dundee, whose talk, ‘Large streamlined trials – what works, and what doesn’t’, gave a great overview of the challenges that pragmatic trials can bring, as well as the ways trials units manage to overcome those hurdles to run excellent trials. Penny Reynolds’ talk, ‘Why academic clinical trials fail: trial ‘cemetery demographics’ and a case study’, was another standout. The trials graveyard is something I seem to know quite well given that my research focusses on recruitment – trials are often abandoned due to poor recruitment. Penny’s study drew attention to the management problems that trials faced, and she hypothesised that poor recruitment is a symptom of the underlying disease of bad management.

After these two talks I presented work on behalf of the HSRU Public Engagement With Research Group – mentioned in this blog post. I talked about our event, ‘Explorachoc’, a chocolate trial that aimed to demonstrate randomisation to the public. I took along the coloured balls we used in Explorachoc, and sweets in yellow and blue bags, and did a live demo of our event. Jeff Aronson, who was chairing the session, seemed to enjoy being randomised to the blue arm of the trial, which earned him a marshmallow (I did take chocolates with me but they suffered a tragic melting accident between London and Oxford on Tuesday evening, so marshmallows and jelly babies it was!).

A few pictures of my presentation taken by members of the audience:

Picture taken from The Centre for Evidence-based Veterinary Medicine Twitter page.

Picture taken from the BMC Medical Evidence Twitter page.

Unfortunately I had to leave after this session to catch a bus to Heathrow Airport to make sure I made my flight. British Airways had other ideas though; I’m currently typing this from the Holiday Inn Express at Heathrow because my flight was cancelled due to bad weather. Hopefully I’ll finally make it back to Aberdeen tomorrow morning! I wish I’d known earlier that my flight was cancelled, because the final talks looked brilliant. I did manage to keep up to date via Twitter, which was great – take a look at #EvidenceLive if you want to find out more!

So that’s it, Evidence Live is over for another year! A brilliant programme filled with inspiring and thought-provoking talks, enthusiastic speakers and a beautiful setting too. This time next year I’ll be nearing thesis submission, so I may have to skip Evidence Live 2018; I’m keeping my fingers crossed that I can make the timings work, because it has one of the most down-to-earth, friendly and determined atmospheres I’ve experienced at a conference.

The one thing I wish Evidence Live had this year? A doodler (I don’t think that’s the technical term). Last year Stefania Marcoli was at the conference each day, and she did live summaries including snippets of talks and quotes from attendees. This year we didn’t have anyone doing this, and I think the conference really missed it.

Here’s last year’s summary from day 1:

Credit: Stefania Marcoli

Evidence Live Day 1 – 21st June 2017

This week I’ve been in Oxford at the Evidence Live conference – side note: Oxford is currently hotter than the surface of the sun and I genuinely miss autumn and winter. As I’m at Evidence Live I thought it would be cool to blog each day; there are lots of people I’ve spoken to who wanted to attend but couldn’t, so here’s a rundown for those that missed day 1.

Introduction

Carl Heneghan

As expected, Carl started Evidence Live in a really enthusiastic way. He introduced the Evidence-Based Medicine (EBM) manifesto (more on that later), revealed his new post as Editor-in-Chief of the BMJ’s EBM Journal, and told us how we are currently relying on readers to have ‘a good nose’ when it comes to evidence. Yes, in the 21st century, ‘a good nose’ is apparently a real thing we’re using to distinguish decent research from crap research. Carl highlighted that many of the things that affect research findings are types of bias; pair that with the fact that the effects of new interventions are often very small, and we’re left with an evidence base that’s incredibly vulnerable.

Hear Carl talk more in this short clip recorded after his welcome address.

Workshop – Routinely collected health data (RCD) for randomised controlled trials (RCTs)

Lars Hemkens, Kimberly McCord and Heidi Gardner (yes, me!)

I can’t really give you an unbiased report of this workshop since I was one of the speakers, but I’ll try to remain objective. We uncovered the links between trialists and coffee (the stress of doing trials); why clinical research often fails; what RCD could do for RCTs and why we shouldn’t be thinking of the two concepts as opposites; and we also gave a few examples of trials where RCD has been used successfully. There was lots of brilliant discussion, and a feeling that we need the infrastructure and IT systems to catch up with our thinking; in some cases a trial team has had to wait over a year for data to be released from NHS Digital, and that’s when the trial’s been up and running for a while beforehand. I really enjoyed this session because the audience were so engaged – ideas were exchanged, and barriers and solutions brainstormed. I’m looking forward to working further with Lars and Kim on this over the next few months; we’ve got a paper almost ready to submit, so fingers crossed we’ll have that published before my PhD is over too.

Keynote session – Transparency of trial data, improvement in safety and better quality research to improve healthcare

Chaired by Kamal Mahtani, presentations from Fergal O’Regan, Mary Dixon-Woods and Doug Altman

SUCH A GOOD SET OF KEYNOTE LECTURES. If I can present half as well as these guys one day I’ll be so chuffed with myself. Anyway, fangirl moment over – this session looked at the transparency of clinical trial data and pharmacovigilance data from the perspective of the European Medicines Agency (EMA), and at how we can improve the evidence for improving healthcare, before finishing with a cracker of a talk from Doug Altman on the scandal of poor medical research. That was my highlight of day 1. Doug published a paper titled ‘The scandal of poor medical research’ in the BMJ in 1994, and this talk was a reflection on what’s changed in the 23 years since. Sadly, not much. Doug strengthened his message, “let’s start calling it bad research, it’s not just poor, it’s plain bad”, and also gave some brilliant words of wisdom on when to do research: “Ignorance of research methods is no excuse; if you can’t do it well, don’t do it.”

Hear Mary talk more in this short clip recorded after her talk.

Workshop – How to write papers that add value in health research and deserve publication

Trish Groves

A brilliant session for early-career researchers and students – and one I found really valuable. Trish gave a few tips on resources that will help budding researchers (e.g. the reporting checklists found on the EQUATOR Network website), revealed the acceptance rate for submissions to the BMJ (~4%), and encouraged us to never give up when it comes to publishing; if the science is solid, it will get published somewhere. She also drew attention to the research methods we use, which I thought was a brilliant topic to highlight in this talk. Many people think journals publish based on ‘positive’ results, but the good ones don’t; they’re looking for new, interesting and relevant research questions with solid methodology. After that it doesn’t really matter what the results show. In Trish’s own words, “Methods are the most important part of any paper, without them the rest won’t make sense.”

A final word of advice from Trish: keep the writing simple. For the slide below, ‘the cow jumped over the moon’ would have worked better… (slide taken from the Students 4 Best Evidence Twitter page)

Hear Trish talk more in this short clip recorded after her workshop.

Parallel session – Evidence synthesis

Chaired by Jeffrey Aronson, presentations from Tone Westergren, Sietse Wieringa, Carme Carrion, Lyubov Lytvyn, Izhar Hasan and Karolina Wartolowska

A really interesting session that covered lots of different aspects of evidence synthesis. Highlights from Sietse Wieringa, who gave a thought-provoking talk titled ‘Has evidence based medicine ever been modern? A Latour inspired understanding of the changing EBM’; Lyubov Lytvyn, who gave an overview of the RapidRecs we see in the BMJ with a talk titled ‘Innovative patient partnership in creating trustworthy guidelines, from protocol to publication: Case studies of BMJ Rapid Recommendations’; and Karolina Wartolowska, who discussed the placebo effect and how it changes over time in ‘Temporal characteristics of effect size in the placebo arm of surgical randomised controlled trials – a meta-analysis.’

Open session – Carrots, sticks, or stones? Audit and accountability to improve research quality

Ben Goldacre

Before I came to Oxford I was describing Evidence Live to a friend. I mentioned the usual faces and the topics I expected would be discussed, but my friend didn’t focus on anything academic or any of the topics I’d mentioned; she just said, “Ben Goldacre is kind of a big deal.” I guess she was right, and this talk demonstrated just how many projects and initiatives he’s involved in that are working to improve research quality. He talked at length about the need for ‘sustained pressure’ when it comes to getting reporting standards up – this isn’t something that will change overnight, but the TrialsTracker will hopefully help.

He specifically asked that sections of the talk weren’t tweeted, and I nodded along when he asked for agreement – so that’s all I’m giving you.

Open session – Better evidence for better healthcare: Consultation

Ben Goldacre and Carl Heneghan

If you haven’t seen or heard of the EBM Manifesto then take a look here. Essentially it aims to kick us into action; we know that evidence generation is not at the standard it should be, and patients are being let down as a result, so the manifesto aims to pull together a sort of action plan of what we can do to improve things. The infographic below gives a summary of the 9 steps of the EBM Manifesto – which will hopefully lead to us changing the landscape of EBM for the better. I’ll just say that this session started at 6pm and the audience were getting restless by this point; there was the promise of a cold beer and 33-degree heat on the other side, so we made this session quick. That said, the way it was done was brilliant; super efficient, and it cut out the waffle that usually comes with a consultation of any kind.

One of the things I love about Evidence Live is how solutions-driven it is – speakers openly say that we no longer need to see hundreds of papers published telling us what the problems are with clinical research; we need to get to work and fix them. The aim of this session was exactly that. We were given the task of creating job lists for different stakeholder groups (funders, journal editors, researchers, patient groups etc). We submitted our ideas for jobs along with the problem each one was trying to fix, and how we could measure its success if implemented. Simple as that!

There’s an online version of the form we completed at the consultation here – it’s still open to submissions so please do contribute if you wish.

Overall, an absolutely brilliant first day at Evidence Live; I’m really looking forward to tomorrow’s discussions – check back tomorrow for a blog post covering day 2!

Why PhDs and Perfectionism Don’t Mix

This post was originally written by me and published on the Let’s Talk Academia blog. Let’s Talk Academia is an open space on the internet where advice, stories and experiences are shared about postgraduate life and academia. The process of working with Emily, who runs Let’s Talk Academia, was great too – so if you’re looking to get involved in blogging, I’d recommend getting in touch with her via the Let’s Talk Academia Facebook page.


Every PhD project is different, and every PhD student tackles a project in their own unique way. In my experience though, PhD students tend to have one thing in common: they’re high achievers.

When I was younger, I was always that kid that loved school. I was clearing out my old bedroom a few months ago and found diaries that we had to write at school when I was about seven years old. I’d written numerous times, ‘I had fun in Maths today’, ‘I did work at school, I like work’, or the line that makes me cringe the most, ‘I love work, work is easy.’ Please bear in mind I was 7 years old! I’m not that unbearable now at the age of 25, I promise.

I got good GCSE grades, and later on my A-level results got me into the University of Aberdeen to study Pharmacology. I worked hard to convert my Undergraduate BSc degree into an MSci when I took a year away from university for an industrial placement. In the end I graduated with a first class degree and won an academic prize for my final year dissertation; the results of which were then published in the journal Acta Neuropathologica and I was a named author.

I started my PhD in July 2015 and realised quickly that my usual high-achieving track-record wasn’t going to get me through this like it’d got me through exams and assessments before. I’ve always been a perfectionist, whether that’s manifested itself in redrafting and editing essays over and over again, or revising the same topic two or three times before an exam. That attitude simply does not work when you’re doing a PhD; realising that and having to adapt my mentality and working practices was difficult, and I think lots of other PhD students have experienced this too.

Why being a perfectionist simply does not work

A PhD is not an exam or assessment you can write in an evening and then forget about, it’s a really long process that involves literally years of work. If you try and make every single part of that process perfect, you’ll never, ever finish it. You’ll also likely hate the process, and your family and friends will want to strangle you because you’ll be no fun to be around.

Letting go of being a high achiever

After I’d started my PhD I learned pretty quickly that I couldn’t be the best at it. I’d get frustrated when I couldn’t do something, and my Supervisor would regularly remind me, ‘a PhD is a training degree, you’re not expected to know everything – you wouldn’t be here if you did’. That helped, and after repeating that to myself a few hundred times, it started to sink in.

I find it difficult to ask for help, and often I don’t find it easy to try new things; there’s a fear in me that I won’t be good at it so I’d rather not try than deal with the feeling of failure. (Side note – this is the reason why I can’t ride a bike…).

If you’re like me, I have some bad news for you. You are going to have to get used to dealing with perceived failures over the course of a PhD. Failures in PhD-land are common. Losing your memory stick with at least a month’s work on it (I’m still not over this and it happened a year ago), software crashing and corrupting documents you’ve been working on for the entire day, missing out on funding, and not having abstracts accepted for conferences – it will all happen. You have to get used to it, get over your defeats quickly and learn from them, otherwise you’ll drive yourself mad.

The intelligence myth

When telling people that I’m doing a PhD, more often than not I get the response ‘OMG you must be SO clever!’. I know this isn’t intentional, but it adds pressure. Every time someone says that, I feel a bit more stupid – know what I mean? Really though, a PhD isn’t about being smart. It’s about consistently learning from your mistakes, dusting yourself off and trying again. It’s a test of tenacity rather than intelligence.

Being able to let my perfectionist side ease off a little has undoubtedly made me a better student. I’m no longer afraid to ask questions, no matter how daft they might seem at first, and weirdly, I look forward to getting edits and comments back on my work because I know that’s just helping to improve it. Research is a big collaborative effort – we work in big teams across multiple projects at once, and making everything perfect is impossible. It’s also worth noting that if you’re the person who wants everything to be ‘just right’, you’re probably a nightmare to work with.

Give yourself a break, and let yourself make mistakes – a PhD is a really safe space in which to screw up; you’ve got a supervisor who can help to get you out of sticky situations, after all!

Using Your (Research) Superpowers for Good

Yesterday I spent the afternoon at a foodbank in Aberdeen conducting interviews as part of a project with the University of Aberdeen’s Enterprising Researchers Programme. Enterprising Researchers gets PhD students out of their usual environment and into local businesses. The aim is not only to empower researchers to think differently about their research by developing enterprising behaviours, but also to allow local businesses to benefit from the skills of PhD students.

I applied for the programme towards the end of last year; after passing the group interview stage I was able to apply to a variety of advertised projects. These projects spanned every industry you could think of – oil & gas, food & drink, scientific research, the third sector and beyond. I applied for one project based with Community Food Initiatives North East (CFINE). CFINE is a unique business in that it’s part enterprise (selling wholesale fruit and vegetables to businesses and within the community, and offering cookery classes through their ‘Cook at the ‘Nook’ facility), and part charity (offering foodbank services and financial capability help and support across Aberdeen City & Shire and Moray). I interviewed with CFINE’s Chief Executive and an Enterprising Researchers Project Officer from the University, and was offered the project.

CFINE’s Cook at the ‘Nook facility

The work itself took a while to properly get off the ground – getting admin sorted, protocols written and the project registered took around 2 months as I was fitting this around usual PhD work, and the freelance work I do as well. This month I’m starting data collection, and it’s going so well! With this project I’m speaking to a variety of people across CFINE’s business and charity sectors; volunteers, beneficiaries of the foodbank and people using the other support services they offer. The work aims to build on some work CFINE have done internally, and figure out what impact the organisation has on its volunteers and users, and how CFINE can improve going forward.

This isn’t a paid project. Every student who’s part of the Enterprising Researchers Programme (ERP) is juggling their own PhD project, conference attendances, report writing, academic reading and so on, alongside their ERP project.

I’m not writing this post to demonstrate that the people taking part in this programme are great (though we are pretty great!). I wanted to write this post to encourage other PhD students and established researchers to use their research skills to help others. CFINE runs largely on the work and generosity of volunteers; some people volunteer for an afternoon each week, others are there packing orders and manning the foodbank every day – for some it’s like a full-time job.

Foodbanks in Aberdeen are reaching their limit; CFINE put out a call last week because they are low on food, just a few months after another foodbank in the north of the city completely ran out of supplies. Initially I wanted to donate and volunteer at CFINE, but using my time to carry out research for them means they’re getting better value from the time I’m giving.

The CFINE foodbank

As PhD students we’re building lots of different skills; we’re figuring out how to design, conduct and report research. We’re also working to juggle multiple things at once, communicate complex information in oral and written forms, and get everything done before funding runs out. These skills are all transferable, and could be hugely valuable to the charities and local businesses around you. If you’re thinking of volunteering, I’d really recommend reaching out to charities to see if they could use your research skills. The services we can offer could save them the effort of finding research companies, and the financial costs involved.

My day at CFINE yesterday wasn’t just of benefit to them. I came back home after a jam-packed day feeling motivated and enthusiastic, and really excited to carry on with the project. Use your research superpowers for things other than your PhD; it’ll give you that warm fuzzy feeling and it’ll help your community too.

When Was the First Clinical Trial?

As you’ve probably (hopefully!) picked up from other posts on this blog, my research is centred around clinical trials and their methodology. Trials can be intimidating for people who don’t know a whole lot about them, and as I’ve mentioned before, the ‘guinea pig‘ concept seems to haunt trial participation.

In this series of posts I want to answer any questions people have – from the basic to the obscure and everything in between – and demystify clinical trials. I asked a few friends who don’t work in a trials environment what they don’t know about trials, and the obvious starting point was ‘when was the first clinical trial?’, so here we are. Read on to find out when and how the first clinical trial came about.

Some sources say the first clinical trial was conducted in 605–562 BC, as outlined in the Old Testament’s Book of Daniel. Put simply, King Nebuchadnezzar II ordered the children of royal blood to eat only meat and wine for 10 days. Daniel asked that he and three other children be allowed to eat only vegetables, bread and water. After the 10 days were over, Daniel and the three children were noticeably healthier than the children who had eaten only meat and wine. Whilst this is clearly research (though as Ben Goldacre points out, probably underpowered research), the groups were not controlled. This was probably one of the first times in the evolution of the human species that an open, uncontrolled human experiment guided a decision about public health.

James Lind is credited with the first controlled clinical trial; ‘controlled’ meaning that his study included a comparison, or control, group – a group that receives a placebo, a different treatment, or no treatment at all. Lind, a Scottish naval surgeon, conducted his trial on 20th May 1747 on a group of sailors suffering from scurvy.

He included 6 pairs of sailors in his trial, placed them all on the same diet, and then gave each pair an additional intervention. One pair had a quart of cider each day; one pair took 25 drops of elixir of vitriol (sulphuric acid) three times a day; one pair had 2 spoonfuls of vinegar three times a day; one pair were put on what Lind describes as a ‘course of sea-water’; one pair had 2 oranges and 1 lemon each day; and another pair had what’s described as ‘the bigness of a nutmeg’ three times a day.

I know which of the treatments I would have preferred at the time (i.e. not a course of seawater!).

At the end of day 6 of Lind’s trial, the pair that had eaten 2 oranges and 1 lemon each day were fit for duty and taking care of the other 5 pairs of sailors. Lind notes in his book ‘Treatise on Scurvy’ (published in Edinburgh in 1753) that he thought after the citrus fruits, the cider had the best effects.

We now know that scurvy is caused by a deficiency in vitamin C, which is why fruits rich in vitamin C had his sailors fighting fit again after just 6 days.

Clinical trials like James Lind’s are what we base our current practice on. Over the years since Lind found the cure for scurvy, huge advances have been made in the methodology of trials; we now have placebos, use randomisation, adhere to various codes of conduct, and work with huge groups of patients and teams of research staff across the world in an effort to answer clinical questions.
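For anyone curious about what the simplest form of modern randomisation looks like in practice, here’s a minimal, purely illustrative sketch. It’s my own example in Python – the arm names, participant names and numbers are made up, and it isn’t how a real trials unit allocates participants (they use central, concealed systems, usually with block or stratified schemes):

```python
import random

def randomise(participants, arms=("citrus", "control"), seed=None):
    """Allocate each participant to one arm completely at random.

    Simple (unrestricted) randomisation, for illustration only; real
    trials normally rely on a central, concealed allocation service
    with block or stratified randomisation.
    """
    rng = random.Random(seed)  # seeded only so the example is reproducible
    return {person: rng.choice(arms) for person in participants}

# Hypothetical example: twelve sailors, two arms.
sailors = [f"sailor_{i}" for i in range(1, 13)]
for sailor, arm in randomise(sailors, seed=1747).items():
    print(sailor, "->", arm)
```

The idea is the same one Lind was groping towards: if chance alone decides who gets which treatment, the groups should be comparable, so any difference in how they fare can be put down to the treatment rather than to how people were picked.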

This is the first post in a series I’m calling ‘Clinical Trials Q&A’. If you have any questions about clinical trials – what they are, why we do them, what their limitations are, and so on – please pop them in a comment or tweet me @heidirgardner and I’ll be sure to answer them in upcoming blog posts.

#365papers May Update

In my first post on this blog, I set myself 3 PhD-related goals for 2017. One of those goals was to read more widely, and more frequently, and I decided that doing the #365papers challenge would be a good way to do that.

This month’s reading has been pretty rubbish if I’m honest. I’ve been spending my spare time reading books with plots and characters instead of p values and methods, and whilst in PhD mode I’ve been working on writing up the systematic review chapter of my thesis. Last month I was all motivated and excited to write my literature review – that took a total back seat, and I suspect it will remain there for the next few weeks whilst I finish up a first draft of that thesis chapter. I’m super excited to get this chapter written – I think it’ll calm me down a bit when it comes to writing the thesis as a whole; it feels like a head-start, and mentally, I think that’s a good move. Anyway, I managed to get through May’s reading, but that did involve a pretty hefty few days of reading towards the end to catch up.

May’s reading:

  1. The impact of advertising patient and public involvement on trial recruitment: embedded cluster randomised recruitment trial
  2. Novel participatory methods of involving patients in research: naming and branding a longitudinal cohort study, BRIGHTLIGHT
  3. Developing the SELF study: a focus group with patients and the public
  4. What can we learn from trial decliners about improving recruitment? Qualitative study
  5. Overcoming barriers to recruiting ethnic minorities to mental health research: a typology of recruitment strategies
  6. Systematic techniques for assisting recruitment to trials (START): developing the science of recruitment
  7. Testing the effectiveness of user-tested patient information on recruitment rates across multiple trials: meta-analysis of data from the START programme
  8. Challenges to evaluating complex interventions: a content analysis of published papers
  9. An optimised patient information sheet did not significantly increase recruitment or retention in a falls prevention study: an embedded randomised recruitment trial
  10. Sharing individual level data from observational studies and clinical trials: a perspective from NHLBI
  11. Data sharing: not as simple as it seems
  12. Protecting patient privacy when sharing patient-level data from clinical trials
  13. Predictors of clinical trial data sharing: exploratory analysis of a cross-sectional survey
  14. Opening clinical trial data: are the voluntary data-sharing portals enough?
  15. Subversion of allocation concealment in a randomised controlled trial: a historical case study
  16. Making a decision about trial participation: the feasibility of measuring deliberation during the informed consent process for clinical trials
  17. Participants’ preference for type of leaflet used to feed back the results of a randomised trial: a survey
  18. Trialists should tell participants results, but how?
  19. HELP! Problems in executing a pragmatic, randomised, stepped wedge trial on the Hospital Elder Life Program to prevent delirium in older patients
  20. Clinician engagement is critical to public engagement with clinical trials
  21. Patient engagement in research: a systematic review
  22. Health researchers’ attitudes towards public involvement in health research
  23. Open clinical trial data for all? A view from regulators
  24. Patient and public involvement: what next for the NHS?
  25. ‘Ordinary people only’: knowledge, representativeness, and the publics of public participation in healthcare
  26. Reflections on health care consumerism: insights from feminism
  27. Publishing information about ongoing clinical trials for patients
  28. Effectiveness of strategies for informing, educating and involving patients
  29. Patient involvement in patient safety: what factors influence patient participation and engagement?
  30. Promoting public awareness of randomised clinical trials using the media: the ‘Get Randomised’ campaign
  31. Communicating the results of clinical research to participants: attitudes, practices, and future directions