Organisation for PhD Success: Reading & Referencing

A few weeks ago my lovely friend Lyuba sent me a message on WhatsApp: “How do you manage a bucket of readings? Do you use mendeley?” The conversation that ensued after this initial message was a few days of absolute organisational-nerd-filled glory. Lyuba has just started her PhD at McMaster University, and she’s not the first relatively new PhD student to ask me for advice on how to organise things.

This post will focus on how I track my academic reading, how I make sure I can find papers months later, and how I reference. Later on I have another post planned that will cover task management and to-do lists, but I suspect that’s going to take a little while to pull together, so I’m starting with the easier one first!

Tracking your reading

Reading is a massive part of the PhD process. Throughout my first year and into the early part of my second year I read – probably not as often as I should have – but I wasn’t particularly good at keeping track of what I’d read or what I thought about the studies. That changed in January when I started the #365papers project. I knew I needed to get more organised with my reading, and this was a surefire way to make it happen.

To track my reading I use a simple Word document; the image below shows part of this year’s document.

The table you can just about make out in the image above is made up of 4 columns: ‘Date’, ‘Number’, ‘Title, First Author, Link’, and ‘Notes’. The ‘Number’ column refers to the number of papers I’ve read in that month, so that I can easily look back when putting together my #365papers blog posts. Another thing to note is that the ‘Link’ in the third column is always a hyperlink to a full text of the paper – this makes things much easier than if you accidentally link to an abstract and then have to waste time finding the full text again each time you want to refer to the study.

The ‘Notes’ section is the most important part of this document – in it I write comments about the paper, whether I think it’s useful, comments on the quality of it, what I would change if I did the study myself etc. I also include ‘tags’ in this section – these tags help me to re-find papers weeks after I’ve first read them. Tags I use regularly include: ‘recruitment research’, ‘public engagement’, ‘patient involvement’, ‘methodology’, ‘qualitative research’. These are so incredibly helpful when I want to go back and find the notes I’ve written about papers covering different topics.
In the ‘Notes’ section I also highlight sections of text – you can see the yellow areas in the image above. Again, these help to remind me of papers that I’ve read and know I’ll want to refer back to. Usually the highlighted areas are notes to myself, e.g. ‘useful for thesis introduction’, ‘check if this is included in systematic review’, or ‘very clear writing style – refer back to when writing up qualitative findings’.
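If a Word document isn’t your thing, the same idea – one entry per paper, with a full-text link, notes, and tags you can search later – translates directly into a plain-text or scripted log. Here’s a hypothetical sketch in Python; the field names, example entries, and the `find_by_tag` helper are my own illustration, not part of the system described above:

```python
# A minimal tagged reading log: each entry mirrors the columns of the Word
# document (date, number-in-month, title, link, notes) plus a list of tags.
reading_log = [
    {
        "date": "2017-09-01",
        "number": 1,
        "title": "The ethics of underpowered clinical trials",
        "link": "https://example.org/full-text",  # link the full text, never just the abstract
        "notes": "useful for thesis introduction",
        "tags": ["methodology", "recruitment research"],
    },
    {
        "date": "2017-09-02",
        "number": 2,
        "title": "Health researchers' attitudes towards public involvement",
        "link": "https://example.org/full-text-2",
        "notes": "very clear writing style",
        "tags": ["public engagement", "qualitative research"],
    },
]

def find_by_tag(log, tag):
    """Return every entry carrying the given tag - the 're-find papers weeks later' step."""
    return [entry for entry in log if tag in entry["tags"]]

for entry in find_by_tag(reading_log, "methodology"):
    print(entry["title"], "-", entry["notes"])
```

The point isn’t the tooling – a Word table, a spreadsheet, or a script all work – it’s that tagging at the time of reading is what makes papers findable months later.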

I’ve used this system since January, and I’ve found it so useful – I’m going to start a new Word document in January and keep the 2017 #365papers document for reference, so I’ll eventually have a big archive of all the papers I’ve read. For me, this is a really easy and simple way to track my reading; I have never used Mendeley because it’s not supported by my University, so if anything went wrong with it I’d freak out and not have anywhere to get help – seriously, the librarians at Aberdeen Medical Library are absolute superheroes, and have helped me tonnes in the past with various things. This brings me neatly on to referencing…

Referencing

Referencing is my least favourite part of academic writing – I don’t mean the whole finding information and referring to it thing, I mean the painful task of formatting the names of authors, papers, journals etc. into a very specific format. What I find particularly infuriating about it is the time that it takes, and the fact that I know that no one reads reference lists with as much effort as it takes to write them.

As you might expect, referencing software is one of my favourite things about academic writing. I use RefWorks. It doesn’t allow me to store entire papers, just the reference for that paper (as far as I know anyway…), hence the big Word document of reading I talked about earlier. What it does have though, is folders. These enable me to make buckets of references that I know I’ll refer to in pieces of writing later on, speeding up the process of referencing whilst writing.

RefWorks is the reference software that Aberdeen University uses, so I’ve used the same account since I started my undergraduate degree – meaning that I can track references back throughout every assessment I’ve submitted for the last 7 years. The major perk of using the system that’s supported by the University is that the librarians know exactly how to do just about anything linked to RefWorks. On the very rare occasion that something goes wrong with RefWorks – it’s happened once in 7 years – the support team there are really good. I emailed the support desk a copy of my undergraduate thesis along with a very panicked email because the referencing just wasn’t working, and it was sent back to me the next day with the references exactly where I wanted them.

If your university doesn’t use RefWorks, check what they do support – and go with that. Lots of PhD students I know don’t use referencing software and honestly, I have no idea how they have got this far without being driven insane by the process. I can’t imagine trying to reference my entire thesis by hand; I’d probably need a 3 month extension and then time off afterwards to recover.


MRC Network of Hubs for Trials Methodology Recruitment Working Group Meeting – Liverpool, 5th October 2017

The Medical Research Council (MRC) has a number of ‘Hubs’ across various cities in the UK, each conducting research into different aspects of clinical trials methodology. Together, the Hubs are known as the HTMR Network – the Hubs for Trials Methodology Research Network. As part of these hubs, there are a number of Working Groups. These Working Groups each focus on a specific area of interest in the trials methodology world.

I’m part of the Recruitment Working Group, and a few weeks ago we had a face to face meeting in Liverpool. Usually we have monthly teleconferences to ensure that we all know what projects are ongoing, and there are distinct pieces of work being done by groups of people within the group too. Until now I hadn’t met many of the group members face to face, so this was a brilliant opportunity for us to work together and make decisions on where we wanted to go next in terms of projects, funding and potential collaborators.

The meeting was incredibly productive, and I came away inspired and excited for the work we’ll do together in the future – on a side note, if you’re ever feeling uninspired by your research, make an effort to go to a conference, symposium, or big meeting with people that have similar research interests to you; I always come away feeling enthusiastic and ready to work!

Anyway, when I got back I flicked through my notes and came up with an infographic that covers (in brief!) lots of what we talked about:

I thought I’d give an outline of what this infographic shows, and when paper(s) eventually start to come out, I’ll update this blog and explain a bit more about the specifics of the research too.

ORRCA

Online Resource for Recruitment research in clinical Trials (ORRCA) is one of the Recruitment Working Group’s biggest success stories. Carrol Gamble gave a short presentation on ORRCA, explaining that it was a huge project made possible by many of the members of the group giving up their time to screen abstracts and categorise studies so that the database could be populated. I was one of the people who categorised papers, and I’ll be a named author on the paper when it comes out, so I’ll do another blog post with more details then. In brief – ORRCA is a database full of recruitment research; it is updated every year, meaning that recruitment researchers can use it as a one-stop shop for relevant literature. This is incredibly useful because when you’re doing a systematic review you inevitably end up screening through hundreds, if not thousands, of irrelevant papers. Using the ORRCA database means that a lot of the irrelevant studies have already been weeded out, so the entire process of doing a systematic review could be sped up hugely.

The Cochrane Recruitment Review

Taken from Wikipedia: The Cochrane Library (named after Archie Cochrane) is a collection of databases in medicine and other healthcare specialties provided by Cochrane and other organizations. At its core is the collection of Cochrane Reviews, a database of systematic reviews and meta-analyses which summarize and interpret the results of medical research. The Cochrane Library aims to make the results of well-conducted controlled trials readily available and is a key resource in evidence-based medicine.

My PhD Supervisor has a Cochrane systematic review that looks at strategies to improve recruitment to clinical trials. The review was published in 2010, and is now in the process of being updated; it’s important that systematic reviews are updated so that we can hoover up and include data from recent studies. The short talk that Shaun gave focussed on the results of the update (it’s currently under review and should hopefully be published soon – and I’m a named author, hoorah!). Largely, the information that we have about recruitment strategies is thin; that was the case in 2010 and it’s still the case now. There’s one notable exception though – the MRC START project. MRC START was a project that offered something that we so often lack in the world of recruitment research: coordination. I’m not going to go into too much detail here, I’ll just say that when a coordinated effort focusses on answering a research question, that research question is much more likely to be answered with a satisfactory body of evidence. The updated review doesn’t provide us with groundbreaking results, but it does provide encouragement – we are seeing slow progress in the world of methodology research, and that’s better than no progress at all!

The Non-Randomised Recruitment Review

After Shaun had given his presentation on the Cochrane review, I gave a short presentation on the systematic review that I lead. This review makes up a substantial part of my PhD project; the protocol for it was published last year. This review differs from the Cochrane review in that it includes only non-randomised studies, i.e. a bigger body of evidence that is of much lower quality. I’m planning on doing a more detailed blog post about this review when it is published, so keep an eye out for that – hopefully next Spring.

PRioRiTY

I don’t want to give too much away about the PRioRiTy project because I know that the paper from it has just been submitted, so again, I’ll do a more detailed blog post when it’s out. In basic outline, PRioRiTy is a priority setting project run in partnership with the James Lind Alliance.

The James Lind Alliance believes that:

  • addressing uncertainties about the effects of a treatment should become accepted as a routine part of clinical practice
  • patients, carers and clinicians should work together to agree which, among those uncertainties, matter most and deserve priority attention.

They usually get involved with prioritisation work around clinical outcomes, but this was their first methodology-based project, so very exciting! The project involved lots of different stakeholders, with the aim of coming up with a prioritised list of topics for research within the area of trials recruitment. Declan Devane explained how the project progressed, and then unveiled the top 10 questions that came out of the work. This work provides a point of focus for us as recruitment researchers. As I mentioned earlier, the concept of coordinated effort is something we’ve lacked, meaning that a lot of work is happening in a lot of different areas, but the effort involved isn’t particularly focussed.

What next?

The meeting was super productive, and we’re planning another face to face meeting for the early part of 2018 so that we can work up some of the ideas that we came up with in that meeting. Ultimately, we want to have a few well thought-out project ideas, so that we can start looking at potential pots of funding for the collaborative work we’ve planned.

#365papers September Update

In my first post on this blog, I set myself 3 PhD-related goals for 2017. One of those goals was to read more widely, and more frequently, and I decided that doing the #365papers challenge would be a good way to do that.

I ended last month’s #365papers update by saying ‘hopefully September’s reading won’t be quite so late as August’s was…’ – and here I am 13 days late. September was a really busy month and though I was reading, it was snippets and abstracts and posters from conferences, rather than entire papers. I’ve now caught up – and I’m determined to make sure that October’s update is back on track time-wise!

This month’s reading has been a big mix of things because I’m working on my literature review, and also getting involved with some new projects. I’ve really enjoyed this month’s reading – when I had time to do it at least, so hopefully there are some interesting papers in this list for others too.

September’s reading:

  1. The ethics of underpowered clinical trials
  2. The ethics of underpowered clinical trials
  3. Informing clinical trial participants about study results
  4. Women’s views and experiences of two alternative consent pathways for participation in a preterm intrapartum trial: A qualitative study
  5. Recruiting patients as partners in health research: a qualitative descriptive study
  6. Identifying additional studies for a systematic review of retention strategies in randomised controlled trials: making contact with trials units and trial methodologists
  7. Methods for obtaining unpublished data
  8. Clinical features of Parkinson’s disease patients are associated with therapeutic misconception and willingness to participate in clinical trials
  9. Health research participants are not receiving research results: a collaborative solution is needed
  10. Health research participants’ preferences for receiving research results
  11. Why is therapeutic misconception so prevalent?
  12. Recommendations for the return of research results to study participants and guardians: a report from the children’s oncology group
  13. Oncology physician and nurse practices and attitudes regarding offering clinical trial results to study participants
  14. Search for unpublished data by systematic reviewers: an audit
  15. Patient and public involvement in data collection for health services research: a descriptive study
  16. Health researchers’ attitudes towards public involvement in health research
  17. Patients’ and clinicians’ research priorities
  18. Public involvement at the design stage of primary health research: a narrative review of case examples
  19. The impact of patient and public involvement on UK NHS health care: a systematic review
  20. Involving South Asian patients in clinical trials
  21. No longer research about us without us: a researcher’s reflection on rights and inclusive research in Ireland
  22. Willingness to participate in pragmatic dialysis trials: the importance of physician decisional autonomy and consent approach
  23. How important is patient recruitment in performing clinical trials?
  24. Recruiting hard-to-reach subjects: is it worth the effort?
  25. Fundamental dilemmas of the randomised clinical trial process: results of a survey of the 1,737 Eastern Cooperative Oncology Group investigators
  26. The research-treatment distinction: A problematic approach for determining which activities should have ethical oversight
  27. Leaving therapy to chance
  28. Use of altered informed consent in pragmatic clinical research
  29. A framework for analysis of research risks and benefits to participants in standard of care pragmatic clinical trials
  30. Public engagement on global health challenges

Health Advice Overload: What Should We Believe?

This article, written by Francesca Baker, featured in the September issue of Balance magazine. I spoke to Francesca whilst she was writing this piece, and she has included a few quotes from me, so I’ve republished it here with permission. Make sure you take a look at her blog and Twitter account to keep track of future articles from Francesca – she writes across a diverse range of topics.


The sheer volume of data available when trying to decide what’s good and bad for your health is overwhelming. So how do you know what to believe?

This is a world with easy access to a huge amount of information. Just about everything you could possibly want to know is available at the touch of a button, from what to eat or how much exercise to do, to the best way to raise a child, where to invest your money or who to vote for.

You want to make informed decisions and you’ve never had more information at your fingertips. Trouble is, it can actually make life really confusing.

If you’ve ever been unclear about whether butter is actually good or bad for you, tried to ascertain if the antioxidants in wine outweigh the hangovers, or ‘hacked’ your sleep to achieve a solid eight hours only to discover that seven hours is, in fact, what you should be aiming for, you’re not alone.

A 2014 study in the Journal of Health Communication: International Perspectives examined the effects of conflicting media information about fish, coffee, red wine and supplements.

The report raised ‘concern that exposure to contradictory health information may have adverse effects on cognition and behaviours.’ The more information people were exposed to, the higher the level of confusion they reported, which led them to make the wrong decisions.

Not to mention that evidence changes all the time, as more scientific discoveries are made. It’s difficult to believe that smoking was once deemed ‘healthy’ and 1950s adverts for cigarettes featured doctors encouraging the public to smoke.

In fact, in 1980, there were only seven dietary guidelines which Americans were encouraged to follow; by 2005, that had swelled to more than 40.

It’s not about quantity of information – the abundance of evidence can be empowering – but much depends on our ability to scrutinise its quality and how useful it is.

THE RANKING OF EVIDENCE

According to scientists Mark Petticrew and Helen Roberts in a study published in the BMJ, there is a ‘hierarchy of evidence’. They outline seven different levels of study, ranking them based on effectiveness, process, salience, safety, acceptability, cost effectiveness, appropriateness and satisfaction. At the top – the most rigorous and accurate – are systematic reviews and randomised controlled trials, followed by cohort studies, observational studies and surveys through to testimonials and case studies.

You see, it’s not only the type of evidence that matters, but where it comes from.

Dietary guidelines are drawn up by governments who also want to keep food manufacturers in business. Studies aren’t cheap to run and are often funded by parties with a vested interest in a positive outcome for their products.

The American Diabetes Association, for example, is one of many health groups which get funding from fizzy drink manufacturers – Time magazine reported last year that between them, Coca Cola and Pepsi gave money to 96 health groups in the US.

A study of 206 pieces of research that looked at the ‘Relationship between Funding Source and Conclusion among Nutrition-Related Scientific Articles’ found those sponsored by a food or drink manufacturer were four to eight times more likely to show positive health effects from consuming those products.

Often health claims or scientific breakthroughs are reported in the media without context. Heidi Gardner, PhD researcher and science communicator, believes ‘poor quality science is easily disseminated broadly and good quality science gets minimal coverage because researchers are open about the limitations of the work they’ve done.’ We want conclusive answers, and for science to provide them, but ‘that just isn’t possible with decent quality research – the best we get is ‘yes or no for now’.’

Helen West and Rosie Saunt from the Rooted Project, a scheme they co-founded when they ‘became tired of the nutri-b****cks filling our social media feeds’ stress the importance of looking at the whole body of evidence, rather than only that which supports your personal belief. They see big problems in the health and wellness industry, where qualifications are not regulated in certain fields, but recognise the public are starting to understand the importance of evidence-based nutrition and are ‘demanding credibility from the industry’.

THE SIGNAL OR THE NOISE

Rob Briner is scientific director at The Center for Evidence-Based Management. His advice is to think widely and deeply. ‘It is essential to get evidence from a range of different sources… because using information from just one source means it is more likely we will be using information that is either limited or biased, or both. The second thing we need to remember is to judge the quality of the information or evidence we obtain.’

It has never been easier to share your thoughts with the world via the internet. Technology means anyone can have a voice. While there are enormous advantages to this, it’s difficult to separate real expertise or verifiable news from opinion and ideas – to ‘hear the signal through the noise’ as Rob puts it. We’re also human and tend to believe things because other people do, or experience confirmation bias, where we search for information consistent with beliefs we already hold.

Dr Joseph Reddington is director of EqualityTime, a charity using critical thinking to solve social problems. He’s also active in London’s ‘Quantified Self’ movement which is based on daily self-tracking and says technology offers a chance to become your own expert. ‘Being able to fact check in real time empowers normal people with just enough truth to fight back,’ he says.

A yoga teacher specialising in prenatal and baby yoga, Hayley Slatter aims to help individuals find their own sense of wellbeing. Even with experience as a physio, a Masters in neuropsychology and additional qualifications in yoga and pilates, she finds the field overwhelming. The number of yoga teachers demonstrates the versatility of a yoga practice. But when you add endless articles, so-called expert bloggers and what Hayley calls ‘Instayogis’, showing the benefits of particular poses, classes and even nutrition, it’s difficult to know which practice is right for you. ‘I believe the common theme through all these yoga types is that a true practice requires a degree of self-awareness,’ she says.

Heidi Gardner agrees, ‘people tend to tune out of their own bodies in favour of trying to find evidence for what they should or shouldn’t be doing.’ She has been working with a nutritionist since ‘feeling overwhelmed’ with all the healthy living ‘evidence’ she was faced with. ‘I was relying on claims I’d seen to tell me what was healthy,’ she says. Stopping soaking up all the ‘evidence’ has made her happier and more relaxed – and probably healthier, too.

So, how do we gain that self-awareness? We might have to accept that there’s no formula for it. We’re all human, and after we’ve read widely and deeply, asked critical questions and considered all the evidence, sometimes the only thing to do is take a deep breath and jump in to what feels right for you.

FIND YOUR BALANCE:

HOW YOU CAN APPLY EVIDENCE TO YOUR OWN LIFE

Search for the best available evidence. As well as a degree of quantity, you need quality. Who wrote it? What do they have to gain? What is their experience?

Play the ‘why’ game and approach what you read and hear with a dose of healthy scepticism. ‘Asking “why?” repeatedly and focusing on making better-informed and not perfect decisions’ is important says Rob Briner, The Center for Evidence-Based Management.

IS IT IMPORTANT?

Use Petticrew and Roberts’ idea of salience, or how important something is. Basically, does what you’re investigating even matter? And why? Do we actually care about taking 10,000 steps a day, or whether we have 35 or 40 grams of protein? In the grand scheme of our lives, how much does it really matter?

THREE WISE THOUGHTS 

The Rooted Project has three questions to ask:

‘Is the claim based on a person’s story, or a scientific study?’ If it’s an anecdote, you can be pretty certain it’s not a fact and probably not applicable to the whole population.

‘Is the newspaper headline referring to one study or multiple?’ Single studies do not change public health advice.

‘Is it a human or animal study?’ You are not a mouse, rat or monkey. You can’t extrapolate data from animals to humans.

LISTEN TO YOUR HEART

Remember that intuition is itself a form of evidence. If your gut is telling you something, then you should listen to it. The more practiced we become in doing this, the more we will learn to trust our own instincts and develop self-awareness.

Introducing: Science On A Postcard

As well as my PhD I’m a freelance copywriter, currently working with 5 clients on a weekly basis. For some reason I decided that I just wasn’t busy enough, so I’ve also set up a little Etsy shop where I’m selling science postcards and prints. That little Etsy shop is called ‘Science On A Postcard’ and it’s also got its own Instagram page too.

How did Science On A Postcard start?

When I was younger I didn’t think I’d be a scientist – I planned on studying graphic design. I still like to doodle, and even throughout my science life I’ve injected creativity. I find drawing really relaxing. At the start of the year I started drawing, scanning drawings in to my laptop, and then messing about with them using Adobe Illustrator. Fast forward to a month or two ago, and I found myself really, really wanting to do this more regularly. Nothing career-changing, I just found it a really enjoyable way to get some head-space whilst communicating science at the same time. I bought an iPad Pro and Apple Pencil, and decided I should probably do something to encourage myself to keep up this little doodling habit I’d built up – and Science On A Postcard was suddenly a thing!
In fact, I was at London City Airport waiting for my flight back to Aberdeen after the ABSW’s Science Journalism Summer School when I set up the Instagram account and started making things a bit more solid.

Currently I have two prints/postcards designed and listed in my shop. I opened the shop last night and I had a whole 2 sales before I went to bed! Super exciting. I’m not looking to turn this into a job, or a substantial money-maker, I just really like doodling science. I also really love getting mail so this seemed like a cute little hobby to keep going in my spare time.

Over the next few months I want to design more prints/postcards that show other types of scientists – qualitative researchers, chemists, geologists, geneticists, planetary scientists.. etc etc. Eventually I’m hoping that there’s a whole range of postcards that can be used to briefly show what different scientific careers are like.

To see more from Science On A Postcard visit the Instagram page, or take a look at the Etsy shop. If you have ideas for science careers that you’d like to see presented in postcard format, please do get in touch and let me know!

Breaking Down The Silos – Global Evidence Summit 2017

I’m currently in Cape Town for the Global Evidence Summit; no doubt there will be a few blog posts over the coming days/weeks about what I’m up to over here, what I’m learning, and what it’s like to travel so far from home as part of my PhD. I wanted to write this post whilst it’s fresh in my mind – so it’s a little more research focussed than the exciting travel pictures I hope to bring you soon!

One of the main reasons I wanted to come to the Global Evidence Summit was for one of the themed days. Each of the conference days starts with a plenary, and then there are threaded sessions that allow you to explore a subject in more depth throughout the rest of the day. After a bit of a nightmare with flight delays and missed connections, I missed the first day of the conference. The second day was always going to be my highlight though; day 2 focussed on the ‘Evidence Ecosystem’ and ensuring that we improve the way the entire ecosystem of evidence generation works together, to ensure that evidence can lead to improved patient care.

The plenary session was absolutely brilliant, and certainly did not disappoint.

BREAKING DOWN THE SILOS: Digital and trustworthy evidence ecosystem

This plenary will set out to understand how explicit links between actors are needed – and now possible – to close the loop between new evidence and improved care, through a culture for sharing evidence combined with advances in methods and technology/platforms for digitally structured data.

The session featured talks from Chris Mavergames, Karen Barnes, Greg Ogrinc and Jonathan Sharples, and gave a really brilliant overview of what the evidence ecosystem is, the actors within it, and how it all links together. The cycle below gives you an idea of what I’m talking about (image taken from the MAGIC Project website).

My work, which focusses on improving the efficiency of clinical trials, fits into the ‘Produce evidence’ section at 9 o’clock on the cycle above. As passionate as I am about improving the production of primary research evidence – it is of absolutely no use whatsoever if the rest of the ecosystem doesn’t function properly. So, we need to improve the way we generate primary research evidence, but then ensure that that evidence is synthesised and reviewed effectively and efficiently too. In turn, that information can then be disseminated to clinicians, then to patients, the evidence can be implemented, and then we evaluate it and use it to improve practice. The cycle then begins again.

This plenary session then led to threaded sessions throughout the rest of the day – the first of which I was given the opportunity to co-chair. This was my first experience of co-chairing anything, so I was a bit nervous, but mainly really excited to listen to the speakers’ presentations and then be able to take part in the discussion that followed.

Threaded session 1 of Thursday was titled ‘The inefficiency of isolation: Why evidence providers and evidence synthesisers can break out of their silos’, and focussed on the journey from producing evidence to synthesising evidence.

The 4 talks in this session were:

  1. The problems of poor and siloed primary research – a funder’s view (Matt Westmore).
  2. New ways to access primary research data (Ida Sim).
  3. Data journeys from studies to accelerated evidence synthesis (Anna Noel-Storr).
  4. Connecting primary research and synthesis in education – experiences of operating in a linked system (Jonathan Sharples)

Honestly, these talks linked together really beautifully, and gave me lots to think about in terms of what I can be doing to try and make sure that my research is slotting into the wider evidence ecosystem in a more cohesive way.

Matt gave a funder’s perspective on the problem of disconnected research, and explained what the National Institute for Health Research (NIHR) in the UK are doing to combat these problems. He also gave Trial Forge a shout out too which is always welcome!

Ida’s talk showcased Vivli and explained why it’s so important to share clinical trial meta-data to ensure that we’re not duplicating effort. I’d never heard of Vivli before I’d started doing research into the speakers on the panel, so this was a really interesting session.

Anna gave a fantastic talk on the journey that data takes from studies all the way through to evidence syntheses – the image below shows a slide that she used to explain what evidence synthesis is used for; I thought it was a really good way to communicate the concept, so I’ve included it here.

Anna is also heavily involved with Cochrane Crowd – a platform that allows volunteers to help to categorise evidence to ensure that evidence syntheses are more efficient. It’s a brilliant platform and one that I’ve contributed to, and will continue to contribute to (probably when I have more time post-PhD though!).

Jonathan then impressed us all with his experience of doing research in the UK education sector. Education is clearly an entirely different beast than healthcare is, but the work that Jonathan and the rest of the team at the Education Endowment Foundation have done really is astounding. I think there’ll be lots of healthcare researchers dissecting the work they’ve done in an effort to try and translate some of their successes into the world of evidence-based healthcare.

I’m not going to go into detail about the rest of the threaded sessions because I’ll be here all day, but as expected they were great. The way that this conference has covered different topics and themes has been so useful, but also totally overwhelming – there is just so much going on! If you’d like to know more about the other threaded sessions, and the Global Evidence Summit as a whole, take a look here.