
Heidi R. Gardner

Clinical Trials Methodologist | Health Services Researcher | Mixed Methods Researcher | Science Communicator | Evidence Enthusiast


Learnings From #uwescu17 #2: Evaluating Science Communication

December 21, 2017

Last month I attended the UWE Science Communication Masterclass, and I promised that I’d come back and write a few blog posts on the topics that we covered. This is the second of those posts – you can find the first one, which focussed on face-to-face communication methods, here.

Dr Margarida Sardo’s ‘Evaluation’ session was the part of the course that I was most intrigued by. Evaluating science communication is something I feel (or felt, before Margarida’s talk!) completely out of my depth with, so I entered the session with my notebook and pen in hand, ready to absorb as much information as possible.

With the public engagement events I’ve been involved in so far, we have tried to build in some evaluative component, but I wasn’t sure whether it was working, what we could do to improve things, or what others were doing in this area. This session was brilliant, and gave me the most ideas for what to change and implement in my own practice. The points raised, both by Margarida and in discussions with other attendees, could make a huge difference to the quality of science communication and public engagement activities – so I figured an overview of the session would be helpful for those who couldn’t make it.

Why do we evaluate?

Often, evaluation is the thing that’s missed out of the public engagement/science communication planning process – in my experience anyway. It’s very easy to focus your time on refining and developing ideas for activities, but to base that development on your own thoughts and conversations with other members of your team, rather than on a cohesive process of evaluation. So, if we’re developing activities anyway, why should we make evaluation a more structured part of our practice?

For ourselves

  • If we make evaluation more structured, we can disseminate our experiences and findings more easily, publicising our achievements and mistakes so that others can learn from them – and vice versa
  • Being able to prove, and improve, what we’re doing makes it easier to get funding now, and easier to sustain that funding for future projects
  • Reflective practice is central to the process of communication – if we don’t learn from what we do, we’re very likely to keep repeating our mistakes and reduce the utility of our activities

For participants/the public

  • Evaluation can become an integral part of the activity you’re running – it extends participation and can ensure that members of the public feel their voices are being heard and included within the conversation
  • Showing that we do evaluation can also be really important for participants: we’re increasing the transparency of our processes, and hopefully increasing trust too
  • The most important thing for participants – enjoyment. If we run an event and don’t know that people are finding it patronising, confusing or annoying, then we’re doing our participants a disservice

Without evaluation, how can we possibly say which part (or parts) of an activity are making a difference?

Types of evaluation

After Margarida covered the various reasons why evaluation is so important, she got to the exciting bit – the methods used to do that evaluation. There are so many different ways to do evaluation, and each has its own advantages and disadvantages. A few ideas are below:

Traditional methods

  • Quantitative
    • Questionnaires
    • Structured interviews
    • Observations (e.g. how many people were at the event, what age they were, etc.)
  • Qualitative
    • Semi-structured interviews
    • Focus groups
    • Observations (e.g. facial expressions of people attending, how people interacted with the event, etc.)

Non-traditional methods

  • Snapshot interviews
    Short, snappy face-to-face interviews (90 seconds to 2 minutes), with a structured schedule of clear questions that require quick answers.
  • Graffiti walls
    A wall or big piece of paper that the audience can draw or write on – they can leave thoughts, experiences, doodles etc. that you can analyse later.
  • Feedback boards
    Similar to graffiti walls, but with a central question (or sometimes more than one) that you’re asking participants to answer – e.g. ‘What would you improve about this event?’ or ‘What did you like best?’

    An example of a feedback board, from the MRC Laboratory of Molecular Biology Open Day.
  • Visitors’ book
    A good way of capturing the impressions and recommendations of audience members at a less interactive event – e.g. a show, play or exhibition. A more creative take on this could be a photo-booth-style format, with speech balloons and props, that can be shared on social media with a hashtag linked to your event.

Points to take away

Creativity is key – A big part of Margarida’s session was interactive group work, which was a really good way to bounce ideas off each other, and the main takeaway from those activities was that creativity is key. Evaluating science communication activities is still a relatively young field, so injecting your own creative ideas and sharing your experiences can be a brilliant way to develop new techniques.

Evaluation shouldn’t just be done at the end of a project – We should be doing evaluation right from the beginning of any science communication activity. That could mean assessing past events before tackling a new one, running pilot events to develop your idea effectively, and then working evaluation into the event itself too.
As well as this, it’s key that we don’t aim our evaluation solely at participants; it’s important to learn what scientists, researchers and science communicators have taken away from the event too. If the person running the event feels uncomfortable, it’s likely that the publics they’re speaking to will sense this and feel a bit on edge too.

Evaluation is a key component of science communication, and it’s really important that we work to implement it routinely – it shouldn’t be an optional add-on to a project, it should be a given.

I hope this summary of the session has been useful to those of you thinking of getting involved with science communication, and to those of you already involved with engagement activities who maybe aren’t embedding evaluation so much (yet!). Be sure to check back over the next few weeks for posts on the other sessions that made up this year’s UWE Science Communication Masterclass. A huge thanks to Dr Margarida Sardo from UWE for such a thought-provoking session!

 


