However tempting it is just to dive in and set projects up on the social web, you need to stop and think about how you will measure and evaluate success. It’s a big part of using these new tools to have a positive impact rather than just creating empty buzz.
I have been doing a few things this week that all tie together to make me think about evaluation. I’m right in the middle of writing my research proposal and so am having to focus on how to evaluate the impact of the CitizenScape approach in an academically rigorous way. I also took part in an MJ round table event about the way the social web is being used by Local Authorities, and finally I was helping with the judging of the LGComms reputation awards. All of these things highlight the importance of figuring out how to measure the impact and effectiveness of using web 2.0 sites and technologies, and the need to bring some discipline to the process. In many ways this is a reflection of the fact that these technologies and sites are entering the mainstream – after all, if the Prime Minister can make a t*t of himself on YouTube then the possibilities for Councils are endless!
What makes a good evaluation?
This is probably stating the obvious, but the key to good evaluation is knowing what you want to achieve in the first place. I think that experimentation is a perfectly good reason in its own right to try something. It’s obvious that Local Authorities need to get involved in the online world, that the social web phenomenon is now too big to ignore, and I have a huge amount of respect for the Councils who are making forays into this world. However, without systematic evaluation of the impact of these trials we are just dabbling and not really learning. It’s the difference between skimming the headlines and sitting down to read a book on current affairs – you may be able to deliver the sound bites but you won’t have any particular depth of knowledge. Clearly I am a bit biased here, as I take the importance of evaluation so seriously I am doing a PhD on it – but still: evaluation matters.
What can you evaluate?
So – how can we evaluate social web projects? Many people seem to be looking at the traditional web metrics of counting things: numbers of people joining a Facebook group, number of followers on Twitter, number of views or comments on YouTube. This is one approach, but if you go back to the question of what you are trying to achieve, then the only question you can really answer with these basic metrics is “did more people see my content?” – it’s an advertising-eyeballs evaluation. For many marketing campaigns this might be enough, but if what you are really trying to do is reach ‘hard to reach’ groups or encourage some kind of participation, then you are missing both demographic and impact assessment data. The absence of traceable / checkable demographic data is probably the biggest frustration here, and one of the main reasons why I think it will remain impossible to carry out deliberative debate on these sites – or at least deliberative debate which can then count strongly towards the decision-making process. It’s also one of the reasons that I think the Virtual Town Hall approach is a better bet. The issue of impacts is also an interesting one. You can probably judge whether or not the numbers of people – the metrics – have affected the decision, but how can you measure whether you have affected the people? If you are trying to increase democratic participation then you probably need to know whether your interventions have left people more or less motivated to participate in the future.
Finding richer data – not just a head count
Richer data of course means more work. You probably need to run a survey (and hound people to answer it), and you should also run some actual focus groups (yes – face-to-face evaluation of an online project – oh the irony!). My basic plan is to establish a baseline of participation, both in democracy and online generally, from as large a group as possible initially, and then to re-sample this group at the end of the project (and again in the middle if the elapsed time is more than a few months). I will use this survey as a recruitment tool to find out who is willing to either be interviewed or join a focus group. Simply put, this approach breaks down like this:
Web metrics will show you how many actions have been carried out
Surveys will show you who has done this and some basic motivations for their actions
Interviews will allow you to get a sense of changes in attitudes
Hopefully this balances the need not to overburden the team with work against the need to actually find out more about the people involved and their reasons for taking part. I am currently working on a baseline questionnaire and hope to have it out in the world fairly soon.
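To make the baseline/re-sample idea concrete, here is a minimal sketch in code. Everything in it is hypothetical – the respondent IDs, the participation scores, and the simple mean-change summary are all invented for illustration; a real study would score participation from the actual questionnaire.

```python
# Hypothetical sketch: compare self-reported participation scores
# between a baseline survey and a follow-up of the same respondents.
# All data and scoring here are invented for illustration.

baseline = {"r1": 2, "r2": 0, "r3": 3, "r4": 1}   # respondent id -> score
followup = {"r1": 3, "r2": 1, "r3": 3, "r5": 2}   # r4 dropped out, r5 is new

def mean_change(before, after):
    """Average score change for respondents present in both waves."""
    common = before.keys() & after.keys()
    if not common:
        return 0.0
    return sum(after[r] - before[r] for r in common) / len(common)

print(mean_change(baseline, followup))  # positive = participation rose
```

Note that only respondents present in both waves count towards the change – which is exactly why re-sampling the same group (rather than surveying a fresh one) matters.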
Analysis: Find a framework and stick to it
So – now we have a lovely lot of data, what are we going to do with it? The chances are you will not be thinking of one large pilot but of a series of smaller projects, in which case a standard evaluation framework (and consistency across your survey questions) will help make the data collected across pilots comparable, and will also allow you to draw some conclusions about whether you are having an effect on your population. In my research I intend to translate the ladder of engagement idea into something which relates more closely to formal democracy, and then to define online activities which have equivalence (where appropriate) with offline democratic actions. The underlying idea here is one of progression: you plot where people are in terms of democratic engagement at the start of the project and then see whether or not they have moved over the course of your interventions. Because you are gathering qualitative data as well as the easier quantitative stuff, you can find out more about people’s motivations and their attitudes to the process.
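The progression idea above can be sketched very simply. The rung names and snapshot data below are hypothetical placeholders – the real ladder would come from translating the ladder of engagement into formal-democracy terms, as described above.

```python
# Hypothetical sketch of the 'progression' idea: place each person on a
# rung of an engagement ladder at the start and end of the project, then
# count how many moved up, stayed put, or moved down.
# Rung names and data are invented for illustration.

LADDER = ["none", "informed", "consulted", "participating", "leading"]

def rung(level):
    """Position of a rung name on the ladder (0 = lowest)."""
    return LADDER.index(level)

def movement(start, end):
    """Summarise movement between two {person: rung-name} snapshots."""
    summary = {"up": 0, "same": 0, "down": 0}
    for person in start.keys() & end.keys():
        diff = rung(end[person]) - rung(start[person])
        if diff > 0:
            summary["up"] += 1
        elif diff < 0:
            summary["down"] += 1
        else:
            summary["same"] += 1
    return summary

start = {"a": "none", "b": "informed", "c": "consulted"}
end = {"a": "informed", "b": "informed", "c": "participating"}
print(movement(start, end))  # {'up': 2, 'same': 1, 'down': 0}
```

The quantitative summary only tells you that people moved; the interviews and focus groups are what tell you why.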
There are also all kinds of interesting social network analysis tools you can use to measure social capital – but these are probably a bit too much for everyday use.
Good value for money?
Just one final thought – though we would all like to do these projects for the love of democracy and the common good, the reality is that at some point we will be asked about value for money. This is a huge post in its own right, but the basics are:
For communications projects: Equivalent ad spend figures can be a useful starting point
For community engagement projects: comparisons of the cost of recruiting people to a process, or the cost effectiveness of running better-attended meetings with online support
For democracy engagement projects: Democracy costs! But you can make some comparisons between online and offline methods. If you look at the ‘cost of democracy’ formula (yes – councils do have one) then online methods compare well to offline ones
Where you can make comparisons with offline methods, online always looks more cost effective. The issue, of course, is that no-one wants to stop doing offline engagement – and nor should they. The trick is to ensure that your pilots are not only creating online effects but also enhancing the existing offline process – for instance by reducing the cost of recruiting a citizens’ panel or by ensuring that more people attend a public meeting.
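The cost comparison above boils down to very simple arithmetic. The figures in this sketch are entirely invented for illustration – real numbers would come from your own pilot budgets and the ‘cost of democracy’ formula.

```python
# Hypothetical cost-per-participant comparison between an online and an
# offline engagement method. All figures are invented for illustration;
# real numbers would come from your own pilot budgets.

def cost_per_participant(total_cost, participants):
    """Simple cost-effectiveness measure: spend divided by reach."""
    return total_cost / participants

offline = cost_per_participant(5000, 40)    # e.g. a public meeting
online = cost_per_participant(2000, 250)    # e.g. an online consultation

print(offline, online)  # 125.0 vs 8.0 per head in this made-up example
```

The per-head figure is exactly why online looks so cheap on paper – and why the fairer test is whether the online work also lifts attendance or cuts recruitment costs for the offline process it sits alongside.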
Well – this has been helpful for me as I will now try and write something very similar but far more detailed for my research proposal!