The role of health information management in monitoring and evaluation

What we need to know

The meaning of monitoring and evaluation (M&E)

Monitoring and evaluation (M&E) are the techniques we use to find out how well our health programme is achieving what it set out to do. We will originally have set objectives, i.e. the results we are aiming to achieve, which we may have recorded on the logframe (Chapter 7). M&E enables us to see how effectively we have reached those objectives. The techniques of M&E are one way to measure success, but other measures of success may be just as important.

Although M&E are bracketed together and are often confused, each has a specific meaning. Monitoring refers to ongoing assessment of our progress. It should be set up as part of our routine programme management and is ideally done by both programme and community members together. It uses the record systems we have built into the programme. Evaluation refers to a systematic review of the programme outcomes and impact, often at the end of a funding cycle. It often involves an outside evaluation team.

One helpful way to distinguish between M and E is that monitoring asks the question ‘Are we doing things right?’, and evaluation asks ‘Are we doing the right things?’. If monitoring is carried out well, evaluation will be easier.

Who benefits from M&E?

The programme itself

We often start with good ideas and ambitious objectives. As time goes on, these may get lost in day-to-day activities or problems (see Figure 9.1). M&E can highlight whether the programme is still on the right road, how far it has travelled, and how far it still has to go. In this way M&E forms part of the planning cycle (Figure 9.2). Regular monitoring will also identify problems early so they can be corrected, and improvements can be suggested.

Figure 9.1

The role of health information management in monitoring and evaluation

Evaluation helps everyone to see what they are doing and where they are going.

Reproduced courtesy of David Gifford. This image is distributed under the terms of the Creative Commons Attribution Non-Commercial 4.0 International licence (CC-BY-NC), a copy of which is available at http://creativecommons.org/licenses/by-nc/4.0/.

Figure 9.2

The role of health information management in monitoring and evaluation

The Planning Cycle showing the roles of M&E.

Reproduced courtesy of Eleanor Duncan. This image is distributed under the terms of the Creative Commons Attribution Non-Commercial 4.0 International licence (CC-BY-NC), a copy of which is available at http://creativecommons.org/licenses/by-nc/4.0/.

The community

M&E helps the community to see how the programme is working, and shows the benefits it is bringing. Community members will work with us in this process. We will also regularly feed M&E reports back to the community as a means of promoting understanding of the whole process. Findings and results will need to be presented in such a way that the community sees the benefits (and problems) and is motivated to participate in improvements (see Chapter 2).

Donors, sponsors and a wider audience

In practice, evaluations are often carried out because donors want confirmation that their money is being well spent. But all stakeholders—programme, community, donors, and government—should benefit from evaluation if it is well planned and carried out.

An evaluation showing good results can help our programme to become better known and a model for other programmes. We can use Twitter, Facebook and other forms of social media to make findings known to wider audiences. If it uses a rigorous methodology it can be published to share the learning and raise the profile of the programme.

Government

Governments may want to know what results the programme is achieving and whether it is reaching district and national targets. If we are involved in specific programmes, e.g. End TB, Roll Back Malaria, their co-ordinators will need our results. Civil society organizations involved in community-based health care (CBHC) are often able to achieve more effective results at community level than government. Evaluation (and the return of regular monitoring figures) should enable us to demonstrate this and increase our credibility (Figure 9.3). In turn, this will enable CBHC as part of civil society to be entrusted with more health tasks in national health programmes, which will be to everyone’s benefit.

Figure 9.3

The role of health information management in monitoring and evaluation

Know the percentages.

Reproduced courtesy of David Gifford. This image is distributed under the terms of the Creative Commons Attribution Non-Commercial 4.0 International licence (CC-BY-NC), a copy of which is available at http://creativecommons.org/licenses/by-nc/4.0/.

Some useful definitions

The definitions in Box 9.1 each have examples from a well-building programme.

Box 9.1

Useful definitions for M&E using examples from a well-building programme

Activities: What is actually done

Evaluation: An assessment, at a specific time, of a programme’s outcomes and impact

How water use in the village has changed.

How the wells have influenced household hygiene and sanitation.

Monitoring: Continuous process to record, reflect and use information regarding progress

Use of resources, activities completed, progress towards programme objectives.

Indicators: Evidence or signs that change has taken place

Quantitative indicators are those that can be measured or counted

Number of people using the wells.

Qualitative indicators are those gained by observation

Local people’s views about the wells.

Goals: Long-term aims for impact

To improve health in target population.

Objectives: Results the programme is expected to achieve

To increase the amount of clean water used in village households.

Inputs: Physical and human resources used within the programme

Outputs: What is produced as a result of completed activities

Functional village wells.

Impact: Long-term and sustainable change resulting from an activity

Long-term improvements in the health of local people, social relationships in the village, and the position of women.

Outcomes: The effect on the original situation due to the programme

Increase in health through fewer households experiencing water- and hygiene-related illnesses.

Source: data from CSSDD Myanmar/Myanmar Baptist Convention. This box is distributed under the terms of the Creative Commons Attribution Non-Commercial 4.0 International licence (CC-BY-NC), a copy of which is available at http://creativecommons.org/licenses/by-nc/4.0/

Who should perform M&E?

The section on participatory appraisal (PA) in Chapter 6 should be read alongside this section. As the project moves into M&E, PA can give rise to participatory M&E (PM&E).1

The title of the book Nothing about us without us2 provides a slogan to remind us that the community needs to be involved closely at every stage, rather than being marked by outsiders as though they were taking an examination. This is especially important when vulnerable community members are monitored, as with people living with disability and those with mental health issues. It is essential that these community members are involved in, and feel empowered by, the processes of M&E.

Monitoring can be done by the health team and the community together, but it will need to be planned carefully using appropriately simple participatory techniques.3 For example, a well-digging programme in Myanmar asked the community to make a chalk-mark each time they used the well, to record what times of day it was being used.4 Evaluations can sometimes be done by community members alone. To be effective, these ‘insider evaluations’ need to be small scale and analyse a limited number of programme activities. The team would need to have, or be taught, the necessary skills, and good monitoring systems would need to be in place to feed into the evaluation. The term community-based participatory evaluation is sometimes used.5

In practice, final impact evaluations usually involve outside experts who come to work alongside the health team and community. The success of using outsiders depends on several conditions.

Firstly, the evaluation must be planned in advance. It may last one or two weeks, and will need to be done at a time of year when neither the health team nor community is overworked, nor the weather too extreme. Essential programme activities should continue, not least because evaluators will want to observe the programme at work.

Secondly, the evaluators must have clear terms of reference, i.e. know exactly what they are meant to be doing, and also what they are not meant to be doing. They should be sensitive to the local culture, and have an affirming attitude. Terms of reference should be agreed between evaluator, programme and any donor agency or government department involved, written down and signed by all involved. Evaluators should be carefully briefed both by donor and programme before starting work.

There are several advantages of using outsiders. These include involving experts with special skills who will be able to advise on effective methods of carrying out the evaluation. Because they lack bias, results from outsiders may be more accurate, as the evaluators have no personal interest in the achievements of the programme. Finally, outsiders may receive more accurate feedback, as community members may be readier to tell outsiders how they really feel.

Disadvantages of using outsiders include higher costs in terms of time and money, although a donor agency often funds the evaluation. Also, the visiting experts may not know the local customs, language, or situation. It is therefore helpful if evaluators are familiar with the country, region, and type of programme they are evaluating. Finally, published material from evaluations or visits may be politically insensitive or unhelpful to the community. If the evaluation includes any research that may be written up, this must be clearly discussed beforehand.

Some pitfalls to avoid

Monitoring too little

Many programmes go from year to year without M&E. Annual reports are still written, with patient numbers, immunizations, and procedures carried out. Such figures may accurately record the activity being carried out, but say little about whether the programme is effective or meeting its objectives.

In practice it is quite easy for a programme to get ‘out of control’. It is easy to become overwhelmed by challenges and needs; record systems and reports can turn into a nightmare. Obviously, any lack of quality in reporting or recording information is serious and we need to act, including being honest with any agency that is helping to fund the programme. The list of questions in Box 9.2 was designed by one funding agency to help programmes in this situation to focus on key reporting criteria and develop their forward planning.

Box 9.2

Informal evaluation questions

Project impact on local community

What effect does the presence of the project have on the local community? (If the project did not exist, what would happen?) A story may help to show the project impact.

Disease prevalence and project impact

What do project staff consider the three most common diseases in the project area? What effect has the project had on the prevalence of these diseases during the last three years? (Is there any statistical evidence?)

Community involvement in project

Is the local community involved in this project? If yes, how is their involvement facilitated e.g. through village health committee, community volunteers, etc. How often do project staff meet with community members? If there is no community involvement, why? Are there any plans to increase this? If not, why?

Community volunteers

Do volunteers from the community work in the project, e.g. as voluntary health workers (VHWs) or HIV/AIDS home-based care workers (HBCWs)? Approximately how many are currently active, and how many were newly trained in the last year?

Current project problems and challenges

What are the three most serious problems which have a negative effect on the running of the project? How have these problems been addressed? How successful have these efforts been? What do project staff consider would be needed to solve them?

Do you expect any future changes in project area?

E.g. economic or political changes, climatic changes, staff 'brain drain' e.g. to rich countries, or changes in disease prevalence e.g. HIV/AIDS, etc.

Future plans and priority objectives for the next year

What are project staff’s 3 priority objectives for the project? What may stop them implementing these objectives? How do they plan to overcome these problems?

Thank you for completing this form.

Monitoring too much

Some programmes go to the opposite extreme, which can happen especially if they are very bureaucratic or run by managers interested in statistics. Collecting figures and producing good reports becomes more important than working for long-term improvements in the community.

Sometimes programmes are required to provide huge numbers of reports. This happens particularly if they are involved in special programmes such as EPI (immunization), End TB, or Roll Back Malaria. We must therefore ask donors to request only vital information, and we should only agree to evaluations that are genuinely useful for the programme. Programmes should clearly negotiate with donors, and consider refusing funding if it is tied to a very heavy monitoring schedule or to many new indicators.

Note that the quality and value of M&E decreases with every additional indicator, while the cost in terms of money, time and distraction increases. Thus, the role of a programme manager is to resist adding indicators. One suggestion is that indicators should be reviewed every year for usefulness and any ineffectual or burdensome ones discarded.

Ignoring the needs and opinions of the poor

When we gather qualitative information, the articulate and well-off usually do most of the talking, while the poor may have less chance to express their views. The process of identifying those who do and don’t benefit is known as ‘equity-based disaggregation’, a term often used by donors. Sometimes the overall health of a community improves but the health of the most vulnerable stays the same, or worsens. We can monitor this by keeping separate figures for different socio-economic groups. Similarly, we can keep separate figures for men and women, or for different language groups, or for those living with disability. We will need to think carefully about the most appropriate categories to use for our situation. Most importantly, we must ensure that any evaluation considers the impact of the programme on those in the community who are the most marginalized.
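The idea of keeping separate figures for different groups, rather than a single community-wide figure, can be sketched in a few lines of code. This is an illustrative sketch only, with invented data and our own variable names; it is not a method prescribed by the chapter:

```python
from collections import defaultdict

# Each record: (socio-economic group, fully immunized or not).
# Figures are invented purely for illustration.
households = [
    ("better-off", True), ("better-off", True), ("better-off", False),
    ("poorest", False), ("poorest", False), ("poorest", True),
]

totals = defaultdict(int)   # households per group
covered = defaultdict(int)  # fully immunized households per group

for group, immunized in households:
    totals[group] += 1
    if immunized:
        covered[group] += 1

# Report the indicator separately for each group, so that a good
# overall figure cannot hide a poor result among the most vulnerable.
for group in totals:
    pct = 100 * covered[group] / totals[group]
    print(f"{group}: {pct:.0f}% fully immunized")
```

The same disaggregation can be applied to any quantitative indicator: by sex, language group, or disability status, whichever categories fit the situation.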

We should also be aware of the term ‘cross-cutting themes’ or ‘cross-cutting issues’ in evaluation. Many large donors identify certain cross-cutting themes that should be integrated into project design and evaluation.6 Typical themes are gender, equity, inclusion, disability, environmental sustainability, etc. Even if cross-cutting issues are not part of our programme’s funding requirements, it is often useful to assess what cross-cutting issues are most important in terms of the community’s health, then apply these to the programme’s objectives, and measure them effectively.

Results of the evaluation are used wrongly or not at all

If an evaluation produces negative findings, these should be shared fully with the team. Only then can everyone acknowledge the issues raised, learn from them, and make improvements. At the same time, blame and criticism must be avoided. Any positive findings should be shared more widely, especially with those who are mainly responsible. In some circumstances, it is prudent for the programme leaders to shoulder any blame, and for the community, government, or political leadership to receive any praise. In terms of negative findings it is worth asking whether these are simply the result of unrealistic goals, and consider creating new, more attainable goals.

What we need to do

How to choose what to monitor or evaluate

There is a long list of programme activities we could monitor or evaluate. Usually, however, we need to choose some key activities that reflect our programme objectives. Most chapters in this book set out aims for various programme activities and give ideas of what to evaluate. Table 9.1 is a sample M&E chart that includes some of these and lists more. Whatever the scale of our programme, we must be careful to set up simple but precise recording systems at the very beginning. These systems will lay the foundation for evaluation in the future.

Subject | Examples of indicators | Possible sources of information, written or digitized

CHW work

(Chapter 8)

 

Percentage of all patient attendances seen by CHW.

Percentage of community homes visited on average once per week.

Level of CHWs’ knowledge about prevention and cure of common illnesses.

Percentage of families or individuals able to prevent and self-treat selected illnesses e.g. diarrhoea, scabies.

Level of satisfaction of community with their CHW.

 

CHW records

Clinic attendance register

Spot survey

Questionnaire

Questionnaire

Questionnaire PA methods

 

Use of essential medicines

(Chapter 11)

 

Percentage of CHWs or health centres with at least x number of doses of all medicines present and in-date at time of inventory.

Percentage of CHWs or health centres with regular supply of essential drugs, e.g. antibiotics, antimalarials.

Percentage of population with reliable access to affordable essential drugs.

Percentage of population reporting that they went without a medicine in the previous six months because unavailable or unaffordable.

 

Inventories

Spot surveys

Special surveys

CHW and clinic records

 

Child nutrition

(Chapter 14)

 

Percentage of under-fives who are underweight.

Percentage of children between six and sixty months with MUAC under 125mm.

 

Child’s growth card

CHW notebook

MUAC charts

Family folder insert card

 

Immunizations (Chapter 15)

 

For selected immunizations, e.g. measles, DPT, rotavirus, etc., percentage of under-fives who have completed course (in past one, three, or five years).

For BCG, percentage of under-fives with BCG scars.

For tetanus toxoid, percentage of women at delivery who have had three or more injections.

Incidence rates of some diseases for which there are immunizations available.

 

Immunization register IMCI returns

Family folder and insert card

Mother’s home-based record card

Immunization register

CHW/TBA records

Returns from Partnership for Safe Motherhood

Disease register

Clinic and CHW records

Special survey

 

Control of diarrhoea, malaria

(Chapter 16)

 

For diarrhoea, percentage of families using ORS as first-line treatment.

For malaria, number or percentage of children five or under who slept under a bed net the previous night.

Number or percentage of children five or under dying from malaria.

 

Special survey

Roll Back Malaria records

Clinic records

 

Maternal health (Chapter 17)

 

Percentage of mothers attending for four or more antenatal checks in clinic or with trained midwife.

Percentage of newborns weighing 2500g or less or with MUAC of 8.7cm or less.

Percentage of babies delivered by midwife, trained TBA, or skilled attendant, using sterile delivery pack.

 

Mother’s home-based record card

TBA/CHW records

Returns from Partnership for Safe Motherhood

Mother’s home-based record card

 

Family planning (Chapter 18)

 

The contraceptive prevalence rate.

(Number of women aged 15–49 (or partners) using contraception ÷ Total number of women aged 15–49) × 100

Average space between children

 

FP Register

Family folder and insert cards

Family folder

 

Control of TB

(Chapter 19)

 

Follow the National TB Programme Aims and Targets, recording TB cases and outcomes.

 
 

Use of clean water, waste disposal

(Chapter 21)

 

Percentage of families using clean water source within fifteen minutes’ walk from house.

Percentage of families with all family members using latrine.*

 

Family folder

Other surveys

 

Abuse of tobacco,

alcohol, drugs

(Chapter 22)

 

Percentage of population aged e.g. 10 or 15 and over who admit to use.*

 

Family folder

Special surveys

 

Note: the following section is more complex, and many of these topics will only be possible in larger programmes with good outside support and technical help. Further indicators for large-scale programmes can be found in the indicator list of Sustainable Development Goal 3.

 

Infant mortality rate

 

This is:

(Number of deaths under 12 months ÷ Number of live births) × 1000, per year

 

Vital events register

Family folder

CHW records

 

Under-five mortality rate

 

This is:

(Number of deaths of children under 5 ÷ Total number of under-5 children at mid-year) × 1000, per year

 

Vital events register

Family folder

CHW records

 

Maternal mortality ratio

 

This is:

(Number of maternal deaths from a pregnancy-related cause, during pregnancy, delivery, and up to 42 days after delivery ÷ Number of live births) × 1000, per year

 

Vital events register

Duplicate mother’s home-based record

Family folder and insert cards

Clinic and hospital records

Partnership for Safe Motherhood

 

Adult or Female Literacy Rate

 

This is:

(Number of adults (or women) aged 15 or over who can read and write ÷ Total number of adults (or women) aged 15 or over) × 100

 

Family folder

Special survey

 

Cost effectiveness

 

This is, broadly:

Total cost of project ÷ Number of people covered by CBHC

But needs calculating with the aid of a health economist.

 
 


Notes:

* = Accurate definitions needed according to the culture of the programme

Family folder refers to the survey or resurvey done using the family folder. Full information on each family appears on the outside of the folder (see Chapter 5; Appendix C). Much of the most useful information for evaluation is best collected during house-to-house surveys using the folders.

There are several different record systems used by different programmes. Clinic records refers to any records or registers kept in clinics not otherwise specified. Most will now be computerized.

For some subjects more than one indicator is usually listed, although in practice only one would normally be chosen for any single evaluation.

Most information, from whatever source, would be tabulated annually and stored in the master register or computer.

For national programmes, official data collecting forms should be used.
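The rate formulas in Table 9.1 translate directly into simple helper functions. The sketch below uses our own hypothetical function names and invented example figures; each function simply encodes the fraction given in the table:

```python
def infant_mortality_rate(deaths_under_12_months, live_births):
    """Deaths under 12 months per 1000 live births, per year (Table 9.1)."""
    return deaths_under_12_months / live_births * 1000

def under_five_mortality_rate(deaths_under_5, under_5_at_midyear):
    """Deaths of under-5 children per 1000 under-5 children at mid-year."""
    return deaths_under_5 / under_5_at_midyear * 1000

def literacy_rate(can_read_and_write, total_aged_15_or_over):
    """Adults (or women) aged 15+ able to read and write, per 100."""
    return can_read_and_write / total_aged_15_or_over * 100

# Invented example: 12 infant deaths among 400 live births
print(infant_mortality_rate(12, 400))  # 30.0 per 1000 live births
```

Most information for these denominators would come from the vital events register, family folders, or CHW records, as listed in the table.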

We will probably have constructed a logframe (see Chapter 7) or a similar, less-detailed framework, and this can be used as a guide to help set up monitoring. We can then adjust the logframe based on the results of monitoring. We must remember the logframe not only sets out our goals and objectives but is also meant to be descriptive, so it must be updated with changes and developments on a regular basis. The logframe will need to be revised or even rewritten after evaluations.

For evaluation, we should compare our current position with our baseline situation or conditions, which we will have established at the time the programme started. Alternatively, we can compare how things are in the programme area with how they are in a ‘control’ area that we also surveyed originally but in which we have not been working. In practice this can be difficult as it is usually inappropriate to make measurements without making improvements. One way of doing this is to plan a roll-out to the ‘control area’ as the second phase of the programme.

In summary, for M&E we need to select a few key areas that are important to monitor, easy to record, and helpful for planning and evaluation. This is best done at the planning stage of the programme.

Select the most appropriate indicators

Having chosen criteria for M&E, we must now decide how to measure them. Indicators are best defined as a measure or evidence of progress towards an agreed target. Table 9.1 provides examples of indicators.

Ideally, we should find out from the district health officer or a local civil society organization programme which indicators are most widely used where we are working. This avoids duplication of effort and helps make comparisons across programmes. We can also use other sources; e.g. if we are engaged in a national HIV/AIDS programme, we should be aware of the indicators drawn up by UNAIDS in 2016.7

If our programme is large and well established we will need to be aware of the indicators used for the Sustainable Development Goals (usually Goal 3) that relate to our activities. Often however these are ‘big picture indicators’ and more useful for population-wide measurements. If we are involved in any national programmes we will need to follow the indicators set up by these programmes.

The way we collect data for our indicators is also important. It may be paper-based, but can increasingly be done digitally (see Chapter 26).

It is helpful at this stage to distinguish between two different types of indicator. The first is quantitative, numerical, or ‘hard’ data (numbers, rates, and percentages) and the second is qualitative, descriptive, or ‘soft’ data (knowledge, attitudes, practice, satisfaction levels, stories). Both types are important in CBHC. We must absolutely not disregard soft indicators as being inferior; on the contrary, they are increasingly seen as essential.

An example of quantitative indicators

Let us say that we want to monitor our DPT immunization programme over the past year. The indicator chosen must give the most accurate measurement of what we really want to know. Some possible indicators we could use include a) the total number of DPT injections given during the past year; b) the total number of children under five who received DPT injections during the past year; c) the total number of children under five who completed courses of DPT during the past year; and d) the percentage of children under five in the community who completed courses of DPT during the past year (See Figure 9.3).

As we move from the first to the fourth indicator, each becomes increasingly specific, and therefore increasingly useful. While the first indicator (a) is commonly used in annual reports, it has only limited value. What we really want to know is how completely we have immunized our target population, which means that (d) is the most appropriate. These four indicators are all examples of input indicators, i.e. they measure activity carried out. But we could use a very different indicator.

We could also measure output or impact indicators, which measure the effectiveness or impact of the programme. Thus, we could measure e) the number of children who suffered from the diseases diphtheria, pertussis, and tetanus during the past year, or f) the percentage of children in the community who suffered the same diseases in the past year. Because our ultimate aim is to eradicate these three diseases from the population, f) is therefore the best indicator of all.
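The difference between a raw activity count, indicator (a), and a coverage percentage, indicator (d), can be made concrete with a small sketch. All figures and variable names here are invented for illustration:

```python
# Invented example figures for one year of a DPT programme
injections_given = 950           # (a) total DPT injections given
children_completed = 280         # (c) under-fives completing the full course
under_fives_in_community = 400   # denominator for coverage

# Indicator (d): the share of the target population fully covered,
# which tells us far more than the raw count in (a).
coverage_pct = 100 * children_completed / under_fives_in_community

print(f"(a) {injections_given} injections given")
print(f"(d) {coverage_pct:.0f}% of under-fives completed the course")
```

The raw count in (a) could rise simply because the population grew, while coverage stayed flat; only (d) relates activity to the objective.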

Impact indicators often take time to show any improvement, so they may not be useful in indicating achievements over the shorter term. However, in CBHC, we aim, with the community and government, for long-term sustainable and impactful programmes, so ultimately our indicators should reflect this.

It is worth bearing in mind that quantitative indicators are more useful if the data can be disaggregated to make comparisons, e.g. whether more boys are immunized than girls.

Examples of qualitative indicators

The use of qualitative indicators is especially valuable in CBHC. Measuring the community’s perception of the improvements and interventions, i.e. community response, is important and not difficult to do.

For example, we want to measure the extent to which people value their community health worker (CHW). We can ask a question like ‘What do you think of your CHW?’ or, more specifically, ‘What do you think of the way the CHW provides services for mothers and children?’ Instead of a free-form response, we ask for a rating from 1 to 5, where 5 is ‘Very helpful’, 3 is ‘OK’, and 1 is ‘Unhelpful’. Answers can be tabulated as set out in Table 9.2.

Table 9.2

Differences in community member assessments of their CHW over a three-year period; numbers indicate the total number of people choosing that category (e.g. 2 = two people)

Attitudes to CHW         Very helpful   Helpful   OK   Not very helpful   Unhelpful
One year after appt            2            5      6           8              4
Three years after appt        11            9      7           4              1

Adapted with permission from Tearfund. Increasing our impact. Footsteps.(50).Copyright © 2002 Tearfund. This table is distributed under the terms of the Creative Commons Attribution Non-Commercial 4.0 International licence (CC-BY-NC), a copy of which is available at http://creativecommons.org/licenses/by-nc/4.0/
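
Tallying survey answers into a table like this is straightforward to script. A minimal sketch, using invented responses and the category labels from Table 9.2:

```python
from collections import Counter

# Invented responses on the five-point scale used in Table 9.2
responses = ["very helpful", "helpful", "helpful", "OK",
             "not very helpful", "very helpful", "unhelpful"]

categories = ["very helpful", "helpful", "OK", "not very helpful", "unhelpful"]
counts = Counter(responses)

# Print one row of the table, category by category
for category in categories:
    print(f"{category:>18}: {counts[category]}")
```

Running the same tally on responses collected one year and three years after appointment gives the two rows of Table 9.2 directly.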

In another example, perhaps we want to evaluate ways in which women’s status has changed as a result of the programme’s work. We would provide statements to the community members about how women’s status has changed, and these statements might include:

Women are better able now to participate in family decision making than when the programmes started.

Women have a better opportunity to take on leadership roles in the community.

Women are better able to participate in village affairs.

Community members are asked to select one of the following options to indicate their response to each of the statements: strongly agree, agree, not sure, disagree, strongly disagree. Open questions such as ‘What do you think about the following statement?’ are also useful, provided responses are recorded accurately. Alternatively, a question could simply invite a yes or no answer.

It is worth noting that programme evaluators differ somewhat in how they use the words qualitative and quantitative. The examples above that rank responses or invite yes/no answers could be seen as semi-quantitative, because they generate numerical data such as satisfaction levels. Note that the term quality indicator is also sometimes used; it refers to measuring the quality of a programme, whatever method is used.
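
Such semi-quantitative scoring can be sketched as follows. The five-point mapping and the responses are invented for illustration; any real scoring scheme should be agreed with the evaluation team beforehand:

```python
# Hypothetical mapping of agreement categories to scores (5 = strongly agree)
scale = {"strongly agree": 5, "agree": 4, "not sure": 3,
         "disagree": 2, "strongly disagree": 1}

# Invented responses to one statement about women's participation
answers = ["agree", "strongly agree", "not sure", "agree", "disagree"]

scores = [scale[a] for a in answers]
mean_score = sum(scores) / len(scores)
print(mean_score)  # 3.6
```

Comparing the mean score for the same statement at baseline and at evaluation gives a simple numerical measure of change in attitudes.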

Collect the data

The way we collect data will depend on our indicators, but for qualitative indicators a variety of methods is available beyond structured questionnaires, such as interviews and their transcripts, indicator surveys, focus groups, and stories.

We may want to collect more wide-ranging views from the community about the value of the programme. We may use focus groups or interviews to ask people about their feelings regarding the community health programme, and then transcribe their responses and analyse the words or phrases that recur. This way of analysing descriptive evidence is often known as coding, and is especially useful in drawing comparisons or showing improvement over a period of time. Adding quotes and comments is very valuable in finding a full picture of the community’s assessment.
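
A very simple form of this coding is counting how often agreed code words recur across transcripts. The excerpts and the code list below are invented for illustration; in practice the codes would be agreed by the evaluation team and applied to real transcribed responses:

```python
import re
from collections import Counter

# Invented interview excerpts standing in for transcribed responses
transcripts = [
    "The CHW visits us often and the clinic feels welcoming",
    "Clinic waiting times are shorter and the CHW listens",
    "We trust the CHW; the clinic staff explain things clearly",
]

# Hypothetical code list agreed during analysis
codes = ["chw", "clinic", "trust", "waiting"]

# Count every word across all transcripts, case-insensitively
words = Counter(w for t in transcripts for w in re.findall(r"[a-z]+", t.lower()))

for code in codes:
    print(code, words[code])
```

Real thematic coding goes well beyond word counts (codes are usually phrases or ideas, applied by a reader), but even a frequency table like this helps when drawing comparisons over time.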

When we are able to gather information in different ways and from varied sources it is known as triangulation, which helps to paint the most accurate picture. However, we must not make the process complicated as it can become very time-consuming.

Photos or videos of improvements, especially ‘before and after’ pictures, along with stories and anecdotes, can sometimes speak louder than words or statistics. A novel, participatory, and fun approach to evaluation is to use photovoice or videovoice. This is a structured method whereby community members are asked to take photos that represent what we are interested in (e.g. change in mental health, or attitudes to mental health) and this can stimulate some interesting discussions and even help to bring about behavioural change.

For further ways of gathering and processing information, see Chapters 6 and 7.

Analyse the data

Having completed our data collection, we must now analyse the data as quickly as possible so that it can be fed back to the various stakeholders, especially the community. A simple step-by-step approach is set out in Box 9.3.8 Other chapters, such as Chapter 6, describe these processes in more detail.

Box 9.3

A simple step-by-step guide to analysing data from the evaluation.

1. Reflecting

Think back to your evaluation questions, why you are doing the evaluation, who it is for, and what they want to know about your project.

2. Collating

This involves bringing together the information into a workable format. Quantitative data may need to be organized through statistical analysis or basic calculations (e.g. total numbers, averages, percentages of the total). Qualitative information needs to be organized thematically; the term ‘thematic analysis’ describes this process of identifying key themes or patterns.

3. Describing

You should provide a description of the facts that have emerged from the information gathered, e.g. what was delivered, how much, to whom, when, and where. Remember to describe both positive and negative findings.

4. Interpreting

Interpreting goes beyond describing the facts. Rather, it is about trying to understand the significance of your data and why things happened as they did. Look at internal and external factors that contributed to the project’s achievements; also consider any challenges or difficulties encountered.

5. Conclusions and recommendations

Draw out conclusions based on the strengths and weaknesses of the project. You can then begin to make recommendations for building on these strengths as well as addressing areas for improvement.

Source: data from Community Evaluation Northern Ireland. Prove & Improve: a self-evaluation resource for voluntary and community organisations. Copyright © 2017 Community Evaluation Northern Ireland. Available at http://www.ceni.org/evaluation-impact-measurement. This box is distributed under the terms of the Creative Commons Attribution Non-Commercial 4.0 International licence (CC-BY-NC), a copy of which is available at http://creativecommons.org/licenses/by-nc/4.0/
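
The basic calculations in the collating step (totals, averages, percentages of the total) amount to a few lines of arithmetic. The attendance figures and the target below are invented for the sketch:

```python
# Invented monthly clinic attendance figures for a six-month period
attendance = [120, 135, 150, 140, 160, 155]

total = sum(attendance)                      # total attendances
average = total / len(attendance)            # average per month
target = 1000                                # hypothetical six-month target
pct_of_target = 100 * total / target         # percentage of target achieved

print(total, round(average, 1), pct_of_target)  # 860 143.3 86.0
```

Keeping such calculations in a small script (or spreadsheet) makes them easy to repeat when the same figures are collated at the next monitoring round.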

Act on the result

Unless action is taken on the findings of M&E, the whole process is a waste of time and money. A good evaluation raises hopes that issues will be better understood by all partners, but lack of change in the situation will create disillusionment. The participatory evaluation9 process described earlier and under ‘Further reading and resources’ can help M&E become an effective learning process for the whole community, leading to change and improvement.

Use results widely

The results of regular M&E should be used as widely as possible and made known to all who will benefit from seeing them. It is common practice to provide different versions of a report for different purposes. A donor may not need specifics such as names. At the programme level, specifics may be needed, e.g. the names of those who have not completed immunizations, while the community may need a depersonalized summary, written in simpler language than an official report.

Respond to recommendations

Regular monitoring gives evidence that helps to fine-tune the programme. We should respond to any unexpected findings and change course if necessary. Reports from evaluations need handling with care as the health programme and community will be eager to hear results, may be anxious about new ideas and suggestions, and must be involved in the process through a range of discussions.

Before releasing any final report, a further meeting should be held with the programme team to go through the main findings and recommendations. This allows them to agree and validate the findings and discuss the issues in a safe environment. This also helps them to start thinking through changes that need to be made. Bearing this in mind, any outside evaluation should comment on successes and failures, recommend courses of action, suggest how changes can be carried out, and propose specific follow-up to ensure any recommendations are acted upon in some form.

The health team, with the community and guided by the evaluation, will replan the programme or any parts of it that need changing. Usually this will mean rewriting part of the logframe for a new programme phase. This will need to include revised objectives, inputs, budgets, and plans.

We may find it hard to respond positively if some of the findings are discouraging. We should aim to be organized enough to do something about it, brave enough to face up to failure and acknowledge any areas that need changing, and flexible enough to modify our programme.

Further reading and resources

U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, Office of the Director, Office of Strategy and Innovation. Introduction to program evaluation for public health programs: A self-study guide. Atlanta, GA: Centers for Disease Control and Prevention; 2011. Available at: https://www.cdc.gov/eval/guide/cdcevalmanual.pdf

Valadez J, Weiss W, Leburg C, Davis R. Assessing community health programmes: A trainer’s guide. Lusaka: TALC; 2007. A participant’s manual and workbook are also available.

World Bank. Sleeping on our own mats: An introductory guide to community-based monitoring and evaluation. Washington, DC: World Bank; 2002.

References

2. Charlton J. Nothing about us without us: Disability oppression and empowerment. Berkeley: University of California Press; 2000.

5. Braithwaite RL, McKenzie RD, Pruitt V, Holden KB, Aaron K, Hollimon C. Community-based participatory evaluation: The healthy start approach. Health Promotion Practice. 2013;14(2):213–19.
