ADDITIONAL RESOURCES

evaluability checklist

Evaluability refers to the conditions under which a programme can be meaningfully evaluated and provide findings that contribute to recommendations for programme improvement.

The first consideration is that your programme is sufficiently mature for evaluation. This means that it has been running for long enough to be piloted and tweaked until you are satisfied that it is as good as you can make it. Any programme can be evaluated, but without these conditions being met, an evaluation will not provide good value for money. Indeed, it will be a waste of your money and time to undertake an evaluation if you are not ready for it!

There are four main criteria for evaluability:

  1. You must be able to specify your programme goals (e.g. the outcomes you expect for children), your objectives (steps along the way), and the external influences you cannot control that may affect your outcomes; you must also have information on your programme (e.g. its design, how it is run, the frequency of sessions, who delivers it, and their training).
  2. Programme goals and objectives are plausible. Ask yourself: Is my programme likely to realise its objectives and goals? Is the design informed by research evidence and/or substantial experience in delivering programmes such as ours?
  3. Relevant and sound data is available from the programme (e.g. sessions attended by children; their ages in months; relevant home background information).
  4. Those who have commissioned the evaluation agree on how the findings will be used.


programme descriptors checklist

The questions in this checklist provide organisations with a succinct list of data to collect on their programme which would enable better data analysis and feedback on programme effectiveness. All ELOM users should endeavour to collect the following information in addition to their ELOM assessment data. 

 

For each item below, record both the Intended and the Actual values for your programme; the examples show the kind of answer expected.

  1. What is your purpose statement?
     Example: We aim to improve expressive language skills in at-risk children in the Western Cape.

  2. Who is your target?
     Example: Primary caregivers, ECD practitioners, ECD principals, children (for children, specify age, gender and quintile), children with disabilities. Specify if there is more than one target (e.g. caregiver/child). For example: children aged 50 to 60 months, in quintile 3 schools in Cape Town. In practice we had some crossover of younger children of around 40 months (10 children).

  3. What is the context in which you work?
     Example: Deep rural / peri-urban informal area / urban formal housing / commercial farming, and in which province.

  4. What are the actual qualifications of your implementing agents?
     Example: NQF level 4 plus additional training in Early Childhood Development.

  5. What is the duration of your intervention?
     Example: 6 months, at 2 sessions per week, for a total target of 50 sessions.

  6. What % of the planned sessions was actually offered?
     Example: 40 sessions (80%) were offered; community unrest and a late start were responsible.

  7. Are the required materials available?
     Example: Yes – we have access to a safe venue and the tools for delivering the intervention.

  8. What training do you offer?
     Example: We offer four full-time weeks of workshops to all of our ECD workers before they interact with children.

  9. What supervision do you provide?
     Example: Weekly debriefings are conducted with all ECD workers, plus performance management.

  10. What is your enrolment for all beneficiaries?
      Example: 75 children aged 50 to 60 months.

  11. What is your drop-out rate?
      Example: 19 children (25%) left and did not return to the programme.

  12. What is your average attendance as a proportion of total sessions offered? (If there are multiple components or different sessions, list each.)
      Example: 40 children attended only 50% of the sessions delivered, 10 attended 75%, and 6 children attended all 40 sessions.

  13. How do you measure the quality of programme implementation?
      Example: We observe the delivery of 10% of our sessions, selected at random, and run regular curriculum practice sessions.

 

 
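Several of the descriptors above (items 6, 11 and 12) are simple percentages. As a sketch, the worked figures from the examples can be checked in Python; the helper `pct` and all variable names are invented for illustration, not part of the checklist:

```python
# Checking the illustrative figures used in the checklist examples above.

def pct(part, whole):
    """Percentage of `whole` represented by `part`, to one decimal place."""
    return round(100 * part / whole, 1)

# Item 6: sessions offered as a share of the planned total.
sessions_planned, sessions_offered = 50, 40
print(pct(sessions_offered, sessions_planned))   # → 80.0

# Item 11: drop-out rate.
enrolled, dropped_out = 75, 19
print(pct(dropped_out, enrolled))                # → 25.3 (the text rounds to 25%)

# Item 12: average attendance, weighted by children per attendance band.
bands = [(40, 0.50), (10, 0.75), (6, 1.00)]      # (children, share of sessions attended)
children = sum(n for n, _ in bands)
avg_attendance = sum(n * share for n, share in bands) / children
print(round(100 * avg_attendance, 1))            # → 59.8
```

Note that the three attendance bands in the example happen to account for exactly 56 children, which is the 75 enrolled minus the 19 who dropped out.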

Guidelines for establishing the effectiveness of your intervention using the ELOM

It is best evaluation practice to conduct a randomised trial to establish programme effectiveness. This is a rather costly and lengthy venture. If you have the funds, then do it if you think your programme is mature enough. There is another valid route, however, to establishing whether your programme improves child outcomes more than would have been likely had there not been a programme available.

This is the case when:

  • you have established the performance of the same population from which your programme children are drawn, on the same measure as you will use to measure outcomes (the ELOM); and
  • the measure used (in this case the ELOM) has a true zero on its scale (this applies to the ELOM Standard Scores on each Domain and on the Total).

So, we are fortunate that the performance of children in your programme can be compared with the performance of the children in the ELOM Age Validation sample for whom we have established Early Learning Development Standards (60th percentile on each ELOM domain). This means you do not have to enrol a control group to test the effectiveness of your programme.

Why is this the case?

  • The ELOM has been normed on a random sample of children between 50 and 69 months.
  • We have established the Medians (distribution mid-points) and Means (average scores) for children in School Quintiles 1, 2 & 3, and 4 & 5.
  • We have defined the Early Learning Development Standards that we hope children will achieve.

Note well that to use this approach we must be sure that your programme children fall within the age range of the ELOM age validation sample, that their Quintile can be identified, and that they do not suffer any disability or illness that would affect their performance on the ELOM. Of course, your assessors must be well trained, certified, and highly competent; otherwise the assessments will be invalid.

In the evaluation, you would compare the baseline ELOM performance of your intervention group with that of the age validation sample matched for age group and Quintile. You would do the same again at the end of your programme. The purpose is to establish the plausibility that any change over time in the early learning intervention children (ELIC) is due to programme effects.

In the design, the intervention Group is ELIC.

The Comparison Groups are:

  1. Baseline: A sample of the Quintile 1 Age Validation children matched for age and Quintile. Reference Group 1 (RG1).
  2. Endline: The Age Validation children matched for age and Quintile. Reference Group 2 (RG2).

GROUP                                   BASELINE         TREATMENT                   ENDLINE
ELIC: INTERVENTION GROUP                O1 (ELOM)        EARLY LEARNING PROGRAMME    O2 (ELOM)
ELOM AGE VALIDATION REFERENCE GROUPS    RG1 O1 (ELOM)    NIL                         RG2 O2 (ELOM)

Assume the groups do not differ at baseline (RG1 ELOM = ELIC ELOM at baseline). If they do differ, that is acceptable: note by how much, and measure the difference at both baseline and endline. If the treatment works, the ELIC perform better at endline than the age-matched Quintile 1 Reference Group (ELIC ELOM scores at endline > RG2 ELOM).

For example, assume that at baseline there is no difference between the groups on the Emergent Numeracy and Mathematics Domain (20% of both groups achieve the standard). If, at endline, 40% of your programme children achieve the standard while the Reference Group population (same age) remains at 20%, then your programme has made a substantial difference.
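The arithmetic behind this illustration can be sketched as follows, using the invented figures from the example above (the variable names are ours):

```python
# 20% of both groups achieve the standard at baseline; at endline the
# programme (ELIC) children reach 40% while the reference group stays at 20%.
elic_baseline, elic_endline = 0.20, 0.40   # proportion achieving the standard
ref_baseline, ref_endline = 0.20, 0.20

elic_change = elic_endline - elic_baseline
ref_change = ref_endline - ref_baseline

# Difference between the two changes: a rough indication of programme effect.
programme_effect = elic_change - ref_change
print(round(100 * programme_effect))   # → 20 (percentage points)
```

Because the reference group did not change, the whole 20-percentage-point gain is plausibly attributable to the programme; if the reference group had also improved, only the difference between the two changes would count.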

Another way would be to compare the Standard Scores on this domain with those of the Reference Group.
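A minimal sketch of that comparison, using invented Standard Scores purely for illustration (these numbers do not come from the ELOM norms):

```python
# Hypothetical endline Standard Scores on one domain; all values invented.
elic_scores = [48, 52, 55, 60, 47, 58]        # programme (ELIC) children
reference_scores = [45, 50, 49, 53, 46, 51]   # age-matched reference group

elic_mean = sum(elic_scores) / len(elic_scores)
reference_mean = sum(reference_scores) / len(reference_scores)

# A positive gap suggests the programme group outperformed the reference group;
# with real data you would also check whether the gap is statistically reliable.
print(round(elic_mean - reference_mean, 1))   # → 4.3
```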

Resources for the different domains

Note that some of these sites cover the period from infancy to the early school years, so be careful to select preschool age activities.

Gross and Fine Motor/Visual Motor Integration Development

Executive Functioning

Emergent Literacy and Language

Emergent Numeracy and Mathematics

Social Relations and Emotional Functioning

Using your data for programme improvement

When Children are Falling Behind or are in the At Risk Range

If ELOM data show that children in your early learning programme are in the ‘at risk’ group or lagging in a particular domain compared with the median of their quintile reference group, consider the following:

  • Does the learning programme include a focus on the domains where children are ‘at risk’ or behind their quintile group median?
  • Does the learning programme include activities that cover a full range of skills in this domain?
  • Is the curriculum being followed and learning activities provided for this domain?
  • Are there sufficient teaching and learning materials to support children’s learning in this domain?
  • Do the ECD practitioners understand the skills children need to develop in this domain?

Refer to the Towards Grade R age band of the National Curriculum Framework for Children from Birth to Four, and check that the early learning programme covers all the aims under the relevant early learning and development area (ELDA). If your organisation is using a well-designed early learning programme with age-appropriate activities and materials for developing children’s capacity in a particular domain, you need to be sure that the practitioners are implementing the programme as you designed it.

  • Is the programme followed regularly?
  • Are varied and interesting activities and materials available?
  • Are the activities age appropriate?
  • Do the practitioners understand how to support learning through discussing and questioning with children, joining in their play, offering activities that link to their interests?

All that may be needed is a focus on improving implementation of the existing learning programme.  But if you need to strengthen what is offered, we have identified some resources, organised by domain, which could help you with new activities.

What about children whose overall performance is lower than the average in the sample tested?

Some children’s overall performance may be much lower than the average in the sample (their scores are in the bottom 20% of the group).

First, ask the child’s teacher if this child is known to be slower than others or has a known problem that could have contributed to this result. If that does not assist you to understand the child’s performance, there could be several other reasons for such findings:

  1. The assessment environment was not satisfactory. This produces unreliable results (assessors would normally make notes on this). There are two main possibilities:
    1. the assessment environment was not as it should be – there were distractions. Make sure this was not the case.
    2. the child was ill, hungry, or did not engage well in the testing situation for some reason.

In either case the child or children should be reassessed.

  2. A health condition or disability (speech, vision, hearing, intellectual, or physical limitation) affects performance on the ELOM as a whole, or in particular domains.
  3. A mental health problem such as high levels of anxiety, depression, or an attention deficit disorder may compromise the child’s ability to perform optimally.
  4. Malnourished children are less likely to perform well on the ELOM. They may be chronically malnourished (growth stunted for their age) or acutely malnourished (underweight).

If the likely cause is not point 1 above, then points 2 to 4 should be explored through referral to an appropriate specialist.