Monitoring and Evaluation

Services:

  • Evidence-based evaluations
  • Results-based evaluations
  • Monitoring & Evaluation (M&E) Plans
  • Monitoring systems, EMIS, HMIS, databases
  • MEAL and MEL – Monitoring, Evaluation, Accountability and Learning; Monitoring, Evaluation and Learning
  • Performance evaluations
  • Impact evaluations
  • Situational analyses
  • Comparative analyses
  • Client satisfaction surveys
  • Longitudinal studies
  • Case studies
  • Qualitative, quantitative and mixed methods
  • Participatory reviews

University qualifications in information & communications technology (ICT), computing, mathematics, and engineering (STEM subjects), culminating in a Master of Science Communication.

Qualitative and quantitative (and mixed methods) baselines, mid-term performance evaluations, final evaluations, impact evaluations, longitudinal studies, gender studies and evaluations, financial assessments (rates of return, cost-effectiveness, supply and demand, and financial modelling), ex-post evaluations, household studies, an anthropometric health and nutrition study, and data quality assessments (DQAs).

Data collection methodologies, such as observation, focus group discussions, key informant interviews, in-depth interviews, survey questionnaires, checklists, household surveys, transect walks, case studies, desk reviews, literature reviews, and consolidation/synthesis of data.

Data analysis techniques, such as time series, gender analysis, content and context analysis, validation and triangulation, situational analysis, comparisons, control versus treatment groups, counterfactuals, cross-correlations, contribution and attribution analysis, trends, gap analysis, SWAps (sector-wide approaches), and SWOTs (strengths, weaknesses, opportunities, and threats).
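
As a small illustration of the control-versus-treatment comparisons listed above, the sketch below uses Python with entirely hypothetical data and column names to compare mean outcomes between the two groups with a two-sample t-test; it is an example of the technique, not output from any actual project.

    # Illustrative sketch only: hypothetical data, not drawn from any project dataset.
    import pandas as pd
    from scipy import stats

    # Survey records with a group flag and an outcome score (both hypothetical).
    df = pd.DataFrame({
        "group": ["treatment", "treatment", "control", "control", "treatment", "control"],
        "score": [72, 68, 61, 58, 75, 63],
    })

    treated = df.loc[df["group"] == "treatment", "score"]
    control = df.loc[df["group"] == "control", "score"]

    # Difference in means with a two-sample t-test (unequal variances assumed).
    diff = treated.mean() - control.mean()
    t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)
    print(f"Difference in means: {diff:.1f}, t = {t_stat:.2f}, p = {p_value:.3f}")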

Education projects:

Collected and assembled baseline data for monitoring indicators and conducted extensive surveys. As a sample, this included: enrolments, retention rates, dropout rates, completion rates, gender statistics, age, performance, teacher statistics (age, gender, qualification, professional development training, salary, and tenure status), training data, infrastructure data (number of schools, schools refurbished or renovated, and physical status of latrines, safe water, etc.), textbook production and distribution, school management committees, parent-teacher associations, learning materials, libraries and resource centres, literacy programs, adult/community programs, grants, scholarships, and GPS distance marking (distance of schools to facilities and distance of students/teachers to schools).
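
For illustration only, a minimal sketch (Python, with hypothetical cohort figures and simplified indicator definitions) of how retention, completion, and dropout rates can be derived from enrolment data; the columns and values are assumptions, not project data.

    # Illustrative sketch with hypothetical cohort figures; not actual project data.
    import pandas as pd

    cohort = pd.DataFrame({
        "grade":     [1, 2, 3, 4, 5, 6],
        "enrolled":  [1000, 920, 860, 810, 770, 700],  # pupils enrolled at start of year
        "completed": [940, 880, 830, 790, 745, 680],   # pupils completing the year
    })

    # Retention rate: share of the previous grade's enrolment still enrolled.
    cohort["retention_rate"] = cohort["enrolled"] / cohort["enrolled"].shift(1)
    # Completion rate: share of enrolled pupils who complete the year.
    cohort["completion_rate"] = cohort["completed"] / cohort["enrolled"]
    # Dropout rate (simplified): share of enrolled pupils who do not complete.
    cohort["dropout_rate"] = 1 - cohort["completion_rate"]

    print(cohort.round(2))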

Example: 2006 South Sudan USAID Equip2 Technical Advisors Project, as Chief of Party (Project Director) and direct adviser to the Minister of Education to support the rebuilding of the ministry in conjunction with the Government of Sudan and the Government of South Sudan. The first step resulted in the “Annual Education Census and EMIS Situational Analysis” for the Ministry of Education. This involved a school and household census.

Example: The 2003 Iraq RISE baseline study included a door-to-door survey of over 4,500 out-of-school, school-aged children, which was analysed to determine and design project interventions, locations, scope, resources, targeted primary grades, and targeted early-grade literacy, and to monitor progress against project targets and indicators.

Mentoring and Training:

Example: Consultancy in M&E and Research Training and Capacity Building from August 2014 to July 2015 to build the skills of the Independent Monitoring Unit (IMU) for the USAID-funded Pakistan Expanded Regional Stabilization Initiative (PERSI) to counter violent extremism. The consultancy included four trips to Pakistan for training workshops, as well as ongoing remote advice, to develop a results-based logframe and M&E Plan, review a survey questionnaire database, and advise on data collection methodologies and instrument design, sampling techniques, analysis techniques, data quality assurance, and reporting formats. Reviewed the results of the 2015 comparison survey with non-participants as a control group, and documented three Pakistan Police Survey reports (two quantitative and one qualitative), which were consolidated into one end-line report. As the IMU was developing and changing rapidly, with increasing workloads, training also covered leadership, time management, task management, and communications, as well as the restructuring of work teams and their duties.

 

Data Analysis:

Expertise in qualitative and quantitative evaluation data analysis programs, such as Access, Excel, Structured Query Language (SQL), the Statistical Package for the Social Sciences (SPSS) – also marketed as PASW (Predictive Analytics Software) – Statistica, Cognos Database, NVivo, and MAXQDA for research and evaluations.

These are used quantitatively for frequency tables, graphs, and charts, as well as for all forms of data mining, such as cross-tabulations, correlations, statistical significance levels, ANOVA, linear regression, and factor analysis.
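
A minimal sketch of the kinds of quantitative outputs described above, using Python (pandas and scipy) in place of the packages listed; the dataset, variable names, and values are hypothetical.

    # Illustrative sketch: hypothetical data, with pandas/scipy standing in
    # for the statistical packages listed above.
    import pandas as pd
    from scipy import stats

    df = pd.DataFrame({
        "region": ["North", "North", "South", "South", "North", "South", "North", "South"],
        "gender": ["F", "M", "F", "M", "F", "M", "M", "F"],
        "score":  [64, 58, 71, 66, 62, 69, 55, 73],
    })

    # Frequency table and cross-tabulation.
    print(df["region"].value_counts())
    print(pd.crosstab(df["region"], df["gender"]))

    # One-way ANOVA: does the mean score differ by region?
    groups = [g["score"].values for _, g in df.groupby("region")]
    f_stat, p_value = stats.f_oneway(*groups)
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")

    # Simple linear regression of score on a dummy-coded gender variable.
    x = (df["gender"] == "F").astype(int)
    slope, intercept, r, p, se = stats.linregress(x, df["score"])
    print(f"Regression: slope = {slope:.2f}, p = {p:.3f}")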

For qualitative data – unstructured (usually text) data – a range of frequency, coding, categorisation, and thematic analysis techniques is applied to graphics, text, literature reviews, open-ended surveys, in-depth interviews (IDIs), focus group discussions (FGDs), roundtables, and bibliographic presentations. These tools not only help to analyse the data, but also to present it in a range of ways for different audiences.
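
By way of a brief illustration of coded-theme frequency counts (the sources, codes, and layout below are hypothetical, loosely mimicking a simple export from a qualitative analysis tool):

    # Illustrative sketch: hypothetical codes applied to interview and FGD excerpts.
    import pandas as pd

    coded = pd.DataFrame({
        "source": ["IDI-01", "IDI-01", "FGD-02", "FGD-02", "IDI-03", "FGD-02"],
        "code":   ["access", "cost", "access", "quality", "cost", "access"],
    })

    # Frequency of each code overall, and by source, as a first step towards themes.
    print(coded["code"].value_counts())
    print(pd.crosstab(coded["source"], coded["code"]))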

 

Evaluability Assessments:

An Evaluability Assessment (EA) is broadly accepted as a systematic process for describing the structure of a project or program, analysing the plausibility and feasibility of its objectives, and clarifying whether it is suitable for an in-depth evaluation and, if so, how. In doing so, it also establishes the validity of the project or program logic, its acceptability to key stakeholders, and its implementation.

The process involves: (1) clarifying the program logic or theory of change (how it is supposed to work), (2) exploring how the program works in reality (what is actually happening), and (3) considering priorities (how it can be improved, and how, or whether, an evaluation would support this).

The key EA questions for an impact evaluation are: (1) Is it plausible to expect impacts? Do stakeholders share a clear understanding of how the program operates, and are there logical links from program activities to intended impacts? (2) How can impacts be measured, given the resources available for the impact evaluation and the program's implementation strategy? (3) How can the impact evaluation be most useful? Are there specific needs that the evaluation will satisfy, and can it be designed to meet those needs?

Conducted two evaluability assessments: one for MCC Morocco in 2015, to determine the methodology and approach for an ex-post evaluation, and one for Danida in Afghanistan in 2011, to determine the extent to which an impact assessment could be undertaken in a conflict situation. In Afghanistan the predominant interventions were in Helmand Province, which necessitated a conflict-sensitive evaluation methodology.

The objective of the EA consultancies was to determine how each project could be evaluated in a reliable and credible manner. The two core purposes were: (1) to assess the availability of relevant information, and (2) to determine the most useful and practical focus of an evaluation, given the nature of the project and its context. A supplementary purpose was to propose options for an evaluation design, including evaluation questions, methods, resources, and expertise.

In fulfilling this purpose, the EA also provides evidence and opportunities that can be harnessed to support conflict-sensitive programming. The EA provides: (1) evidence to test and inform the program's theory of change, (2) an opportunity for the donor to learn from the analysis undertaken, and (3) interim advice, prior to the proposed evaluation, to inform policy and program development.