
Evaluation Consultant

Multiple locations

  • Organization: IRC - International Rescue Committee
  • Location: Multiple locations
  • Grade: Consultancy - Consultant - Contractors Agreement
  • Occupational Groups:
    • Monitoring and Evaluation
    • Evaluation & Learning
  • Closing Date: Closed

Requisition ID: req7584

Job Title: Evaluation Consultant

Sector: Evaluation & Learning

Employment Category: Consultant

Employment Type: Full-Time

Open to Expatriates: No

Location: IRC Global

Job Description

The International Rescue Committee (IRC) helps people whose lives and livelihoods are shattered by conflict and disaster to survive, recover, and gain control of their future. Founded in 1933 at the request of Albert Einstein, the IRC works with people forced to flee from war, conflict and disaster and the host communities that support them, as well as with those who remain within their homes and communities. At work today in over 40 countries and 22 U.S. cities, we restore safety, dignity and hope to millions who are uprooted and struggling to endure.

However, the needs of our clients far outweigh the resources available to meet them: in 2017, only 50 percent of Humanitarian Response Plan appeals were met, leaving a gap of $11.6 billion. Implementing agencies cannot meaningfully determine which program activities deliver the most progress per dollar spent, because existing methods for calculating cost-efficiency are inconsistent, time-consuming, and error-prone. Most of the data currently available cannot be meaningfully compared to inform action, creating missed opportunities for learning and program improvement.

The Systematic Cost Analysis (SCAN) Tool is a software application first developed in-house by the IRC in 2016, building on existing accounting data to make rigorous cost-efficiency analysis fast and easy. In 2018, the IRC joined with Action Against Hunger, CARE, Mercy Corps, and Save the Children to build SCAN 2.0, an improved version of the software compatible with any NGO's finance system. With funding from the Dutch Relief Alliance (DRA), the Consortium hired a software developer and built SCAN 2.0, and we are now piloting it within IRC, Mercy Corps, and Save the Children country offices. This TOR is for a consultant to conduct an evaluation of the DRA-funded project.

 

 Evaluation Objectives & Questions

The purpose of this evaluation is not only to assess the quality of project management, but also to document how different NGOs are approaching adoption of the SCAN tool and to assess the tool's usability for field staff. We hope the evaluation will inform planning for a broader rollout of SCAN, both by improving our understanding of where SCAN provides more or less value to an NGO and by documenting the resources needed to install and configure the system. Key questions to address are:

 

Questions for HQ Stakeholders

  •  What conditions are necessary, in terms of commitments to data security or transparency, to make implementing agencies comfortable signing on to a project of this kind? What factors make implementing agencies more/less willing to share SCAN results publicly in sector forums (e.g. cluster coordination groups, technical working groups)?
  • Where do stakeholders think that SCAN could provide the greatest value to their organization in the future? In what situations, types of programming, or sectors should SCAN be emphasized or de-emphasized?

 Questions for Field-Based Stakeholders:

  • How much training did stakeholders receive on the SCAN tool? Did they feel this was adequate? How long did it take them to complete an analysis during the field pilot in which they participated?
  • What is the level of satisfaction with SCAN among users, in terms of (1) the usability of the tool itself, and (2) the utility of the results for informing decisions? How does this user satisfaction change based on the amount of training and support people receive? How does this user satisfaction differ across different agencies, countries, or sectoral teams?
  • What incentives prompt country programs to want to use SCAN (e.g. donor reporting requirements, donor requirements around adaptive programming, an HQ-based push, or clear learning/planning needs)?
  • Did SCAN results enable staff to plan for changes to their programs that would increase reach or impact per dollar spent? (Given the short time between pilots and this evaluation, finalized changes to programs are unlikely to have occurred yet.) If such changes were not possible, what were the perceived barriers?
  • Where do stakeholders think that SCAN could provide the greatest value to their organization in the future? In what situations, types of programming, or sectors should SCAN be emphasized or de-emphasized?

 Evaluation Process

 At a minimum, this evaluation should address the indicators and questions listed above (which are consistent with the project log frame). Those quantitative indicators include Net Promoter Score for stakeholders who directly used the SCAN tool, and the time necessary to complete an analysis in SCAN. Where the evaluator sees opportunities to address the objectives above by including additional indicators and questions, we would welcome such proposals. Consistent with those objectives, the evaluation should make use of:

  •  Document review of all relevant Consortium materials, potentially including SCAN analyses or dashboards from piloting NGOs. Deriving lessons for programming from SCAN results is not considered part of this evaluation, as this will be undertaken by the Consortium members through other project activities.
  • Key informant interviews among both field and HQ staff involved in the SCAN project at all participating NGOs. Given the number of stakeholders, we anticipate that only a targeted sample will be interviewed, drawing from the master list of SCAN stakeholders maintained by the IRC.
  • A survey on user satisfaction with the SCAN tool (i.e. "net promoter scores") and the time necessary to conduct analyses. This sample should cover all individuals who used the SCAN tool during field pilots, with names and email addresses supplied by the Consortium. We anticipate that this survey will be conducted remotely.
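For reference, the Net Promoter Score indicator named above is conventionally computed as the percentage of promoters minus the percentage of detractors on a 0–10 "how likely are you to recommend" scale (promoters score 9–10, detractors 0–6). A minimal sketch of that calculation, assuming the standard scale; the sample data is illustrative only:

```python
def net_promoter_score(ratings):
    """Return the NPS (-100 to 100) for a list of 0-10 ratings.

    Promoters score 9-10, passives 7-8, detractors 0-6; the score
    is the promoter share minus the detractor share, in percent.
    """
    if not ratings:
        raise ValueError("no ratings supplied")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Hypothetical survey of 10 users: 5 promoters, 3 passives, 2 detractors.
sample = [10, 9, 9, 10, 9, 8, 7, 8, 5, 6]
print(net_promoter_score(sample))  # 30.0
```

Reporting the score alongside the underlying distribution (as the survey deliverable would) avoids the information loss of the single headline number.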

 Deliverables

 The consultant shall provide intermediate deliverables for review, including:

  1.  Draft survey instrument [First milestone]
  2. Key informant interview guide [First milestone]
  3. Cleaned survey data [Second milestone]
  4. Coded qualitative data [Second milestone]
  5. Final report [Third milestone]

 At the close of the project, the consultant shall provide a final report with several sections:

  1. Process mapping for SCAN installation and configuration, including recommendations to improve the process for other NGOs in the future.
  2. Process mapping of the time, training, and resources necessary to conduct SCAN analyses.
  3. User satisfaction assessment for SCAN 2.0, including net promoter scores and qualitative feedback. Analysis of patterns of user perception based on agency, location, or other respondent characteristics.
  4. Summary of perceived benefits and barriers to adoption of the SCAN tool, and sharing of results. Recommendations for areas in which to strategically emphasize or de-emphasize SCAN in the future.
  5. Summary of opportunities and challenges to applying SCAN results to improve field programs.

 Timeline & Payment Schedule

 Review activities will ideally be conducted in February and March 2020. Field pilots for SCAN 2.0 began in November 2019 and will continue through February 2020, so this time frame should allow for interviews and surveys of field users within 8-10 weeks of their encountering the SCAN tool. The final report must be submitted by the end of April 2020.

 The consultant will receive payment in the following disbursements:

  1.  Submission of data collection and analysis plan, February 7 (25%)
  2. Submission of cleaned survey data and interview responses, March 27 (25%)
  3. Submission of report, April 24 (50%)

Qualifications

Candidate Qualifications

Education: Bachelor’s degree (international development or related preferred)

Work Experience:

  • Previous experience conducting project evaluations
  • Familiarity with value-for-money analysis concepts and methods

Demonstrated Skills and Competencies:

  • Excellent communication and writing skills
  • Excellent problem solving skills
  • Ability to work independently

Language Skills: Proficiency in English and French

Application Process

Interested applicants are requested to submit an application no later than January 10, 2020. The following materials should be included:

  1. Applicant’s CV
  2. Cover letter
  3. Proposed workplan
  4. Proposed budget


