
How the TEFT Was Developed

Overview of the methods used to guide work on the TEFT project.

Developing the TEFT

The Framework and accompanying materials are the result of the following activities:

  • Review of the published literature

The TEFT team searched the literature published between 2000 and 2011 in computerized databases (including PubMed, MANTIS, CINAHL, and Scopus) to identify articles in peer-reviewed journals reporting training outcome evaluations. Key search terms included training, in-service, health systems, and skills, combined with the words evaluation, impact, assessment, improvement, strengthening, outcomes, health outcomes, health worker, adoption, integration, and learning theory.

Three reviewers collected and read the retrieved articles. Additional papers of interest were identified in the reference lists of these papers and subsequently retrieved. After excluding papers for non-relevance, and once no new categories emerged from the review ("theoretical saturation"), a total of 70 articles remained. These articles were categorized by the types of outcomes reported, at several levels:

  • At the individual level: (1) Health care worker knowledge, attitude, or skill; (2) health care worker performance; or (3) patient health.
  • At the facility/organization level: (4) Organizational performance improvements and/or system improvements, or organizational-level health improvements.
  • At the health systems/population level: (5) Population-level performance improvements and/or system improvements, and population-level health improvements.

The articles were also reviewed for the methods and designs used to evaluate outcomes at the various levels; these are referenced in the narrative to provide guidance on selecting and implementing appropriate methods and designs.

  • Key informant interviews

Key informant interviews were conducted between June 2011 and December 2011. Snowball sampling yielded a total of 15 key informants with direct experience of programs engaged in training evaluation. Interviewees included evaluators, technical advisors, United States government policy-makers, and program managers. Interviews were semi-structured and addressed the following topics:

  • Evolving needs for in-service training evaluation
  • Needs for technical assistance to programs around in-service training evaluation
  • Best approaches to obtaining outcome evaluation data
  • Barriers and facilitators to obtaining outcome-level data
  • Extent to which health outcomes can be attributed to training interventions
  • Practical uses for outcome evaluation findings
  • Existing resources for supporting in-service training outcome evaluation

Following completion of the interviews, transcripts and notes were coded by two independent reviewers. Emerging themes were compared between coders, and an iterative process of reading, coding, and revising resulted in a set of themes with representative quotes from the interviews.

  • Vetting of draft versions of the Framework with stakeholders

Over the course of a nine-month development process, draft versions of the Training Evaluation Framework were shared with stakeholders, including staff from international training organizations and the United States government. Successive versions were revised, improved, and shared again in an iterative process. The accompanying tools were piloted by running them through evaluation planning scenarios with training program staff and were revised in the same iterative manner.


 
