NJJEC Glossary

Baseline Data
Initial information on a program or program components collected prior to the receipt of services or participation in program activities. Baseline data are used later as a point of comparison for measuring changes in a program.

Control Group
A group of individuals whose characteristics are similar to those of the program participants but who do not receive the program services, products, or activities being evaluated. Participants are randomly assigned to either the experimental group (those receiving program services) or the control group. A control group is used to assess the effect of program activities on participants who are receiving the services, products, or activities being evaluated. The same information is collected for people in the control group and those in the experimental group.

Evaluability Assessment
A systematic process used to determine the feasibility of a program evaluation. It also helps determine whether an evaluation would provide useful information for improving the management of a program and its overall performance.

Evaluation
Evaluation has several distinguishing characteristics relating to focus, methodology, and function. Evaluation (1) assesses the effectiveness of an ongoing program in achieving its objectives, (2) relies on the standards of project design to distinguish a program's effects from those of other forces, and (3) aims at program improvement through a modification of current operations.

Evidence-based programming
Programs or practices whose success has been demonstrated through credible evaluation; sometimes called research-based.

Experimental Design
A research design in which the researcher has control over the selection of participants in the study, and these participants are randomly assigned to treatment and control groups.

External Validity
The extent to which a finding applies (or can be generalized) to persons, objects, settings, or times other than those that were the subject of study.

Fidelity
How well a program is being implemented in an organization or community compared to the original program design.

Goal
A desired state of affairs that outlines the ultimate purpose of a program. This is the end toward which program efforts are directed.

Impact Evaluation
A type of outcome evaluation that focuses on the broad, long-term impacts or results of program activities.

Implementation
Development of a program; the process of putting all program functions and activities into place.

Indicator
A measure that consists of ordered categories arranged in ascending or descending order of desirability.

Informed Consent
A written agreement by the program participants to voluntarily participate in an evaluation or study after having been advised of the purpose of the study, the type of information being collected, and how the information will be used.

Innovative Programs/Practices
Programs and practices that are derived from evidence-based initiatives but have not yet themselves been evaluated.

Internal Validity
The degree to which observed changes can be attributed to your program or intervention (i.e., the cause) and not to other possible causes (sometimes described as "alternative explanations" for the outcome).

Interrater Reliability
The extent to which two different researchers obtain the same result when using the same instrument to measure a concept.
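For illustration, a minimal Python sketch of one simple index, percent agreement, computed on hypothetical ratings (more robust indices, such as Cohen's kappa, also adjust for agreement expected by chance):

  # Two hypothetical raters score the same eight cases with the same instrument.
  rater_a = ["yes", "no", "yes", "yes", "no", "yes", "no", "no"]
  rater_b = ["yes", "no", "yes", "no", "no", "yes", "no", "yes"]

  # Percent agreement: the share of cases on which the two raters give the same score.
  agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)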

Logic Model
Describes how a program should work, presents the planned activities for the program, and focuses on anticipated outcomes. While logic models present a theory about the expected program outcome, they do not demonstrate whether the program caused the observed outcome. Diagrams or pictures that illustrate the logical relationship among key program elements through a sequence of "if-then" statements are often used when presenting logic models.

Measurement Error
The difference between a measured value and a true value.
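In classical measurement terms this is often written as $X_{\text{observed}} = X_{\text{true}} + \varepsilon$, where $\varepsilon$ denotes the measurement error.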

Meta-analysis
The systematic analysis of a set of existing evaluations of similar programs in order to draw general conclusions, develop support for hypotheses, and/or produce an estimate of overall program effectiveness.
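In the simplest (fixed-effect) case, for example, the overall effect is estimated as a precision-weighted average of the individual study effects, $\hat{\theta} = \frac{\sum_i w_i \hat{\theta}_i}{\sum_i w_i}$, with weights $w_i = 1/v_i$, the inverse of each study's variance.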

Model Programs/Practices
Programs or practices with clear evidence of effectiveness from multiple, rigorous evaluations; sometimes called exemplary programs/practices.

Non-experimental Data
Data not produced by an experiment or quasi-experiment.

Objectives
Specific results or effects of a program's activities that must be achieved in pursuing the program's ultimate goals. For example, to reduce school truancy.

Operationalize
To define a concept in a way that can be measured. In evaluation research, to translate program inputs, outputs, objectives, and goals into specific measurable variables.

Outcome Evaluation
An evaluation used to identify the results of a program's/initiative's effort. It seeks to answer the question, "What difference did the program make?" It provides information about effects of a program after a specified period of operation. This type of evaluation typically provides knowledge about: (1) the extent to which the problems and needs that gave rise to the program still exist, (2) ways to ameliorate adverse impacts and enhance desirable impacts, and (3) program design adjustments that may be indicated for the future. It uses methods to determine whether achievements can be attributed to the program/initiative or other factors. Outcome evaluation is sometimes referred to as impact evaluation.

Outcome(s)
The results of program operations or activities. May include intended or unintended consequences. There are three types:

  • Initial - immediate results of the program;
  • Intermediate - results following initial outcomes; and
  • Long Term - ultimate impact of the program; relates to achievement of the goal.

Outcome Measures
Data used to measure achievement of objectives and goal(s).

Performance Measurement
Involves ongoing data collection to determine whether a program is implementing its activities and achieving its objectives. It measures inputs, outputs, and outcomes over time. In general, pre-post comparisons are used to assess change.
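As a rough sketch of such a pre-post comparison in Python (the measures and values below are hypothetical):

  # Hypothetical output and outcome measures collected at two points in time.
  baseline = {"youth_served": 40, "truancy_rate": 0.22}
  year_one = {"youth_served": 55, "truancy_rate": 0.17}

  # A simple pre-post comparison: the change in each measure over time.
  change = {measure: year_one[measure] - baseline[measure] for measure in baseline}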

Performance Measures
Ways to objectively measure the degree of success a program has had in achieving its stated objectives, goals, and planned program activities.

Problem Statement
Description of the problem, its causes, and potential approaches or solutions; conveys the importance of a program to address the problem.

Process Evaluation
Evaluation that identifies the procedures undertaken and the decisions made in developing the program; describes how the program operates, the services it delivers, and the functions it carries out; documents development and operation for assessment of the reasons for successful or unsuccessful performance.

Process Measures
Data used to demonstrate the implementation of activities; includes products of activities and indicators of services provided.

Program Activities
Services or functions carried out by a program.

Program Model
A flowchart or model that identifies the objectives and goals of a program, as well as their relationship to the program activities intended to achieve these outcomes.

Program Monitoring
Continuous collection of information about the activities and operation of a program (inputs and outputs).

Promising Programs/Practices
A program or practice with some evidence of success (i.e., evidence-based), but for which some questions remain; further rigorous evaluation is needed for it to be considered an exemplary or model program/practice.

Quasi-Experimental Design
A research design with some, but not all, of the characteristics of an experimental design. While comparison groups are available and maximum controls are used to minimize threats to validity, random assignment to treatment and comparison groups is typically not possible or practical.

Random Assignment
Assignment of individuals in the pool of all potential participants to either the experimental (treatment) group or the control group in such a manner that their assignment to a group is determined entirely by chance.
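A minimal Python sketch (the pool of 100 participant IDs is hypothetical):

  import random

  # Pool of all potential participants, identified here by hypothetical IDs.
  participants = list(range(1, 101))

  # Shuffle so that group membership is determined entirely by chance,
  # then split the pool into a treatment group and a control group.
  random.shuffle(participants)
  treatment_group = participants[:50]
  control_group = participants[50:]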

Randomized Controlled Trial
An evaluation in which the impact of a program is determined by randomly assigning individuals to an intervention group or a control group and comparing their outcomes.

Reliability
The extent to which a measurement instrument yields consistent, stable, and uniform results over repeated observations or measurements under the same conditions.

Replication
Duplication of an experiment or program.

Research Design
A plan of what data to gather, from whom, how and when to collect the data, and how to analyze the data obtained.

Resources
Means available to achieve objectives (e.g., money, staff).

Sample
A subset of the population. Elements are selected intentionally as a representation of the population being studied.
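One common way to draw such a sample is simple random sampling; a minimal Python sketch with a hypothetical sampling frame:

  import random

  # Hypothetical sampling frame of 500 members of the population.
  population = list(range(1, 501))

  # Draw a simple random sample of 50 elements to represent the population.
  sample = random.sample(population, k=50)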

Secondary Data
Data that have been collected for another purpose but may be reanalyzed in a subsequent study.

Statistical Significance
The degree to which a value is greater or smaller than would be expected by chance. Typically, a relationship is considered statistically significant when the probability of obtaining that result by chance is less than 5% if there were, in fact, no relationship in the population.
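As a minimal Python sketch using a two-sample t-test on hypothetical outcome scores (SciPy is assumed to be available):

  from scipy import stats

  # Hypothetical outcome scores for a treatment group and a control group.
  treatment = [12, 15, 14, 10, 13, 16, 11, 14]
  control = [10, 9, 11, 12, 8, 10, 9, 11]

  # The p-value is the probability of seeing a difference at least this large
  # by chance if the two groups did not actually differ.
  t_stat, p_value = stats.ttest_ind(treatment, control)
  significant = p_value < 0.05  # the conventional 5% threshold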

Sustainability
The capacity of an evidence-based program to be maintained or to endure over time.

Systematic Review
A synthesis of the research evidence on a particular topic, such as drug court effectiveness, obtained through an exhaustive literature search for all relevant studies using scientific strategies to minimize error associated with appraising the design and results of studies. A systematic review is more thorough than a literature review, but does not use the statistical techniques of a meta-analysis.

Validity
The extent to which a measurement instrument or test accurately measures what it is supposed to measure.