M&E Definitions and Concepts
'Monitoring' and 'evaluation' are discrete, symbiotic and complementary processes. Concepts relating to HIV and AIDS M&E systems are defined hereunder:
A-E Definitions and Concepts Index
Accountability: Obligation to demonstrate that work has been conducted in compliance with agreed rules and standards, or to report fairly and accurately on performance results vis-a-vis mandated roles and/or plans. This may require a careful, even legally defensible, demonstration that the work is consistent with the contract terms. Note: accountability in development may refer to the obligations of partners to act according to clearly defined responsibilities, roles and performance expectations with respect to prudent use of resources. For evaluation it connotes the responsibility to provide accurate, fair, and credible monitoring reports and performance assessments. For public sector managers and policy makers, accountability is to taxpayers and citizens.
Baseline: The status of services and outcome-related measures such as knowledge, attitudes, norms, behaviours, and conditions before an intervention.
Capacity: The knowledge, organization, and resources needed to perform a function.
Case study: A methodological approach to describing a situation, individual, or the like that typically incorporates data-gathering activities such as interviews, observations, and questionnaires at selected sites or programmes. The findings are then used to report to stakeholders, make recommendations for programme improvement, and share lessons with other countries.
Coverage: The extent to which a programme reaches its intended target population, institution, or geographical area.
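As a minimal illustration of the arithmetic behind coverage, the hypothetical Python sketch below computes the share of an intended target population reached by a programme; the function name and all figures are invented for illustration only.

```python
def coverage(reached: int, target_population: int) -> float:
    """Return coverage as the proportion of the intended target population reached."""
    if target_population <= 0:
        raise ValueError("target_population must be positive")
    return reached / target_population

# Hypothetical example: a testing programme that reached 12,500 people
# out of an intended target population of 50,000.
print(f"Coverage: {coverage(12_500, 50_000):.0%}")  # Coverage: 25%
```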
Data Sources: Data sources are tangible sets of information, usually in the form of reports, survey results, monitoring forms from the field, or official government data sets. Data sources provide the values of the indicators at a specific point in time.
Data: Raw, un-summarized and unanalyzed facts. Data are of little use to decision makers because they contain far too much detail. Before data can be used, they have to be converted into information.
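As a hedged illustration of converting data into information, the sketch below aggregates hypothetical client-level records into a single summary figure a decision maker can use; all records and field names are invented.

```python
# Hypothetical raw data: one record per client visit (far too detailed for decision makers).
raw_records = [
    {"site": "Clinic A", "service": "HTC", "tested": True},
    {"site": "Clinic A", "service": "HTC", "tested": False},
    {"site": "Clinic B", "service": "HTC", "tested": True},
]

# Converting data into information: summarise the records into a figure that
# answers a management question ("how many clients were tested this period?").
clients_tested = sum(1 for record in raw_records if record["tested"])
print(f"Clients tested this period: {clients_tested} of {len(raw_records)} seen")
```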
Effectiveness: The extent to which the development intervention's objectives were achieved or are expected to be achieved, taking into account their relative importance. Note: also used as an aggregate measure for judgment about the merit or worth of an activity, i.e., the extent to which an intervention has attained, or is expected to attain, its major relevant objectives efficiently in a sustainable fashion with a positive institutional development impact. Also a measure of the extent to which a programme achieves its planned results (outputs, outcomes, and goals).
Evaluation: The systematic and objective assessment of an on-going or completed project, programme or policy, its design, implementation, and results. The aim is to determine the relevance and fulfilment of objectives, development efficiency, effectiveness, impact and sustainability. An evaluation should provide information that is credible and useful, enabling the incorporation of lessons learnt into the decision-making process of both recipients and donors. Evaluation also refers to the process of determining the worth or significance of an activity, policy, or programme: an assessment, as systematic and objective as possible, of a planned, on-going, or completed development intervention. Note: evaluation in some instances involves the definition of appropriate standards, the examination of performance against those standards, an assessment of actual and expected results, and the identification of relevant lessons. Related term: review.
F-K Definitions and Concepts Index
Facility survey: A site inventory of all elements required to deliver services, such as basic infrastructure, drugs, equipment, test kits, registers, and staff trained in the delivery of the service; the units of observation are facilities of various types and levels in the health system and normally include both public and private facilities in the sample frame of sites; may also be referred to as a service provision assessment.
Impact evaluation: Looks at the rise and fall of disease incidence and prevalence as a function of HIV/AIDS programmes; the effects (impact) on entire populations seldom can be attributed to a single programme or even several programmes; therefore, evaluations of impact on populations usually entail a rigorous evaluation design that includes the combined effects of a number of programmes on at-risk populations.
Impact monitoring: In the field of public health is usually referred to as disease surveillance (see above) and is concerned with the monitoring of disease prevalence or incidence; with this type of monitoring, data are collected at the jurisdictional, regional, and national levels.
Impact: The longer-range, cumulative effect of programmes over time, such as change in HIV infection, morbidity, and mortality; impacts are rarely, if ever, attributable to a single programme but a programme may, with other programmes, contribute to impacts on a population.
Indicator: Quantitative or qualitative factor or variable that provides a simple and reliable means to measure achievement, to reflect the changes connected to an intervention, or to help assess the performance of a development actor. In order for indicators to be useful for monitoring and evaluating programme results, it is important to identify indicators that are direct, objective, practical, and adequate, and to update them regularly.
Information Products: An information product is a standard report or document that NERCHA produces at regular intervals after receiving and analyzing data sources. Reporting usually takes place through an information product.
Information: Data that has been processed into meaningful form.
Input and output monitoring: Involves the basic tracking of information about programme inputs or resources that go into a programme and about outputs of the programme activities; data sources for monitoring inputs and outputs usually exist naturally in programme documentation, such as activity reports and logs, and client records, which offer details about the time, place, and amount of services delivered, as well as the types of clients receiving services.
Input: A resource used in a programme; includes monetary and personnel resources that come from a variety of sources, as well as curricula and materials.
L-P Definitions and Concepts Index
Learning: Reflecting on experience to identify how a situation or future actions could be improved and then using this knowledge to make actual improvements. Learning can be individual or group-based, and may involve applying lessons learnt to future actions, which then provide the basis for another cycle of learning.
M&E Work Plan: An M&E work plan describes the priority M&E activities for the year with defined responsibilities for implementation, costs for each activity, identified funding, and a clear timeline for delivery of outputs. This work plan enables the NAC and the national M&E TWG to ensure that financial and human resources are mobilized and allows for monitoring progress towards implementation of one national HIV M&E system.
Management information system (MIS): A data system, usually computerized, that routinely collects and reports information about the delivery of services, costs, demographic and health information, and results status.
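To make the idea of routinely collected service-delivery records concrete, here is a minimal, hypothetical sketch of the kind of record an MIS might hold; the fields and values are assumptions for illustration, not a description of any actual system.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ServiceRecord:
    """One routinely collected service-delivery record in a hypothetical MIS."""
    facility: str        # where the service was delivered
    service: str         # e.g. counselling, testing
    clients_served: int  # number of clients reached
    cost: float          # monetary cost of delivery
    report_date: date    # end of the reporting period

# A hypothetical monthly entry reported by one facility.
record = ServiceRecord("Clinic A", "HIV counselling", clients_served=140,
                       cost=2_300.0, report_date=date(2012, 6, 30))
print(record)
```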
Methodology: A description of how something will be done; a set of analytical methods, procedures and techniques used to collect and analyze information appropriate for evaluation of the particular programme, component, or activity.
Monitoring: The routine tracking and reporting of priority information about a programme and its intended outputs and outcomes. Monitoring asks: what have we achieved?
Monitoring and evaluation (M&E) plan: A comprehensive planning document for all M&E activities. It documents the key M&E questions to be addressed; what indicators are collected, how, how often, from where and why; baselines, targets and assumptions; how the indicators are going to be analyzed or interpreted; and how and how often reports on these indicators will be developed and distributed.
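A minimal sketch, assuming the plan is stored as structured data, of how one indicator entry in an M&E plan might capture the elements listed above (what is collected, how often, from which source, and against which baseline and target); every name and value shown is hypothetical.

```python
# One hypothetical indicator entry from an M&E plan.
indicator_entry = {
    "indicator": "Percentage of young people with comprehensive HIV knowledge",
    "data_source": "Population-based survey",
    "frequency": "Every 2 years",
    "baseline": {"year": 2010, "value": 0.52},
    "target": {"year": 2015, "value": 0.80},
    "responsible": "National M&E TWG",
    "reporting": "Annual M&E report",
}

for element, value in indicator_entry.items():
    print(f"{element}: {value}")
```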
National-level reports: Various sources of information that are used to describe programme inputs and programme-related, project-level activities countrywide; examples include reports of nongovernmental agencies and national reports on programme progress, performance, strategies, and plans.
Objective: A statement of desired, specific, realistic, and measurable programme results.
Operations research: Applies systematic research techniques to improve service delivery; this type of research and evaluation analyzes only factors that are under the control of programme managers, such as improving the quality of services, increasing training and supervision of staff, and adding new service components; it is designed to assess the accessibility, availability, quality, and sustainability of programmes.
Outcome evaluation: A type of evaluation that is concerned with determining if, and by how much, programme activities or services achieved their intended outcomes; whereas outcome monitoring is helpful and necessary in knowing whether outcomes were attained, outcome evaluation attempts to attribute observed change to the intervention tested, describe the extent or scope of programme outcomes, and indicate what might happen in the absence of the programme; it is methodologically rigorous and requires a comparative element in design, such as a control or comparison group.
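As a hedged illustration of the comparative element described above, the sketch below contrasts an outcome measured in an intervention group with the same outcome in a comparison group; the figures are invented, and the simple difference shown here stands in for the far more rigorous analysis a real outcome evaluation would require.

```python
def outcome_rate(positive: int, total: int) -> float:
    """Proportion of respondents reporting the desired outcome."""
    return positive / total

# Hypothetical survey results: the desired outcome (e.g. reported behaviour change)
# in the group exposed to the programme versus a comparison group.
intervention = outcome_rate(420, 600)  # 70% in the intervention group
comparison = outcome_rate(330, 600)    # 55% in the comparison group

# The comparative element: the observed difference tentatively attributable to the
# programme, before any adjustment for confounding factors.
print(f"Observed difference: {intervention - comparison:.0%}")
```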
Outcome monitoring: The basic tracking of variables that have been adopted as measures or "indicators" of the desired programme outcomes; with national AIDS programmes, it is typically conducted through population-based surveys to track whether desired outcomes have been reached; it may also track information directly related to programme clients, such as change in knowledge, attitudes, beliefs, skills, behaviours, access to services, policies, and environmental conditions.
Outcome: The effect of programme activities on target audiences or populations, such as change in knowledge, attitudes, beliefs, skills, behaviours, access to services, policies, and environmental conditions.
Output: The results of programme activities; relates to the direct products or deliverables of programme activities, such as number of counselling sessions completed, number of people reached, and number of materials distributed.
Partners: The individuals and/or organizations that collaborate to achieve mutually agreed upon objectives. Note: the concept of partnering connotes shared goals, common responsibility for outcomes, distinct accountabilities and reciprocal obligations. Partners may include governments, bilateral and multilateral organizations, the private sector, etc.
Performance: The degree to which a development intervention or a development partner operates according to specific criteria, standards or guidelines, or achieves results in accordance with stated plans.
Population-based surveys: Large-scale national health surveys, such as the Demographic and Health Survey.
Process evaluation: Type of evaluation that focuses on programme implementation, adding a dimension to the information that was tracked in input and output monitoring; usually focuses on a single programme and uses largely qualitative methods to describe programme activities and perceptions, especially during the developmental stages and early implementation of a programme; may also include some quantitative approaches, such as surveys about client satisfaction and perceptions about needs and services; in addition, might provide understanding about a programme's cultural, socio-political, legal, and economic contexts that affect programmes.
Programme records: Various sources of information that are used to describe programme inputs and programme-related, project-level activities; examples include budget and expenditure records and logs of commodities.
Purpose: The publicly stated objective of the development programme or project.
Q-U Definitions and Concepts Index
Reliability: Consistency and dependability of data collected through repeated use of a scientific instrument or data collection procedure under the same conditions; reliability is independent of data validity, that is, a data collection method may produce consistent data but not measure what is intended to be measured.
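The distinction drawn above, that a method can be consistent without measuring what is intended, can be illustrated with a small hypothetical sketch: repeated readings from the same (invented) instrument agree closely with one another (reliable) while systematically missing the true value (not valid).

```python
from statistics import mean, stdev

# Hypothetical repeated measurements from the same instrument under the same conditions.
repeated_readings = [48.9, 49.1, 49.0, 48.8, 49.2]
true_value = 60.0  # the quantity the instrument is actually meant to measure

spread = stdev(repeated_readings)            # small spread -> consistent, i.e. reliable
bias = mean(repeated_readings) - true_value  # large bias -> not measuring what is intended

print(f"Spread across repeats: {spread:.2f} (reliability)")
print(f"Bias from true value: {bias:.1f} (validity problem)")
```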
Research: Focuses primarily on hypothesis testing in a controlled environment; it typically attempts to make statements about the relationships among specific variables under controlled circumstances and at a given point in time.
Result: The output, outcome or impact (intended or unintended, positive or negative) of a development intervention.
Sentinel surveillance: See Disease surveillance.
Stakeholders: People, groups or entities that have a role and interest in the aims, implementation, monitoring and evaluation of an organization, project or programme. They include the community whose situation the programme seeks to change, the field staff who implement the activities, the programme manager who oversees implementation, development partners and other decision-makers who influence or decide the course of action related to the programme, and its supporters, critics, and other persons who influence the programme environment.
Sustainability (of a programme): Sufficient likelihood that political and financial support will exist to maintain the programme.
Target: Quantifiable levels of the indicator that a country or organisation wants to achieve at a given point in time.
V-Z Definitions and Concepts Index
Validity: The extent to which a measurement or test accurately measures what is intended to be measured.