Monitoring and Evaluation in Public and Project Management – Introduction

This article is part 1 of 3 in the issue Monitoring and Evaluation in Public and Project Management

Introduction

After reading this series of articles, you will gain a far greater understanding of how monitoring and evaluation (M&E) is used in public and project management. You will also see how it directly affects the quality of a country's public services, and how project managers in private organizations use it to demonstrate efficiency. Deployed effectively across sectors, M&E can improve the lives and livelihoods of the citizens who are its ultimate beneficiaries.

Monitoring and evaluation is an essential tool in public management because it is integral to effective service delivery. Although governments often neglected it in the past, it has now become a critical part of the functioning of a democratic society. Similarly, in the private sector, monitoring and evaluating the outcomes of a program or project allows managers to determine how successful they have been in achieving their desired goals. In both sectors, failure to implement a monitoring and evaluation plan can result in a massive waste of money, time and effort, while a shrewd application of the M&E process brings the greater transparency and accountability that are increasingly demanded of public and private organizations alike.

Definitions

Monitoring is the continuous tracking of a project (e.g., seasonal farming or a road construction activity), program (e.g., staff training or policy development), or service delivery (e.g., formal education, health, water and sanitation) to see whether the money spent on each of them is producing positive change in magnitude (output, value, proportion) and direction (behavioral or attitudinal change, reversals, or interim outcomes), and ultimately yielding a bundle of benefits (impact or ultimate results). It is synonymous with step-by-step surveillance. It also involves reporting back on what was observed, by comparing actual performance against what was targeted, planned or expected.

A number of indicators are designed to raise a red flag, signaling changes that occur during the implementation process, so that managers and administrators can respond proactively and correct the problems that have been detected. Monitoring thus aims to provide vital information through regular (real-time) feedback mechanisms, so that results can be steered in the right direction.
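As a minimal, hypothetical sketch of this idea (the indicator names, targets and tolerance below are invented for illustration, not drawn from any particular program), a monitoring routine can compare actual indicator values against planned targets and raise a red flag whenever the shortfall exceeds an agreed threshold:

```python
# Hypothetical sketch: compare actual performance against planned targets
# and flag any indicator whose shortfall exceeds an agreed tolerance.

TOLERANCE = 0.10  # assumed threshold: flag anything more than 10% below target

# Hypothetical quarterly indicators: name -> (planned target, actual result)
indicators = {
    "classrooms_built": (40, 31),
    "teachers_trained": (200, 195),
    "budget_disbursed_millions": (12.5, 9.0),
}

for name, (target, actual) in indicators.items():
    shortfall = (target - actual) / target
    status = "RED FLAG" if shortfall > TOLERANCE else "on track"
    print(f"{name}: target={target}, actual={actual}, "
          f"shortfall={shortfall:.0%} -> {status}")
```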

Examples of monitoring can be drawn from many different fields. You can monitor the performance of services rendered by government institutions, on a continuing basis, to see whether the monies spent on them year after year are worthwhile. A great deal of effort goes into budget formulation and preparation, but unfortunately far less time is spent on tracking how these monies are spent at each stage of implementation, and on the results they yield.

You can also monitor private-sector projects and programs from the beginning (design or input stages), through the mid-points (the implementation, output and outcome stages), to the end of the project (impact or conclusive stages). A good number of development missions, especially those funded by USAID (e.g., governance and economic growth projects) and the European Union (e.g., election monitoring programs), perform well precisely because they make effective use of monitoring and, by extension, evaluation mechanisms.

Evaluation is the second part of the M&E process. It entails further interrogation of the information gathered during the earlier monitoring stages. Evaluation is used to assess or appraise how, where, when, and by whom public-sector ministries, departments and agencies, or private-sector projects, programs and development schemes, have effectively and efficiently utilized the monies allocated to them in the budget process within the medium-term expenditure framework (MTEF). The findings of the end-term evaluation, based on the final outcomes (impact) or benefits derived from the services included in the budget, determine whether the project or scheme deserves to be continued (sustained) or not. In other words, evaluation is usually carried out at mid-term and end-term.

In Nigeria, relatively little time is usually spent on assessing project performance or the quality of service delivery. As implementation progresses, the administrators or project managers who should be responsible for carrying out the necessary internal checks and balances hardly ever do so. If you take a close look at the many Development Plans of government, at both the federal and state levels, you will observe that even where an elaborate M&E framework has been put in place, albeit theoretically, there is a virtual absence of tracking and assessment records. This is a clear sign that nobody is held accountable for anything that goes wrong within the system, a situation that fans the embers of systemic corruption.

Perhaps the main reason why M&E is not effectively carried out in government ministries, departments and agencies is that many of the administrators and managers responsible for these tasks are not adequately equipped with the requisite knowledge and skills. If asked to explain the extent to which the objectives of a project entrusted to them were met, they may not be able to do so; nor could they explain the extent to which quantified targets were attained or standards met. Where there was negligence, they could hardly attribute a project's abandonment to the right causal factor: was it the result of poor budget performance, recklessness on the part of the contractor handling the project, laziness on the part of the workforce, or forces beyond anyone's control (force majeure)?

Evaluation often requires a set of criteria or parameters by which the information must be assessed. What those criteria are will be determined by the thrusts (objectives) and goals of that particular project, process or policy.

Quantitative evaluation can be used to rank or numerically assess different things. For example, in sports, the three fastest (award-winning) athletes over a 100-meter or similar distance at an athletic event could be ranked by evaluating their race times (in seconds or minutes, as the case may be) recorded with stopwatches.
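As a small illustrative sketch of this kind of ranking (the athlete names and times below are made up), the recorded times can simply be sorted in ascending order to produce the podium places:

```python
# Hypothetical 100-meter race times in seconds, as recorded with stopwatches.
race_times = {
    "Athlete A": 10.12,
    "Athlete B": 9.98,
    "Athlete C": 10.05,
    "Athlete D": 10.31,
}

# Rank by the fastest time (smallest number of seconds) and keep the top three.
podium = sorted(race_times.items(), key=lambda item: item[1])[:3]

for position, (athlete, seconds) in enumerate(podium, start=1):
    print(f"{position}. {athlete} - {seconds:.2f} s")
```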

Another illustration is the use of certain indicators to gauge the socio-economic condition of a nation. The ‘inflation rate’, for instance, may be determined by comparing the current prices of a given basket of commodities with the prices that prevailed over a similar period in the previous year. The ‘poverty level’ of a nation could be evaluated by benchmarking the incomes of the majority of the populace against an acceptable minimum that can guarantee the basic necessities of life. ‘Access to medical treatment’ could, in like manner, be determined proportionately by comparing the number of people who consult doctors at hospitals with the number who patronize medicine stores and quacks.
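To make the arithmetic behind two of these indicators concrete, the sketch below (using invented prices, incomes and an assumed minimum-income benchmark) computes a simple basket-based inflation rate and a headcount poverty rate:

```python
# Hypothetical prices for the same basket of commodities, last year vs. this year.
basket_last_year = {"rice": 300.0, "bread": 120.0, "fuel": 165.0}
basket_this_year = {"rice": 360.0, "bread": 150.0, "fuel": 198.0}

cost_last = sum(basket_last_year.values())
cost_now = sum(basket_this_year.values())
inflation_rate = (cost_now - cost_last) / cost_last
print(f"Basket inflation rate: {inflation_rate:.1%}")

# Hypothetical monthly incomes benchmarked against an assumed minimum
# needed to guarantee the basic necessities of life.
minimum_income = 30_000
incomes = [18_000, 45_000, 25_000, 60_000, 22_000, 28_000]
poverty_rate = sum(income < minimum_income for income in incomes) / len(incomes)
print(f"Share of the populace below the benchmark: {poverty_rate:.0%}")
```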

In real estate, property investors might instead evaluate property sales figures to work out which suburbs are experiencing the fastest growth in a city.
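A property investor's version of the same exercise might look like the sketch below, which ranks fictional suburbs by year-on-year growth in median sale price (all figures invented for illustration):

```python
# Hypothetical median sale prices per suburb: (last year, this year).
suburb_prices = {
    "Suburb North": (420_000, 470_000),
    "Suburb East": (310_000, 325_000),
    "Suburb West": (550_000, 605_000),
}

# Year-on-year growth rate for each suburb, ranked fastest first.
growth = {suburb: (now - last) / last for suburb, (last, now) in suburb_prices.items()}

for suburb, rate in sorted(growth.items(), key=lambda item: item[1], reverse=True):
    print(f"{suburb}: {rate:.1%} growth")
```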

In a public works department (PWD), comparisons could be drawn across countries with respect to the amount of money and the standard materials spent on road construction over an equivalent length of road.

Qualitative evaluation (or analysis) is by nature much more subjective, and the criteria need to be set out very carefully. With particular reference to the road construction example above, qualitative evaluation can become far more complex. It is unlikely that topographical and socio-economic conditions would ever be the same across countries; hence, even though quantitative amounts of money can be stated and compared, it would not be realistic to equate the money spent on a road in one jurisdiction with that spent on a similar road project in another. Soil conditions will differ; slopes may vary; rivers may crisscross the route and compel bridges to be built; labour and material costs may differ; and so on. It is therefore important always to take supplementary and careful qualitative analytical notes, recording the basic underlying assumptions, before arriving at any final quantitative conclusions.

Next in this series: The M&E Framework
