Patterns and influences in the supply and demand of evaluation and monitoring in Uganda’s public sector over the past two decades

The paucity of rigorous public sector evaluations has been identified as a constraint to improving the culture of debating empirical evidence in public policy. To address this, the Office of the Prime Minister in Uganda began an initiative to strengthen the framework and production of rigorous evaluations across the public service.

[Editor’s Note: This article was written by David Rider Smith, Performance Measurement and Evaluation Manager for the CGIAR Research Program on Water, Land and Ecosystems, led by the International Water Management Institute. The author also worked for the UK Department for International Development and the Government of Uganda over the period 2007-2015. On this occasion, he reflects on the conditions enabling the development of monitoring and evaluation practices, the evolution and results of Uganda’s Poverty Eradication Action Plan with respect to M&E, and the present state of the matter.]


Context

 

Over the past two decades, considerable efforts have been made to establish a strong and robust basis for assessing public spending and its effects on the development of Uganda and its citizens. To better understand the linkages, patterns and constraints to growth and change, substantial resources have been spent on establishing good time-series and qualitative data on key socio-economic indicators; on public accounts; on regular monitoring of public policy interventions; and on policy-relevant research, analysis and evaluation.

 

In order to understand where this investment in monitoring, evaluative research[1] and evaluation has had the biggest impact on public policy and accountability, it is necessary to examine the relationships between policy, institutions and individuals in the public sphere. Evidence suggests that the linkage between assessment and policy change has been productive only when the environment in each of these spheres is conducive and the spheres are connected[2].


Normative Framework

 

Relevant literature[3] points to the critical prerequisite of a variety of demand-side elements in the use of evaluation and evaluative research in public policy and programmatic decision making. These elements can be categorized into four: the openness of the political system to evidence and argument; organizational systems with performance measurement and analysis embedded within them; individual leadership, where relevant policy makers have an interest in analytical work and/or knowledge of the relevant subject matter; and individual evaluation factors relating to the timing and focus of evaluations. While there is no empirical evidence on the depth or extent to which these factors need to be present or active, nor on which combinations of factors create demand most effectively, elements of each are considered necessary for effective uptake.

 

On the supply side, critical factors can also be categorized into four: a framework of legal or administrative policy for evaluation or public policy research; the presence of systems for designing, commissioning and/or conducting and disseminating evaluations or research; the capacity to evaluate, through a strong social science academic and consultancy sector; and, in the nascent phases of economic development, the presence of external assistance to finance such analytical work.

 

Supply and demand elements are not mutually exclusive. While demand is considered critical to uptake, it also relies upon, and can be strengthened by, an adequate framework and system for supply[4].


Poverty analysis and its impact on public policy

 

The Government of Uganda started to produce poverty monitoring data in 1992, through the Uganda National Household Survey (UNHS) reports, and has since updated this information every two to three years. This data, however, did not play a vital role in assessing public policy until the launch of the Poverty Eradication Action Plan (PEAP) in 1997. Heavy investments in education and health service delivery through the PEAP made it necessary for the Government to assess closely the effectiveness of these interventions in transforming societal welfare. Hence, the policy environment and actors were open to the production of reports that would, in effect, illustrate how far the national policy was proving effective.

 

In 1999, the Government of Uganda designed a poverty monitoring strategy that guided the production of biannual poverty status reports, and associated monitoring reports and publications. The Poverty Status Reports (PSRs) were high quality analytical pieces which drew upon quantitative and qualitative household and facilities survey data to determine the patterns and changes in rural and urban poverty.

 

The institutions responsible for generating, analyzing and reporting poverty data were critical to this process, not only for their effectiveness as stand-alone institutions, but also for the inter-relationships between these agencies. The PSRs were reliant on good, regular statistical data production, and the work of the Uganda Bureau of Statistics (UBOS) was critical in this. UBOS, with substantial financing from the World Bank, initiated a process of producing regular nationwide household surveys on household income and poverty, health status, population trends, and later on other economic and public policy issues.

 

Within the Ministry of Finance, Planning and Economic Development (MFPED), a Poverty Monitoring and Analysis Unit (PMAU) was established to monitor, analyse and report on data generated on poverty and related issues, including the preparation of PSRs (with financing from the UK). This work was supported by the Government-sponsored Economic Policy Research Centre (EPRC). The Uganda Participatory Poverty Assessment Process (UPPAP) was also established in MFPED to provide qualitative data on key socio-economic indicators and the impacts of selected Government policies on the citizenry. The qualitative data was an integral part of the information used to prepare PSRs.

 

Each of these institutions (UBOS, PMAU, UPPAP and EPRC) had highly qualified, committed and motivated individuals in key positions. The ability to produce high quality monitoring reports in a timely manner for political and administrative consumption reflected not only their individual abilities, but their willingness to work together to deliver demand-driven monitoring reports. Within MFPED, a strong working relationship between PMAU and the top management of the Ministry meant that findings from these products made their way into policy and allocative decisions, and in turn attracted increased official development assistance. Many of the issues raised in the PSRs influenced decision-making at both the Cabinet and Parliamentary levels, and helped in focusing expenditures in areas that were most meaningful for poverty reduction.

 

The window of opportunity and practice in the production of PSRs in Uganda reflected the priority placed on, and progress made in, poverty reduction from the President down, and the relationships and abilities of the institutions and individuals involved. This held throughout the first and second PEAP (1997-2000; 2000-2003) but declined with the third PEAP (2004-07). Analysis of the extent to which the second PEAP reflected the findings of evaluative research was produced in the second UPPAP, carried out in 2002, finding that the PEAP did indeed place greater emphasis on cross-cutting issues, such as environment, and recognized the heterogeneity of the poor through the greater emphasis placed on the decentralized delivery of services and on district level plans. In a more subtle manner, while the 1998/99 UPPAP raised concern over the negative impact on the poor of cost sharing in health services, this was not addressed in the PEAP of 2000, but was announced by the Government in 2001 during the election campaign (OPM, 2008[5]). More broadly, the timing of the PEAP cycles, revised in 2000 and 2004, did not match the election cycles of 2001 and 2006, with the result that some analytically driven initiatives emerged during or following an election rather than through the PEAP, such as the Strategic Export Initiative, which was initiated in 2001 following the election but was not evident in the PEAP.

 

The decline in the PEAP’s influence during its third phase (2004-07) occurred as the public policy debate on development within the country and amongst international stakeholders shifted towards economic growth and enhancing the accountability of the state, in the face of evidence of malpractice and corruption in the use of the state’s resources. 


Shift to budget and performance monitoring for accountability

 

Whilst the evidence from household surveys and PSRs began to reveal that the overall poverty headcount was falling, it was also noted that growth and development were becoming increasingly imbalanced (MFPED, 2005; UBOS, 2006[6]). As the public purse expanded, based on a strong and stable economic growth rate, relatively low inflation, and a considerable rise in official development assistance, so too did concerns over the application of, and accountability for, public spending. Efforts to strengthen public financial management included the recognized need to understand how public resources were being applied centrally, and locally under the decentralized system of government and public programme implementation introduced in 1997.

 

The emphasis on monitoring shifted away from periodic analysis of poverty trends and causes, and towards the monitoring of budget spending.  During the latter half of the last decade, the MFPED introduced a series of reforms to enable Ministries, Departments and Agencies (MDAs) and Local Governments (LGs) to plan and budget annually according to clear budget lines, and against the provision of products and services.  Systems have been introduced requiring all MDAs and LGs to report quarterly on spending and progress towards stated output (product and service) targets, as the basis for future financial releases. 

 

This massive reorganization and growth in the administration of Government placed increasing attention on the generation and use of administrative data and statistics, and on the monitoring and oversight mechanisms in place to capture and report on performance information.

 

The political interest in, and pressure for, monitoring spending and results has increased since the re-introduction of multi-party politics in Uganda in 2006, and with the growing attention of the domestic media and international community to unequal growth and the incidence of corruption in the use of public resources. The President and other senior policy makers have taken cognizance of these issues, and have placed increasing demands on the public service to improve its stewardship of resources and ensure effective development.

 

The impact of this on public institutions is still unfolding. Efforts to improve oversight in key service delivery institutions (through regular implementation and budget monitoring), and through inspection of service delivery, have increased, though not in a uniform or consistent manner. The former PMAU in the MFPED has been transformed into a Budget Monitoring and Accountability Unit (BMAU) to track expenditure and outputs against budgets and planned activities in a sample of frontline institutions, through direct field monitoring and reporting. Monitoring is focused on the outputs agreed and signed up to in the Performance Contracts between the Permanent Secretary of MFPED and the implementing agencies. Efforts to reorganize the overall inspection function of Government are underway. A Presidential directive to hold public fora at which local Government is held to account (so-called public ‘barazas’) has been taken forward by the Office of the Prime Minister, and the UBOS is seeking to expand its mandate to improve the quality of administrative statistics.

 

Simultaneously, the Office of the Prime Minister (OPM) has reinvigorated its own constitutional role of coordinating the implementation of public policies and programmes by establishing a robust monitoring coordination and oversight function. Building on an early attempt at producing an outcome-based review of the PEAP in 2007, bi-annual Government Performance Reports were initiated in the Financial Year 2008/09, and two-day retreats of all Ministers and Permanent Secretaries were established under the President to discuss the performance report, hold portfolio Ministers to account, and propose corrective measures. Since 2011, these bi-annual retreats have been expanded to include all Local Government Council Chairpersons and Chief Administrative Officers. This has expanded the basis of debate around public service delivery.

 

In this new environment, accountability has become the central concern, with a consequent de-emphasis on generating information for the purposes of understanding causes and generating policy lessons. The considerable differences in practices across Government in the monitoring and inspection of public investments reflect the balance of priorities, incentives and capacities across the sectors, influenced in part by the international community, who continue to invest in certain sectors over others (notably front line services such as health, education, water and, increasingly, roads).

 

The effectiveness of the increased monitoring of public spending has yet to be borne out. The regular public presentation of information on the performance of Government does not yet appear to have had an impact on public policy, but it has resulted in greater public focus on the need to enforce accountabilities and, significantly, has also revealed the widespread misuse of funds. However, given the breadth and depth of evidence on challenges to public service delivery, the political class and legislative arm have still to make best use of this information in shifting policy directions, reallocating resources to more efficient areas, or, in cases of misuse of resources, holding culprits to account.


Efforts to strengthen the analytical and the evaluative

 

The concerted efforts to strengthen monitoring have come at a cost. The practice of public sector evaluation has never been institutionalized in the country, but it was reasonably well serviced in the late 1990s and early 2000s through the PSRs and other analytical tools and products. The subsequent neglect has led to a deficit in the analysis of results and constraints, and in the identification of policy lessons and choices. Monitoring by itself does not answer these questions or address these issues.

 

Between 2005 and 2008, a review of evaluation in Uganda found eighty-five evaluations commissioned, of which ten were commissioned or co-managed by the Government (OPM, 2009[7]). When these ten reports were reviewed in detail, several were found not to meet basic quality standards for evaluation and were subsequently reclassified as performance assessments or reviews. In terms of coverage of public investments, Government estimated in 2009 that less than 10 percent of projects over the period 2005-08[8] were being subjected to evaluation. In a sample of Ministries, Departments and Agencies, the same review found little explicit demand for evaluations, aligned with weak organizational capacity and limited use of those that were conducted. In an apparent contradiction, it was also found that there was a perceived need for ‘more evaluation’ in principle (ibid), reflecting not weak demand per se, but the lack of a clear policy, incentives and resources to evaluate.

 

Of the evaluations that were conducted during that period, there is little evidence of their impact, owing to a lack of appraisal by Government or the international partners of their dissemination or use. An exception was the set of evaluations covering the agriculture sector, starting with one covering the Plan for the Modernization of Agriculture (PMA) in 2005, a second looking at the performance of the National Agricultural Advisory and Development Service (NAADS) in 2008, and a third, an impact evaluation also on NAADS, in 2009. Each of these independent evaluations gave a broadly positive assessment of progress, with the 2009 impact evaluation showing positive results on the adoption of improved technologies, productivity and per capita incomes. The study (Benin, 2009[9]) also found that between 2004 and 2008, NAADS was associated with an average 24-53 percent increase in the per capita agricultural income of the programme’s direct participants compared to their non-participant counterparts. However, as noted by other commentators, despite the reported successes of NAADS, overall indicators for agricultural growth were not improving (Kjaer and Joughin, 2011[10]). This has presented a problem for Government, and has resulted in changing strategies on agriculture and on NAADS, including the renationalization of extension workers, despite the finding from the 2008 evaluation that ‘a return to using public sector extension workers for the majority of services was not a viable option’ (ibid).

This raises two issues. The first relates to the unit of analysis of the evaluation: NAADS as an initiative may be seen as relatively successful, but the evaluation does not take into account the broader context, which may be less positive and hence points to more fundamental structural challenges. The second reflects the use of evaluative evidence in cases where the majority of the population, including policy makers, have direct personal knowledge as landowners and farmers, and where the majority of the electorate live off the land and thus require evidence of efforts to improve their lot.

 

The paucity of rigorous public sector evaluations has been identified as a constraint to improving the culture of debating empirical evidence in public policy. To address this, the Office of the Prime Minister began an initiative to strengthen the framework and production of rigorous evaluations across the public service. Starting in 2008, OPM led the design, implementation and dissemination of an evaluation of the successes and failures of the PEAP over its decade of implementation, and of two evaluations (2008 and 2011) of the implementation of the Paris Declaration on Aid Effectiveness in Uganda.

 

The PEAP evaluation process was important in that it was managed by a steering committee composed of representatives from OPM, MFPED and the National Planning Authority (NPA), as well as from the funding donor agencies, and was a good example of how inter-ministry coordination can work when there is a specific focus or task. It was also important in that OPM understood how the evaluation results could and should be disseminated and acted upon, managing a series of workshops for various stakeholders and writing a white paper for Cabinet based on the evaluation results and the government response. While somewhat supply driven in origin, the evaluation did find an audience amongst policy makers, with the findings and recommendations discussed twice by Cabinet, and these in turn influenced the shape of the PEAP’s successor, the five-year National Development Plan.

 

Subsequent efforts to strengthen evaluation practice include the development of a national policy on monitoring and evaluation, approved by Cabinet in May 2013, which defines the roles, requirements and practices to be embedded in the public service. The Policy delineates the functions of monitoring and evaluation, and provides a prescription for the establishment of a Government Evaluation Facility (GEF). Preparation for the GEF began in parallel to the Policy in 2010, with the Facility launched in 2011. The Secretariat of the GEF is located at the OPM, and its components include a rolling national evaluation agenda determined by Cabinet; a virtual fund to provide reliable financing for the evaluations selected; and a national sub-committee on evaluation composed of representatives of Government, academia, the NGO sector and the donor community to oversee the design, production, quality assurance and dissemination of the studies.

 

As of July 2013, the GEF has completed two major public policy evaluations, on the effectiveness of Government’s response to absenteeism in the public service and on the effectiveness of the public procurement and disposals authority, and has a pipeline of six further major public policy evaluations covering a variety of public service delivery related topics, including the effectiveness of Government’s employment strategy; a comparative evaluation of public and private service delivery; the impact of the Land Act amendments on illegal land evictions; and the impact of aspects of the Northern Uganda Social Action Fund. Each evaluation is managed by a central coordinating Ministry, either OPM, MFPED or NPA, with evaluations conducted in-house or outsourced to research or consultancy institutions depending on the topic and capacity. All evaluations are subjected to independent reference groups for quality assurance, and Cabinet papers are written containing the findings to facilitate uptake. Government responses are required to all evaluations, building on the experience of drafting a Cabinet White Paper in response to the independent evaluation of the PEAP in 2008, where the actions from the evaluation were rigorously followed up.

 

The strengthening of supply is linked back to demand by senior civil servants and politicians to revitalize some of the coordination structures within Government. A change of Minister and Permanent Secretary in OPM in 2009 led to the renewal of the national coordination framework of committees established by Cabinet in 2003, but left dormant in the intervening period. A three-tier structure of committees links Cabinet with cross-sectoral technocrats, and provides a conduit for feeding Government-wide directives down to implementers and feeding evidence from analysis back up. A national monitoring and evaluation technical working group meets bi-monthly, with wide representation from across the Government, NGO and donor communities. Sub-committees on evaluation, and on monitoring and oversight, take up much of the work, which feeds back into the working group and on to Cabinet.

 

This strengthening of the Cabinet-led coordination system is ongoing. A feature of Uganda’s public sector governance arrangements has historically been the power of the Presidency and the relative weakness of the Cabinet system. A practical consequence is that Ministers are not subjected to a collective government discipline. There is therefore no clear means of holding MDAs responsible and accountable for their performance. The main lever for collective discipline is the withholding of funding by MFPED, but it often cannot be applied to core government services, and is ineffective against MDAs with powerful political backing. The moves by Ministers, and subsequently the appointment of a new and politically powerful Prime Minister following the national elections of 2011, have been important steps by the Government to fill out the role provided for the Prime Minister in the Constitution. The Committee structure is a major step forward in strengthening the Cabinet’s role in maintaining a strategic and collective demand for performance to which portfolio Ministers and their MDAs are subordinate. It thus provides the space for empirical evidence to be considered and discussed at a high level, giving a greater chance of uptake in public policy and implementation.


The withdrawal of donor financing and the increasing role of politics in civil administration

 

These reforms took a backward step in late 2012, when a case of grand corruption was identified in the Office of the Prime Minister and other parts of Government, resulting in a large-scale withdrawal of donor financial aid and budget support. This had two immediate impacts on the evaluation agenda. First, the majority share of the recurrent and development budget for monitoring and evaluation activities was financed by donors, and the aid freeze therefore had an immediate and substantial impact on staffing, with numbers in the department reducing by over half[11], and the majority of activities stopped, including several evaluations. Second, the credibility of the Office came into question, with investigations into senior and mid-level staff. During this period of investigation, court cases and internal reorganization, the instability and lack of finances resulted in the stalling of the evaluation agenda.

 

Wider questions regarding the legitimacy of the OPM to effectively play its constitutionally mandated role of leading government business and coordinating government policies and programmes have been raised, though to a greater extent by donors than internally within the public service. With the movement of senior civil servants, including the Permanent Secretary, and progress made Government-wide in addressing a number of financial controls and accountability issues, support is now returning, including to the evaluation and evidence agenda. However, the form of this assistance is likely to change. Financial aid is likely to be provided only in exceptional circumstances where controls are strong and alternative modes of delivery are inappropriate, so donors are likely to return to project-type support through self-managed procurement of technical assistance and services for evaluation. This is likely to achieve the necessary reduction in fiduciary risk, but also to reduce public sector ownership, and possibly commitment.

 

As a break point, the corruption case and the freezing of aid enabled observers to see the extent to which the Government is committed to the monitoring and evaluation agenda, having to finance all operations from its own core budget. Government monitoring, the performance report and Cabinet retreats have continued, albeit with some challenges to quality posed by the substantial reduction in trained staff. The evaluation agenda has suffered, with dwindling numbers attending the cross-Government evaluation committee and, despite a healthy pipeline of evaluations, slow progress being registered.

 

The introduction of a new Permanent Secretary to the Office of the Prime Minister in June 2013 will change the landscape, and the opportunities for evaluation in the public sector, once again. Early signs of promise are that she has the political support to shake up personnel and systems, with a number of changes in the leadership, audit and procurement staff. It will take time for her to establish herself and tackle a series of issues before turning to efforts to strengthen the evaluation function.

 

The political space for evidence-informed debate in Uganda appears to be shrinking. The open contestation of ideas within the ruling party, particularly amongst the younger members, has been controlled, and new legislation, such as the public order management bill, reflects attempts by the Government to control opposition and public rallies. Reforms within the public service remain piecemeal in sequencing and financing, and consequently there are major challenges in education, health and infrastructure provision. In this context, decision-making is increasingly based on allegiance or on defensive grounds, not on harnessing innovation or new approaches. This positions evaluation firmly on the supply side within the public sector, with limited opportunities for growth. However, greater demand may exist amongst non-state actors to amplify the voices of the public. Signs of this are emerging with the recent re-establishment of the Ugandan Evaluation Association, whose membership is growing, and with visible signs of progress such as the organization of national evaluation weeks in both 2013 and 2014[12].


Regional Comparison

 

Little comprehensive analysis exists of the state of evaluation supply and demand across Africa, despite the considerable investment in evaluations of projects and programmes implemented in this region over the past two decades. While the focus of evaluation has shifted away from being an instrument of accountability for donor-financed initiatives, used to justify taxpayer spending, towards learning where, when and why specific interventions do and do not work, there has not been much of a discernible shift in demand towards southern country governments and stakeholders, nor sizeable improvements in the systems, capacities and activity levels of suppliers in these countries. Evaluation thus remains something of a satellite to development itself.

 

There are exceptions to this, and the patterns of growth and change in Uganda provide evidence that the situation in 2014 is not the same as in 1994, with an increased demand for specific types of evidence within Government, and a growing supply of rigorous evaluation studies taking place in the country, albeit still primarily supply- and northern agency-driven and hence not linked sufficiently into local decision-making practices. Two studies conducted over the period 2012-13, which looked into monitoring and evaluation systems[13] and investigated the demand for and supply of evaluation[14] in a total of ten sub-Saharan African countries, found some evidence of the formation of evaluation practices and systems through, inter alia, the establishment of units within central ministry mandates and practices, though these are typically under-resourced and under-utilised. Evidence from countries including South Africa and Benin is particularly promising, while the conditions to stimulate greater demand and supply in countries like Rwanda, and in some respects Ethiopia, were also noted.

 

Growth among non-state actors is also evidenced by the proliferation of voluntary organisations. However, mandating evaluation, within public institutions in particular, has been found to have mixed effects in terms of building capacity or increasing the use of evaluation evidence. As noted by McDonald et al “Making evaluation mandatory could promote a culture of token compliance, but voluntary adoption is much slower to take effect”[15]. Analysis of the political economy around the use of evidence from evaluation to inform policy making provided some explanation for why resourcing and capacity do not always translate into policy influence, and how a more nuanced approach in each country might have a greater impact. 

 

The research by CLEAR (op. cit.), for example, illustrates that in states with strong centralised governments that have a clear focus on improving services for citizens, the opportunity to strengthen evaluation exists through established channels of accountability, as long as it does not challenge areas sensitive to the incumbent party. In countries where power is more decentralised, opportunities to strengthen evaluation are more varied, but less likely to impact on the overall development trajectory. The challenge in all cases is being clear about whether investment in evaluation is likely to strengthen progressive policy choices and democratic debate, or to reinforce authoritarianism and rent-seeking behaviour[16] where it exists. Hence, the basis of decision making greatly affects the source and type of evidence demanded.

 

The growing number and reach of national evaluation societies (NES) and voluntary organisations for professional evaluation (VOPEs) also appears to reflect a supply response to an increasing demand for evaluation in a diverse range of countries[17]. The basis for the rise of networks of evaluators has not, as far as the author is aware, been researched, but it is possibly a lagged response to the drive for results, and the consequent focus on measurement and evaluation, stimulated by the Paris Declaration.


Conclusion

 

The experience of Uganda over the past two decades illustrates that the establishment and effectiveness of monitoring and evaluation practices are greatest when policies, institutions and individual actors come together. The composition and balance of these factors shift over time. The poverty-focused analytical work was pre-eminent in the late 1990s and early 2000s, at a time when there was political consensus over the required direction of change within the new Government, and with strong support from the donor community. Thus, the analytical work had a willing audience within the political class and amongst the country’s supporters.

 

As the PEAP began to lose traction, and Government policy shifted towards economic growth, so too there was a reduction in the supply of and demand for poverty analysis, and a tactical shift within MFPED towards monitoring and expenditure tracking. The number of agencies involved in monitoring and oversight has proliferated, as demand pressures and supply opportunities within the public sector have increased.

 

In the last three years, notwithstanding the continued focus on monitoring for accountability, the OPM has led the drive towards reintroducing more analytical work into the policy debate through the establishment of the Government Evaluation Facility. Shifts in the political economy of the Cabinet, with a new and powerful Prime Minister and a reinvigorated coordination mechanism, appeared to provide a basis and structure through which demand for evidence could be elicited to inform public service delivery.

 

However, the demand-side conditions considered prerequisites for effective evaluation uptake and use appear to have weakened more recently in Uganda. The openness of the political space for debate is reducing as the campaign in advance of the 2016 election nears. The organizational systems and the individual champions are in some cases still present, but they are unable to thrive or function as effectively as they should when undermined by corruption cases, potential loss of legitimacy, and the context of persistently poorly performing public services. The opportunities for evaluation to influence decisions may now lie primarily outside the public sector and with the public themselves, supporting citizens to demand better services and rights. Efforts to address this are emerging, such as the reinvigorated Uganda Evaluation Association, but these will need nurturing and support over many years to effectively play this role.



[1] In this paper, evaluative research conducted in the public sector, i.e. led by public or quasi-public sector institutions, refers to analysis not only of trends, but also of causes and potential policy responses. These include the Poverty Status Reports (PSRs) and related analytical products.

[2] This paper builds on one presented at the 2012 American Evaluation Association Conference entitled ‘political economy of evaluation in Uganda’s public sector’ and the article by Rider Smith, Nuwamanya and Nabbumba Nayenga, 2010, Policies, Institutions and Personalities: Lessons for Uganda’s experience in Monitoring and Evaluation in  From Policies to Results: Developing capacities for country monitoring and evaluation systems, UNICEF

[3] Weiss, 1999, The interface between evaluation and public policy, Evaluation 5: 468; Bamberger, 2009, Institutionalizing Impact Evaluation within the Framework of the Monitoring and Evaluation System, World Bank; Gaarder and Briceno, 2010, Institutionalization of Government Evaluation: Balancing Trade-Offs, Working Paper 8, International Initiative for Impact Evaluation; Weyrauch and Langou, 2011, Sound expectations: from impact evaluations to policy change, Working Paper 12, International Initiative for Impact Evaluation

 

[4] The full normative framework can be found in the paper of the same name presented at the 2012 American Evaluation Association Conference.

[5] Office of the Prime Minister, 2008, Independent Evaluation of Uganda’s Poverty Eradication Action Plan 1997-2007, Vol 2: Political Economy, Oxford Policy Management Ltd. Government of Uganda.

[6] Ministry of Finance, Planning and Economic Development, 2005, Poverty Status Report 2005, Government of Uganda ; Uganda Bureau of Statistics, 2006, Uganda National Household Survey 2005/06- Socio-Economic Module Report, Government of Uganda.

[7] Office of the Prime Minister, 2009, Mapping evaluation practice, demand and related capacity, Ian C. Davies. Unpublished Report. Government of Uganda

[8] Including donor-financed projects implemented through the public sector.

[9] Benin S. (2009). Impacts of and Returns to Public Investment in Agricultural Extension: the Case of the NAADS Programme in Uganda. IFPRI Research Report. Washington: International Food Policy Research Institute

[10] Kjaer, A.M. & Joughin, J., 2012, The Reversal of Agricultural Reform in Uganda:  Ownership and Values, Policy and Society, Vol 31, Issue 4, November 2012: pp.319-330

[11] Contract staff represented approximately 80% of all staff in the Department of M&E, OPM. Of these, 70% had contracts with OPM, all of which were financed through donor funding. With the freeze of official aid to OPM, the Government cancelled the contracts of these staff after two months, in December 2012.

[12] With considerable technical and financial support from the German Government/ GIZ.

[13] CLEAR (2012), “African Monitoring and Evaluation Systems: Exploratory Case Studies”. Graduate School of Public and Development Management, University of Witwatersrand, South Africa. www.wits.ac.za/files/glgah_826912991359647072.pdf

[14] CLEAR (2013). “Demand for and Supply of Evaluations in Selected Sub-Saharan African Countries”. Graduate School of Public and Development Management, University of Witwatersrand, South Africa. www.clear-aa.co.za/publications/

[15] McDonald, B., Rogers, P., and B. Keffurd (2003). “Teaching People to Fish? Building the Evaluation Capability of Public Sector Organizations.” Evaluation, vol. 19, no. 3, 9-29.

[16] ‘Rent seeking behaviour’ (after Anne Krueger, 1974) refers to exploiting one’s position for personal gain.

[17] Ba Tall, O. (2009). “The role of national, regional and international evaluation organizations in strengthening country-led monitoring and evaluation systems”. In M. Segone (Ed.), Country-led monitoring and evaluation systems: Better evidence, better policies, better development results (pp. 119–143). New York, NY: UNICEF.
