News

Increasing knowledge uptake and aid effectiveness through better evaluations: Actions and shifting paradigms for complex times

Speaking at the 14th European Evaluation Society Biennial Conference in June, NIRAS' Monitoring, Evaluation and Learning team will present their insights on how to transform and adapt monitoring, evaluation and learning to respond to the challenges and opportunities of our times. Topics we will address include the changing role of evaluators, the application of big data, and the fine balance between standards and accessibility.

May 30, 2022

The 2022 European Evaluation Society (EES) conference is gearing up to be an interesting platform for interdisciplinary debate on how to improve monitoring, learning and evaluation. As the conference organisers note, the evaluation sector is increasingly professionalised and systematised and may be at risk of falling into the trap of path dependency – a process whereby institutions or systems develop in certain ways as a result of their structure or past beliefs and values.

With increasing uncertainty and complexity in the world, at NIRAS we regularly ask ourselves how our evaluation systems and approaches can support transformative evaluations that increase knowledge uptake and improve aid effectiveness. We aim to rethink the role of evaluators in making evaluation transformative, constantly reassessing our position not only as neutral observers but also as advocates and champions of the aid effectiveness agenda. We encourage our clients to push the boundaries of evaluative thinking by taking an interdisciplinary approach, considering contextual factors and applying broader systems thinking to assess the effectiveness of development aid. We always consider how development aid enables inclusive, equitable and resilient communities to thrive while preserving ecological balances and protecting biodiversity.

Several shifts are needed for the evaluation sector to respond to these challenges, seize opportunities and transform aid effectiveness – shifts in institutions, identity, content, and methodology. Our contributions to the conference build on insights we have gleaned from different types of assignments. We hope we can offer new perspectives – grounded in practice – on how these shifts can be supported and contribute to transforming evaluation in ways that improve development aid and foster sustainable living for equitable and resilient communities.

From neutral observers to advocates, truth speakers, and agents provocateurs: what role should evaluators play?

Evaluators can play different roles to ensure that research findings are useful to those involved. As evaluations respond to our transformative times, it is important for us to reflect on the shifting roles of evaluators and evaluations. In this EES session, we seek to further the discussion in a forum that brings together evaluators, commissioners, researchers and implementers. We will focus on the role of evaluations, the different roles evaluators can play, and how evaluators can wear different hats effectively in different circumstances.

We will also present the work NIRAS has been doing together with the Ford Foundation since 2018 applying a Developmental Evaluation approach. We recently completed this evaluation, and the final report is available here. The approach is well suited to evaluating innovative programmes, or any process in an exploratory phase where specific outcomes are not yet known or are, at best, vague. It assumes that social interventions often take place in complex environments where problem-solving actions are uncertain. During this session, we will present the key principles of Developmental Evaluation and discuss how to apply it in practice, along with its benefits and limitations. We will share lessons from the implementation of two large, multi-country, multi-year Developmental Evaluation assignments and provide useful tools and resources for anyone interested in planning a Developmental Evaluation. We will conclude the session with an open discussion on the evaluator's role in ensuring that knowledge changes practice, and on why the role of the evaluator needs redefining when evaluating amidst uncertainty.

How can evaluation make the most of big data and work more closely with data scientists to provide real-time evidence of what works and what does not?

Unstructured text (including recorded speech) is arguably the most common type of data that real-world evaluators encounter. At a dedicated EES session, we will present text and speech analytics tools for working with sources such as qualitative semi-structured interviews, project documents, and newer big data sources like website-based text. The tools for processing and analysing speech and text data have evolved rapidly over the past few years, driven in part by advances in deep learning and in the wider field of natural language processing (which combines computer science, linguistics and artificial intelligence). Text analytics tools offer huge opportunities for evaluators to transform evaluation systems by lowering the time and cost required to process and analyse expanding sources of text and speech data.

However, important challenges prevent evaluators from making the best use of these tools. Key issues include (1) evaluators not being aware of tools they could use directly, without having to rely heavily on data scientists; (2) the need to better link machine- and human-based intelligence, with evaluators driving the use of the tools; and (3) challenges in deploying these tools so they can support project and programme staff with real-time decision-making and learning. These are some of the issues around text and speech analytics tools that we hope to address at the EES event.
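To illustrate the first point – tools evaluators can run directly without a data scientist – here is a minimal sketch of unsupervised topic extraction over interview excerpts using the open-source scikit-learn library. The excerpts, and the choice of TF-IDF features with non-negative matrix factorisation, are our own illustrative assumptions; they are not the specific tools or data that will be presented at the session.

```python
# Minimal topic-extraction sketch (illustrative only; assumes scikit-learn is installed).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

# Hypothetical interview excerpts an evaluator might have transcribed.
excerpts = [
    "The training improved farmers' access to local markets and credit.",
    "Women reported better access to credit after the savings groups formed.",
    "Market prices for maize remained volatile throughout the season.",
    "The school feeding programme increased attendance among girls.",
    "Attendance improved once the feeding programme covered all grades.",
    "Teachers noted that girls' enrolment rose after the programme began.",
]

# Convert the text to TF-IDF features, dropping common English stop words.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(excerpts)

# Factorise the document-term matrix into two latent topics.
nmf = NMF(n_components=2, random_state=0)
nmf.fit(tfidf)

# Print the top words that characterise each topic.
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(nmf.components_):
    top = [terms[j] for j in topic.argsort()[::-1][:4]]
    print(f"Topic {i}: {', '.join(top)}")
```

Even a toy example like this shows how qualitative material can be clustered into themes in a few lines of code – the kind of direct, evaluator-driven use of text analytics tools discussed above.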

Can we redefine standards of rigour and at the same time make methods more accessible to agents of change and decision-makers?

In our last session, together with collaborators, we will elaborate on the role of external evaluation quality assurance. We will discuss it from the perspectives of the different actors involved in developing, managing and conducting evaluations: evaluation commissioners, who are interested in obtaining credible evaluation reports that comply with international evaluation standards; evaluation teams and team leaders, who seek external quality advice to strengthen their evaluations; evaluation managers of consultancy firms contracted to conduct evaluations, who want to optimise quality by engaging external QA advisers; and QA advisers themselves, who are tasked with reviewing evaluation products against the evaluation's Terms of Reference, the quality requirements of the commissioner and the consultancy firm, and international evaluation quality standards. During a panel session, we will highlight practical challenges in providing external evaluation quality advice aimed at fostering the quality, credibility, and utility of evaluation products.

The EES conference will be held on 6–10 June; more information is available at www.ees2022.eu. We look forward to seeing many of you in person at the conference. Save the date, join our sessions, ask questions and meet us afterwards!

Raphaëlle Bisiaux

PhD, Senior Consultant/Evaluation, Research and Learning

Stockholm, Sweden

+46 8 545 533 26

Graham Haylor

Managing Consultant/Sector Lead

Edinburgh, United Kingdom

+44 (0)131 440 5500