While making arrangements for evaluation in your project, you may want to consider what we have come to call ‘evaluation governance’. What do we mean by this term? Broadly speaking, we understand it to encompass the following elements:

(1) people responsible for the evaluation, their roles and relations with other partners;
(2) processes that connect people (e.g. co-creation, communication and coordination mechanisms, planning);
(3) resources available for evaluation.

In the examination of UIA M&E case studies, we asked ourselves a number of questions related to evaluation governance. These may be the same questions that you are currently asking yourself:
Who should conduct the evaluation in my project? How should the evaluator be involved in my project? What skills are necessary to design and implement effective M&E? Should M&E be conducted continuously or at specific points in the project? The answers to these questions will also depend on the evaluation approach that you choose. So we also encourage you to look at the Evaluation approaches section.

The UIA M&E case studies show that in choosing the evaluator and establishing their position in the project, several issues should be considered, including: (1) the required competences and skills of the evaluator; (2) possible concerns over the evaluator’s judgement and objectivity; (3) the evaluator’s relationship with the project partners, i.e. their status as an insider or outsider. Other matters worth examining go beyond the evaluator and concern the wider environment in which they will work. In addition to resources, this includes, for example, the internal project culture, the presence of a learning mindset, and the willingness of the partnership to take the evaluator’s recommendations on board, e.g. about further project implementation. These issues are woven into the lessons collected below.

While an authoritative, one-size-fits-all model cannot be drawn, there are observable advantages to choosing certain arrangements over others in your project.

Lesson #1

The analysed UIA M&E case studies show the importance of designating a person, team, partner or subcontractor responsible for overseeing the evaluation. The case studies had either a designated evaluation focal point (e.g. CURANT, STEAM City, BRIDGE, CALICO, U-RLP, CoRE) or a number of partners tasked with various aspects of the evaluation from project initiation (e.g. B-MINCOME, OASIS). In Curing the Limbo, the evaluation was coordinated by the quality assurance team composed of representatives of all partners. The Utrecht U-RLP project benefited not only from the involvement of its lead evaluator, but also from the presence of an academic Advisory Board. In Vienna CoRE, the subcontractor leading the evaluation effort set up a think tank for this purpose.

The presence of a designated evaluator, evaluation team or partner adds weight to this component in the overall project scheme. As seen in the case studies, a committed evaluator can guarantee that evaluation receives enough attention in complex project environments. This can, for example, take the form of giving impetus to the development of a common approach (e.g. in U-RLP, STEAM City), making sure that enough data is available from the start (e.g. in B-MINCOME, BRIDGE) or ensuring that results are communicated to the partnership and discussed (e.g. in CURANT or CALICO).

“Of course the project partners are focused on the implementation of their activities. […] And we are focused on the impact and monitoring all these activities.” (Source: STEAM City representative)

The managers or implementers will naturally focus on carrying out the activities, so the evaluation will likely not be their top priority. A designated evaluator will be there specifically to set up good research conditions and to learn whether, how and why the project worked, making that knowledge available to others.

“When you’re focused on executing the project, an evaluator can help you to continuously see the bigger picture. And you can only benefit from that.” (Source: BRIDGE representative)

This is not to say that evaluators’ work is or should be fully separate from that of project managers and implementing partners. On the contrary, the experiences of analysed M&E case studies, as discussed in other lessons, suggest that cooperation in evaluation pays off. Yet, some projects saw benefits in separating the function of project execution from that of the evaluation (e.g. B-MINCOME, CoRE, CURANT).

“We need to separate the team that executes the project from the team that is evaluating the results of the project. It does not mean that they do not communicate. In fact, it’s very important that the evaluators’ team were in contact with the executing team […] But the competences, the functions of both parts should be separated.” (Source: B-MINCOME representative)

“We rightly decided […] that we need somebody to – while we implement the activities – have an eye on how we are performing, where we have to adapt […] we need a kind of outside view too to tell us if we’re still in line or if we need to turn at some point and go another way.” (Source: CoRE representative)

The M&E case studies reveal an integrative potential in having a dedicated evaluation partner. This is particularly the case when the evaluation adopts a more participatory approach whereby implementation partners are also involved in evaluation. In the initial phases, an evaluator can assist in bringing together the multitude of perspectives and ideas of what the project tries to accomplish, contributing to the creation of a common vision (e.g. in CURANT, Curing the Limbo, U-RLP, STEAM City, CALICO).

During the later stages, a designated evaluator collects data from all of the project partners, which enables a holistic view of the project and adds a fresh outsider perspective, one that can be valuable for managers throughout the project.

“While in a partnership every partner had some expertise […] nobody had a helicopter view. And that’s what CeMIS had, CeMIS had a helicopter view from all of the partners.” (Source: CURANT representative)

In complex and multidimensional evaluations, where more than one partner is involved in the evaluation, an ‘evaluation integrator’ may be necessary to pull together all of the evaluation’s results. In Barcelona B-MINCOME, the integration of results was undertaken after project completion. It was considered important enough to receive additional financing from the municipality beyond UIA support.

Since Athens Curing the Limbo did not have a clearly designated evaluator from the start, its experience is illustrative of the important role an evaluation focal point can play. The project was implemented by several partners with strong expertise and individual understanding of evaluation.

“But then we realised that we were not measuring the common goal of the programme.” (Source: Curing the Limbo representative)

It took time for the Athens Curing the Limbo partners to arrive at a shared interpretation of the project’s goals and how their achievement could realistically be evaluated. The role of recreating a common framework was undertaken by the quality assurance team composed of representatives of all partners. This was a new task assigned to this team during the project’s implementation.

It is hard to deny the value of a leader in any type of endeavour. This proves true in evaluation efforts in large-scale innovative urban interventions. It comes as no surprise that this role is particularly pronounced when evaluation is distributed across a larger number of project partners and the integrative process proves crucial for generating a final understanding of the whole experience.

Lesson #2

In the analysed UIA M&E case studies, the evaluators were closely involved from the start, whether they were brought into the partnership as partners or subcontractors (see Lesson #1). This early involvement was positively perceived both by partners implementing activities and by the evaluators themselves.

“It’s so important that the university was implicated in the project from the beginning. They are with us, mostly every big meeting that we have. [They] are there to evaluate, monitor.” (Source: CALICO representative)

Engaging evaluators as early as the project development or inception phase creates an opportunity to arrive at a better intervention design, which in turn improves the project’s evaluability. For example, the analysed case studies show that evaluators can help partners to reach, if not a common, then at least a better understanding of their objectives, especially when interventions integrate multiple partners from various sectors (e.g. CURANT, STEAM City, BRIDGE, U-RLP). While evaluators may feel that it is not their role to bring everyone exactly to the same page (e.g. in CURANT), they can certainly add value by bringing ideas closer together if involved from the start. Based on their expertise, evaluators can provide a different perspective on how much and what exactly is achievable within a given project, as well as what seems to fall beyond the project’s reach, setting realistic expectations for results and impact.

“So shaping monitoring and shaping the content of the project activities kind of went hand in hand. […] they [external evaluator] were also reflecting with us on what is realistic to implement, what is realistic to evaluate.” (Source: CoRE representative)

Evaluators can further elicit a better understanding of the causal pathways between specific actions, results and further outcomes. By supporting partners in improving the project design, evaluators help to increase its evaluability. After all, it is only possible to evaluate a project’s effectiveness or impact when its objectives are well defined. We describe the role of evaluators in finding common ground on project objectives when discussing theory-based approaches and the theory of change, specifically in the Evaluation approaches section (see Lesson #5), as this approach seems particularly well suited to eliciting more clarity about the project’s aspirations.

The evaluator’s early participation can further help to increase the project’s evaluability by facilitating the establishment of an appropriate M&E framework (e.g. STEAM City, U-RLP). This means supporting partners in, for example: (1) developing project indicators, especially those related to higher-level outcomes, which can be both time-consuming and challenging (e.g. experiences of CURANT, STEAM City, OASIS); (2) defining the necessary data and its sources (e.g. B-MINCOME, BRIDGE); (3) setting up processes to ensure consistent data collection (e.g. Curing the Limbo). The evaluator’s support in ensuring the collection of relevant, quality data is important considering that data unavailability can be a challenging obstacle in evaluation (e.g. experiences of BRIDGE).
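
To make this more tangible, here is a minimal, purely schematic sketch (in Python) of how an indicator specification could be recorded so that baselines, targets, data sources and collection responsibilities stay explicit and consistent across partners. Every field name, indicator and value below is invented for illustration and is not drawn from any UIA project.

    # A purely schematic sketch of an indicator register for an M&E
    # framework. All classes, fields, indicators and values are
    # hypothetical and invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Indicator:
        name: str          # what is measured
        level: str         # "output", "outcome" or "impact"
        baseline: float    # value at project start
        target: float      # value aimed for by project end
        data_source: str   # where the data will come from
        frequency: str     # how often it is collected
        responsible: str   # partner tasked with collecting it

    indicators = [
        Indicator("Participants completing the training programme",
                  "output", baseline=0, target=120,
                  data_source="partner monitoring records",
                  frequency="monthly", responsible="implementing partner"),
        Indicator("Participants employed six months after completion",
                  "outcome", baseline=0, target=60,
                  data_source="follow-up survey",
                  frequency="twice a year", responsible="evaluation partner"),
    ]

    # A shared, explicit register like this makes gaps visible early:
    # every indicator must name its data source and a responsible partner.
    for ind in indicators:
        print(f"{ind.level}: {ind.name} (baseline {ind.baseline:g}, target {ind.target:g})")

The point of such a register is not the technology but the discipline: no indicator enters the framework without a source and an owner, which is precisely the kind of groundwork an early-involved evaluator can facilitate.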

Finally, early engagement between evaluators and project partners creates an important opportunity to better align the evaluation design with project activities and outcomes (see Lesson #2 in Evaluation approaches), as well as to manage partners’ expectations in terms of evaluation results.

“It is often the case that the involvement of the evaluators comes pretty late, sometimes even when the project has already been implemented, which I think is highly problematic because this creates problems in terms of coordinating the intervention and evaluation design.” (Source: OASIS representative)

Bringing evaluation partners into the project early on affords the evaluators comprehensive insight into the work done from start to finish. This level of knowledge can be hard to recreate for someone who becomes involved at later stages, especially in complex and innovative interventions such as the UIA’s.

“We’ve heard of projects where an evaluator was added to the project halfway through the execution and […] we cannot even consider how difficult that must be.” (Source: BRIDGE representative)

As the UIA M&E case studies show, evaluators’ early and consistent involvement positively influences how other partners perceive the evaluator. This seems particularly important given that the evaluator is someone who, at the end of the day, expresses a judgement about the conducted work. Other partners are likely to treat the evaluator as someone who, essentially, knows what they are talking about when expressing opinions about the project (e.g. STEAM City, CoRE).

"It is important to feel this kind of view from the beginning to the end, and not just [that evaluation] gets done by somebody who makes an evaluation, but who is not so much involved in the main ideas you want to reach with this project.” (Source: CoRE representative)

“[Our evaluator] is talking with each of us, trying to understand […] So she is trying to have a complete view, and then going peer to peer to each partner to get more data.” (Source: STEAM City representative)

During project implementation, as part of a consortium or through other forms of close cooperation, the evaluation partner is more likely to receive:

  • better access to information through established relations with other partners and easier access to target groups (e.g. the experiences of BRIDGE in accessing schools where assistance from partners was necessary);
  • as a result, deeper knowledge of the project and possibly a better understanding of its dynamics;
  • direct access to communication and coordination procedures (e.g. participation in meetings) which allow for feeding evaluation insights into the project itself (e.g. in BRIDGE, CURANT, U-RLP, CALICO).

“Every week we talk [with partners] and we know what the point is, what the development of the work is. […] We are inside of all of the development that the project has done.” (Source: STEAM City representative)

With better access to data, designated evaluators are able to produce more robust results. They can witness how the project unfolds in practice with all of the challenges and obstacles. The insider perspective provides evaluators with a good sense of what is feasible in terms of suggesting changes and improvements. All of these factors can contribute to easier uptake of evaluation results by other project partners.

Lesson #3

Most of the analysed UIA M&E case studies engaged universities (e.g. CURANT, B-MINCOME, CALICO, OASIS, BRIDGE, U-RLP). Some also engaged consultancies with a specific focus (B-MINCOME, CoRE, BRIDGE) or NGOs (B-MINCOME, Curing the Limbo) in their evaluation efforts. Their experiences show that the participation of such evaluators provided access to additional in-depth thematic expertise, an advanced research skillset and prior relevant experience.

The availability of know-how translates into better chances of selecting an evaluation approach suited to complex innovative projects, taking into account ethical, political or practical constraints. In some UIA interventions, in addition to M&E plans, evaluators prepared groundwork documents or manuals which outlined the theoretical understanding of the adopted evaluation approach and the rationale for its selection. For examples, you can consult the ‘Groundwork for evaluation and literature study’ developed by CeMIS for Antwerp CURANT or the ‘Groundwork for evaluation and state-of-play’ developed by the evaluators from Vrije Universiteit Brussel for Brussels CALICO. So, given the multiplicity of approaches, paradigms and research methods available, an experienced evaluator can act as a guide in choosing an approach sensitive enough to answer the evaluation questions.

With theory-based approaches, evaluators with subject matter expertise were able to ground the projects’ theories of change in established science or, indeed, validate them against current science. This was the case for the evaluations of Antwerp CURANT, Rotterdam BRIDGE and Utrecht U-RLP. The need for subject matter expertise was also underlined by the evaluators of Brussels CALICO, who conducted the evaluation within the action research paradigm. The power of an action researcher to substantially steer the project in a specific direction should, in their view, be grounded in adequate substantive knowledge.

Experienced evaluators can also steer the partners through the difficulties of evaluation implementation, explaining the significance of various elements or steps of the research process. For example, in Rotterdam BRIDGE one of the project representatives admitted that the evaluator’s role became more central over time, e.g. in capturing the project’s aspirations.

“They had a very decisive role in explaining to us, continuously, what our project was doing […] what the effect was in the broader picture. They became much more the centre of the project than it was planned in the proposal stage.” (Source: BRIDGE representative)

In Utrecht U-RLP, the evaluator devoted a lot of effort to explaining the procedure and significance of recreating the theory of change to the partners. This process also revealed that, in guiding the partners, an evaluator can function as an educator. As such, in addition to subject matter and research expertise, the evaluator should be able to rely on a strong set of soft skills, including the ability to communicate effectively. These soft skills can certainly help with the evaluator’s main tasks, such as collecting data from other partners and stakeholders and communicating results.

The availability of advanced research expertise enabled the implementation of tailored evaluation designs, making use of a wide array of data collection and analysis methods. Familiar with the long-standing debates about the advantages and disadvantages of data collection methods, experienced evaluators help offset the shortcomings of one with the strengths of another. Indeed, all analysed UIA M&E case studies proposed mixed-method research designs. Where experimental approaches did not prove feasible for practical, ethical or political reasons (i.e. CURANT, U-RLP, BRIDGE), evaluators were able to propose alternative non-experimental approaches.  

The availability of experienced evaluators enables the implementation of complex and multidimensional evaluations that aspire to glimpse the possible impact. For example, the Barcelona B-MINCOME project benefited from the extensive research experience of a number of partners. As a result, it is a rare example of a public policy intervention also designed from the start as a social experiment, involving various additional types of research. The combined expertise allowed partners to evaluate different types of impact (social and economic) at various levels (individual, community, institutional), including through a counterfactual approach. In addition to quantitative data collection and analysis methods, qualitative methods were also employed, including an ethnographic study.

“I think it’s very strategic to have these people that could understand and translate the aim of the project to the real application, in the real world with vulnerable people, with social workers that are not used to this kind of research.” (Source: B-MINCOME representative)

The Utrecht U-RLP project benefited not only from the knowledge and experience of its academic evaluator from University College London, but also from that of the Advisory Board, which involved distinguished scholars and was chaired by a representative of the University of Oxford. The Advisory Board met three times to discuss the methodology, interim findings and the draft final report. With its multidisciplinary, in-depth academic expertise and independence, it was able to provide subject matter expertise, but also to challenge the team by questioning some of its assumptions or exposing biases. Such support can be particularly helpful for evaluators who are brought into the partnership early on and work closely with other partners, becoming strongly involved in the project. The Advisory Board was in a position to guide the researchers in dynamic evaluation circumstances, when initial plans proved unfeasible due to changes external to the project. It was also able to add nuance to the findings.

“[…] to have that external body was very helpful because it really stopped that pro-innovation bias […] We were pushed by our Advisory Board to be much harder in our criticisms. […] having the Advisory Board there was good really to help us push back when we needed to.” (Source: U-RLP representative)

Similarly, in Antwerp CURANT, the evaluator CeMIS was also able to identify potential biases in project partners’ approaches and communicate these findings early enough to make changes.

Evaluators with a long-standing research background in working with various stakeholder groups can draw on their experience to employ techniques facilitating work with more vulnerable populations, such as young children. For example, in Paris OASIS, in order to talk to children as young as five years old, the evaluators used puppets to conduct the survey.

The challenges of research among young children in Paris OASIS underlined the value of complementary expertise at the service of evaluation. As the partner responsible for evaluating social impact was fine-tuning its data collection instruments to children’s capacities, striving for simplicity, it also consulted the partners responsible for climatic impact to make sure that the instruments remained true to the scientific nature of the project (see the Data collection and Horizontal issues sections). In fact, evaluations such as those of Paris OASIS, Barcelona B-MINCOME or Rotterdam BRIDGE, which aim to capture different kinds of impacts (e.g. social, economic or environmental), need to ensure different kinds of expertise, either combined in the capacities of one evaluator or provided by more than one evaluation partner.

Alongside their knowledge and skills, academic partners also contribute their reputation for scientific rigour, which can add credibility to the final evaluation results, especially when the evaluated interventions are innovative and pioneering, and therefore risky. This can create a stronger argument for any follow-up interventions or upscaling.

“It’s really important to have CeMIS on board to have objective results and to see whether what we thought was really right; to have some leverage to sustain the project and to disseminate project results.” (Source: CURANT representative)

Action research

Action research means “research informed by social action and leading to social action. Action is taken to improve practice and the research generates new knowledge about how and why the improvements came about.”

Source: Curing the Limbo Project, Evaluation Handbook (V.3.1), Athens, 2019.

To learn more about this approach, you can consult e.g.:

  • Bradbury, H., The SAGE Handbook of Action Research, SAGE Publications Ltd., 2015.
  • Coghlan, D., Doing Action Research in Your Own Organization, SAGE Publications Ltd., 2014, and the accompanying website with tips and resources.

 

Counterfactual approach

The counterfactual is a hypothetical situation describing “what would have happened had the project never taken place or what otherwise would have been true. For example, if a recent graduate of a labor training program becomes employed, is it a direct result of the program or would that individual have found work anyway? To determine the counterfactual, it is necessary to net out the effect of the interventions from other factors—a somewhat complex task.”

Source: Baker, J.L., Evaluating the Impact of Development Projects on Poverty. A Handbook for Practitioners, The World Bank, 2001.
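
To make the ‘netting out’ arithmetic concrete, here is a minimal, purely illustrative sketch (in Python, with invented numbers) in which a comparison group of similar non-participants stands in for the counterfactual, echoing the labour training example above. Real counterfactual designs, such as B-MINCOME’s, are of course far more rigorous.

    # Illustrative only: invented employment outcomes (1 = employed)
    # after a hypothetical labour training programme.
    treated = [1, 1, 0, 1, 1, 0, 1, 1]   # programme participants
    control = [1, 0, 0, 1, 0, 0, 1, 0]   # similar non-participants, standing in for the counterfactual

    rate_treated = sum(treated) / len(treated)
    rate_control = sum(control) / len(control)

    # Looking at participants alone would credit the programme with the
    # full 0.75 employment rate; subtracting the counterfactual rate
    # nets out what would likely have happened anyway.
    effect = rate_treated - rate_control

    print(f"Employment rate, participants:   {rate_treated:.2f}")   # 0.75
    print(f"Employment rate, counterfactual: {rate_control:.2f}")   # 0.38
    print(f"Estimated programme effect:      {effect:.2f}")         # 0.38

The credibility of such an estimate rests on how well the comparison group approximates what participants would have experienced without the intervention, which is why experimental designs randomise assignment.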

Lesson #4

As evidenced by the analysed UIA M&E case studies, including evaluators in the partnership creates important opportunities (see Lesson #2), but the proximity to the project activities can also present challenges worth considering at the outset.

As in most research with human subjects, evaluators face the challenge of building trust with their target group (see the Data collection and Horizontal issues sections). UIA M&E case studies were no exception, with the issue being indicated e.g. in Antwerp CURANT, Brussels CALICO and Barcelona B-MINCOME. The fact that some of the studied evaluations involved vulnerable populations (e.g. refugees or children), entailed sharing sensitive information or required substantial participation of respondents made the challenge more visible. As a practical response to this challenge, the evaluators in some projects increased their presence during project activities to make sure that they were visible and familiar to their target group (e.g. CURANT, B-MINCOME). This was particularly the case in the first months of the projects.

“So some of the teams at the beginning [were] also involved in the first sessions […]. Just being there, looking at what was going on, taking notes and that’s all, but at least they saw us not only as researchers that were in the laboratory but as if we were persons.” (Source: B-MINCOME representative)

“Just to be part of the project, even if the researcher at first is not the most expert in the field, would have positive effect anyway in terms of trust.” (Source: CALICO representative)

Yet too strong a presence during the activities can lead the beneficiaries to mistake the evaluators for implementing partners. Depending on the project’s specificities and the roles the implementing partners play, this may make the beneficiaries either more or less willing to share information with the evaluator. If the evaluators balance their insider and outsider status within the project, they can try to maximise the benefits, for example those stemming from closer relations with partners, and avoid some traps. The experiences of Antwerp CURANT show that striking a balance between being internal to the project and being recognised as sufficiently independent can be key for building trust with the young adult refugees who participated in the evaluation. It was desirable for refugees to recognise the researchers from CeMIS so that they would freely share sensitive information, yet undesirable to confuse them with, for example, social workers, who had the power to take negative actions towards the refugees in specific circumstances. So, while CeMIS initially maintained a stronger presence during activities, it asserted its independence more at a later stage.

“Sometimes we also went into activities. […] I think we did that more at the beginning, but then the line between being a partner like the other partners and being an outside partner was not so clear for the participants. So in the end we took a bit more distance for them to know that we were really more an outside partner.” (Source: CURANT representative)

In very open and participative evaluation models based on the action research paradigm, such as those of Athens Curing the Limbo or Brussels CALICO, the partners wear a ‘double hat’ of implementers and evaluators. This can pose challenges, such as doubts about their objectivity or, in relation to other stakeholders, about the chances of gathering honest feedback if not enough trust has been established. One way to address these limitations is to develop a true learning culture within the partnership and foster open communication. Another possible solution is to use various data collection methods, including some that guarantee respondents’ anonymity.

From our observations, there is no single right answer when it comes to positioning the evaluator within the partnership. Even in a single project, in some contexts and for some specific research activities more integration may work better, while in others independence will matter more. Either way, weigh your options and choose the best set-up for your circumstances.

Lesson #5

The recommendation to involve the evaluator in the project partnership early on is explored in Lesson #2 and is mirrored here in the recommendation to involve partners in the evaluation. The experiences of the analysed UIA case studies show that it is beneficial to include partners in the evaluation in various ways, at both the design and implementation stages. Most importantly, project partners are the ones who determine what they want to achieve and thus what the evaluation needs to ask about and measure.

“We will only be able to evaluate whatever the city council will design. And to evaluate in a rigorous way, it should be co-created such that they took into consideration the evaluation point of view and we took into consideration what they wanted to do. We didn’t want to answer questions that were not being asked in the first place.” (Source: B-MINCOME representative)

Partners offer distinctive substantive expertise which can help to set up appropriate indicators, baselines and realistic target values, determine information sources (e.g. stakeholders who should be involved in the evaluation), etc.

“The project partners must all be engaged in the monitoring and evaluation process and must help construct the (…) model, because they know the activities, their limitations, constraints or potential and impact of those activities.” (Source: STEAM City representative)

They can be involved in the evaluation in a number of ways, e.g. as providers of information about the project and its context (as in all analysed case studies), as gatekeepers to other respondents (which was important e.g. in BRIDGE), as data collectors (e.g. through monitoring activities), etc. In Antwerp CURANT, the evaluators developed the approach in consultation with the partners but left the actual decision about adopting it to the implementing partners.

“We collected the data to see what the wavelengths were, who has what type of goals and ideas. […], we presented this analysis […] but then we passed it on over to the partners and to the project manager to decide how to go further.” (Source: CURANT representative)

In projects such as Athens Curing the Limbo and Brussels CALICO, which chose action research as their paradigm, the partners essentially conducted the evaluation themselves.

Involvement of partners at each stage strengthens their ownership of the evaluation exercise, builds relations with the evaluator and creates foundations for greater willingness to accept evaluation results (e.g. STEAM City, CURANT).

Lesson #6

The review of UIA M&E case studies showed that in all of them partners worked in a very collaborative, open and learning-oriented manner, building a shared culture which would facilitate both the implementation and evaluation of bold and innovative endeavours.

“All the partners, they really see this as something new, so they can really learn a lot from this project. And they really wanted to learn and to know what will come out of it, what will go well and wrong. […] not only realise the project, but really learn from the project as well.” (Source: CALICO representative)

While this collaborative, creative and learning spirit was sometimes additionally stimulated by plans to continue or upscale the innovative solution later on (e.g. in CALICO, BRIDGE, B-MINCOME), the reviewed case studies also seem to have been influenced by the culture of the UIA itself, a culture very much appreciated in the projects. This shows that the donor can also have an important role to play in fostering a certain type of project mindset.

UIA project partners shared an awareness that implementing innovative solutions would mean venturing into the unknown. The learning-oriented mindset was expressed in shared attentiveness to activities’ performance and to dynamically changing needs, as well as in a willingness to experiment with new solutions and observe their effectiveness.

“We didn’t know right from the beginning how things will go. We were trying and seeing what works, what doesn’t work. There was a deliberate trying of new ways.” (Source: Curing the Limbo representative)

This kind of culture permeated the projects to the extent that partners would sometimes simply see it as natural. This is despite the fact that the UIA projects are partially financed from European funds and led by public authorities, which operate in demanding and frequently rigid legal and policy frameworks.

“On a daily basis in the activities, you realise that needs change. […] It’s a normal thing […] It doesn’t need to say how do you monitor or make the evaluation for this because it’s just like a river, it flows.” (Source: CoRE representative)

In practical terms, collaboration between partners was facilitated by various opportunities to meet, which were emphasised throughout the discussions with project representatives. Steering committees representing all partners, as part of projects’ overall governance, created a regular platform for exchanges of information and feedback. These exchanges were supplemented by various other platforms, such as lower-level meetings, workshops and more one-on-one relations, which were particularly intensive in the initial stages of project and evaluation development. In some projects, even more opportunities were created for the evaluators to participate in cooperation and information exchange. For example, in Brussels CALICO, the evaluator also participated in a communication committee, strategic committee, community care committee, governance committee, other committees and ad hoc working groups. Importantly, these meeting opportunities created the conditions for reflection on what worked, but also on what did not.

“After one year, we had a big partnership meeting, like a one-year evaluation. CeMIS [presented] their results. All the partners could say what they felt went wrong, what went well, if they could see that CeMIS had the same results.” (Source: CURANT representative)

Approaches adopted by projects such as Athens Curing the Limbo or Brussels CALICO, based on the action research paradigm, proved particularly conducive to continuous learning. The action research paradigm almost ‘institutionalises’ the learning mindset, as its very premise is the continuous weaving together of research and action. It embeds feedback and learning loops in monitoring, evaluation and project management. The action researcher, in turn, does not function as an external observer and assessor, but rather as a participant in various activities, such as committee meetings, providing input and advocating for various actions.

“[The evaluator] is giving as well. We are presenting things. We are sharing. If there is a question, we’re not just observing and looking at who is giving answers, then looking at the dynamics, but also participating.” (Source: CALICO representative)

The very hands-on, participatory approach to M&E adopted by Vienna CoRE also allowed the project to adjust quickly. The activities were regularly monitored through SWOT workshops and SWOT reports, which provided immediate insight into their performance, including unmet needs. The results of the SWOT analysis gave rise to immediate modifications and adaptations, such as the introduction of childcare to encourage the participation of women, or the creation of a ‘women’s café’ where women could cook and do crafts while discussing sensitive topics such as family law and divorce, in which – as the analysis showed – they were interested.

In other case studies, the fact that evaluations were planned in a number of phases with reports produced at several points in time allowed the evaluators to feed their interim results into the partnership while the projects were still ongoing (e.g. CURANT, BRIDGE, U-RLP) and created opportunities to introduce changes.

“Actually, it was really nice after one year, you had a part-time report with some findings.” (Source: CURANT representative)

“There were a number of interim reports in which interim results were discussed.” (Source: BRIDGE representative)

In Antwerp CURANT, changes were made to the activities offered when evaluation findings showed a mismatch with the refugees’ needs or expectations. For example, the project introduced shorter, tailor-made educational trajectories for young refugees who were eager to start working as soon as possible. When the evaluation results indicated that the refugees were overburdened with activities or appointments, their trajectories were adapted and made less intense. In Antwerp CURANT, the implementation of changes may have been facilitated by the fact that the evaluation results also often overlapped with the observations of partners themselves.

To complete the picture, and very much in line with Eleanor Roosevelt’s famous observation that freedom makes a huge requirement of every human being, it needs to be said that maintaining a cooperative and learning culture can also be a challenge. Keeping this attitude alive goes against a very human desire to return to the comfort zone, as noted by representatives of Athens Curing the Limbo.

“UIA culture became embedded in the way we worked. This gave us freedom. It helped us adapt and adjust. On the other hand, sometimes it got exhausting, this gave room to endless discussions. When things are unpredictable and you feel insecure, you tend to go back to what you already know how to implement, what you know best. This freedom is not easy, it helps innovation, it is vital to innovation and gave us a lot of room to add expertise, but it was also exhausting.” (Source: Curing the Limbo representative)

The effort required to maintain such an attitude and readiness to challenge and be challenged should be acknowledged and taken into account when assigning resources to innovative projects in general and their evaluations specifically.

Lesson #7

The experiences of UIA M&E case studies confirm that the effort required for an evaluation corresponds to the scale of the evaluated intervention. The large-scale interventions that UIA funded predictably demanded a lot of resources, all the more so considering the projects’ pilot and innovative character. Such projects by definition serve evaluation purposes and seek to maximise learning. Consequently, their evaluation efforts required significant resources.

“Ultimately, the value of the project is in what you learn from it. It was an awful lot of value for the people who were affected, but that passes and then the ultimate value is in what you learn, so evaluation shouldn’t be an afterthought in terms of resources, but more central to that.” (Source: U-RLP representative)

The resources that should be ensured are both financial and human, and will depend on the scope and ambition of the evaluation. If the evaluation aspires to go beyond immediate results and glimpse the early signs of impact, as in the UIA M&E case studies, it will require more elaborate designs. This usually means wider and repeated data collection and analysis, which generates higher costs. Experimental designs in particular, such as that of Barcelona B-MINCOME, are known to require significant financial effort related to repeated large-scale quantitative data collection. If the evaluation additionally aims to provide feedback about the process, the ‘how’ and ‘why’ the project works or does not work, it will demand a stronger qualitative research component and deeper engagement with stakeholders, which is time-intensive for researchers. When further coupled with a limited timeframe, this may create a need for more people to be involved.

“To do interviews and to perform survey you need access to the target groups, so you need introductions, you need commitment, you need to follow up on the commitment and that’s also – time-wise – very costly.” (Source: BRIDGE representative)

Engagement with some groups of stakeholders, especially vulnerable target groups (e.g. refugees or children), will likely require additional resources. It may be necessary to ensure translation and interpretation to carry out research activities (e.g. B-MINCOME, CURANT, U-RLP), or to develop innovative interview techniques (e.g. OASIS). More effort will likely need to be devoted to building trust with respondents and securing their participation.

The people involved in evaluation need to display an appropriate level of expertise, both in terms of research and specific project subject matter (see Lesson #3 under Evaluation governance). The value of that expertise should be reflected in the resources allocated for research.

“It’s not enough just having one researcher, for instance doing some desk research. It’s important to have someone that knows something about [these kinds of projects, and these kinds of monitoring and evaluation activities.” (Source: STEAM City representative)

Further still, each of the people engaged in the evaluation should be allocated enough time for their work, given the difficulty involved in evaluating innovation. The planning should – to the extent possible – create a financial buffer to enable dynamic responses to changes in the project and to allow the evaluation to account for the unexpected.

Apart from the actual working hours, evaluations should be allocated enough calendar time for implementation, including sufficient time for developing the approach and design (see Lesson #3 under Evaluation approaches). The discussions held with representatives of UIA M&E case studies suggest that this may demand several months, even as long as half a year (e.g. U-RLP, B-MINCOME). In Antwerp CURANT, the ‘Groundwork for evaluation and literature study’ developed by CeMIS was published in May 2017, while the official project initiation took place in November 2016. A similar amount of time was necessary for the evaluators from Vrije Universiteit Brussel to develop the ‘Groundwork for evaluation and state-of-play’ for Brussels CALICO.

Our review shows that planning the evaluation over time, preferably in a number of phases, is crucial for the effective use of its findings. Learning loops are important and should be incorporated into the project structure (see Lesson #6 under Evaluation governance). Providing adequate space and opportunity during the project’s lifecycle to reflect on what is happening in implementation helps partners to react if something goes wrong or if anything unexpected happens, as well as to sufficiently capture the processual dimension of the project and of innovation itself.

Additionally, as UIA funding ends at a time when project impact is rarely yet capturable, it would be good practice to incorporate into projects elements of planning for evaluation after the funding ends. This could take the form of developing M&E tools for later use by project partners and target groups (see the development of tools for future evaluation in CALICO).

About this resource

Author: UIA Permanent Secretariat & Ecorys
Type: Report

About UIA

Urban Innovative Actions
Programme/Initiative, 2014-2020

The Urban Innovative Actions (UIA) is a European Union initiative that provided funding to urban areas across Europe to test new and unproven solutions to urban challenges. The initiative had a total ERDF budget of €372 million for 2014-2020.
