
A meaningful and effective evaluation of an innovative project requires robust, well-designed data collection. A combination of different information sources and of methods for collecting and analysing information is necessary. It allows for capturing the broad and often unexpected ways in which the project has evolved and has impacted not only direct beneficiaries but also larger communities, and it facilitates the data triangulation necessary for producing evidence-based recommendations on urban policies.

Triangulation


Triangulation facilitates the validation of data through cross-verification from two or more sources. It tests the consistency of findings obtained through different instruments and increases the chance to control, or at least assess, some of the threats or multiple causes influencing the results. There are different types of triangulation: data (using different sources), investigator (involving different investigators), theory (using different theories for interpretation) and methodological (using different data collection methods).

Source: BetterEvaluation


When it comes to project implementation, everything can be considered some sort of data that can help us to understand the different ways of (not) influencing reality on the ground. The key point in approaching this wealth of data is doing it in an organised, systematic way – making sure that the vital stakeholder groups are consulted and their inputs are accounted for. Similarly, making good use of already available data (including administrative and statistical data) can significantly enhance the evaluation effort. No matter what evaluation approach has been selected, carefully designed and implemented data collection is essential, and it needs to be tailored to the specific aspects of the project, the intended impact and the nature of the activities implemented.

Old research has been so easy in social research when it was linear, when it was quantitative and additive, one variable next to another, then equation and that was it. This is not the way social intervention can be evaluated. We are grappling with something that is circular, not linear.

Source: Curing the Limbo representative

A number of different qualitative and quantitative data collection methods were applied across the evaluations of the UIA M&E case studies. These include in-depth interviews and focus group discussions, ethnographic observations (both qualitative and quantitative), surveys, and analyses of administrative and statistical data. Some projects decided to take advantage of technological solutions such as the Internet of Things or online survey distribution. Importantly, many projects stress the importance of attentively observing project implementation throughout, in order to witness changes as they evolve.

Internet of Things


According to the European Commission’s 2016 Staff Working Document on Advancing the Internet of Things in Europe, “the IoT represents the next step towards digitisation where all objects and people can be interconnected through communication networks, in and across private, public and industrial spaces, and report about their status and/or about the status of the surrounding environment.”

To read more on the EU and the Internet of Things, visit the website of the European Commission


A number of observations were made by representatives of the M&E case studies about the overall selection and application of data collection methods. These observations are presented below.

Lesson #1

Plan for an adequate preparatory or inception phase of the evaluation that takes place before the actual project activities begin. Since capturing initiatives' impacts will often require comparing the situation before (ex-ante) and after (ex-post) the project activities, or comparing two groups (counterfactual), establishing sound baseline data is essential. The experiences of Athens Curing the Limbo and Utrecht U-RLP show that failing to develop and deploy data collection tools before project activities begin makes it difficult to measure and account for change, both at the level of the individual beneficiaries receiving services and across the broader communities.

The baseline was compromised so we were constantly playing catch up from the word go. We would have wanted to have been there at the beginning, exploring what the tenants’ aspirations and hopes were for the project, how much contact they had before with any refugees or asylum seekers, so that we could then build up a picture over the whole project.

Source: U-RLP project representative


At the same time, since Athens Curing the Limbo opted for evaluation based on the action research paradigm (which favours ongoing adjustments to the project implementation based on knowledge accumulated through monitoring and analysis activities), having a shared, commonly agreed understanding of the evaluation’s aim and key concepts appeared more important than methods and data collection planned in detail at the very beginning of the project.

If a city is choosing action research as a method, they cannot have the evaluation planned very accurately in the very beginning. They can have some ideas and discussions but in order to develop sound evaluation, they first need to start implementing activities. This is sort of a strategic acupuncture, where you do the activity first and then try to see what the consequences of this action are.

Source: Curing the Limbo representative

Whatever the content and purpose of the inception or the preparatory phase of evaluation (development of a detailed data collection plan and indicators matrix or establishment of shared direction and guiding principles), it is important to provide strong foundations for the whole process. It is particularly important in evaluations based on theory of change where data collection and analysis should clearly reflect and follow its logic. In any of the approaches selected, establishing the baseline scenario and accounting for the situation prior to the intervention’s launch is essential for discussing the project’s impact at the end of it.
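Where a baseline and a comparison group are both available, the before/after (ex-ante/ex-post) and counterfactual logic described above can be expressed as a simple difference-in-differences calculation. The sketch below is purely illustrative: the indicator, scores and group sizes are invented, and a real evaluation would need a properly constructed comparison group and far larger samples.

```python
# Hypothetical sketch of an ex-ante/ex-post comparison with a comparison
# group (difference-in-differences). All names and numbers are invented.

def mean(xs):
    return sum(xs) / len(xs)

def diff_in_diff(treated_pre, treated_post, control_pre, control_post):
    """Change in the treated group minus change in the comparison group."""
    treated_change = mean(treated_post) - mean(treated_pre)
    control_change = mean(control_post) - mean(control_pre)
    return treated_change - control_change

# Hypothetical wellbeing scores (0-10) collected before and after activities.
treated_pre  = [4.0, 5.0, 3.5, 4.5]
treated_post = [6.0, 6.5, 5.0, 6.5]
control_pre  = [4.2, 4.8, 3.9, 4.6]
control_post = [4.6, 5.1, 4.3, 5.0]

effect = diff_in_diff(treated_pre, treated_post, control_pre, control_post)
# The estimate is only meaningful under the usual DiD assumptions
# (comparable groups, parallel trends); here it is just arithmetic.
print(round(effect, 2))
```

Without the baseline (the `*_pre` lists), only the post-intervention gap between the groups could be computed, which is exactly the "playing catch up" problem the U-RLP representative describes.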

Action research


Action research means “research informed by social action and leading to social action. Action is taken to improve practice and the research generates new knowledge about how and why the improvements came about.”

Source: Curing the Limbo Project, Evaluation Handbook (V.3.1), Athens, 2019.

To learn more about this approach, you can consult e.g.:

  • Bradbury, H., The SAGE Handbook of Action Research, SAGE Publications Ltd., 2015.
  • Coghlan, D., Doing Action Research in Your Own Organization, SAGE Publications Ltd., 2014, and the accompanying website with tips and resources.

Lesson #2

Be aware of the limited time (and often resources) available for data collection. This is particularly important for projects aiming to achieve social change, empower people and improve social cohesion. These processes take a long time, and final rounds of data collection among the beneficiaries may at best reveal some hints or hope that the desired changes will occur. Often, beneficiaries have been exposed to the actual project activities for too short a time to be able to fully reflect on their value and effectiveness. For this reason, data collection instruments (such as surveys or qualitative interviews) should pose questions that reflect the timing of the process, be modest in their expectations and be fine-tuned to the beneficiaries' actual experiences rather than to over-ambitious ideas of impact.

We don’t expect too much of this pre and post data collection because it is too close. It was understood that the actual process of household cohabitation would begin at the very end of the project implementation and consequently data collection about this part would be very limited on the living in the capacities.

Source: CALICO representative

At the same time, while obtaining a large amount of data is ideal and allows for extensive analysis and evaluation conclusions, even in projects with a well-developed evaluation approach the resources are still finite. As such, when designing data collection methods, consciously prioritise what is truly necessary over what can be considered additional and merely helpful. One instrument (such as a survey) can serve different kinds of analysis when designed purposefully: adding one or two extra questions to a survey can go a long way in offering new dimensions of analysis. A good example of this approach can be found in the Barcelona B-MINCOME evaluation, where one survey was constructed in such a way that it constituted the basis for multiple sets of analysis. Strive to get the most out of each data collection tool, especially to avoid burdening beneficiaries with participation in too many research encounters and requests for input.

Curing the Limbo project, credits to Levente Polyak

Lesson #3

UIA projects evolve in dense urban settings, directly or indirectly targeting multiple sets of stakeholders and often impacting communities beyond the defined principal beneficiary groups. This embeddedness of relations and dependencies in urban networks needs to be considered for an effective evaluation of projects' impacts. While evaluation and data collection inevitably require delineating and selecting the stakeholder groups and individuals within them that will be consulted (as well as the quantitative and administrative data that will be used), careful consideration of the various groups in light of the potentially different inputs they can offer enhances the evaluation's chances of capturing multiple impacts.

We have tried to evaluate instructors, tried to evaluate ourselves, our beneficiaries. We tried to see all aspects of the process. In the end, you see the process, the development, the challenges.

Source: Curing the Limbo representative

Several UIA projects have recognised this and consequently planned or adjusted data collection efforts to include different views. For example, while Athens Curing the Limbo primarily targeted refugees and asylum seekers stranded in Athens with the aim of empowering them, one of the project's activity components tackled housing, and the evaluation considered landlords a stakeholder group that ought to be consulted. Similarly, Vienna CoRE aimed primarily at empowering newly arrived refugees, but it equally sought to enhance the city's ability to respond to their needs by building on the resources and approaches developed by informal activist groups. In order to capture the project's impact, not only the refugees but also the activists were included in the evaluation.

Barcelona B-MINCOME set out to investigate the impact that the guaranteed minimum income (combined with other services) might have on the beneficiaries, but it equally set out to evaluate this as a policy delivery model, and this has been reflected in the evaluation design through a dedicated data collection and analysis component. Evaluators of Brussels CALICO faced a challenge stemming from the fact that CALICO is part of a larger area development initiative in which many new social housing units are being erected, a process that will change and affect the neighbourhood. While the original project proposal did not include a neighbourhood perspective in the evaluation, it was added later through extensive research conducted in the neighbourhood around the CALICO facilities. This research did not focus on CALICO itself but on principles of community care and cohesion, providing an abundance of data about possible paths for activities and anticipated results.

We asked the people how they experience it and how they see it, what would they want to be changed or improved? This has been guiding our work on building strategy for community care and cohesion.

Source: CALICO representative

Participative activity during CALICO days, credits to Brussels Capital Region

The experiences of Paris OASIS show that even when implementing (and evaluating) projects in defined settings (such as schools), careful deconstruction of the communities involved might improve the ability to capture a project's multiple effects. Paris OASIS set out to transform selected schoolyards in Paris into cool, green islands filled with plants and natural elements, reducing noise and, hopefully, positively influencing pupils' behaviour. Data collection for the evaluation targeted primarily pupils (through surveys and ethnographic observations), and to some extent teachers, with the aim of establishing how spending breaks in a greener environment has influenced children's actions in the yards. An evaluation constructed in this way can reveal a significant positive impact of the project on pupils' interactions in a green environment, but it may also miss the positive impact that exposure to greenery (and reduced temperatures in the summer) might have on school staff (especially the teachers) and the quality of their work. Furthermore, structured consultations with teachers and parents could potentially reveal the project's impact on children's behaviour beyond the schoolyard (such as an improved ability to focus during lessons rather than only during breaks). As such, projects targeting fairly defined and stable communities (with a low risk of dropouts and clearly delineated boundaries) can benefit from looking at the potential impact of the activities on various members of these groups, rather than selecting only one of them (in this case children) or one aspect of their behaviour (such as actions during breaks). Similarly, reflections on the evaluation in Antwerp CURANT point to the potential benefits of consulting, or meaningfully accounting for, beneficiaries' family members, even those located abroad, as factors possibly impacting project results.

The evaluation was central to the project, so it was basically in the very submission that it was already presented, some aspects of the evaluation.

Source: B-MINCOME representative

While resources for evaluation (and in particular data collection) are finite and priorities need to be decided, meaning some stakeholder or data groups will be left out, it is worth remembering that UIA projects are by definition innovative pilots, implemented with the objective of testing new policy instruments and solutions. As such, careful examination of their impact on multiple elements of the environment (human and physical) is essential. Broadening the spectrum of evaluation can help to capture unanticipated effects of the projects, including potentially negative ones.

Informed decisions about which stakeholder groups or indicators are chosen as evaluation inputs, for what purpose, and what the omission of others might mean for the results are crucial for sound research. It is useful to begin designing an evaluation with extensive mapping of all of the possible stakeholders and sources of information, carefully delineating differences within similar groups (accounting for the genders, ethnic or cultural backgrounds and age groups represented) both inside the project (direct beneficiaries of the activities and stakeholders immediately involved) and outside it. Once extensive mapping is complete, showing webs of connections, possible influences and dependencies, evaluators may move on to choosing who will be the subject of evaluation and data collection, and in what way.

Lesson #4

Change may be necessary in the course of M&E to better reflect the situation on the ground and fine-tune your approach based on new knowledge of the beneficiaries or the changing aspects of the intervention. Projects are dynamic and may be modified due to better understanding of the problem tackled, or the external circumstances (such as the COVID-19 pandemic, social unrest or protests). Ethnographic observations might be replaced with interviews, and interviews with online surveys. In order to truly capture the impact, you may decide that initially planned quantitative data collection will be better done through a qualitative approach. This was for instance the case in Utrecht U-RLP, where the evaluation team made use of pre-existing quantitative baseline assessments completed by asylum seekers. The initial assumption was that these would be later replicated for evaluation purposes to offer a comparison with the baseline. However, repeating the assessment proved impossible as it was judged that the asylum seekers perceived the surveys as a form of test and were likely to present what they considered desired answers. This raised both ethical and data validity questions, so a decision was made to pursue a qualitative approach through individual interviews.

In Vienna CoRE, evaluators soon realised that day-to-day activities (and informal engagements between the beneficiaries and the social workers) offered a wealth of possible data for evaluation that was initially not planned to be collected. Journaling was introduced as an additional data collection method so that social workers could gather this data in a structured way. This in turn revealed new needs among the beneficiaries, such as information about family law, divorce and female health, and the project was able to offer corresponding services.

In Rotterdam BRIDGE, the initially planned counterfactual research proved impossible for political reasons and was replaced with extensive analysis of statistical data and administrative records, supported by interviews. It is wiser to adjust a data collection method to reflect the actual possibilities and needs than to stick to the original plan regardless of the situation on the ground. UIA projects are innovative and include a certain level of the unknown, so even the best data collection design can require revision. In fact, adjusting the data collection approach in Rotterdam BRIDGE proved beneficial in the sense that it shifted the focus from assessing impact to understanding the intervention and the targeted group.

It started as an assessment tool but because of the focus on interviews and surveys, it was a learning experience for us. For the first time, due to availability of the microdata, we learnt a lot about the target group, the students, that we didn’t know before. This was not the goal of evaluation but a surprising added value.

Source: BRIDGE representative

BRIDGE

Lesson #5

While you are free to opt for evaluation based only on qualitative data or exclusively quantitative inputs, mixing both will strengthen your case and enhance the validity of your evaluation conclusions. Data collection can start with gathering large-scale quantitative information and analysis to identify certain trends, patterns or gaps, and then exploring these in detail through qualitative, in-depth interviews (possibly group interviews) or ethnography.

If you think more qualitatively, every time you go back, you think and you reframe. When you think quantitatively you start with indicators right from the first day. We did a mixed model where we developed our indicators as we were implementing activities, seeing what indicators would be the most reflective of the processes.

Source: Curing the Limbo representative

Curing the Limbo Project: Kolonos neighbourhood, credits to Levente Polyak

You can also begin with in-depth analyses of a limited number of cases and then move to quantitative data collection to verify if the findings from this small group actually apply to a bigger sample. Whatever sequencing you choose, having different kinds of inputs will be beneficial as long as they are combined mindfully.

In some kinds of projects, we only focus on qualitative results, but we do not have the power of big numbers. But, if we go to the other extreme and we have only the quantitative data, it is very difficult to explain why something happened the way it did.

Source: B-MINCOME representative

In fact, the majority of reviewed projects opted for a combination of quantitative and qualitative data collection methods, arguing that this approach increased their ability to account for multiple impacts of their activities and to notice unexpected effects of the projects. The experiences of Aveiro Steam City, which placed emphasis on quantitative data collection to demonstrate the project's results, show that this may run the risk of overlooking important elements of the project process. By failing to account for the process of obtaining the effects, it might be very difficult to understand the 'who', 'how' and 'why' of the project in the final evaluation and, consequently, to make recommendations for upscaling, adjusting or replicating the initiative. However, the Steam City evaluators foresaw the possibility of including qualitative data collection methods in their overall research design, such as individual or group interviews with users of the activities, trainees, professors or employers, if this was necessitated by the results obtained or could help answer research questions.

Ethnography


“Ethnography is the study of social interactions, behaviours, and perceptions that occur within groups, teams, organisations, and communities. (…) The central aim of ethnography is to provide rich, holistic insights into people's views and actions, as well as the nature (that is, sights, sounds) of the location they inhabit, through the collection of detailed observations and interviews.”

Source: Reeves, S., Kuper, A., Hodges, B. (2008), “Qualitative research: Qualitative research methodologies: Ethnography,” BMJ (online), 337(7668):a1020, DOI:10.1136/bmj.a1020.

Lesson #6

Elaborating a standardised document can be particularly useful when a number of stakeholders are involved in data collection. Such a document can take the form of an evaluation manual (Curing the Limbo), a core text (the groundwork for evaluation developed in CURANT and CALICO), an M&E plan (BRIDGE) or a protocol (OASIS). It can help to standardise approaches, ensuring a common understanding of definitions and of the steps to be taken, and improving the comparability of the data collected. This can be particularly useful in action research, where knowledge is generated in the midst of implementing activities, often co-produced by the researchers and beneficiaries.


Having such documents can also reduce the chances of unethical behaviour. Additionally, providing researchers with an overview of how the information they are collecting fits into the overall evaluation framework can help them to make informed decisions in the midst of data collection, if necessary. This can help with providing additional guidance to respondents or identifying alternative sources of information. Most importantly, understanding the general framework will be useful in keeping data collectors and researchers alert to any pieces of information that may not be direct answers to the questions but can instead reveal something new and unintentional about the project impact.

Collecting and analysing qualitative data is the core of any evaluation effort. Often it is preferred to quantitative data analysis, especially in the case of projects strictly focused on social cohesion, empowerment of vulnerable groups or broadly understood social change. In UIA projects, it can also compensate for a lack of reliable quantitative data and for possible changes to the implementation of activities.

Due to the complex and innovative project design, ever-shifting implementation, the long-lasting flow of the migrants and the relatively small scale of the project it was most appropriate to rely on qualitative methods because these lead to an in-depth understanding while also allowing for flexibility.

Source: Curing the Limbo evaluation manual

Lesson #7

Conducting qualitative interviews with various stakeholder groups or ethnographic observations of project activities (and community environment) is useful to explain how and why certain changes have been or have not been achieved by the project. In particular, interviews or focus group discussions (group interviews including discussions between participants on a given topic) give room for the beneficiaries to reflect on their experiences and provide insights about the project, its perceived usefulness and effectiveness.

We had a huge dropout rate as you do in this case of interventions. It depends on what you focus on, when you focus on the drop out, you haven’t seen this develop, but when you focus on those who stayed, you see that this has developed.

Source: Curing the Limbo representative

This individualised approach is also helpful in projects facing high dropout rates (such as those targeting refugees or people generally on the move), where ex-ante and ex-post data collection may not include the same group of stakeholders (a challenge faced by Curing the Limbo and CURANT). Qualitative interviews can take a number of different forms, including semi-structured conversations, in-depth interviews and focus group discussions (group interviews). These methods can be mixed and matched depending on the current needs of the data collection and the available time and resources.

In-depth interviews are the most resource-demanding as they take the longest and require time commitments from both the interviewee and interviewer (and possibly the interpreter when needed). To succeed, good rapport between the researchers and respondents is necessary so that respondents feel comfortable opening up about their lives and experiences. Because of this, they may not be the best option for the initial round of data collection but might work very well in the last stages when beneficiaries are familiar with the project and evaluation team. In-depth interviews are particularly useful to gather data in projects aimed at empowerment and substantial improvement of the lives of individuals. Since such interview settings may lead to revealing personal information, including painful or difficult information, they require very skilled and experienced facilitation as well as an adequate level of interpretation to convey nuanced questions and responses.

Semi-structured interviews are a good data collection method when less time is available and certain comparability of information collected is needed. Contrary to structured interviews (which are in fact a quantitative data collection method, a form of oral survey) and in-depth interviews (which are guided by the respondents’ specific experiences), this form includes a set of open-ended questions, offering structure to the conversation but allowing for discussion and elaboration on issues deemed more relevant than others.

Focus group discussions (FGDs) or group interviews allow for gathering inputs from a number of respondents at the same time. They can be useful both for consulting beneficiaries (for instance in Steam City) and for consulting project implementing partners, as done in Brussels CALICO. In such a scenario, FGDs increase the collaborative character of the project implementation. They take the form of guided or facilitated discussions in which the interviewer poses the same questions to the whole group. An advantage of this method is that respondents might feel encouraged or inspired by the answers or inputs provided by others in the group. The interview might include stakeholders representing the same group in the project, or it might combine people occupying different roles. The latter has the advantage of confronting opinions and discussing the issues around them. Group interviews are not the best choice for data collection on personal topics, where participants might not feel comfortable discussing private matters in the presence of others.

When possible, qualitative data collection should have a longitudinal character, i.e. involve repeated data collection with the same informants over an extended period. This was the approach taken by Antwerp CURANT where three rounds of in-depth interviews were conducted during three specific points of the beneficiaries’ trajectory in the project: when they began involvement in the project, after one year in the project and at the end. Repeating the in-depth interviews three times allowed changes that occurred (or did not take place) in the lives of beneficiaries to be accounted for, discussed and analysed in detail. Barcelona B-MINCOME opted for semi-structured interviews as part of its qualitative data collection, some of which were also repeated. These were combined with ethnographic observations.
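A longitudinal design such as CURANT's depends on knowing which informants actually took part in every round. A minimal sketch, with invented participant IDs, of tracking panel retention and dropout across three interview waves:

```python
# Hypothetical participant IDs per interview wave; not real project data.
wave1 = {"p01", "p02", "p03", "p04", "p05", "p06"}  # start of involvement
wave2 = {"p01", "p02", "p04", "p05"}                # after one year
wave3 = {"p01", "p04", "p05"}                       # end of the project

full_panel = wave1 & wave2 & wave3    # informants interviewed in all rounds
dropouts   = wave1 - (wave2 | wave3)  # never seen again after the baseline

print(sorted(full_panel))  # ['p01', 'p04', 'p05']
print(sorted(dropouts))    # ['p03', 'p06']
```

Only the `full_panel` group supports a within-person comparison across all three points of the trajectory; the dropout group is where the individualised qualitative approach mentioned earlier becomes essential.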

pic 3
CURANT project, credits to Ana Izamska

Ethnography is useful in any projects aimed at empowering individuals and communities and fostering social cohesion. As such, it was an important element of the data collection in Barcelona B-MINCOME where ethnographers not only observed project activities but also visited households and neighbourhoods involved in the project to see how relationships changed. The experiences of Athens Curing the Limbo additionally show that ethnographic observations of the project activities can reveal power relations within the beneficiary groups, making analysis of the project outputs more nuanced. The project had an important community-building component where refugees and local residents were expected to develop and implement shared projects. Focusing on the end results only, without attentive ethnographic observations, would reduce evaluators’ abilities to account for and understand how the dynamics between the groups evolved or if the processes were truly democratic and empowering.

Lesson #8

Several UIA M&E case studies highlighted the challenges associated with interviewing beneficiaries, especially since the topics covered in the sessions often reflected very personal and potentially painful experiences. Reaching the respondents and ensuring they feel comfortable speaking openly about their lives requires researchers to establish good levels of trust. Brussels CALICO evaluators worked on this by being present at different project activities before entering interview situations and making themselves known to the beneficiaries. Vienna CoRE (addressing the needs of refugees) relied on the presence of social workers during data collection, as well as native speakers, to mitigate and respond well to any difficult reactions that could be provoked by the interview questions. Social workers are generally considered allies of researchers in projects involving vulnerable populations, as they know the beneficiaries well and have established different levels of trust. They can offer significant support in the context of research and can themselves be valuable sources of information about project activities, as long as the beneficiaries' right to privacy is respected. Interviews should be conducted in a safe environment, ensuring privacy, ideally in places familiar to the respondents.

Researchers really need to work on building trust with refugees because they are not going to talk about being a migrant and difficulties [associated] to just a stranger. (…) To have less biased answers and less socially desirable answers, it was good to build trust relationship.

Source: CURANT representative

Going beyond interviews and engaging participants in games and exercises can reveal data that would otherwise be omitted. For instance, Antwerp CURANT used stickers to help the participants map out their social networks. First, respondents were asked to write the names of all of the people that are important to them on stickers. Second, respondents were asked several questions about all of the names written down (e.g. age, sex, place of residence, ethnic background, communication language and frequency of contact). Finally, they were asked to position them in a circle diagram containing three concentric circles. The middle point of the circle diagram represented the respondents themselves. In the inner circle (C1), they had to place the individuals who ‘are most close to them’; in the second circle (C2) ‘those that are not quite as close but are still very important’; and in the third circle (C3) those that ‘are not quite as close, but still important’. It was emphasised that these people can live anywhere, in Belgium or elsewhere. As the participating refugees were unaccompanied, this exercise was very demanding for some of the youngsters. Some did not want to do this a second time. Even after discussing the relevance of the exercise multiple times (to identify shifts in their network), certain youngsters expressed frustration and said: “you already have this information, I don’t want to do it again”. The researcher showed understanding and skipped this method when she saw that it was hard for the youngsters.
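The circle-diagram exercise produces structured network data that can be compared across waves. The sketch below illustrates one way such data might be represented and how shifts between two rounds of the exercise could be identified; all names, attributes and circle placements are invented, not taken from CURANT.

```python
# Invented example of circle-diagram data from two waves of the exercise.
# "circle" follows the C1-C3 labels described in the text.
alters_wave1 = {
    "Amir":  {"circle": "C1", "residence": "Belgium"},
    "Leila": {"circle": "C2", "residence": "abroad"},
    "Omar":  {"circle": "C3", "residence": "Belgium"},
}
alters_wave2 = {
    "Amir":  {"circle": "C1", "residence": "Belgium"},
    "Leila": {"circle": "C1", "residence": "abroad"},   # moved closer
    "Sara":  {"circle": "C2", "residence": "Belgium"},  # new contact
}

# Identify shifts in the network between the two waves.
moved = {name for name in alters_wave1 if name in alters_wave2
         and alters_wave1[name]["circle"] != alters_wave2[name]["circle"]}
added = set(alters_wave2) - set(alters_wave1)
lost  = set(alters_wave1) - set(alters_wave2)

print(moved, added, lost)  # {'Leila'} {'Sara'} {'Omar'}
```

This kind of comparison is only possible when the exercise is repeated with the same respondent, which is precisely what made its repetition (however burdensome for the youngsters) analytically valuable.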

pic 4
CURANT project, credits to Ramona Fernandez

Lastly, it goes without saying that data collection amongst vulnerable groups will strongly benefit from well-trained and ideally very experienced researchers (see Lesson #3 under Evaluation governance). While this alone will not mitigate all of the possible challenges and risks associated with encouraging beneficiaries to narrate their often difficult experiences (including of violence, displacement and loss), skilled interviewers and facilitators will be better prepared to deal with unexpected turns in the sessions.

Lesson #9

As a number of the UIA M&E case studies target migrants, a lack of fluency in the host country language posed a set of challenges to data collection and required adequate approaches. To address this, interpreters were included in the process to facilitate communication. Interpretation affects interviews in multiple ways, from the practical aspects (it makes sessions longer by default and limits the number of issues and questions that could otherwise be addressed) to the fact that respondents’ answers are ‘filtered’ and interpreted by a third person (before being further filtered and interpreted by the researchers). As such, respondents do not have full control over their own words, which can be frustrating and can alter their original meaning.

For instance, in Antwerp CURANT, the youngsters were recently arrived refugees whose Dutch language skills were very limited at the beginning, which caused difficulties in conducting the interviews. Most youngsters responded very briefly. When an interpreter was invited to translate, new issues arose. Sometimes the researcher questioned whether the interpreter was translating everything. Occasionally the interviewee and the interpreter had long chats, while the answer the interpreter translated was very brief. The longer the youngsters were involved in the project, the more their language skills improved. The researcher conducting second and third interviews with the youngsters was amazed by their improved Dutch language skills.

Importantly, even when respondents have language competency strong enough to participate in interviews without interpreters, they may still face the frustration of not being able to express themselves fully and in a nuanced way. While these challenges are inevitable, accounting for them when designing interview scenarios is helpful. This can affect the number of questions planned and their complexity (favouring simple questions), and extra care should be taken that respondents feel safe and comfortable in the presence of an interpreter. Vienna CoRE invited refugees who had settled in Vienna several years earlier to help review the data collection questions so that they would be linguistically correct but also culturally sensitive and reflective of the situation faced by the project beneficiaries.

In evaluation it’s not always just about what to ask but especially also how to ask it and not only language-wise, but also with respect to all the connotations that may come along with the way you ask a question.

Source: CoRE representative

pic
CoRE project, credits to FSH Romesh Phoenix

Using interpreters who are otherwise employed in the project activities can be helpful, as it will enhance beneficiaries’ trust. Based on their experiences working with interpreters, evaluators of Utrecht U-RLP suggest that employing community researchers could be very beneficial in addressing the language issue.

pic
BRIDGE project

Lesson #10

While ethnography is primarily associated with researchers embedded in communities for a long time, it can actually take many forms useful for understanding the impact of UIA projects. Generally speaking, ethnography is used to study patterns of interaction and behaviour of defined groups, allowing for long-term observation of their natural conduct. It traditionally requires the ethnographer to live among the observed group, ideally becoming part of the community life. In the context of UIA projects, with their limited timeframes, such extensive ethnography does not seem feasible (although it could produce very interesting findings when applied). However, using ethnographic elements such as careful observation of group interactions during project activities, visits to beneficiaries’ households or quantitative observations can enhance the evaluation material significantly. Barcelona B-MINCOME carried out an extensive ethnographic study that explored and analysed in depth the life experiences of a group of neighbours living in three representative neighbourhoods of the ten included in the project.

For the perspective of deciding what to measure in this population, the ethnographic research was a very important tool for us. […] Separating the ethnographic research from the quantitative analysis was important for us because this could inform part of the survey. So: what should we ask? What was not necessary to ask? What were the common problems of these families? That was part of the first half of the ethnographic research.

 Source: B-MINCOME representative

pic
B-MINCOME project

Beyond helping to identify the impact of the project (the ethnography was conducted in both the beneficiary group and the control group), it helped to gain a good sense of the experiences and context in which the target population lives, further informing the quantitative data collection framework. Paris OASIS opted for two waves of quantitative observations of the patterns of interactions in the selected schoolyards. It carried out the activity using an observation grid (focusing on a number of elements regarded as relevant, e.g. gender and age composition of the groups, interactions with the natural environment, cooperative versus conflict dynamics, and interactions of educators with children). Several projects applied observation to the implemented activities (such as group meetings in Vienna CoRE and project work carried out by the beneficiaries in Athens Curing the Limbo).
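
As a rough sketch of how observation-grid entries could be coded for analysis, the Python snippet below uses invented fields that loosely follow the elements listed above (group composition, interaction with nature, group dynamics); it is not the actual OASIS grid.

```python
# Hypothetical coding of one observation-grid entry; the fields are
# illustrative and do not reproduce the OASIS instrument.
observation = {
    "schoolyard": "site-1",
    "wave": 1,
    "group_size": 6,
    "girls": 3,
    "boys": 3,
    "age_range": "6-8",
    "nature_interaction": True,
    "dynamic": "cooperative",  # vs. "conflict"
}

def share_girls(obs):
    """Gender composition of the observed group."""
    return obs["girls"] / obs["group_size"]

print(share_girls(observation))  # 0.5
```

Coding each observed episode as a structured record like this is what allows the two waves of observations to be compared quantitatively.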

Lesson #11

An examination of the evaluations in the selected UIA projects revealed that the people responsible for this task were well aware that meaningful research on social change requires various qualitative data inputs and often relies heavily on implementing actors observing the activities. However, some of them also grappled with how best to represent these qualitative inputs (often collected in an unstructured way) in the final evaluation, and how to make sense of this data.

To measure scientifically, you can only have a few variables and you are forced to simplify so many things, that my interest is gone. I prefer to write or read a beautiful story, I don't need a statistic, it doesn't ring a bell.

Source: CALICO project partner, ‘Care and Living in Community, CALICO. Groundwork for evaluation and state-of-play’ report

A lot of the knowledge about the project’s impact came from ‘being part of it’, especially when the evaluation was largely implemented by the people also involved in the delivery of services. Some projects structured this knowledge through diaries filled out periodically by people implementing activities (such as teachers or teacher supervisors in Curing the Limbo). Others employed extensive collection of feedback from the beneficiaries (Vienna CoRE included oral and written feedback as well as feedback groups). The Athens Curing the Limbo approach highlights the importance of carefully examining the products created by the beneficiaries, their quality and the messages they may convey.

Collecting significant amounts of quantitative data and comparing them according to well-established criteria is one of the essential aspects of evaluating large-scale projects. It allows hypotheses put forward by the projects to be tested, and the potential impact on people or the environment to be examined at a broader scope and on various levels. Developing a comprehensive quantitative data collection and analysis system is primarily necessary in the case of projects characterised by a strong financial or economic component, such as Barcelona B-MINCOME. Measuring impact through numeric data is also crucial for interventions aimed at countering climate change, where applying sensors and monitoring microclimate improvements constitutes the basis for accounting for activities’ impacts (as was the case in OASIS). Applying technology to gather large quantities of data about population movement patterns is helpful in establishing initiatives’ impacts on people’s transport behaviour, as demonstrated by Szeged SAS-Mob. Lastly, as discussed earlier in the text, combining quantitative and qualitative data strongly enhances the evaluation framework in projects that otherwise focus on the qualitative approach or deal with target groups at risk of high dropout rates.
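
As a minimal illustration of the kind of comparison such quantitative systems enable (the numbers below are invented, not project data), a treatment–control contrast can be reduced to a difference in group means on a measured outcome:

```python
import statistics

# Illustrative outcome values for a treatment group and a control
# group; real evaluations would use far larger samples and formal
# statistical tests on top of this simple contrast.
treatment = [12.1, 13.4, 11.8, 14.0, 12.9]
control   = [10.2, 11.1, 10.8, 11.5, 10.4]

# Difference in group means: a first, crude estimate of the effect.
diff = statistics.mean(treatment) - statistics.mean(control)
print(round(diff, 2))  # 2.04
```

In practice such a contrast is only meaningful when the groups are comparable, which is exactly what the experimental and quasi-experimental designs discussed in this resource aim to ensure.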

Lesson #12

Barcelona B-MINCOME is an example of a project that made extensive use of administrative records (social security data, housing records, educational information and economic development data) from several agencies and sources. This data was collected to select the project beneficiaries.

The administrative data sometimes have a lag between what you know from the data and what is actually happening with the families. We were identifying families as maybe vulnerable that were no longer vulnerable. You need to check, before you actually allocate treatment of this type of project, whether the person is actually eligible or not.

Source: B-MINCOME representative

While a substantial amount of data was collected, the project quickly realised that the records may not always reflect the actual and current position of the target group. As such, the data had to be carefully examined and verified to make sure that the beneficiaries (and the control group) met the criteria of the project.

Rotterdam BRIDGE, another project making substantial use of administrative data, faced a challenge in using it effectively. While the evaluators worked with a wealth of administrative data on the students, this information could not be assigned to individual project beneficiaries, so as to protect their privacy. In some cases, the administrative data was disaggregated to the school level as the closest basis for evaluating the intervention’s impact (comparing schools participating in the intervention with schools not participating).
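
A privacy-preserving aggregation of this kind can be sketched as follows (the student records, schools and scores are hypothetical): individual records are grouped, and only school-level summaries are reported and compared.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical student-level records: outcomes may only be reported
# at school level, so we aggregate before comparing participating
# and non-participating schools.
records = [
    {"school": "A", "participates": True,  "score": 7.5},
    {"school": "A", "participates": True,  "score": 6.0},
    {"school": "B", "participates": False, "score": 5.5},
    {"school": "B", "participates": False, "score": 6.5},
]

by_school = defaultdict(list)
for r in records:
    by_school[r["school"]].append(r["score"])

# Only these aggregates, never the individual rows, leave the analysis.
school_means = {s: mean(scores) for s, scores in by_school.items()}
print(school_means)  # {'A': 6.75, 'B': 6.0}
```

The trade-off is visible even in this toy example: aggregation protects individuals but leaves the evaluator comparing schools rather than beneficiaries.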

Experimental and quasi-experimental design

Text

Experimental and quasi-experimental designs aim to test causal hypotheses, e.g. that a given change resulted from the project. Experiments differ from quasi-experimental designs in the way subjects are assigned to treatment. In true experiments, subjects are assigned to treatment randomly, e.g. by a roll of a die or a lottery, to eliminate selection bias. After the process of random selection, the selected groups are subjected to identical environmental conditions while being exposed to different treatments (in treatment groups) or no treatment (in the control group). A quasi-experimental design lacks random assignment. Instead, assignment is done by means of self-selection (i.e. participants choose treatment themselves), administrator selection (e.g. by policymakers, teachers, etc.), or both.
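
Random assignment, the step that distinguishes a true experiment, can be sketched minimally as follows (the subject labels and the even split are illustrative; real trials often use stratified or blocked randomisation):

```python
import random

def random_assignment(subjects, seed=42):
    """Randomly split subjects into treatment and control groups.

    A fixed seed makes the assignment reproducible for documentation;
    it does not change the fact that assignment is random with
    respect to subject characteristics, which is what eliminates
    selection bias.
    """
    rng = random.Random(seed)
    shuffled = list(subjects)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

subjects = [f"S{i}" for i in range(10)]
treatment, control = random_assignment(subjects)
print(len(treatment), len(control))  # 5 5
```

In a quasi-experimental design this shuffle is replaced by self-selection or administrator selection, which is precisely why comparability of the groups then has to be argued rather than assumed.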

To learn more about this approach, you can consult e.g.:

pic
CoRE project, credits to Romesh Phoenix

Lesson #13

Surveys provide a good overview of the project’s starting point in terms of the beneficiaries’ characteristics, the local population’s views and opinions, or general needs and expectations, and as such are a valuable data collection instrument for any type of project. Compared to qualitative interviews (which allow for more extensive communication between the researcher and the respondent, providing room to explain the questions, context or purpose, for instance), surveys in principle should be completed without assistance.

pic5
CALICO project, credits to Brussels Capital Region

 The questionnaires more than just have the pre and post to test the impact on quality of life. I also believe the added value of the questionnaire for example at this moment is […] to get to know the residents. So the partners now have information on what is actually the population of CALICO building.

Source: CALICO representative

Surveys can be distributed electronically, over the phone and in paper form. In the case of projects targeting vulnerable groups or contested issues, however, designing and implementing surveys requires extra considerations.

Lesson #14

When deciding what form the survey should be administered in, consider whether the medium is adequate for the target group. Sometimes this means that, in practical terms, implementation of the survey will take more time and resources than initially planned. In the case of Barcelona B-MINCOME, the project surveys targeted treatment and control groups comprised of families struggling with poverty and often having limited knowledge of Catalan or Spanish. The baseline data collection was conducted through a computer-assisted telephone survey, while the two follow-up surveys were partially conducted in person, as a solution to the language difficulties experienced by the respondents. Additionally, some families that could not be contacted by phone were interviewed at social services when they showed up.

Language was also an issue in the surveys implemented by Antwerp CURANT. Upon entering the project, the youngsters had very limited Dutch/English language skills. Hence, the questionnaires were translated into their native languages (Arabic, Pashtu, Dari, Kurdish, Tigrinya and Somali). During the baseline questionnaire, an interpreter was also present to assist beneficiaries with responding, and this revealed some difficulties. For the interpreter it was, for instance, difficult to translate taken-for-granted research terms (such as the Likert scale: strongly disagree–strongly agree) in a way that was clear to the respondents. The interpreter therefore explained the scale with the following categories: -- - +/- + ++. This experience showed that no research wording or instruments should be taken for granted.
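
When such a symbol scale is used in the field, the responses still need to be mapped back onto a numeric Likert coding before analysis. A minimal sketch (the 1–5 coding and the example responses are illustrative, not CURANT data):

```python
# Mapping the symbol scale the interpreter used onto a conventional
# 1-5 Likert coding (1 = strongly disagree ... 5 = strongly agree);
# the coding and responses below are illustrative.
SYMBOL_TO_SCORE = {"--": 1, "-": 2, "+/-": 3, "+": 4, "++": 5}

responses = ["+", "++", "+/-", "-", "+"]
scores = [SYMBOL_TO_SCORE[r] for r in responses]

print(scores)                     # [4, 5, 3, 2, 4]
print(sum(scores) / len(scores))  # 3.6
```

Keeping the field-adapted wording and the analytical coding explicitly linked like this helps ensure that translation adjustments do not silently change what is being measured.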

The experiences of Utrecht U-RLP show that care is needed even when language fluency is not an issue. The project’s evaluation included grasping the local community’s views on, and approaches to, the issue of refugee placement in the area, which was a contentious theme.

Everything took way longer because of the sensitivity of the issue that we were looking at. We weren’t asking whether people wanted to have more parks in the area but what they thought about asylum seekers, which had been an inflammatory issue not so long before. It was incredibly important to get the right approach, thinking through the questions.

Source: U-RLP representative

As one of the intervention’s aspects was social cohesion, the project evaluation team invested additional time in developing survey questions that would be relevant but not antagonising to the local community. The two surveys were conducted face-to-face rather than via post as planned, to reduce the risk of the survey being hijacked and manipulated by groups hostile towards the asylum seeker centre in the neighbourhood.

Paris OASIS faced the challenge of implementing the survey amongst pre-school aged children, a group that could not read or write and was not always eager to interact freely with unfamiliar adults. The project evaluation team decided to employ two puppets in a simple puppet show scenario: the interviewer made a claim about the environment and the two puppets either agreed or disagreed with what was being said. The children then had to say which of the two they agreed with. This adjustment allowed for interaction with the children but also required careful preparation. It built on a lot of pre-testing (also in other studies), which included, for instance, checking whether the colour of the puppets influenced the answers of the children, and how to conduct the whole exercise without fostering any bias amongst the youngsters.

Our process of adapting surveys to all ages was successful. The pre-primary children looked at the interviewer as ludicrous and seemed to enjoy interaction a lot.

Source: OASIS representative

It was equally important to maintain both the simplicity and clarity of the questions and their scientific value and character. As such, the evaluation team consulted a lot with Meteo France to make sure the sentences and questions were formulated and reformulated correctly.

Lesson #15

Several UIA M&E case studies decided to use technology, and in particular the Internet of Things (IoT), for data collection. The advantage of instruments such as sensors is that they allow for ongoing monitoring of the environment and the collection of a large quantity of data. The UIA projects revealed that the technology can be useful in multiple ways and can go far beyond distributing surveys online (a useful application in itself). In Aveiro STEAM City, the number of times participants viewed events was collected through touchpads (rather than in paper form) and could be instantly aggregated.

If it is possible (…) embed the use of technology. (…) Use the technology, for instance the IoT platform to gather the data, to treat the data, and to make the data available to all decision makers.

Source: STEAM City representative

The application of motion sensors in Szeged SAS-Mob facilitated monitoring of a significant segment of the local population, while the application of wearable sensors in Helsinki HOPE offered a new form of participation to the beneficiaries: one where they could actively collect data themselves. Lastly, the deployment of both stationary and mobile temperature measuring stations in Paris OASIS allowed for detailed modelling of the microclimate change.
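
As a rough sketch of how readings from such measuring stations might be aggregated before any modelling (the station names and temperature values are invented), continuous streams are typically reduced to per-station summaries:

```python
from statistics import mean

# Invented readings: (station_id, hour, temperature in °C).
# Continuous sensor streams would normally be aggregated like this
# before comparing before/after states of the microclimate.
readings = [
    ("mobile-1", 12, 31.2),
    ("mobile-1", 13, 32.0),
    ("fixed-1",  12, 29.5),
    ("fixed-1",  13, 30.1),
]

def station_mean(readings, station):
    """Mean temperature recorded by one station."""
    temps = [t for s, _, t in readings if s == station]
    return round(mean(temps), 2)

print(station_mean(readings, "mobile-1"))  # 31.6
print(station_mean(readings, "fixed-1"))   # 29.8
```

Combining mobile and stationary stations, as OASIS did, lets such summaries capture both spatial variation across the site and change over time at fixed points.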

pic
OASIS project, credits to Ville de Paris, Laurent Bourgogne

About this resource

Author
UIA Permanent Secretariat & Ecorys
Report
About UIA
Urban Innovative Actions
Programme/Initiative
2014-2020

The Urban Innovative Actions (UIA) is a European Union initiative that provided funding to urban areas across Europe to test new and unproven solutions to urban challenges. The initiative had a total ERDF budget of €372 million for 2014-2020.
