Snapshot

City

Antwerp

Country

Belgium

Population

521,946

ERDF budget

EUR 4,894,303.32

Duration

01/11/2016 – 31/10/2019

Topic

Social inclusion

About CURANT

Photo credit: Ana Izamska

‘CURANT – Cohousing and case management for Unaccompanied young adult Refugees in ANTwerp’ was a project implemented between 2016 and 2019. It set out to support unaccompanied young adults between the ages of 17 and 22 who had arrived in Belgium without their parents and had been granted refugee status or subsidiary protection.

The need for this project arose because, according to Belgian legislation, unaccompanied minors who reach legal adulthood are no longer eligible for housing in reception centres, enrolment in reception classes in schools, training courses customised for minors or support from a legal guardian. As a result, they are at risk of leaving school without qualifications and continuing to depend on public social welfare services. CURANT aimed to bridge this service gap for young unaccompanied refugees by providing a holistic support programme that prepared them for independent living and boosted their participation in Belgian society.

The project aimed to achieve this through two main components:

  • Communal living as part of a co-housing system where the refugee would be assigned to a Belgian ‘buddy’;
  • Personalised, multi-disciplinary case management.

The Belgian ‘buddy’ was a young, Dutch-speaking local who was matched with a refugee to live with them as their flatmate in a housing unit. This system aimed to develop a two-way relationship between the buddy and the refugee that would foster informal learning through spontaneous social interaction, diversify the social networks of both refugees and buddies, and improve the refugees’ Dutch language competencies. The project provided a mix of 63 accommodation units, in which a total of 77 refugee-buddy duos cohabited.

The personalised, multidisciplinary case management aimed to develop circular, integrated individual trajectories for a minimum of 75 unaccompanied young adult refugees, guaranteeing intensive follow-up for each of them. Each refugee had a case manager: a social worker who provided personal, centralised support and guidance. Through this component, the young refugees were offered intensive and varied training in areas such as job activation, independent living and the Dutch language; leisure-time activities and social integration; and orientation towards formal education and work, as well as professional individual psychological support.

This case study is part of a UIA report. You can access all of the project's resources on its project collection page.

Evaluation governance

The evaluation of CURANT was conducted by the Centre for Migration and Intercultural Studies (CeMIS) of the University of Antwerp. CeMIS was present from the project’s outset as part of the partnership, overseeing the project evaluation component. At the same time, it also tried to maintain a level of independence within the partnership, balancing its insider and outsider position for the benefit of the evaluation. This balancing act was not always easy, and this will be further explored in the context of specific challenges that presented themselves.

While CeMIS was the evaluation focal point, the other partners responsible for the implementation of services were also involved in evaluation throughout the project. In particular, they were consulted in the process of developing the evaluation approach through six group interviews, with participation from project designers and coordinators, social and education workers and psychotherapists. Once CeMIS had finalised the approach based on partner input, including the project’s theory of change, the material was presented to the delivery partners, who decided on further evaluation steps.

We collected the data to see what the wavelengths are, who has what types of goals and ideas […] we presented this analysis; we think these are the indicators, we think maybe here and there there’s a bit of a mismatch, and so on. But then, we passed it on over to the partners and to the project manager to decide how to go further.

Source: CURANT project hearing

Delivery partners also provided data for the evaluation and otherwise supported the research process (by facilitating access to respondents for example, especially among young refugees).

All partners were really interested in providing inputs on the things they were doing, the activities they were organising and the potential impact.

Source: CURANT project hearing

Overall, the evaluation benefited from a cooperative spirit within the partnership. Various platforms were created for the evaluation partner and delivery partners to meet and exchange feedback to improve the project.

We had a lot of meetings. We did reporting at several moments during the project […] We were continuously involved, and from the beginning also, and there was a lot of interaction.

Source: CURANT project hearing

Complementary to the group interviews that it organised with delivery partners, CeMIS also participated in major preparatory meetings during the initial stages of the project and presented evaluation results at different points throughout its duration. This was possible, as the evaluation was structured into four phases:

  • Phase one: Preparatory and start-up phase, which aimed to develop the programme theory and a baseline measurement.
  • Phases two and three: Two rounds of fieldwork, which aimed to gather data through in-depth interviews and focus groups, proceeding to data analysis and evaluation (two evaluation reports were produced).
  • Phase four: The final evaluation and policy-oriented final report.

Want to learn more about the evaluation results? Check out the ‘Policy recommendations’ report and the first and second evaluation reports.

After one year, we had a big partnership meeting, like a one year evaluation. CeMIS got presenting their results. All the partners could say what they felt went wrong, what went good, they could see that CeMIS had the same results.

Source: CURANT project hearing

This ongoing nature of the evaluation allowed findings to be integrated into the implementation of activities. One example was the inclusion of shorter, tailor-made educational trajectories for those young refugees who were eager to start working as soon as possible. When evaluation results indicated that the refugees were overburdened with activities and appointments, their trajectories were made less intense. And when CeMIS and the other partners realised that the project concentrated more on the refugees, the delivery partners placed more attention on the buddies.

Buddies also needed attention and a different approach sometimes […] It came up during the partnership meeting how we are going to guarantee those things and change the procedure […] Everybody wanted to have refugees more strengthened and buddies more aware about the refugees.

Source: CURANT project hearing

The flexibility to alter activities testifies to the fact that the project embraced a learning mindset beyond the evaluation itself. The results obtained by CeMIS often converged with observations made by delivery partners, which evaluators perceived as logical considering the high level of interaction within the evaluation. While observations sometimes overlapped, CeMIS was perceived as being more objective, possibly adding weight to evaluation results.

It’s really important to have CeMIS on board to have objective results and to see whether what we thought was really right; to have some leverage to sustain the project and to disseminate project results.

Source: CURANT project hearing

In addition to bringing academic objectivity, CeMIS enjoyed a holistic view of the project thanks to its involvement in consultations with all partners and its continuous focus on examining the intervention. This allowed it to play an integrative role in answering whether, how and why the project worked.

While in a partnership every partner had some expertise […] nobody had a helicopter view. And that’s what CeMIS had, CeMIS had a helicopter view from all the partners.

Source: CURANT project hearing

CeMIS’s integrative function may have been strengthened by the choice of a theory-based evaluation approach which entailed the development of a shared theory of change. Theory development also tested the limits of CeMIS’s role in this respect. The process revealed some differences in how partners viewed specific aspects of the project, and CeMIS reflected on whether it wanted to be the one to bring the partners to a common denominator.     

Is this our role, to get everybody on the same page from the very beginning? We didn’t see it necessarily like that.

Source: CURANT project hearing

In conducting the evaluation, CeMIS had to balance its position within the project. As an insider, it had stronger interactions with project partners, which allowed for easier access to information, sharing of results and feeding into the project. As an outsider not implementing the activities, it had to establish enough presence to build a necessary relationship of trust with young research participants. At the same time, too strong a presence could have blurred the line between CeMIS and delivery partners in the eyes of young respondents, especially refugees. This, in turn, could lead respondents to answer questions in a way they believed to be desirable and, therefore, safe.

Sometimes we also went into activities. We tried to observe at some times. I think we did that more at the beginning, but then the line between being a partner like the other partners and being an outside partner was not so clear for the participants, so in the end we took a bit more distance for them to know that we were really more an outside partner.

Source: CURANT project hearing

Another challenge faced by the evaluators was the fact that they were not able to measure the results of the whole project. The evaluation needed to finish six months before the end of the project to allow the evaluation report to be written, presented and disseminated before the end of the implementation period (when financing for the project stopped).

Evaluation process

As an evaluation partner, CeMIS was responsible for developing the evaluation approach and designing the methodology. Following discussions between the evaluator and the delivery partners, the former proposed a theory-driven approach.

The approach was considered suitable for new and complex interventions such as CURANT, in which causal mechanisms are not yet firmly established, and allowed for the integration of various stakeholder perspectives. It also provided evaluators with an opportunity to assess whether the intervention worked or not, as well as how and why. In other words, in addition to evaluating outcomes, the approach was conducive to analysing the transformation process between the start of the intervention and its outcomes.

It is interesting to use this theory-driven evaluation because you can, along the way, always receive the input of the stakeholders, of the participants and always adjust.

Source: CURANT project hearing

Theory-based approach

Theory-based evaluation has at its core two vital components. Conceptually, theory-based evaluations articulate a policy, programme or project theory, i.e. how activities are supposed to lead to results and impact, given specific assumptions and risks. Empirically, they seek to test this theory, to investigate whether, why or how interventions cause intended or observed results. Testing the theories can be done on the basis of existing or new data, both quantitative (experimental and non-experimental) and qualitative.

Source: European Commission, Evalsed Sourcebook – Method and techniques, 2013.

For more information, see e.g. the Better Evaluation website or the website of the Treasury Board of Canada Secretariat.

A counterfactual approach was also initially considered. In the project’s case, the control group would have had to be composed of refugees who had applied for the project but had been declared ineligible after the screening procedure. This approach was eventually rejected: during the design of the methodology, it was not clear whether the evaluators would have access to such a group of young refugees and, after the completion of the selection interviews, the number of young refugees declared ineligible was low.

Counterfactual approach

The counterfactual is a hypothetical situation describing “what would have happened had the project never taken place or what otherwise would have been true. For example, if a recent graduate of a labour training program becomes employed, is it a direct result of the program or would that individual have found work anyway? To determine the counterfactual, it is necessary to net out the effect of the interventions from other factors—a somewhat complex task.”

Source: Baker, J.L., Evaluating the Impact of Development Projects on Poverty. A Handbook for Practitioners, The World Bank, 2001.

Control group

In experimental designs, a control group is the "untreated" group with which an experimental group (or treatment group) is contrasted. It consists of units of study that did not receive the treatment whose effect is under investigation.

Source: Lavrakas, P. J., Encyclopedia of survey research methods (Vols. 1-0), Sage Publications, Inc. 2008.

The approach worked around a ‘change model’ (or a ‘theory of change’) developed in consultation with delivery partners during the initial phase. It was laid out in a document, ‘Groundwork for evaluation and literature study’. The change model reflected the partners’ assumptions on the actions required to support the social and structural integration of young migrants and why these actions would address the problem. It was, therefore, a stakeholder-driven change model which CeMIS additionally connected to academic literature. While the development of a change model can be a challenging process, it was not perceived as particularly problematic within the project, which accepted the model proposed by CeMIS.

The evaluation tried to establish whether CURANT lived up to the stakeholders’ expectations about communal living and individualised case management. There were six specific questions, as listed in the second evaluation report, that the evaluators attempted to answer:

  1. Did the CURANT communal living setup facilitate regular, informal, meaningful and spontaneous contact between refugees and Dutch-speaking locals?
  2. Did CURANT engender diversification in the social networks of refugees and Dutch-speaking locals?
  3. How did CURANT’s communal living setup contribute to refugee integration?
  4. What are the major strengths and pitfalls of CURANT’s case management approach?
  5. What was CURANT’s outcome, in terms of refugees’ participation in education and the labour market?
  6. What are the major limitations to CURANT’s approach?

CURANT can be considered a good practice example when it comes to theory-driven evaluation. The approach was reflected in the whole evaluation process, including the definition of outcomes (i.e. improved language skills, competency for independent living, diversification of social networks), the formulation of research questions and the development of research tools.

Theory of change

“A ‘theory of change’ explains how activities are understood to produce a series of results that contribute to achieving the final intended impacts.” A ‘theory of change’ should, therefore, present the link or path between what one is doing and what one is trying to achieve.

Source: Rogers, P., Theory of Change: Methodological Briefs – Impact Evaluation No. 2, 2014.

The evaluators used a mixed-methods approach, meaning they combined qualitative and quantitative research. Choosing different methods strengthened the research, as the shortcomings of one method could be compensated for with the advantages of another. 

Don’t focus on one kind of data because with social research, you will always have some challenges because you are working with people.

Source: CURANT project hearing

The main focus was, however, placed on qualitative longitudinal data collection, with the same beneficiaries interviewed at different points in time to capture changes. Three rounds of interviews were conducted at specific points in the beneficiaries’ trajectories in the project: at the start, after one year and at the end. In total, 48 in-depth interviews were conducted with 19 buddies and 42 with 24 refugees.

As the evaluators of CURANT noted, it was difficult to reach refugees, so a lot of effort was put into securing their participation. The first step involved building trust between refugees and researchers. This trust was important to encourage sharing, but also to increase the willingness of the refugees to participate in the research in the first place. With their packed schedules, additional interviews with CeMIS researchers became less of a priority for the youngsters.

They need to trust us to tell us things […] We also took our time to some extent to make them trust us, to help them trust us as an outsider.

Source: CURANT project hearing

The researchers could count on the support of social workers and interpreters, and the refugees were interviewed at home, which offered a safe environment.

Don’t ask a refugee to go to a new building at a university, because they will never go to do it.

Source: CURANT project hearing

Qualitative data was complemented with data obtained through quantitative research. A baseline survey was conducted with 58 buddies and 65 refugees. The questions covered descriptive characteristics, such as age, sex, country of birth and socioeconomic status, as well as the respondent’s position or situation at the beginning of the project (e.g. a refugee’s place of residence before entering CURANT). The baseline survey was followed by a final survey, in which some of the baseline questions were repeated and other, more evaluative questions were added. In total, 29 buddies and 31 refugees filled in both the baseline and the final survey.

In conducting the survey, the evaluators faced a challenge related to the sensitivity of questions. The first version of the baseline questionnaire covered topics on refugees’ mental well-being and resilience, using:

  • Hopkins Symptom Checklist-37 for refugee adolescents (Bean et al., 2004);
  • Resilience Scale for Adolescents (Hjemdal et al., 2003).

It became clear that the youngsters found it difficult to answer trauma symptom- and family-related questions, so these were eventually omitted. In the evaluators’ view, such questions could force the youngsters to relive their trauma. In the final questionnaire, CeMIS focused on CURANT and whether it improved – from the perspectives and experiences of the youngsters – their well-being, social and language skills, knowledge of Belgian society, etc.

Another challenge in implementing the survey was language-related. The survey was translated into languages the refugees could understand (Arabic, Pashto, Dari, Kurdish, Tigrinya, Somali). However, during its implementation, respondents still struggled to understand the questions. To facilitate the process, an interpreter was provided to explain them. The interpreter nevertheless struggled with the translation of some common research terms, such as the ‘strongly disagree–strongly agree’ Likert scale, which he tried to explain to respondents using the graphic notation ‘--  -  +/-  +  ++’.

The reliance on qualitative data helped CeMIS to develop an in-depth understanding of the impact of activities, while also allowing for flexibility. The complementary use of quantitative data enabled the easier identification of tendencies, which could be better understood by looking at the qualitative data collected. Additionally, the data collected from the beneficiaries was complemented by project registration data from the Antwerp Public Centre for Social Welfare and Vormingplus regio Antwerpen, allowing for better statistics on refugee and buddy profiles.

Mixed-methods approach

Mixed-methods approach can be defined as “research in which the investigator collects and analyzes data, integrates the findings, and draws inferences using both qualitative and quantitative approaches or methods in a single study or a program of inquiry”.

Source: Tashakkori, A., Creswell, J.W., “Editorial: the new era of mixed methods”, J Mixed Methods Res 1: 3–7, 2007.

Horizontal issues

The process of designing the evaluation approach had a participatory element when it came to the inclusion of the other project partners. The participatory approach did not trickle down to the beneficiaries themselves, however. A possible improvement for a similar evaluation exercise would be to include refugees settled in the country for several years as part of the design team, integrating a bottom-up rather than top-down approach. Their role would be to advise the delivery partners and evaluators on the activities needed and the development of the intervention. For example, they could advise on which questions to avoid in order to prevent further traumatisation of newly arrived refugees, how to phrase questions better and how to approach the beneficiaries.

Lessons learnt

The theory-driven approach developed and adopted by CURANT has high transferability potential. It works well for complex, untried interventions tackling sensitive social issues. It can offer additional results compared with other approaches, as it allows evaluators not only to assess whether the planned intervention worked, but also to explain how and why it worked. Being theory-driven, however, it might slow down the cycle of reflexivity and adaptation: changing or adapting the format of activities based on evaluation results might be a slower process than with needs-based participatory and developmental approaches. The heavy reliance on longitudinal qualitative research also means that the methodology is best suited to interventions with a smaller number of participants.

Some of the more specific lessons that can be drawn from CURANT’s experience include that:

  • Evaluators should be involved in the project from the application writing stage onwards. In CURANT, it was important for the researchers to attend the partners’ discussions on the shortcomings of contemporary policies and how the project could provide innovative solutions. This input fuelled the groundwork for evaluation of the project.
  • Even if delivery partners and evaluators may sometimes overlap in their conclusions about the project’s impact and processes, having a more independent partner to conduct the evaluation allows the research to benefit from a more holistic view of the project and the potential for additional perspectives. The fact that such an evaluator represents an established academic institution offers higher objectivity and a more comprehensive research effort, increasing the strength and persuasiveness of results.
  • The advantages and disadvantages, benefits and risks related to the insider or outsider position of the evaluator in relation to the partnership should be analysed and understood. Various positionings can be used to the evaluation’s advantage. Either way, the researchers must be comfortable with their participation in the evaluation.
  • It is beneficial to reflect on the role of the evaluator in the project from the outset. When the evaluation approach is based on a change model, tensions may become evident between different visions (in terms of project outcomes, for example) and these will require negotiation. It is therefore important to establish whether the evaluator is expected and willing to be the one to get partners onto the same page.
  • It is very important to build trust between research participants and researchers. This can prove particularly demanding in work with vulnerable participants, who may come with traumatic experiences which they are reluctant to share, or for whom the idea of research might be difficult to understand. These and other factors may translate into higher mistrust towards the researchers. The researchers’ increased presence in the projects, at least at the beginning, helps to establish good relations and encourage sharing, as does the creation of a safe space for participation by choosing an appropriate venue and ensuring the required support is in place. 
  • Continuity and stability must be ensured in the implementation and evaluation of activities. This is facilitated when the same Project Manager is present throughout the entire implementation of the project. In interventions implemented by public authorities and where pressures may arise for specific results, the Project Manager can function as a shield between the evaluators and sources of political pressure.

About this resource

Author
UIA Permanent Secretariat & Ecorys
Report
Location
Antwerp, Belgium
About UIA
Urban Innovative Actions
Programme/Initiative
2014-2020

The Urban Innovative Actions (UIA) is a European Union initiative that provided funding to urban areas across Europe to test new and unproven solutions to urban challenges. The initiative had a total ERDF budget of €372 million for 2014-2020.
