The Netherlands



ERDF budget

EUR 4,997,624.24


01/11/2016 – 31/10/2019


Local production



‘Building the Right Investments for Delivering a Growing Economy’ (BRIDGE) was a project implemented between 2016 and 2019 that addressed the urgent urban challenge of better aligning young people’s educational choices with future labour market needs.

The project set out to implement over 20 educational interventions in schools in Rotterdam South, with the goal of improving educational outcomes and the connection between the labour market and pupils/students from the Rotterdam South district, an area lagging behind the rest of the city from a socioeconomic point of view.

The interventions BRIDGE carried out were mainly implemented in primary, secondary and secondary vocational education schools, and were aimed at providing career orientation and vocational guidance for pupils to assist them in choosing a specialisation in high demand in the technology, port-related and care sectors. The project had an ambitious outreach; it included 68 primary schools, 20 secondary schools and 3 vocational schools in the district.

The target group of the intervention consisted of pupils/students, but also teachers, employers and the parents of the young people. In order to be able to better manage the high number of target groups, the project employed sub-contractors to do the fieldwork among companies and schools. Examples of interventions included technical lessons, company visits and (follow-up) training, career interviews, mentoring and job application training for pupils. One of the interventions targeting employers – Career Start Guarantee – aimed to help secure a first job for a student once he/she had graduated from their vocational education.

By 2016, when implementation of the BRIDGE project began, the Rotterdam South district was already included in a twenty-year, multilevel multi-target programme – the National Programme for Rotterdam South (NPRZ) – which aims to combat the district’s social and economic deprivation. The NPRZ programme addresses three areas: education, employment and living. The interventions taking place within the BRIDGE project framework are in line with the NPRZ’s work in the education sector. The existence of the NPRZ also meant that BRIDGE was implemented within the framework of a more comprehensive approach for the district; while the BRIDGE project focused on career guidance for disadvantaged pupils, the NPRZ project focused on housing for families, as well as their health and wellbeing.

This case study is part of a UIA report. You can access all of the project's resources on its project collection page.

Evaluation governance

The project was implemented by the City of Rotterdam as the Main Urban Authority, supported by the Metropolitan Region Rotterdam - The Hague, Stichting Economisch Onderzoek Rotterdam (SEOR), Rotterdam University of Applied Sciences and RebelGroup Executives BV. SEOR, an independent research organisation linked to the Erasmus School of Economics, was the main partner responsible for monitoring and evaluation. Some decisions regarding the monitoring and evaluation design were made by the project Steering Committee, composed of two directors from the City of Rotterdam and one representative from each of the project partners.

At the design stage of the project, SEOR was not envisaged as an evaluation partner. The original idea for the BRIDGE project was to gather an evidence base for long-term continuation, with public or private funding – a sort of social impact bond scheme. Within this approach, a consultancy experienced in working on social bond projects was considered as an evaluation partner. However, the idea of a social impact bond scheme was abandoned at the writing stage of the project, due to the lack of opportunity to gather the in-depth data needed to better understand the contributing factors in Rotterdam South.

Once the idea of the project changed, the partnership changed as well, and an evaluating partner with a more scientific background was sought. The evaluation was assigned to SEOR and planned as a distinct project component. This was managed by SEOR together with the City of Rotterdam. Data collected through this work package was intended to be enriched by qualitative data from the NPRZ programme, collected in a different project component, dealing with the implementation of the Career Start Guarantee.

The change of the evaluation framework put extra pressure on the evaluation partner, who had to be present at every project meeting and explain to the rest of the partnership what the effects of the interventions were in the broader picture. The evaluators became more central to the project than intended at the proposal stage, and their presence in project meetings also facilitated communication between partners: it gave SEOR the opportunity to discuss information gaps and engage the other partners in collecting information and reaching out to relevant stakeholders, such as schools.

 They had a very decisive role in explaining to us, continuously, what our project was doing […] what the effect was in the broader picture. They became much more the centre of the project than it was planned in the proposal stage.

Source: BRIDGE project hearing

The project adopted a participatory approach to evaluation, where the partners were involved in the design of some parts of the evaluation and the Steering Committee decided on the level at which the interventions would be applied. The involvement of both teachers and parents proved to be a valuable choice, as they are important for the career guidance of the pupils. However, it could be argued that an increased role for the evaluation team in the design of the evaluation itself would have been beneficial, providing valuable insights.

The evaluation team faced several challenges and had to adapt to several deviations from the final intervention plan for practical reasons. In one case, the team had to abandon the idea of using personal portfolios for the pupils involved (a longitudinal intervention) because this would have meant obtaining permission from each participating pupil, which, given the size of the project, was problematic. It would also have been complex to handle, since no single system is used in every school and, as a result, the format of the data would have varied. Using portfolios would be better suited to countries where the education system is more centralised and the systems align. The next-best solution was to implement surveys among pupils.

Evaluation process

The methodology for the evaluation exercise was an intensely discussed part of the project and underwent several changes over time, due to political, ethical and practical considerations, as well as data gaps. Importantly, however, the model that was eventually developed satisfied the partners.

Whatever idea you have about the evaluation framework at the proposal stage will probably be amended during the execution of the project. […] I think in our case, we were more happy with the evaluation we did in practice than the evaluation approach we proposed in the proposal.

Source: BRIDGE project hearing

The initial intention was to use an experimental design, with a treatment and a control group – some schools would participate in the project and receive the interventions, while others would not. This idea was abandoned within a few weeks of the start of implementation, for ethical and political reasons. The Steering Committee considered it politically unacceptable to have control and treatment groups within the same part of the city. Instead, it was decided that the interventions would be applied in all participating schools.

The original idea was to use some kind of experimental design, let’s say some schools participate in the interventions and other schools don’t, which is of course a very solid way to measure effects, but all kinds of practical problems to implement such a design [arose] in the environment of school, that is you refuse to have access to interventions in some schools […] it is politically unacceptable to get these two groups of students within the same part of the city.

Source: BRIDGE project hearing

Having to rely on a different approach, the evaluation team designed an evaluation framework based both on data gathered during the implementation of the project and on pre-existing data. This approach was chosen as a way to mitigate the limitations and technical challenges that the team encountered in assessing the project’s impact. BRIDGE is a good example of a non-experimental approach in which microdata is used to build a baseline for the pupils receiving the treatment.

The main research questions addressed in this evaluation exercise were:

  • Whether BRIDGE increased pupils’ participation in interventions;
  • To what extent this participation led pupils to choose a high-demand specialisation in the areas of technology, harbours and healthcare;
  • Whether this choice of a high-demand specialisation led to better prospects on the labour market.

Experimental and quasi-experimental design


Experimental and quasi-experimental designs aim to test causal hypotheses, e.g. that a given change resulted from the project. Experiments differ from quasi-experimental designs in the way subjects are assigned to treatment. In true experiments, subjects are assigned to treatment randomly, e.g. by a roll of a die or a lottery, to eliminate selection bias. After random assignment, the groups are subjected to identical environmental conditions, while being exposed to different treatments (in treatment groups) or no treatment (in the control group). A quasi-experimental design lacks random assignment. Instead, assignment is done by means of self-selection (i.e. participants choose treatment themselves), administrator selection (e.g. by policymakers, teachers, etc.), or both.
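
The difference can be illustrated with a minimal simulation (all numbers are hypothetical, not project data): under random assignment, unobserved characteristics such as motivation are balanced across groups, so a simple difference in mean outcomes recovers the treatment effect; under self-selection it does not.

```python
import random

random.seed(42)

# Hypothetical schools: each has a latent "motivation" score that also
# affects outcomes -- the confounder random assignment is meant to balance.
schools = [{"motivation": random.gauss(0, 1)} for _ in range(1000)]

# True experiment: assignment is random, independent of motivation.
for s in schools:
    s["treated_rand"] = random.random() < 0.5

# Quasi-experiment: self-selection -- more motivated schools opt in.
for s in schools:
    s["treated_self"] = s["motivation"] > 0

TRUE_EFFECT = 0.5  # hypothetical treatment effect

def outcome(school, treated):
    # Outcome depends on motivation (confounder) plus the treatment effect.
    return school["motivation"] + (TRUE_EFFECT if treated else 0.0)

def diff_in_means(flag):
    treated = [outcome(s, True) for s in schools if s[flag]]
    control = [outcome(s, False) for s in schools if not s[flag]]
    return sum(treated) / len(treated) - sum(control) / len(control)

# The first estimate lands near the true effect (0.5); the second is
# heavily inflated because motivated schools chose treatment themselves.
print(f"Estimate under random assignment: {diff_in_means('treated_rand'):.2f}")
print(f"Estimate under self-selection:    {diff_in_means('treated_self'):.2f}")
```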

“Microdata are unit-level data obtained from sample surveys, censuses, and administrative systems. They provide information about characteristics of individual people or entities such as households, business enterprises, facilities, farms or even geographical areas such as villages or towns.”

Source: World Bank

To answer these questions, the evaluation team employed three methods, each with its own strengths and weaknesses, in an effort to mitigate information gaps. The impact of the project was to be evaluated by assessing the direction in which the results of the three methods pointed. The team opted for a mixed-methods approach, using both qualitative and quantitative data collection methods.

  1. First method: Comparing the education and labour market outcomes of pupils who received the treatment and pupils who did not. The sources for comparing participants and non-participants were:
  • Having an aggregated look at the development of educational choices in Rotterdam South versus other regions (data from the Dutch Central Bureau of Statistics – CBS);
  • Looking specifically at the development of intake in training with Career Start Guarantee in comparison with other regions;
  • Surveys among pupils (some pupils were participants in the interventions, others not);
  • Information about participation in the interventions at the school level (per grade), linked to the pupils of the schools that were included in the microdata (more of a pilot analysis due to data limitations).

2. Second method: Comparing educational choices in Rotterdam South with those in Rotterdam North and other major cities in the Netherlands, in order to assess whether these develop towards specialisations in high demand in the technology, port-related and care sectors in secondary and secondary vocational education.

3. Third method: Assessing the perceptions of the different target groups involved, such as pupils, teachers, parents, employers/companies and the agencies responsible for the implementation of the intervention. This data was collected through surveys, interviews and group discussions. For this part of the evaluation, the approach was inspired by theory of change (ToC). The goal of using ToC was not only to assess whether an intervention had the desired effect, but also to identify the mechanism that made reaching this effect possible, by asking how the policy intervention is supposed to work.

What is the policy problem? Which mechanisms are responsible for it, which interventions have been chosen to encourage transformation towards more desirable behaviour, and what are the limiting and facilitating factors for the desired outcomes?

We tried the best we could. Besides using survey data among pupils and microdata for comparing themselves with other regions, we also used data on school level on what kind of interventions they used and tried to link that with information on microdata, because there is information where these pupils enter at a later stage […] it would be good to construct some kind of group in the South of Rotterdam and follow them, organise a survey you repeat among same people, so you really collect longitudinal data.

Source: Rotterdam project hearing
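
The region-comparison logic of the first two methods resembles a difference-in-differences calculation: the change in a comparison region stands in for the trend Rotterdam South would have followed without the interventions. The evaluators do not name this technique, so the sketch below is an illustration with hypothetical figures, not the project's actual analysis.

```python
# Hypothetical shares (%) of pupils choosing a high-demand specialisation
# before and after the BRIDGE interventions; all figures illustrative only.
shares = {
    "Rotterdam South":   {"before": 18.0, "after": 23.0},  # received interventions
    "comparison region": {"before": 19.0, "after": 21.0},  # did not
}

def difference_in_differences(data, treated, control):
    """Change in the treated area minus the change in the comparison area.

    The comparison area's change proxies for what would have happened in
    the treated area without the intervention (the counterfactual trend).
    """
    d_treated = data[treated]["after"] - data[treated]["before"]
    d_control = data[control]["after"] - data[control]["before"]
    return d_treated - d_control

effect = difference_in_differences(shares, "Rotterdam South", "comparison region")
print(f"Estimated effect: {effect:+.1f} percentage points")  # +3.0 under these numbers
```

Under these made-up numbers, Rotterdam South gained 5 percentage points but 2 of those are attributed to the general trend, leaving an estimated effect of 3 points.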

Theory of change


“A ‘theory of change’ explains how activities are understood to produce a series of results that contribute to achieving the final intended impacts.” A ‘theory of change’ should, therefore, present the link or path between what one is doing and what one is trying to achieve.

Source: Rogers, P., Theory of Change: Methodological Briefs - Impact Evaluation No. 2, Methodological Briefs no. 2, 2014.

Horizontal issues

The most significant horizontal challenge encountered by the project was political: the counterfactual approach was considered unacceptable. This forced the project partners and evaluator to be flexible and resourceful in their evaluation approach, finding alternative options which, between them, would capture the impact of the project as closely as possible to a counterfactual evaluation.

However, further issues soon arose with the new methods employed; more specifically, with access to the pupils’ career guidance portfolios. At the start of the project, the evaluators wanted to obtain pupils’ career guidance portfolios from schools, which contained rich information on the beneficiaries. This was not possible due to privacy issues. As an alternative, the evaluators implemented anonymised surveys among pupils and students from different education levels, from primary school to upper secondary vocational education.

The sustainability of the project was an important consideration in the development of the intervention. The evaluation exercise showed that BRIDGE had an impact on policy and the potential for a high social return relative to its cost. The policy impact is highlighted by the increase in the range of interventions in recent years, including the introduction of new ones. The cost-benefit analysis showed that even a low number of pupils (approximately 20 a year) changing their career orientation in the expected direction is enough to compensate for the cost of BRIDGE.

There was a cost-benefit analysis – it started as a special section in our final report; ‘What would the effects be if [the young people in vocational schools] made a change and started their specialisation as intended by the BRIDGE project?’ – using microdata on what affects choices on certain specialisations [and] later market careers, in terms of chances on the job, wage levels, etc. We did it both from an individual perspective and societal perspective, so we are using gross wages […] It shows that only a small change would be enough to compensate the costs of BRIDGE; that was the outcome.

Source: Rotterdam project hearing

However, due to the complexity of the intervention, methodological limitations and the limited timeframe for implementation, it was difficult to discern its effects. It also cannot be ruled out that the effects are actually small. In addition, the surveys and interviews showed little confirmation of an effect on the choice of educational direction towards the targeted sectors (interviews with pupils and teachers did confirm the perception that the interventions facilitate the selection process, but do not directly contribute to the choice of education in the preferred sectors).

Counterfactual

The counterfactual is a hypothetical situation describing “what would have happened had the project never taken place or what otherwise would have been true. For example, if a recent graduate of a labour training program becomes employed, is it a direct result of the program or would that individual have found work anyway? To determine the counterfactual, it is necessary to net out the effect of the interventions from other factors—a somewhat complex task.”

Source: Baker, J.L., Evaluating the Impact of Development Projects on Poverty. A Handbook for Practitioners, The World Bank, 2001.

Cost-benefit analysis


Cost-benefit analysis is a technique used to compare the total costs of an intervention (e.g. programme or project) with its benefits, using a common metric (most commonly monetary units). This enables the calculation of the net cost or benefit associated with the programme.

Source: BetterEvaluation
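
The break-even reasoning behind the project's cost-benefit analysis can be sketched as follows. The ERDF budget figure comes from the project header; the wage premium, discount rate and career length are purely illustrative assumptions, not SEOR's figures, so the resulting break-even count only shows the order of magnitude of the calculation, not the evaluators' result.

```python
def npv_of_wage_premium(annual_premium, years, discount_rate):
    """Present value of a constant annual wage gain over a working career."""
    return sum(annual_premium / (1 + discount_rate) ** t for t in range(1, years + 1))

# Assumptions (illustrative only -- not the figures used by SEOR):
ANNUAL_WAGE_PREMIUM = 2_500.0   # extra gross wage per year in a high-demand sector
CAREER_YEARS = 40
DISCOUNT_RATE = 0.03
PROJECT_COST = 4_997_624.24     # ERDF budget from the project header
PROJECT_YEARS = 3               # 2016-2019 implementation period

gain_per_pupil = npv_of_wage_premium(ANNUAL_WAGE_PREMIUM, CAREER_YEARS, DISCOUNT_RATE)

# How many pupils per year would need to switch specialisation for the
# societal wage gains to cover the project's cost?
break_even_per_year = PROJECT_COST / (gain_per_pupil * PROJECT_YEARS)
print(f"NPV of wage premium per pupil: EUR {gain_per_pupil:,.0f}")
print(f"Break-even: ~{break_even_per_year:.0f} pupils per year")
```

Even with these rough assumptions, the break-even number of pupils comes out in the low tens per year, consistent with the report's finding that a small behavioural change suffices to compensate for the cost of BRIDGE.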

Lessons learnt

The approach of BRIDGE represents a comprehensive attempt to evaluate the project’s impact and processes, including carrying out a cost-benefit analysis. The partners were purposeful in selecting their approach.

The evaluation design has potential to be transferred to other projects in the field of social innovation, specifically projects with a large target group that would like to adopt a counterfactual evaluation but are prevented from doing so for various reasons. A baseline can be created through the use of microdata; however, this requires the availability of a database with microdata at city level.

Throughout the implementation of the project, several shortcomings were identified by the evaluation team, which in the end represented lessons learnt. Some valuable learning points and specific recommendations are presented below:

  1. A counterfactual approach can be sensitive from ethical and political points of view. If possible, it is worth checking the position of all relevant stakeholders at the writing stage of the project, including the local decision-makers.
  2. Openness to amending the evaluation framework may be necessary and beneficial. This might have negative consequences (in this case, losing the control group meant less insight into the impact of the project), but also positive ones (such as surprising insights gained from the use of other research methods). In the case of BRIDGE, partners were satisfied with their final approach.
  3. To the extent possible, data availability and data structure should be verified beforehand. If faced with an unexpected situation where data is not available due to data gaps or privacy issues, be flexible and adopt alternative measures. Related to this specific project, the following recommendations were made by the project evaluators:
  • The quality of data at school-level needs to be improved. In the best-case scenario, individual data on participation in interventions could be linked to the Central Bureau of Statistics’ (CBS) microdata. This would provide opportunities for testing the relationship between participation in interventions, school career (dropout rate, progression of education direction and level) and position on the labour market. A less precise variant would be if policy at the school level (preferably per grade) is linked to the individual pupils of the school, in order to determine whether this policy (use of interventions) has an effect on their school careers. However, this is less precise because, for a range of interventions, some pupils will participate and others will not.
  • The quality of data at the individual level needs to be improved. This could be achieved through the systematic monitoring of pupils’ participation in interventions and the choices they make at important educational moments. The availability of this data at the individual level would allow for more accurate measurement of effect, which in turn would determine how effective the instruments are.
  • The evaluation results confirmed the importance of involving teachers and parents as target groups, due to their role in career guidance. The results also point to supporting teachers in this role and involving parents even more closely.
  • Benefits of longitudinal data: The project would have benefited from the existence of a ‘panel’ of young people, which would be followed through repeating surveys, in order to improve the information and opportunities for data analysis. This panel would be an alternative to the option of linking the participation in interventions at an individual level with CBS microdata.
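
The school-level linking described in the first recommendation (the "less precise variant") can be sketched as a simple join between school-level intervention records and pupil-level records. All names, fields and figures below are hypothetical, not the CBS schema or the project's data.

```python
# Hypothetical records; field names are illustrative, not the CBS schema.
interventions_per_school = {
    "school_a": {"career_interviews": True, "company_visits": True},
    "school_b": {"career_interviews": False, "company_visits": True},
}

pupils = [
    {"pupil_id": 1, "school": "school_a", "chose_high_demand": True},
    {"pupil_id": 2, "school": "school_a", "chose_high_demand": False},
    {"pupil_id": 3, "school": "school_b", "chose_high_demand": False},
]

# Link each pupil to the interventions offered at their school.  Exposure is
# known only per school, not per pupil, so some linked pupils may not have
# actually participated -- the imprecision the recommendation warns about.
linked = [{**p, **interventions_per_school[p["school"]]} for p in pupils]

# Illustrative tabulation: share of pupils in schools offering career
# interviews who chose a high-demand specialisation.
exposed = [p for p in linked if p["career_interviews"]]
share = sum(p["chose_high_demand"] for p in exposed) / len(exposed)
print(f"Share choosing high-demand specialisation (exposed schools): {share:.0%}")
```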



About this resource

Rotterdam, The Netherlands
About UIA
Urban Innovative Actions

The Urban Innovative Actions (UIA) is a European Union initiative that provided funding to urban areas across Europe to test new and unproven solutions to urban challenges. The initiative had a total ERDF budget of €372 million for 2014-2020.
