Public Sector Economics

Productivity and efficiency of central government departments: a mixed-effect model applied to Dutch data in the period 2012-2019



Jos L. T. Blank*
Alex A. S. van Heezik*
Bas Blank*
Article   |   Year:  2023   |   Pages:  335 - 351   |   Volume:  47   |   Issue:  3
Received:  September 21, 2022   |   Accepted:  April 4, 2023   |   Published online:  September 4, 2023
https://doi.org/10.3326/pse.47.3.2


 

Abstract


Central government aims to stimulate the efficiency and technical change of public organizations. However, government focuses primarily on the institutions that deliver final public services, not on the policy-making institutions. This article analyses the productivity of central government departments (CGDs). From bureaucratic theory we hypothesize that the productivity of these CGDs is low. In order to measure efficiency and technical change we estimate an average cost function based on data on individual Dutch CGDs during the period 2012-2019. The dataset consists of data on various services provided, resource usage and efficiency determinants. The cost function is estimated by a mixed-effect non-linear least squares method. The outcomes show that there are large efficiency differences among CGDs. It is also striking that the CGDs show no technical change over time, probably due to a lack of innovative behaviour, unwieldy bureaucracies and increasingly complex paperwork.

Keywords:  central government; productivity; cost efficiency; efficiency determinants; technical change; cost function; scaling property; bureaucracy

JEL:  H21, D24


1 Introduction


The public sector makes an important contribution to social welfare. Education, law enforcement and health care are important sectors for a well-functioning economy and contribute to a socially just society. Because these provisions are often financed by taxes and show a lack of market discipline, insight into the performance of these sectors is extremely important (Blank and Lovell, 2000; Blank and Valdmanis, 2019). Since many reforms, such as privatization and contracting out to outside agencies, have taken place, motivated by the wish to enhance performance, analysis of the productivity, efficiency and effectiveness of public services is a topic of great interest. Over the past 40 years there have been extensive developments in assessments of the public sector, due to the development of empirical methods for measuring efficiency and productivity. Included among these developments are stochastic frontier analysis (SFA) and data envelopment analysis (DEA). These approaches have proved their value through applications in public services (Blank and Valdmanis, 2019; Fried, Lovell and Schmidt, 2008; Kumbhakar and Lovell, 2000; Kumbhakar, Parmeter and Zelenyuk, 2020).

The focus of productivity research generally is on organizations (or sectors) that are responsible for the provision of public services such as education (Haelermans and Blank, 2012; Haelermans, De Witte and Blank, 2012), health care (Hollingsworth, 2008), drinking water supply (Blank, Enserink and Van Heezik, 2019; Goede et al., 2016), road construction and maintenance (Lopez, Dollery and Byrnes, 2009), policing (Barton and Barton, 2011) and the immigration and naturalisation services (Niaounakis and van Heezik, 2019). To get an impression of the “mer à boire” of research in this field, see for instance the link to reports and articles that contain thousands of references to international studies on this topic.

The strong focus on public service delivery is due to the relative simplicity of measuring the services or products provided by these organizations. In many cases, they produce final services that are fairly easy to capture in key figures. For example, the number of graduates, the number of hospital admissions or the amount of drinking water supplied are straightforward measures. However, there are also many public organizations that carry out activities that are more difficult to quantify. In particular, the outputs for policy making and public control defy a natural metric as such. At the decentralized level, these are mainly the policy departments of the municipalities and provinces. At the national level, they are the policy directorates of the ministries, which together form the so-called central government departments (CGDs). On behalf of the minister, they are responsible for the development of policy, laws and regulations and for directing the implementation thereof, including the organization of funding. The CGDs are also responsible for the evaluation of the policy pursued. In these endeavours it is difficult to measure how well these processes translate into the production of successful outcomes.

Hence, an important reason that research has not been carried out in this area is that measuring the output of this type of intermediary service causes many problems, such as the extensive number of services, the lack of documentation of the services and vagueness about their relevance. Another explanation is that the financial reports of these (intermediary) organizations are often not very transparent. This lack of transparency makes assessing the administrative costs of service provision difficult. From a literature survey we were only able to find two references that could be related to this topic (Bikker and van der Linde, 2016; Hood and Dixon, 2015). Bikker and van der Linde (2016) focus on the costs of policy making and control in municipalities. Hood and Dixon (2015) focus on the cost performance of government services in the United Kingdom.

Aside from the fact that research in this area may fill a gap, its relevance may also be substantial with reference to economic theory. Whereas other public services are mostly subjected to efficiency incentives resulting from tight funding, mandatory benchmarks, policy reviews or various types of inspection, central government departments are not. They may therefore suffer from the perverse behaviour described by Niskanen (1968), Weber (1922) and Bowen (1980). Although these authors take different perspectives, the central idea is that civil servants are driven by the ambition to expand their budgets or at least to exhaust the available budgets. In none of these cases does this behaviour lead to efficient usage of available resources or to innovative behaviour.

Our aim here is to fill the gap in assessing the productivity of CGDs. This research includes a survey of various data sources and the correction of these data in order to carry out an analysis of the efficiency and productivity of the CGDs in the Netherlands during 2012-2019. We discuss the findings of this research by addressing three general questions:
1) What is the cost efficiency of CGDs? 
2) What are the main determinants for the cost efficiency of CGDs?
3) What is the generic productivity trend of CGDs between 2012 and 2019?

In section 2, we will describe the research method employed. We describe the data collection and editing process in section 3. In section 4, the results of the analyses are presented. We present conclusions and recommendations in section 5.



2 Method


The total factor productivity (TFP) of a CGD is defined as the ratio between the value of production (Y) and the value of resources deployed (X) (Blank and Valdmanis, 2019; Niaounakis and Van Heezik, 2019):

$$\mathit{TFP} = \frac{V_y(Y)}{V_x(X)} \qquad (1)$$

With:
$V_y(Y)$ = production value of (vector of) services $Y$
$V_x(X)$ = input value of (vector of) resources $X$

When an institution provides more than one product and also has to use different resources, the various products and resources must be weighted. In the private sector, prices and wages relative to some fixed base can serve as weighting factors, and productivity then equals the production value divided by the input value. Less formally, productivity can be defined as the ratio between revenues and costs, both controlled for general price and wage developments. Because the public sector generally lacks market-based prices for the services provided, weighting by market prices is not possible. We therefore assume that the production value equals the shadow costs involved at a given production level in a base year. In this case, we use the average costs that a department incurs to deliver a certain level of services. We weight the different products with estimated shadow prices that are assumed to reflect cost prices. From a social point of view, we can argue that citizens are willing to pay these prices, or else to reject them via the political process. Summarizing, productivity is measured as the ratio of shadow costs to actual costs; the shadow costs serve as a benchmark, and when actual costs equal the shadow costs, productivity equals one. Equation (1) can now be written as follows.

$$\mathit{TFP} = \frac{C^{sh}(Y)}{C^{obs}(Y)} \qquad (2)$$

Whereby:
$C^{sh}(Y)$ = shadow cost to produce $Y$
$C^{obs}(Y)$ = actual (observed) cost to produce $Y$.

To control costs for general price developments we apply price indices. For wage costs, we apply the index of contractual wage costs per hour (in the public administration and public services sector). For material costs we apply the consumer price index (CPI).
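As a minimal illustration of this deflation step (all figures, column names and index values below are hypothetical, not taken from the dataset):

```python
import pandas as pd

# Hypothetical panel with nominal wage and material costs per CGD and year,
# plus the two price indices (base year 2012 = 1.00). All values are made up.
df = pd.DataFrame({
    "cgd":           ["ED", "ED", "SA", "SA"],
    "year":          [2012, 2019, 2012, 2019],
    "wage_cost":     [100.0, 118.0, 80.0, 95.0],   # nominal, millions of euros
    "material_cost": [40.0, 52.0, 30.0, 39.0],     # nominal, millions of euros
    "wage_index":    [1.00, 1.12, 1.00, 1.12],     # contractual wage costs per hour
    "cpi":           [1.00, 1.10, 1.00, 1.10],     # consumer price index
})

# Deflate each cost category with its own index and sum to real total costs
# at constant 2012 prices.
df["real_wage_cost"] = df["wage_cost"] / df["wage_index"]
df["real_material_cost"] = df["material_cost"] / df["cpi"]
df["real_total_cost"] = df["real_wage_cost"] + df["real_material_cost"]

print(df[["cgd", "year", "real_total_cost"]])
```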

We calculate shadow costs based on the results of a regression analysis. In doing so, we first make several additional assumptions. For example, the costs do not only depend on the services provided, but also on year (representing technical change) and department. Due to technical change, the costs of the services provided in 2012 are different from the costs in 2019. We also consider the fact that services provided by one department may cost more or less than the same amount of services provided by another department due to differences in the complexity of the policy dossiers or the quality of the services provided. We indicate this as heterogeneity of the service or as a type of case mix. In addition, the model also contains a component that reflects the relative efficiency, which is measured as the difference in costs among CGDs, reflecting the characteristics of the business operations, such as the share of material costs, the staff structure or the employment conditions (i.e., cordial and/or cooperative). This approach has become more and more common in efficiency research and is based on the so-called scaling property. Instead of deriving cost efficiency measures in the first stage and consecutively regressing these cost efficiency measures on a set of determinants in a second stage, the effects of the determinants are derived directly in one stage only (Blank and Niaounakis, 2019; Wang and Schmidt, 2002). Relative efficiency and technical change together determine the development of productivity.

We can summarize the above in an equation in which the different components are incorporated. The cost function is given as:

$$\ln(c_{dt}) = a_0 + \sum_m b_m \ln(y_{dtm}) + h \cdot \mathit{time} + \mathit{het}_d + \mathit{eff}_{dt} + \mathit{err}_{dt} \qquad (3)$$

Where:
$c_{dt}$ = actual costs of department $d$ at time $t$ (adjusted for prices);
$y_{dtm}$ = production of service $m$ by department $d$ at time $t$;
$\mathit{time}$ = trend, reflecting technical change;
$\mathit{het}_d$ = percentage of deviating costs of department $d$ due to the heterogeneity of production;
$\mathit{eff}_{dt}$ = percentage of additional costs due to inefficiency of department $d$ at time $t$;
$\mathit{err}_{dt}$ = measurement error of department $d$ at time $t$.

Further:

$$\mathit{eff}_{dt} = \exp\Big[-\sum_k \theta_k \ln(z_{dtk})\Big] \qquad (4)$$

With:
$z_{dtk}$ = characteristic $k$ of department $d$ at time $t$.

$a_0$, $b_m$, $h$, $\mathit{het}_d$ and $\theta_k$ are the parameters of the model to be estimated. The parameter $a_0$ is the constant in the model; the parameters $b_m$ are elasticities and represent the effect of a growth in production on the growth of costs; and the parameter $h$ shows the annual percentage growth (or contraction) of costs due to generic productivity trends, formally referred to as technical change. The parameters $\mathit{het}_d$ show the percentage effect of the complexity of the services provided on the costs of a department. The parameter $\theta_k$ represents the proportion of determinant $k$ in total inefficiency (Alvarez et al., 2006; Blank, 2020; Parmeter, 2018).

We also impose the condition on the model that a growth in production by a certain percentage leads to a proportional growth in costs (homogeneity requirement). So, a ten percent increase in the number of services provided automatically leads to a cost increase of ten percent. This homogeneity requirement implies that the $b_m$ must sum to 1.
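To make this restriction explicit in the notation of equation (3) (a standard derivation, not taken verbatim from the paper): linear homogeneity of costs in output means that scaling all services by a factor $\lambda$ scales costs by the same factor,

$$C(\lambda Y) = \lambda\, C(Y) \;\Longrightarrow\; \ln C(\lambda Y) = \ln \lambda + \ln C(Y).$$

Since the output part of log costs in equation (3) is $\sum_m b_m \ln(y_{dtm})$, scaling every service by $\lambda$ adds $\ln\lambda \sum_m b_m$ to log costs, so equality for every $\lambda$ requires $\sum_m b_m = 1$. One way to impose this in estimation is to treat only $M-1$ elasticities as free parameters and set $b_M = 1 - \sum_{m=1}^{M-1} b_m$.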

The above model can be estimated with a mixed effects model (Lindstrom and Bates, 1990). This approach combines two types of effects. Structural differences in the cost per unit of production among CGDs are “captured” by a random effect and interpreted by us as a measure of heterogeneity (or case mix). This effect is expressed in the term het in equation (3). In addition, eff in equation (4) consists of several determinants for efficiency, such as absenteeism by reason of illness or the degree of overhead. The effects of these determinants are also estimated. The joint effect of all determinants is called cost (in)efficiency.
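The sketch below illustrates this estimation idea under simplifying assumptions. It is not the paper's actual code: it replaces the mixed-effects estimator of Lindstrom and Bates (1990) with plain nonlinear least squares, treats the department effect as a fixed intercept rather than a random effect, and runs on simulated data with hypothetical dimensions.

```python
import numpy as np
from scipy.optimize import least_squares

# Simulated stand-in for the panel: D departments observed over T years.
rng = np.random.default_rng(0)
D, T, M, K = 11, 8, 3, 2                        # departments, years, services, determinants
y = rng.lognormal(mean=3.0, size=(D * T, M))    # service volumes (documents, questions, ...)
z = rng.lognormal(mean=0.0, size=(D * T, K))    # efficiency determinants (absenteeism, ...)
dep = np.repeat(np.arange(D), T)                # department index per observation
time = np.tile(np.arange(T), D)                 # trend: 0..7 for 2012-2019
ln_c = rng.normal(8.0, 0.5, size=D * T)         # observed log costs (dummy values)

def predicted_log_cost(params):
    """Equations (3)-(4) with the homogeneity restriction sum(b) = 1 imposed."""
    b_free = params[:M - 1]                     # M-1 free output elasticities
    b = np.append(b_free, 1.0 - b_free.sum())   # last elasticity follows from homogeneity
    h = params[M - 1]                           # technical change (trend)
    het = params[M:M + D]                       # department-specific intercepts; in the
                                                # paper this is a zero-mean random effect
                                                # (case mix) plus a constant a0
    theta = params[M + D:]                      # efficiency-determinant parameters
    eff = np.exp(-(np.log(z) * theta).sum(axis=1))   # equation (4)
    return (np.log(y) * b).sum(axis=1) + h * time + het[dep] + eff

def residuals(params):
    return ln_c - predicted_log_cost(params)

start = np.zeros((M - 1) + 1 + D + K)
start[:M - 1] = 1.0 / M                         # start the cost shares at equal weights
fit = least_squares(residuals, start)
b_hat = np.append(fit.x[:M - 1], 1.0 - fit.x[:M - 1].sum())
print("estimated output elasticities:", b_hat.round(2))
```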

Because case mix is not measured in a direct way, it cannot be ruled out that the case-mix term also absorbs some of the inefficiency, in particular when a CGD is structurally (persistently) inefficient. The estimated efficiency could therefore be biased upward and the actual efficiency differences could be larger than presented.



3 Data


The activities of civil servants within the CGDs are diverse, ranging from drafting policy plans and legislative proposals and answering parliamentary questions to supervising policy implementation by agencies and public bodies and providing funding. Attempts have already been made to map all these activities, but at the level of directorates-general (Ministerie van Binnenlandse Zaken, 2009). These surveys consisted of an extensive inventory of the activities pursued by directorates-general. More than 100 indicators were distinguished, which made using these indicators for analysis unmanageable. Moreover, these indicators are not available over time. For this reason, we opted for a different route, in which we can analyse productivity with fewer indicators. Since multicollinearity may arise if too many indicators are included, for the sake of parsimony we select only the most relevant factors affecting productivity. Hence, in this study, we use three indicators that provide insight into the “policy pressure” or workload of a CGD:
– Documents; 
– Parliamentary questions; 
– Program expenditures (at constant 2012 prices).

These three indicators represent the many related activities and together cover the activities of central government departmental production. A principal component analysis showed that six indicators cover more than 90% of the total variation in the more than 100 indicators (Blank et al., 2009). Here too, it turns out that the limited number of indicators explains a very large part of the variation in costs. Policy pressure is particularly visible in the number of documents and parliamentary questions. The documents variable concerns the number of documents published by the ministry, excluding non-autonomous services and agencies, as stated on www.officielebekendmakingen.nl. This mainly concerns legal and regulatory documents, such as laws and legislative amendments. For the parliamentary questions indicator, we have mapped out the number of (written) answers to the questions asked (in writing) by MPs to the ministers of the various departments. The program expenditures are the total expenditures of the department minus the organizational expenditures of the CGD, adjusted by the CPI. These program expenditures include subsidies for the public bodies responsible for policy implementation and income transfers, and therefore give an indication of the size of the policy areas managed by the relevant CGD.
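As a small worked example of the program-expenditure indicator (the figures are invented; the actual data come from the ministries' annual reports):

```python
# Illustrative computation of the program-expenditure indicator for one CGD.
total_expenditure_2019 = 12_500.0     # total departmental expenditure, mln euros (made up)
organizational_cost_2019 = 450.0      # organizational cost of the CGD, mln euros (made up)
cpi_2019 = 1.10                       # consumer price index, 2012 = 1.00 (made up)

# Program expenditure = total expenditure minus the CGD's own organizational
# expenditure, expressed at constant 2012 prices.
program_expenditure_2019 = (total_expenditure_2019 - organizational_cost_2019) / cpi_2019
print(round(program_expenditure_2019, 1))   # ~10954.5 mln euros at 2012 prices
```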

To determine the use of resources from the CGDs, we used the actual organizational expenditures of the CGDs, provided in annual reports of the ministries. In these reports the organizational expenditures of the CGDs are broken down into personnel and material expenditures. For personnel expenditures, the annual reports make a distinction between expenditures for own staff, external hiring and other staff. Material expenditures are broken down by shared service organizations (SSOs), ICT and other material supplies, including expenditures on housing.

In addition to the data on production and use of resources, data have been collected on (possible) determinants of cost efficiency. These are mainly human resource management (HRM) characteristics, such as absenteeism due to illness, the working time factor and the average age of employees. For an extensive list see table 1.

Note that CGDs form a rather homogeneous group of institutions that are more or less affected by the same contextual factors, which prevents the estimates from being biased by endogeneity.

To map personnel data, we used data provided via the Ministry of the Interior and Kingdom Relations which are based on the central salary administration. The data of eleven determinants were included in the dataset and are described in table 1.

The database used for the analysis consists of 88 observations (8 years in the time period 2012-2019 for 11 CGDs).

Table 1
Statistical description of CGDs data, 2012-2019

We analyse the following central government departments:
1. General Affairs (GA), 
2. Foreign Affairs (FA), 
3. Interior Affairs (IA), 
4. Economic and Agricultural Affairs (EA), 
5. Treasury (TR), 
6. Infrastructures (IS), 
7. Education (ED), 
8. Social Affairs (SA), 
9. Justice and Safety (JS), 
10. Health Care (HC), 
11. Defence (Def).

It should be noted that at the end of the research period the Department of Economic and Agricultural Affairs was split into two separate departments (for political reasons). For that reason, we have aggregated the data of the two departments into one fictional department for the years in which they were separate.



4 Results


We estimated different specifications of the model and tested them against each other based on the Akaike information criterion (AIC). In the final model, based on the AIC, the eleven determinants of efficiency could be reduced to five. Table 2 shows the cost function estimation results of the regression analyses. Based on the estimates, it is possible to calculate the marginal costs, which provide evidence on the plausibility of the results. Recall that marginal costs represent the additional costs involved in the production of one additional unit of the product in question and are to a certain extent a reflection of cost prices. Since we are only using a limited number of services, some omitted variable bias may occur. This may lead to estimated marginal costs of a specific service that also include costs of services that are correlated with this specific variable. Nevertheless, it is still a useful check for implausible values such as negative or very large numbers. Table 3 presents the estimates of marginal costs in 2019.
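As an illustration of this selection step (a minimal sketch; the residual sums of squares and parameter counts below are invented, not the paper's), candidate specifications can be ranked with the Gaussian least-squares AIC:

```python
import numpy as np

def aic_from_rss(rss: float, n_obs: int, n_params: int) -> float:
    """Gaussian AIC up to an additive constant: n*ln(RSS/n) + 2k."""
    return n_obs * np.log(rss / n_obs) + 2 * n_params

# Hypothetical comparison of two nested specifications of the cost function:
# one with all eleven efficiency determinants and one with only five.
# Parameter counts: constant-type department effects (11), 2 free elasticities,
# trend, plus the determinants; RSS values are made up for illustration.
n = 88
candidates = {
    "11 determinants": {"rss": 0.92, "k": 1 + 2 + 1 + 11 + 11},
    "5 determinants":  {"rss": 0.97, "k": 1 + 2 + 1 + 11 + 5},
}
for name, spec in candidates.items():
    print(name, round(aic_from_rss(spec["rss"], n, spec["k"]), 1))
# The specification with the lowest AIC is retained.
```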

Table 2
Cost function estimation results

The parameters of the production indicators (b1-b3) are significant. These parameters reflect the weights assigned to the various production indicators in order to calculate productivity. The parameters of the five ultimately remaining determinants of efficiency (θ5, θ7, θ9, θ10, and θ11) are also all statistically significant at the 5% level.

Table 3
Marginal cost estimates

The parameter estimates of the production indicators have plausible values. They can be interpreted as average cost shares of the distinct services. For example, 38 percent of the resources deployed appear to be involved in the number of documents, 17 percent in the handling of parliamentary questions and 45 percent in the program expenditures. The estimated marginal costs (see table 3) amount to an average of 27,000 euros for a document, 12,000 euros for a parliamentary question and 1,500 euros per 1 million euros of program expenditure. Note the earlier remark on the omitted variable bias that may exist. The correlation between actual costs and the costs predicted by the model is equal to 99%.
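Because the cost function in equation (3) is log-linear in the outputs, the estimated elasticities translate directly into marginal costs when evaluated at observed cost and output levels (a standard derivation):

$$b_m = \frac{\partial \ln C}{\partial \ln y_m} = \frac{y_m}{C}\,\frac{\partial C}{\partial y_m} \;\Longrightarrow\; \mathit{MC}_m = b_m\,\frac{C}{y_m}.$$

Evaluated at average costs and average output levels, $b_m$ can thus be read as the average cost share of service $m$ (e.g. roughly 0.38 for documents), and $\mathit{MC}_m$ as the corresponding cost price reported in table 3.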

The CGD GA, the Office of the Prime Minister, can be seen as the odd man out because of its small size and specific tasks. Such an atypical observation could substantially affect the estimation results. We have therefore repeated the estimates on a dataset excluding GA. It turns out that omitting GA has only a limited effect on the estimation results.

As indicated in the model description, we also estimate an effect per CGD, which can be interpreted as case mix. Since we are dealing with panel data, we can estimate such a department-specific effect for each CGD separately. This effect is fixed over the whole period and can be regarded as a mixture of unobservable variables that is specific to that particular organisation. By applying the principle of the “benefit of the doubt”, we assume that these variables are not under the control of the CGD and include specific features of the services, such as the complexity or the political sensitivity of the dossiers. The case mix variable indicates how much more (or less) a CGD costs due to a different workload in the activities performed. Figure 1 shows the results for the case mix. For each department, an index number relative to one is shown. A number smaller than one implies that the case mix is lower than average, while a number greater than one implies that the case mix is higher than average. A value of 1.5 indicates that a specific CGD costs 50 percent more than the average CGD. As has already been argued, it cannot be ruled out that this variable absorbs part of the inefficiency. The case mix may therefore be overestimated and the cost inefficiency underestimated.

Figure 1 shows that the CGDs of ED, SA, Def and GA have the lowest case mix. The cost per unit of product here is about 60 percent of the average case mix. The absolute leader in terms of case mix is the CGD of FA. The unit cost here is 120 percent higher than in the average CGD. The average case mix therefore differs considerably per CGD. However, as explained, there may be some overestimation here.

Figure 1
Case mix per CGD

Based on the estimates and the application of equation (4), we calculate the cost efficiency per CGD. Figure 2 shows the cost efficiency of the CGDs in the period 2012-2019. Cost efficiency is given as the ratio between the cost of the average practice and the actual cost. For example, a value of 0.90 means that the same production can be realized at 90 percent of the actual cost (relative to the average practice). In other words, there is an efficiency gain of 10 percent compared to the average practice. Figure 2 consists of eleven sub-figures, each of them representing one CGD. Each subfigure presents the cost efficiency through the years for the specific CGD. In order to give an impression of the reliability of the estimates, each subfigure also includes two dotted lines representing the 95% upper and lower bounds of the estimates. This way of presentation makes it rather easy to compare the longitudinal and cross-sectional outcomes.
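A minimal sketch of this calculation is given below, assuming estimated θ parameters and determinant values are available. The parameter values and data are invented, and anchoring the benchmark to the average practice via the sample mean of the inefficiency term is our reconstruction of the paper's description, not its actual code.

```python
import numpy as np

# Hypothetical estimated parameters for the five retained determinants
# (theta values and z data below are made up for illustration).
theta = np.array([0.05, -0.03, 0.08, 0.02, 0.04])

# z: determinant values per CGD-year observation (rows) and determinant (columns).
z = np.exp(np.random.default_rng(1).normal(size=(88, 5)))

# Equation (4): the (in)efficiency term that enters log costs additively.
eff_term = np.exp(-(np.log(z) * theta).sum(axis=1))

# Benchmark against the average practice: a CGD's costs deviate from the
# average-practice costs by exp(eff_term - mean(eff_term)), so the cost
# efficiency score (average-practice cost / actual cost) is the inverse.
cost_efficiency = np.exp(eff_term.mean() - eff_term)
print(cost_efficiency.round(2)[:11])   # e.g. 0.90 = same output feasible at 90% of cost
```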

Figure 2 shows that there are substantial differences in the cost efficiency of CGDs. For example, the efficiency of the CGDs of IA, EA and JS appears to be on average only 70 to 80 percent of the average practice over the years 2012-2019. At IA in particular, a considerable improvement can be seen in recent years. The CGD of Defence far exceeds the other CGDs in terms of cost efficiency. A negative trend can be observed at the CGD of FA.

Figure 2
Cost efficiency CGDs, 2012-2019 (including 95% confidence intervals)

The results have a certain degree of statistical uncertainty. Therefore, in addition to the point estimates, the interval within which cost efficiency falls with 95 percent certainty is also indicated. From this it becomes clear that only Def has a higher cost efficiency than the average CGD in all years. For GA and SA, this applies to seven of the eight years examined. The CGD of JS scores significantly lower than the average CGD in all eight years.

Based on the estimation results in table 2 we can also analyse the determinants of cost efficiency (θ5, θ7, θ9, θ10, and θ11). A positive sign on a parameter in table 2 implies an upward effect on costs and thus a lower cost efficiency. The explanation of the negative effect of absenteeism on cost efficiency is straightforward: absenteeism corresponds to higher costs for replacement or to decreasing production. The positive cost efficiency effect of the entrance ratio is less evident. A high entrance ratio may initially lead to extra costs related to recruitment and onboarding. The entrance ratio can also be an indication of the influence of young workers with higher labour output or lower wage costs. Another hypothesis is that, due to a new influx, the organization can be better aligned with actual needs, especially if new employees replace retirees who were less productive or entrenched in older bureaucratic norms. The negative effect of the working time factor reflects the positive contribution that part-time employees make to the operation of the organization (Künn-Nelen et al., 2013; ROA, 2011). One hypothesis is that part-time workers are more productive because they do not work the low-productive hours of the day or week (Collewet and Sauermann, 2017). On the other hand, more overhead costs are incurred per hour worked, including office space, HRM services and payroll administration. It cannot be ruled out that both mechanisms are at work but cannot be accommodated by the current linear model specification. External hiring can theoretically have two effects. On the one hand, external hiring is usually more flexible and therefore more efficient. On the other hand, the wage costs per hour worked are likely to be higher, because the hourly rate includes the intermediary company's margin and a risk premium for idle periods that are not borne by the CGD. The negative effect of a high cost share of material may indicate an overly “exuberant” purchase of goods and services. A well-known phenomenon is that budget surpluses at the end of the year are still spent on all kinds of purchases and hiring. Material expenditures lend themselves better to this than personnel expenditures.



5 Conclusions and recommendations


Research into the productivity and efficiency of the public sector usually focuses on the provision of so-called final public services, such as health care or education. The productivity of public organizations involved in policy making and control is rarely examined, due to the lack of a clear definition of the services delivered. This analysis of the productivity and efficiency of the Dutch CGDs is a first step to filling this gap. It presents a limited set of available service indicators that also captures – in a statistical sense – many underlying indicators. It shows that a very large part of the cost variation of CGDs is represented by this set of indicators. In order to provide more insight into the underlying factors explaining productivity differences, a set of efficiency determinants – mostly HRM-related variables – is also included in the model and tested. Obviously, in a labour-intensive industry like this, such variables may affect cost efficiency more substantially than in other sectors.

The database used for the analysis consists of 88 observations (8 years with 11 CGDs) and contains several product indicators, cost categories and efficiency determinants for each CGD. Based on the data and an advanced regression method, a cost function is estimated from which the research results are derived. On this basis, we can draw the following conclusions.

The most important conclusion is that cost efficiency varies greatly among CGDs. The most efficient are the CGDs of GA, SA and Def. The CGD of GA owes its high score to its favourable working time factor and low absenteeism due to illness, the CGD of SA mainly to its low use of material supplies and the CGD of Def mainly to its low absenteeism due to illness. The CGD of JS has the lowest cost efficiency, mainly caused by high absenteeism and a relatively high use of material supplies. Room for improvement therefore exists, as demonstrated by an improvement in recent years, due mostly to a relatively lower use of material supplies. The efficiency differences are independent of the case mix of the policy dossiers, since any differences in case mix have already been controlled for when determining the cost efficiency. Because case mix is not measured directly, it cannot be ruled out that the estimated case mix absorbs some of the unobserved inefficiency. Since some relevant efficiency determinants might not be included, some omitted variable bias may also occur. The actual efficiency differences could therefore be even greater than presented, in particular when a CGD is structurally inefficient.

The analysis of the effects of a number of efficiency determinants shows that high absenteeism rates, high working time factors, high shares of external hiring and high material costs lead to low cost efficiency. A high entrance ratio of new employees contributes to high cost efficiency. These results provide important indications of opportunities to improve efficiency. In particular, the most significant gains can be made by reducing absenteeism due to illness, increasing the number of part-timers and reducing the use of material supplies. This may vary per CGD.

Based on the research results, it appears that no generic productivity trend for the CGDs can be established: there appear to be no technical or institutional developments that affect the productivity of all CGDs equally. New IT systems and changed work processes, as well as new regulations in the field of safety or the environment, could influence productivity. Additional costs to meet environmental requirements could even have contributed to lower generic productivity. Another cause may be the growing complexity of the tasks to be performed. This is a phenomenon that is difficult to influence, although there may be opportunities to reduce the increasing bureaucratic complexity. Further, the figures also show an extraordinarily high overhead.

The analysis also shows that there are significant differences in the average case mix. For example, at FA, handling a document or parliamentary question costs more than 120 percent more than average. For the CGDs of ED, SA, Def and GA, the case mix is only 60 percent of the average CGD. For the sake of completeness, we emphasize that the presented cost efficiencies have already been controlled for these case mix differences. Based on these findings, we make three recommendations.

1) Shrinking budgets 
Given the large differences in cost efficiency between the CGDs, there still seems to be room for improvement in several CGDs. Because of the permanent intrinsic pressure to expand the bureaucracy (Niskanen’s Law, see Niskanen, 1968) and to make full use of available budgets (Bowen’s Law, see Archibald and Feldman, 2008; Bowen, 1980), there are few incentives for the official leadership to use that room. It must therefore be enforced by politicians and then be addressed or settled by the management. As demonstrated in previous productivity research, the shrinkage of budgets is an effective tool. Of course, for the management of the CGDs it must be clear that costs can be reduced. To this end, the insights from this research can be helpful, such as reducing absenteeism due to illness and increasing part-time work. A critical look at external hiring and material costs can also yield efficiency gains. In the long term, this may result in an efficiency gain of tens of percent for some CGDs.

2) Targeted research into the causes of lack of productivity growth
Further research can be carried out into the cause of the lack of a generic productivity trend in the CGDs during the research period. Based on the findings using available data, it is impossible to deduce whether the constant productivity is the result of a lack of focus on productivity-enhancing innovations or whether the CGDs are increasingly confronted with more complex tasks and laws and regulations and with stricter requirements with regard to personnel policy, sustainability and quality assurance. To gain insight into these issues, more detailed data about the business operations are needed.

3) Accounting in order
To be able to carry out these types of analyses, it is important to have access to good government accounting. During our research, we found that precise accounting was lacking. For example, it appears that not all departments define organizational costs in the same way, that financial reports in the sphere of shared services are handled carelessly and that delivered services and performance are not accounted for at all. The latter is particularly noteworthy because the CGDs do assess the underlying government agencies on this point. An improvement in transparency and accountability is therefore highly recommended.



Notes


* We would like to thank prof. Vivian Valdmanis and two anonymous referees for their valuable comments on earlier versions of this paper.


Disclosure statement


The authors declare that there is no conflict of interest.

References


  1. Alvarez, A. [et al.], 2006. Interpreting and testing the scaling property in models where inefficiency depends on firm characteristics. Journal of Productivity Analysis, 25(3), pp. 201-212 [CrossRef]

  2. Archibald, R. B. and Feldman, D. H., 2008. Explaining Increases in Higher Education Costs. The Journal of Higher Education, 79(3), pp. 268-295 [CrossRef]

  3. Barton, L. and Barton, H., 2011. Challenges, issues and change: what’s the future for UK policing in the twenty-first century? International Journal of Public Sector Management, 25(3), pp. 415-433 [CrossRef]

  4. Bikker, J. and van der Linde, D., 2016. Scale economies in local public administration. Local Government Studies, 42(3), pp. 441-463 [CrossRef]

  5. Blank, J. L. T. [et al.], 2009. Beleidsdruk gemeten. Delft: TU Delft.

  6. Blank, J. L. T. and Lovell, C., 2000. Performance assessment in the public sector: contributions from efficiency and productivity measurement. In J. L. T. Blank (ed.). Public provision and performance. Amsterdam: Elsevier.

  7. Blank, J. L. T. and Niaounakis, T. K., 2019. Managing Size of Public Schools and School Boards: A Multi-Level Cost Approach Applied to Dutch Primary Education [CrossRef]

  8. Blank, J. L. T. and Valdmanis, V. G., 2019. Principles of productivity measurement; an elementary introduction to quantitative research on the productivity, efficiency, effectiveness and quality of the public sector. Delft: IPSE Studies.

  9. Blank, J. L. T., 2020. The use of the scaling property in a frontier analysis of a system of equations. Applied Economics, 52(49), pp. 5364-5374 [CrossRef]

  10. Blank, J. L. T., Enserink, B. and Van Heezik, A. A. S., 2019. Policy Reforms and Productivity Change in the Dutch Drinking Water Industry: A Time Series Analysis 1980-2015. Sustainability, 11(12), 3463 [CrossRef]

  11. Bowen, H. R., 1980. The Costs of Higher Education: How much do colleges and universities spend per student and how much should they spend? San Francisco: Jossey-Bass.

  12. Collewet, M. and Sauermann, J., 2017. Working hours and productivity. Labour Economics, 47, pp. 96-106 [CrossRef]

  13. Fried, H. O., Lovell, C. A. K. and Schmidt, S. S., 2008. The measurement of productive efficiency and productivity growth. New York: Oxford University Press [CrossRef]

  14. Goede, M. de [et al.], 2016. Drivers for performance improvement originating from the Dutch drinking water benchmark. Water Policy, 18(5), pp. 1247-1266 [CrossRef]

  15. Haelermans, C. and Blank, J. L. T., 2012. Is a school’s performance related to technical change? A study on the relationship between innovations and secondary school productivity. Computers & Education, 59, pp. 884-892 [CrossRef]

  16. Haelermans, C., De Witte, K. and Blank, J. L. T., 2012. On the allocation of resources for secondary schools. Economics of Education Review, 31(5), pp. 575-586 [CrossRef]

  17. Hollingsworth, B., 2008. The measurement of efficiency and productivity of health care delivery. Health Economics, 17(10), pp. 1107-1128 [CrossRef]

  18. Hood, C. and Dixon, R., 2015. A Government that Worked Better and Cost Less? Evaluating Three Decades of Reform and Change in UK Central Government. Oxford, UK: Oxford University Press [CrossRef]

  19. Kumbhakar, S. C. and Lovell, C. A. K., 2000. Stochastic frontier analysis. New York: Cambridge University Press [CrossRef]

  20. Kumbhakar, S. C., Parmeter, C. F. and Zelenyuk, V., 2020. Stochastic Frontier Analysis: Foundations and Advances I. In: C. Ray et al. (ed.). Handbook of production economics. Springer Nature Singapore Pte Ltd [CrossRef]

  21. Künn-Nelen, A., De Grip, A. and Fouarge, D., 2013. Is Part-Time Employment Beneficial For Firm Productivity? ILR Review, 66(5), pp. 1172-1191 [CrossRef]

  22. Lindstrom, M. J. and Bates, D. M., 1990. Nonlinear Mixed Effects Models for Repeated Measures Data. Biometrics, 46(3), pp. 673-687 [CrossRef]

  23. Lopez, M., Dollery, B. and Byrnes, J., 2009. An empirical evaluation of the relative efficiency of roads to recovery expenditure in New South Wales local government, 2005/06. Australasian Journal of Regional Studies, 15(3), 311-328.

  24. Ministerie van Binnenlandse Zaken, 2009. Beleidsdruk in beeld: een kwantitatieve vergelijking van directoraten-generaal. Den Haag: Ministerie van Binnenlandse Zaken.

  25. Niaounakis, T. K. and van Heezik, A. A. S., 2019. Op afstand de beste? Een analyse van de productiviteitsontwikkeling bij IND, CJIB, SVB, RDW en het Kadaster. Delft: IPSE Studies.

  26. Niskanen, W. A., 1968. The Peculiar Economics of Bureaucracy. American Economic Review, 57(2), 293-321.

  27. Parmeter, C. F., 2018. Estimation of the two-tiered stochastic frontier model with the scaling property. J Prod Anal, 49, pp. 37-47 [CrossRef]

  28. ROA, 2011. De arbeidsmarkt naar opleiding en beroep tot 2016. Reports. Maastricht: Researchcentrum voor Onderwijs en Arbeidsmarkt/Maastricht University.

  29. Wang, H. J. and Schmidt, P., 2002. One-step and two-step estimation of the effects of exogenous variables on technical efficiency levels. Journal of Productivity Analysis, 18(2), pp. 129-144 [CrossRef]

  30. Weber, M., 1922. Economy and Society.