UWE Bristol Economics at the 9th IIPPE Annual Conference in Political Economy

Posted on

By Sara Stevano, Susan Newman and Lotta Takala-Greenish.

On 12-14th September 2018, the 9th IIPPE Annual Conference in Political Economy took place at Juraj Dobrila University of Pula, Croatia. Keeping up with recent years’ record, UWE Economics was very well represented at the conference! The conference was organised around the overarching theme of ‘The State of Capitalism and the State of Political Economy’ and over 300 scholars and activists from across the world discussed their political economy research, touching upon various facets of capitalist transformations and pushing the frontiers of political economy. The conference organisers reported that many participants thought that this was the best IIPPE conference so far!

Among the keynote speeches, a panel shared by Professor Lena Lavinas, Professor of Welfare Economics at the Federal University of Rio de Janeiro, and Professor Fiona Tregenna, University of Johannesburg South African Research Chair in Industrialisation, stood out for its original content. Professor Lavinas highlighted the shifts in social programmes to increase financial inclusion. She commented on the contribution of social service programmes to GDP, 1.5% of GDP for developing countries and 2.7% for OECD countries, and connected these to the accumulation of debt among low-income households (see the excellent Twitter feed by Ingrid H. Kvangraven). Professor Tregenna focused on the need to unpack different forms of de-industrialisation and to explore the perspective that Marx’s analysis can offer to understanding industrialisation. In particular, her insights included an expanded focus on the heterogeneity within sectors and the inseparability of production and consumption (see also this blog post for further insights on the IIPPE2018 conference).

Reflections on the state of capitalism are very relevant and timely in the context of shifting geographies of production, global relations of power and political discourse. Thus, it is all the more important to discuss how political economy research can help us understand and shape the economic, social and political transformations that mark our time. Critical political economy has an important role to play in transforming and revitalising economics, making it an inclusive and relevant area of study.

The three UWE Economics researchers in attendance this year contributed to panels on neoliberalism, the political economy of work, social reproduction and commodity studies. Dr Lotta Takala-Greenish presented her research on Exploring formal/informal work structures in South African waste collection (slides available here) in a panel described by the audience as one of the most interesting of the conference. The panel, shared with Professor Stephanie Allais of the University of Witwatersrand, put forward important questions about the role of training and learning (both on and off the job) and the connections between education and labour markets. It also provided a forum to discuss and develop future collaborations with the South African Research Chair for Skills Development at the Centre for Researching Education and Labour. Dr Susan Newman presented her joint paper with Sam Ashman on New Patterns in Capital Flight from South Africa and discussed the preliminary findings of her joint paper with Dr Sara Stevano on The neoliberal restructuring of UK Overseas Development Assistance (slides available here); both papers were much appreciated by the audience, who found them revealing and timely. Sara Stevano presented her paper on Women’s work in Mozambique: Gender, social differentiation and social reproduction (slides available here) in a great all-women panel on social reproduction and the political economy of work.

Across several sessions, there was much discussion of the future of pluralist economics and education, where UWE Economics was highlighted as a leading institution. UWE Economics is now considered an established centre for critical political economy, with possibly the largest concentration of critical political economists in a UK university. UWE’s recent recruitment of pluralist economists has been noted widely and was reflected in questions about future recruitment plans. Participation of UWE Economics in IIPPE continues to reaffirm the presence of our group in current political economy debates and generates opportunities for collaboration with colleagues in the UK and beyond. UWE Economics academics are involved with IIPPE in various capacities. Susan Newman oversees the content published on the IIPPE website and coordinates the working group on commodity studies; Sara Stevano coordinates the social reproduction working group with Hannah Bargawi (SOAS); Lotta Takala-Greenish set up and previously coordinated the working group on the Minerals Energy Complex and Comparative Industrialisation.

One of the key aims of IIPPE is to provide a platform for early career researchers to interact with more established and senior scholars in political economy. The conference provided an opportunity to benchmark and share information about postgraduate training in political economy. The UWE MSc in Global Political Economy was mentioned as one of only a handful of degrees providing an interdisciplinary political economy approach housed within an economics department. The first intake of UWE’s MSc Global Political Economy students will be submitting their dissertations at the end of September and are being encouraged to submit their research to present at the next IIPPE conference in July 2019. We are also welcoming our new 2018-2019 MSc students, who will no doubt contribute to the active research environment that we have here at UWE Bristol.


First speaker announced for 2018/19 BCEF Economic Research Seminar Series

Posted on

On Thursday, 27th September, we will have the pleasure of hearing a presentation by our guest Steven Bosworth (University of Reading).

Steven will present his joint paper with Dennis J. Snower (Kiel) on the topic: “Organizational Ethics, Narratives and Social Dysfunctions”.

Paper Abstract:

All organisations are characterised by some degree of conflict between their members’ private interests and the organisation’s mission. This may manifest in corruption, fraud, or, more banally, shirking. In response, leaders can try to mould the identities of workers to make them more sensitive to the social costs of their actions.

We explicitly model the social interactions and constraints giving rise to this process, deriving an endogenous profile of wages, monitoring, and organisational culture. In this way we provide a theory of organisational dysfunction, and show how such dysfunctions might be mitigated through changes in government policies or social norms. These changes become particularly effective if they encourage both managers and workers to adopt more ethical narratives – organisational culture change is in this case self-reinforcing. Ineffective narratives on the other hand can cause pushback from employees when managers adopt a more ethically ‘strict’ stance. We derive the conditions under which beneficial or countervailing feedback effects can occur.

Dr Steven Bosworth is a behavioural economist working as a Lecturer at the University of Reading. His research uses microeconomic theory and controlled laboratory experiments to investigate how context, motivation and the social environment influence human cooperation. He has published on the topics of uncertainty and coordinated decisions, the distribution of prosocial dispositions in society and competition, and the consequences of social fragmentation for wellbeing.

Before joining the University of Reading in 2017, Steven was a postdoctoral researcher at the Institute for the World Economy in Kiel, Germany, where he maintains an affiliation.

More information about Steve and his list of publications can be found here.


Beyond pay gaps: Inequality at work

Posted on

By the researchers of the “Earnings gaps and inequality at work” project, Bristol Business School.

On 25 May 2018, UWE Economics hosted an expert workshop on ‘Beyond Pay Gaps: Inequality at Work’. Six experts were invited to share their reflections, based on their own research, on two questions:

1) What is the nature of inequality at work?

2) Is the pay gap an adequate indicator? If not, how can we improve our assessments of inequality at work?

The key aim was to foster a discussion on how to conceptualise and study inequality at work. In an earlier blog entry the workshop organisers provided a response to UWE’s reporting on the gender pay gap, which highlighted the fact that some progress on the gender pay gap is not in itself a sign of overall success. There are aspects of inequality at work that are not captured by pay indicators and nonetheless merit our attention.

The morning session of the workshop focused on conceptualisations of inequality at work and featured presentations by three distinguished scholars of labour and inequality. Dr Alessandra Mezzadri (SOAS University of London) drew on her long-standing research on the garment industry in India to highlight patterns of inequality and gender exploitation. Professor Bridget O’Laughlin (Institute of Social Studies) reflected on the concepts of Marx’s political economy framework, as well as its conceptual gaps, for studying inequality at work. Professor Harriet Bradley (UWE Bristol) illustrated how a three-part conceptual framework based on production, reproduction and consumption can be used to conceptualise gender inequality at work.

In the afternoon session, three distinguished academics on gender, organisation and inequality presented on methodological approaches to study inequality at work. Dr Hannah Bargawi (SOAS University of London) discussed how a pyramid-shaped understanding of inequality at work can guide us through moving our focus between different levels of inequality. Dr Olivier Ratle (UWE Bristol) presented the qualitative methods used to study early career academics’ experience of work. Dr Vanda Papafilippou (UWE Bristol) described a range of methods from the field of sociology of education to study the workplace.

The presentations generated rich discussions on the conceptualisations of social reproduction, the complexity of inequality and the relations between the material and the cultural. The participants agreed that research on these themes is both timely and needed. Furthermore, a podcast series on ‘Feminism, Gender and the Economy’ featuring two interviews with workshop speakers will be launched in the 2018/2019 academic year. Watch this space for the upcoming podcast series!

This workshop was funded by UWE Bristol. The workshop’s organisers are grateful to all participants for their thoughtful contributions and productive discussions.

The Role of Social Norms in Incentivising Energy Reduction in Organisations

Posted on

By Peter Bradley

UWE Economics researcher Peter Bradley has just published a chapter on “The Role of Social Norms in Incentivising Energy Reduction in Organisations” in collaboration with Matthew Leach and Shane Fudge. This is part of a collaboration by leading international academics to develop a research handbook on employee pro-environmental behaviour. The work stems from the UWE Economics group’s sustainability-related research.

The Research Handbook on Employee Pro-Environmental Behaviour brings together contributions that consolidate existing research in the field as well as adding new insights from organisational psychology, human resource management and social marketing.

The whole book is available to download from Edward Elgar Publishing:

Research Handbook on Employee Pro-Environmental Behaviour edited by Victoria K. Wells, Diana Gregory-Smith and Danae Manika.


Using the Indices of Multiple Deprivation – it is (so much) more than just a top-line indicator.

Posted on

By Ian Smith

There has been a lot of interest in measuring disadvantage over the past 20 years in the UK, even if this has not always been matched by government responses. The fifth iteration of the English IMD is to be reviewed over the next 12 months.  Clearly disadvantage is a complex thing and can be represented in many different ways.  As a geographer (or someone who periodically claims to be a geographer hiding in an Economics Department) I am particularly interested in area-based assessments of disadvantage.  I know such measures are problematic, but what indicators are not?  I have recently had the opportunity, with colleagues, to review how the English Indicator of Multiple Deprivation works on behalf of Power to Change (see https://www.powertochange.org.uk/), and this is a short blog that captures some of the thinking that came out of that work (any errors or misinterpretations are all my/our fault and not necessarily shared by anyone at Power to Change).

So, the English IMD is a second-generation indicator of area-based deprivation that represents 7 ‘dimensions’ (or 10 sub-dimensions if you like) of disadvantage from worklessness to housing affordability, from health (mental and physical) to distance from your nearest post office. It is ‘second generation’ because it is not solely dependent on small area census data (as ‘first generation’ indices are/were) but is based on a range of small area administrative and census data from different sources within English government.

I am a fan. It is lovely.  My colleagues in other European countries are jealous of it (the basic model is oft copied) – both because of its breadth of content and because of our lovely regular statistically ordered lower super output areas (LSOAs), which sometimes get conflated with neighbourhoods.  However, an indicator is a conceptual model of a real concept.  As George Box pointed out – all models are wrong, but some [of the better ones] are useful.  We and Power to Change were interested in posing the question: how useful is the IMD to Power to Change?

In particular, we were interested in how the IMD is used within a particular organisational context (Power to Change). We set up a set of dimensions to help us think about how an indicator (a statistical instrument ‘designed’ to perform a task) is constructed and deployed.  We asked people in Power to Change how they used the IMD and what was their assessment of the strengths and weaknesses for what they needed to do: investing in community businesses that alleviate disadvantage in England.  What struck us in these conversations was that the IMD was only being used in its top-line indicator format – what was being missed was the opportunity to use the IMD as an indicator system that can be moulded to the specific objectives of an organization.

We explored how to use the IMD as a system of indicators to shine a light on a specific objective: investing in community businesses. We compared spatial targeting at LSOA level for the top-line IMD indicator (the full 7-dimensional one) with the spatial targeting from a bespoke indicator bringing together the health and disability, education and qualifications, and geographic access to services dimensions.  Power to Change has hypothesised that community businesses, some of which provide local services, may impact on employability (skills) and on the health of residents in the communities that community businesses serve.  So, we constructed a focused indicator from components of the top-line IMD, covering only geographic access to services, education and health (for details see Smith et al 2018).  We compared how the focused IMD indicator would spatially target the attention of Power to Change in comparison to the top-line IMD indicator, with a particular focus on the city-region of Liverpool and the County of Suffolk as examples of areas of interest for Power to Change.  We then mapped out the differences (using data and shapefiles obtained under a public licence), showing firstly the map of the top-line IMD indicator, secondly our ‘new’ indicator focusing on Power to Change’s priorities, and thirdly what difference it makes in targeting.  These maps are shown in Figure 1 (for Liverpool) and Figure 2 (for Suffolk).  We have used the somewhat arbitrary threshold of 30% to indicate disadvantage (the most disadvantaged areas to be targeted) and compared the indicators.
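For readers who want to experiment with this kind of re-weighting, the sketch below shows one minimal way to build a bespoke indicator from domain ranks and compare its 30% flag with the top-line flag. It is illustrative only: the file and column names (imd_rank, health_rank, education_rank, access_to_services_rank) are hypothetical, and averaging domain ranks is a simplification rather than the method used in our report or the official IMD methodology.

```python
import pandas as pd

# Hypothetical LSOA-level file with the top-line IMD rank and domain ranks
# (rank 1 = most deprived).
lsoas = pd.read_csv("imd_domain_ranks_by_lsoa.csv")

# Bespoke indicator: average the ranks of the three domains of interest and
# re-rank LSOAs on that average. (A simplification: the official IMD combines
# transformed domain scores with fixed weights.)
domains = ["health_rank", "education_rank", "access_to_services_rank"]
lsoas["bespoke_rank"] = lsoas[domains].mean(axis=1).rank(method="min")

# Flag the most disadvantaged 30% of LSOAs under each measure.
cutoff = 0.30 * len(lsoas)
lsoas["topline_30"] = lsoas["imd_rank"] <= cutoff
lsoas["bespoke_30"] = lsoas["bespoke_rank"] <= cutoff

# Cross-tabulation mirrors the map categories: flagged by both (red),
# bespoke only (orange), neither (blue).
print(pd.crosstab(lsoas["topline_30"], lsoas["bespoke_30"]))
```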

Figure 1

The left-hand side map in both Figure 1 and Figure 2 shows neighbourhoods marked relative to the top-line IMD indicator where the deepest green areas are the most disadvantaged. In the middle map the same rule applies.  The right-hand map in these Figures shows what difference it makes for these areas.  In this right-hand map, the red areas are those that are marked as the most disadvantaged 30% under both indicators.  The blue areas are ‘advantaged’ under both measures.  However, the orange areas are marked as disadvantaged under the ‘better places’ indicator but not under the top-line IMD.

Figure 2

Given the greater weight given to access to services (albeit direct distance accessibility based on 2012 data), it is not surprising that Suffolk LSOAs become more disadvantaged under this measure. Thus, more of Suffolk becomes ‘disadvantaged’ (falling within the 30% most disadvantaged in England) on this measure than under the top-line IMD, with nearly half of the county now flagged (more of Suffolk’s third map is coloured orange).  Perhaps it is of greater surprise that the prioritisation of Liverpool changes little under the new formulation.  Most of Liverpool’s neighbourhoods remain identified as ‘disadvantaged’ (marked as red in the third map along).

This is, however, just a schema for moving resources around. It is an inevitable result of re-calculating the target IMD measure that some areas gain whilst others lose out (where resources are fixed). However, if areas in Suffolk gain whilst neighbourhoods in Liverpool do not lose out, then how would such a change modify the geography of disadvantage [under this measure] across England?  Using the 30% figure as the threshold of disadvantage, just under half a million fewer people would be designated as living in a ‘disadvantaged’ area.  We carried out cluster analysis of the rankings on the top-line IMD indicator and on our suggested Power to Change indicator, looking firstly at how LSOAs cluster together (using forms of hot spot analysis) to capture how patterns of disadvantage form broad regions, and secondly at the identification of outlier neighbourhoods (using Anselin’s Local Moran’s I) to capture differences within these wider clusters.
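As a rough illustration of the kind of cluster analysis described here, the sketch below uses the open-source PySAL libraries (libpysal and esda) to compute Getis-Ord hot spots and Anselin Local Moran’s I values for an LSOA-level ranking. The file and column names are hypothetical, and this is a generic recipe rather than the exact specification used in our report.

```python
import geopandas as gpd
from libpysal.weights import Queen
from esda.getisord import G_Local
from esda.moran import Moran_Local

# Hypothetical shapefile of LSOAs carrying the indicator ranking to be analysed.
gdf = gpd.read_file("lsoas_with_indicator_rank.shp")

w = Queen.from_dataframe(gdf)  # contiguity weights: LSOAs sharing a boundary
w.transform = "r"              # row-standardise the weights

y = gdf["indicator_rank"].values

# Getis-Ord G*: broad hot/cold spots, i.e. regions of similarly ranked LSOAs.
gstar = G_Local(y, w, star=True)
gdf["hotspot_z"] = gstar.Zs

# Anselin Local Moran's I: outliers within wider clusters, e.g. a disadvantaged
# LSOA surrounded by advantaged neighbours.
lisa = Moran_Local(y, w)
gdf["lisa_quadrant"] = lisa.q  # 1=high-high, 2=low-high, 3=low-low, 4=high-low
gdf["lisa_p"] = lisa.p_sim     # pseudo p-values from permutations

gdf.plot(column="hotspot_z", legend=True)
```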

Figure 3

On Figures 3 and 4 the LSOAs that are marked as red are ones that appear as advantaged (close to other advantaged areas). In these Figures we have a left-hand map that shows the clustering of indicator rankings in relation to Suffolk.  The middle map shows the Getis-Ord clustering for England as a whole, whilst the right-hand map shows the Local Moran’s I map, which shows where areas are located as outliers in wider regions.  Where there is red there is advantage and where there is blue there is disadvantage (from an area-based perspective).  Yellow areas are mixed (an area’s ranking is not easily predicted from the ranking of its neighbours).  It is also worth noting that the red and the blue areas are not necessarily all of the most (dis)advantaged areas – just areas that are close to others that are similarly ranked (whether high or low).

Figure 4

It is not surprising to see clusters of disadvantaged (blue) areas in England’s northern metropolitan areas, in the West Midlands and in the extreme South West in Figure 3, which maps out the top-line IMD indicator. It is also not surprising to see the East and North of London marked as deep blue, although it is worth noting that the former Kent Coalfield areas remain marked as disadvantaged in blue. So, it is England to the south of the Wash to Severn axis, as well as North Yorkshire, that is marked as ‘advantaged’ under the top-line IMD indicator.  The Anselin outlier mapping (right-hand map) in Figure 3 points out the presence of disadvantaged LSOAs in advantaged clusters and of advantaged LSOAs in disadvantaged clusters.

Moving to the Power to Change indicator in Figure 4, we see a change in the geography that might be targeted (in this case by investment in community businesses). More rural areas in the East and South West of England become identified as ‘disadvantaged’.  Areas in the East and North of London are no longer identified as disadvantaged in terms of the clustering of this measure’s ranking.  There is a different dynamic – to be a disadvantaged area in London is to be surrounded by advantaged areas.  The East of England (including Suffolk) becomes identified with the cluster of disadvantage, although there are clearly still advantaged area outliers in the sea of blue disadvantaged areas, just as there are disadvantaged areas within the advantaged region of London.  It has to be stressed that this applies only to forms of disadvantage that flow from combinations of problematic educational, health and accessibility outcomes.  There would be a case for an organisation like Power to Change to use a form of IMD that relates specifically to their core mission as a spatial guide to targeting, rather than just using the top-line IMD indicator.

The aim of the exercise is not to rubbish the general top-line IMD. I am still a fan – it still offers useful insight into the patterns of generalised area-based disadvantage across England.  The English IMD is still useful to Power to Change in a general sense.  However, the aim here has been to draw attention to the fact that deploying the indicator system in the light of what one is trying to achieve makes better use of the IMD system.  The East and North of London is clearly a region with many disadvantaged areas, but if the aim of the exercise is to invest in community businesses that improve access to services, health and educational outcomes, there might be better areas on which to focus this specific form of investment.  Whatever form of analysis we come up with to capture disadvantage, there is always a set of political choices about how to share out public spending.  However, the English IMD is more than just the top-line indicator, and the top-line IMD was never intended to be the only way in which area-based disadvantage is represented.

In this delicate dance of spatial targeting, though, the real answer is to invest more in welfare services. Perhaps that is one normative step too far?

If you want to read more about our work with Power to Change, please download the report we wrote for them (available from September).

Smith, I., Green, E., Whittard, D. and Ritchie, F. (2018) Re-thinking the indices of multiple deprivation (for England): a review and exploration of alternative/complementary area-based indicator systems. Final Report. Bristol Centre for Economics and Finance (BCEF) in the Bristol Business School at the University of the West of England (UWE).

Measuring non-compliance with minimum wages

Posted on

By Professor Felix Ritchie

When a minimum wage is set, ensuring that employees do get at least that minimum is a basic requirement of regulators. Compliance with the minimum wage can vary wildly: amongst richer countries, around 1%-3% of wages appear to fall below the minimum but in developing countries non-compliance rates can be well over 50%.

As might be expected, much non-compliance exists in the ‘informal’ economy: family businesses using relatives on an ad hoc basis, cash-only payments for casual work, agricultural labouring, or simply the use of illegal workers. However, there is also non-compliance in the formal economy. This is analysed by regulators using large surveys of employers and employees which collect detailed information on hours and earnings. This analysis allows them to identify broad characteristics and the overall scale of non-compliance in the economy.

In the UK, enforcement of the minimum wage is carried out by HM Revenue and Customs, supported by the Low Pay Commission. With 30 million jobs in the UK, and 99% of them paying at or above the minimum wage, effective enforcement means knowing where to look for infringements (for example, retail and hospitality businesses tend to pay low, but compliant, wages; personal services are more likely to pay low wages below the minimum; small firms are more likely to be non-compliant than large ones, and so on). Ironically, the high rate of compliance in the UK can bring problems, as measurement becomes sensitive to the way it is calculated.

A new paper by researchers at UWE and the University of Southampton looks at how non-compliance with minimum wages can be accurately measured, particularly in high-income countries. It shows how the quantitative measurement of non-compliance can be affected by definitions, data quality, data collection methods, processing and the choice of non-compliance measure.

The paper shows that small variations in these can have disproportionate effects on estimates of the amount of non-compliance. As a case study, it analyses the earnings of UK apprentices to show, for example, that even something as simple as the number of decimal places allowed on a survey form can have a significant effect on the non-compliance rates.
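As a toy illustration of that sensitivity (using made-up figures and an illustrative wage floor, not the paper’s data), the snippet below shows how a derived hourly rate sitting just below the minimum can appear compliant once it is rounded to the two decimal places a form allows.

```python
# Toy example: rounding a derived hourly rate to 2 decimal places can flip a
# worker across the compliance line. All figures are illustrative only.
MINIMUM_WAGE = 7.83  # illustrative hourly floor, not an official rate

# (weekly pay in GBP, weekly hours) as they might be reported on a survey form
workers = [(278.25, 35.0), (273.91, 35.0), (282.00, 36.0)]

for weekly_pay, hours in workers:
    exact = weekly_pay / hours   # derived hourly rate
    rounded = round(exact, 2)    # as stored when only 2 d.p. are allowed
    print(f"exact={exact:.4f}  "
          f"compliant(exact)={exact >= MINIMUM_WAGE}  "
          f"compliant(2 d.p.)={rounded >= MINIMUM_WAGE}")

# The middle worker is non-compliant on the exact rate (7.8260) but appears
# compliant once the rate is rounded up to 7.83.
```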

The study also throws light on the wider topic of data quality. Much research is focused on marginal analyses: looking at the relative relationships between different factors. These don’t tend to be obviously sensitive to very small variations in data quality, but that is partly because it can be harder to identify sensitive values.

In contrast, non-compliance with the minimum wage is a binary outcome: a wage is either compliant or it is not. This makes tiny variations (just above or just below the line) easier to spot, compared to marginal analysis. Whilst this study focuses on compliance with the minimum wage, it highlights how an understanding of all aspects of the data collection process, including operational factors such as limiting the number of significant digits, can help to improve confidence in results.

Ritchie F., Veliziotis M., Drew H., and Whittard D. (2018) “Measuring compliance with minimum wages”. Journal of Economic and Social Measurement, vol. 42, no. 3-4, pp. 249-270. https://content.iospress.com/articles/journal-of-economic-and-social-measurement/jem448

“A Remarkable National Effort”: The Dismal Arithmetic of Austerity

Posted on

Dr Rob Calvert Jump and Dr Jo Michell assess public debt accounting in this article.

In a recent tweet, George Osborne celebrated the fact that the UK now has a surplus on the government’s current budget. Osborne cited an FT article noting that “… deficit reduction has come at the cost of an unprecedented squeeze in public spending. That squeeze is now showing up in higher waiting times in hospitals for emergency treatment, worse performance measures in prisons, severe cuts in many local authorities and lower satisfaction ratings for GP services.”

It is a measure of how far the debate has departed from reality that widespread degradation of essential public services can be regarded as cause for celebration.

The official objective of fiscal austerity was to put the public finances back on a sustainable path. According to this narrative, government borrowing was out of control as a result of the profligacy of the Labour government. Without a rapid change of policy, the UK faced a fiscal crisis caused by bond investors taking fright and interest rates rising to unsustainable levels.

Is this plausible? To answer, we present alternative scenarios in which actual and projected austerity is significantly reduced and examine the resulting outcomes for national debt.

Public sector net debt (the headline government debt figure) in any year is equal to the debt at the end of the previous year plus the deficit plus adjustments,

PSND_t = PSND_{t-1} + PSNB_t + ADJ_t

where PSND_t is the public sector net debt at the end of financial year t, PSNB_t is total public sector borrowing (the deficit) over the same year, and ADJ_t is any non-borrowing adjustment. This adjustment can be inferred from the OBR’s figures for both actual data and projections. In our simulations, we simply take the OBR adjustment figures as constants. Given an assumption about the nominal size of the deficit in each future year, we can then calculate the implied size of the debt over the projection period.

What matters is not the size of the debt in money terms, but as a share of GDP. We therefore also need to know nominal GDP for each future year in our simulations. This is less straightforward because nominal GDP is affected by government spending and taxation. Estimates of the magnitude of this effect – known as the fiscal multiplier – vary significantly. The OBR, for instance, assumes a value of 1.1 for the effect of current government spending.  In order to avoid debate on the correct size of the nominal multiplier, we assume it is equal to zero.[1] This is a very conservative estimate and, like the OBR, we believe the correct value is greater than one. The advantage of this approach is that we can use OBR projections for nominal GDP in our simulations without adjustment.

We simulate three alternative scenarios in which the pace of actual and predicted deficit reduction is slowed by a third, a half and two thirds respectively.[2] The evolution of the public debt-to-GDP ratio in each scenario is shown below, alongside actual figures and current OBR projections based on government plans.
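A minimal sketch of this calculation is given below. It implements the accounting identity above and the scenario rule described in footnote [2], but uses made-up figures in place of the OBR series, so the numbers it prints are purely illustrative.

```python
# Sketch of the debt projection described above, with illustrative inputs
# standing in for the OBR series (all figures in GBP bn).
def debt_to_gdp_path(initial_debt, deficits, adjustments, nominal_gdp):
    """Roll PSND_t = PSND_{t-1} + PSNB_t + ADJ_t forward and return debt/GDP."""
    debt = initial_debt
    ratios = []
    for psnb, adj, gdp in zip(deficits, adjustments, nominal_gdp):
        debt = debt + psnb + adj
        ratios.append(debt / gdp)
    return ratios

def slow_deficit_reduction(deficits, factor):
    """Scale each year's percentage change in the deficit by `factor` (footnote [2])."""
    slowed = [deficits[0]]
    for previous, current in zip(deficits, deficits[1:]):
        pct_change = (current - previous) / previous
        slowed.append(slowed[-1] * (1 + factor * pct_change))
    return slowed

# Illustrative inputs only - not OBR data or projections.
baseline_deficit = [120, 100, 80, 60, 40, 20]
adjustments = [5] * 6
nominal_gdp = [1700, 1760, 1820, 1890, 1960, 2030]

for label, factor in [("baseline pace", 1.0), ("half pace", 0.5)]:
    path = debt_to_gdp_path(1000,
                            slow_deficit_reduction(baseline_deficit, factor),
                            adjustments, nominal_gdp)
    print(label, [round(ratio, 3) for ratio in path])
```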


Fig [1]

Fig [2] 

Despite the fact that the deficit is substantially higher in our alternative scenarios, there is little substantive variation in the implied time paths for debt-to-GDP ratios.  In our scenarios, the point at which the debt-to-GDP ratio reaches a peak is delayed by around two years. If the speed of deficit reduction is halved, public debt peaks at around 97% of GDP in 2019-20, compared to the OBR’s projected peak of 86% in the current fiscal year. Given the assumption of zero nominal multipliers, these projections are almost certainly too high: relaxing austerity would have led to higher growth and lower debt-to-GDP ratios.

Now consider the difference in spending.

Halving the speed of deficit reduction would have meant around £10 billion in extra spending in 2011-12, £8 billion in 2012-13, £19 billion in 2013-14, £21 billion in 2014-15, £29 billion extra in 2015-16, and £37 billion extra in 2016-17.  To put these figures into context, £37 billion is around 30% of total health expenditure in 2016-17.  The bedroom tax, on the other hand, was initially estimated to save less than £500 million per year.  These are large sums of money which would have made a material difference to public expenditure.

Would this extra spending have led to a fiscal crisis, as supporters of austerity argue? It is hard to see how a plausible argument can be made that a crisis is substantially more likely with a debt-to-GDP ratio of 97% than of 86%. Several comparable countries maintain higher debt ratios without any hint of funding problems: in 2017, the US figure was around 108%, the Belgian figure around 104%, and the French figure around 97%.

It is now beyond reasonable doubt that austerity led to increases in mortality rates – government cuts caused otherwise avoidable deaths. These could have been avoided without any substantial effect on the debt-to-GDP ratio. The argument that cuts were needed to avoid a fiscal crisis cannot be sustained.

[1] There is surprisingly little research on the size of nominal multipliers – most work focuses on real (i.e. inflation adjusted) multipliers.

[2] We calculate the actual (past years) or projected (future years) percentage change in the nominal deficit from the OBR figures and reduce this by a third, a half and two thirds respectively. The table below provides details of the middle projection where the pace of nominal deficit reduction is reduced by half.

Training Researchers to Work with Confidential Data: A New Approach

Posted on

Prof Felix Ritchie of UWE’s Business School has recently spent time with the Northern Ireland Statistics and Research Agency and makes the following analysis.

I’ve just spent two days at the Northern Ireland Statistics and Research Agency (NISRA), working with them to develop training for researchers who need access to the confidential data held by NISRA for research. This training is jointly being developed by the statistical agencies of the UK (NISRA, the General Register Office for Scotland, and the Office for National Statistics in England and Wales), as well as HMRC, the UK Data Archive and academic partners. The project is being led by ONS as part of its role to accredit researchers under the new Digital Economy Act, with UWE providing key input; other statistical agencies, such as INSEE in France and the Australian Bureau of Statistics, are being consulted and are trialling some of the material.

Training researchers in the use of confidential data is common across statistical agencies around the world, particularly when those researchers need access to the most sensitive data only available through Controlled Access Facilities (CAFs). The growth in CAFs in recent years has mostly come from virtual desktops which allow researchers to run unlimited analyses while still operating in an environment controlled by the data holder. There are now six of these in the UK, and many countries in continental Europe, North America and Oceania operate at least one. The existence of CAFs has led to an explosion in social science research as many things that were not previously allowed because it was too risky to send out data (such as use of non-public business data, or detailed personal data) have now become feasible and cost-effective.

All agencies running CAFs provide some training for researchers; around half of these use ‘passive’ training such as handouts or web pages, but the other half require face-to-face training. Much of this training has evolved from a programme developed at ONS in the UK in the 2000s and this training was recommended as an example of ‘best practice’ for face-to-face training by a Eurostat expert group.

However, this style of training is showing its age. Such training typically has two components: firstly how to behave in the CAFs and secondly how to prevent confidential data from mistakenly showing up in research outputs (‘statistical disclosure control’, or SDC). Both are typically taught mechanistically, in the form of dos and don’ts, explanations of laws and penalties and lots of SDC exercises. Overall the aim of the courses is to impart information to the researcher.

The new training is radically different from the old training. It starts from the premise that researchers are both the biggest risk and the biggest advantage to any CAF: the biggest risk because a poorly-trained or malcontented researcher can negate any security mechanism put in place; the biggest advantage because highly-motivated researchers mean cheaper system design, better and more robust security, and the chance for the data holder to exploit the goodwill of researchers in methodological research, for example.

In this world the main aim of the training is to encourage the researcher to see himself or herself as part of the data community. If this can be established then the rest of the training follows as a consequence. For example, knowledge of the legal environment or SDC is shared not because it keeps you out of jail but because everyone needs to understand this so the community as a whole works. This gives the course quite a different feel to more traditional courses: much of the day is spent in open-ended facilitated discussions exploring concepts of data access.

The training was designed from the ground up in order to take advantage of recent developments in thinking about data access and SDC. This was also done to avoid being restricted by having to ‘fit’ preconceived ideas about what worked or not; material was included on its own merits, not whether “this was what we used to do…”. For example, the previous SDC component had a large number of numerical examples, developed over many years, leading to attendees remarking on afternoons spent “doing Sudoku”. We reviewed every example to identify the minimum set of principles needing to be explored and then wrote a small number of new examples based on this minimum set. On the other hand, the previous training had relatively little to say about the context for checking outputs for confidentiality breaches; this has now been expanded as it fits with the ethos of understanding why things are done.

Of course, this was not all plain sailing. The original structure, trialled in June 2017, had just one presentation before being comprehensively abandoned. Modules have dropped in and out and been moved around. The initial test for the course has been completely rewritten (a topic for a later blog). Various sections have been inserted as ‘options’ to take account of regional variations in operating practices. Throughout this, multiple organisations have been able to feed into the process so that the final product itself has a sense of community ownership.

We are now at the stage of training the trainers to enable independent delivery around the UK. This is already generating much feedback for the future development of the course: for example, a need has arisen for ‘crib sheets’ to help in the facilitation of certain exercises. Overall, however, we are confident that we have a well-structured, informative course that meets the needs of 21st century data training.

Further reading: for more information on the evidential and conceptual basis for the course, see Ritchie F., Green E., Newman J. and Parker T. (2017) “Lessons Learned in Training ‘Safe Users’ of Confidential Data“. UNECE work session on Statistical Data Confidentiality 2017. Eurostat. 

The Knowledge We Have Lost In Information – The History Of Information in Modern Economics, by Philip Mirowski and Edward Nik-Khah

Posted on

Dr Sebastian Berger’s book review is published in the Heterodox Economics Newsletter

Fake news, post-truth, alternative facts, the commercialization of science, the wholesale destruction of university library collections in the name of “information access” and “digital first”; what does all this have to do with information economics? What happens to cognition, knowledge, truth, wisdom and understanding in the information economy? What are the vortices of images emerging between the natural and the social sciences that give rise to our understanding of “information”? What understanding of human beings is this based on? Who are the relevant actors, their politics and intellectual projects? Anybody concerned with such questions will benefit from reading the book under review, and these are what this review will focus on.

Mirowski and Nik-Khah present the comprehensive results of their fascinating research on information economics that began as far back as Mirowski’s work on cyborg economics and Nik-Khah’s dissertation and his Kapp Award-winning article on auction design. Their book is intended as a contribution to the recent history of economic thought, written in the style of a spy novel that tries to reconstruct who got us to where we are today and how this could happen. Theirs is a grand story of the Great Transformation of the economics profession into market engineers via modern information economics or market design theory. Spying as a method for historians of economic thought is meant to demonstrate these developments and to provide an alternative to performativity theory, which is deemed too vague to be able to account for the details of the interplay of material and intellectual factors. The book is structured into 17 chapters, some of which are as concise as six pages. The first two chapters set the scene and illustrate that there is something rotten about our understanding of the history and state of information economics. The core chapters deal with the roles of natural science, the Nobels and Neoliberals, the Socialist Calculation Debate, Hayek’s economics, Market Socialists at the Cowles Commission, the three schools of market design, two recent case studies, and a concluding chapter on artificial ignorance.

K. William Kapp once expressed his fundamental view that the dehumanization of economic theory and social reality are related and spring from an erroneous understanding of human beings. (Kapp 1985) So, what concept and understanding of human beings are at the base of information economics? (Though not FBI agents, information economists conventionally refer to human beings as “agents” which seems suitable to a spy novel.) The authors convincingly demonstrate, in particular in chapter 9, that information economists basically assume the irrelevancy of cognition and preferences of agents for the desired market outcome they are paid to design. This essentially means that information economists adopt a self-image of being smarter than people and being able to design mechanisms that extract the information from the agents that they are unaware of possessing. It seems that the quicksand of double truths inherent in this assumption remains hidden from their purview. They seem to have no trouble assuming that somehow all the limitations that apply to the agents of their models do not apply to themselves.

This goes back to similar double truths in the works of the intellectual behind the foundational ideas of information economics, i.e. Friedrich von Hayek, who denied people the ability to reason about society as a whole, while reserving this right and ability for himself. Several chapters describe how the mature Hayek believed that people’s cognition can be disregarded as it does not matter for the operation of the market, which is conceived as an information processor (cyborg, machine, computer) more powerful than any human being. Furthermore, according to Hayek the market arguably expands into the realm of non-knowledge, i.e. the unknown unknown, that is subject to evolutionary forces and not within human conscious control. Success and failure in the market thus depend on one’s inheritance of unknown unknowns. The best one can hope for is that the market sheds light onto one’s own total darkness in a way that makes it marketable. In this tradition, market designers claim to be able to design markets that extract information from people’s unknown unknowns, that is, to get agents to give up information they hold. Mirowski and Nik-Khah conclude that in this information economy the market no longer gives people what they want but people have to give the market what it wants (cf. the final chapter). While this seems to suggest that the inside of human beings somehow matters for the establishment of Truth, this is secondary to the overriding claim that the Market is the seat and arbiter of Truth. Truth is thus turned into a function of the unequal and arbitrary distribution of the ability to pay (what prevents the top 1% from buying Truth?).

Mirowski and Nik-Khah judge the essence of these views to be pure Social Darwinism with a strong dose of predestination (p. 69). Along with the authors, I think that the mature Hayek’s grave error was to deny that human beings are the seat of the kind of Truth that is revealed as a gift from introspection, that is, self-knowledge that enables self-cultivation. Hayek’s highly problematic understanding of human beings is compounded by a problematic that Tony Lawson has recently pointed to in an interview (Lawson 2018). That is, Hayek denied the existence of bio-physical human needs that are objectifiable, such that their satisfaction can be planned in a social provisioning process. Otto von Neurath, Max Weber, K.W. Kapp and K. Polanyi called this material or substantive rationality.

This clash of views goes back to the Socialist Calculation Debate, which Mirowski and Nik-Khah identify as the birthplace of information economics (chapter 5). It is the great achievement of this book to have pointed out the seminal importance of this debate for understanding economics today. Unfortunately, the book does not mention the “lost” Neurath-wing of the Socialist Calculation Debate and only focuses on the Cowles men’s enthusiasm for a cybernetic socialism. According to the present book it was the market socialists following Oskar Lange’s argument who developed information economics at the Cowles Commission. The authors support their main thesis with plenty of evidence that the market socialists “lost track of their political argument and deep motivations” and were haunted by Hayek, ending up as neoliberals who sell themselves as experts in market design. This raises the question as to the reasons for the odyssey of the Walrasian market socialists following Oskar Lange’s intellectual project.

For the full review please see:  https://www.heterodoxnews.com/HEN/book%20reviews.html

Degree Algorithms: Equity and Grade inflation

Posted on

In his recent working paper Dave Allen highlights the substantial differences in the way that university degree calculations can be made.

The algorithms UK universities use to calculate a student’s final degree outcome can be complex and sometimes counter-intuitive; some commentators suggest that they have contributed to ‘classification inflation’ across the UK higher education sector. A less well understood concern is that the variety in algorithms potentially means the same set of marks can be awarded a different classification depending on what university the student attended.

The 17 questions and answers below aim to clarify the issue.

1.         Is grade inflation happening?
The recent HESA data on degree qualifications confirms a continued increase (or inflation) in the proportion of ‘good honours’ (1sts and 2:1s) being awarded – from 68% of all graduates in 2012/13 to 75% in 2016/17. Likewise, the proportion of 1sts has increased from 18% to 26%.  While the numbers cannot be disputed, the cause is.

2.         Why is it occurring?
According to the Cambridge pro-vice-chancellor for education, Professor Graham Virgo, grade inflation is not “a cause for concern” and is “down to tuition fees because students are more motivated and are working harder”. Alternatively, others like Nick Hillman, director of the Higher Education Policy Institute, claim: “Universities are essentially massaging the figures, they are changing the algorithms and putting borderline candidates north of the border” (The Telegraph).

3.         Have there been changes to university degree Algorithms?
There have been significant changes to degree algorithms in the last 10 years. The recent report by UUK-GuildHE (Understanding Degree Algorithms, Oct 2017) found that many HEIs had changed their algorithms – primarily to ensure internal consistency between departments and faculties, but also to achieve “competitor or sector alignment” (page 18) (see also the Higher Education Academy’s 2015 report).

4.         What is a degree algorithm?
Degree algorithms describe the process universities use to translate module outcomes into a final degree classification (1st, U2, L2, 3rd). The algorithm software calculates the weighted average of the ‘counting’ modules; this average mark then determines the classification.

5.         What is a weighted mark?
Degrees are made up of a number of modules, which can carry different credit values (e.g. 10, 15, 20, 30 or 40 credits); students typically study 120 credits a year, or 360 credits in total. The weighted average takes account of these different module sizes (credit weightings). To calculate a weighted average, the module marks are first multiplied by their credits and these weighted values are added together; this total is then divided by the total number of credits.
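As a simple illustration of this calculation (with made-up module marks and credits, not drawn from any particular programme), a credit-weighted average can be computed as follows:

```python
# Illustrative module marks only: (mark %, credit value)
modules = [
    (68, 30),
    (56, 15),
    (72, 15),
    (61, 30),
    (74, 30),
]

total_credits = sum(credits for _, credits in modules)

# Multiply each mark by its credits, sum, then divide by the total credits.
weighted_average = sum(mark * credits for mark, credits in modules) / total_credits

print(f"{weighted_average:.2f}% over {total_credits} credits")  # 66.75% over 120 credits
```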

6.         Are all algorithms the same?
While all UK universities adopt the same classifications, how universities arrive at these classifications is a very different matter. The variation comes in how the average of each ‘counting’ year is weighted and whether some module marks are ‘discounted’ or removed from  the calculation.

7.         What is a ‘counting’ year?
Simply those years of study included in the degree calculation. It is notable that most UK universities do not include year 1 marks in their algorithms – the focus is on years 2 and 3.

8.         Why is there a greater weight on year 3 studies?
The higher weighting given to year 3 marks captures the notion of the student’s ‘exit velocity’ or  the standard that the student is performing at as they graduate from university. Alternatively,  the higher weightings on year 3 might reflect a university’s requirement that programmes must become more challenging as students progress through them.

9.         Is each counting year weighted equally?
There is wide variation in the weightings applied to year 2 and 3 marks. This can range from 50/50 [Oxford Brookes] to 20/80 [Derby].

10.       How does the year weighting affect the degree mark?
This is best illustrated using an example. Assume the year 2 and 3 average marks are 64.38% and 69.00% respectively. If weighted equally [50/50] the combined average would be 66.69%; if weighted 20/80, this combined mark increases to 68.08% – an increase of 1.39% – all because a greater weight has been placed on the year 3 average mark. It follows that had the year 2 and 3 average marks been switched around, the 20/80 weighting would instead pull the combined mark below the equally weighted average [i.e. to 65.30%].

11.       How does discounting a module affect the weighted average?
Discounting, or removing the lowest marks, can only improve the overall degree average. It follows that “If only the worst, outlying marks are omitted, it is possible that this would lead to grade inflation” (UUK-GuildHE p.37).

Again, we can use a worked example to show the impact. From the previous example, if we exclude the lowest marks for 30 credits (in each year), the year 2 average becomes 69.17% (up from 64.38%) and the year 3 average becomes 70.83% (up from 69.00%). Applying the same weightings, the degree mark increases to 70.0% (50/50) and 70.5% (20/80) – the 2:1 is now a 1st. It follows that the differences between those algorithms that discount and those that do not will become greater as the discounted module marks get lower.
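The short sketch below reproduces the arithmetic of the two worked examples above (the year averages of 64.38% and 69.00%, and the discounted averages of 69.17% and 70.83%), so the weights and marks can be varied to see how the combined mark responds:

```python
# Reproduces the worked examples above; all figures come from the text.
def combine(year2_avg, year3_avg, w2, w3):
    """Combine year averages using year weights (w2 + w3 should equal 1)."""
    return w2 * year2_avg + w3 * year3_avg

year_averages = {
    "no discounting":   (64.38, 69.00),
    "with discounting": (69.17, 70.83),  # lowest 30 credits dropped in each year
}

for label, (y2, y3) in year_averages.items():
    print(f"{label}: 50/50 = {combine(y2, y3, 0.5, 0.5):.2f}%, "
          f"20/80 = {combine(y2, y3, 0.2, 0.8):.2f}%")

# no discounting:   50/50 = 66.69%, 20/80 = 68.08%
# with discounting: 50/50 = 70.00%, 20/80 = 70.50%
```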

12.       How common is discounting?
Without some central ‘register of practice’, it is hard to say exactly. The UUK-GuildHE survey suggests that up to a third of those universities contacted use discounting, and that a large proportion of those universities that discount also apply differential weightings. The gradual shift in the use of discounting has probably been a significant driver behind grade/classification inflation.

13.       What is a borderline candidate?
Most algorithms take the degree ‘average’ to either one or two decimal places, e.g. 69.5% or 69.45%. This results in borderline marks where the exam board is called upon to determine what classification is awarded. There are various methods. One uses a simple rule whereby marks equal to or less than 0.5% below a classification boundary are awarded the higher classification ‘automatically’ and confirmed by the exam board (thus a 1st does not start at 70%, it starts at 69.5%). Alternatively, marks within a given band (e.g. 68.5% – 69.49%) might be granted an ‘uplift’ in classification (e.g. from a U2 to a 1st) using the preponderance principle: a 1st could be awarded if the student has 60 credits in the higher boundary in their final year. Not surprisingly, these borderline adjustments can have a significant impact on an individual student’s classification and the overall profile for a given programme.
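A rough sketch of these two borderline conventions is given below. The 0.5% automatic uplift, the illustrative borderline band and the 60-credit preponderance rule follow the description above rather than any particular university’s regulations:

```python
# Illustrative only: boundary bands and the credit rule follow the description
# in the text, not any specific university's regulations.
BOUNDARIES = [(70, "1st"), (60, "2:1"), (50, "2:2"), (40, "3rd")]

def classify(average, final_year_modules=None):
    """final_year_modules: optional list of (mark %, credits) for the final year."""
    for boundary, label in BOUNDARIES:
        # Rule 1: marks within 0.5% below a boundary are uplifted automatically.
        if average >= boundary - 0.5:
            return label
        # Rule 2 (preponderance): within a wider borderline band, award the
        # higher class if at least 60 final-year credits sit at or above it.
        if final_year_modules and average >= boundary - 1.5:
            credits_in_band = sum(credits for mark, credits in final_year_modules
                                  if mark >= boundary)
            if credits_in_band >= 60:
                return label
    return "fail"

print(classify(69.5))                                  # uplifted to a 1st
print(classify(68.7, [(72, 30), (71, 30), (65, 60)]))  # 1st by preponderance
print(classify(68.7, [(72, 30), (65, 90)]))            # remains a 2:1
```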

14.       How do the different weightings and discounting affect a university’s overall results?
The distribution of the degree classifications can vary significantly depending on the algorithm used. Figure 1 shows a simulation using the same set of marks for a number of students (211 in total) where 6 different algorithms are applied. The first four algorithms (UNI[1] to UNI[4]) have different weights for each counting year (Y2 and Y3 only); these weights range from 50/50 to 25/75. The fifth algorithm (UNI[5]) ‘discounts’ 20 credits from each year and uses a 25/75 weighting. For comparison, the sixth algorithm (UNI[6]) uses all years of study, equally weighted (which would be the outcome if the Grade Point Average (GPA) was applied – see below).

The impact is quite dramatic. In terms of the different weightings alone (UNI[1] to UNI[4]) the proportion of 1sts ranges between 16% and 23%. The difference increases significantly once discounting is applied (UNI[5]), from 16% up to 32%. In Figure 1 the proportion of students achieving a 2:2 (awarded where the average mark falls between 50-59%) also declines significantly, from 28.9% (UNI[1]) to 18% (UNI[5]). This simulation suggests a student’s post-university ‘life chances’ may be significantly dependent on how their chosen university determines their classification (all other things being equal).

15.       Do most students understand the degree algorithm that applies to them?
A good question. The simpler the algorithm, the more likely students are to understand its implications, if not use it to set personal academic targets. However, the truth is that many algorithms are very complex, and many use more than one rule to determine the degree classification. Here the interested reader might like to see a YouTube video posted by Sheffield University, in particular the comments below this video. It is also very likely that students do not take the degree algorithm into consideration when choosing a university.

16.       What is the bigger problem Grade inflation or Equity?
While the national data shows significant increases in the proportion of 1st and U2 degrees being awarded, we cannot definitively say there has been ‘grade inflation’ – to determine this we would need the actual module marks. The increase in 1sts and U2s is likely to be a combination of students working harder and gradual changes in degree algorithms. What we can say, with some certainty, is that it is a concern that under the current system the same set of marks can result in such a wide range of degree outcomes. If equity and rigour are to be the hallmarks of UK higher education provision, these differences cannot be ignored or defended.

17.       What can be done about it?
If valid comparisons between students’ achievements are to be made, it follows that all universities should adopt the same algorithm when classifying degree outcomes. In this context the consistent use of the USA GPA classification system (or similar) has clear benefits. Jonathan Wolff (professor of philosophy at University College London) accepts that adopting the GPA is a “move in the right direction” but also takes the view that “we should simply issue students with transcripts to record their study, and leave it at that” (Guardian). This is a laudable idea but one which students (and employers) might find difficult to accommodate.

Allen, D. O. (2018) Degree algorithms, grade inflation and equity: the UK higher education sector, Bristol Centre for Economics and Finance, Working Paper 1803.