Australia’s bold proposals for government data sharing

Posted on

By Felix Ritchie.

In August I spent a week in Australia working with the new Office of the National Data Commissioner (ONDC). The ONDC, set up at the beginning of July, is barely two months old but has been charged with the objective of getting a whole-of-government approach to data sharing ready for legislation early in 2019.

This is a mammoth undertaking, not least because the approach set out in the ONDC’s Issues Paper proposes a new way of regulating data management. Rather than the traditional approach of trying to specify in legislation exactly what may or may not be allowed, the ONDC is proposing a principles-based approach: this focuses on setting out the objectives of any data sharing and the appropriate mechanisms by which access is governed and regulated.

In this model, the function of legislation is to provide the ground rules for data sharing and management within which operational decisions can be made efficiently. This places the onus on data managers and those wanting to share data to ensure that their solutions are demonstrably ethical, fair, appropriate and sensible. On the other hand, it also frees up planners to respond to changing circumstances: new technologies, new demands, shifts in attitudes, the unexpected…

The broad idea of this is not completely novel. In recent years, the principles-based approach to data management in government has increasingly come to be seen as operational best practice, allowing as it does for flexibility and efficiency in response to local conditions. It has even been brought into some legislation, including the UK's Digital Economy Act 2017 and the European General Data Protection Regulation. Finally, the monumental Australian Productivity Commission report of 2017 laid out much of the groundwork by providing an authoritative evidence base and a detailed analysis of core concepts and options.

In pulling these strands together, the ONDC proposals move well beyond current legislation, but into territory which is well supported by evidence. Because some of the concepts are unfamiliar, the ONDC has been carrying out an extensive consultation, some of which I was able to observe and participate in.

A key proposal is to develop five 'Data Sharing Principles', using the Five Safes framework (why, who, how, with what detail, with what outcomes) as the overarching structure. The Five Safes is the most widely used model for government data access but has only been used twice before to frame legislation: in the South Australia Public Sector (Data Sharing) Act 2016 and the UK Digital Economy Act 2017.

The most difficult issues facing the ONDC arise from the ‘why’ domain: what is the public benefit in sharing data and the concomitant risk to an individual’s privacy? How will ‘need-to-know’ for data detail be assessed? What are the mechanisms to prevent unauthorised on-sharing of data? How will shared data be managed over its lifecycle, including disposal? To what uses can shared data be put? Can data be shared for compliance purposes? How can proposals be challenged?

These are all good questions, but they are not new: any ethics or approvals board worth its salt asks similar questions, and would expect good answers before it allows data collection, sharing or analysis to proceed. A good ethics board also knows that this is not a checklist: ethical approval should be a constructive conversation to ensure a rock-solid understanding of what you’re trying to achieve and the risks you’re accepting to do so.

This is also the crux of the principles-based approach being taken by the ONDC: it is not for the law to specify how things should be done, nor to specify what data sources can be shared. But the law does provide the mechanisms to ensure that any proposals put forward can be assessed against a clear purpose test around when data may and may not be shared, and that appropriate safeguards are in place…

Finally, the law will require transparency; this has to be done in sunlight. A public body, using public money and resources for the public benefit, should be able to answer the hard questions in the public arena; otherwise, where is the accountability? The ONDC will require data sharing agreements to be publicly available, so people can see for what purpose (and with what associated protections) their data are being used.

To some, this need to justify activities on a case-by-case basis, rather than having a black-and-white yes/no rule, might seem like an extra burden. The aim of the consultation is to ensure that this isn’t the case. In fact, a transparent, multi-dimensional assessment is any project’s best friend: it provides critical input at the design stage and helps to spot gaps in planning or potential problems, as well as giving opponents a clear opportunity to raise objections.

Of course, even if the legislation is put in place, there is still no guarantee that it will turn out as planned. As I have written many times (for example in 2016), attitudes are what matter. The best legislation or regulation in the world can be derailed by individuals unwilling to accept the process. This is why the consultation process is so important. It is also why the ONDC has been charged with the broader role of changing the Australian public sector culture around data sharing, which tends to be risk-averse. The ONDC also has a role in building and maintaining trust with the public, through better engagement to hear their concerns.

From my perspective, this is a fascinating time. The ONDC’s proposals are bold but built on a solid foundation of evidence. In theory, they propose a ground-breaking way to offer a holy trinity of flexibility, accountability, and responsibility. If the legislation ultimately reflects the initial proposals, then I suspect many other governments will be beating a path to Australia’s door.

All opinions expressed are those of the author.

First speaker announced for 2018/19 BCEF Economic Research Seminar Series

Posted on

On Thursday, 27th September, we will have the pleasure of hearing a presentation by our guest Steven Bosworth (University of Reading).

Steven will present his joint paper with Dennis J. Snower (Kiel) on the topic: “Organizational Ethics, Narratives and Social Dysfunctions”.

Paper Abstract:

All organisations are characterised by some degree of conflict between their members' private interests and the organisation's mission. This may manifest in corruption, fraud or, more banally, shirking. In response, leaders can try to mould the identities of workers to make them more sensitive to the social costs of their actions.

We explicitly model the social interactions and constraints giving rise to this process, deriving an endogenous profile of wages, monitoring, and organisational culture. In this way we provide a theory of organisational dysfunction, and show how such dysfunctions might be mitigated through changes in government policies or social norms. These changes become particularly effective if they encourage both managers and workers to adopt more ethical narratives – organisational culture change is in this case self-reinforcing. Ineffective narratives on the other hand can cause pushback from employees when managers adopt a more ethically ‘strict’ stance. We derive the conditions under which beneficial or countervailing feedback effects can occur.

Dr Steven Bosworth is a behavioural economist working as a Lecturer at the University of Reading. His research uses microeconomic theory and controlled laboratory experiments to investigate how context, motivation and the social environment influence human cooperation. He has published on the topics of uncertainty and coordinated decisions, the distribution of prosocial dispositions in society and competition, and the consequences of social fragmentation for wellbeing.

Before joining the University of Reading in 2017, Steven was a postdoctoral researcher at the Institute for the World Economy in Kiel, Germany, where he maintains an affiliation.

More information about Steve and his list of publications can be found here.


Beyond pay gaps: Inequality at work

Posted on

By the researchers of the “Earnings gaps and inequality at work” project, Bristol Business School.

On 25 May 2018, UWE Economics hosted an expert workshop on ‘Beyond Pay Gaps: Inequality at Work’. Six experts were invited to share their reflections, based on their own research, on two questions:

1) What is the nature of inequality at work?

2) Is the pay gap an adequate indicator? If not, how can we improve our assessments of inequality at work?

The key aim was to foster a discussion on how to conceptualise and study inequality at work. In an earlier blog entry, the workshop organisers provided a response to UWE's reporting on the gender pay gap, which highlighted the fact that some progress on the gender pay gap is not in itself a sign of overall success. There are aspects of inequality at work that are not captured by pay indicators and nonetheless merit our attention.

The morning session of the workshop focused on conceptualisations of inequality at work and featured presentations by three distinguished scholars of labour and inequality. Dr Alessandra Mezzadri (SOAS University of London) drew on her long-standing research on the garment industry in India to highlight patterns of inequality and gender exploitation. Professor Bridget O'Laughlin (Institute of Social Studies) reflected on the concepts of Marx's political economy framework, as well as its conceptual gaps, for studying inequality at work. Professor Harriet Bradley (UWE Bristol) illustrated how a three-part conceptual framework based on production, reproduction and consumption can be used to conceptualise gender inequality at work.

In the afternoon session, three distinguished academics on gender, organisation and inequality presented methodological approaches to studying inequality at work. Dr Hannah Bargawi (SOAS University of London) discussed how a pyramid-shaped understanding of inequality at work can guide us in moving our focus between different levels of inequality. Dr Olivier Ratle (UWE Bristol) presented the qualitative methods used to study early-career academics' experience of work. Dr Vanda Papafilippou (UWE Bristol) described a range of methods from the sociology of education for studying the workplace.

The presentations generated rich discussions on the conceptualisations of social reproduction, the complexity of inequality, and the relations between the material and the cultural. The participants agreed that research on these themes is both timely and needed. Furthermore, a podcast series on 'Feminism, Gender and the Economy', featuring two interviews with workshop speakers, will be launched in the 2018/2019 academic year. Watch this space for the upcoming podcast series!

This workshop was funded by UWE Bristol. The workshop’s organisers are grateful to all participants for their thoughtful contributions and productive discussions.

The Role of Social Norms in Incentivising Energy Reduction in Organisations

Posted on

By Peter Bradley

UWE Economics researcher Peter Bradley has just published a chapter on "The Role of Social Norms in Incentivising Energy Reduction in Organisations", in collaboration with Matthew Leach and Shane Fudge. This is part of a collaboration by leading international academics to develop a research handbook on employee pro-environmental behaviour. The work stems from the UWE Economics group's sustainability-related research.

The Research Handbook on Employee Pro-Environmental Behaviour brings together contributions that consolidate existing research in the field and add new insights from organisational psychology, human resource management and social marketing.

The whole book is available to download from Edward Elgar Publishing:

Research Handbook on Employee Pro-Environmental Behaviour edited by Victoria K. Wells, Diana Gregory-Smith and Danae Manika.


Using the Indices of Multiple Deprivation – it is (so much) more than just a top-line indicator.

Posted on

By Ian Smith

There has been a lot of interest in measuring disadvantage in the UK over the past 20 years, even if this has not always been matched by government responses. The fifth iteration of the English IMD is to be reviewed over the next 12 months. Clearly disadvantage is a complex thing and can be represented in many different ways. As a geographer (or someone who periodically claims to be a geographer hiding in an Economics Department) I am particularly interested in area-based assessments of disadvantage. I know such measures are problematic, but what indicators are not? I have recently had the opportunity, with colleagues, to review how the English Index of Multiple Deprivation works on behalf of Power to Change (see https://www.powertochange.org.uk/), and this short blog captures some of the thinking that came out of that work (any errors or misinterpretations are my/our fault and not necessarily shared by anyone at Power to Change).

So, the English IMD is a second-generation indicator of area-based deprivation that represents 7 'dimensions' (or 10 sub-dimensions if you like) of disadvantage, from worklessness to housing affordability, from health (mental and physical) to distance from your nearest post office. It is 'second generation' because it is not solely dependent on small-area census data (as 'first generation' indices are/were) but is based on a range of small-area administrative and census data from different sources within English government.

I am a fan. It is lovely. My colleagues in other European countries are jealous of it (the basic model is oft copied), both because of its breadth of content and because of our lovely, regular, statistically ordered lower super output areas (LSOAs), which sometimes get conflated with neighbourhoods. However, an indicator is a conceptual model of a real concept. As George Box pointed out, all models are wrong, but some [of the better ones] are useful. We and Power to Change were interested in posing the question: how useful is the IMD to Power to Change?

In particular, we were interested in how the IMD is used within a particular organisational context (Power to Change). We set up a set of dimensions to help us think about how an indicator (a statistical instrument 'designed' to perform a task) is constructed and deployed. We asked people in Power to Change how they used the IMD and what they saw as its strengths and weaknesses for what they needed to do: investing in community businesses that alleviate disadvantage in England. What struck us in these conversations was that the IMD was only being used in its top-line indicator format; what was being missed was the opportunity to use the IMD as an indicator system that can be moulded to the specific objectives of an organisation.

We explored how to use the IMD as a system of indicators to shine a light on a specific objective: investing in community businesses. We compared spatial targeting at LSOA level for the top-line IMD indicator (the full 7-dimensional one) with the spatial targeting from a bespoke indicator bringing together the health and disability, education and qualifications, and geographic access to services dimensions. Power to Change has hypothesised that community businesses, some of which provide local services, may impact on employability (skills) and on the health of residents in the communities that community businesses serve. So, we constructed a focused indicator from components of the top-line IMD covering only geographic access to services, education and health (for details see Smith et al 2018). We compared how the focused IMD indicator would spatially target the attention of Power to Change in comparison to the top-line IMD indicator, with a particular focus on the city-region of Liverpool and the county of Suffolk as examples of areas of interest for Power to Change. We then mapped out the differences (using data and shapefiles obtained under a public licence), showing firstly the map of the top-line IMD indicator, secondly our 'new' indicator focusing on Power to Change's priorities, and thirdly what difference it makes in targeting. These maps are shown in Figure 1 (for Liverpool) and Figure 2 (for Suffolk). We have used the somewhat arbitrary threshold of 30% to indicate disadvantage (the most disadvantaged areas to be targeted) and compared the indicators.

Figure 1

The left-hand map in both Figure 1 and Figure 2 shows neighbourhoods ranked by the top-line IMD indicator, where the deepest green areas are the most disadvantaged. In the middle map the same rule applies. The right-hand map in each Figure shows what difference the change of indicator makes. In this right-hand map, the red areas are those marked as being in the most disadvantaged 30% under both indicators. The blue areas are 'advantaged' under both measures. The orange areas, however, are marked as disadvantaged under the 'better places' indicator but not under the top-line IMD.
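For readers who want to experiment, the sketch below shows the gist of this comparison in Python/pandas. It is a minimal illustration rather than our actual code: the file and column names (imd_lsoa.csv, imd_rank, health_rank, education_rank, access_rank) are hypothetical, and the bespoke indicator is built here by simply averaging the chosen domain ranks and re-ranking, whereas the official IMD combines exponentially transformed domain scores with fixed weights.

import pandas as pd

# Hypothetical file of LSOA-level ranks, where rank 1 = most deprived.
df = pd.read_csv("imd_lsoa.csv")

# Crude bespoke indicator: average the three chosen domain ranks, re-rank.
domains = ["health_rank", "education_rank", "access_rank"]
df["bespoke_rank"] = df[domains].mean(axis=1).rank(method="first")

def disadvantaged(df, rank_col, share=0.3):
    """Flag an LSOA if it sits in the most deprived `share` of the ranking."""
    return df[rank_col] <= share * df[rank_col].max()

df["old"] = disadvantaged(df, "imd_rank")      # top-line IMD indicator
df["new"] = disadvantaged(df, "bespoke_rank")  # focused indicator

# Cross-tabulation mirroring the map colouring: red = True/True,
# orange = False/True, blue = False/False.
print(pd.crosstab(df["old"], df["new"], rownames=["top-line"], colnames=["bespoke"]))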

Figure 2

Given the greater importance given to access to services (albeit direct-distance accessibility based on 2012 data), it is not surprising that Suffolk LSOAs become more disadvantaged under this measure. Thus, nearly half of Suffolk becomes 'disadvantaged' on this measure (in the 30% most disadvantaged in England) compared with its position under the top-line IMD (more of Suffolk's third map is coloured orange). Perhaps it is of greater surprise that the prioritisation of Liverpool changes little under the new formulation. Most of Liverpool's neighbourhoods remain identified as 'disadvantaged' (marked red in the third map).

This is, however, just a schema for moving resources around. It is an inevitable result of re-calculating the target IMD measure that some areas gain whilst others lose out (where resources are fixed). However, if areas in Suffolk gain whilst neighbourhoods in Liverpool do not lose out, then how would such a change modify the geography of disadvantage [under this measure] across England? Using the 30% figure as the threshold of disadvantage, just under half a million fewer people would be designated as living in a 'disadvantaged' area. We carried out cluster analysis of the rankings on the top-line IMD indicator and our suggested Power to Change indicator, considering firstly how LSOAs cluster together (using forms of hot spot analysis) to capture how patterns of disadvantage form broad regions, and secondly the identification of outlier neighbourhoods (using Anselin's Local Moran's I) to capture differences within these wider clusters.
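As an indication of how this kind of analysis can be run, the sketch below uses the PySAL ecosystem (geopandas, libpysal and esda) to compute Getis-Ord hot spots and Local Moran's I outliers. It is a generic illustration under assumed inputs (a hypothetical LSOA shapefile with a bespoke_rank column, and Queen contiguity weights), not the analysis behind Figures 3 and 4.

import geopandas as gpd
from libpysal.weights import Queen
from esda.getisord import G_Local
from esda.moran import Moran_Local

gdf = gpd.read_file("lsoa_boundaries.shp")   # hypothetical LSOA shapefile with ranks
w = Queen.from_dataframe(gdf)                # contiguity-based spatial weights
w.transform = "r"                            # row-standardise the weights

y = gdf["bespoke_rank"].values

# Getis-Ord G*: broad clusters of similarly ranked LSOAs (hot/cold spots).
g = G_Local(y, w, star=True)
gdf["hotspot_z"] = g.Zs                      # z-scores; sign shows high- vs low-rank clusters

# Local Moran's I: outliers within wider clusters, e.g. an advantaged
# LSOA surrounded by disadvantaged neighbours.
lm = Moran_Local(y, w)
gdf["quadrant"] = lm.q                       # 1=HH, 2=LH, 3=LL, 4=HL
gdf["significant"] = lm.p_sim < 0.05         # pseudo p-values from permutations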

Figure 3

In Figures 3 and 4, the LSOAs marked red are those that appear as advantaged (close to other advantaged areas). In these Figures the left-hand map shows the clustering of indicator rankings in relation to Suffolk. The middle map shows the Getis-Ord clustering for England as a whole, whilst the right-hand map shows the Local Moran's I map, which shows where areas are located as outliers within wider regions. Where there is red there is advantage and where there is blue there is disadvantage (from an area-based perspective). Yellow areas are mixed (an area's ranking is not easily predicted from the rankings of its neighbours). It is also worth noting that the red and blue areas are not necessarily all of the most advantaged or disadvantaged areas, just areas that are close to others that are similarly ranked (whether high or low).

Figure 4

It is not surprising to see clusters of disadvantaged (blue) areas in England's northern metropolitan areas, in the West Midlands and in the extreme South West in Figure 3, which maps the top-line IMD indicator. It is also not surprising to see the East and North of London marked as deep blue, although it is worth noting that the former Kent Coalfield areas remain marked as disadvantaged in blue. So, it is England to the south of the Wash to Severn axis, as well as North Yorkshire, that is marked as 'advantaged' under the top-line IMD indicator. The Anselin outlier mapping (right-hand map) in Figure 3 points out the presence of disadvantaged LSOAs in advantaged clusters, and of advantaged LSOAs in disadvantaged clusters.

Moving to the Power to Change indicator in Figure 4, we see a change in the geography that might be targeted (in this case by investment in community businesses). More rural areas in the East and South West of England become identified as 'disadvantaged'. Areas in the East and North of London are no longer identified as disadvantaged in terms of the clustering on this measure's ranking. There is a different dynamic: to be a disadvantaged area in London is to be surrounded by advantaged areas. The East of England (including Suffolk) becomes identified with the cluster of disadvantage, although there are clearly still advantaged outlier areas in the sea of blue disadvantaged areas, just as there are disadvantaged areas in the advantaged region of London. It has to be stressed that this applies only to forms of disadvantage that flow from combinations of problematic educational, health and accessibility outcomes. There would be a case for an organisation like Power to Change to use a form of IMD that relates specifically to its core mission as a spatial guide to targeting, rather than just using the top-line IMD indicator.

The aim of the exercise is not to rubbish the general top-line IMD. I am still a fan: it still offers useful insight into the patterns of generalised area-based disadvantage across England. The English IMD is still useful to Power to Change in a general sense. However, the aim here has been to draw attention to the fact that deploying the indicator system in the light of what one is trying to achieve makes better use of the IMD system. The East and North of London is clearly a region with many disadvantaged areas, but if the aim of the exercise is to invest in community businesses that improve access to services, health and educational outcomes, there might be better areas on which to focus this specific form of investment. Whatever form of analysis we come up with to capture disadvantage, there is always a set of political choices about how to share out public spending. However, the English IMD is more than just the top-line indicator, and the top-line IMD was never intended to be the only way in which area-based disadvantage is represented.

In this delicate dance of spatial targeting, though, the real answer is to invest more in welfare services. Perhaps that is one normative step too far?

If you want to read more about our work with Power to Change, please download the report we wrote for them (available from September).

Smith, I, Green, E, Whittard, D. and Ritchie, F. (2018) Re-thinking the indices of multiple deprivation (for England): a review and exploration of alternative/complementary area-based indicator systems. Final Report. Bristol Centre for Economics and Finance (BCEF) in the Bristol Business School at the University of the West of England (UWE).

Bringing the ‘political’ back into the economy: A report from I Workshop in Contemporary Political Economy (UWE Bristol-Paris 1 Sorbonne)

Posted on

By Danielle Guizzo and Bruno Tinel


The 1st Workshop in Contemporary Political Economy recently took place at the Bristol Business School, on June 28th, 2018. It was the first event of a recently established partnership between the UWE Economics subject group (AEF) and the Paris 1 Department of Economics (Pantheon Sorbonne University), a response to the increasing importance of pluralism in economics teaching and research in the post-crisis scenario. A large proportion of attendees were young scholars (early-career researchers or PhD students), a promising generation for promoting the expansion and excellence of research in political economy and the future of the international political economy community.

We are very pleased to inform the community that the workshop was a great success in terms of the quality of the presentations, the number of participants, and the pluralism of subjects. The presentations and subsequent discussions explored the recent frontiers of contemporary political economy, aiming to expand three main areas: critical macroeconomics; financialisation; and ideology, power and the state.

The final session consisted of a roundtable about the future of pluralistic research in economics and the possibility of engagement with the mainstream of the discipline. Participants stressed the importance of institutional support and an active scholarly community in moving beyond standardized metrics and diamond lists in research assessment exercises, if we are to achieve an open, equal and inclusive dialogue in economics.

Thanks to the great enthusiasm of the workshop's participants, we will continue to organize a yearly workshop with the purpose of further promoting and disseminating teaching and research in political economy and pluralistic economics, expanding the partnership between UWE and Paris 1-Pantheon Sorbonne, and improving communication and academic exchange among scholars. Accordingly, the Department of Economics at Paris 1-Pantheon Sorbonne will organize the second edition of the workshop in contemporary political economy.

We would like to thank the Bristol Centre for Economics & Finance (BCEF), as well as the Accounting, Economics and Finance (AEF) subject group, for the grants and support that made this workshop possible. We also express our gratitude to the presenters, who delivered excellent talks and provided a space for the exchange of ideas that will contribute significantly to future partnerships and prospective research projects.


Mexico and Trump

Posted on

By Laura Povoledo

I had the good fortune of visiting Mexico last year on a research visit funded by the British Academy. Mexico is a beautiful country, full of rich history and diverse culture. But it is also a country with huge social problems.

Mexico is the 11th most populated country in the world with around 127 million people. It has been estimated that 42% of Mexico’s total population lives below the national poverty line. Getting millions out of poverty will require enormous effort, but in recent years a growing middle class has emerged, thanks to sustained economic development. The economy of Mexico is now the 11th largest in the world by purchasing power parity, and according to Goldman Sachs, by 2050 Mexico will be the 5th largest economy in the world.

However, Mexico has recently been through some very difficult years. In 2016 the GDP growth rate was below 2% and inflation was 4%. The peso steadily depreciated against the dollar, forcing the government to increase the price of gasoline (half of the fuel consumed in Mexico is imported from the US). Given the country's poor public transport infrastructure, the cost of private transport is especially important in Mexico, so the increased price of petrol will severely affect households' living standards. The recent increase in the minimum wage is unlikely to meet the needs of those on low incomes. This deteriorating economic environment prompted Standard & Poor's to change its outlook from "stable" to "negative". And of course, in 2016 Trump was elected.

One of the risks posed by a failing economy is nationalism (and Trump himself is an example). However, Latin American nationalism is different from Trump's right-wing nationalism. Nationalisms in Latin America have often been associated with left-wing political positions, which is explained by the colonial past and the struggles of national liberation. Mexico will hold its general election on July 1st, and the left's presidential candidate, Andrés Manuel López Obrador, is as opposed to NAFTA as Trump is.

NAFTA has led to an enormous expansion of US companies into Mexican territory, and several Mexican economists I talked to were keen to point out that protectionist policies will ultimately damage American interests. It has been estimated that in its 22 years NAFTA has generated 6 million jobs in the US. 40% of the components of Mexican exports are actually produced in the US; in other words, 40 cents of every dollar spent on Mexican exports support jobs in the US. There are now 35 million people of Mexican origin living in the United States, of whom about 11 million were born in Mexico. Mexican immigrants take the hardest and lowest-paid occupations, and they provide a source of manpower in many industries, such as agriculture, construction and food processing.

During my visit there I often found a strong rejection of Trump’s rhetoric and a determination to fight against all adversities. A resurgence of national pride may not be totally undesirable if it spurs Mexico to take anti-poverty measures, to support its growing middle class and to dismantle the other wall that is holding it back, that of corruption.


Measuring non-compliance with minimum wages

Posted on

By Professor Felix Ritchie

When a minimum wage is set, ensuring that employees actually receive at least that minimum is a basic requirement of regulators. Compliance with the minimum wage can vary wildly: amongst richer countries, around 1%-3% of wages appear to fall below the minimum, but in developing countries non-compliance rates can be well over 50%.

As might be expected, much non-compliance exists in the ‘informal’ economy: family businesses using relatives on an ad hoc basis, cash-only payments for casual work, agricultural labouring, or simply the use of illegal workers. However, there is also non-compliance in the formal economy. This is analysed by regulators using large surveys of employers and employees which collect detailed information on hours and earnings. This analysis allows them to identify broad characteristics and the overall scale of non-compliance in the economy.

In the UK, enforcement of the minimum wage is carried out by HM Revenue and Customs, supported by the Low Pay Commission. With 30 million jobs in the UK, and 99% of them paying at or above the minimum wage, effective enforcement means knowing where to look for infringements (for example, retail and hospitality businesses tend to pay low, but compliant, wages; personal services are more likely to pay low wages below the minimum; small firms are more likely to be non-compliant than large ones, and so on). Ironically, the high rate of compliance in the UK can bring problems, as measurement becomes sensitive to the way it is calculated.

A new paper by researchers at UWE and the University of Southampton looks at how non-compliance with minimum wages can be accurately measured, particularly in high-income countries. It shows how the quantitative measurement of non-compliance can be affected by definitions, data quality, data collection methods, processing and the choice of non-compliance measure.

The paper shows that small variations in these can have disproportionate effects on estimates of the amount of non-compliance. As a case study, it analyses the earnings of UK apprentices to show, for example, that even something as simple as the number of decimal places allowed on a survey form can have a significant effect on the non-compliance rates.
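To make the decimal-places point concrete, here is a toy illustration in Python. It is not the paper's method or data: the pay and hours figures are invented, and the apprentice minimum of £3.50 per hour (the rate that applied from April 2017) is used as the threshold.

# The derived hourly wage is typically weekly pay divided by weekly hours.
MIN_WAGE = 3.50                       # apprentice rate from April 2017, pounds/hour

weekly_pay, weekly_hours = 131.25, 37.5
hourly = weekly_pay / weekly_hours    # exactly 3.50 -> compliant

# If the survey form only records whole pounds (0 decimal places), the
# same job can appear non-compliant once the hourly rate is recomputed:
hourly_0dp = round(weekly_pay) / weekly_hours   # 131 / 37.5 = 3.4933...

for label, h in [("pay to 2 d.p.", hourly), ("pay to 0 d.p.", hourly_0dp)]:
    print(f"{label}: £{h:.4f}/hour, compliant: {h >= MIN_WAGE}")

Aggregated over thousands of survey records, recording artefacts like this can shift a measured non-compliance rate by amounts that matter when the true rate is only a few per cent.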

The study also throws light on the wider topic of data quality. Much research is focused on marginal analyses: looking at the relative relationships between different factors. These don't tend to be obviously sensitive to very small variations in data quality, but that is partly because it can be harder to identify sensitive values.

In contrast, non-compliance with the minimum wage is a binary outcome: a wage is either compliant or it is not. This makes tiny variations (just above or just below the line) easier to spot, compared to marginal analysis. Whilst this study focuses on compliance with the minimum wage, it highlights how an understanding of all aspects of the data collection process, including operational factors such as limiting the number of significant digits, can help to improve confidence in results.

Ritchie F., Veliziotis M., Drew H., and Whittard D. (2018) “Measuring compliance with minimum wages”. Journal of Economic and Social Measurement, vol. 42, no. 3-4, pp. 249-270. https://content.iospress.com/articles/journal-of-economic-and-social-measurement/jem448

“A Remarkable National Effort”: The Dismal Arithmetic of Austerity

Posted on

Dr Rob Calvert Jump and Dr Jo Michell assess public debt accounting in this article.

In a recent tweet, George Osborne celebrated the fact that the UK now has a surplus on the government’s current budget. Osborne cited an FT article noting that “… deficit reduction has come at the cost of an unprecedented squeeze in public spending. That squeeze is now showing up in higher waiting times in hospitals for emergency treatment, worse performance measures in prisons, severe cuts in many local authorities and lower satisfaction ratings for GP services.”

It is a measure of how far the debate has departed from reality that widespread degradation of essential public services can be regarded as cause for celebration.

The official objective of fiscal austerity was to put the public finances back on a sustainable path. According to this narrative, government borrowing was out of control as a result of the profligacy of the Labour government. Without a rapid change of policy, the UK faced a fiscal crisis caused by bond investors taking fright and interest rates rising to unsustainable levels.

Is this plausible? To answer, we present alternative scenarios in which actual and projected austerity is significantly reduced and examine the resulting outcomes for national debt.

Public sector net debt (the headline government debt figure) in any year is equal to the debt at the end of the previous year plus the deficit plus adjustments:

PSND_t = PSND_{t-1} + PSNB_t + ADJ_t

where PSND_t is the public sector net debt at the end of financial year t, PSNB_t is total public sector borrowing (the deficit) over the same year, and ADJ_t is any non-borrowing adjustment. This adjustment can be inferred from the OBR's figures for both actual data and projections. In our simulations, we simply take the OBR adjustment figures as constants. Given an assumption about the nominal size of the deficit in each future year, we can then calculate the implied size of the debt over the projection period.

What matters is not the size of the debt in money terms, but as a share of GDP. We therefore also need to know nominal GDP for each future year in our simulations. This is less straightforward because nominal GDP is affected by government spending and taxation. Estimates of the magnitude of this effect – known as the fiscal multiplier – vary significantly. The OBR, for instance, assumes a value of 1.1 for the effect of current government spending.  In order to avoid debate on the correct size of the nominal multiplier, we assume it is equal to zero.[1] This is a very conservative estimate and, like the OBR, we believe the correct value is greater than one. The advantage of this approach is that we can use OBR projections for nominal GDP in our simulations without adjustment.

We simulate three alternative scenarios in which the pace of actual and predicted deficit reduction is slowed by a third, a half and two thirds respectively.[2] The evolution of the public debt-to-GDP ratio in each scenario is shown below, alongside actual figures and current OBR projections based on government plans.
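For concreteness, the following sketch rolls the accounting identity above forward under slowed consolidation. All inputs are illustrative placeholders rather than OBR figures, and, consistent with the zero-multiplier assumption, the same nominal GDP path is used in every scenario.

def debt_path(psnd0, deficits, adjustments):
    """Roll PSND[t] = PSND[t-1] + PSNB[t] + ADJ[t] forward from an initial stock."""
    psnd = [psnd0]
    for b, adj in zip(deficits, adjustments):
        psnd.append(psnd[-1] + b + adj)
    return psnd[1:]

def slow_consolidation(deficits, factor):
    """Rescale each year's percentage change in the deficit by `factor`,
    as in footnote [2]: factor = 0.5 halves the pace of deficit reduction."""
    out = [deficits[0]]
    for prev, cur in zip(deficits, deficits[1:]):
        out.append(out[-1] * (1 + factor * (cur / prev - 1)))
    return out

# Illustrative inputs in £bn (not OBR data): a falling deficit, a constant
# non-borrowing adjustment, and a single nominal GDP path (zero multiplier).
baseline_deficit = [120, 100, 80, 60, 40, 20]
adjustments = [5] * 6
nominal_gdp = [1700, 1760, 1820, 1880, 1950, 2020]

for factor in (1.0, 2 / 3, 0.5, 1 / 3):   # 1.0 reproduces the baseline plans
    debts = debt_path(1300, slow_consolidation(baseline_deficit, factor), adjustments)
    ratios = [100 * d / g for d, g in zip(debts, nominal_gdp)]
    print(f"pace x{factor:.2f}: peak debt/GDP = {max(ratios):.1f}%")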


Fig [1]

Fig [2] 

Despite the fact that the deficit is substantially higher in our alternative scenarios, there is little substantive variation in the implied time paths for debt-to-GDP ratios.  In our scenarios, the point at which the debt-to-GDP ratio reaches a peak is delayed by around two years. If the speed of deficit reduction is halved, public debt peaks at around 97% of GDP in 2019-20, compared to the OBR’s projected peak of 86% in the current fiscal year. Given the assumption of zero nominal multipliers, these projections are almost certainly too high: relaxing austerity would have led to higher growth and lower debt-to-GDP ratios.

Now consider the difference in spending.

Halving the speed of deficit reduction would have meant around £10 billion in extra spending in 2011-12, £8 billion in 2012-13, £19 billion in 2013-14, £21 billion in 2014-15, £29 billion extra in 2015-16, and £37 billion extra in 2016-17.  To put these figures into context, £37 billion is around 30% of total health expenditure in 2016-17.  The bedroom tax, on the other hand, was initially estimated to save less than £500 million per year.  These are large sums of money which would have made a material difference to public expenditure.

Would this extra spending have led to a fiscal crisis, as supporters of austerity argue? It is hard to see how a plausible argument can be made that a crisis is substantially more likely with a debt-to-GDP ratio of 97% than of 86%. Several comparable countries maintain higher debt ratios without any hint of funding problems: in 2017, the US figure was around 108%, the Belgian figure around 104%, and the French figure around 97%.

It is now beyond reasonable doubt that austerity led to increases in mortality rates – government cuts caused otherwise avoidable deaths. These could have been avoided without any substantial effect on the debt-to-GDP ratio. The argument that cuts were needed to avoid a fiscal crisis cannot be sustained.

[1] There is surprisingly little research on the size of nominal multipliers – most work focuses on real (i.e. inflation adjusted) multipliers.

[2] We calculate the actual (past years) or projected (future years) percentage change in the nominal deficit from the OBR figures and reduce this by a third, a half and two thirds respectively. The table below provides details of the middle projection where the pace of nominal deficit reduction is reduced by half.

The Knowledge We Have Lost In Information – The History Of Information in Modern Economics, by Philip Mirowski and Edward Nik-Khah

Posted on

Dr Sebastian Berger’s book review is published in the Heterodox Economics Newsletter

Fake news, post-truth, alternative facts, the commercialization of science, the wholesale destruction of university library collections in the name of “information access” and “digital first”; what does all this have to do with information economics? What happens to cognition, knowledge, truth, wisdom and understanding in the information economy? What are the vortices of images emerging between the natural and the social sciences that give rise to our understanding of “information”? What understanding of human beings is this based on? Who are the relevant actors, their politics and intellectual projects? Anybody concerned with such questions will benefit from reading the book under review, and these are what this review will focus on.

Mirowski and Nik-Khah present the comprehensive results of their fascinating research on information economics that began as far back as Mirowski’s work on cyborg economics and Nik-Khah’s dissertation and his Kapp Award-winning article on auction design. Their book is intended as a contribution to the recent history of economic thought, written in the style of a spy novel that tries to reconstruct who got us to where we are today and how this could happen. Theirs is a grand story of the Great Transformation of the economics profession into market engineers via modern information economics or market design theory. Spying as a method for historians of economic thought is meant to demonstrate these developments and to provide an alternative to performativity theory, which is deemed too vague to be able to account for the details of the interplay of material and intellectual factors. The book is structured into 17 chapters, some of which are as concise as six pages. The first two chapters set the scene and illustrate that there is something rotten about our understanding of the history and state of information economics. The core chapters deal with the roles of natural science, the Nobels and Neoliberals, the Socialist Calculation Debate, Hayek’s economics, Market Socialists at the Cowles Commission, the three schools of market design, two recent case studies, and a concluding chapter on artificial ignorance.

K. William Kapp once expressed his fundamental view that the dehumanization of economic theory and of social reality are related and spring from an erroneous understanding of human beings (Kapp 1985). So, what concept and understanding of human beings are at the base of information economics? (Though not FBI agents, information economists conventionally refer to human beings as "agents", which seems suitable to a spy novel.) The authors convincingly demonstrate, in particular in chapter 9, that information economists basically assume the irrelevance of agents' cognition and preferences for the desired market outcome they are paid to design. This essentially means that information economists adopt a self-image of being smarter than people and able to design mechanisms that extract from agents information they are unaware of possessing. It seems that the quicksand of double truths inherent in this assumption remains hidden from their purview. They seem to have no trouble assuming that somehow all the limitations that apply to the agents of their models do not apply to themselves.

This goes back to similar double truths in the works of the intellectual behind the foundational ideas of information economics, i.e. Friedrich von Hayek, who denied people the ability to reason about society as a whole while reserving this right and ability for himself. Several chapters describe how the mature Hayek believed that people's cognition can be disregarded as it does not matter for the operation of the market, which is conceived as an information processor (cyborg, machine, computer) more powerful than any human being. Furthermore, according to Hayek the market arguably expands into the realm of non-knowledge, i.e. the unknown unknowns, which are subject to evolutionary forces and not within conscious human control. Success and failure in the market thus depend on one's inheritance of unknown unknowns. The best one can hope for is that the market sheds light onto one's own total darkness in a way that makes it marketable. In this tradition, market designers claim to be able to design markets that extract information from people's unknown unknowns, that is, to get agents to give up information they hold. Mirowski and Nik-Khah conclude that in this information economy the market no longer gives people what they want; rather, people have to give the market what it wants (cf. the final chapter). While this seems to suggest that the inside of human beings somehow matters for the establishment of Truth, it is secondary to the overriding claim that the Market is the seat and arbiter of Truth. Truth is thus turned into a function of the unequal and arbitrary distribution of the ability to pay (what prevents the top 1% from buying Truth?).

Mirowski and Nik-Khah judge the essence of these views to be pure Social Darwinism with a strong dose of predestination (p. 69). Along with the authors, I think that the mature Hayek's grave error was to deny that human beings are the seat of the kind of Truth that is revealed as a gift from introspection, that is, self-knowledge that enables self-cultivation. Hayek's highly problematic understanding of human beings is compounded by a problem that Tony Lawson has recently pointed to in an interview (Lawson 2018): Hayek denied the existence of objectifiable bio-physical human needs, whose satisfaction can be planned in a social provisioning process. Otto Neurath, Max Weber, K.W. Kapp and K. Polanyi called this material or substantive rationality.

This clash of views goes back to the Socialist Calculation Debate, which Mirowski and Nik-Khah identify as the birthplace of information economics (chapter 5). It is the great achievement of this book to have pointed out the seminal importance of this debate for understanding economics today. Unfortunately, the book does not mention the "lost" Neurath wing of the Socialist Calculation Debate and focuses only on the Cowles men's enthusiasm for a cybernetic socialism. According to the book, it was the market socialists following Oskar Lange's argument who developed information economics at the Cowles Commission. The authors support their main thesis with plenty of evidence that the market socialists "lost track of their political argument and deep motivations" and were haunted by Hayek until they ended up as neoliberals who sell themselves as experts in market design. This raises the question of the reasons for the odyssey of the Walrasian market socialists following Oskar Lange's intellectual project.

For the full review please see:  https://www.heterodoxnews.com/HEN/book%20reviews.html