Measuring non-compliance with minimum wages


By Professor Felix Ritchie

When a minimum wage is set, ensuring that employees actually receive at least that minimum is a basic task for regulators. Compliance with the minimum wage varies widely: amongst richer countries, only around 1%-3% of wages appear to fall below the minimum, but in developing countries non-compliance rates can be well over 50%.

As might be expected, much non-compliance exists in the ‘informal’ economy: family businesses using relatives on an ad hoc basis, cash-only payments for casual work, agricultural labouring, or simply the use of illegal workers. However, there is also non-compliance in the formal economy. This is analysed by regulators using large surveys of employers and employees which collect detailed information on hours and earnings. This analysis allows them to identify broad characteristics and the overall scale of non-compliance in the economy.

In the UK, enforcement of the minimum wage is carried out by HM Revenue and Customs, supported by the Low Pay Commission. With 30 million jobs in the UK, and 99% of them paying at or above the minimum wage, effective enforcement means knowing where to look for infringements (for example, retail and hospitality businesses tend to pay low, but compliant, wages; personal services are more likely to pay wages below the minimum; small firms are more likely to be non-compliant than large ones, and so on). Ironically, the high rate of compliance in the UK can bring problems, as measurement becomes sensitive to the way it is calculated.

A new paper by researchers at UWE and the University of Southampton looks at how non-compliance with minimum wages can be accurately measured, particularly in high-income countries. It shows how the quantitative measurement of non-compliance can be affected by definitions, data quality, data collection methods, processing and the choice of non-compliance measure.

The paper shows that small variations in these can have disproportionate effects on estimates of the amount of non-compliance. As a case study, it analyses the earnings of UK apprentices to show, for example, that even something as simple as the number of decimal places allowed on a survey form can have a significant effect on the non-compliance rates.
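To see how this can happen, consider a toy illustration (not taken from the paper): an hourly rate derived from weekly pay and hours, where the survey form only allows two decimal places for the pay field. The minimum rate, hours and pay used here are purely hypothetical.

```python
# Toy illustration: how the precision allowed on a survey form can flip a
# binary compliance flag. All figures are hypothetical.

MINIMUM = 7.83   # assumed hourly minimum, in pounds
HOURS = 37.5     # weekly hours reported

print(f"true weekly pay at the minimum: {MINIMUM * HOURS:.3f}")  # 293.625

# A pay field limited to two decimal places must record 293.62 or 293.63.
for recorded_pay in (293.62, 293.63):
    derived_rate = recorded_pay / HOURS
    flag = "non-compliant" if derived_rate < MINIMUM else "compliant"
    print(f"recorded pay {recorded_pay}: derived hourly rate {derived_rate:.4f} -> {flag}")

# recorded pay 293.62: derived hourly rate 7.8299 -> non-compliant
# recorded pay 293.63: derived hourly rate 7.8301 -> compliant
```

A wage that is exactly compliant can thus be flagged either way depending purely on how the pay field is rounded, which is the kind of measurement sensitivity the paper documents.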

The study also throws light on the wider topic of data quality. Much research is focused on marginal analyses: looking at the relative relationships between different factors. These don’t tend to be obviously sensitive to very small variations in data quality, but that is partly because it can be harder to identify sensitive values.

In contrast, non-compliance with the minimum wage is a binary outcome: a wage is either compliant or it is not. This makes tiny variations (just above or just below the line) easier to spot, compared to marginal analysis. Whilst this study focuses on compliance with the minimum wage, it highlights how an understanding of all aspects of the data collection process, including operational factors such as limiting the number of significant digits, can help to improve confidence in results.

Ritchie F., Veliziotis M., Drew H., and Whittard D. (2018) “Measuring compliance with minimum wages”. Journal of Economic and Social Measurement, vol. 42, no. 3-4, pp. 249-270. https://content.iospress.com/articles/journal-of-economic-and-social-measurement/jem448

“A Remarkable National Effort”: The Dismal Arithmetic of Austerity


Dr Rob Calvert Jump and Dr Jo Michell assess public debt accounting in this article.

In a recent tweet, George Osborne celebrated the fact that the UK now has a surplus on the government’s current budget. Osborne cited an FT article noting that “… deficit reduction has come at the cost of an unprecedented squeeze in public spending. That squeeze is now showing up in higher waiting times in hospitals for emergency treatment, worse performance measures in prisons, severe cuts in many local authorities and lower satisfaction ratings for GP services.”

It is a measure of how far the debate has departed from reality that widespread degradation of essential public services can be regarded as cause for celebration.

The official objective of fiscal austerity was to put the public finances back on a sustainable path. According to this narrative, government borrowing was out of control as a result of the profligacy of the Labour government. Without a rapid change of policy, the UK faced a fiscal crisis caused by bond investors taking fright and interest rates rising to unsustainable levels.

Is this plausible? To answer, we present alternative scenarios in which actual and projected austerity is significantly reduced and examine the resulting outcomes for national debt.

Public sector net debt (the headline government debt figure) in any year is equal to the debt at the end of the previous year plus the deficit plus adjustments:

PSND_t = PSND_{t-1} + PSNB_t + ADJ_t

where PSND_t is the public sector net debt at the end of financial year t, PSNB_t is total public sector borrowing (the deficit) over the same year, and ADJ_t is any non-borrowing adjustment. This adjustment can be inferred from the OBR’s figures for both actual data and projections. In our simulations, we simply take the OBR adjustment figures as constants. Given an assumption about the nominal size of the deficit in each future year, we can then calculate the implied size of the debt over the projection period.

What matters is not the size of the debt in money terms, but as a share of GDP. We therefore also need to know nominal GDP for each future year in our simulations. This is less straightforward because nominal GDP is affected by government spending and taxation. Estimates of the magnitude of this effect – known as the fiscal multiplier – vary significantly. The OBR, for instance, assumes a value of 1.1 for the effect of current government spending.  In order to avoid debate on the correct size of the nominal multiplier, we assume it is equal to zero.[1] This is a very conservative estimate and, like the OBR, we believe the correct value is greater than one. The advantage of this approach is that we can use OBR projections for nominal GDP in our simulations without adjustment.
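As a rough sketch of the arithmetic (with placeholder figures rather than the OBR series used in the article), the simulation amounts to rolling the identity forward and dividing by an assumed nominal GDP path:

```python
# Roll the identity PSND_t = PSND_{t-1} + PSNB_t + ADJ_t forward and express
# the result as a share of nominal GDP. All figures here are placeholders.

def debt_path(initial_debt, deficits, adjustments):
    debt = [initial_debt]
    for psnb, adj in zip(deficits, adjustments):
        debt.append(debt[-1] + psnb + adj)
    return debt[1:]

deficits    = [45.0, 40.0, 35.0]        # illustrative PSNB path, £bn
adjustments = [5.0, 5.0, 5.0]           # illustrative non-borrowing adjustments, £bn
nominal_gdp = [2100.0, 2170.0, 2240.0]  # illustrative nominal GDP path, £bn

debts = debt_path(1780.0, deficits, adjustments)
print([round(100 * d / g, 1) for d, g in zip(debts, nominal_gdp)])  # debt-to-GDP, %
```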

We simulate three alternative scenarios in which the pace of actual and predicted deficit reduction is slowed by a third, a half and two thirds respectively.[2] The evolution of the public debt-to-GDP ratio in each scenario is shown below, alongside actual figures and current OBR projections based on government plans.

 

Fig [1] and Fig [2]: evolution of the public debt-to-GDP ratio – actual figures and OBR projections alongside the three alternative scenarios.

Despite the fact that the deficit is substantially higher in our alternative scenarios, there is little substantive variation in the implied time paths for debt-to-GDP ratios.  In our scenarios, the point at which the debt-to-GDP ratio reaches a peak is delayed by around two years. If the speed of deficit reduction is halved, public debt peaks at around 97% of GDP in 2019-20, compared to the OBR’s projected peak of 86% in the current fiscal year. Given the assumption of zero nominal multipliers, these projections are almost certainly too high: relaxing austerity would have led to higher growth and lower debt-to-GDP ratios.

Now consider the difference in spending.

Halving the speed of deficit reduction would have meant around £10 billion in extra spending in 2011-12, £8 billion in 2012-13, £19 billion in 2013-14, £21 billion in 2014-15, £29 billion extra in 2015-16, and £37 billion extra in 2016-17.  To put these figures into context, £37 billion is around 30% of total health expenditure in 2016-17.  The bedroom tax, on the other hand, was initially estimated to save less than £500 million per year.  These are large sums of money which would have made a material difference to public expenditure.

Would this extra spending have led to a fiscal crisis, as supporters of austerity argue? It is hard to see how a plausible argument can be made that a crisis is substantially more likely with a debt-to-GDP ratio of 97% than of 86%. Several comparable countries maintain higher debt ratios without any hint of funding problems: in 2017, the US figure was around 108%, the Belgian figure around 104%, and the French figure around 97%.

It is now beyond reasonable doubt that austerity led to increases in mortality rates – government cuts caused otherwise avoidable deaths. These could have been avoided without any substantial effect on the debt-to-GDP ratio. The argument that cuts were needed to avoid a fiscal crisis cannot be sustained.

[1] There is surprisingly little research on the size of nominal multipliers – most work focuses on real (i.e. inflation adjusted) multipliers.

[2] We calculate the actual (past years) or projected (future years) percentage change in the nominal deficit from the OBR figures and reduce this by a third, a half and two thirds respectively. The table below provides details of the middle projection where the pace of nominal deficit reduction is reduced by half.
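A minimal sketch of that construction, with a hypothetical deficit path standing in for the OBR figures:

```python
# Scale down each year's percentage change in the nominal deficit and rebuild
# the path, as described in footnote [2]. The figures below are placeholders.

def slowed_deficit_path(deficits, reduce_pace_by=0.5):
    slowed = [deficits[0]]
    for previous, current in zip(deficits, deficits[1:]):
        pct_change = (current - previous) / previous
        slowed.append(slowed[-1] * (1 + pct_change * (1 - reduce_pace_by)))
    return slowed

baseline = [120.0, 100.0, 80.0, 60.0]   # illustrative deficit path, £bn
print([round(d, 1) for d in slowed_deficit_path(baseline, reduce_pace_by=0.5)])
# [120.0, 110.0, 99.0, 86.6] -- deficit falls at half the original pace
```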

Training Researchers to Work with Confidential Data: A New Approach


Prof Felix Ritchie of UWE’s Business School has recently spent time with the Northern Ireland Statistics and Research Agency and makes the following analysis.

I’ve just spent two days at the Northern Ireland Statistics and Research Agency (NISRA), working with them to develop training for researchers who need access to the confidential data held by NISRA for research. This training is jointly being developed by the statistical agencies of the UK (NISRA, the General Register Office for Scotland, and the Office for National Statistics in England and Wales), as well as HMRC, the UK Data Archive and academic partners. The project is being led by ONS as part of its role to accredit researchers under the new Digital Economy Act, with UWE providing key input; other statistical agencies, such as INSEE in France and the Australian Bureau of Statistics, are being consulted and are trialling some of the material.

Training researchers in the use of confidential data is common across statistical agencies around the world, particularly when those researchers need access to the most sensitive data only available through Controlled Access Facilities (CAFs). The growth in CAFs in recent years has mostly come from virtual desktops, which allow researchers to run unlimited analyses while still operating in an environment controlled by the data holder. There are now six of these in the UK, and many countries in continental Europe, North America and Oceania operate at least one. The existence of CAFs has led to an explosion in social science research, as many things that were previously not allowed because it was too risky to send out the data (such as the use of non-public business data or detailed personal data) have now become feasible and cost-effective.

All agencies running CAFs provide some training for researchers; around half of these use ‘passive’ training such as handouts or web pages, but the other half require face-to-face training. Much of this training has evolved from a programme developed at ONS in the UK in the 2000s and this training was recommended as an example of ‘best practice’ for face-to-face training by a Eurostat expert group.

However, this style of training is showing its age. Such training typically has two components: firstly how to behave in the CAFs and secondly how to prevent confidential data from mistakenly showing up in research outputs (‘statistical disclosure control’, or SDC). Both are typically taught mechanistically, in the form of dos and don’ts, explanations of laws and penalties and lots of SDC exercises. Overall the aim of the courses is to impart information to the researcher.

The new training is radically different from the old training. It starts from the premise that researchers are both the biggest risk and the biggest advantage to any CAF: the biggest risk because a poorly-trained or malcontented researcher can negate any security mechanism put in place; the biggest advantage because highly-motivated researchers mean cheaper system design, better and more robust security, and the chance for the data holder to exploit the goodwill of researchers in, for example, methodological research.

In this world the main aim of the training is to encourage the researcher to see himself or herself as part of the data community. If this can be established then the rest of the training follows as a consequence. For example, knowledge of the legal environment or SDC is shared not because it keeps you out of jail but because everyone needs to understand this so the community as a whole works. This gives the course quite a different feel to more traditional courses: much of the day is spent in open-ended facilitated discussions exploring concepts of data access.

The training was designed from the ground up in order to take advantage of recent developments in thinking about data access and SDC. This was also done to avoid being restricted by having to ‘fit’ preconceived ideas about what worked or not; material was included on its own merits, not whether “this was what we used to do…”. For example, the previous SDC component had a large number of numerical examples, developed over many years, leading to attendees remarking on afternoons spent “doing Sudoku”. We reviewed every example to identify the minimum set of principles needing to be explored and then wrote a small number of new examples based on this minimum set. On the other hand, the previous training had relatively little to say about the context for checking outputs for confidentiality breaches; this has now been expanded as it fits with the ethos of understanding why things are done.

Of course, this was not all plain sailing. The original structure, trialled in June 2017, survived just one presentation before being comprehensively abandoned. Modules have dropped in and out and been moved around. The initial test for the course has been completely rewritten (a topic for a later blog). Various sections have been inserted as ‘options’ to take account of regional variations in operating practices. Throughout this, multiple organisations have been able to feed into the process so that the final product itself has a sense of community ownership.

We are now at the stage of training-the-trainers to enable independent delivery around the UK. This is already generating much feedback for the future development of the course: for example, a need has arisen for ‘crib sheets’ to help in the facilitation of certain exercises. Overall, however, we are confident that we have a well-structured, informative course that meets the needs of 21st century data training.

Further reading: for more information on the evidential and conceptual basis for the course, see Ritchie F., Green E., Newman J. and Parker T. (2017) “Lessons Learned in Training ‘Safe Users’ of Confidential Data”. UNECE work session on Statistical Data Confidentiality 2017. Eurostat.

The Knowledge We Have Lost In Information – The History Of Information in Modern Economics, by Philip Mirowski and Edward Nik-Khah


Dr Sebastian Berger’s book review is published in the Heterodox Economics Newsletter

Fake news, post-truth, alternative facts, the commercialization of science, the wholesale destruction of university library collections in the name of “information access” and “digital first”; what does all this have to do with information economics? What happens to cognition, knowledge, truth, wisdom and understanding in the information economy? What are the vortices of images emerging between the natural and the social sciences that give rise to our understanding of “information”? What understanding of human beings is this based on? Who are the relevant actors, their politics and intellectual projects? Anybody concerned with such questions will benefit from reading the book under review, and these are what this review will focus on.

Mirowski and Nik-Khah present the comprehensive results of their fascinating research on information economics that began as far back as Mirowski’s work on cyborg economics and Nik-Khah’s dissertation and his Kapp Award-winning article on auction design. Their book is intended as a contribution to the recent history of economic thought, written in the style of a spy novel that tries to reconstruct who got us to where we are today and how this could happen. Theirs is a grand story of the Great Transformation of the economics profession into market engineers via modern information economics or market design theory. Spying as a method for historians of economic thought is meant to demonstrate these developments and to provide an alternative to performativity theory, which is deemed too vague to be able to account for the details of the interplay of material and intellectual factors. The book is structured into 17 chapters, some of which are as concise as six pages. The first two chapters set the scene and illustrate that there is something rotten about our understanding of the history and state of information economics. The core chapters deal with the roles of natural science, the Nobels and Neoliberals, the Socialist Calculation Debate, Hayek’s economics, Market Socialists at the Cowles Commission, the three schools of market design, two recent case studies, and a concluding chapter on artificial ignorance.

K. William Kapp once expressed his fundamental view that the dehumanization of economic theory and social reality are related and spring from an erroneous understanding of human beings. (Kapp 1985) So, what concept and understanding of human beings are at the base of information economics? (Though not FBI agents, information economists conventionally refer to human beings as “agents” which seems suitable to a spy novel.) The authors convincingly demonstrate, in particular in chapter 9, that information economists basically assume the irrelevancy of cognition and preferences of agents for the desired market outcome they are paid to design. This essentially means that information economists adopt a self-image of being smarter than people and being able to design mechanisms that extract the information from the agents that they are unaware of possessing. It seems that the quicksand of double truths inherent in this assumption remains hidden from their purview. They seem to have no trouble assuming that somehow all the limitations that apply to the agents of their models do not apply to themselves.

This goes back to similar double truths in the works of the intellectual behind the foundational ideas of information economics, i.e. Friedrich von Hayek, who denied people the ability to reason about society as a whole, while he reserved this right and ability to himself. Several chapters describe how the mature Hayek believed that people’s cognition can be disregarded as it does not matter for the operation of the market, which is conceived as an information processor (cyborg, machine, computer) more powerful than any human being. Furthermore, according to Hayek the market arguably expands into the realm of non-knowledge, i.e. the unknown unknown, that is subject to evolutionary forces and not within human conscious control. Success and failure in the market thus depend on one’s inheritance of unknown unknowns. The best one can hope for is that the market sheds light onto one’s own total darkness in a way that it becomes marketable. In this tradition, market designers claim to be able to design markets that extract information from people’s unknown unknown, that is, to get agents to give up information they hold. Mirowski and Nik-Khah conclude that in this information economy the market no longer gives people what they want but people have to give the market what it wants (cf. the final chapter). While this seems to suggest that the inside of human beings somehow matters for the establishment of Truth, this is secondary to the overriding claim that the Market is the seat and arbiter of Truth. Truth is thus turned into a function of the unequal and arbitrary distribution of the ability to pay (what prevents the top 1% from buying Truth?).

Mirowski and Nik-Khah judge the essence of these views as pure Social Darwinism with a strong dose of predestination. (p. 69) Along with the authors, I think that the mature Hayek’s grave error was to deny that human beings are the seat of the kind of Truth that is revealed as a gift from introspection, that is, self-knowledge that enables self-cultivation. Hayek’s highly problematic understanding of human beings is compounded by a problematic that Tony Lawson has recently pointed to in an interview (Lawson 2018). That is, Hayek denied the existence of bio-physical human needs that are objectifiable, such that their satisfaction can be planned in a social provisioning process. Otto von Neurath, Max Weber, K.W. Kapp and K. Polanyi called this material or substantive rationality.

This clash of views goes back to the Socialist Calculation Debate, which Mirowski and Nik-Khah identify as the birthplace of information economics (chapter 5). It is the great achievement of this book to have pointed out the seminal importance of this debate for understanding economics today. Unfortunately, the book does not mention the “lost” Neurath-wing of the Socialist Calculation Debate and only focuses on the Cowles men’s enthusiasm for a cybernetic socialism. According to the present book it was the market socialists following Oskar Lange’s argument that developed information economics at the Cowles Commission. The authors support their main thesis with plenty of evidence that the market socialists “lost track of their political argument and deep motivations” and were haunted by Hayek to end up as neoliberals who sell themselves as experts in market design. This raises the question as to the reasons for the odyssey of Walrasian market socialists following Oskar Lange’s intellectual project.

For the full review please see:  https://www.heterodoxnews.com/HEN/book%20reviews.html

Happiness in Bangladesh: The Role of Religion and Connectedness


Dr Tim Hinks, Senior Economics Lecturer at UWE, in conjunction with fellow academics Joe Devine and Arif Naveed, has published this paper in the Journal of Happiness Studies.


Abstract
Research into the relation between religion and happiness offers inconclusive evidence. Religion seems to matter but it is not entirely clear how and why. Moreover much of the research to date is rooted in western experiences. This article analyzes primary data from Bangladesh to examine how religion figures in people’s wellbeing and life chances. It identifies differences in reported happiness between the country’s two largest religious populations: Muslims and Hindus. Our main argument is that the significance of religion is only really understood when considered alongside social, economic and political processes. The data and analysis make an important contribution to the limited knowledge we have of the relation between religion, political connectedness and happiness in non-western societies. It also highlights the need to incorporate more contextualizing analyses into our assessments of the relation between religion and happiness.

Introduction
1.1 Religion and Wellbeing
Academic interest in the connection between religion and happiness has grown significantly over recent years, and produced an impressive body of scholarship. Many studies demonstrate a positive association between religion and happiness. The significance of this association should not be underestimated. For example Witter et al. (1985) reviewed 28 wellbeing studies and found that the majority reported a positive association between religion and subjective wellbeing. Moreover they found that religion accounted for 2–6% of the variation in subjective wellbeing. Ellison et al. (1989) went one step further arguing that the effect of religion on subjective wellbeing is as strong if not stronger than income. This finds some support in Luttmer’s claim that religion is positively correlated with measures of subjective wellbeing even when demographic variables such as income, age and marital status are taken into account (Luttmer 2005).

In what ways then does religion make us happy? The evidence offered by the literature falls broadly into two categories, reflecting a distinction first introduced by Allport and Ross’s (1967) pioneering work into religious orientation. According to Allport and Ross, people have intrinsic or extrinsic motivations in relation to religion. The latter sees religion as a means to achieve particular goals including non-religious ones while the former is autonomous and considers religion as an end in itself. Although this distinction is not without its critics (see Lavrič and Flere 2007), it has left its mark on research into religion and happiness. On the one hand therefore it is argued that religion enhances wellbeing because it offers access to support structures or enables individuals to cope with stress (Lim and Putnam 2010), or to adapt preferences or aspirations (Clark 2012). On the other hand, religion enhances wellbeing because it offers a sense of meaning and purpose, and acts as a moral compass in this as well as the ‘after-life’ (Greeley and Hout 2006). What is striking however is that we can find evidence of both intrinsic and extrinsic benefits of religion in all of the world’s major religions including Islam (Sahraian et al. 2013), Hinduism (Ganga and Kutty 2013), Judaism (Levin 2014), Buddhism (Elliot 2014), and Christianity (Steiner et al. 2010). Although the effects of religion on wellbeing are generally reported as being positive, there are important counter observations. First, religion may also be a factor in producing negative wellbeing values. Ellis (1962) for example reports that excessive religion can produce depression and mental disorders. More recently, Mookerjee and Beron’s cross country analysis of the relation between religion and happiness concluded that contexts with high levels of religious fractionalisation produce relatively lower levels of happiness (Mookerjee and Beron 2005). Second, much of the literature draws conclusions on the effects of religion from studies that focus entirely on individual level processes. As such the context is overlooked. Some recent work has warned of the dangers of this approach arguing that positive individual level effects disappear when contextualised with a country’s overall level of religiosity (Eichhorn 2012). Third, it is important to acknowledge the bias in the literature towards religious experiences and contexts in the West with relatively little attention being paid to non-western contexts where the parameters of any discussion about religion and wellbeing may be radically different (Joshanloo 2013, 2014). Finally, most of the literature rests on an assumption about the direction of causality. Thus it is assumed that religion leads to happiness as opposed to happiness leading to religion.

1.2 Religion and Wellbeing in Bangladesh
Our research focuses on the relation between religion and wellbeing in Bangladesh, and as such contributes to the nascent scholarship focusing specifically on wellbeing in countries of the Global South (Diener et al. 2013; Shams 2016). Bangladesh is a particularly appropriate location in which to examine wellbeing dynamics since it throws up a number of wellbeing puzzles which all reflect different aspects of the Easterlin paradox (Easterlin 1974). Thus in the 1990s, Bangladesh reported higher levels of happiness than many other countries, including the UK, where people enjoy significantly larger per capita incomes and access to a wider range of basic services and goods (Worcester 1998). Since the 1990s, the country has made significant progress in reducing poverty and introducing socio-economic improvements (Devine and Wood 2017), and can be described as an international development success story. Despite this, however, levels of reported happiness seem to be declining (Asadullah and Chaudhury 2012). Improved living standards therefore seem to be having an impact upon the wellbeing expectations and demands of its citizens (Diener et al. 2013).

The early years of state formation in Bangladesh were anchored in a very clear commitment to secularism, and indeed early writings on religion, most notably Islam, emphasised its syncretic and malleable qualities (Uddin 2006). However since the early 1990s, a different expression of religion has emerged which has been described as neo-orthodox, militant, and extremist (Riaz 2004). These changes reflect deeper questions about what constitutes ‘proper’ Islam in Bangladesh and also what constitutes a ‘Muslim democracy’ (Devine and White 2013). The unresolved nature of these questions is etched visibly in the relations between the dominant Muslims and followers of other religions in the country. Muslims in Bangladesh constitute around 87% of the population. While the remaining 13% belong to a number of different religions, Hinduism is by far the largest minority religion in the country.

In Bangladesh religion is directly translated as dharma, a term which derives from the Sanskrit dhr meaning to sustain, support or uphold (Mahony 1987). However dharma means more than just ‘religion’, at least as understood in the West. Etymologically, dharma refers to the ‘proper cosmo-moral ordering’ of things (Inden 1985). In this sense, everything that exists, animate or inanimate, has its dharma. Even religion has its own dharma. The word dharma is also used in everyday speech to ask about one’s religion. So it is quite common in Bangladesh to ask: ‘apnar dharma ki?’, i.e. ‘what is your religion?’ The response to this question however reveals two things. First, it communicates a person’s religious affiliation. Second, the declaration of a religious affiliation or identity provides important implicit information on which social groups you belong to and can interact with; what practical lifestyle choices you can or cannot make; what constitutes appropriate behaviour and conduct; what aspirations you might have; who you can marry, what food you can eat, and so forth (Kotalova 1993). Dharma therefore is as much about everyday practical choices and opportunities as it is about religious affiliation.

There is very little literature on the relation between religion and wellbeing in Bangladesh. The founding research projects which inform this paper found statistically significant correlations between religion and happiness, especially among older respondents (Camfield et al. 2009). In more qualitative follow-up research, we identified a number of areas where the influence of religion on wellbeing comes to the fore, including the structuring of community relations (Devine and White 2013); responsibilities, obligations and expectations around marriage, gender and intergenerational relations (White 2012); and the development of political culture and democracy (Basu et al. 2017).

Recently, Asadullah and Chaudhury (2012) offer a very different argument. Analysing data from 2400 households across 12 districts in Bangladesh, the authors claim that neither religion nor gender had any significant impact on happiness. Instead they found that the influence of inter-personal relations and social trust on happiness was statistically significant. This opens up a new avenue into an equally under-researched area, i.e. trust. The only study on trust we could identify was Gupta et al.’s (2013) comparative analysis of the behaviour of Muslims and Hindus in Bangladesh and West Bengal in India. In the latter, Hindus constitute the majority group and Muslims the minority; whilst in the former, the opposite is true. In both sites, the authors found that identity based on status (i.e. being a member of the majority or minority group) rather than religion per se determined levels of trust and trustworthiness. We return to this finding in Sect. 3 below.

Our research makes three important contributions. First, to the best of our knowledge, the findings presented here are the first to quantitatively look at the impact of religious identity on people’s self-reported happiness in Bangladesh. Second, given that the analysis is anchored in Bangladesh our research offers an important contribution to a literature that is dominated by Western experiences and understandings of religion. Finally, the article contributes to the growing but still relatively thin literature on wellbeing and happiness in the Global South.

For the full paper see http://bit.ly/2FcGkGP

 

Russia: A Mercantilist Economy


Dr Nadia Vanteeva, Senior Lecturer in Economics at the Bristol Business School, gives this abstract from her latest research project.

The rapid Russian industrialization at the end of the 19th century took place behind high tariffs, protecting nascent firms against foreign competition; such firms also enjoyed protection against domestic competition through state-imposed entry restrictions. Furthermore, firms enjoyed state-subsidized capital loans, state-supported cartel pricing and wage controls. This led to the characterization of Tsarist industrialization policy as a classic example of List’s infant industry hypothesis. However, the new industries at the time were concentrated first in railroad construction, followed by iron and steel, coal mining and machine tools.

All of the above industries were chosen by the state for development and were also under its close governance. Under a comparative advantage hypothesis, none of the above capital-intensive industries were likely candidates for success, given Russia’s then economic and technological backwardness. Gerschenkron hypothesized that the motivation for the Tsarist industrialization plan was to provide industrial support to develop a modern military. If Gerschenkron’s hypothesis is correct, then direct state involvement in industrialization is not a temporary phenomenon, as is the case in many countries, but a more permanent feature of the Russian economic model. Thus the Tsarist industrial revolution may be a better example of a mercantilist economy spanning Russia’s large contiguous empire, in much the way described by Heckscher’s continental system. It might explain not only the peculiar emphasis in Russia on capital goods rather than consumer goods industries, but also where such industries may be located, and why some regions are more favoured for industrial development than others.

Degree Algorithms: Equity and Grade inflation


In his recent working paper Dave Allen highlights the substantial differences in the way that university degree calculations can be made.

The algorithms UK universities use to calculate a student’s final degree outcome can be complex and sometimes counter-intuitive; some commentators suggest that they have contributed to ‘classification inflation’ across the UK higher education sector. A less well understood concern is that the variety in algorithms potentially means the same set of marks can be awarded a different classification depending on what university the student attended.

The 17 questions and answers below aim to clarify the issue.

1.         Is grade inflation happening?
The recent HESA data on degree qualifications confirms a continued increase (or inflation) in the proportion of ‘good honours’ (1sts and 2:1s) being awarded – from 68% of all graduates in 2012/13 to 75% in 2016/17. Likewise, the proportion of firsts has increased from 18% to 26%. While the numbers cannot be disputed, the cause can be.

2.         Why is it occurring?
According to the Cambridge pro-vice chancellor for education, Professor Graham Virgo, grade inflation is not “a cause for concern” and is “down to tuition fees because students are more motivated and are working harder”. Alternatively, others such as Nick Hillman, director of the Higher Education Policy Institute, claim that “Universities are essentially massaging the figures, they are changing the algorithms and putting borderline candidates north of the border” (The Telegraph).

3.         Have there been changes to university degree algorithms?
There have been significant changes to degree algorithms in the last 10 years. The recent report by UUK-GuildHE (Understanding Degree Algorithms, Oct 2017) found that many HEIs had changed their algorithms – primarily to ensure internal consistency between departments and faculties, but also to achieve “competitor or sector alignment” (page 18). (See also the Higher Education Academy’s 2015 report.)

4.         What is a degree algorithm?
Degree algorithms describe the process universities use to translate module outcomes into a final degree classification (1st, 2:1, 2:2, 3rd). The algorithm software calculates the weighted average of the ‘counting’ modules; this average mark then determines the classification.

5.         What is a weighted mark?
Degrees are made up of a number of modules, which can carry different credit values (e.g. 10, 15, 20, 30 or 40 credits); students typically study 120 credits a year, or 360 credits in total. The weighted average takes account of these different module sizes (credit weightings). To calculate a weighted average, the module marks are first multiplied by their credits; these weighted values are then added together; finally, this total is divided by the total number of credits.
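As a minimal sketch of that calculation (the function and module marks below are made up for illustration):

```python
# Weighted average of a set of (mark, credits) module results.

def weighted_average(modules: list[tuple[float, int]]) -> float:
    total_weighted = sum(mark * credits for mark, credits in modules)
    total_credits = sum(credits for _, credits in modules)
    return total_weighted / total_credits

year_modules = [(68.0, 30), (55.0, 30), (72.0, 40), (61.0, 20)]  # 120 credits
print(round(weighted_average(year_modules), 2))  # 64.92
```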

6.         Are all algorithms the same?
While all UK universities adopt the same classifications, how universities arrive at these classifications is a very different matter. The variation comes in how the average of each ‘counting’ year is weighted and whether some module marks are ‘discounted’ or removed from  the calculation.

7.         What is a ‘counting’ year?
Simply those years of study included in the degree calculation. It is notable that most UK universities do not include year 1 marks in their algorithms – the focus is on years 2 and 3.

8.         Why is there a greater weight on year 3 studies?
The higher weighting given to year 3 marks captures the notion of the student’s ‘exit velocity’ or  the standard that the student is performing at as they graduate from university. Alternatively,  the higher weightings on year 3 might reflect a university’s requirement that programmes must become more challenging as students progress through them.

9.         Is each counting year weighted equally?
There is wide variation in the weightings applied to year 2 and 3 marks. This can range from 50/50 [Oxford Brookes] to 20/80 [Derby].

10.       How does the year weighting affect the degree mark?
This is best illustrated using an example. Assume the year 2 and 3 average marks are 64.38% and 69.00% respectively. If weighted equally [50/50] the combined average would be 66.69%; if weighted 20/80, this combined mark increases to 68.08% – an increase of 1.39 percentage points – all because a greater weight has been placed on the year 3 average mark. It follows that had the year 2 and 3 average marks been switched around, the 20/80 weighting would instead pull the combined average down, to 65.30%.
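A quick arithmetic check of this worked example (the small helper below is purely illustrative):

```python
# The same two yearly averages combined under different year weightings,
# reproducing the figures in the example above.

def combine(year2: float, year3: float, w2: float, w3: float) -> float:
    return round(w2 * year2 + w3 * year3, 2)

y2, y3 = 64.38, 69.00
print(combine(y2, y3, 0.5, 0.5))  # 66.69 (50/50 weighting)
print(combine(y2, y3, 0.2, 0.8))  # 68.08 (20/80 weighting)
print(combine(y3, y2, 0.2, 0.8))  # 65.3  (year 2 and 3 marks switched)
```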

11.       How does discounting a module affect the weighted average?
Discounting, or removing the lowest marks, can only improve the overall degree average. It follows also that “If only the worst, outlying marks are omitted, it is possible that this would lead to grade inflation” (UUK-GuildHE p.37).

Again, we can use a worked example to show the impact. From the previous example, if we exclude the lowest marks for 30 credits (in each year), the year 2 average becomes 69.17% (up from 64.38%) and the year 3 average becomes 70.83% (up from 69.00%). Applying the same weightings, the degree mark increases to 70.0% (50/50) or 70.5% (20/80) – the 2:1 is now a 1st. It follows that the differences between algorithms that discount and those that do not will become greater as the discounted module marks get lower.
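A sketch of the discounting step is given below. The module marks are hypothetical, so it illustrates the mechanism rather than reproducing the exact figures above (which depend on module-level marks not shown here).

```python
# Drop the lowest-marked modules up to a credit allowance, then take the
# weighted average. Assumes module credits fit the allowance exactly (as with
# 30-credit modules and a 30-credit allowance).

def weighted_average(modules):
    return sum(m * c for m, c in modules) / sum(c for _, c in modules)

def discount(modules, credits_to_drop):
    by_mark = sorted(modules, key=lambda m: m[0])   # lowest marks first
    dropped, kept = 0, []
    for mark, credits in by_mark:
        if dropped + credits <= credits_to_drop:
            dropped += credits                      # exclude this module
        else:
            kept.append((mark, credits))
    return kept

year2 = [(52.0, 30), (64.0, 30), (67.0, 30), (70.0, 30)]
print(round(weighted_average(year2), 2))                # 63.25 using all modules
print(round(weighted_average(discount(year2, 30)), 2))  # 67.0 after discounting 30 credits
```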

12.       How common is discounting?
Without some central ‘register of practice’, it is hard to say exactly. The UUK-GuildHE survey suggests that up to a third of those universities contacted use discounting. The survey also suggests that a large proportion of those universities that discount also apply differential weightings. The gradual shift towards discounting has probably been a significant driver behind grade/classification inflation.

13.       What is a borderline candidate?
Most algorithms take the degree ‘average’ to either one or two decimal places, e.g. 69.5% or 69.45%. This results in borderline marks where the exam board is called upon to determine what classification is awarded. There are various methods. One uses a simple rule whereby marks equal to or less than 0.5% below a classification boundary are awarded the higher classification ‘automatically’ and confirmed by the exam board (thus a 1st does not start at 70%, it starts at 69.5%). Alternatively, marks within a given band (e.g. 68.5%–69.49%) might be granted an ‘uplift’ in classification (e.g. from a 2:1 to a 1st) using the preponderance principle: a 1st could be awarded if the student has 60 credits in the higher classification band in their final year. Not surprisingly, these borderline adjustments can have a significant impact on an individual student’s classification and the overall profile for a given programme.
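A sketch of the simple 0.5% rule described above (the boundaries and margin vary between institutions, and the preponderance route is not modelled here):

```python
# Map a degree average to a classification, treating marks within 0.5
# percentage points below a boundary as reaching that boundary.

BOUNDARIES = [(70.0, "1st"), (60.0, "2:1"), (50.0, "2:2"), (40.0, "3rd")]

def classify(average: float, margin: float = 0.5) -> str:
    for boundary, label in BOUNDARIES:
        if average >= boundary - margin:
            return label
    return "Fail"

print(classify(69.50))  # '1st' -- the first effectively starts at 69.5%
print(classify(69.45))  # '2:1' -- outside the automatic band; the exam board may still uplift
```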

14.       How do the different weightings and discounting affect a university’s overall results?
The distribution of the degree classifications can vary significantly depending on the algorithm used. Figure 1 shows a simulation in which 6 different algorithms are applied to the same set of marks for 211 students. The first four algorithms (UNI[1] to UNI[4]) have different weights for each counting year (Y2 and Y3 only), ranging from 50/50 to 25/75; the fifth algorithm (UNI[5]) ‘discounts’ 20 credits from each year and uses a 25/75 weighting. For comparison, the sixth algorithm (UNI[6]) uses all years of study, equally weighted (which would be the outcome if the Grade Point Average (GPA) were applied – see below).

The impact is quite dramatic. In terms of the different weightings alone (UNI[1] to UNI[4]), the proportion of 1sts ranges from 16% to 23%. The difference increases significantly once discounting is applied (UNI[5]): from 16% up to 32%. In Figure 1 the proportion of students achieving a 2:2 (awarded where the average mark falls between 50% and 59%) also declines significantly, from 28.9% (UNI[1]) to 18% (UNI[5]). This simulation suggests a student’s post-university ‘life chances’ may be significantly dependent on how their chosen university determines their classification (all other things being equal).
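The shape of such a simulation is easy to sketch. The cohort, algorithm settings and resulting proportions below are synthetic and purely illustrative; they will not reproduce Figure 1.

```python
# Classify one synthetic cohort under several degree algorithms and compare
# the resulting classification profiles.

import random
from collections import Counter

random.seed(0)

def classify(average):
    if average >= 69.5: return "1st"
    if average >= 59.5: return "2:1"
    if average >= 49.5: return "2:2"
    return "3rd/other"

def degree_average(y2_marks, y3_marks, w2, w3, drop_lowest=0):
    def year_avg(marks):
        kept = sorted(marks)[drop_lowest:]   # discard the lowest n module marks
        return sum(kept) / len(kept)
    return w2 * year_avg(y2_marks) + w3 * year_avg(y3_marks)

# 211 synthetic students, six equally weighted modules per counting year
cohort = [([random.gauss(62, 8) for _ in range(6)],
           [random.gauss(64, 8) for _ in range(6)]) for _ in range(211)]

algorithms = {
    "50/50, no discount": dict(w2=0.50, w3=0.50, drop_lowest=0),
    "25/75, no discount": dict(w2=0.25, w3=0.75, drop_lowest=0),
    "25/75, discounting": dict(w2=0.25, w3=0.75, drop_lowest=1),
}

for name, params in algorithms.items():
    counts = Counter(classify(degree_average(y2, y3, **params)) for y2, y3 in cohort)
    shares = {label: f"{100 * n / len(cohort):.0f}%" for label, n in counts.items()}
    print(name, shares)
```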

15.       Do most students understand the degree algorithm that applies to them?
A good question. The simpler the algorithm, the more likely students are to understand its implications, if not use it to set personal academic targets. However, the truth is that many algorithms are very complex, and many use more than one rule to determine the degree classification. Here the interested reader might like to see a YouTube video posted by Sheffield University, in particular the comments below this video. It is also very likely that students do not take the degree algorithm into consideration when choosing a university.

16.       What is the bigger problem: grade inflation or equity?
While the national data shows significant increases in the proportion of 1sts and 2:1s being awarded, we cannot definitively say there has been ‘grade inflation’ – to determine this we would need the actual module marks. The increase in 1sts and 2:1s is likely to be a combination of students working harder and gradual changes in degree algorithms. What we can say, with some certainty, is that it is a concern that under the current system the same set of marks can result in such a wide range of degree outcomes. If equity and rigour are to be the hallmarks of UK higher education provision, these differences cannot be ignored or defended.

17.       What can be done about it?
If valid comparisons between students’ achievements are to be made, it follows that all universities should adopt the same algorithm when classifying degree outcomes. In this context the consistent use of the US GPA classification system (or similar) has clear benefits. Jonathan Wolff (professor of philosophy at University College London) accepts that adopting the GPA is a “move in the right direction” but also takes the view that “we should simply issue students with transcripts to record their study, and leave it at that” (Guardian). This is a laudable idea but one which students (and employers) might find difficult to accommodate.

Allen, D. O. (2018) Degree algorithms, grade inflation and equity: the UK higher education sector, Bristol Centre for Economics and Finance, Working Paper 1803.

 

Are we heading for another economic crash?


Dr Susan Newman, Senior Lecturer in Economics, is interviewed by State of Nature

State of Nature, the blog dedicated to interviews with leading thinkers in social and political theory, interviewed Dr Newman in January. Here she responds to the question: “Are we heading for another economic crash?”

I was approached by the editors of State of Nature to contribute to their monthly series “One Question”. I was asked to join thinkers such as Wolfgang Streeck, David Kotz, Mary Mellor and Richard Murphy, amongst others, to contribute a 300-word response to the question, “Are we heading for another economic crash?”. In doing so, I was able to highlight some of the excellent work conducted by my colleagues at UWE as part of my contribution, which is reproduced below. The full set of responses by international thinkers can be found here.

We are heading for another economic crash because the underlying conditions that brought about the financial crisis of 2007-8 remain. The post-crisis slump saw the restructuring of capital, aided by government and central bank policies, in order to restore profitability and the incomes and wealth of the 1%, premised upon fictitious accumulation.

Speculative finance continues to dominate economic activities in the advanced capitalist economies. Corporate profits, personal wealth, pension provision and food prices continue to be tied to the vagaries of finance. The IMF’s growth projections for 2018 recognise that modest growth will be driven by financial markets with little impact on real investment, job creation, productivity or wages. The stock market capitalisation to GDP ratio is higher than at any time except the eve of the dot-com bust in 2000, indicating the disconnection between financial investment and productive activities. In spite of Basel III, the financial system continues to be characterised by high leveraging and global interconnectedness owing to the rise of the shadow banking system.

Austerity in the UK since 2010 has created new trigger points for crises. Personal debt in the UK has reached alarming and unsustainable levels, in excess of £200bn. Welfare cuts, stagnant wages and the deterioration of employment contracts have meant that low income families in the UK have had to borrow for basic day-to-day expenditures. One can expect many more cracks in the system in which the next crisis will emerge. However, rather than trying to predict the timing or origins of impending crises, efforts would be more productively oriented towards radical change of the economic system. Reforms such as those that supported the Golden Age could help temper some of the deadliest side effects of capitalist growth. But in the long run we need to treat those side effects as the main goals for society: for each of us to reach our full potential and live in material comfort, free from alienation from each other and our environment.

 
