With student loan repayments back at the centre of public debate, it is timely to reflect on why the Retail Prices Index (RPI), officially discredited since 2013 yet still supported by many, continues to play a role in public policy and financial contracts – and why its use remains so controversial.
The RPI used to be a respectable and respected measure of inflation. But a combination of unexpected consequences, confusion and some mismanagement has brought it to a sorry state. It has almost certainly overestimated inflation since 2010, although a number of the other criticisms made of it are misguided.
The embedding of RPI and the debate over the “Carli” formula
The RPI was first introduced in 1956, and its prime use was originally expected to be in wage negotiations. Over time, it spread into many other areas: uprating of pensions, tax thresholds, travel fares, business contracts and many others. A derivative, RPIX, was used as the inflation target when inflation targeting became central to economic management from 1992.
No price index is perfect. Each is based on decisions about what to include and how to calculate price changes. Problems arise when those decisions no longer reflect appropriate practice or the way the index is being used.
A key issue has been the RPI’s use, for certain items and at one stage in the calculation, of a method known as the Carli formula – an arithmetic mean of price relatives, comparing each item’s price to its price in the annual base month of January. When items are highly variable, an arithmetic mean tends to be overly influenced by the highest values in the group.
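A short sketch can make the point concrete. The numbers below are invented for illustration, not real index data: the Carli formula is simply the arithmetic mean of the price relatives, so a single item with a large relative pulls the whole result upwards.

```python
# The Carli formula is the arithmetic mean of price relatives p_current / p_base.
# All prices below are invented for illustration, not real index data.
base_prices = [10.0, 20.0, 5.0]      # prices in the January base month
current_prices = [11.0, 21.0, 10.0]  # current prices; the last item has doubled

relatives = [c / b for c, b in zip(current_prices, base_prices)]
# relatives = [1.10, 1.05, 2.00]
carli = sum(relatives) / len(relatives)
print(round(carli, 3))  # 1.383: the single doubled item dominates the mean
```

Two items rose by only 10% and 5%, yet the doubling of the third drags the Carli result up to 38.3%, far above what most of the basket experienced.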
HICP, the CPI and diverging results in a changing world
In the 1990s, EU statisticians developed the Harmonised Index of Consumer Prices (HICP), designed to ensure comparability across countries and, subsequently, to serve as the inflation target for the European Central Bank. HICPs were not intended for uprating purposes: they exclude mortgage interest payments, measure insurance net of claims paid, and to this day omit owner-occupiers' housing costs. The RPI, by contrast, was aimed at measuring inflation as experienced by most households.
When the UK HICP – now known as the CPI – was first published, it produced generally lower inflation than the RPI, with an average gap of around 0.5 percentage points. A key reason was its use of a geometric mean (Jevons) at the stage where the RPI uses Carli. Mathematically, by the inequality of arithmetic and geometric means, Carli always gives a result at least as high as Jevons.
Then, in 2010, changes were made to clothing price collection, intended to correct an underestimation of clothing inflation in the CPI. They achieved that, but the statisticians did not anticipate that one change in particular would sharply increase the variability of price relatives and thus, combined with the use of Carli, push up RPI inflation. Disastrously, the change was considered so minor that it was not tested before implementation. The average gap between the two indices widened to nearly one percentage point.
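The interaction between the two formulas and the variability of price relatives can be sketched as follows. The price relatives are invented for illustration only, but they show the general pattern: Carli is never below Jevons, and the gap between them widens sharply as the relatives become more dispersed.

```python
from math import prod

def carli(relatives):
    # Carli: arithmetic mean of price relatives
    return sum(relatives) / len(relatives)

def jevons(relatives):
    # Jevons: geometric mean of price relatives
    return prod(relatives) ** (1 / len(relatives))

# Invented price relatives, for illustration only.
low_dispersion = [1.02, 1.03, 1.04]
high_dispersion = [0.70, 1.03, 1.45]  # similar centre, far more spread

for rels in (low_dispersion, high_dispersion):
    gap = carli(rels) - jevons(rels)
    print(f"Carli={carli(rels):.4f}  Jevons={jevons(rels):.4f}  gap={gap:.4f}")
# Carli is never below Jevons (the AM-GM inequality); with the tightly
# clustered relatives the gap is negligible, with the dispersed ones it
# amounts to several percentage points.
```

This is the mechanism at work in 2010: the clothing changes did not alter the formulas, but by increasing the dispersion of price relatives they widened the Carli–Jevons gap, and hence the RPI–CPI gap.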
RPI falls out of favour
This became more significant when, also in 2010, the government switched the uprating of public sector pensions and benefits from RPI to CPI. The differences began to affect both government finances and people's incomes. The ONS, unwilling to reverse the clothing changes since they had improved CPI measurement, tested alternatives but felt none of them helped. In 2013 the RPI lost its National Statistic (Accredited) status — the kitemark indicating certain quality standards. Even so, it remains widely used, including for index-linked government bonds, many defined benefit pensions and student loan interest rates.
Debate continued, with RPI remaining officially disparaged by many but also having many advocates. There was confusion about what had actually happened in 2010, and confusion about the Carli formula itself, with some conflating the version used by the ONS with a different version that inherently tends to give biased results.
The ONS's hands are tied by a 1981 government decision
Underlying this confusion is a decision the government made in 1981, when it introduced index-linked gilts—government bonds that rise in value with the RPI. To reassure investors that RPI would not be manipulated to their disadvantage, the government added a protection clause. It said that if RPI calculation ever changed in a way the Bank of England judged both “fundamental” and “materially detrimental” to investors, those investors could demand to be repaid immediately – which could cause a financing crisis for the Treasury.
This clause was included in every index-linked gilt issued until 2002. It was reinforced by the Statistics and Registration Service Act 2007, which made clear that while pre-2002 gilts remained outstanding, any such change required the consent of the Chancellor of the Exchequer.
The clause may have seemed reasonable at the time, but it created a problem: it effectively introduced a one-way ratchet on RPI inflation. Changes that make RPI inflation higher are allowed, but not changes that lower it, which has hampered attempts to correct the post-2010 overestimation. This shows how hard it is to update a statistical measure once it becomes embedded in laws and long-term contracts. A decision that once felt minor can end up shaping policy for decades.
Because the last gilt with this clause remains in the market until 2030, ONS’s hands are tied until then.
Changes from 2030: a fix for the problem?
In 2019, the National Statistician and the UK Statistics Authority Board announced that, from 2030, the RPI will be calculated in the same way as CPIH – the CPI plus a measure of owner-occupiers' housing costs and council tax. However, CPI and, by extension, CPIH are designed primarily for monetary policy and international comparisons and, while well suited for that, they are less good at measuring inflation actually experienced by households. The proposal was (and is) not universally supported, but survived a legal challenge from the trustees of some major pension funds. From 2030 the RPI will therefore no longer be a "household" index.
The Household Costs Indices, currently under development by the ONS, can fill that gap. Based, as far as practical, on what households actually pay, they allow comparisons by income level, housing tenure and other characteristics — distinctions that became very visible during the 2021–23 cost of living crisis. Unlike CPI and CPIH, they are not skewed towards the experience of higher-income households. Like the RPI, they include mortgage interest payments and fully weight insurance premiums. They also include other interest payments and student loan repayments, although coverage of the last of these is not yet complete.
Lessons for the future
Meanwhile, the government uses both RPI and CPI for different purposes, without any consistent logic to its choices. The RSS has long argued that government must not "inflation shop" – choosing higher measures when calculating money it receives and lower ones when calculating what it pays out. The choice of measure should reflect both its quality and what it was designed to capture.
As 2030 approaches, clarity is essential. Securing Accredited status for the Household Costs Indices would strengthen confidence that the UK has both robust national inflation measures and reliable statistics that reflect household experiences.
The history of the RPI shows that debate about measurement is normal – inflation is complex. What matters is clarity about purpose, openness about methods and strong governance. The ONS has learned the lesson that changes must be tested before implementation. But the circumstances and misunderstanding that led to the 2010 change must not be allowed to recur. And, finally, the 1981 decision, well-intentioned as it may have been, shows why statistics need to be independent of government.