I woke up this morning to the sound of Radio 4 telling me that Cancer Research UK had done an analysis showing that a 20% tax on sugary drinks could reduce the number of obese people in the UK by 3.7 million by 2025. (That could be the start of the world’s worst ever blues song, but it isn’t.)
My first thought was that this was rather surprising, as I wasn’t aware of any evidence on how sugar taxes impact on obesity. So I went hunting for the report with interest.
Bizarrely, Cancer Research UK didn’t link to the full report from their press release (once you’ve read the rest of this post, you may conclude that perhaps they were too embarrassed to let anyone see it), but I tracked it down here. Well, I’m not sure even that is the full report: it describes itself as a “technical summary”, which makes me wonder whether the full report exists somewhere else. Either way, that’s all that seems to be made publicly available.
There are a number of problems with this report. Christopher Snowdon has blogged about some of them here, but I want to focus on the extent to which the model is based on untested assumptions.
It turns out that the conclusions were indeed not based on any empirical data about how a sugar tax would impact on obesity, but on a modelling study. This study made assumptions about several things, principally the following:
- The price elasticity of demand for sugary drinks (ie the extent to which an increase in price reduces consumption)
- The extent to which a reduction in sugary drink consumption would reduce total calorie intake
- The effect of total calorie intake on body mass
The authors get 0/10 for transparent reporting for the first of those, as they don’t actually say what price elasticity they used. That’s pretty basic stuff, and not to report it is somewhat akin to reporting the results of a clinical trial of a new drug and not saying what dose of the drug you used.
However, the report does give a reference for their price elasticity data, namely this paper. I must say I don’t find the methods of that paper easy to follow. It’s not at all clear to me whether the price elasticities they calculated were actually based on empirical data or themselves the results of a modelling exercise. But the data that are used in that paper come from the period 2008 to 2010, when the UK was in the depths of recession, and when it might be hypothesised that price elasticities were greater than in more economically buoyant times. They don’t give a single figure for price elasticity, but a range of 0.8 to 0.9. In other words, a 20% increase in the price of sugary drinks would be expected to lead to a 16-18% decrease in the quantity that consumers buy. At least in the depths of the worst recession since the 1930s.
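For what it’s worth, that arithmetic is trivial to check. Here’s a quick sketch in Python (the 20% price rise and the 0.8–0.9 elasticity range come from that paper; the assumptions that the tax is fully passed on to prices and that elasticity is constant are my own simplifications):

```python
# Back-of-envelope check: with a constant price elasticity of demand,
# a price rise of p cuts consumption by roughly elasticity * p.

PRICE_RISE = 0.20  # the proposed 20% tax, assumed fully passed on to prices

for elasticity in (0.8, 0.9):  # the range cited by the report's source
    fall = elasticity * PRICE_RISE
    print(f"elasticity {elasticity}: consumption falls by about {fall:.0%}")

# elasticity 0.8: consumption falls by about 16%
# elasticity 0.9: consumption falls by about 18%
```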
That figure for price elasticity is a crucial input to the model, and if it is wrong, then the model’s answers will be wrong too.
The next input is the extent to which a reduction in sugary drink consumption reduces total calorie intake. Here, an assumption is made that total calorie intake falls by 60% of the calories not consumed in sugary drinks. Or in other words, that if you forgo the calories of a sugary drink, you make up only 40% of those calories from elsewhere.
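To make that concrete, here’s a toy calculation. The 60% figure is the report’s; the 140 kcal for a 330 ml can of regular cola is just my own round number for illustration:

```python
# Net calorie saving under the report's 60% substitution assumption:
# only 60% of the calories forgone from sugary drinks count towards a
# deficit, because the other 40% are assumed to be eaten elsewhere.

SUBSTITUTION = 0.6  # report's assumption: fraction of forgone calories not replaced

drink_kcal = 140    # illustrative: roughly one 330 ml can of regular cola
net_saving = SUBSTITUTION * drink_kcal
print(f"Skipping a {drink_kcal} kcal drink cuts net intake by {net_saving:.0f} kcal")
# Skipping a 140 kcal drink cuts net intake by 84 kcal
```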
Where does that 60% figure come from? Well, they give a reference to this paper. And how did that paper arrive at the 60% figure? Well, they in turn give a reference to this paper. And where did that get it from? As far as I can tell, it didn’t get it from anywhere, though I note it reports the results of a clinical study in people trying to lose weight by dieting. Even if that 60% figure is based on actual data from that study, rather than just plucked out of thin air, I very much doubt that data on calorie substitution taken from people trying to lose weight would be applicable to the general population.
What about the third assumption, the weight loss effects of reduced calorie intake? We are told that reducing energy intake by 100 kJ per day results in 1 kg of body weight loss. The citation given for that information is this study, which is another modelling study. Are none of the assumptions in this study based on actual empirical data?
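To get a feel for how the three assumptions chain together, here’s a back-of-envelope sketch. To be clear, this is my own toy calculation, not the authors’ microsimulation, and the baseline intake of 200 kcal per day from sugary drinks is purely illustrative:

```python
# Rough end-to-end chain of the model's three key assumptions.
# Inputs not cited in the report are illustrative guesses.

KCAL_TO_KJ = 4.184        # standard conversion
KJ_PER_DAY_PER_KG = 100   # report's assumption: 100 kJ/day cut -> 1 kg lost

elasticity = 0.85         # midpoint of the cited 0.8-0.9 range
price_rise = 0.20         # the proposed 20% tax
baseline_kcal = 200       # illustrative daily intake from sugary drinks
substitution = 0.6        # report's assumption: 60% of cut calories not replaced

kcal_cut = elasticity * price_rise * baseline_kcal * substitution
weight_loss_kg = kcal_cut * KCAL_TO_KJ / KJ_PER_DAY_PER_KG

print(f"Net cut: {kcal_cut:.0f} kcal/day -> predicted loss {weight_loss_kg:.1f} kg")
# Net cut: 20 kcal/day -> predicted loss 0.9 kg
```

Every number in that chain is uncertain, which is exactly why you would want to know how much the output moves when the inputs do.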
A really basic part of making predictions by mathematical modelling is to use sensitivity analyses. The model is based on various assumptions, and sensitivity analyses answer the question of what happens if those assumptions are wrong. Typically, the inputs to the model are varied over plausible ranges, and then you can see how the results are affected.
Unfortunately, no sensitivity analysis was done. This, folks, is real amateur hour stuff. The reason for the lack of sensitivity analysis is given in the report as follows:
“it was beyond the scope of this project to include an extensive sensitivity analysis. The microsimulation model is complex involving many thousands of calculations; therefore sensitivity analysis would require many thousands of consecutive runs using super computers to undertake this within a realistic time scale.”
That has to be one of the lamest excuses for shoddy methods I’ve seen in a long time. This is 2016. You don’t have to run the analysis on your ZX Spectrum.
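To show just how undemanding this is, here’s a crude Monte Carlo sensitivity analysis of the toy chain above. It’s a sketch of the general technique rather than of their microsimulation, and the ranges are my own guesses, but ten thousand runs complete in a fraction of a second on an ordinary laptop:

```python
# Crude Monte Carlo sensitivity analysis: vary the three key inputs
# over plausible ranges and look at the spread of predicted weight loss.
import random

KCAL_TO_KJ = 4.184
BASELINE_KCAL = 200  # illustrative daily intake from sugary drinks, as above

def predicted_loss_kg(elasticity, substitution, kj_per_day_per_kg):
    kcal_cut = elasticity * 0.20 * BASELINE_KCAL * substitution
    return kcal_cut * KCAL_TO_KJ / kj_per_day_per_kg

random.seed(1)
results = sorted(
    predicted_loss_kg(
        random.uniform(0.4, 1.0),   # elasticity: the recession-era figure may be too high
        random.uniform(0.3, 0.8),   # substitution: vary the shaky 60% widely
        random.uniform(80, 120),    # kJ/day per kg: vary the modelled conversion
    )
    for _ in range(10_000)
)
print(f"median {results[5000]:.2f} kg, "
      f"90% interval {results[500]:.2f} to {results[9500]:.2f} kg")
```

The spread you get across plausible inputs is exactly the kind of thing readers of the report deserve to be shown.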
So this result is based on a bunch of heroic assumptions which have little basis in reality, and the sensitivity of the model to those assumptions was not tested. Forgive me if I’m not convinced.