You can’t always get what you want, a young man once sang. It is a simple aphorism, but one worth remembering. In 2016, Boris Johnson was widely – and rightly – ridiculed for stating that “our policy is having our cake and eating it.” It was a dishonest refusal to acknowledge that the Brexit referendum forced the UK government to make some painful choices. But it is not always easy to know when Mick Jagger’s maxim applies.
Consider the question of whether algorithms make fair decisions. In 2016, a group of ProPublica reporters led by Julia Angwin published an article titled “Machine Bias”. It was the result of more than a year of research into an algorithm called Compas, which was widely used in the US justice system to inform recommendations about parole, pre-trial detention and sentencing. Angwin’s team concluded that Compas was far more likely to rate white defendants as lower risk than black defendants. Moreover, black defendants were twice as likely to be rated higher risk yet not go on to reoffend, while white defendants were twice as likely to be charged with new crimes after being classified as low risk.
This seems bad. But Northpointe, the maker of Compas, countered that black and white defendants who receive a risk score of, say, 3 have the same chance of being re-arrested. The same is true of black and white defendants with a risk score of 7, or any other score. Risk scores, in other words, mean the same thing regardless of race.
Shortly after ProPublica and Northpointe traded findings, rebuttals and counter-rebuttals, several groups of scholars published papers reaching a simple but surprising conclusion: there are several different definitions of what it means to be “fair” or “unbiased”, and it is mathematically impossible to be fair in all these ways at once. An algorithm can satisfy ProPublica’s definition of fairness, or Northpointe’s, but not both.
Here are Corbett-Davies, Pierson, Feller and Goel: “It’s actually impossible for a risk score to satisfy both fairness criteria at the same time.”
Or Kleinberg, Mullainathan and Raghavan: “We formalize three fairness conditions . . . and we prove that, except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously.”
It’s not just about algorithms. Whether parole decisions are made by human judges, robots or dart-throwing chimpanzees, the same remorseless arithmetic applies.
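That arithmetic can be made concrete with a toy calculation (the numbers below are invented for illustration; they are not ProPublica’s or Northpointe’s, and the function name is my own):

```python
def fpr_under_calibration(base_rate, p_reoffend_high=0.7, p_reoffend_low=0.3):
    """False-positive rate forced on a group by a *calibrated* binary score.

    Calibration here means the score carries the same meaning in every
    group: 70% of people labelled high risk reoffend, and 30% of those
    labelled low risk do. The share labelled high risk, h, is then pinned
    down by the group's base rate of reoffending:
        p_reoffend_high * h + p_reoffend_low * (1 - h) = base_rate
    """
    h = (base_rate - p_reoffend_low) / (p_reoffend_high - p_reoffend_low)
    # False positives = non-reoffenders who were nonetheless labelled high risk.
    return h * (1 - p_reoffend_high) / (1 - base_rate)

# Same score, same meaning in both groups - but different base rates:
print(fpr_under_calibration(0.6))  # group with a 60% base rate -> 0.5625
print(fpr_under_calibration(0.4))  # group with a 40% base rate -> 0.125
```

If one group has a higher underlying re-arrest rate, a score that means the same thing for everyone (Northpointe’s fairness) must flag a larger share of that group’s non-reoffenders as high risk (ProPublica’s unfairness). No redesign of the algorithm escapes this.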
Algorithmic decision-making deserves more scrutiny and less faith in its life-changing magic, so ProPublica’s analysis has been invaluable in shedding light on the automation of high-stakes judgments. But if we want to improve algorithmic decision-making, we need to keep Jagger’s aphorism in mind. These decisions cannot be “fair” in every possible sense. Where we cannot have it all, we will have to choose what really matters.
Agonising choices are, of course, the bread and butter of economics. One particular type seems to fascinate economists: the “impossible trinity”. The pithiest of all impossible trinities will be familiar to fans of Armistead Maupin’s More Tales of the City (1980). It is Mona’s Law: you can have a hot job, a sexy lover and a cool apartment, but you can’t have all three at once.
In economics, impossible trinities are more prosaic. The best known holds that you may want a fixed exchange rate, free movement of capital across borders and an independent monetary policy, but you must settle for at most two of the three. Another, coined by the economist Dani Rodrik, is more informal: you can set rules at the national level, you can be deeply economically integrated, or you can let democratic politics determine policy, but you cannot do all three. An economically integrated national technocracy is possible; so is democratic politics at a supranational level. If you like neither, you will need to place limits on economic globalisation.
Like Mona’s Law, these impossible trinities are more like rules of thumb than mathematical proofs. There may be exceptions, but don’t get your hopes up.
Mathematicians refer to such conclusions as “impossibility proofs” or simply “impossibility results”. Some are elementary: we will never find the largest prime number, because there is no largest prime; nor can we express the square root of two as a fraction.
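The first of those elementary results can even be run. Euclid’s classic argument, sketched here in Python (a toy illustration of the proof, not something from the column), turns any supposedly complete list of primes into a prime the list missed:

```python
from math import prod

def prime_outside(primes):
    """Given any finite list of primes, produce a prime not in the list.

    Euclid's argument: the product of the listed primes, plus one, leaves
    remainder 1 when divided by any of them. So its smallest prime factor
    cannot be on the list - it is a new prime.
    """
    n = prod(primes) + 1
    d = 2
    while n % d:       # find the smallest prime factor of n by trial division
        d += 1
    return d

print(prime_outside([2, 3, 5, 7]))  # 211, since 2*3*5*7 + 1 = 211 is prime
```

Whatever finite list you supply, the function returns a prime outside it, which is precisely why no largest prime can exist.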
Others are deeper and more breathtaking. Perhaps the most profound is Gödel’s incompleteness theorem, which demonstrated in 1931 that any consistent mathematical system rich enough to describe arithmetic contains true statements that cannot be proven within it. Mathematics, then, can never be finished, and the legions of mathematicians who had tried to develop a complete, consistent system of mathematics had been wasting their time. At the end of the seminar at which Gödel detonated this intellectual bomb, the great John von Neumann remarked succinctly: “It’s all over.”
No one likes to be told that they can’t have everything, but a bitter truth is more useful than a comforting lie – a lesson Liz Truss has lately been taught. Perhaps she has finally learned it. It is important to understand when something is impossible. That truth frees us from the fruitless attempt to always get what we want, and allows us to focus instead on getting what we need.
Written and first published in Financial Times October 28, 2022