> Rules, we tell ourselves, are made to be broken. When strict application of the rule produces a silly outcome, we prefer to bend the rule rather than enforce the silly outcome. A rule which could cope with every exception and every special circumstance would be so complex and incomprehensible that it couldn’t in practice work as a rule at all. And so we muddle through.
>
> Leeway is the only way we manage to live together: We ignore what isn’t our business. We cut one another some slack. We forgive one another when we transgress.
>
> By bending the rules we’re not violating fairness. The equal and blind application of rules is a bureaucracy’s idea of fairness. Judiciously granting leeway is what fairness is all about. Fairness comes in dealing with the exceptions.
>
> And there will always be exceptions because rules are imposed on an unruly reality. The analog world is continuous. It has no edges and barely has corners. Rules at best work pretty well. That’s why in the analog world we have a variety of judges, arbiters, and referees to settle issues fairly when smudgy reality outstrips clear rules.
It’s a concept I have found useful in all sorts of contexts since I first came across it more than ten years ago, but while I have referred to it in passing a few times, I have never written about it directly. That feels like a gap overdue for filling.
At first glance leeway may seem a charmingly harmless idea. But in fact it is deeply subversive. It applies in all sorts of contexts, as Weinberger’s own examples make clear, but there is a very obvious set of issues around automated systems and services, which have a default tendency to be highly rigid. The need for leeway suggests that we should give careful thought to where the safety valves need to be, and how they should operate.[1]
The best rules are simple and explicit. The best way of applying rules is with an element of judgement about the context. Computers (and, for different reasons, bureaucrats) are good at the first part, rather less so at the second. Computerised bureaucrats (who can be found far beyond the public sector) are a class of their own. So there is a dilemma. We can try to create a system which is perfectly rule-bound, where total fairness is ensured by the complete absence of discretion – but that complete fairness is almost certain to look (and be) unfair in a whole range of difficult edge cases. Or we can try to create a system based on the application of principles and judgement, where fairness is ensured by tailoring decisions to precise circumstances – but that fairness is almost certain to result in similar cases getting dissimilar outcomes. That dilemma does not just apply at the level of individual entitlements and obligations. It – or something very like it – also applies in broader collective decision making. We demand that service provision should be tailored to local needs and circumstances, but decry the postcode lottery.
Computerisation tends to make all this worse, for two big reasons. The first is that humans become interface devices, not autonomous agents, not able to offer leeway even if they want to (indeed, preventing them from doing so may be part of the point). That’s not limited to government, of course, as anybody who has done battle over a mobile phone contract or a dodgy gas bill knows. The second is that computerised rules need to be computable. Binary conditions are easier to code than fine assessments. More subtly, the act of computerisation can be a prompt to ‘simplify’ systems in ways which risk creating much cruder boundaries, so exacerbating the first problem.
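The contrast between binary conditions and finer judgement can be made concrete. Here is a minimal sketch – the `Claim` fields, the thresholds, and the 10% margin are all invented for illustration, not drawn from any real system – of a rigid eligibility rule alongside a version with a built-in escape valve, which refers borderline cases to a human rather than auto-rejecting them:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    income: float
    savings: float
    note: str = ""  # free-text context a purely binary rule cannot weigh

def rigid_decision(claim: Claim) -> str:
    # Binary conditions: trivially easy to code, entirely blind to context.
    if claim.income < 12_000 and claim.savings < 6_000:
        return "approve"
    return "reject"

def decision_with_leeway(claim: Claim, margin: float = 0.1) -> str:
    # Same core rule, but cases near the boundary are not auto-rejected:
    # they are routed to a human who can exercise judgement at the margin.
    if claim.income < 12_000 and claim.savings < 6_000:
        return "approve"
    near_income = claim.income < 12_000 * (1 + margin)
    near_savings = claim.savings < 6_000 * (1 + margin)
    if near_income and near_savings:
        return "refer to human reviewer"
    return "reject"

borderline = Claim(income=12_100, savings=5_000,
                   note="income spike from one-off overtime")
print(rigid_decision(borderline))        # the rigid rule rejects outright
print(decision_with_leeway(borderline))  # the escape valve defers to judgement
```

The point is not the particular margin, which is as arbitrary as the threshold itself, but the structural choice: the second function encodes, in the system itself, an admission that the rule’s edges are approximate.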
> We thought the future would be flying cars but it's actually arguing with a motion sensor about whether or not your hands are in the sink.
>
> — Donna Dickens (@MildlyAmused) April 8, 2015
Computers crystallise and may exacerbate the problem, but they do not create it. Without needing to plumb the depths of Gödel’s incompleteness theorems, we can see that the problem of ill-fitting rules cannot be dealt with by endlessly refining the rules. Refinement doesn’t drive the problem out; it drives fractally increasing detail, which blurs the idea of there being rules in the first place. Or as Jay Stanley puts it in a recent blog post:
> No matter how detailed a set of rules is laid out, no matter how comprehensive the attempt to deal with every contingency, in the real world circumstances will arise that will break that ruleset. Applied to such circumstances the rules will be indeterminate and/or self-contradictory.
One obvious response to that is to head in the other direction and attempt to simplify the rules. But however obvious, that approach is unlikely to work either, because it is trying to solve the wrong problem: there is no reason to think that reducing the number of rules will reduce the number of cases for which the rules are not a good fit. On the contrary, it means that more people will get rougher justice.[2]
So we come back to leeway, being careful to follow Weinberger’s approach to what it does and doesn’t mean. Leeway doesn’t mean that there are no rules or that some people are entitled to ignore the rules;[3] it means that at the margin it may be more important to respect the spirit of a rule than the letter. That leads us to some very familiar systems. As Stanley summarises it:
> So far the best that humans have come up with is what might be described as “guided discretion.” First, judges must work within the core currents of the law, but apply their own judgment at the margins. Second, such discretion must be subject to review and appeal, which, while still vulnerable to mass delusions and prejudices such as racism, at least smooths over individual idiosyncrasies to minimize the chances of unpredictably quirky rulings.
But we don’t need to depend on imagery drawing on the full panoply (and expense) of the judicial machine. The same principles can be applied more prosaically, where even bureaucracies can show virtue:
> Bureaucracies often have something that computers do not: logical escape valves. When the inevitable cases arise that break the logic of the bureaucratic machine, these escape valves can provide crucial relief from its heartless and implacable nature. Every voicemail system needs the option to press zero. Escape valves may take the form of appeals processes, or higher-level administrators who are empowered to make exceptions to the rules, or evolved cultural practices within an organization. Sometimes they might consist of nothing more than individual clerks who have the freedom to fix dumb results by breaking the rules. In some cases this is perceived as a failure—after all, making an exception to a rule in order to treat an individual fairly diminishes the qualities of predictability and control that make a bureaucratic machine so valuable to those at the top. And these pockets of discretion can also leave room for bad results such as racial discrimination. But overall they rescue bureaucracies from being completely mindless, in a way that computers cannot be (at least yet).
There are many ways of testing whether systems are appropriate and effective. The possibility of leeway is not enough to rescue a bad system. But the absence of leeway is a strong indicator that the system as a whole may be in need of improvement.
1. That may mean creating flexibility in the rules or their application within the core system, but it is at least as likely to mean ensuring that there is a way of breaking out of the system where circumstances require it.
2. And defining justice as the result of applying an algorithm, however fine its distinctions, doesn’t solve the problem at all; it merely provides a way of relying on ostensibly neutral authority.
3. That’s not to say that there is no unfair discrimination in the application of rules and norms. Of course there often is, and that’s a very important reason for having the rule before the leeway – but it’s not a reason to think that things would be better with no leeway.