Working In Uncertainty
Matthew Leitch column: The Campaign for Plain Maths starts here
by Matthew Leitch, first published 2005.
(This article first appeared under the title ‘The Matthew Leitch Column: The Campaign for Plain Math starts here’ in Emerald Insight's publication ‘The Journal of Risk Finance incorporating Balance Sheet’, volume 6 number 3, 2005.)
In a past column I argued that using risk quantification in more areas of management would be a good thing. In this column it's time to look at one of the biggest barriers to doing that, which is the extra mathematics involved.
It's not the computations required; a computer does those. It's the understanding.
When business-people do not understand a quantitative method they naturally distrust it. When their mathematical specialists cannot explain the method to their satisfaction they distrust the results even more. Potentially useful software, data, and results are ignored because of this lack of trust.
The scope for improvement
Is this inevitable? Maths is hard, but could a higher proportion of managers than at present come to understand some risk maths?
I have been looking at samples of mathematical writing in articles and books, and collecting advice on writing mathematics clearly (usually written by maths professors for their students).
The experience has been a real eye opener and points to massive scope for clearer, more interesting communication of mathematical ideas and techniques.
This should be no surprise. After all, a lot of ordinary writing is unclear and the inherently difficult nature of many mathematical ideas means that even slight confusion can block a reader completely. We should expect improvements to mathematical writing to make a dramatic difference to the number of people who can read it.
Despite this, until a few years ago I believed (like many people) that I couldn't understand advanced mathematics because I was ignorant and stupid.
The myth that maths is clear
I believed that mathematics is inherently clear and unambiguous. It is not, and the myth was exploded in the early 1980s by the development of a mathematical style of software specification known as Z (pronounced ‘zed’).
Proponents of Z claimed that using mathematics to specify software would improve the clarity of specifications and produce more reliable software. It does. However, to make good on their claims they had to solve some fundamental problems in writing mathematics.
First, they wanted specifications to be readable by a computer, so that they could be checked and, perhaps, turned into software automatically. Ordinary mathematical writing fails dismally on this so they introduced strict rules on ‘types’, invented ways to make it clear when a variable was defined and when it was not, and raised the level of rigour in other ways too.
Second, they wanted their specifications to be readable by ordinary humans, not just the authors of the specifications. They designed a format for explaining models that involved interleaving plain English explanations with their mathematical equivalent, self-explanatory variable names, and gradual introduction of complexity.
Z works, and it has been adopted more widely than other ‘formal’ methods. It combines improved rigour and accessibility.
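To give a flavour of the format, here is a plain-text rendering of the classic BirthdayBook schema from Spivey's Z reference manual. The symbols are simplified for this column (P stands for ‘set of’, -+-> for a partial function), and the interleaved comments show the plain-English explanation that Z authors are expected to supply alongside the formal text:

```
--- BirthdayBook ------------------------------
known    : P NAME          -- the set of names with a birthday on record
birthday : NAME -+-> DATE  -- a partial function from names to dates
-----------------------------------------------
known = dom birthday       -- invariant: the known names are exactly
                           -- those for which a birthday is recorded
-----------------------------------------------
```

Note how every variable has an explicit type, a self-explanatory name, and an English gloss, and how the declarations (above the middle line) are kept separate from the constraint that relates them (below it).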
Examples of confusion
Sadly, most mathematical writing does not reach the benchmark set by Z. Both the narrative and the symbols are more confusing than necessary. For example, one author introduces the basic model for statistical learning with references to two different probability distributions, both of which he calls F. Another author, writing about robust statistics, left me baffled by writing:
Tn = Tn(X1,...,Xn)
Here's an example of poor narrative from an interesting paper on using cluster analysis to assess the probability of a risk event based on past experiences.
‘For each cluster, the rate between the number of unexpected results of some interactions and the number of elements in the cluster is defined as the Average Loss Rate (ALR). Each element itself has a risk probability associated with the ALR.’
If only they had used the same number of words to say: ‘The Average Loss Rate (ALR) of each cluster is defined as the proportion of past encounters in that cluster where the risk event occurred. Each encounter has a risk probability equal to the ALR of the cluster to which it belongs.’
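The rewritten definition is simple enough to state as a few lines of code. Here is a minimal sketch (the function name and data are illustrative, not from the paper): the ALR of a cluster is the proportion of its past encounters where the risk event occurred, and every encounter inherits its cluster's ALR as its risk probability.

```python
def average_loss_rates(clusters):
    """Compute the Average Loss Rate (ALR) of each cluster.

    `clusters` maps a cluster id to the list of past encounters in
    that cluster, each recorded as True if the risk event occurred.
    The ALR is the proportion of encounters where the event occurred.
    """
    return {cid: sum(outcomes) / len(outcomes)
            for cid, outcomes in clusters.items()}

# Illustrative data: two clusters of past encounters.
clusters = {
    "A": [True, False, False, False],  # event in 1 of 4 encounters
    "B": [True, True, False],          # event in 2 of 3 encounters
}
alr = average_loss_rates(clusters)
# Each encounter's risk probability is the ALR of its cluster, so an
# encounter assigned to cluster "A" gets probability 0.25.
```

The contrast with the original quotation makes the column's point: the same idea, stated plainly, fits in a one-line formula.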
Common clarity faults in mathematical writing include all those found in ordinary English, plus over 30 specific to mathematical writing.
The specific mathematical faults range from tiny irritations to big and persistent barriers to communication.
The more pervasive problems include difficulties with assumed knowledge and failure to explain motives.
Much mathematical writing makes inconsistent assumptions about the reader's prior knowledge, and nearly all of it assumes more than necessary. Even if readers cannot follow the fancy calculus used to get from some initial assumptions to a useful result, they still need to understand those initial assumptions so they can see when the result is unsafe to use. Too often the writer just starts with the result, leaving the reader confused and suspicious.
A lot of mathematical writing in books makes little or no attempt to explain the motivation behind the ideas it sets out. Why are they useful? Why do we need the following definition? What is the value of the model? What questions are we trying to answer? Why is a particular approach to a problem likely to solve it?
Readers have to work hard to follow mathematical writing so they should be encouraged to make that effort. They also benefit from an insight into the writer's thinking.
Campaign for Plain Maths
If quantitative methods are to be used more in managing risk then the people who understand them need to explain them superbly well. I urge everyone who writes about the mathematics of risk to do everything they can to communicate more clearly, and I urge everyone else to demand that they do!
Words © 2005 Matthew Leitch. First published 2005.