Matthew Leitch, educator, consultant, researcher
Working In Uncertainty

## Taming probability notation
This article proposes that we should make probability notation simpler and more consistent, to avoid confusing learners and everyone else.

## Improving the three types of probability notation

It's not obvious, but there are three basic types of notation for 'probabilities' in probability theory:

- Generic notation, using P, p, or Pr for everything.
- Specific functions, using a variety of names invented to represent particular distributions, such as f_x.
- Distribution families (e.g. Normal(x | μ, σ²)), where specific distributions are selected by specifying particular parameter values (e.g. μ = 2.3 and σ² = 9.4).
The approach I suggest below is to use the generic notation more strictly, which tends to make it a bit more lengthy, but to use specific functions more often and more systematically to compensate for this extra writing.

## More complete generic notation

The format suggested below is inspired by Z (see Spivey, for example), a mathematical style developed for specifying computer systems. It also has similarities with proposals for notation by Carroll Morgan and by Maarten Fokkinga. The format looks like this: P[X, A, B], where P is the symbol used every time to show that this is a probability, X is the name of the probability space involved, while A and B are sets used in the probability space. If the probability space has the usual three elements, so that X = (Ω, S, μ), then:

P[X, A, B] = μ[A ∩ B]/μ[A]

In other words, this is the probability, using the probability measure μ, that the truth lies in B given that the truth lies in A. (Remember that Ω is the set of possible truths, S is a set of sets of these possible truths, and μ is a function that gives a probability number for each set in S.) If the probability is not considered as conditional on anything, it is still in fact conditional on something in Ω being true, so we can write: P[X, Ω, B].

Here are some familiar probabilities in old notation and the complete notation I am suggesting:
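Assuming a finite probability space, the defining equation P[X, A, B] = μ[A ∩ B]/μ[A] can be sketched in a few lines of Python (the names here are illustrative, not part of the proposed notation; μ is represented by per-outcome weights):

```python
from fractions import Fraction

# A finite probability space X = (Omega, S, mu), with mu represented here by
# per-outcome weights; the measure of a set is the sum of its members' weights.
def measure(mu, event):
    return sum(mu[w] for w in event)

def P(X, A, B):
    """P[X, A, B]: the probability that the truth lies in B, given that it lies in A."""
    omega, mu = X
    return measure(mu, A & B) / measure(mu, A)

# A fair six-sided die as the probability space.
omega = frozenset({1, 2, 3, 4, 5, 6})
mu = {w: Fraction(1, 6) for w in omega}
X = (omega, mu)

print(P(X, omega, {2, 4, 6}))  # unconditional, i.e. P[X, Ω, evens]: 1/2
print(P(X, {2, 4, 6}, {6}))    # conditional on 'evens': 1/3
```

Note that the 'unconditional' probability is computed by exactly the same rule as the conditional one, with Ω as the conditioning set.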
An advantage of the stricter notation is that you can avoid making mistakes when two or more probability spaces are involved in a problem. This might be because you are working with the views of two or more people, each one having a different view of the probabilities, represented by a different probability space. For example, when analysing a negotiation, the two parties might have different views of the outcome from a particular settlement and it would be helpful to be able to distinguish between them. Perhaps both parties analyse the future in the same way but just have different views as to how likely different outcomes are: X₁ = (Ω, S, μ₁) and X₂ = (Ω, S, μ₂). Or perhaps they analyse the future differently, so that not even the sets of possible truths agree: X₁ = (Ω₁, S₁, μ₁) and X₂ = (Ω₂, S₂, μ₂). We also want to be explicit about probability spaces when we build one from another.

I like the way this notation continually reminds us that there is a probability space involved and that all probabilities are conditional. I also prefer the rigour of the set builder notation used to specify the sets involved. The stricter notation is consistent and gives more information. The old notation for random variables (e.g. P(Z = 3)) is a particularly misleading abuse of notation.

## Systematic and frequent use of specific functions

Both the old and the complete versions of generic probability notation are extremely flexible and powerful. However, they both have two limitations. One is that they can be long when written down. The other is that they only represent individual probabilities, not whole distributions. In practical applications of probabilities we nearly always want to work with whole distributions. It is helpful to avoid using generic notation all the time by introducing specific functions with individual names, rather than trying to make P do all the work.
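As a sketch of why naming the space matters, here are two parties who share the same Ω but hold different probability measures (all names are illustrative assumptions, not from the article):

```python
from fractions import Fraction

# Two negotiating parties analyse the future in the same way (same Omega)
# but hold different probability measures, mu1 and mu2.
omega = frozenset({'deal', 'no_deal'})
mu1 = {'deal': Fraction(7, 10), 'no_deal': Fraction(3, 10)}  # party 1's view
mu2 = {'deal': Fraction(2, 5),  'no_deal': Fraction(3, 5)}   # party 2's view

def prob(mu, A, B):
    """P[X, A, B] for a finite space whose measure mu is given per outcome."""
    return sum(mu[w] for w in A & B) / sum(mu[w] for w in A)

# The same event gets a different probability in each space, and notation
# that names the space makes clear whose view each number comes from.
print(prob(mu1, omega, {'deal'}))  # 7/10 in party 1's space
print(prob(mu2, omega, {'deal'}))  # 2/5 in party 2's space
```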
Defining these specific functions produces more compact notation but requires some care in thinking of function names that are easy to remember, and then providing clear definitions if needed. When working on a particular problem it is usually easy to learn the type and meaning of the functions you create. The following examples again assume a probability space, X, defined as X = (Ω, S, μ). Also, notice that I am using square brackets for functions to avoid confusion with the curved brackets used to show order of calculation.
In example 1, the specific notation does little more than eliminate the need to explicitly specify the probability space and conditioning set. The function, f, takes as input a set from Ω (the 'outcome space' from the probability space X) and returns the probability that the truth lies in that set, according to the μ from the probability space.

In example 2, the idea of a conditional probability distribution is captured as a function, g, that takes as input a set from Ω and returns the probability that the truth lies in that set, given that it lies in the conditioning set.

Example 3 shows a typical situation involving a so-called 'random variable'. The old notation is read as saying 'the probability of the random variable, Z, being less than F'. However, technically, Z is a function that takes as input an item from Ω and returns a Real number. The generic notation shows this idea using the standard rules of set builder notation, with the function Z applied to elements of Ω.

Example 4 is another conditional probability distribution, this time probably based on a joint probability density distribution, with the notation representing the probability density of a particular value of x given a particular value of y.

## Standard notation for distribution families

The notation for distribution families is really that of conditional distributions, so instead of writing Normal(x | μ, σ²) we can write Normal[μ, σ²][x]. An advantage of this style is that it is possible to talk about the function Normal[μ, σ²] as a distribution in its own right, without mentioning any particular value of x.

## Some longer examples

## Flipping a fair coin

Flipping a fair coin, and using the usual assumptions of equal probabilities, we might define a probability space, X, as follows:
The probability of 'heads' can then be written as:

P[X, Ω, {heads}]

To set up a probability distribution with a specific name, we can start with a space-saving abbreviation for the results:

HT == heads | tails

and then write:
Note that all the objects and rules defined in the Coin Probability Space schema (the box) earlier are imported into this schema at the start, just by writing Coin Probability Space. Alternatively, using lambda notation, we could replace the last three lines:
The lambda notation for defining functions can be read as 'f is the function that takes each result, r, and returns the probability of that result'.

Another alternative is to give the probabilities directly rather than refer back to the probability space. Either technique could be used:
Or, since this is a very small distribution, we could just write:
This style uses the idea that a function is really just a set of paired inputs and outputs. With all these definitions the function is the same. It lets us write things like: f[heads].

## Bayesian modelling

Bayesian modelling of data is a good area for using specific functions. The probability space for most Bayesian methods combines potential models with data we could potentially observe and use as evidence as to how likely each model is to be the best model of the bunch. Since the type is quite complicated, here is an abbreviation for it:

BAYES_SPACE[MODEL, DATUM] == (MODEL × DATUM) × ℙ (MODEL × DATUM) × ((MODEL × DATUM) → ℝ)

We can now define the probability space, giving the definition the name Bayes Space so that it can be reused later.
In addition to a Bayes Space, we also need functions representing views, before and after considering the data, of the probability that each model in the set is the best model. We also need a function giving the probability of observing particular data assuming each model is true.
These definitions start off by importing the elements of Bayes Space. This makes available Ω, S, μ, ms, and ds, with the relationships established between them. Then each function is defined with statements that establish the domain of the function (i.e. the inputs it can handle) and the rule that maps inputs to outputs. In these cases the rule uses the probability space.

One of the easiest ways to do a Bayesian analysis is using 'conjugate priors'. The beauty of this technique is that the distributions representing views before and after using evidence can be taken from the same distribution family. All that changes is the value of the parameters that select a particular distribution from the distribution family.

The simplest example is that of tossing an unfair coin to learn about the rate at which it turns up heads, long term. Our initial view of the relative probabilities of each possible rate of heads can be represented by a probability density distribution from the beta family. Our view of the relative probabilities of each possible rate of heads after considering the evidence can then be represented by another distribution from the beta family, selected by different parameter values.

A second distribution family is also used in this analysis. The binomial family is used to state the probability of getting a certain number of heads from a series of tosses, assuming the probability of heads is the same on every toss. Two parameters are used to select a particular distribution from the binomial family. They are the number of trials (i.e. tosses) and the probability of 'success' on each trial.

The function M simply maps models to particular Real numbers. For example, if you think the rate of heads is 0.3 then the associated Real number is 0.3. It's almost too obvious to mention, but there is a logical difference between a model and a Real number.
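A minimal sketch of the conjugate update described above, assuming a Beta[a, b] view of the rate of heads and a run of coin tosses (function names are illustrative, not from the article):

```python
from fractions import Fraction

# Beta-binomial conjugacy: a Beta[a, b] view of the rate of heads becomes
# Beta[a + h, b + t] after observing h heads and t tails in h + t tosses.
# Only the parameter values change; the distribution family stays the same.
def beta_update(a, b, heads, tails):
    return (a + heads, b + tails)

def beta_mean(a, b):
    """The expected rate of heads under Beta[a, b]."""
    return Fraction(a, a + b)

prior = (1, 1)  # Beta[1, 1]: a flat initial view of the rate of heads
posterior = beta_update(*prior, heads=7, tails=3)

print(posterior)              # (8, 4)
print(beta_mean(*posterior))  # expected rate of heads is now 2/3
```

The update itself is just parameter arithmetic, which is exactly the appeal of conjugate priors.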
## Final thoughts

These examples give a flavour of the notation that can be used, but probably also look rather complicated, and perhaps even intimidating, on a first look. Bear in mind that these examples give vastly more information than typical writing about probabilities and distributions. Also, the effect of reading each statement carefully and understanding what it says is a much clearer understanding of probabilities than can usually be achieved. Brevity is not always the key to clarity: not if brevity is achieved by leaving the reader to guess the rest.

## References

Fokkinga, Maarten M. (2006). Z-style notation for Probabilities.

Morgan, C. (2012). Elementary Probability Theory in the Eindhoven Style.

Spivey, J.M. (1989). The Z Notation: A Reference Manual.

Words © 2014 Matthew Leitch