We all learn from our experiences, but with systematic application of simple
methods we can learn even more. Consider the case of David Ogilvy, the
advertising genius who founded the Ogilvy & Mather advertising agency in
1949 and by the 1980s had turned it into the 4th largest advertising agency in
the world almost entirely by organic growth. In one period of seven years Ogilvy
& Mather won every new account for which it competed. One day someone from
IBM arrived unexpectedly and simply gave them the IBM account; he knew their
work.
David Ogilvy had many gifts, but two things in particular distinguished his
advertising. First, he could write interesting and charming copy. Second, he was
totally focused on the effectiveness of his advertising (rather than its
creative qualities) and used experimentation, not just to guide individual
campaigns as others did, but also to build up a body of empirically supported
rules of thumb about what works and what doesn't in advertisements of different
kinds in different media. For example, he found that, in print advertising,
pictures of human faces at larger than life size tended to repel people.
His favourite form of advertising was direct response, where consumers
respond directly to the advertisement itself. It allowed him to conduct experiments with
different ideas, such as headlines, and get easily measurable results.
In his book ‘Ogilvy on Advertising’ he relates how the great Stanley Resor,
who by then had been head of J Walter Thompson for 45 years, told him he had
begun looking at research to find factors which usually work. In two years they
had already found a dozen. Ogilvy said he was ‘too polite’ to mention that he
already had 96.
Ogilvy exploited the opportunities for experimentation inherent in his
business. If you are not in advertising your opportunities will be different,
but there will probably be more than you are currently using.
Learning more from experience
As we come to a more realistic view of what we understand, can
predict, and can control, it becomes clear that we will often benefit from
learning more about how our organisation and its environment really work, and
what we can do that produces results we value.
From the organisation's perspective this learning is essential to improving performance.
For an individual within an organisation, being able to draw convincing
lessons from experience, as well as from reasoning, is useful as a means of finding
good ideas and winning support for them. If you can back up your claims with
results, there is less chance of your discovery being overlooked.
In some industries (e.g. design of chemical manufacturing plants) it is
possible to carry out proper scientific experiments with control conditions, or
to systematically vary independent variables and record the effects on dependent
variables. It is even possible to build a detailed model of the connections
between variables and find optimum settings for independent variables. A lot of
work has been done on how to design and interpret these experiments efficiently
to screen out unimportant variables and build a model.
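To illustrate, here is a minimal sketch of how a two-level factorial screening experiment of this kind might be analysed. The factors (temperature, pressure, catalyst), their levels, and the yield figures are invented for illustration.

```python
# Minimal sketch: a full two-level factorial design for three hypothetical
# factors (temperature, pressure, catalyst), used to screen out unimportant
# variables by estimating each factor's main effect on yield.
import itertools
import numpy as np

# Coded levels: -1 = low setting, +1 = high setting for each factor.
design = np.array(list(itertools.product([-1, 1], repeat=3)))  # 8 runs

# Hypothetical measured yields for the 8 runs.
yields = np.array([62.1, 63.0, 70.4, 71.9, 61.8, 63.5, 70.1, 72.6])

factor_names = ["temperature", "pressure", "catalyst"]
for i, name in enumerate(factor_names):
    high = yields[design[:, i] == 1].mean()
    low = yields[design[:, i] == -1].mean()
    print(f"{name:12s} main effect: {high - low:+.2f}")

# A small effect (here, temperature) suggests the factor can be dropped
# from later, more detailed modelling of the response.
```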
Once such a process is in live operation it is still possible to use rigorous
quantitative methods to adjust the optimum settings to meet slowly changing
conditions. EVOP (Evolutionary Operation of Processes), for example, involves
varying the independent variables by very small increments, so that the output
of the live process is still acceptable but the slight changes in output can
guide future trials towards new optimum levels.
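A simplified sketch of the EVOP idea follows. Real EVOP uses replicated factorial patterns and checks of statistical significance; this sketch shows only the core loop of trying tiny perturbations around the current operating point of a live process. The process function and settings are invented.

```python
# Minimal sketch of the EVOP idea: make very small changes around the current
# operating point of a live process, keep whichever nearby setting performs
# best, and repeat. The response function is a stand-in for real plant data.
import numpy as np

rng = np.random.default_rng(0)

def observed_output(temp, pressure):
    # Hypothetical live-process response with measurement noise;
    # in reality these numbers would come from the running plant.
    true_yield = 80 - 0.05 * (temp - 180) ** 2 - 0.02 * (pressure - 35) ** 2
    return true_yield + rng.normal(0, 0.3)

temp, pressure = 170.0, 30.0   # current best-known settings
step = 1.0                     # increments small enough to keep output acceptable

for cycle in range(20):
    # Try the current point and four tiny perturbations, in live operation.
    candidates = [(temp, pressure),
                  (temp + step, pressure), (temp - step, pressure),
                  (temp, pressure + step), (temp, pressure - step)]
    results = [observed_output(t, p) for t, p in candidates]
    temp, pressure = candidates[int(np.argmax(results))]

print(f"Settings after 20 EVOP cycles: temp={temp:.1f}, pressure={pressure:.1f}")
```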
Unfortunately, most business situations don't allow these methods to be
effective. An advertising agency, hotel, car dealership, or travel agent, for
example, is very different from a chemical plant. It can be difficult to vary
independent variables in a controlled way. The volume of data available is often
small, with many potentially confounding variables. Conditions change in many
ways and often quickly. Finally, we are almost always running a live process,
and it is difficult to defend not applying what appears to be the best approach
wherever possible.
Despite all these difficulties we have to do something. Most often we
‘experiment’ by simply doing something to see what happens. There's no control
group or other comparison. This falls far short of standards for scientific
experimentation but it's all we have. What can stop this being a futile exercise
in self-deception is that we can use our knowledge of the world and the
conditions surrounding our ‘experiment’ to do two things:
Make allowances: In the Design of Experiments terminology this
is a covariate design, where you record the values of potentially confounding
variables and try to adjust for their effects. We can bring lifetimes of
experience to bear on this task, though quantification is often difficult. (We
make allowances most often when the results of a trial surprise us but we
should do it every time.)
Observe steps of causality: If you think that A causes D,
because A causes B, which causes C, which then causes D, you can sometimes
observe changes in B and C as well as the final result, D. This is
particularly important if D is a delayed response, such as future purchases by
customers. A change at B or C ‘stores’ the effects for later and it is usually
easier and quicker to observe the intermediate effect than wait for the final
result.
For example, suppose you think that offering slightly different payment terms
will improve sales. You try it on the next sales lead but don't get the sale.
However, the buyer tells you that the payment terms are more attractive than the
usual terms but explains that a corporate decision has been taken to purchase
only from another ‘strategic’ supplier. Overall, this is slightly encouraging
for your idea. The confounding factor is the corporate decision and you make an
allowance for it. The intermediate effect is that the buyer finds the payment
terms more attractive, even though this did not lead to the ultimate result of a
sale.
If we persist in trying to quantify effects, it becomes possible to make
quantitative allowances for more and more factors that could not be controlled in
our experiments. The benefits of this approach increase over time.
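As a concrete illustration, here is a minimal sketch of a quantitative ‘allowance’ made by regression: the outcome is regressed on both the factor that was tried and a recorded confounding variable, so the factor's estimated effect is adjusted for the confounder. The data are simulated and all variable names are assumptions.

```python
# Minimal sketch of making a quantitative 'allowance' for a confounder:
# regress the outcome on the factor you tried AND a recorded confounder,
# so the factor's estimated effect is adjusted for the confounder.
import numpy as np

rng = np.random.default_rng(1)
n = 40

new_terms = rng.integers(0, 2, n)            # 1 = new payment terms offered
market_size = rng.normal(100, 20, n)         # recorded confounding condition
# Hypothetical sales: a +5 effect from the new terms plus a market-size effect.
sales = 50 + 5 * new_terms + 0.4 * market_size + rng.normal(0, 3, n)

# Naive comparison ignores the confounder.
naive = sales[new_terms == 1].mean() - sales[new_terms == 0].mean()

# Covariate-adjusted estimate via ordinary least squares.
X = np.column_stack([np.ones(n), new_terms, market_size])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)

print(f"naive difference: {naive:.2f}, adjusted effect: {coef[1]:.2f}")
```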
The power of experimentation can be increased by the following:
Try more things: There is a tremendous advantage in simply
trying more things. Don't just plough on with exhaustive trials of one idea.
Screen lots. Self-made millionaires are more often energetic than smart. They
try a lot of different things and so are more likely to find a hit.
Typically, an idea will be tried on a small scale first. If the signs are
that it is worth a further look it may be tried on a wider scale, and so on in
stages. This way the results may give increasing certainty.
Try ideas in favourable conditions first: Whether your idea is a
theory about a causal factor or an idea for a better way of doing things, the
chances are that it won't work equally well in all the situations where it
might be applied. At an early stage you will learn more if you try your idea in
conditions where its impact is likely to be easy to see.
For example, if you have thought of a new way to do something and think it
is a better way, at least in some situations, think about what might
characterise those situations. If possible, try your new method first in
situations that suit it. Use what you learn from this to repeatedly revise
your ideas about what works and what conditions affect its usefulness.
Gradually expand the range of situations where you have tried it out.
Do this consciously or you run the risk of concluding that your idea has
universal application when it does not. If you stratify the population on the
basis of suitability and you are clear about your criteria you can select
sample items from the most suitable sub-group and use statistical inference to
generalise about the effectiveness of your idea in this group.
The approach of experimenting in the most favourable conditions first means
that it is easier to see the effects you are interested in and you spend most
of your time doing things you believe to be the best approach.
Go for data volume if you can: The more trials you can do, the
better. If there's a way to do lots cheaply and quickly, then do so, and plot
graphs to help you see directly what is going on. This helps you separate the
effects you are interested in from all others. The design of experiments
literature tends to concentrate on what are called ‘two level’ designs, i.e.
each factor is tested at just two levels. If you can vary factors over more
levels cheaply and do a lot of trials, it is possible to plot graphs that give
a clear message without complicated statistics. Unfortunately, high volume
isn't often an option.
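Here is a minimal sketch of that approach, assuming trials really are cheap: a factor (a hypothetical discount level) is varied over many levels, every trial is plotted, and the mean response per level is overlaid so the shape of the relationship is visible without formal statistics. All figures are simulated.

```python
# Minimal sketch: when trials are cheap, vary a factor over many levels and
# plot the raw results; the shape of the relationship is often visible
# directly. The discount levels and responses are illustrative.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)
discount = np.repeat(np.arange(0, 21, 2), 10)   # 0% to 20%, ten trials each
response = 30 + 2.5 * discount - 0.08 * discount ** 2 + rng.normal(0, 4, discount.size)

plt.scatter(discount, response, alpha=0.4, label="individual trials")
levels = np.unique(discount)
means = [response[discount == d].mean() for d in levels]
plt.plot(levels, means, color="black", label="mean response per level")
plt.xlabel("discount offered (%)")
plt.ylabel("take-up rate (%)")
plt.legend()
plt.show()
```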
Observe and record more conditions and make allowances: Think as
widely as possible to identify conditions that might have been important but
that you had perhaps not noticed. These are often conditions that are normally true
in your business and which tend to be assumed. Perhaps in future that
assumption won't hold. Keeping a diary can be useful.
Observe more steps of causality: Don't forget to check for steps
in the causal chain that might lead to effects you don't want, as well as
effects you do want. Many apparently sensible strategies have delayed adverse
effects.
Design experiments with comparisons: You can get a stronger
indication of effects by varying factors systematically and over a wider range
(within sensible limits). There are many alternative factorial designs, not
all requiring every combination of levels of factors. You can also reduce the
confounding effect of other variables by ‘blocked’ designs. This is where you
separate your total population into relatively homogeneous sub-groups and then
split each group between the experimental conditions. In particular, as
business conditions tend to change a lot over time, it is very helpful to run
the comparisons in parallel rather than one after another. Even when there is
pressure to apply what is thought to be the best strategy wherever possible
there is still room for small variations and these may be enough to guide
optimisation (following the principles of EVOP).
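A minimal sketch of a blocked comparison run in parallel follows. The blocks, account counts, and sales figures are invented; the point is only to show the structure of splitting each homogeneous sub-group between the two conditions and estimating the effect from within-block differences.

```python
# Minimal sketch of a blocked comparison: each homogeneous sub-group (block)
# is split between the two conditions run in parallel, and the effect is
# estimated from the within-block differences, reducing confounding.
import random
import statistics

random.seed(2)

# Hypothetical blocks of customer accounts, with typical sales per block.
blocks = {"small": 3.0, "medium": 6.0, "large": 12.0}

within_block_differences = []
for block, baseline in blocks.items():
    accounts = list(range(10))
    random.shuffle(accounts)
    treated, control = accounts[:5], accounts[5:]
    # Hypothetical outcomes: the new approach adds about 0.8 to sales.
    treated_sales = [baseline + 0.8 + random.gauss(0, 0.5) for _ in treated]
    control_sales = [baseline + random.gauss(0, 0.5) for _ in control]
    diff = statistics.mean(treated_sales) - statistics.mean(control_sales)
    within_block_differences.append(diff)
    print(f"{block:6s} block difference: {diff:+.2f}")

print(f"estimated effect: {statistics.mean(within_block_differences):+.2f}")
```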
Start with what you are most certain of: Most outcomes are
driven by many factors. If you need to eliminate the effects of factors you
can't control, it makes sense to start with the factors whose effects you are
most certain of. This in turn suggests that it is good to use early
experiments to try to understand factors whose effect is pretty obvious and
easy to model, even if it is not very interesting. Once you have eliminated
the effects you understand, take a look at what is left.
Quantify effects: It is more useful to quantify effects than
just say ‘more’ or ‘less’ because you can make better allowances for factors
you understand and reveal more clearly what is left. Mathematical and
spreadsheet models are useful.
Start with very simple models: Model building can be time
consuming and tiring, so start with embarrassingly simple models and use them
to see which parts of the model produce most of the uncertainty in results.
Iteratively refine your models, concentrating on the areas that are
responsible for most of the uncertainty.
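For example, here is a sketch of an embarrassingly simple profit model together with a crude check of which input drives most of the uncertainty, done by swinging each input across its plausible range while holding the others at central values. The model, inputs, and ranges are all invented for illustration.

```python
# Minimal sketch: a very simple profit model, plus a crude check of which
# input is responsible for most of the uncertainty in the result, by swinging
# each input across its plausible range while the others stay central.
def profit(units, price, unit_cost, overhead):
    return units * (price - unit_cost) - overhead

# (low, central, high) guesses for each input -- illustrative only.
ranges = {
    "units":     (800, 1000, 1300),
    "price":     (9.0, 10.0, 10.5),
    "unit_cost": (5.5, 6.0, 7.0),
    "overhead":  (2500, 3000, 3200),
}

central = {name: vals[1] for name, vals in ranges.items()}

for name, (low, mid, high) in ranges.items():
    outcomes = []
    for value in (low, high):
        inputs = dict(central, **{name: value})
        outcomes.append(profit(**inputs))
    swing = max(outcomes) - min(outcomes)
    print(f"{name:10s} swing in profit: {swing:8.0f}")

# Refine the inputs with the biggest swings first; leave the rest crude.
```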
Use more than one model: Often it makes sense to have more than
one model in use at the same time. First, it may be that there are alternative
models for the same purpose. (This is discussed further below.) Second, it is
often helpful to develop models to support specific decisions (one-offs or
regular decisions). For example, the models used to plan stock levels may be
different from those used to plan staff levels. There may be no link or
consistency between them and this is not necessarily a bad thing.
Keep the detail: Gather experience in the smallest units
possible and aggregate it into averages only as a last resort. For
example, if you want to try that idea for more attractive payment terms you
could divide your customers into two groups with the same level of sales in
the previous year. One group is offered the new terms and the other is offered
the usual terms. After a period of time you compare the total sales in each
group. This gives you two numbers to compare, but it is hard to make
allowances for potentially confounding factors. Alternatively, you could try
to make allowances at the level of each customer account, or even at the level
of each potential sale. This will give you more information.
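A minimal sketch of what keeping the detail might look like in data terms follows: one record per potential sale, with its conditions, from which group totals can still be recovered but finer comparisons and allowances remain possible. The field names and figures are illustrative.

```python
# Minimal sketch of keeping experience in the smallest units possible: record
# each potential sale individually, with its conditions, rather than keeping
# only group totals.
leads = [
    {"customer": "A", "terms": "new",   "segment": "large", "won": True,  "value": 120},
    {"customer": "B", "terms": "usual", "segment": "large", "won": False, "value": 0},
    {"customer": "C", "terms": "new",   "segment": "small", "won": False, "value": 0},
    {"customer": "D", "terms": "usual", "segment": "small", "won": True,  "value": 40},
    # ... one record per potential sale
]

# Group totals can always be recovered from the detail...
for terms in ("new", "usual"):
    total = sum(lead["value"] for lead in leads if lead["terms"] == terms)
    print(f"{terms:5s} terms: total sales {total}")

# ...but the detail also lets you compare like with like, e.g. within a
# segment, or make allowances for conditions recorded against each lead.
for segment in ("large", "small"):
    wins = {t: sum(1 for lead in leads
                   if lead["segment"] == segment and lead["terms"] == t and lead["won"])
            for t in ("new", "usual")}
    print(f"{segment:5s} segment wins: {wins}")
```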
Evolve your experimental designs: Many experiments give weak or
unusable conclusions when first attempted and nearly all experiments could be
better designed with hindsight. Be prepared to change your experimental
approach and try again, and again, and again.
Learning from financial and management accounts
At the aggregated level of company financials it is usually difficult to see
the effects of trying different approaches until they are widely used. Even then,
the gradual roll-out and combination of many initiatives, along with other trends, make
direct analysis of the summarised financials very difficult.
Nevertheless, we can and should try to learn from management and financial accounts. Here are some ideas:
Compare actuals with detailed forecasts: What are the
differences between summarised actuals and the various forecasts made on the
basis of analysing much more detailed data? Why have they arisen? Are
individual predictions poor? Are there factors that are not being understood
and allowed for at all? Have familiar causal links changed so that previously
reliable predictions are now failing?
This level of comparison provides assurance that lower level forecasting
and learning is working, and a warning if it is not.
Model crudely at a high level: Treat the financials as a set of
time series. Get as many past periods as possible into view and use graphs to
show how things have changed over time. Look for the trends and quantify the
variability between months. Even this simple view may give more realistic
expectations than complicated analyses of much more detailed data, particularly if the
detailed modelling is not going well or hasn't started. The crude model is
useful on its own, but can also be used to challenge more detailed models. If
someone has been experimenting at a detailed level and now predicts future
results that would be very different from past results, without suggesting a
radically different plan of action or pointing to a powerful external trend,
then you should doubt the prediction.
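As a sketch of this crude high-level view, the following treats a hypothetical monthly revenue series as a simple time series, fits a straight-line trend, and quantifies the month-to-month scatter, giving a rough range against which more detailed forecasts can be challenged. The figures are simulated.

```python
# Minimal sketch of the crude high-level view: treat monthly revenue as a time
# series, fit a straight-line trend, and quantify month-to-month variability.
# The resulting range is a sanity check on more detailed, bottom-up forecasts.
import numpy as np

# Hypothetical monthly revenue figures for the last 24 months.
rng = np.random.default_rng(3)
months = np.arange(24)
revenue = 500 + 4 * months + rng.normal(0, 15, 24)

slope, intercept = np.polyfit(months, revenue, 1)
residual_sd = np.std(revenue - (intercept + slope * months), ddof=2)

next_month = intercept + slope * 24
print(f"trend: {slope:+.1f} per month, typical scatter: ±{residual_sd:.0f}")
print(f"next month roughly {next_month:.0f} ± {2 * residual_sd:.0f}")

# A detailed forecast far outside this range, with no change of plan or
# external trend to explain it, deserves to be challenged.
```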
Some pitfalls
Some things that can go wrong are:
repeating an experiment until you get the results you want, either
through chance or by unwittingly introducing factors that bias results in the
direction you want/expect;
not noticing important confounding factors;
determinedly pursuing an idea that's not a good one and distorting
contrary evidence by wrongly concluding that confounding factors are
responsible for contrary results;
not anticipating indirect adverse effects, and not wanting to know
about them; and
assuming an idea with immediate face value is a good one and not testing it.
Don't be paralysed by analysis
Paralysis by analysis is another pitfall.
It's one thing to recognise the value of more information but quite another
to be unable to act at the right time because you aren't sure what to do. So
what can you do when the hard evidence is inconclusive? Here are four
possibilities, each more sophisticated and rational than the last:
Choose a ‘null hypothesis’ until you're confident it is wrong:
For example, you might decide to assume that schemes for motivating employees
have no effect on motivation unless there is strong evidence to the contrary.
Or you might assume they always increase motivation until proven otherwise.
Your null hypothesis can be anything you like; there is no magic to it. If
your null hypothesis happens to be the wrong place to start from, this can mean
you are making bad decisions and ignoring helpful evidence for longer than you
really need to.
Choose whatever the data point to: Suppose you've tried a scheme
to improve motivation and motivation increased by a certain amount. You could
assume that this is the true effect and if you use the scheme again you will
get the same result. This is very responsive to the data, but can lead to
extreme results if the evidence is weak. If your body of data is small, you are
more likely to get estimates that are far from the true value.
Go for the most likely hypothesis taking into consideration data and
prior expectations: In this strategy you need to recognise that you had
some expectations before you even tried the scheme, and try to clarify what
they are. The evidence of an experiment is then combined with this prior view
to produce a revised view. You can do this by judgement or computation.
Doing this involves setting out all the potentially true hypotheses and
attaching a probability to each (a probability density in the continuous case)
that it is the true one. Once you combine the evidence from the experiment with these priors, the
result is a revised set of probabilities. You then choose the hypothesis that
is most likely to be true on your revised view. However, it could be that the
most likely hypothesis is barely more likely than others.
Combine all your hypotheses when making forecasts and decisions:
This is the same as the previous approach except that when making forecasts
and decisions you do not take the most likely hypothesis. Instead, you average
the predictions/decision values of all the hypotheses, weighting each by its
probability of being true. This explicitly shows the uncertainty you have
about what model to use and tends to produce more widely spread distributions
for future predictions. Other modelling approaches described above tend to
understate uncertainty.
This approach is called ‘Bayesian model averaging’ but it doesn't have to
be complicated to do.
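To make the last two approaches concrete, here is a minimal sketch using three discrete hypotheses about the effect of a motivation scheme: prior probabilities are combined with one weak trial result to give revised (posterior) probabilities, and a forecast is made both from the single most likely hypothesis and by probability-weighting all of them, a very simple form of Bayesian model averaging. All the numbers are invented.

```python
# Minimal sketch: three discrete hypotheses about the effect of a motivation
# scheme, prior probabilities for each, a Bayesian update on one weak trial
# result, then a forecast made (a) from the single most likely hypothesis and
# (b) by probability-weighting all hypotheses (simple Bayesian model averaging).
import math

hypotheses = {"no effect": 0.0, "small lift": 2.0, "large lift": 5.0}  # effect sizes
prior = {"no effect": 0.5, "small lift": 0.4, "large lift": 0.1}

observed_lift = 3.0   # one noisy trial result
noise_sd = 2.5        # how noisy we believe a single trial to be

def likelihood(effect):
    # Relative plausibility of the observed lift if this hypothesis were true.
    return math.exp(-0.5 * ((observed_lift - effect) / noise_sd) ** 2)

unnorm = {h: prior[h] * likelihood(e) for h, e in hypotheses.items()}
total = sum(unnorm.values())
posterior = {h: p / total for h, p in unnorm.items()}

best = max(posterior, key=posterior.get)
map_forecast = hypotheses[best]
averaged_forecast = sum(posterior[h] * hypotheses[h] for h in hypotheses)

print("posterior:", {h: round(p, 2) for h, p in posterior.items()})
print(f"most likely hypothesis forecast: {map_forecast:.1f}")
print(f"model-averaged forecast: {averaged_forecast:.1f}")
```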
This list of possible approaches is not complete. For example, sometimes it
is possible to take a decision without needing to know much about prior beliefs.
It may be that one strategy is very attractive across a wide range of hypotheses
so it can be chosen without having a clear idea of how likely each hypothesis
is.
The approaches that take into consideration prior beliefs as well as new
evidence are usually more suited to business situations because so often the
data available are far from conclusive. I personally find it helpful to think
about my prior beliefs, especially when the evidence of experience is weak, as
it so often is.
Summary
Science has made a huge impact on the human race but there are times in
business when it seems to have no relevance. This is because the circumstances
in which we work often do not suit the experimental designs most of us learned
at school or university. But if we adapt the principles to our circumstances we
can learn more from experience and build more powerful cases for good business
ideas.