This paper is about designing internal control systems (precautions we take
to guard against error, fraud, or other perils) for business processes, such as
billing, purchasing, and treasury management.
But why recognise internal controls design as a discipline in its own right?
Why not assume it is done by everyone in the normal course of their jobs?
When a business creates these processes for the first time, or makes
significant changes, internal controls will be established. This usually happens
because people involved know from their education or past experience that bad
things happen and they need to take precautions. For example, the IT people will
worry about hackers, viruses, and disasters like fire and flood in the data
centre. Accountants will be looking for reconciliations and approvals. Managers
will want reports. And so on.
This organic, decentralised process works pretty well but it does have flaws,
and these lead to some serious risks and inefficiencies.
Data quality meltdown
Almost all organisations have a huge investment in data - data about
customers, suppliers, products, employees, and so on - gathered, checked, and
stored in databases and files. Consequently, one of the most costly problems
with new systems or processes is the data quality meltdown. Here is how it
happens.
At the start of the implementation project, task lists are drawn up and lots of good ideas about quality assurance and internal control are usually captured and put into the plan. However, as time goes on and things take a bit longer than expected, the pressure builds. As people become stressed, their focus
narrows. Meetings are held at which people ask ‘What do we really, really
need to do?’ Little by little the quality assurance and internal control
tasks get de-scoped and eliminated. Go live weekend arrives (three months late
but it still seems like a triumph) and the champagne is opened. For a while
everything seems to be going well, though people are struggling with the
unfamiliar way of working.
Then the first evidence of problems starts to emerge. Someone runs a suspense report for the first time and discovers thousands of errored transactions have already built up. More checks are done and more problems emerge. Time to panic. More temps are hired and a crisis team is formed. Overtime is already huge, possibly with shift working.
But by this time the vicious cycle has taken hold. People often make mistakes
when they try to correct mistakes. Reference data is already contaminated with
errors and is generating more and more incorrect transactions. The extra work of
correcting mistakes is leaving people tired and stressed, so they make more
errors, especially when trying to correct errors, late at night, for the third
time.
Recovering from this sort of meltdown can cost more than the original
implementation. It is better to apply expertise to internal controls from the
outset to minimise the risk of meltdown.
Persistent waste
Fortunately, data quality meltdowns are not common. However, wasted time and
damaged customer goodwill are an almost universal effect of not designing
internal controls.
The most easily quantified and understood effects are external quality costs.
In other words, situations where errors cost cash. For example, paying the same
invoice to a supplier twice, or failing to bill a customer for all the goods and
services provided.
In telecoms, ‘revenue assurance’ is a well-known buzzword. It refers to searching for and correcting system and process errors whereby customers are not being billed correctly for all the services they receive. In many telecom companies the errors found have amounted to several percentage points of revenue - a vast amount of money and well worth doing something about. Certain other industries will probably find the same.
There are also internal quality costs, principally from the extra labour
involved in finding and fixing errors.
This persistent waste occurs because most organisations leave internal
control to a natural, organic process of development and decay. Although this
process is surprisingly effective it has enough flaws to justify deliberate
design. The natural process works like this:
The first controls are put in place because individuals think of them -
usually prompted by bad past experiences. This starting point is not ideal, but from time to time things go wrong, prompting improvements that are normally specific responses to the issue. Over time, a set of internal controls grows.
Occasionally, rising workload and/or staff cuts and turnover cause controls to
fall away. Sometimes this is a good thing because they are no longer needed, but
sometimes it is a bad thing, leading to renewed error and fraud, perhaps not
directly affecting the person who should have been operating the control.
Overall this process is surprisingly effective and controls tend to be more
developed where they are needed most. However, there are some problems:
The strength of each type of control implemented initially depends
on the enthusiasm of the person who thought of it, not actual need. There
are usually large gaps in some areas, especially when a business sets up an
entirely new process and this is done by young people with limited practical
experience of operating such processes.
Things have to go wrong before action is taken, so some losses
are inevitable.
Rare but very serious risks are not controlled as well as common but low-impact events. Normally, rare but serious events have not
happened to the organisation, so the risk seems rather hypothetical.
Routine errors are tolerated. People seem to accept low but
persistent levels of error and other losses as inevitable and acceptable, even
when they are not. Losses of revenue in telecoms are small as a percentage of
total revenue and many are persistent problems that have been there since the
beginning. By contrast, losses due to dramatic events tend to get more
attention.
Cuts in internal controls are unmanaged. Some companies accumulate complex, costly procedures, consisting largely of internal controls that have built up over a long period without being cut back or refined for efficiency.
One reason for designing internal controls in a deliberate, skilled way is to
reduce the routine waste that results from the organic, natural process.
Regulations
Some very important corporate governance regulations affecting listed
companies say that directors are responsible for ‘designing’ internal controls.
The Sarbanes-Oxley Act of 2002 affects companies listed in the USA, even if they
are based elsewhere. In section 302 it states that the CEO and CFO must certify,
among other things, that they have ‘designed’ the internal controls to achieve
various objectives. There is also a need to be able to report on significant
changes to internal controls or other factors that could significantly affect
internal controls. This implies a degree of monitoring of such factors, as will
be described later in this paper.
Stress
The most compelling reason for designing internal controls deliberately, and
with skill, is to reduce stress. Internal control problems cause stress for everyone involved, from the clerk working late to clear billing problems to the CFO explaining to investors how his project to consolidate accounting in one global centre caused a £25m billing backlog. The stress
stretches out to irate and frustrated customers, exasperated suppliers, and
horrified investors.
What to expect
Many people given the job of designing controls for a new or revised process,
and joining a large project, get into difficulties because the project is a lot
rougher than they expected. Most people who get involved with internal controls
are auditors or accountants unused to large projects and unprepared to deal
with some of the more aggressive groups they are likely to meet. Events unfold
more quickly than they expected, and the timing of project milestones seems to
make success impossible.
Every project is different, but typically you can expect to deal with the
following groups:
Software people, including ‘business analysts’ and programmers.
This group tends to be the most proactive and tough minded. They know their
job is complex and time pressured and they don't want to be blamed for things
going wrong. If they are consultants they will probably generate gigabytes of
documentation (much of it boilerplate and not very interesting) and demand a
lot of sign-offs from everyone else. Most importantly, the software people
need to know everyone's requirements very early on if they are to deliver on
time so they start asking for them and setting deadlines at what seems like a
very early point in the project. This often goes on for some time, with
successive ‘final deadlines’, but it is still very easy to miss your
opportunity to register your requirements.
Virtually all deliverables will be worked on until the last possible moment
and sometimes beyond.
These software people often have a very narrow view of internal controls
and left to themselves will focus on data validation (which all systems have
to do to work anyway), disaster recovery, and perhaps access
restriction.
IT infrastructure people are usually easier to deal with. They
provide development and test environments and will usually run the computers
for the live system. Sometimes they have a lot to do. Left to their own
devices they will concentrate mostly on disaster recovery.
A finance team will be included in the project if the
system/process will handle financial information. These may be on loan from
the accounts department, supplemented by contractors/temps. Typically, the
finance team is not so wise to the workings of projects as the software people
and tends to fare worse. The finance team may well be responsible for the
internal control aspects, but will often focus exclusively on reconciliations
and which reports are needed.
Internal auditors may be there just to review and report, or
they may be the ones designing the controls. Internal audit involvement can
vary from just looking to see that the project manager's documents are up to
date to attending all important meetings and looking in detail at proposed
controls. Skill at designing internal controls and internal control systems is
extremely rare so internal audit usually approach design as if it is an
audit.
The project manager and project office are keepers of the
project plan. The most important point for the controls designer to remember
is that the project manager's plans probably don't include much if anything
for controls design. Virtually none of the ‘methodologies’ used for running
these projects recognises controls design as a separately identifiable
activity with its own specialists and most projects don't have them.
The steering group is the most important group on the project
and usually includes at least one board member if the project is of such
importance to the organisation that internal controls specialists are
involved. Many steering groups are, rightly, very worried that they are not
being told the whole truth by people on the project. The steering group is
usually capable of seeing the bigger picture, which makes them very useful
allies for the controls designer.
Shortage of information and time
Because the software people work down to the wire on most deliverables, agreed
process descriptions and system designs are not available until it is too late
to respond to them from a controls point of view.
Resistance born of desperation
Also, the software people will often be reluctant to accept some security and
control requirements because they see them as non-essential and as getting in
the way of them delivering on time. The more desperate they get, the greater the resistance. Sometimes only the steering committee can get them to accept control
requirements as necessary.
What not to do
If you want to achieve nothing as a controls designer on a big project the
steps you should take are as follows:
Ignore the project manager, or submit a plan for your work and assume it will naturally be linked into the work plans of others on the project.
Controls design is not a traditional aspect of system development and
implementation ‘methodologies’ so it probably won't be in the master plans
unless you have gone to some lengths to get it there. To add to your problems,
although most project managers are great, some don't even know how to link
team plans using their project planning software.
Wait for process and system designs to be agreed before you
start work on the controls for them.
This will ensure you start work too late to get anything worthwhile
done. The process and software people will continue working until the last
possible moment leaving you with hardly any time at all to react before many
design decisions you should have influenced are set in stone (i.e. subject to
‘change control’ and therefore more work to get changed).
Write a long list of ‘best practice’ internal controls from an
old auditing textbook you borrowed from a friend.
It will be obvious to everyone what you have done and people will feel
you have not thought about them before giving your opinion. Confronted with a
list of stringent but inconsiderate requirements people will rebel against
your work.
On no account summarise what you are proposing as any kind of
coherent system or scheme and do not give reasons for the controls you are
proposing.
This will minimise the value people see in your input and help ensure
nobody knows what you are doing. You will have no way to prioritise or plan
your own work.
Use risk-control mapping matrices that list risks in a column
and have controls listed next to each risk to show how the controls cover the
risks.
Leave the software people to work out the details of how they
will respond to your control requirements. Assume they will do it. Never help
them with suggestions of your own. If they report difficulty just read out
your requirements slowly in a loud, clear voice.
Some software people don't know much about internal controls so they
need your help. Even those that do know will probably appreciate the
opportunity to discuss what the requirements really mean and talk about
alternative ways of meeting them. If you can suggest something that is easier
than they expected you will be popular and listened to.
Stick to the controls you know best - reconciliations
perhaps.
There may well be controls that are very important but that you neglect
simply because you don't feel confident talking about them.
Try to do everything in the internal control textbook.
The textbook will list everything the author could think of that might
be necessary. In practice you don't have the infinite resources needed to work
through every checklist with equal diligence. The things you have to work on
first will get done better than the things you were planning to do later,
which probably won't be the best allocation of resources.
Having written out the controls assume they will be put into
operation by line management. Do not pay any attention to user acceptance
testing.
Since everyone will be busy and under pressure, especially in the later
stages of the project as ‘go live’ gets near, your work is more likely to be
ignored. If people haven't rehearsed the internal controls they will need they
will struggle to learn and perform them once they go live. The months
immediately after going live are the most dangerous so not being able to
perform the controls properly from the start is very dangerous. You could have
problems that nobody has noticed simply because nobody has checked for
them.
If there is no evidence of issues once the system/process goes live
assume everything is fine. No news is good news.
Actually, no news is extremely worrying. It suggests people are not
checking and so not finding problems or reporting them upwards.
Assume the controls as designed will work fine and do not need
refinement.
In fact it is difficult to predict error rates to within even an order
of magnitude so it is virtually certain that some adjustments and refinements
will be needed. Also, some people will not do what they are supposed to and
this will need to be solved somehow.
How to do it right
To succeed at controls design on a big implementation project you need to:
Ensure your work is properly represented in the project plan.
Very quickly provide a high level design for the complete control
system, showing what types of mechanisms will be most important, where, and
why. Promote it strongly.
Provide a plan of work packages for the detailed design and implementation of controls.
Design a multi-layer system of controls using the right design tools. Consider economic and cultural factors as well as risks. Consider ergonomics.
Encourage people to rehearse controls during user acceptance testing.
Refine the controls quickly once they go live.
The following subsections discuss these in more detail.
Getting into the project plan
You can get into the project plan in two stages.
Initially, you will have only a hazy idea of what controls will be needed but
you still have to see the project manager and make sure that everything you
can say is in the plan. Your initial work will be to produce the high
level design and propose packages of work to design and implement key components
of that scheme. Explain this, and that you will provide more certainty as soon
as you can. You may agree to put some general sounding tasks into the plan as
place-holders.
Once you have the high level scheme and work packages you can go back to the
project office and provide more detail about what you will do and how much work
is likely to be involved. There may also be implications for other people on the
project.
At all stages, the timing of your work will be determined by the timing of the work of the bigger, more important teams on the project.
How to do high level design
This is probably the most important step and the one requiring the most
skill. This is how you will demonstrate the value your work provides and begin
to get the support for internal controls you will need to survive the pressure
to de-scope that will probably arise later on.
High level design must be done quickly, but convincingly. It should take no
more than 10% of your total elapsed time to do it. Adjust the level of detail to
fit the time and resources available. Do not wait for system and process details
to be agreed.
The basic technique is to look at the factors that will drive the scheme of
controls, and deduce what those factors tell you about how a typical, vanilla
flavoured scheme of controls should be adapted and tailored to fit the specific
circumstances faced. Some key points are:
Risks are only one type of factor shaping the control system. As
well as noting implications via risks you must also recognise implications for
controls via economics, time to implement, and culture.
The deductions lead directly to inferences about the internal
controls. Forget ‘completeness, accuracy, and validity’. There is no stage
at which you try to think of controls to meet control objectives. The idea is
to work quickly, sketching out large portions of the controls scheme very
quickly but without details. Often, this is done by putting together fragments
of control systems you have seen before, or read about, and adapting them
further to build something that fits the needs perfectly.
The control system is multi-layered, and many of the deductions
needed involve noticing how some layers can compensate for weakness in others.
For example, if only a few people use a system and it is hard to segregate
duties fully then some other ways of discouraging fraud are needed.
The scheme must specify control mechanism types and some specific
mechanisms. It is not enough to develop control objectives because this is
useless for planning further work. The key question is ‘What must we build?’
Is it computer code, reports, reconciliation spreadsheets, paper sign off
forms, user access profiles?
The reasons for proposing a particular design are important and
should carry through into your presentations of the high level
design.
Controls design specialist(s) should not do all controls design
work. In any project there will be many people working on controls and
many controls can be left to others to work on, perhaps with a review by the
specialist(s) to check that things are going well enough. Other aspects of the
controls scheme may be critical to overall performance or not within anyone
else's responsibilities, so the controls specialist(s) should do
them.
Data and deduction
It is helpful to organise the initial thinking in a table where you list the facts observed and the implications for internal controls. Divide the table into sections using sub-headings, one for each group of observations/factors. Divide the implications into five columns, containing the implications for controls deduced from each observation, relating to the following (a minimal sketch of this structure appears after the list):
risks i.e. what sort of thing is most likely to go wrong so
what sort of control is most needed. Conversely, what is very unlikely and
therefore allows you to ease off on controls?
economics i.e. what sort of controls might be suggested because
they could be done very cheaply, or have to be ruled out because they would be
too expensive.
time constraints i.e. what sort of controls might be ruled out
because there isn't time to implement them, and what sort of controls are
suggested because they can be implemented in the time available.
culture i.e. what sort of culture does the organisation already
have and what does it want to encourage? What implications does this have for
controls?
control priorities. This is a place to note specific controls
and control mechanism types that are going to be important. It is a summary of
the other columns but can also be used to record things that just jump out at
you so obviously they hardly need explanation in the other columns.
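As a minimal sketch of this structure (assuming nothing about any particular tool - a spreadsheet works just as well - and using invented example content), the table might be represented like this:

```python
# Minimal sketch of the data-and-deduction table. One row per observation/driver,
# with the five implication columns. All example content is hypothetical.
from dataclasses import dataclass

@dataclass
class DeductionRow:
    observation: str                 # the fact/driver observed
    risks: str = ""                  # what is most/least likely to go wrong
    economics: str = ""              # controls that become cheap, or too expensive
    time_constraints: str = ""       # controls ruled in or out by implementation time
    culture: str = ""                # cultural implications for controls
    control_priorities: str = ""     # summary of key controls/mechanism types

# Rows are grouped under sub-headings, one group per set of observations/factors.
workload_features = [
    DeductionRow(
        observation="Very high volumes of transactions",
        risks="Tiny error percentages still produce many exceptions",
        economics="Manual controls unlikely to be efficient",
        control_priorities="System tools to prioritise, investigate, and clear errors",
    ),
]
```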
It takes knowledge and practice to become fluent at this. You need to build a
repertoire of things you know to be common drivers and be able to recall quickly
their typical implications. As a starting point, here are the sort of factors to
look for and some of their most common implications. Asterisks mark the factors
that are almost always key ones.
Drivers/observations, with some of the possible implications of each:
Control performance requirements (from competitive strategy)
Very quick processing is required.
Need controls to catch delayed items.
Try to get controls off the critical path. Replace pre-transaction checks with post-transaction reviews where possible.
Automated controls preferred.
Very flexible processing is required.
The process is probably complex and allows easy adjustments, so it is easier to defraud.
Probably more parts of the process are manual.
Control mechanisms also need to be flexible.
Low hassle to the customer is crucial.
Fewer opportunities to impose controls on the customer.
Must avoid errors affecting the customer.
Exact timing is needed.
Controls are needed to catch delayed items, manage fluctuating workload, etc.
Reliable service to the customer is essential.
Mistakes affecting customers to be reduced.
Very economical processing is required.
A subtle one because errors create quality costs so a balance is needed. Probably need automated controls if the processing is on a large scale.
Compliance with many laws and regulations is required.
Covering compliance risk will take more work. Need to monitor forthcoming regulations/laws and start changing processes in good time.
Cultural features
Behavioural norms encourage fraud/theft. Patterns of crime already established.
High risk of fraud/theft. Collusion a possibility. Could be social/staffing problems if established fiddles are tackled strongly without supporting action by top management.
Company wishes to empower its people.
Old fashioned controls undermine empowerment. Try to make teams work and give people the information they need to make good decisions themselves for the company. Talk of quality rather than control.
Functional silos.
Functional silos are a problem for process level monitoring controls so a cross-functional committee is needed.
Weak control environment.*
Undermines manual controls and may hinder controls design activity. Monitoring is unlikely to work well unless the control environment is improved. Expect control weaknesses at all levels.
Data features
Data is standing data.*
Data errors accumulate unless cleared.
Individual data items are typically more sensitive.
Data entry workload is probably uneven with few users trained to do it, so may need to pre-book work to ensure staff available.
Data is transaction data.*
Data errors probably will not accumulate.
Individual items typically less sensitive.
Data entry workload typically more even but much higher.
Data very complex.*
Difficult to enter correctly because of complex data entry screens. Will require more emphasis on usability engineering.
Harder to write software correctly so more software quality assurance needed.
Very high volumes of transactions.*
Tiny error percentages still produce many exceptions to be corrected. Often need system tools to prioritise, investigate, and clear errors.
Manual controls unlikely to be efficient.
Highly predictable data values.
Easier to control and to monitor. Favours computer filtering and assisted review.
Transactions can be divided into sub-populations which are highly predictable or at least have very common characteristics.
Look to split transactions into separate streams, each with their own controls.
Population contains some very high value and high risk items.
Requires either a very reliable process or a special approach to controlling the bigger items.
Data about individuals is held.
Privacy legislation applies and confidentiality breaches could seriously damage customer confidence.
Very abstract business based on rules, definitions, possibilities (e.g. insurance, derivatives).*
Higher error risks. Trouble can arise through not understanding the business clearly.
Small actions can have big effects, perhaps not immediately visible to those involved.
Process features
The process is highly complex.*
This has different implications depending on whether the process is automated or manual. Complexity is one of the top drivers of controls development effort.
Customers capture data (e.g. type it into a web site).
Error prone, especially if the product is complex e.g. insurance, mortgages. Companies assume their customers are interested in, and understand, their products; often this is wrong.
Rarely possible to provide training to customers. Usability is key, as are edit checks.
Customers could include professional fraudsters.
Suppliers capture data (e.g. type it into a web site).
Again, you can rarely train so the software must be usability tested with lots of edit checks.
Process is highly automated.*
Probably more reliable, but risk of systematic errors. More stress on IT controls.
Process is highly manual.*
Probably less reliable, especially if work is boring, flexible, complex, or under time pressure.
Manual controls are vulnerable to boredom and fatigue.
The assets are easy to dispose of if stolen.
Raises risk of theft/fraud.
High values of money are paid out.
High fraud risk including sophisticated computer attacks, and also risk of interest from money launderers.
Multilingual.
Communication difficulties. Multiple versions of software perhaps. Particularly high risk of misleading field names on data input screens and forms so usability testing needed.
International or geographically distributed.
Different sets of regulations to comply with. Harder to control small, distant offices. Nationalist distrust is possible.
Cultural differences in attitudes to fraud. Potential for misunderstandings.
Many separate databases and interfaces.*
More places for interface failures and opportunities for databases to get out of agreement. More chances to mis-map fields in one database to fields in another. Recoverability is more complex.
Existing business process controls are very good or very poor/there is no existing process.
If there is an existing process with at least some good controls, many people will be doing the right thing already. Otherwise, there will be more work on controls to do.
Immediate environment of the process is inside the organisation.
Look to protect the process from messy inputs.
Workload features
Workload is rapidly rising/falling.*
Staffing problems because many staff are new, or because they are insecure and disgruntled.
Workload is highly variable/constant.
High or low proportion of temporary staff affecting error risks and IT security.
Continuous work is required Vs periodic work only Vs slow response only is required.
Determines need for business continuity, among other things.
Environment is very fast changing or very stable.
Affects choice between very refined, automated controls Vs quick and dirty manual controls.
Many changes in processes, systems, or people.*
Lower inherent reliability is to be expected, so error rates will be greater and controls and monitoring are more important than ever.
Very high/very low proportion of work in the existing process is controls.
Affects decision over how much effort to invest in refining controls in detail.
Project features
Poor project health. (e.g. uncertain sponsorship, politics, unclear or shifting requirements, over-ambitious objectives and impossible timetables)*
Expect delays, frantic efforts to meet deadlines, and pressure to ignore controls. Expect low reliability software and lack of adequate training of staff. Compensate with powerful monitoring controls from go-live onwards.
If you've never really tried this technique before you will be amazed at how
much you can predict from minimal facts, and how accurate your predictions can
be. There's nothing particularly clever about the inferences, but as they build
up many things become clearer.
There is no need to wait for the processes and systems to be agreed before
doing this. The vast majority of your predictions and design decisions will be
correct, with only a few changes being needed once you see the final system and
process details.
Finding out the facts involves talking to people, looking at product
literature, strategy documents, spreadsheets, and indeed anything relevant you
can get your eyes on. It is not necessary to understand all this material to use
it. For example, if the regulatory compliance manual giving rules for selling a
particular product is 5cm thick, printed on flimsy paper in microscopic letters,
and written in legalese you know regulatory compliance is going to take
time!
A common error is to try to think of risks directly, rather than starting
with drivers. This tends to lead to lists of risks that are theoretical rather
than likely.
Summarising the controls scheme
It is helpful to pull together the main conclusions of the initial deductions
into one short document. This is also where you can see the multiple layers of
the control scheme. The shortest summary omits the reasons for each control/type
of control. You will also need to be able to explain the scheme with reasons so
may have to do two summaries.
You may also want to do a separate summary for controls over loading initial
data into a new system. This might be a big exercise if the new system is
replacing an old one and there are perhaps millions of records to be copied,
checked, reformatted, re-checked, and loaded.
Here's the multi-layer model I like and recommend for controlling financial
cycles, starting at the top:
Management monitoring
Process monitoring
Monitor past effectiveness of the controls and take
corrective action, for example by tracking error rates, transactions via
exception streams, and lost revenue and changing the process to make it
inherently more reliable, or adding checks.
Monitor future events and adapt the process and its controls
in good time, for example through capacity planning, looking ahead for
high risk changes and spreading them out, and checking for forthcoming
contract changes that will be difficult and time consuming to
implement.
Monitor the controls to ensure they are operating, for
example through audit work, reviewing reports of control performance, and
control self assessment. Where reliance is placed on exception reporting
no news is good news - or the controls have stopped operating. This is
particularly important for controls that aim to cover risks that rarely
occur.
Business monitoring
Reporting trading performance through information derived through
the process itself. In a business unit there may be many business processes,
each with monitoring as above, each providing information about trading
performance. This is relevant to ensuring financial information is correct
because scrutiny of trading performance can identify unexpected numbers that may turn out to be incorrect.
Control activities
Protect the process from interference, using physical and
software security measures.
Make the process recoverable, for example through data
backups, disaster recovery planning, and building resilience and
recoverability into every interface.
Make the process inherently reliable, for example, by assuring
software quality, testing the usability of software which interacts with
humans, and using reliable hardware.
Put checks on data and processing in place, with associated
corrective action, to detect process errors, interference with the process
such as fraud, and attempts to pass fraud through the process.
Put audit trails in place, so that auditors can gain assurance
of correct functioning, and so that errors can be investigated and corrected
easily.
Proposing work packages
Now you know something about what you want to build and where the detailed
work is most likely to be complex and time consuming you can propose work
packages for internal controls development and implementation during the rest of
the project, with estimates for timing and resources.
These will mostly be work packages for the controls specialist(s) but you may
need to propose work for others too. The controls specialist(s) will work on
important areas not already the responsibility of others, assist others on key
controls, and review progress in other areas. It may be necessary to introduce
other specialists to cover work not previously recognised in the overall project
plan.
The high level design gives you the ammunition for these proposals. It allows
you to say why the work is needed and what the deliverables will be.
Near ‘Go Live!’
The periods just before and just after going live with a new system or
process are very interesting and important.
Just before going live people are usually working hard on user acceptance
testing and loading data. These activities have a big effect on the initial
error rate and workload. User acceptance testing usually involves people who are
going to be real users once the system/process is live so this is a rehearsal
and learning opportunity for them as well as a last chance to find problems.
The user acceptance testing should be as realistic as possible, and that
includes people carrying out controls, such as checks on data and
reconciliations, just as they will when they are live. Unfortunately, it doesn't
always happen. Here are some of the warning signs:
The project is behind schedule and, because of externally imposed
deadlines, it has not been possible to allow enough extra time so user
acceptance testing has been ‘de-scoped’.
Not every part of the system is available, so the testing is not end-to-end.
More bugs were found than expected.
The people doing the testing don't know what the system should do.
The people doing the testing are not going to be the users.
Just after going live is a high risk period because the inherent reliability
of systems and people is at its lowest. Systems typically contain several times
more bugs than they will in a year or two's time, while users are still
unfamiliar with their new ways of working.
Clearly, it is vital to be checking data and processes and monitoring closely
the health of the process. Statistics on error rates and backlogs are vital.
However, there is a further danger. With the best available knowledge and
techniques it is still very difficult to estimate most error and fraud rates to
within an order of magnitude. They could be ten times less, or more, than you
expected so controls need to be refined as quickly as possible so that they are
efficient and effective during the early months.
Tips on some key control mechanisms
Process monitoring
To manage the reliability and performance of a process in an organisation you
need to know what is going on. It is helpful to hear from people how they are
coping, but it is vital to measure the health of your process by collecting
statistics and presenting them in a regular report.
Since most processes of any significance cut across departmental boundaries
it is usually necessary to form a cross-departmental management committee to
receive the reports and agree actions.
The reports should show:
workload e.g. how many invoices raised, how many receipts posted
original error rates (i.e. as uncovered by controls and usually corrected later) and write-offs/uncorrectable errors
backlogs e.g. unmatched receipts, invoices on hold, errored orders
system support e.g. availability, response time
resources used e.g. head-counts, storage space
Too many reports just show workload and resources, which is not very helpful.
The ideal report will also contain information about forthcoming changes and
challenges, such as trends in workload and new software releases, so that the
process owners can take action in advance to manage the risks involved. People
who measure the health of their process learn that they must manage in advance
to keep their numbers looking good.
This kind of monitoring is extremely useful in meeting the requirements of
section 302 of the Sarbanes-Oxley Act as it helps meet the requirement for
notifying changes affecting controls and the stats themselves are powerful
evidence of the effectiveness of internal controls, which also helps with
section 404 of that Act.
One of the most important objectives is to improve inherent reliability and
so reduce original error rates. This is the only feasible, economic strategy for
most really large scale processes. To do this the report should also show
breakdowns of errors into error types, showing them in descending order of their
impact so that actions can be prioritised.
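As a minimal sketch of that breakdown (the error categories and figures are purely illustrative), a Pareto-style ranking can be produced in a few lines:

```python
# Minimal sketch: rank error types by total impact so corrective action can be
# prioritised. All categories and numbers are hypothetical examples.
errors = [
    {"type": "Wrong VAT code", "count": 420, "impact_per_error": 12.0},
    {"type": "Duplicate invoice", "count": 35, "impact_per_error": 850.0},
    {"type": "Unmatched receipt", "count": 210, "impact_per_error": 40.0},
]

for e in errors:
    e["total_impact"] = e["count"] * e["impact_per_error"]

# Descending order of impact: the top rows are where work to improve inherent
# reliability will pay off most.
for e in sorted(errors, key=lambda e: e["total_impact"], reverse=True):
    print(f'{e["type"]:<20} {e["count"]:>6} errors  impact {e["total_impact"]:>10,.2f}')
```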
The report takes time to compile. To minimise that time follow these guidelines:
Don't be idealistic. Design your report to use the data you can easily get.
Don't assume you need a computer system to do it, but consider getting
one once you have some experience of doing it by hand. Initially it is usually best to collect figures by hand and by e-mail from colleagues, and compile them in a spreadsheet. You can start immediately and the knowledge gained about what
is available and useful, and what it means, is invaluable if later you decide
to automate.
Drop stats from the report once they tell you nothing new. If an issue
arises then track stats about it until it is resolved, then drop the detailed
analysis. Pareto analysis breakdowns of errors by type usually don't need to
be repeated often.
Don't skimp on presentation. Use graphs, calculate averages and ratios,
and generally make sure you get full value from the data collected.
Ergonomics
Ergonomics is the most overlooked, yet most important subject in internal
controls design.
Almost all errors arise, directly or indirectly, because of human error.
Mis-coded transactions, bugs in software, a wrong VAT code entered - all human
error. Even a computer hardware failure comes from the mistakes of the engineers
who designed the robot that built the component. Training helps, but ergonomic
improvements are more effective and far more cost effective.
Some human errors are outside your control because they happened too long
ago, are outside the company, or are caused by something you cannot change.
However, there are many errors you can reduce by paying attention to ergonomics.
It is also vital to consider ergonomics when designing the details of internal
controls.
The main tool in ergonomics is usability testing.
The following information comes from Thomas K Landauer's book ‘The trouble
with computers’ and is derived from a series of studies of usability testing in
practice:
User centred design typically cuts errors in user-system interactions
from 5% down to 1%, and reduces training time by 25%.
The average interface has around 40 usability defects in need of
repair. (About 50% of flaws found get fixed successfully, typically.)
Two usability evaluations or user tests usually will find half the
flaws; six will find almost 90%. This work will only take a day or
two.
After six tests, one can estimate accurately the number of remaining
flaws and the rate at which they are being found.
Usability assessment has very large benefits relative to cost. The work
efficiency effect of a software system can be expected to improve by around
25% as a result of a single day of usability testing. Intensive user-centred
design efforts have typically improved efficiency effects by about 50%.
(However, fundamentally flawed system specifications can lead to minimal gains
from user-centred design.)
While specialists are better at usability design and at finding flaws,
both systematic inspections and user tests can be done effectively by people
with modest training.
These results, and experience as well, indicate that usability testing can
reduce the difficulty and time for development while contributing dramatically
to quality.
In ‘Usability engineering’, Jakob Nielsen surveys a wide range of usability
testing techniques. These do not include releasing a beta test version
and going ahead if nobody complains bitterly
enough! The most important techniques include:
Thinking aloud: a representative user is asked to perform
representative tasks using the software and says aloud what they are thinking
as they do so. This can give insights into confusions that did not lead to an
error for that person but would lead some people to make errors at least some
of the time.
Retrospective testing: after the user has finished a task the
experimenter asks them to go back over the experience and report the problems
and confusions they experienced.
Coaching approach: the user performs the tasks as usual, but can ask
for explanations or instructions if they get into
difficulty. This helps to identify the information that would improve the user interface.
Heuristic evaluation: this is different in that there is no user and
no task. Reviewers inspect the interface in detail using a checklist of common
usability faults as a guide.
Most work on usability is concerned with the design of new software. However,
this is only one area where usability improvements are an important control.
Here are some others:
Paper forms e.g. order forms
Computer reports from a report generator
Perl and other scripts used by IT support people to make things happen behind the scenes
Spreadsheets
Scripted conversations e.g. in a call centre
Reference data wording e.g. the descriptions of items in a product catalogue
Workstation comfort and lighting
Readability of written instructions and crib sheets e.g. the product
code sheets commonly seen at tills in shops, safety regulations, e-mails from
the accounts department about how to claim your expenses
If monitoring stats show errors arising then the most important action is
usually to find out exactly where and why the errors occur. Typically, confusing
design of something is the culprit and the cure is to improve the design so it
helps people get things right instead of tricking them into getting it
wrong.
Individual controls need to be designed with human factors in mind. For
example, imagine a control that calls for someone to read computer reports
looking for items that look suspicious and check them. If the report is long and
suspicious items are very rare even the most motivated and highly trained person
will glaze over after a while and miss items they should have noticed. The
control is ergonomically infeasible. It could be improved by designing a report
that searches for suspicious items, or sorts items in a particular way that
makes the search easier.
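As a minimal sketch of that improvement (the field names and the notion of a ‘suspicion score’ are invented for illustration), the report can do the searching and sorting so the reviewer only reads the items most worth checking:

```python
# Minimal sketch: instead of asking a person to scan a long report for rare
# suspicious items, score and sort the items so review starts with the most
# suspicious. Field names and scoring rules are hypothetical.
def suspicion_score(item: dict, approved_suppliers: set) -> float:
    score = 0.0
    if item["amount"] >= 10_000:
        score += 2.0                      # unusually large value
    if item["supplier"] not in approved_suppliers:
        score += 3.0                      # unknown or unapproved supplier
    if item["entered_out_of_hours"]:
        score += 1.0                      # entered at an odd time
    return score

def review_report(items: list, approved_suppliers: set, top_n: int = 50) -> list:
    # Present only the highest-scoring items, most suspicious first.
    ranked = sorted(items, key=lambda i: suspicion_score(i, approved_suppliers),
                    reverse=True)
    return ranked[:top_n]
```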
A very common mistake is to rely on people to spot errors in situations where
they don't have enough time or information at hand to do it reliably.
Comparing totals
Controls that involve comparing totals can be broken into three groups:
Agreements: i.e. two numbers should agree exactly and normally
do. Differences indicate errors.
Reconciliations: i.e. the difference between two numbers is
explained and the differences are checked for any that are errors.
Analytics: i.e. one number is a benchmark or expectation for the
other. Exact agreement is not expected and differences cannot normally be
accounted for exactly.
These controls are often good for detecting the presence of an error, but do
not directly help you identify what it is or how to correct it. For that,
further investigation is needed.
Despite this, comparisons are a vital component of most control schemes
because they are often very cost effective and can provide strong evidence that
there are no errors, or that the errors are small. Key comparisons should be
identified in high level design and specified even at that early stage. If there
are good opportunities for comparisons other controls are less important, but if
comparisons cannot be used it is vital to compensate with stronger controls
elsewhere.
There are some errors that many people make when working with comparisons.
One is to talk as if only one number is involved, not two. For example, ‘We need a control account reconciliation’ is vague as to what the control account is being reconciled to, whereas ‘We need to reconcile the control account to the sub-ledger’ is clear.
Another error is to overestimate the power of analytics. Analytics are good
at revealing problems that arise suddenly and are of high value. Analytics are
not good at revealing:
Problems that exist from the start of a business and grow gradually as
the business grows. This is because most analytics work by comparing new
figures with previous ones.
Problems that are small compared to the typical variability of the
numbers being studied.
Problems that are small but always or frequently present so their
cumulative impact is still significant.
Problems that arise before a solid benchmark can be established, such
as at the start of a business.
Deliberately false figures, where the benchmark is a budget or target.
Naturally, liars provide figures that match expectations.
The resolving power of analytics can sometimes be improved by using
statistical techniques to work out exactly how unusual a particular fluctuation
is, or to improve the graphics used to help people search for anomalies. For
example, rather than comparing today's figures with the figures for the same day
of the week last week, it may be better to build an average weekly profile,
adjusted with a seasonal fluctuation, and built on a long term trend to provide
a more precise benchmark.
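A minimal sketch of such a benchmark (the weekly profile, seasonal factor, trend figure, and tolerance are all purely illustrative) might look like this:

```python
# Minimal sketch: build a benchmark for one day's figure from an average weekly
# profile, a seasonal factor, and a long-term trend, then flag large deviations.
# All numbers are hypothetical.
WEEKLY_PROFILE = {          # average share of a week's volume on each weekday
    "Mon": 0.18, "Tue": 0.17, "Wed": 0.16, "Thu": 0.16,
    "Fri": 0.17, "Sat": 0.09, "Sun": 0.07,
}

def expected_value(weekday: str, weekly_total_trend: float,
                   seasonal_factor: float) -> float:
    # Benchmark = trend estimate of the weekly total, scaled by the weekday
    # profile and a seasonal adjustment.
    return weekly_total_trend * WEEKLY_PROFILE[weekday] * seasonal_factor

def is_anomalous(actual: float, expected: float, tolerance: float = 0.15) -> bool:
    # Flag the figure if it differs from the benchmark by more than 15%.
    return abs(actual - expected) > tolerance * expected

expected = expected_value("Mon", weekly_total_trend=70_000, seasonal_factor=1.10)
print(expected, is_anomalous(actual=9_500, expected=expected))
```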
One very interesting application of comparisons is in the so-called ‘end to
end reconciliation’. This is a misnomer because they are almost always sets of
interlocking reconciliations. For example, end to end reconciliations are
sometimes used to help control billing for telephone calls by telecoms
companies. The fact that a customer has made a call is initially recorded on a
switch in the telecoms network. That record is sent across a data network to the
telco's ‘mediation system’, which passes it on to the billing system (which
itself may have more than one stage), which generates data for posting to the
company's general ledger and data for producing the bills themselves. Typical
reconciliations making up the ‘end to end reconciliation’ are:
call counts and call minutes per switches Vs mediation
call counts and call minutes per mediation Vs billing (pre call rating)
call counts and call minutes per billing (pre call rating) Vs post call rating
call values per post call rating Vs bill values per bill production
bill values per bill production Vs values per general ledger Vs debtors' sub-ledger
values per general ledger Vs management accounts
At each stage there may be various reconciling amounts, some of which can be
accounted for precisely, such as numbers of records rejected by mediation.
The great difficulty in designing reconciliations is finding comparable
figures. Timing differences are one of the most common barriers. If they cannot
be accounted for exactly it is possible to track the cumulative
difference to find out if there are small but persistent problems.
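Here is a minimal sketch of that idea (the stage names and figures are invented): compare record counts at two stages each day and track the cumulative difference, which reveals a small but persistent leak even when each day's difference looks like a plausible timing difference.

```python
# Minimal sketch: daily record counts at two stages of a process (e.g. switch Vs
# mediation in a telecoms 'end to end reconciliation'). Figures are invented.
switch_counts    = [10_000, 10_250, 9_800, 10_100, 10_400]
mediation_counts = [ 9_990, 10_240, 9_790, 10_090, 10_390]

cumulative = 0
for day, (a, b) in enumerate(zip(switch_counts, mediation_counts), start=1):
    daily_difference = a - b          # could be a genuine timing difference
    cumulative += daily_difference    # steady drift suggests records are being lost
    print(f"Day {day}: difference {daily_difference:+}, cumulative {cumulative:+}")
```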
Validation and edit checks
Computer people talk about ‘validation’. The system will perform ‘validation’
on user input or when loading data from an external file. Since the system has
‘validated’ the data it is ‘valid’ right? Wrong!
‘Valid’ in computer-speak just means the data conform to some basic
requirements that allow the software to process them. For example, fields that
should contain numbers are checked to make sure they have numbers. Text fields
that should not be longer than a certain length are validated for length and to
remove unprintable characters and trailing spaces. Field values that should
match those of another record are matched. Validation often gets more subtle
than this, for example to check that invoice detail lines add up to the invoice
total, or that a person's date of birth is before their date of death, and so
on.
‘Valid’ in computer-speak does not mean the data are the correct values or
that they are genuine. You could enter your name as ‘Mickey Mouse’ and expect it
to be accepted as valid. You could claim to have been born in 1853 and most
systems would be happy.
‘Validation’ does help filter out data entry errors, but be aware of the
limitations and examine the exact rules being applied before you decide what
control the software is giving you.
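A minimal sketch of what such ‘validation’ typically covers, and what it cannot cover, might look like this (the field rules are illustrative only):

```python
# Minimal sketch of typical edit checks. Note what they do NOT guarantee:
# 'Mickey Mouse' and a birth year of 1853 both pass. Rules are illustrative.
from datetime import date

def validate_customer(record: dict) -> list:
    problems = []
    name = record.get("name", "").strip()
    if not name:
        problems.append("name is missing")
    if len(name) > 50:
        problems.append("name is too long")
    dob = record.get("date_of_birth")
    if not isinstance(dob, date):
        problems.append("date of birth is not a date")
    elif dob >= date.today():
        problems.append("date of birth is in the future")
    return problems   # an empty list means 'valid' - not 'correct' or 'genuine'

print(validate_customer({"name": "Mickey Mouse", "date_of_birth": date(1853, 1, 1)}))
# -> []  (passes every edit check, yet is clearly not genuine)
```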
Segregation of duties
Segregation of duties is a way of making fraud more difficult. It involves
preventing any one person from doing all the things necessary to pull off a
fraud. Segregation of duties should be done sparingly and in conjunction with
other fraud controls.
Segregation of duties is a very traditional control that has become even more
common in the computer age. Almost all accounting software packages let you set
up a profile for each user showing what they can and cannot do on the system. In
the leading ERP packages (i.e. packages that do just about everything like SAP
and Oracle Applications) it is possible to set up fantastically detailed and
complicated profiles, though this takes a long time and is difficult to
maintain.
The downside of segregation is that it can make processes less efficient. One
of the most common strategies in business process reengineering is to let
individuals do everything and so minimise hand-offs between people and
departments. Segregation of duties can be inconvenient and frustrating,
especially in small organisations.
There are a number of bases for segregation and some can be used in
combination. These are the most common, written in a notation suited to high
level design work:
custodian of asset DOES NOT keep records of the asset
record keeper DOES NOT check the records
checker of the records DOES NOT review the checks
approver DOES NOT enter data
person who enters reference data DOES NOT enter transaction data
contract maker DOES NOT raise/receive invoices
raiser/receiver of invoices DOES NOT handle receipts/payments
It is rarely appropriate to apply all the bases at the same time. Choose the
most appropriate and vary the tightness depending on the risks and scope for
alternative controls. When designing controls in detail interpret the rules
according to the job roles that exist or are being considered.
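As a minimal sketch (the duty names follow the bases above; the forbidden pairs and role assignments are invented for illustration), a simple conflict check over user-to-duty assignments might look like this:

```python
# Minimal sketch: detect segregation-of-duties conflicts in user profiles.
# The forbidden pairs echo the bases listed above; assignments are invented.
FORBIDDEN_PAIRS = [
    ("custodian_of_asset", "keeps_asset_records"),
    ("enters_reference_data", "enters_transaction_data"),
    ("approves", "enters_data"),
    ("raises_invoices", "handles_receipts"),
]

user_duties = {
    "alice": {"approves", "enters_data"},              # conflict
    "bob":   {"enters_reference_data"},                # fine
    "carol": {"raises_invoices", "handles_receipts"},  # conflict
}

for user, duties in user_duties.items():
    for a, b in FORBIDDEN_PAIRS:
        if a in duties and b in duties:
            print(f"{user}: conflict - {a} combined with {b}")
```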
Finally
Internal controls for processes in organisations, especially big processes,
should be designed with skill rather than allowed to evolve. The key to doing
that is to be able to design the controls at a high level, sculpting something
that fits the circumstances and needs of the process and organisation rather
than applying ‘best practice’.
This paper has provided an introduction to that skill. If you have any ideas,
questions, or concerns please feel free to contact me at matthew@workinginuncertainty.co.uk.
I normally reply within a couple of days.