The Future of Internal Audit – Automated Predictive Controls

The increasing quantity of data produced by today’s businesses is old news (1,2,3); in fact, we have written about it previously in our Power of Data post. The resulting shortage of individuals with the skills and ingenuity to analyze and interpret all the data generated is also widely discussed (The Rapid Growth of Global Data).

So far, the Big Data challenge has been addressed by familiar means: emphasis on statistical analysis, open data initiatives, development of robust databases for unstructured data, creation of integration methodologies, and improvements to well-known statistical software (SAS, SPSS, MATLAB) alongside the design of new tools. All of these instruments place human intelligence at the center, as the key to unlocking the value stored in the data.

But what do humans actually do during data analysis?

In my opinion, what we call “valuable analysis” is nothing more than the process of building a predictive model. By looking at the data and passing it through statistical tests, we hope to discover relationships among different inputs that enable us to predict and plan our future actions. For example, we want to know how much of the weekly demand for our product can be attributed to our marketing campaign and how much of it can be attributed to random luck or factors outside of our control. Alternatively, we might want to know: On what occasions do people use Twitter (or any application X)? What do they talk about? Is the comment positive or negative? How does it affect their driving habits for the day? As you can see, analysis is a process of collecting data, transposing it into a form usable by statistical software, running tests to find relationships, and creating models that help us make decisions. So if our advertising investment did not produce the results we were hoping for, we will redirect the investment to alternative projects.
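To make the “building a predictive model” idea concrete, here is a minimal sketch in Python of the marketing example above. The numbers, column names, and coefficients are all hypothetical; it simply fits an ordinary least-squares line to estimate how much of weekly demand moves with ad spend and how much is left to noise or outside factors.

```python
import numpy as np

# Hypothetical weekly observations: ad spend (in $ thousands) and units sold.
ad_spend = np.array([10, 12, 9, 15, 14, 11, 16, 13], dtype=float)
demand   = np.array([520, 560, 500, 640, 610, 545, 660, 590], dtype=float)

# Ordinary least squares: demand ~ intercept + slope * ad_spend
X = np.column_stack([np.ones_like(ad_spend), ad_spend])
coef, _, _, _ = np.linalg.lstsq(X, demand, rcond=None)
intercept, slope = coef

predicted = X @ coef
# Share of variance explained by the campaign (R^2); the remainder is the
# "random luck or factors outside of our control" part.
ss_res = np.sum((demand - predicted) ** 2)
ss_tot = np.sum((demand - demand.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"Each extra $1k of ad spend ~ {slope:.1f} more units sold")
print(f"Campaign explains ~{r_squared:.0%} of the weekly variation")
```

The point is not the particular numbers but the workflow they illustrate: collect the data, reshape it, test for a relationship, build a model, and act on the result.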

Why do we need human intelligence for data analysis?

The main challenge computers face in taking on data analysis tasks is that much of the data is unstructured, comes in a variety of forms, and conceals dynamic relationships. Statisticians, if you will, use their intuition to narrow in on important relationships, are able to change the form of the data to suit their purposes, and draw on their experience and knowledge to relate one piece of data to another. Computers are not able to do that: they have no purpose when they crunch the data, nor are they able to intuitively know that the number of hens is directly proportional to the number of eggs produced.

Will it stay like this?

Technology has been making great strides in enabling machines to decode new forms of information. Online translators convey the general meaning of foreign texts with ever-improving prose, computer-enabled telephone support services have become commonplace (although most of us probably still prefer to shout “customer representative”), text recognition is more or less perfected, and picture recognition is the next frontier. Although we will not see drastic changes tomorrow morning, over the next 10-15 years we might expect some of the data analysis tasks that we associate with human intelligence to migrate into the domain of computer ability.

How does it affect Internal Audit?

In Internal Audit (IA), two main terms are related to the aforementioned developments: “technology-enabled audits” and “automated controls”. A snapshot of what automated controls are can be found here (Protiviti - Automated Controls). The majority of automated controls come hand in hand with Enterprise Resource Planning (ERP) packages – expensive, enterprise-wide information systems. Since these ERP systems collect a vast amount of information, internal audit professionals can set acceptable variance limits on a range of input variables. A key point here is that IA professionals are the ones determining what the important relationships are, much like in the data analysis process already discussed.
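As a rough illustration of what “setting acceptable variance limits” can look like, here is a small Python sketch. The field names, tolerance, and figures are hypothetical and not taken from any particular ERP package; the point is simply that the IA professional chooses the relationship and the limit, and the system flags whatever falls outside it.

```python
# Hypothetical automated control: flag purchase orders whose invoiced
# amount deviates from the approved amount by more than a set tolerance.
TOLERANCE = 0.05  # 5% acceptable variance, chosen by the IA professional

purchase_orders = [
    {"po_id": "PO-1001", "approved": 10_000.0, "invoiced": 10_250.0},
    {"po_id": "PO-1002", "approved": 4_500.0,  "invoiced": 5_400.0},
    {"po_id": "PO-1003", "approved": 7_200.0,  "invoiced": 7_150.0},
]

def variance_exceptions(orders, tolerance):
    """Return orders whose variance exceeds the acceptable limit."""
    exceptions = []
    for po in orders:
        variance = abs(po["invoiced"] - po["approved"]) / po["approved"]
        if variance > tolerance:
            exceptions.append((po["po_id"], round(variance, 3)))
    return exceptions

print(variance_exceptions(purchase_orders, TOLERANCE))
# [('PO-1002', 0.2)]  -> routed to a reviewer for follow-up
```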

How will it change?

I believe that a key function of internal audit departments in the future will be to maintain predictive automated controls and investigate problems flagged by the system. For example, imagine that we are in a trading business. Each employee has a keycard to enter the building and a password to log on to the work computer. Now imagine that the computer was logged in, but the keycard was not swiped. Furthermore, a large transaction was placed from the computer. Quite suspicious, isn’t it? On one hand, it is possible fraud; on the other hand, maybe the employee innocuously forgot to swipe his card. The supervisor can be alerted and investigate which of the two it is.

Now, it is essential to understand the distinction between the two ways in which we can set up this kind of automated control. On one hand, we can hard-code everything: IA staff would essentially create a rule that says “if no card was swiped and the computer was logged in, then alert the supervisor”. In my opinion, this is a highly inefficient and rigid way of doing it. Alternatively, we can implement a system that monitors streams of data, i.e. card swipes and logins, and allows the system itself to build its own rules for what is “normal”. Under this scenario, the system would see that data stream 1 has an input (card swipe), followed by an input in data stream 2 (computer login), followed by many inputs of varying degree from data stream 3 (transactions). This pattern repeats day after day, until one day data stream 1 produces no record, data stream 2 still occurs, and data stream 3 is abnormal; if several streams produce abnormal results, the system contacts the supervisor for investigation.

In this simple example only three data streams were used, but we can conceivably add computer IP addresses, the GPS locations of employees’ work phones, etc. With additional corroborating streams of data, the system could be trained to predict potentially hazardous situations more accurately. And, without the need to reconfigure the system for each specific situation, it becomes possible for IA to delegate the task of building predictive models to computers, while concentrating on the investigation of anomalies.
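To contrast the two approaches, here is a short Python sketch. Everything in it is hypothetical – the stream names, thresholds, and the toy “learning” step – and a real system would rely on proper anomaly-detection or machine-learning techniques rather than simple per-stream averages. The intent is only to show the difference between a hard-coded rule and a system that derives its own notion of “normal” from the data it observes.

```python
from statistics import mean, stdev

# --- Approach 1: a hard-coded rule written by IA staff ------------------
def hard_coded_rule(day):
    """Alert if the computer was logged in but no card was swiped."""
    return day["logins"] > 0 and day["card_swipes"] == 0

# --- Approach 2: a baseline learned from the streams themselves ---------
class StreamBaseline:
    """Learns a per-stream 'normal' range from history and flags days
    on which several streams deviate at once."""

    def __init__(self, history, z_limit=3.0, min_streams=2):
        self.z_limit = z_limit
        self.min_streams = min_streams
        self.stats = {
            stream: (mean(values), stdev(values))
            for stream, values in history.items()
        }

    def abnormal_streams(self, day):
        flagged = []
        for stream, value in day.items():
            mu, sigma = self.stats[stream]
            if sigma and abs(value - mu) / sigma > self.z_limit:
                flagged.append(stream)
        return flagged

    def should_alert(self, day):
        return len(self.abnormal_streams(day)) >= self.min_streams

# Hypothetical history: one aggregate value per day for each stream.
history = {
    "card_swipes": [48, 52, 50, 49, 51, 47, 50, 53],
    "logins":      [45, 47, 46, 44, 48, 45, 46, 47],
    "trade_value": [1.1e6, 0.9e6, 1.0e6, 1.2e6, 1.0e6, 0.95e6, 1.05e6, 1.1e6],
}

baseline = StreamBaseline(history)

# The suspicious day from the example: no swipe, a login, a huge trade.
today = {"card_swipes": 0, "logins": 46, "trade_value": 9.0e6}

print(hard_coded_rule(today))            # True, but only for this one scenario
print(baseline.abnormal_streams(today))  # ['card_swipes', 'trade_value']
print(baseline.should_alert(today))      # True -> contact the supervisor
```

The hard-coded rule catches exactly one scenario and nothing else; the baseline version keeps working as new streams are added, which is what lets IA hand the model-building to the machine and keep the investigation for itself.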

When will such systems be built? And how?

It’s an interesting cross-disciplinary topic that combines aspects of IT (such as machine learning and data storage), Internal Audit (such as the control environment and technology risks), and human psychology. I would be very interested to hear your thoughts and insights!

~Alexey
