By John Cancio, Managed Service Consultant, ITRS Group

As the saying goes, “there are more ways than one to skin a cat”, and there is more than one way to improve efficiency in technology while keeping the business profitable.

In a previous post, Guy Warren, ITRS CEO, talked about how such efficiencies can be gained through a capacity management solution that encompasses analysing and learning from historical data; monitoring capacity in real time; and predicting capacity for the future. As we journey through the large IT estates of financial institutions in search of improvements, we reveal another piece of the “Operational Efficiency” puzzle: how analytics can be applied to the log files produced by applications and components.

We can obtain a great deal of knowledge about our IT estate from the data, whether real-time or historical, held in log files: raw information on transactions, trading volumes, exchanges of communication, metadata, counterparties, timestamps, or simply anything that can be represented in bits and bytes.

Every organisation captures log files in some form and stores them for future use. However, storing data is not enough: one needs to make sense of it first to turn it into useful information, then infer knowledge from it to draw conclusions, spot trends and, most importantly, make effective decisions. That is exactly what log file analytics does, and here are three areas where I think it can be used to improve efficiencies in a financial services business.

Historical event analysis and descriptive statistics

A few questions come to mind when trying to perform historical analysis. For instance, an investment bank or brokerage might want to know yesterday’s level of commission across all of its markets. Another might be: what is the correlation between order volumes and FIX transport performance for all my FX trades? Or: can you give me the list of clients with the highest network latency in their exchange connectivity that traded within a given set of parameters?

All of these are forms of the same question: “How do I describe the past?” The past refers to any point in time at which an event occurred, whether one minute ago or one year ago. Descriptive statistics, such as the mean, standard deviation or skew, are essential in explaining these events. Hence, log file analytics is useful for understanding your IT environment at a particular moment in time and for creating data visualisations, which can be aggregated or sliced and diced by, say, asset class or counterparty.
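To make that concrete, here is a minimal sketch, in Python with pandas, of how such descriptive statistics might be pulled from records already parsed out of order logs. The field names (asset_class, latency_ms, commission) are purely illustrative assumptions, not a real log schema.

```python
# A minimal sketch of "describing the past" with descriptive statistics.
# Field names are hypothetical; real log schemas will differ.
import pandas as pd

# Pretend these records were parsed out of yesterday's order logs.
orders = pd.DataFrame([
    {"asset_class": "FX",     "counterparty": "A", "latency_ms": 1.8, "commission": 12.0},
    {"asset_class": "FX",     "counterparty": "B", "latency_ms": 2.4, "commission": 15.5},
    {"asset_class": "Equity", "counterparty": "A", "latency_ms": 0.9, "commission": 8.0},
    {"asset_class": "Equity", "counterparty": "C", "latency_ms": 1.1, "commission": 7.5},
])

# Mean, standard deviation and skew of latency, sliced by asset class.
print(orders.groupby("asset_class")["latency_ms"].agg(["mean", "std", "skew"]))

# Total commission per counterparty: "yesterday's level of commission".
print(orders.groupby("counterparty")["commission"].sum())
```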

Trend spotting and showing what “normal” looks like

The human eye is ill-equipped to spot trends in huge volumes of data over a given period of time, especially if that data comes from various sources. Summarising data into something a human can interpret is an area where log file analytics shines. One must be able to model the data, or present it in a visualisation, so that the consumer can easily draw inferences from it and use it to make meaningful decisions.

However, spotting an outlier data point or a spike in performance metrics holds little value if you can’t relate it to wider system behaviour. The true power of log file analytics lies in the linkage of these patterns through time to form the “normal” behaviour of a system.
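As an illustration only, the sketch below shows one common way of forming that baseline: a rolling mean and standard deviation over a time window, with points that stray several deviations from the baseline flagged as departures from “normal”. The metric (a queue depth), the window and the threshold are assumptions for the example, not a prescription.

```python
# A minimal sketch of baselining "normal" behaviour from a time series of
# log-derived metrics, then flagging points that deviate from it.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# One reading per minute of a hypothetical queue depth, with a spike injected.
ts = pd.Series(rng.normal(100, 5, 240),
               index=pd.date_range("2024-01-01 08:00", periods=240, freq="min"))
ts.iloc[180] = 160  # an abnormal burst

baseline = ts.rolling("60min").mean()   # what "normal" has looked like recently
spread = ts.rolling("60min").std()
outliers = ts[(ts - baseline).abs() > 3 * spread]
print(outliers)  # timestamps where behaviour departs from the baseline
```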

But what if you could predict and prevent failures or critical events before they actually occur? That is the next application of content analytics that is proving important.

Future event correlation

Once you have found the events of interest to your organisation, you will probably want to know when a similar event may occur in the future and, if it is a concern, how to prevent it. This is the task of predictive analytics: using data to determine the likelihood of a situation occurring and its probable outcome.
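By way of a simple, hedged example, the sketch below estimates such a likelihood empirically from historical log events: how often a hypothetical warning (QUEUE_WARN) has been followed by a hypothetical failure (FEED_FAILURE) within ten minutes. The event names and the horizon are invented for illustration.

```python
# A minimal sketch of the predictive step: use historical event frequencies
# as a crude likelihood that a warning will be followed by a failure.
from datetime import datetime, timedelta

events = [  # (timestamp, event) pairs parsed from historical logs
    (datetime(2024, 1, 1, 9, 0),  "QUEUE_WARN"),
    (datetime(2024, 1, 1, 9, 6),  "FEED_FAILURE"),
    (datetime(2024, 1, 1, 11, 0), "QUEUE_WARN"),
    (datetime(2024, 1, 2, 9, 30), "QUEUE_WARN"),
    (datetime(2024, 1, 2, 9, 37), "FEED_FAILURE"),
]

horizon = timedelta(minutes=10)
warns = [t for t, e in events if e == "QUEUE_WARN"]
failures = [t for t, e in events if e == "FEED_FAILURE"]

# How often has a warning been followed by a failure within the horizon?
followed = sum(any(t < f <= t + horizon for f in failures) for t in warns)
likelihood = followed / len(warns)
print(f"P(failure within 10 min of a queue warning) ~ {likelihood:.2f}")
```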

The knowledge gained from the information in log files is an essential building block in prescribing a set of actions to tackle an issue before it happens. Whether you are reading the data from log files or tapping it directly from an application’s output via JSON or various APIs, performing analytics on it will play an important role in the maintenance and optimisation of the IT estates of banks and other market participants.
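For illustration, the short sketch below normalises both kinds of source mentioned above, a plain-text log line and a JSON payload from an application API, into the same record shape before any analysis is run. The line format and field names are invented for the example.

```python
# A minimal sketch of normalising a text log line and a JSON API payload
# into one record shape. The format and fields are hypothetical.
import json
import re

log_line = "2024-01-02T09:37:12Z ERROR order-gateway latency_ms=412 client=ACME"
api_payload = '{"ts": "2024-01-02T09:37:15Z", "level": "ERROR", "component": "order-gateway", "latency_ms": 398, "client": "ACME"}'

pattern = re.compile(
    r"(?P<ts>\S+) (?P<level>\w+) (?P<component>\S+) "
    r"latency_ms=(?P<latency_ms>\d+) client=(?P<client>\S+)"
)

def from_log(line: str) -> dict:
    record = pattern.match(line).groupdict()
    record["latency_ms"] = int(record["latency_ms"])
    return record

def from_api(payload: str) -> dict:
    return json.loads(payload)

records = [from_log(log_line), from_api(api_payload)]
print(records)  # two records with identical keys, ready for analytics
```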

It goes without saying that your analysis is only as good as the tools you can access. An all-encompassing tool, covering everything from data ingestion and storage to processing and visualisation, placed at the fingertips of different users, will help financial institutions unlock the full potential of log file analytics.

In sum, log file analytics gives us a powerful way to understand how and why an event occurred in our environment, to draw meaningful statistics and trends from it, and to relate it scientifically to future events.

For more on log file analytics in financial services, click here
