Financial services firms are stuck in a tangled web of data and legacy systems. This model cannot sustain today’s business environment, which is largely defined by new regulations and the changes firms must make to comply. The industry is also challenged by low trading volumes, shortened settlement cycles, and traders looking to new markets for revenue. Within this environment, firms are under pressure to cut operational costs, increase profitability, and better manage risk; however, effectively managing data often becomes a limiting factor in accomplishing these goals. Using spreadsheets as short-term solutions to business problems continues to hamper long-term performance.
Financial services firms need to be smart in their deployment of technology. Legacy systems and processes must be traded in for innovative business intelligence (BI) tools that allow firms to automate processes and implement effective data strategies.
Manual processes lead to operational risk
Manual forms of data entry and manipulation pose inherent risk to any crucial enterprise function. Spreadsheets sit at the core of almost all business processes (reporting, analysis, risk management, and decision making), making financial services firms heavily spreadsheet-dependent. Because of the abundance of manual processes involved in creating, maintaining, and updating them, spreadsheets are highly susceptible to operational risk, and spreadsheet-based calculations can produce massive errors with devastating consequences.
Spreadsheets are adjusted regularly to reflect market fluctuations, price adjustments, and foreign exchange rate movements, or to take into account new strategic positions. They must also be manipulated to drill down into information for deeper analysis or to respond to specific requests, such as price derivations.
Spreadsheet changes, especially in workbooks with multiple formulas, are prone to undetected errors that ripple through the data. When data is entered manually, the link to the original data source is broken; certain properties of the data are lost and, unless specifically noted, become hard to trace back to the source. When additional calculations are added or records are deleted, quality control is further jeopardized.
Common spreadsheet changes and errors:
- Cell or name references changed to static values, so cells that should be computed or derived are set as constants
- Parameters of built-in functions replaced with static values, which breaks ‘what if’ scenarios that expect a computed or referenced value
- Cells omitted from calculations, over- or understating totals
- Simple renaming of spreadsheets, especially when there are multiple users and not all of them are notified of the change
- Mechanical errors: improper sorting or overwritten formulas that destroy the integrity of a spreadsheet and its data
- Misinterpreting the situation to be modeled
- Logic errors: choosing the wrong formula or building the wrong function
As spreadsheet complexity increases, so does the potential for error and enterprise risk.
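Several of the errors above are mechanically detectable. As a minimal sketch (not a production audit tool), the following Python snippet scans formula strings in a toy worksheet, represented here as a plain dictionary, and flags formulas that embed numeric literals where cell references would normally be expected; the cell contents are invented for illustration, and a real audit would load an actual workbook with a library such as openpyxl.

```python
import re

# Toy worksheet: cell reference -> formula string or literal value.
# The figures below are hypothetical illustration only.
cells = {
    "B2": 1200.0,
    "B3": 950.0,
    "B4": "=B2+B3",          # derived from source cells: fine
    "B5": "=B2*1.0845",      # suspect: hard-coded FX rate in a formula
    "B6": "=2150.0",         # suspect: reference replaced by a constant
}

# Match numeric literals, but not the digits inside cell references
# like B2 (a digit preceded by a letter or digit is part of a ref).
NUMERIC_LITERAL = re.compile(r"(?<![A-Z0-9])\d+(?:\.\d+)?")

def flag_hardcoded_constants(cells):
    """Flag formulas that embed numeric literals instead of references."""
    flagged = {}
    for ref, value in cells.items():
        if isinstance(value, str) and value.startswith("="):
            literals = NUMERIC_LITERAL.findall(value)
            if literals:
                flagged[ref] = literals
    return flagged

print(flag_hardcoded_constants(cells))
```

A check like this catches the “reference changed to a static value” class of error automatically, which is exactly the kind of control a manual review tends to miss.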
Spreadsheets are real culprits for enterprise risk
Spreadsheets are real culprits for enterprise risk and can create major financial losses for banks, as in the case of the London Whale. In 2012 a leading American bank lost approximately $2 billion and faced over $900 million in penalties due to errors in a new trading strategy designed to hedge the firm’s overall risk exposure. The chairman of the company called the trading strategy “flawed, complex, poorly reviewed, poorly executed and poorly monitored”; however, internal investigations into the cause also revealed the role of faulty spreadsheets.
The abundance of manual processes embedded in the bank’s spreadsheets left it exposed to operational risk and error. Specifically affected were the spreadsheets containing value-at-risk (VaR) computations, which were populated through a manual process of copying and pasting large amounts of data analytics. Spreadsheet-based calculations were conducted with frequent formula and code changes, and further changes to one spreadsheet inadvertently produced material calculation errors.
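The bank’s actual VaR model is not public in detail, so the following is only an illustrative sketch of the general idea: a one-day historical-simulation VaR computed directly from a price series, so that the risk figure is always derived from source data rather than from pasted analytics. The prices and confidence level below are invented for the example.

```python
import math

def historical_var(prices, confidence=0.95):
    """One-day historical-simulation VaR as a positive loss fraction.

    Daily log returns are computed directly from the price series,
    so the result can never drift from the underlying source data.
    """
    returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    losses = sorted(-r for r in returns)              # losses, ascending
    idx = min(int(confidence * len(losses)), len(losses) - 1)
    return losses[idx]

# Hypothetical price history for illustration.
prices = [100, 101, 99.5, 98, 100.5, 102, 101, 99, 100, 98.5]
var95 = historical_var(prices, 0.95)
print(f"95% one-day VaR: {var95:.2%} of portfolio value")
```

Because the whole chain from prices to VaR is a single function, a formula change is a visible code change that can be reviewed and tested, rather than an untracked edit buried in a worksheet.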
Manual forms of data entry and manipulation pose inherent risk and make it difficult to trace data back to the source; in turn, certain properties of the source data are often irretrievable. Spreadsheet errors are common, yet in the absence of a thorough audit they are often hard to spot. In the case of this leading American bank, insufficient spreadsheet controls, combined with a multitude of code changes and the lack of a vetting process, allowed faulty spreadsheets to slip through the cracks.
Moving beyond spreadsheets
Business users like spreadsheets because they are simple to set up, easy to navigate, and offer speed and freedom in completing a task. However, this freedom is dangerous: spreadsheets can be modified easily, and errors can be built into them without much difficulty. It is not always possible to validate or control the contents of spreadsheets, and historically the quality control measures used to validate them have been insufficient.
These risks and losses highlight the importance of embracing new technologies. Today’s BI tools offer many alternatives to legacy systems. BI software removes many manual processes and creates automation by retrieving data directly from the source and displaying it for the business user. Those most familiar with the data can manipulate and analyze it directly through easy-to-use dashboards and interactive interfaces. Business users can focus on the requirements driven by their roles, responsibilities, daily tasks, clients, and internal processes, and, most importantly, can do so without overriding IT’s needs for managed risk and security, standardized processes, and operational efficiency.
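As a small sketch of what “retrieving data directly from the source” can mean in practice, the snippet below aggregates positions straight from a database on every refresh, with no manual copy step in between; the in-memory SQLite store and the table and column names are hypothetical stand-ins for a firm’s actual systems.

```python
import sqlite3

# In-memory database standing in for a firm's trade store; the
# table, columns, and figures are hypothetical illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (desk TEXT, notional REAL)")
conn.executemany(
    "INSERT INTO trades VALUES (?, ?)",
    [("rates", 5_000_000.0), ("rates", 2_500_000.0), ("fx", 1_250_000.0)],
)

# The aggregation runs against the source on every refresh, so the
# displayed totals cannot drift from the underlying records.
rows = conn.execute(
    "SELECT desk, SUM(notional) FROM trades GROUP BY desk ORDER BY desk"
).fetchall()

for desk, total in rows:
    print(f"{desk:>6}: {total:,.0f}")
```

The point is not the specific database but the pattern: the display layer holds no copies of the data, only a query against the system of record.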
Automation and BI systems address the industry’s most important problems and challenges: minimizing enterprise and systemic risk, achieving economies of scale, and creating consolidated views of data by removing silos. The entire life cycle of the data is accounted for, from the source to the end user, and the ability to respond to market activity is improved. Financial services have become a network of systems, and an effective strategy for managing those systems and the data within them is key to improving overall business performance and profitability in the long run.