Data management itself has never driven financial institutions to undertake a paradigm shift — it was the realm of the IT person hidden in the basement. However, in today’s climate of mistrust, risk management is an even greater priority, and data management programs are having their time in the sun as the industry realizes that a risk management system is only as good as the data filtering into it.
So, data management and risk management go hand in hand, and the only way to “do” risk management is to capture all of the related data and ensure its accuracy.
For many risk managers, the pressing issue is not the need for more risk management, nor is it the technical methodology; there are more algorithms and risk engines on the market than there are financial institutions to apply them. Risk managers’ most important need is clean, timely data from which to conduct analysis.
Without it, they can’t succeed. Without it, the garbage that went in just comes straight back out. If the data you put into your model — enterprise-wide or not — is wrong or out of date, the analysis produced will also be inaccurate, placing the entire risk ecosystem on the wrong foot from the start.
Where’s the Data, Anyway?
Frustratingly, many financial institutions already had the data they needed last year; it was simply strewn throughout the organization. Information was held in different systems or in different formats, even within the same information system, and different pieces of paper sat in different buildings.
Data collection and systems initiatives emerged from one part of the organization without thought to their impact on other parts, or even to the strategic data management capabilities of the institution as a whole.
Moreover, when data was stored, it was too often kept without reference to the relationships that turned that data into information. For instance, credit data on particular customers was held by the accounting department without acknowledgement or reference to the information kept by trading desks on the same customers.
Market data collection processes posed their own perils. Different market data providers had different standards, each of which needed to be integrated with internal systems in a way that enabled the market data to retain its accuracy. At the same time, the market data needed to be aggregated up to the enterprise level using valid relational rules and comparisons.
Often, market data from one part of the organization showed significant discrepancies when compared with the same data used in a different part of the same company, simply because the source of that data was a different external system, or it came from a different time zone or in a different language.
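As a rough illustration of what such a reconciliation involves (the feeds, field names and tolerance below are hypothetical, not any vendor’s actual format), the sketch normalises the same quote from two sources onto one representation, in one time zone, and flags material differences:

    from datetime import datetime, timezone

    # Hypothetical vendor quotes for the same instrument, as they might arrive
    # from two different feeds: different field names and time zones.
    feed_a = {"isin": "XS0123456789", "price": 101.42, "ccy": "USD",
              "ts": "2009-06-30T16:00:00-04:00"}
    feed_b = {"isin": "XS0123456789", "px_mid": 101.88, "ccy": "USD",
              "ts": "2009-06-30T21:05:00+01:00"}

    TOLERANCE = 0.25  # maximum acceptable price difference before flagging

    def normalise(quote, price_field):
        """Map a vendor record onto a common internal representation (UTC timestamps)."""
        return {
            "isin": quote["isin"],
            "price": quote[price_field],
            "ccy": quote["ccy"],
            "ts": datetime.fromisoformat(quote["ts"]).astimezone(timezone.utc),
        }

    a = normalise(feed_a, "price")
    b = normalise(feed_b, "px_mid")

    # Reconcile: same instrument and currency, prices within tolerance.
    if a["isin"] == b["isin"] and a["ccy"] == b["ccy"]:
        diff = abs(a["price"] - b["price"])
        if diff > TOLERANCE:
            print(f"Discrepancy on {a['isin']}: {diff:.2f} "
                  f"(feed A at {a['ts']}, feed B at {b['ts']})")

The point is not the code itself but the discipline it represents: every feed is mapped onto one agreed representation before it is compared or aggregated, so discrepancies surface as exceptions rather than silently flowing into the risk numbers.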
If data is key to robust risk management systems, and therefore successful risk management overall, how sophisticated is your data management strategy? A key issue with many risk management analytics systems is that the database is integral to the system.
This often means it is impossible to support products and actions that are not already integral to the risk engine’s database. It limits the use of the data, and risk managers lose the opportunity to apply different risk engines to different analyses because the data is trapped in the proprietary database of a particular engine.
Furthermore, the diversity of middleware, data import/export and database management system interfaces demanded by risk analytics engines is baffling. The sheer number of options means many organizations first select the risk engine they are going to use and then interface back into the data environment. This is like building a house starting with the roof.
A better method is to build from the ground up. Develop a timely, accurate and consolidated data structure that forms the foundations and can be drawn on by whichever risk engine the risk manager wishes to employ — as well as other bank functions.
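A minimal sketch of that ground-up approach, assuming a hypothetical consolidated store with an engine-neutral query interface (the positions and consumers shown are illustrative only), might look like this:

    class ConsolidatedStore:
        """Single source of position and counterparty data, independent of any risk engine."""

        def __init__(self):
            self._positions = []

        def load(self, positions):
            self._positions.extend(positions)

        def positions(self, desk=None):
            """Neutral query interface: every consumer sees the same records."""
            return [p for p in self._positions if desk is None or p["desk"] == desk]


    def exposure_by_counterparty(store):
        """One 'risk engine' view: aggregate market value per counterparty."""
        totals = {}
        for p in store.positions():
            totals[p["counterparty"]] = totals.get(p["counterparty"], 0.0) + p["mv"]
        return totals


    def desk_level_report(store, desk):
        """A second consumer (e.g. operations or audit) drawing on the same data."""
        return store.positions(desk=desk)


    store = ConsolidatedStore()
    store.load([
        {"desk": "rates", "counterparty": "Bank A", "mv": 5_000_000},
        {"desk": "credit", "counterparty": "Bank A", "mv": -1_200_000},
        {"desk": "credit", "counterparty": "Bank B", "mv": 3_400_000},
    ])

    print(exposure_by_counterparty(store))   # both consumers read the same foundation
    print(desk_level_report(store, "credit"))

Because the data layer owes nothing to any particular analytics engine, swapping or adding an engine becomes an interfacing exercise rather than a data migration.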
In the end, it is not feasible for one institution to run a separate data management system for each function that needs data. Data management is mission-critical, and is becoming more so for financial institutions falling foul of a suspicious market. Only when critical functions such as risk analysis, accounting, audit, economics and operations research run on trusted, shared data will the basic business of banking be possible again and trust be restored.
A Rallying Cry for Transparency
The installation of new risk systems has kick-started a review of current data strategies. A complete, up-to-the-minute view is more important than ever, spurring the drive to remove data silos and create golden copy master data that can be accessed from across the organization.
It is vital for institutions to have a single, consistent and up-to-the-minute view of the instruments they hold and of all related information on customers, counterparties, issuers and collateral. It is only when institutions have a clear view of both their exposure to risk and the collateral they can offset against that risk that they can begin to regain their viability in the eyes of investors.
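As a deliberately simplified illustration of that offsetting view, assuming purely hypothetical exposure and collateral figures per counterparty, the calculation reduces to:

    # Hypothetical gross exposures and collateral held, per counterparty.
    exposures = {"Bank A": 12_000_000, "Bank B": 4_500_000, "Fund C": 900_000}
    collateral = {"Bank A": 7_500_000, "Fund C": 1_200_000}

    # Net exposure is what remains after offsetting collateral (never below zero).
    net_exposure = {
        cpty: max(gross - collateral.get(cpty, 0), 0)
        for cpty, gross in exposures.items()
    }

    print(net_exposure)  # {'Bank A': 4500000, 'Bank B': 4500000, 'Fund C': 0}

The arithmetic is trivial; the hard part is ensuring that the exposure and collateral records describe the same counterparties, are current, and come from a shared, trusted source.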
Ultimately, data management today is about improving transparency: having a complete set of data that establishes transparency in processing, particularly with complex instruments such as derivatives. With a consolidated and transparent view of data, firms will have a better understanding of complex securities and can genuinely improve their risk management processes.
So, no more hiding from the obvious. With the role of the CRO or risk professional significantly elevated post-crisis, these projects now have the pre-eminence and senior backing they always deserved. There is a unique opportunity for risk managers everywhere to step back and look at the foundations of their risk program.
Michael Meriton is CEO of GoldenSource.