Why insurance companies are strengthening data management
Facing demanding regulatory changes and mounting management and shareholder expectations, insurance companies are more than ever seeking to tap into the power of data-driven insights.
According to Dominique Vande Langerijt, partner at financial services boutique Mount Consulting, a number of drivers lie behind the push towards better data management. “For one, regulation,” he said. “Solvency II prompted a big shift in the past decade, and currently IFRS17, and to a lesser extent IFRS9, are acting as material drivers for change.”
These regulatory interventions demand more robust and effective data management to deal with larger data volumes, tighter reporting deadlines and more stringent data quality requirements, with insurers facing higher operational costs and the risk of non-compliance if nothing is done.
More important, however, is the case for the upside: tapping into the benefits of data-driven analytics. For starters, insurers can use insights from data to increase their operational efficiency, not least by standardising and streamlining data management processes, enabling faster processing and greater accuracy.
Second, “insurance companies can leverage their data frameworks to better connect with and serve their customers,” said Vande Langerijt. Insurers sit on a wealth of data which, in a data-driven organisation, can be turned into better and more innovative insights for the benefit of the customer-facing operation.
Meanwhile, data can help insurers optimise their product and service portfolio, informing pricing, product offerings and risk management, while improving the way data is transformed into information.
Challenges along the data journey
“With an increasing demand for and focus on more granular data, it becomes apparent that the underlying data must also be more consistent and easier to reconcile,” said Joost Jan Noordegraaf, partner at Mount Consulting. Getting data management right is, however, easier said than done, he added, pointing to a range of pitfalls along the journey.
Garbage in, garbage out is one of the most cited phrases in the field, and as data volumes grow, so too does the effort needed to maintain data accuracy. This requires a holistic approach, one that goes beyond the siloed view of a single function, and a common understanding of data across the different domains. Reaching that understanding takes an initial effort as well as continued persistence from the organisation, but ultimately leads to visible and tangible benefits throughout the value chain.
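As an illustration of what guarding against “garbage in” can look like in practice, the sketch below shows a minimal automated quality check on a policy record, written in Python. The record structure, field names and rules are assumptions made purely for this illustration, not part of any particular insurer’s data model.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PolicyRecord:
    """Illustrative policy record; the fields are assumptions for this sketch."""
    policy_id: str
    premium: float
    start_date: date
    end_date: date

def quality_issues(record: PolicyRecord) -> list[str]:
    """Return the data quality issues found in a single record."""
    issues = []
    if not record.policy_id:
        issues.append("missing policy_id")
    if record.premium <= 0:
        issues.append("non-positive premium")
    if record.end_date <= record.start_date:
        issues.append("coverage period ends before it starts")
    return issues

# A record with an inverted coverage period is flagged at the source,
# before it can distort figures further down the reporting chain.
suspect = PolicyRecord("P-001", 1200.0, date(2024, 6, 1), date(2024, 1, 1))
print(quality_issues(suspect))  # ['coverage period ends before it starts']
```

Checks of this kind only pay off when every domain applies the same definitions, which is exactly where the shared understanding of data comes in.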
Establishing such discipline is not helped by the fact that “the insurance industry, unlike banks, does not have a BCBS239-like standard enforced by external regulators,” remarked Noordegraaf.
Then there is the need for a paradigm change. “Being effective in data management requires different stakeholders to share the same data governance; if they do not, discrepancies can occur. Expert resources and knowledge are scarce, so implementing this governance should lead to efficient deployment of those resources. For example, in the finance domain, finance people know all about the data but are not experts in maintaining data models, while, vice versa, data modellers are not equipped to fully understand finance requirements. In practice, such gaps play a major role.”
The legacy challenge is a notorious stumbling block faced by most large insurance companies: “due to decades of consolidation, insurers tend to have quite a complex and fragmented application landscape. This hampers the transition to modern data management, as legacy and new solutions need to be integrated with each other,” said Noordegraaf.
A robust implementation
In its work with insurers, Mount Consulting has come across many of these challenges. Learning from the pitfalls, the Netherlands-based consultancy has developed a robust implementation approach, based on a business process-driven architecture combined with data management, technology offerings and practical data governance.
“It starts with a strong vision on data management, backed by commitment from senior management and execution power to make change happen,” explained Vande Langerijt. This is then followed by an architectural blueprint detailing how data will support business processes, and an implementation case which outlines the costs and efforts required.
“The success of any data and data integration project lies in its architecture. A good architecture consists of identifying and designing the business and data processes and the supporting IT solutions needed to effectively integrate the data platform into the business processes. This has many facets, such as the operating model, the end-to-end reporting chain, data transformation processes and the definition of logical data model(s).”
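To make the notion of a logical data model slightly more tangible, the sketch below shows a deliberately simplified, hypothetical example in Python: two core entities and one shared calculation defined independently of any source system. The entities and attributes are assumptions for this illustration; real models naturally cover far more entities, attributes and relationships.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    claim_id: str
    amount: float

@dataclass
class Policy:
    policy_id: str
    product: str
    claims: list[Claim] = field(default_factory=list)

    def total_claims(self) -> float:
        """Aggregate claim amounts at policy level, regardless of which
        source system the claims originally came from."""
        return sum(c.amount for c in self.claims)

# One shared definition of "policy" and "claim" gives every report the
# same answer to a question such as "what is the claims total per policy?"
policy = Policy("P-001", "motor", [Claim("C-1", 850.0), Claim("C-2", 300.0)])
print(policy.total_claims())  # 1150.0
```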
Any such architecture must in turn be supported by a solid technological foundation. Vande Langerijt: “The infrastructure should be able to facilitate efficient data processing and data integration. On this layer, one can implement a future-proof data operating model and a solid data governance process.”
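As a rough indication of what such a data integration layer does, the following sketch maps records from a hypothetical legacy source format onto a shared target structure and rejects rows that fail basic quality rules. The legacy column names, the cents-to-euros conversion and the rejection rules are all illustrative assumptions, not a description of any specific platform.

```python
from dataclasses import dataclass

@dataclass
class TargetPolicy:
    """Target structure in the central data platform (illustrative)."""
    policy_id: str
    premium_eur: float

def load_from_legacy(rows: list[dict]) -> tuple[list[TargetPolicy], list[dict]]:
    """Map legacy source rows onto the target model; reject rows that fail
    basic quality rules instead of letting them flow into reports."""
    loaded, rejected = [], []
    for row in rows:
        try:
            record = TargetPolicy(
                policy_id=str(row["POLNR"]).strip(),       # legacy column names are assumed
                premium_eur=float(row["PREMIE"]) / 100.0,  # assumed: legacy system stores cents
            )
        except (KeyError, ValueError):
            rejected.append(row)
            continue
        if not record.policy_id or record.premium_eur <= 0:
            rejected.append(row)
            continue
        loaded.append(record)
    return loaded, rejected

legacy_rows = [{"POLNR": "P-001", "PREMIE": "120000"}, {"POLNR": "", "PREMIE": "-5"}]
ok, bad = load_from_legacy(legacy_rows)
print(len(ok), len(bad))  # 1 1
```

Keeping the mapping and rejection logic in one place is what allows legacy and new solutions to coexist while reports are fed from a single, quality-checked source.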
With the building blocks in place, the real work starts. “Legacy systems, processes and thought patterns need to be changed towards the new way of working. This is built up from a vision, anchored in the foundation and architecture and executed by a multi-disciplinary team that combines business knowledge with data integration and data modelling expertise. An iterative implementation approach is key, as clients want to see benefits early on and all along the journey.”
As with any change, addressing the human side is important. “Assess the impact on people, and think about how to take them along the journey, build their engagement and keep them committed. Embed the change through tailored communications, training & development and on-the-job coaching.”
Ultimately, “the journey will be a rocky road, but the result, if done properly, will hugely benefit the organisation in terms of strategic insights, operational efficiency and data reliability.”