How Can the Insurance Sector Increase the Value of Its Data?

Since the world's first mutual insurer pioneered age-based premiums derived from mortality rates - laying "the framework for scientific insurance practice and development" - the value of data has evolved.

Insurers have been major users of computing since the days of early data processing machines, when mainframe computers occupied almost entire floors, often demanded heavy-duty air-conditioning, needed raised floors to hide their cables and wires, and consumed entire forests of paper.

According to some mind-boggling statistics compiled by the Association of British Insurers, 90% of all the world's data has been generated in just the last two years, and our digital universe already contains as many digital bits as there are stars in the universe.

Today’s insurance companies collect large volumes of data across all of the lines of business that they write. This includes data about the policyholder (be that a business or an individual), the coverage, the broker who presented the risk, the underwriter who wrote the risk, the risks that were not taken up (NTU) and the reasons why, the rating information, details of the risk itself, and much more.

So much data is collected in our sector that the challenge for insurers is not so much how to manage the volume of that data, but far more about collating it in a way that can be analysed, reported on and, crucially, transformed into useful information. In my next blog on this topic I ask the question “Is this Big Data?” and outline the first of four steps to help organisations improve the value of their data.

Darren Wray