THE ARCHITECTURE FOR MULTI-CLASS SPECIALTY RISK MANAGEMENT
Thursday 25 August 2011
Author: Russell Group
 

Continuing Russell’s theme of risk management for the specialty classes, Suki Basi describes the enabling architecture required to deliver a framework that meets the varied and complex needs of the specialty insurance and reinsurance markets.

Let’s begin by defining the key requirements for such a framework. The specialty classes vary in risk structure from one class to the next, which means an integrated risk model that can easily process insurance, reinsurance and retrocession placements is essential; this is the first requirement for any framework. Given that data structure and volume also vary considerably, the second requirement calls for robust data capture and management. Once data capture and risk structure have been addressed for each class, the third requirement is an internal engine that can process events and scale to the needs of each class or of multiple classes. Finally, any framework should be open to user interaction, whether for user parameterisation of event sets or for integration within existing processing streams.

Integrated risk model
The specialty classes are dominated by two main coverage types - physical damage and pure liability.

The physical damage specialty classes are easy to identify as there is an actual object around which the (re)insurance is placed, with various combinations of property damage, first party liability, and third party liability coverage. Examples include aviation, marine, cargo, energy, and specialty property – with the subject of each being the airline, vessel, shipment, rig, and building respectively. For each subject a number of risk characteristics can be captured, usually a combination of third party data and proprietary data, for either segmentation or reporting purposes or for use when creating event tables. For an enterprise solution the software does not need to know the nature of the subject, only that it exists and may have risk transfer agreements attached to the various parties that have an interest in it.

The pure liability classes are less easy to identify: here the (re)insurance is placed around an entity rather than a physical object, covering the various types of legal liability for which that entity may be responsible. Examples include professional indemnity, crime, directors and officers, and general liability, all of which are, by market convention, often classified according to the entity they attach to, such as financial institutions, pharmaceuticals, and small and medium sized enterprises (SMEs). Although the subjects are more difficult to define, once they have been established the (re)insurance relationships are generally easier to model, as each entity has a 100% insurable interest in itself, unlike the physical damage classes where the situation is not always that simple.
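
To make this concrete, the sketch below is a minimal, subject-agnostic representation of both coverage types in Python; the class and field names are illustrative assumptions, not any actual product schema. The point is that the engine only records that a subject exists and that parties hold insurable interests in it.

```python
# Illustrative sketch only: a subject-agnostic risk model where the engine
# does not need to know the nature of the subject, only that it exists and
# that parties hold interests in it.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Interest:
    party: str            # the party holding an insurable interest
    share: float          # insurable interest as a fraction, e.g. 0.25

@dataclass
class Subject:
    subject_id: str
    coverage_type: str      # "physical_damage" or "pure_liability"
    class_of_business: str  # e.g. "marine", "financial_institutions"
    interests: List[Interest] = field(default_factory=list)

# Physical damage: several parties may hold partial interests in one vessel.
vessel = Subject("IMO-9321483", "physical_damage", "marine",
                 [Interest("Owner Co", 0.6), Interest("Charterer Co", 0.4)])

# Pure liability: by convention the entity has a 100% interest in itself.
bank = Subject("FI-0042", "pure_liability", "financial_institutions",
               [Interest("FI-0042", 1.0)])
```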

Any risk model needs to support the similarities and differences between these two main coverage types, and the way each coverage type is applied in practice to a class of business. This suggests that the risk model must be rule-based, so that appropriate logic can be applied to specific combinations of coverage type and class of business. One further complication is that the risk model also needs to support the method of placement, that is, whether a risk has been underwritten on a direct, reinsurance or retrocession basis. Finally, the risk model needs to support the differences in data content and relationships between classes. To fully support the specialty classes, the risk model must integrate this underlying complexity.
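
As a hedged illustration of the rule-based idea, the sketch below looks up the applicable logic by the combination of coverage type, class of business and method of placement. The rule functions, field names and figures are assumptions made purely for the example.

```python
# Illustrative rules keyed by (coverage type, class of business, placement).
def physical_damage_direct(subject, placement):
    # Exposure capped at the insured value of the physical object.
    return min(placement["limit"], subject.get("insured_value", placement["limit"]))

def liability_reinsurance(subject, placement):
    # Apply the ceded share to the entity's full insurable interest.
    return placement["limit"] * placement.get("cession", 1.0)

RULES = {
    ("physical_damage", "marine", "direct"): physical_damage_direct,
    ("pure_liability", "financial_institutions", "reinsurance"): liability_reinsurance,
}

def exposed_limit(subject, placement):
    """Route a placement to the rule for its coverage/class/placement mix."""
    key = (subject["coverage_type"], subject["class_of_business"], placement["method"])
    rule = RULES.get(key)
    if rule is None:
        raise ValueError(f"No rule configured for {key}")
    return rule(subject, placement)

rig = {"coverage_type": "physical_damage", "class_of_business": "marine",
       "insured_value": 5_000_000.0}
print(exposed_limit(rig, {"method": "direct", "limit": 10_000_000.0}))  # 5000000.0
```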

Robust data capture and management
At the heart of good risk management is good data quality. Inevitably, in the specialty classes data quality will vary with the data structure and volume of each class. Moreover, organisations have traditionally tended not to invest in electronic data validation, capture and management. In my view it is essential to invest in tools which ensure robust data capture and management, as this is the first step towards improved data quality and therefore good risk management. Such tools should be template driven, have rules to handle data quality, and offer fuzzy matching capabilities to manage naming conventions.

Having established that robust data capture is essential, what data needs to be captured? To answer this, I would like to introduce four categories of data: reference, market, underlying and portfolio data. The content of each will vary across the specialty classes. Reference data covers coverage criteria, event definitions, geographic regions and analytical assumptions such as target loss ratio. Market data captures currencies, rates of exchange, losses, the definitions of the risks being insured or monitored (also known as subjects) and the companies which have a trading relationship with you. Underlying data is industry exposure data, normally supplied by third parties, which defines the risk within a class in its raw form, for example the details of an aircraft, satellite, ship, oil rig or financial institution, and the relationship of that risk to the policy placement. Portfolio data is the collection of risks which have been underwritten over time, regardless of whether they are insurance, reinsurance or retrocession placements, together with the reinsurance programmes purchased to protect the portfolio.
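
For illustration only, the fuzzy-matching step mentioned above could look something like the sketch below, which uses Python's standard difflib to reconcile a captured company name against reference data; the reference list and threshold are assumptions.

```python
# Sketch of fuzzy matching for naming conventions during data capture.
from difflib import SequenceMatcher

REFERENCE_COMPANIES = ["Global Airlines Ltd", "Oceanic Shipping PLC", "Northern Energy Inc"]

def best_match(raw_name: str, threshold: float = 0.8):
    """Return the closest reference name, or None if no match clears the threshold."""
    scored = [(SequenceMatcher(None, raw_name.lower(), ref.lower()).ratio(), ref)
              for ref in REFERENCE_COMPANIES]
    score, ref = max(scored)
    return ref if score >= threshold else None

print(best_match("Global Airlines Limited"))  # -> "Global Airlines Ltd"
```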

Internal and scalable engine
To ensure consistent and timely analysis, it is essential that the framework has an internal engine which can consistently process events regardless of the size and complexity of a portfolio. By analysis, I mean a pre-defined algorithm which serves a user request according to a particular collection of mathematical processes, which may be deterministic or stochastic. Such an engine would see any analysis as a collection of events to be processed according to the pre-defined algorithm, and would need to be database-driven, separating event processing, which can be done in computer memory, from database processing, which is done on computer disk.
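
A minimal sketch of that separation is shown below; the table layout and function names are assumptions, but the shape is the point: the database step loads the event set once, and the pre-defined algorithm then runs entirely in memory.

```python
# Illustrative separation of disk-bound database work from in-memory event processing.
import sqlite3

def load_events(conn):
    """Database step: pull the event set into memory in one pass."""
    return conn.execute("SELECT event_id, loss FROM events").fetchall()

def aggregate_loss(events, share: float = 1.0):
    """In-memory step: a simple deterministic algorithm over the event set."""
    return sum(loss * share for _, loss in events)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (event_id TEXT, loss REAL)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [("EV1", 1_000_000.0), ("EV2", 250_000.0)])
print(aggregate_loss(load_events(conn), share=0.5))  # 625000.0
```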

A feature of the specialty classes is that data volumes vary tremendously, requiring the framework’s engine to scale with the demand placed on it. This is achievable as long as the engine is multi-threaded, that is to say it utilises as many CPUs as are available on a single computer (scale-up) or as many computers as possible (scale-out). Indeed, a multi-threaded engine would be ideally suited to the generation of a stochastic event set for deal and/or portfolio pricing, as processing occurs in parallel rather than sequentially. The net effect is that more complexity can be processed in a given time period than would otherwise be the case, ensuring that the time, and therefore the cost, of performing an analysis is optimised.
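
To illustrate the scale-up case, the sketch below generates a stochastic event set across all available CPUs using Python's standard library; the severity distribution and event counts are placeholder assumptions, not a pricing recommendation.

```python
# Sketch of parallel stochastic event set generation (scale-up across CPUs).
import random
from multiprocessing import Pool, cpu_count

def simulate_year(seed: int):
    """One simulated year: a random number of events with lognormal severities."""
    rng = random.Random(seed)
    n_events = rng.randint(0, 5)
    return [rng.lognormvariate(12, 1.5) for _ in range(n_events)]

if __name__ == "__main__":
    years = 10_000
    with Pool(cpu_count()) as pool:          # one worker per available CPU
        event_set = pool.map(simulate_year, range(years))
    annual_losses = [sum(year) for year in event_set]
    print(f"Mean annual loss: {sum(annual_losses) / years:,.0f}")
```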

Open to user interaction
Given the variety and complexity inherent within the specialty classes, an open architecture would be more beneficial for flexible analytics than a closed architecture. Users would have the flexibility to construct their own event sets and use the framework to process them. Furthermore, openness would promote more flexible pricing data, enable easy integration with existing applications and processing streams, and facilitate improved audit capabilities, as the data can easily be accessed and interpreted.
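
As a rough illustration of what such openness might look like at the code level, the sketch below accepts a user-parameterised event set and returns plain data that can be exported, integrated or audited; the function names and the simple excess-of-loss structure are assumptions for the example.

```python
# Sketch of an open entry point: the framework processes whatever event set
# the user supplies and hands back plain, auditable data.
from typing import Dict, Iterable, List

def process_events(events: Iterable[Dict], retention: float, limit: float) -> List[Dict]:
    """Apply a simple excess-of-loss structure to a user-parameterised event set."""
    recoveries = []
    for event in events:
        ceded = max(0.0, min(event["loss"] - retention, limit))
        recoveries.append({"event_id": event["event_id"], "recovery": ceded})
    return recoveries  # plain data: easy to export, integrate and audit

user_events = [{"event_id": "U1", "loss": 4_000_000.0},
               {"event_id": "U2", "loss": 900_000.0}]
print(process_events(user_events, retention=1_000_000.0, limit=2_000_000.0))
```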

In a competitive marketplace this approach guards against commoditised pricing, as underwriter assumptions and the corresponding event sets will differ.

Suki Basi is the managing director at Russell Group, a leading risk management software and service company that provides a truly integrated approach to aggregate management, pricing and portfolio modeling, by supporting insurance, reinsurance and retrocession needs across the specialty classes.