Regulatory reporting requirements are increasing each year, placing more complex mandates on insurers. Ongoing change and expanding needs have companies searching for streamlined solutions that simplify the reporting process, improve data accuracy, and accelerate compliance.
Timely and accurate reporting begins with a solid data warehouse—a dedicated platform built to aggregate information from disparate sources and structure data for consistent reporting and analysis.
Aggregation
Quality data collection and analysis drive profitable, efficient, and compliant operations. Insurers collect a large volume of data from various sources throughout underwriting, billing, claims, and actuarial processes. All of this information presents accuracy, retention, analysis, security, and quality challenges that can be addressed by a dedicated data warehouse.
Built to store data in a way that is structured and accessible to end users, a data warehouse acts as the centralized source of truth. It cleans and categorizes overwhelming amounts of information to ensure the stored data is accurate and consistent. Here's the process:
ETL
The data warehouse receives information from your operational and transactional systems. The warehouse then makes sense of that data through a process called ETL (Extract, Transform, and Load).
In the Extract phase, data from various core systems—policies, billing, claims—is read and retrieved for storage. Extraction is accomplished through a range of transport mechanisms, such as table export, flat-file transformation, or direct connection.
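As a minimal sketch of the Extract phase, the Python snippet below reads a flat-file export from a core system into memory for staging. The file contents and field names (`policy_id`, `premium`, `status`) are purely illustrative, not from any specific vendor system.

```python
import csv
import io

# Hypothetical flat-file export from a policy administration system.
# In practice this would be a file on disk or a table export.
flat_file = io.StringIO(
    "policy_id,premium,status\n"
    "P-1001,150.52,active\n"
    "P-1002,980.00,lapsed\n"
)

# Extract: read each record from the export into memory for staging.
records = list(csv.DictReader(flat_file))

print(len(records))              # number of rows extracted
print(records[0]["policy_id"])   # first extracted policy
```

A direct database connection or table export would replace the flat file here, but the goal is the same: retrieve raw records from the source system untouched, leaving cleanup to the Transform phase.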
During the Transform phase, data is automatically cleaned and formatted according to business rules and balancing equations to maintain consistency. Transform allows for normalization: the cleaning and preparation of data for downstream consumption. For example, an incorrect numerical entry of 150,52 might be corrected to 150.52, or a cleartext credit card number might be reduced to its last four digits.
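The two transformations described above can be sketched as simple business rules. The rules themselves (comma-as-decimal-separator, last-four-digit masking) are illustrative assumptions, not a standard:

```python
def normalize_amount(raw: str) -> float:
    # Illustrative business rule: treat a comma as a decimal separator
    # when no period is present, e.g. "150,52" -> 150.52.
    if "," in raw and "." not in raw:
        raw = raw.replace(",", ".")
    return float(raw)

def mask_card_number(raw: str) -> str:
    # Reduce a cleartext card number to its last four digits.
    digits = "".join(ch for ch in raw if ch.isdigit())
    return "****" + digits[-4:]

print(normalize_amount("150,52"))               # 150.52
print(mask_card_number("4111 1111 1111 1111"))  # ****1111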
Once data passes through the Transform phase, it enters the final Load phase. During this phase, data is written to the target database to be stored and queried as needed. Data can be loaded into multiple formats for subsequent analysis. Highly structured data is frequently written to a relational database where the schema is predefined and validated. Less structured data is frequently written to a NoSQL database, allowing for variable schemas. Unstructured data is most commonly deposited into a data lake where records are stored in their format of origin.
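To illustrate the relational case, the sketch below loads transformed records into a table with a predefined, validated schema. It uses Python's built-in SQLite engine and an in-memory database purely for demonstration; the table and column names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the target warehouse database

# Predefined schema: the database rejects rows that violate it.
conn.execute("""
    CREATE TABLE policies (
        policy_id TEXT PRIMARY KEY,
        premium   REAL NOT NULL,
        status    TEXT NOT NULL
    )
""")

# Load: write the cleaned, transformed rows to the target table.
rows = [("P-1001", 150.52, "active"), ("P-1002", 980.00, "lapsed")]
conn.executemany("INSERT INTO policies VALUES (?, ?, ?)", rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM policies").fetchone()[0]
print(count)  # 2
```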
Security
Secure data is encrypted in transit and at rest. Transmission during the load phase occurs over secure connections such as HTTPS to prevent interception through network vulnerabilities such as man-in-the-middle attacks. The database files themselves are encrypted on disk to secure access in the event of a physical or logical breach. Finally, sensitive fields remain encrypted within the dataset itself, so that even database administrators cannot read their content.
All encrypted information requires a unique key at each level of encryption to prevent unauthorized access. Decryption requires access to secure keys, which are frequently as long as 4,096 bits (computationally infeasible to break with current technology). Data secured at the highest level is protected three times over with HTTPS encryption, disk encryption, and field encryption, providing a high degree of protection.
Query and Analysis
Once data is organized and stored in a central solution, it’s ready for reporting and analysis. Structured warehouse data is loaded into tables where columns represent attributes and rows house entries. Warehouse rows are much more powerful than rows in consumer tools like Excel, because data is indexed, allowing tables to be related to each other for complex analysis.
Data is retrieved from the warehouse using queries. A query asks a question or set of questions in search of a specific answer. The concept is similar to a Google search. However, instead of a freeform question, data queries use specific code called Structured Query Language (SQL) to achieve the desired output. SQL is written as a statement using commands and clauses like SELECT, ORDER BY, and LIMIT. These statements filter the database down to the selected criteria and perform calculations and summaries for the end user.
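A concrete example of such a statement is sketched below, run against Python's built-in SQLite engine. The `claims` table and its sample rows are hypothetical; the query itself is standard SQL using SELECT, WHERE, ORDER BY, and LIMIT:

```python
import sqlite3

# Stand-in warehouse table with a few illustrative claim records.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (claim_id TEXT, amount REAL, state TEXT)")
conn.executemany(
    "INSERT INTO claims VALUES (?, ?, ?)",
    [("C-1", 5000.0, "PA"), ("C-2", 12000.0, "PA"), ("C-3", 700.0, "OH")],
)

# SQL statement answering: "What are the two largest Pennsylvania claims?"
query = """
    SELECT claim_id, amount
    FROM claims
    WHERE state = 'PA'
    ORDER BY amount DESC
    LIMIT 2
"""
results = list(conn.execute(query))
for claim_id, amount in results:
    print(claim_id, amount)
```

Saved as a named query, a statement like this becomes a repeatable report: the same SQL can be rerun on a schedule or on demand against fresh warehouse data.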
Data for each regulatory or compliance report can be codified into a query or series of queries. Queries can be saved, allowing reports to be built and run on a schedule or on demand. Queries return data in a structured format that can be consumed by a presentation layer.
Presentation
The presentation layer formats query output into visualizations for end users. A presentation layer can be as simple as an Excel spreadsheet or as sophisticated as an interactive online dashboard. A great presentation is organized hierarchically to draw a user's attention to important and actionable summaries, with drilldowns into supporting information. Common visualization tools include Microsoft Power BI, Domo, and Amazon QuickSight.
Conclusion
Insurance companies must file reports using formats specified by regulators, yet not all systems are flexible enough to create new or modify existing data models to meet these requirements. A standardized data warehouse helps insurers meet regulatory reporting requirements through data collation, sanitization, and presentation.
Call to Action
Most enterprise-level software solutions come equipped with a fully-integrated data warehouse solution as part of their core offering. Cloud solutions are particularly powerful as security, scalability, availability, and durability are fully managed for you. If you are not currently engaged with a vendor partner who manages the ETL process and allows you to build reports on schedule or on demand, consider this investment. For more information on data warehouses, solution providers, or system functionality, contact our data experts at info@britecore.com.
This article first appeared in the Winter 2019 edition of PAMIC Pulse.