While data storage and management is a critical issue for IT professionals across industries, it is especially acute in the insurance industry. Here is a brief on the techniques available to control it.

Data is all-pervasive: its role begins even before the initial stages of client understanding and due diligence, and extends far beyond revenue generation, encompassing the cross-selling and up-selling of products and services. It also helps in understanding business risks and in verifying whether regulatory compliance needs are met. The insurance industry depends on promises made on paper, which are eventually converted into supporting databases and document repositories. This article elaborates on the types of data, modes of data acquisition, data checks and usage, and the prevalent techniques for data management.
The insurance industry's data can broadly be classified as employee-related, distribution-related, customer-related, product-related, operations-related and accounting-related. Of these categories, employee-related data is required purely for internal workforce management, while the rest have a direct impact on the cost and revenue of the insurance company. All of this data is collected and stored in databases and data warehouses, and as documents or images.
Data management stages

Management of data can be described in three major stages: data acquisition, data quality management, and data exploitation (or data utilization). Let us look at these in detail.

Data acquisition results from new business management and from internal operations (HR, accounting, distribution, and product and policy management systems). These are made available in their own respective data structures, in an integrated way. One step up, they can be consolidated into data warehouses and document management systems, jointly referred to as the universe of the insurance enterprise's data.

Data exploitation caters to different needs such as planning and analyzing revenue growth, controlling costs, improving operational efficiency, planning and executing business expansion, conceptualizing new products, and providing data-related services to customers, distribution networks and employees.

Data quality management: Most big insurance enterprises have been operational for several decades, and hence the data available with them may not be 100% accurate. Many such enterprises still use green-screen systems for policy administration and systems support. Data quality is maintained and ensured by continuously checking, correcting and preventing data errors, thereby making the data ready for exploitation.

The link between data acquisition, data quality management and data utilization can be described by the ICO (Input-Check-Output) model.
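To make the ICO model concrete, here is a minimal sketch in Python of how the three stages could be chained together. The function names and the record layout are illustrative assumptions only, not taken from any particular insurer's systems.

# A minimal, illustrative sketch of the ICO (Input-Check-Output) model.
# The stage functions and the record layout are hypothetical.

def acquire():                      # Input: gather records from source systems
    return [{"policy_id": "P001", "premium": 1200.0, "customer_id": "C100"},
            {"policy_id": "P002", "premium": -50.0,  "customer_id": None}]

def check(records):                 # Check: flag quality errors, pass clean data on
    clean, errors = [], []
    for r in records:
        if r["customer_id"] is None or r["premium"] <= 0:
            errors.append(r)        # routed to data quality management
        else:
            clean.append(r)
    return clean, errors

def exploit(records):               # Output: use checked data, e.g. for reporting
    return sum(r["premium"] for r in records)

records = acquire()
clean, errors = check(records)
print("Total premium from clean records:", exploit(clean))
print("Records needing correction:", errors)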
Data Acquisition (Input)

Structured data acquisition is critical to performing all subsequent data-related functions in an efficient and integrated manner. Data that is unstructured and not collected in databases is likely to create gaps in data analysis. In today's insurance industry, data acquisition happens in five broad segments.

Customer data: Customer relationship management (CRM), customer self-service portals, new business management systems and other customer touch-point systems are the sources of this data. It comprises the customer's personal data such as family, contact, activity, complaint, service request, financial, health, campaign offer, policy, loan and benefits information. This group of data is generally administered in CRM systems, customer portals and IVRS.

Distribution data: Distribution administration, sales and service management, compensation, compliance and other distribution touch-point systems are the sources of this data. This group of data is generally administered in distribution or channel management systems, IVRS, and FNA, quotation, application and compliance management systems.

Policy administration data: New business, underwriting management, claims, accounting and actuarial systems are the sources of this data. It comprises financial needs analyses, quotes, new business applications, cashier entries, lock/collection boxes, accounting, valuation, loss ratio, document image, turnaround time, underwriting, claims and policy servicing information. This group of data is generally administered in legacy policy administration, claims, accounting and actuarial systems; however, there could be a number of separate systems for underwriting, policy services and new business support.

Product administration data: Product administration and pricing systems are the sources of this data. It comprises product setup and management, profiling, pricing, profitability and product performance information. Very few insurers maintain market research data too. This group of data is generally administered in product management systems, actuarial systems, the data warehouse and data marts.

Employee data: This comprises employee personal details such as contact, activity, payroll, educational qualification, certification, credential, job history, and training and development information.
This group of data is generally administered in the HRMS; however, in some cases there may be separate payroll and training and development systems.

Missing, unstructured or disintegrated data in any of the above five categories creates a gap in the data management chain, and it is therefore recommended that these gaps be identified and filled diligently.
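As a simple illustration of collecting acquired data into an integrated, structured store rather than leaving it scattered, here is a minimal sketch using Python's built-in sqlite3 module. The staging tables, column names and sample records are assumptions made purely for this example.

import sqlite3

# Hypothetical staging store that consolidates feeds from two of the
# five acquisition segments (customer data and policy administration data),
# linked by a common customer identifier.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE stg_customer (
                    customer_id TEXT PRIMARY KEY,
                    full_name   TEXT,
                    contact     TEXT)""")
conn.execute("""CREATE TABLE stg_policy (
                    policy_id   TEXT PRIMARY KEY,
                    customer_id TEXT REFERENCES stg_customer(customer_id),
                    product     TEXT,
                    premium     REAL)""")

# Records as they might arrive from the CRM and policy administration systems.
conn.executemany("INSERT INTO stg_customer VALUES (?, ?, ?)",
                 [("C100", "A. Kumar", "a.kumar@example.com")])
conn.executemany("INSERT INTO stg_policy VALUES (?, ?, ?, ?)",
                 [("P001", "C100", "Term Life", 1200.0)])

# Because the data is structured and linked, later stages can query it directly.
for row in conn.execute("""SELECT c.full_name, p.product, p.premium
                           FROM stg_policy p JOIN stg_customer c
                             ON p.customer_id = c.customer_id"""):
    print(row)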
Data Quality Management (Check)

Data acquired through various systems and databases needs to be checked for the desired quality before being exploited. Data quality errors can result from inadequate verification of data stored in legacy systems, non-validated data leaking in from the front end, inadequate integration, redundant data sources or stores, direct back-end updates, and so on. In today's insurance industry, data quality management is mostly ignored; where it is implemented, it is done in one of the two ways described below.

Unstructured approach: Most enterprises rely on a few batch programs to check some portions of the data acquired, and most of the time these programs are triggered by a serious problem identified in customer or financial data. Some enterprises schedule these batch runs, while others still run them only on demand. Such intermittent and unorganized batch runs can neither scale nor integrate, nor can they make an appreciable improvement to the overall data quality of the enterprise.

Structured approach: Structured data quality management greatly helps to scale up and integrate, and thus creates a big impact on the overall enterprise data quality. A structured data quality management model would pass through the following stages:

1. Extract data from the source and/or target systems.
2. Run quality checks to identify data transfer errors, data link/reference errors and domain integrity errors.
3. Create a data quality mart to hold all the error records and related details, to help in tracking and monitoring the aging of each problem and in further analysis.
4. Integrate the data quality errors into problem/incident trackers so that closures can be tracked.
5. Provide online data quality error reports, along with their aging, to the data owners so that they can fix the errors.

The data volume, the sensitivity or criticality of the data, and the risk exposure from data quality errors play a vital part in designing the right run frequency, level of follow-up, escalation settings, and so on. It is critical that data quality errors are fixed and prevented in time, so that the business can stop revenue and opportunity losses, cut additional recovery expenses and build the confidence of all stakeholders in the value chain. (A separate paper will discuss in detail the evaluation of existing data quality management, along with the gaps, to help insurers implement a proper data quality management system.)
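The following minimal sketch shows what the checking and data quality mart stages listed above might look like in practice, again in Python with sqlite3. The rules, table names and mart layout are illustrative assumptions, not a prescribed design.

import sqlite3, datetime

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE policy (policy_id TEXT, customer_id TEXT, premium REAL)")
conn.executemany("INSERT INTO policy VALUES (?, ?, ?)",
                 [("P001", "C100", 1200.0),
                  ("P002", None,   950.0),    # reference error: no customer
                  ("P003", "C101", -10.0)])   # domain error: negative premium

# Data quality mart: one row per error, with the details needed for aging and follow-up.
conn.execute("""CREATE TABLE dq_mart (
                    detected_on  TEXT,
                    source_table TEXT,
                    record_key   TEXT,
                    error_type   TEXT)""")

today = datetime.date.today().isoformat()

# Check 1: reference/link errors (policy without a customer).
for (pid,) in conn.execute("SELECT policy_id FROM policy WHERE customer_id IS NULL").fetchall():
    conn.execute("INSERT INTO dq_mart VALUES (?, ?, ?, ?)",
                 (today, "policy", pid, "missing customer reference"))

# Check 2: domain integrity errors (premium must be positive).
for (pid,) in conn.execute("SELECT policy_id FROM policy WHERE premium <= 0").fetchall():
    conn.execute("INSERT INTO dq_mart VALUES (?, ?, ?, ?)",
                 (today, "policy", pid, "non-positive premium"))

# The mart can now feed incident trackers and aging reports to the data owners.
for row in conn.execute("SELECT * FROM dq_mart"):
    print(row)

Each row in dq_mart carries the detection date, which is what allows the aging of unresolved errors to be tracked and escalated.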
Data Exploitation (Output)

Data that has been acquired and thoroughly checked is ready for exploitation. Data exploitation is the key stage which, if done properly, helps reap the benefits of efficient data management. In other words, this is the value generation stage, covering revenue growth, cost savings, operational efficiency gains, risk controls and so on, all of which are critical for any business. This stage is also viewed as the information management stage. In the insurance industry today, data exploitation, the Output stage of data management, is done in one of the two ways described below.

Legacy approach: Most enterprises extract the required data or information on an ad hoc basis from their operational systems and use applications or batch programs to generate reports that aid decision making. This method is not sustainable when demand grows, when multi-dimensional needs come up, or when the data becomes voluminous. Moreover, data users need to wait behind a long queue of requests, which may make it too late to initiate the desired action on the issue for which the data was originally extracted.

Structured approach: With the advantages of structured information management already reinforced above, an enterprise would be able to adapt easily to any volume or time challenge, thus creating a big impact on the overall information needs that are critical to its functioning and growth. Structured information management can be implemented as laid down below.

Enterprise data warehouse (EDWH): Most of the enterprise data, referred to as the universe, needs to be extracted, loaded and transformed for information needs, and then segmented into summaries and details.

Data marts: Specific business functions (for example, accounting and compliance) can have their own data marts to address the key business problems in those functions.

Reporting needs: Detailed lists and structured (authored and custom) reports can be published from the DWH, data marts and operational data stores.

Analysis needs: Summaries need to be built with appropriate dimensions and measures to enable multi-dimensional analysis from the DWH and data marts.

Information management should be viewed from the perspective of enterprise needs, covering all functions of the enterprise that have a minor or major impact on the business. All functions of the enterprise can be seamlessly integrated through suitable enterprise information management systems. The frequency of refreshing the EDWH and data marts, the extent of data integration and the efficiency of the summaries depend on the business need and pace, and hence need to be worked out during the design stage. The data then needs to be exploited by creating data marts, reports and analyses that bring value to the enterprise.
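As a minimal sketch of the structured approach, assuming a simple claims data mart with one dimension table and one fact table (the schema and figures are invented purely for illustration), a multi-dimensional summary could be produced as follows:

import sqlite3

conn = sqlite3.connect(":memory:")

# Hypothetical data mart for the claims function: one dimension, one fact table.
conn.execute("CREATE TABLE dim_product (product_id TEXT PRIMARY KEY, product_name TEXT)")
conn.execute("""CREATE TABLE fact_claims (
                    product_id   TEXT REFERENCES dim_product(product_id),
                    claim_year   INTEGER,
                    claim_amount REAL)""")

conn.executemany("INSERT INTO dim_product VALUES (?, ?)",
                 [("TL", "Term Life"), ("MT", "Motor")])
conn.executemany("INSERT INTO fact_claims VALUES (?, ?, ?)",
                 [("TL", 2023, 50000.0), ("TL", 2024, 65000.0),
                  ("MT", 2023, 20000.0), ("MT", 2024, 18000.0)])

# Summary by product and year: the kind of aggregate a reporting or
# analysis layer would publish from the mart.
query = """SELECT d.product_name, f.claim_year, SUM(f.claim_amount) AS total_claims
           FROM fact_claims f JOIN dim_product d ON f.product_id = d.product_id
           GROUP BY d.product_name, f.claim_year
           ORDER BY d.product_name, f.claim_year"""
for row in conn.execute(query):
    print(row)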
Conclusion

It is recommended that insurers take stock of their data management implementation at all three stages: data acquisition, data quality management and data exploitation. The value of data management should be clearly understood, and structured approaches need to be adopted at all stages. With these in place, an enterprise can make informed decisions, avoid information starvation, remain highly integrated and scalable and, most importantly, stay ahead of the competition.



One of the major barriers to test automation is the volatility of the application to be tested. Even benign changes, such as moving a button to a different part of the screen or changing a label from 'Next' to 'Continue', can cause test scripts to fail. This problem is especially acute when testing an application's functionality at the GUI level, because this tends to be the area in which changes are most frequent.

For this reason, test teams have historically tended to avoid functional test automation until an application has become stable. However, this approach does not work when building SaaS products or developing products in an Agile environment, where change is a constant. In this situation, a more sophisticated test approach is required to avoid incurring 'technical debt' (i.e., quality issues or tasks that are deferred to be addressed later, or not at all).

Develop a Reusable Subroutine Library

The first level of sophistication in test automation is to develop a library of reusable subroutines that encapsulate and hide an application's implementation details from the test scripts themselves. For example, you might implement a test subroutine called 'login_to_my_app()' that accepts two parameters: 'UserName' and 'Password'. At runtime, this subroutine first finds the appropriate text fields in the application GUI and then executes the keystrokes and mouse movements required to fill in the given user name and password, thus completing the login operation. If the application's login GUI changes (e.g., if the on-screen label for 'User Name' is changed to 'Login Name'), then the 'login_to_my_app()' function might also need to be updated. However, the scripts that call this subroutine would not need to change.

The reason this type of abstraction is a win for some projects is that a change to the application now only requires you to maintain a single subroutine rather than potentially hundreds or thousands of test scripts. Since complex applications frequently have thousands of test scripts, the savings in effort and time are clear. The trade-off is that the level of skill required to maintain a subroutine library is generally higher, and of a different type, than that needed to maintain test scripts; this requirement will obviously change your staffing profile and cost structure.

The benefit of this approach, however, is that you can accommodate change much more readily. More importantly, it allows you to safely write test cases before the application has been fully implemented. Test script writers only need to know (a) the software's functionality and requirements, and (b) the 'stubs' of the test subroutines that will be implemented. You can significantly compress your implementation schedule by enabling your development and test automation processes to run in parallel.
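A minimal sketch of such a subroutine, assuming a Selenium WebDriver environment and purely illustrative element locators and URL (none of which come from the article), might look like this:

from selenium import webdriver
from selenium.webdriver.common.by import By

# Reusable subroutine library: test scripts call these functions and never
# touch locators or labels directly, so GUI changes are absorbed here.

def login_to_my_app(driver, user_name, password):
    # Hypothetical application URL and element names, for illustration only.
    driver.get("https://myapp.example.com/login")
    driver.find_element(By.NAME, "username").send_keys(user_name)
    driver.find_element(By.NAME, "password").send_keys(password)
    driver.find_element(By.ID, "login-button").click()

# A test script stays readable and stable even if the login page changes:
if __name__ == "__main__":
    driver = webdriver.Chrome()
    try:
        login_to_my_app(driver, "test_user", "secret")
        # ... further test steps would go here ...
    finally:
        driver.quit()

If the login page changes, only the locators inside login_to_my_app() need updating; the calling test scripts remain untouched.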
Use a Domain-Specific Language Approach

The next level of sophistication in scripted test development is referred to as 'keyword' or 'abstract' test automation. In this approach, words from the problem domain, not the implementation, are used to describe test scenarios in what is sometimes referred to as a 'domain-specific language' (DSL). Translator or interpreter software is then used to map the abstract keywords that make up the script into the keystrokes and mouse movements required to drive the application under test.

Sometimes this process uses verb and noun analysis of the use cases or user stories to produce a suitable set of keywords or abstractions for describing test scenarios, even before any code is written. This functional test automation approach is the most Agile, because test cases can be written first, enabling true test-driven development even at the functional level.

Let us look at a test script for a (simplified) ATM/cash machine as an example. Using a simple, self-explanatory syntax for this illustration, a fragment of one test script might look something like this:

VERIFY ACCOUNT_BALANCE = 10000 INR
MAKE_WITHDRAWAL 1000 INR
VERIFY ACCOUNT_BALANCE = 9000 INR

In this example, 'VERIFY' and 'MAKE_WITHDRAWAL' would be verbs in our scripting language, and 'ACCOUNT_BALANCE' would be a noun. Note that the terms used to describe the test scenario are taken from the problem domain and make no reference whatsoever to the implementation. All knowledge of the implementation is hidden in the translator software. In many cases, this means the translator must be quite sophisticated; for example, it may need to know the page structure of the application and navigate from page to page in order to execute the required functionality.

Developing or customizing a complex translator can be a larger up-front investment, but the long-term benefits are significant. And although translator software is typically custom-built for a specific application, general-purpose programmable translators are beginning to appear more frequently. Regardless of its origin, the advantages of a sophisticated translator include the following:

1. Test scripts written entirely in the problem domain are robust in the face of nearly any change to the implementation, since all possible implementations must solve the same domain-specific problem. When you have thousands or even tens of thousands of scripts accumulated over the course of years, the scripts themselves become your major investment; developing the translator is a relatively minor cost in comparison.

2. As mentioned earlier, you can easily write scripts before implementing the software, thus enabling a true test-driven development approach. Although new features will sometimes require you to extend the scripting language, this is generally easy to do since the language is taken directly from the domain.

3. Non-technical domain experts can easily author test scripts, because the concepts come directly from the problem domain.

With a good architecture, the translator itself can be made robust in the face of changes to the application under test. This can be done by (a) using a state table to describe page navigation, (b) taking advantage of the object structure of the UI, or (c) using a data-driven approach. Using XML as the scripting language enhances the language's extensibility and can provide a guided, syntax-driven scripting approach based on the XML Schema itself; XML is also easy to parse using off-the-shelf tools.
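To illustrate how a translator might interpret the ATM fragment shown above, here is a tiny Python sketch that maps the keywords to actions on a stand-in 'application'. A real translator would drive the GUI of the system under test; the class and function names here are invented for the example.

# Minimal keyword-script translator for the ATM fragment shown above.
# In a real project the handlers would drive the GUI; here a simple
# in-memory account stands in for the application under test.

class FakeAtm:
    def __init__(self, balance):
        self.balance = balance
    def withdraw(self, amount):
        self.balance -= amount

def run_script(script, atm):
    for line in script.strip().splitlines():
        tokens = line.split()
        if tokens[0] == "VERIFY" and tokens[1] == "ACCOUNT_BALANCE":
            expected = int(tokens[3])          # e.g. VERIFY ACCOUNT_BALANCE = 10000 INR
            assert atm.balance == expected, f"expected {expected}, got {atm.balance}"
        elif tokens[0] == "MAKE_WITHDRAWAL":
            atm.withdraw(int(tokens[1]))       # e.g. MAKE_WITHDRAWAL 1000 INR
        else:
            raise ValueError(f"unknown keyword: {tokens[0]}")

script = """
VERIFY ACCOUNT_BALANCE = 10000 INR
MAKE_WITHDRAWAL 1000 INR
VERIFY ACCOUNT_BALANCE = 9000 INR
"""
run_script(script, FakeAtm(10000))
print("Script passed.")

Extending the scripting language for a new feature then amounts to adding another keyword handler to run_script(), while the existing scripts remain valid.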
Conclusion

Without a doubt, developing a library of reusable subroutines and a domain-specific language to test your system requires an up-front investment. However, in situations where you (a) need to build or customize a SaaS product, (b) wish to truly gain the benefits of an Agile, test-driven development methodology, (c) need to compress your test and development schedule as much as possible while still ensuring high quality, or (d) must cope with constant change before and after deployment, taking a more sophisticated approach to test automation can be a lifesaver for your project.




