Over the past few years, players in the sector have naturally begun to explore how new technologies can help them. AI, in particular, makes it possible to simplify and optimize decision-making. One question nevertheless arises: how can ethical AI be guaranteed in such a context?
AI and the crucial question of trust
Often described as a “black box”, that is, a system observed only through its inputs and outputs, without visibility into its inner workings, AI, or rather AI’s ability to make the best decisions, can legitimately be challenged. How do we ensure that the algorithms we trust are not biased?
Market players have been studying this issue for several years and are trying to guarantee the impartiality of AI by giving the technology the best possible framework. As early as 2019, a group of experts convened by the European Commission published its Ethics Guidelines for Trustworthy AI. The goal: to support the development and adoption of artificial intelligence that is “ethical, sustainable, human-centered and respects fundamental values and rights”, across all economic sectors. Today, three years later, a European regulation on AI (the Artificial Intelligence Act) is on the institutions’ table, and negotiations on it are progressing.
Building an unbiased algorithm
In the insurance sector, we are already able to significantly reduce the risk of bias. How? By designing algorithms not to solve a single problem end to end, but by dividing it into several sub-problems that are treated individually to achieve an optimal result.
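To make the idea concrete, here is a minimal Python sketch of such a decomposition. The sub-problems and scoring rules are entirely hypothetical toy examples, not an actual insurer's design: the point is that each sub-problem gets its own small, auditable component, and the signals are combined explicitly rather than inside one opaque end-to-end model.

```python
# Illustrative sketch only: sub-problems, thresholds and field names
# are assumptions made up for this example.
from dataclasses import dataclass

@dataclass
class Claim:
    amount: float
    history_length_years: float
    documents_verified: bool

def score_claim_consistency(claim: Claim) -> float:
    """Sub-problem 1: is the claimed amount unusually large? (toy rule)"""
    return 1.0 if claim.amount > 50_000 else 0.0

def score_history(claim: Claim) -> float:
    """Sub-problem 2: is the contract unusually recent? (toy rule)"""
    return 1.0 if claim.history_length_years < 0.5 else 0.0

def score_documents(claim: Claim) -> float:
    """Sub-problem 3: did the document checks fail? (toy rule)"""
    return 0.0 if claim.documents_verified else 1.0

def combined_suspicion(claim: Claim) -> dict:
    """Combine the sub-scores transparently so each one can be audited."""
    parts = {
        "consistency": score_claim_consistency(claim),
        "history": score_history(claim),
        "documents": score_documents(claim),
    }
    parts["total"] = sum(parts.values()) / len(parts)
    return parts

print(combined_suspicion(Claim(80_000, 0.2, False)))
# {'consistency': 1.0, 'history': 1.0, 'documents': 1.0, 'total': 1.0}
```

Because every sub-score is computed by a separate, readable component, a reviewer can see exactly which sub-problem drove the overall result, which is much harder with a single end-to-end model.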
The AI used to detect insurance fraud is a perfect illustration of this concept. The goal of the algorithm should be to identify suspicious behavior, not mere statistical correlations between variables. The insured’s place of residence or surname, for example, may correlate with fraud risk, but in no way defines a behavior. Such data is used only for comparison purposes, to calculate distances or kinship ties, and certainly not as an indicator of suspicion. It is a question of ethics.
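The sketch below illustrates that distinction under assumed field names: sensitive attributes such as the surname or the residence never enter the model as predictors; they are only used to derive relational features like a distance or a possible kinship link.

```python
# Hypothetical example: all claim field names are assumptions.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def derive_features(claim: dict) -> dict:
    """Return only behavioural/relational features for the model."""
    return {
        # Distance between residence and claimed loss location: a
        # behavioural signal, unlike the raw address itself.
        "residence_to_loss_km": haversine_km(
            claim["home_lat"], claim["home_lon"],
            claim["loss_lat"], claim["loss_lon"],
        ),
        # Possible kinship between claimant and witness: the surnames
        # are compared, then discarded, never used as features.
        "claimant_witness_related": claim["claimant_surname"] == claim["witness_surname"],
    }
```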
No optimal decision without human intervention
Note also that the AI should not be designed to make the final decision. Once the algorithm has flagged the cases where it suspects fraud and justified why, it is up to a human to take over: review the opinion submitted, make a decision, and give feedback to the system. It is crucial to let the AI know when it is right and when it is wrong, in keeping with the very principle of machine learning.
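A minimal sketch of such a feedback loop might look like the following; the function and field names are illustrative assumptions, not a reference implementation.

```python
# Human-in-the-loop sketch: the model only proposes, the reviewer decides,
# and the decision is logged as a label for the next training round.
feedback_log: list[dict] = []

def record_human_decision(case_id: str, model_score: float,
                          explanation: str, human_verdict: bool) -> None:
    """Store the reviewer's decision alongside the model's opinion.

    The logged verdicts become training labels, which is how the system
    learns when it was right and when it was wrong.
    """
    feedback_log.append({
        "case_id": case_id,
        "model_score": model_score,
        "explanation": explanation,   # shown to the reviewer, kept for audit
        "human_verdict": human_verdict,
    })

# The model flags a case and justifies it; the human has the final say.
record_human_decision("CLM-001", 0.92,
                      "claim filed 3 days after contract start",
                      human_verdict=False)
```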
Building effective AI is a process where people and technology work side by side, learn from each other and adapt accordingly.
Insurance companies at the forefront of innovation
Players in the insurance market, especially in France, are fully aware of the innovations that AI brings. Insurance was one of the first sectors to adopt artificial intelligence, and it is investing significantly in the technology. Beyond fraud detection, many use cases have emerged, such as improving the customer experience and relationship, automating management processes, and analyzing risk.
These investments in research and development, as well as in various market solutions, should eventually enable insurers to use these innovations on a larger scale. To fully exploit this potential, the sector must at all costs guard against the risk of bias by designing and deploying technology that is free of bias and grounded in evidence-based decision-making.