We have adopted a set of data ethics principles that support ethical decision-making when using data across the value chain. The principles draw on established concepts in privacy, bio- and healthcare ethics, human rights, and business ethics to ensure we work with data in a way that maximizes benefits and minimizes harm for individuals and society. We are in the process of expanding our global data protection programme, anchored in the Business Ethics Compliance Office, to cover data and AI ethics through policies, training, communication, monitoring activities, and audits.
The Data Ethics Principles cover all types of data collected, analysed, stored, shared, and otherwise processed. The AI Ethics Principles apply to all forms and uses of AI, including, but not limited to, research and business operations.
Data should be collected and used in ways that are consistent with the intentions and understanding of the individual. Best efforts should be made to inform individuals of how their data will be used and, where appropriate and possible, to offer them choices about who has access to their data and how it may be used.
Individuals should be informed, in a manner that is appropriate and understandable to the relevant audience, regarding:
Legally permissible limitations on such rights should be clearly explained. Data governance standards and practices should be made available for public review, when appropriate.
Data use should include processes to identify, prevent, and offset poor-quality, incomplete, or inaccurate data.
When data quality, completeness, or accuracy presents risks of bias or harm to the individual, processes for mitigating these risks should be pursued and documented.
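By way of illustration only, the sketch below shows one minimal form such a process could take: a quality gate that flags incomplete or implausible records before use, so that the issues can be documented. The field names, plausible ranges, and sample records are hypothetical and not part of these principles.

```python
# Illustrative sketch only: identify incomplete or implausible records
# before use, and document what was flagged. Field names and plausible
# ranges below are hypothetical.
REQUIRED_FIELDS = ("subject_id", "age", "measurement")
PLAUSIBLE_AGE = range(0, 121)

def quality_issues(record: dict) -> list[str]:
    issues = [f"missing: {f}" for f in REQUIRED_FIELDS if record.get(f) is None]
    age = record.get("age")
    if age is not None and age not in PLAUSIBLE_AGE:
        issues.append(f"implausible age: {age}")
    return issues

records = [
    {"subject_id": "S-1", "age": 42, "measurement": 5.1},
    {"subject_id": "S-2", "age": 230, "measurement": None},
]
for r in records:
    # Logging the issues per record supports the documentation requirement.
    print(r["subject_id"], quality_issues(r))
# S-1 []
# S-2 ['missing: measurement', 'implausible age: 230']
```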
Engaging a diverse set of stakeholders in decision-making around data use and development of technologies to leverage data can build trust and support efforts to eliminate harmful biases. Technologies leveraging data should also include data-driven processes for quantifying the potential for bias in the populations in which they are being deployed.
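As an illustrative sketch of such a data-driven process, the example below computes one common disparity measure: the gap in positive-outcome rates between population groups (demographic parity difference). The function name, groups, and outcomes are hypothetical and shown only to make the idea concrete.

```python
# Illustrative sketch only: quantify potential bias by comparing a system's
# positive-outcome rates across population groups. A large gap does not
# prove unfairness by itself, but flags where further review is needed.
from collections import defaultdict

def demographic_parity_difference(outcomes, groups):
    """Return (max gap in positive-outcome rate between groups, per-group rates).

    outcomes: iterable of 0/1 decisions produced by the system
    groups:   iterable of group labels, aligned with outcomes
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for y, g in zip(outcomes, groups):
        totals[g] += 1
        positives[g] += y
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical deployment data: decisions and the group label of each case.
gap, rates = demographic_parity_difference(
    outcomes=[1, 0, 1, 1, 0, 0, 1, 0],
    groups=["A", "A", "A", "B", "B", "B", "B", "B"],
)
print(rates)  # per-group positive rates, roughly {'A': 0.67, 'B': 0.40}
print(gap)    # a large gap may warrant further investigation
```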
This includes having processes in place to identify, assess, and mitigate risks of intentional and unintentional discrimination and bias, breaches in privacy and security, physical harm, and other adverse impacts on individuals.
Protecting privacy also includes applying strong cybersecurity standards (including notifying individuals when their data is breached, where the risk to the individual is deemed high), appropriately preparing data for use (e.g. applying anonymization or pseudonymization techniques where relevant), and restricting re-identification of anonymized data without permission.
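As a hedged illustration of one such preparation technique, the sketch below pseudonymizes a direct identifier with a keyed hash (HMAC), so that re-linking the pseudonym to the individual requires access to a separately held secret key. The key handling, field names, and values are hypothetical and deliberately simplified.

```python
# Illustrative sketch only: pseudonymization by replacing a direct identifier
# with a keyed hash (HMAC). The secret key must be stored separately and
# access-controlled, so the mapping cannot be re-created without permission.
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-key-from-a-secure-vault"  # hypothetical key

def pseudonymize(identifier: str) -> str:
    """Derive a stable pseudonym from a direct identifier."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

record = {"patient_id": "P-000123", "lab_value": 5.4}
record["patient_id"] = pseudonymize(record["patient_id"])
print(record)  # identifier replaced; re-linkable only with the key
```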
Data should always be obtained by legitimate means, and there should be designated individuals accountable for the protection and confidentiality of data.
Third parties working with IFPMA members should be informed about and expected to adhere to these principles.
In addition, data interoperability initiatives should prioritize, include, and support ethical and responsible data-sharing practices.
Senior management should be aware of these ethics principles and ensure their application in decisions around the use of data in strategic activities.
AI systems should be designed and used in a way that respects the rights and dignity of all people. When developing AI systems, we will consider both the societal benefit and any impact on individuals. Where applicable, the responsible individual or organization should strive to use AI as a means by which those impacted by it can retain control of their own healthcare according to their evolving needs.
Accountability should be maintained for the use of AI systems, including those developed by third parties, throughout the AI lifecycle. This includes establishing proper governance, deploying risk- and impact-based controls appropriately, and incorporating strategies for addressing any unintended negative consequences of AI systems, including continual monitoring and feedback loops as AI evolves over time.
AI systems should be deployed with an appropriate level of human control and oversight, based on the assessed risk to individuals. Where deploying AI has the potential for direct and significant impact on individuals, AI should not be given complete autonomy in decision-making.
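One minimal sketch of such risk-based human oversight, under the assumption that each decision carries a documented impact score from a prior risk assessment, is shown below. The threshold, data model, and review queue are hypothetical.

```python
# Illustrative sketch only: a risk-based human-oversight gate. Decisions
# assessed as having direct, significant impact on an individual are routed
# to a human reviewer instead of being applied automatically.
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    recommendation: str
    impact_score: float  # output of a documented risk/impact assessment

HUMAN_REVIEW_THRESHOLD = 0.5  # hypothetical policy threshold

def route(decision: Decision, review_queue: list) -> str:
    if decision.impact_score >= HUMAN_REVIEW_THRESHOLD:
        review_queue.append(decision)  # a human makes the final call
        return "queued_for_human_review"
    return "auto_applied"              # low impact: may be automated

queue: list[Decision] = []
print(route(Decision("S-1", "adjust dosage", 0.9), queue))  # queued_for_human_review
print(route(Decision("S-2", "send reminder", 0.1), queue))  # auto_applied
```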
Developers and owners should strive to minimize bias and maximize fairness in AI systems. The development of any AI system should include a process for reviewing the selection of the datasets used in training and the assumptions used in the design, to evaluate whether those choices minimize bias introduced by the developer or present in the data, design, or architecture the developer has relied upon. AI systems should be continuously monitored and adapted to correct for bias throughout the AI lifecycle, including by ensuring diversity among the designers and developers of AI.
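As an illustrative sketch of one element of such a dataset review, the example below compares the share of each population group in the training data with its share in the intended deployment population and flags large gaps. The tolerance and group shares are hypothetical.

```python
# Illustrative sketch only: check whether groups in the training data are
# represented in roughly the same proportions as in the deployment
# population; large gaps can signal a source of bias worth documenting.
def representation_gaps(train_share: dict, deployed_share: dict, tolerance=0.10):
    """Flag groups whose training-data share deviates from the deployment
    population share by more than `tolerance` (absolute difference)."""
    return {g: round(train_share.get(g, 0.0) - deployed_share[g], 3)
            for g in deployed_share
            if abs(train_share.get(g, 0.0) - deployed_share[g]) > tolerance}

flags = representation_gaps(
    train_share={"A": 0.70, "B": 0.25, "C": 0.05},
    deployed_share={"A": 0.50, "B": 0.30, "C": 0.20},
)
print(flags)  # {'A': 0.2, 'C': -0.15}: over- and under-represented groups
```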
Privacy and security should be considered as part of the design of any AI system, by implementing adequate measures to mitigate risks to the privacy, security, and safety of individuals. This includes, where relevant, compliance with applicable data protection regulations, technical limitations on the use and re-use of data, and state-of-the-art security and privacy-preserving measures, such as pseudonymization, anonymization, or encryption.
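By way of illustration, the sketch below encrypts a sensitive field at rest with a symmetric key, using the Fernet recipe from the third-party cryptography package (pip install cryptography). A real deployment would fetch the key from a managed key store rather than generating it in place; the field value is hypothetical.

```python
# Illustrative sketch only: symmetric encryption of a sensitive field so
# that the stored value is recoverable only with the key.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice: fetched from a secure vault
cipher = Fernet(key)

token = cipher.encrypt(b"diagnosis: example value")  # ciphertext to store
print(cipher.decrypt(token))  # plaintext, recoverable only with the key
```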
When deploying AI, it should be described, to the extent possible and where appropriate: when and how AI is used; how personal data, if any, is used; the goals, the underlying data (and any limitations of such data), and the assumptions that power a given AI system; and the limitations of that system. When using non-explainable AI in a context with the potential for direct and significant impact on individuals, extra focus must be placed on transparency, human control, and the elimination of bias.