AI Policy
Purpose
An Artificial Intelligence (AI) policy helps organizations ensure the ethical use of the technology, protect personal data, and maintain operational efficiency by implementing AI in a way that maximizes its benefits while minimizing its risks. It supports accountability, legal compliance, and public trust while fostering innovation and strategic resource allocation.
As such, this policy governs the responsible use of generative AI at Right To Play and protects the interests of the organization from the potential risks associated with the technology.
Scope
This policy applies to the use of public generative AI (Gen AI) systems (e.g. Microsoft Copilot, ChatGPT) and any AI or machine learning (ML) models or systems Right To Play develops internally.
This policy applies to all employees of Right To Play International who use AI as part of their role.
Definitions
AI model: The algorithm used to interpret, assess, and respond to data sets based on the training it has received.
AI system: The infrastructure that uses the AI model to produce output based on interpretations and decisions made by the algorithm.
Public AI: An AI system that a vendor makes available to any user who wants access and that collects and uses their inputs to improve the algorithm’s performance. Unlike private AI systems, public systems send data outside the organization.
Private AI: A proprietary AI system developed and used by the organization, keeping data within the company. RTP does not currently have any Private AI systems in place as of the date of this policy.
Responsible AI: A set of guiding principles to promote ethical use of AI.
Data Owner: A data owner is an individual who is entrusted with the authority over specific data and is responsible for ensuring that the data is accurate, complete, and that it is used in compliance with relevant laws, regulations, and policies. In the context of this document, data owners must give formal approval before a given data type can be used in an AI system.
Policy Statements
All organizational use of AI must meet the principles defined in the Responsible AI Framework:
- Privacy: Individual privacy must be respected, and any use of AI must comply with data privacy laws such as the General Data Protection Regulation (GDPR).
- Fairness and Bias Detection: Unbiased data must be used to produce fair predictions.
- Explainability and Transparency: Decisions or predictions should be explainable, meaning that the reasoning behind the decision or prediction can be understood and traced by humans. This is important for ensuring accountability, fairness, and trust in the use of AI systems.
- Safety and Security: The system must be secure, safe to use, and robust (i.e. able to withstand errors or unexpected inputs).
- Validity and Reliability: Processes must be in place to monitor the quality of the data and the performance of the model over time.
- Accountability: A human must take responsibility for any decisions made based on the model's output.
- All existing data confidentiality controls and best practices must be in place and observed when using AI as part of a business process.
- Privacy regulations and the organizational processes designed to comply with them must be followed when entering data into the AI system, especially in cases involving a public AI system (e.g. ChatGPT).
- All suspected or confirmed cases of compromised data confidentiality must be reported to the IT department, using the established channels (i.e. Helpdesk Ticket) as soon as possible.
- Data owners must give formal approval before a given data type can be used in an AI system.
- Data must be verified to meet quality standards (where they exist) before being incorporated into organizational data repositories to avoid degrading data integrity with erroneous or otherwise low-quality inputs.
- AI-generated data must be labeled as such so it can be quickly located if associated data sets must be reviewed, corrected, adjusted, recalled, etc.
- Regular user access monitoring must be in place for the AI system, model, and training data.
- Appropriate data access controls must be in place for the AI model and training data.
- Multifactor authentication must be used when signing into the AI system or accessing the AI model and training data.
- AI-generated code must not be incorporated into any of Right To Play’s systems without proper authorization.
- Private AI systems are only to be used by authorized personnel who have completed appropriate training (as determined by the AI steering committee) to protect data confidentiality and integrity, and only as part of approved business processes.
- Employees may use Gen AI for approved business processes such as research, data analysis, and communications, provided that organizational standards to protect data confidentiality and integrity, as laid out in this policy and elsewhere, are upheld.
- Employees are not permitted to enter sensitive data into public AI systems.
- Any exception permitting the use of sensitive data in a public AI system must be formally approved by the data owner before any action is taken.
- Employee use of Gen AI systems must be lawful and not jeopardize the organization’s professional reputation or brand.
- Employees will be accountable for any issues arising from their elective use of Gen AI as part of business processes, including, but not limited to: copyright violations, sensitive data exposure, poor data quality, and bias or discrimination in outputs.
- Prior to use of Gen AI, employees must complete training related to data protection, privacy, and responsible AI use.
- Employees must not violate any privacy or data protection regulations when using Gen AI systems.
- AI systems should be designed and operated in a way that respects human rights, values, and dignity. This includes ensuring that AI systems do not perpetuate or amplify discrimination, bias, or unfairness.
Employer Obligations
- Evaluate and pre-approve AI systems that employees are permitted to use at work.
- Ensure that approved systems follow privacy, security, and human rights principles that align with the organization’s values and code of conduct.
- Take all reasonable steps to mitigate the risks associated with the use of AI systems to protect the data privacy and security of sensitive company information.
- Conduct regular audits of the AI systems that are permitted for employee use to ensure they continue to meet the organization’s standards.
- Inform employees of any consequences they will face if the organization’s policies regarding AI systems are not followed.
- Complete an annual review of this policy to ensure that it is still in line with best practices.
Manager Obligations
- Ensure that employees have read and understand the organization’s AI policy.
- Serve as a first point of contact for employees should they have additional questions about the policy.
- Inform employees on how they can and cannot use AI in their work. Managers should outline what types of tasks the technology is permitted for and let employees know about any restrictions.
- Supervise employee use of permitted AI systems to ensure they are not being misused.
Employee Obligations
- Read and understand the organization’s policies and procedures regarding AI use.
- Only make use of AI systems that are permitted under this policy to complete tasks that have been previously approved by the employee’s manager.
- Disclose when AI systems have been used to complete work tasks.
- Take precautions to ensure that sensitive information (such as employee, donor, and program participant data, as well as proprietary company information) is kept confidential and not entered into third-party systems without prior approval.
- Immediately report any data privacy and security issues related to the use of AI tools to their manager.
Governing Laws and Regulations
- Copyright laws
- General Data Protection Regulation (GDPR) and similar regulations (e.g. UK GDPR, DPP, PIPEDA)
Related Policies
RTP Code of Conduct
Recruitment Policy
IT Policy
Noncompliance
Failure to comply with the standards laid out in this policy may result in disciplinary action, up to and including termination of employment.