Introduction
The adoption of artificial intelligence (AI) is accelerating rapidly within the not-for-profit (NFP) sector, presenting both significant opportunities and considerable ethical risks. While these technologies can enhance efficiency and help an organisation meet community needs, their use raises critical questions about data privacy, fairness, and accountability that must be carefully managed.
For any organisation to use AI safely and ethically, a robust governance framework is essential. This guide provides a practical framework to help leaders navigate the ethics of AI, manage potential risks, and ensure that any use of AI aligns with their organisation’s core purpose and values.
Understanding AI Ethics & Governance for Your NFP Organisation
Defining AI Ethics & Governance
AI ethics involves a detailed examination of the moral implications of decisions made by AI. This includes:
- The responsibilities of its developers and users
- The broader societal consequences of its deployment
The ethical dimension of AI requires a multi-disciplinary approach, drawing on fields like philosophy, law, and computer science to address its complex challenges.
AI governance, on the other hand, refers to the frameworks, policies, and regulations that oversee the development and use of AI systems. Effective governance aims to establish clear standards and guidelines that:
- Promote ethical AI practices
- Protect individual rights
- Reduce the risks associated with AI
This process involves collaboration between governments, industry leaders, and other stakeholders to ensure AI is used responsibly.
Why Ethical AI is Crucial for NFP Organisations
The NFP sector is uniquely positioned to lead the charge in using AI ethically. NFPs are built on a foundation of public trust and are dedicated to serving communities, which provides a strong ethical framework for guiding AI adoption. This focus on human-centred design and helping disadvantaged groups means the sector has the experience to ensure AI is used for social good.
For any charity, public trust is one of its most valuable assets, and maintaining that trust is essential when implementing new technologies. Your organisation must consider how it can adopt AI safely and responsibly to avoid causing harm, particularly to the vulnerable groups it serves. The purpose of using AI in this context is to enhance human decision-making and further the organisation’s charitable mission, not simply to replace human connection with automated processes.
Key Ethical & Legal Risks of Using Artificial Intelligence
Data Privacy & Cybersecurity Vulnerabilities
One of the most significant risks when you use AI is the potential exposure of sensitive information. NFP organisations often handle personal data from stakeholders, donors, and beneficiaries, which requires careful management. Using publicly available AI tools can be particularly risky, as staff might input confidential or personal information into unsecured platforms without understanding how that data will be stored or reused.
Organisations must ensure their use of AI complies with privacy and data protection laws, such as the Privacy Act 1988 (Cth). This involves updating privacy policies and collection notices to inform individuals how their personal information might be shared with AI systems.
Key risks in this area include:
Risk Type | Description |
---|---|
Data Breaches | Storing the large amounts of data required by AI systems increases the potential consequences of a security breach. |
Unauthorised Use | Sensitive content input into AI programs could be repurposed beyond the organisation’s control if the product lacks strong data protections. |
Cyber Attacks | AI systems can be targets for cyber attacks like hacking or phishing, making it vital for charities to stay safe from cyber threats with a strong data security plan. |
Bias, Discrimination & Fairness Concerns
AI tools learn from the data they are trained on, which means they can perpetuate and amplify existing biases. If the training data reflects historical or societal biases, the AI system may produce unfair or discriminatory outcomes for certain individuals or groups. This is a critical ethical concern for NFP organisations dedicated to serving diverse and often vulnerable communities.
For example, an AI tool used to screen job applications might unfairly favour certain demographics if its training data reflects past hiring biases. This could lead to violations of anti-discrimination laws.
Other documented instances of bias have included:
- Banking sector AI that assigned lower credit limits to women than men.
- Justice systems where AI has discriminated against people of colour.
- Image generation tools that produce stereotypical results, such as only showing older men in response to a prompt for “company directors.”
Misinformation & Intellectual Property Infringement
The use of AI to generate content carries legal risks if the output is inaccurate or misleading. If your organisation publishes AI-generated content that contains false information, it could be held liable for spreading misinformation or breaching the Australian Consumer Law (ACL), which can give rise to consumer law disputes. Furthermore, if the content damages an individual’s reputation, it could be considered defamation.
There are also significant concerns regarding intellectual property infringement. AI systems that generate content by drawing on material protected by copyright may infringe on the owner’s rights if used without proper authorisation.
Key legal points to consider include:
Legal Area | Key Consideration |
---|---|
Copyright | In Australia, it is unclear if an AI can be an “author.” Using a model may reproduce parts of its training data, potentially infringing copyright if no appropriate licence is in place. |
Patent and Trademark | An AI program that replicates patented technology or uses trademarks without permission may lead to infringement claims. |
Environmental & Societal Impacts of AI Use
The environmental cost of AI is a frequently overlooked risk. Generative AI models require vast amounts of energy to train and operate, particularly when hosted in cloud data centres, resulting in significant carbon emissions. Some estimates suggest AI’s environmental impact could rival or even exceed that of the entire aviation sector.
Beyond the environmental concerns, there is a societal risk in how AI is implemented within an organisation. Using AI simply to reduce staff numbers or fill gaps that should be occupied by people can conflict with an organisation’s core purpose and values. The focus of AI should be on enhancing human decision-making and supporting staff, not merely on driving efficiency at the cost of human roles.
Developing an Artificial Intelligence Governance Framework
Adopting Australia’s AI Ethics Principles
The Australian Government has developed a voluntary framework to guide the responsible and ethical use of AI. These eight principles serve as a foundation for any NFP looking to implement AI safely and build trust with its stakeholders.
The core principles for the responsible use of AI include:
Principle | Description |
---|---|
Human, social and environmental wellbeing | AI systems should be designed to benefit individuals, society, and the environment throughout their entire lifecycle. |
Human-centred values | The technology must respect human rights, diversity, and the autonomy of individuals at every stage. |
Fairness | AI systems need to be inclusive and accessible, ensuring they do not create or perpetuate unfair discrimination against any individuals or groups. |
Privacy protection and security | A key part of the ethics of AI is upholding privacy rights and data protection, with a strong focus on ensuring the security of all data used. |
Reliability and safety | Organisations must ensure their AI systems operate reliably and safely in accordance with their intended purpose. |
Transparency and explainability | There should be transparency so people can understand when they are being significantly impacted by an AI system. |
Contestability | When an AI system’s decision-making significantly affects a person or group, a timely process must be available to challenge the outcome. |
Accountability | The individuals and organisations responsible for each phase of the AI lifecycle should be identifiable and accountable for the system’s outcomes. |
Adopting this framework can help ensure that AI systems are developed and deployed in a manner that is secure, reliable, and aligns with community values.
Integrating FATE Principles for Responsible AI Use
To further strengthen an AI governance framework, organisations can integrate the FATE principles: Fairness, Accountability, Transparency, and Explainability (the final “E” is sometimes expanded as Ethics). These principles are designed to support decision-making, build trust among stakeholders, and make AI systems more understandable and controllable.
The FATE principles help ensure that AI outputs can be relied upon confidently and securely to inform all relevant parties:
FATE Principle | Description |
---|---|
Fairness | Achieved by actively managing bias and discrimination. |
Accountability | Requires clear governance structures to address how issues are rectified. |
Transparency | Involves providing clear information about how AI systems generate outputs, which helps build confidence. |
Explainability | Exposes the reasoning behind specific decisions without revealing the entire mechanics of the process. |
This approach helps prevent errors and increases trust in AI systems, making them more effective and acceptable to users and stakeholders.
Leveraging Voluntary Standards & Risk Management Frameworks
Beyond guiding principles, established international standards offer formal guidance for developing a robust AI governance framework. NFPs can leverage these voluntary standards to manage risks, guide deployment, and ensure continual improvement. These frameworks provide a structured way to balance innovation with governance, addressing challenges like ethical considerations and transparency.
Several key standards provide a pathway for responsible AI implementation:
Standard / Framework | Purpose |
---|---|
Framework for AI systems using ML (ISO/IEC 23053:2022) | Establishes a framework for describing a generic AI system using machine learning, outlining its components and their functions. |
AI – Guidance on risk management (ISO/IEC 23894:2023) | Provides specific guidance on how an organisation can manage risks related to AI and integrate risk management into its AI-related activities. |
AI – Management systems (ISO/IEC 42001:2023) | Specifies the requirements for establishing, implementing, and continually improving an Artificial Intelligence Management System (AIMS). |
AI Risk Management Framework (AI RMF 1.0) (NIST) | A voluntary resource to help organisations design, develop, and use AI systems to manage risks and promote trustworthy and responsible AI. |
How Your Organisation Can Implement Responsible AI Practices Ethically
Creating a Comprehensive AI Policy for Your Organisation
A formal AI policy is essential for establishing a governance framework that guides the responsible use of AI. This policy should clearly define acceptable uses of AI, outline risk mitigation strategies, and ensure all practices align with your organisation’s existing policies on data privacy, information security, and discrimination.
An effective AI policy should include several key components:
Policy Component | Description |
---|---|
Purpose and Scope | States the policy’s objective (e.g., ensuring ethical AI use) and defines its scope, including who it applies to and which tools are covered. |
Guiding Principles | Outlines the core principles for AI use, such as ethical practices, transparency with stakeholders, clear accountability, and legal compliance. |
Governance and Oversight | Appoints a specific committee or officer to oversee all AI activities, ensuring board oversight and expert involvement. |
Risk and Data Management | Implements procedures for regular risk assessments (bias, security, privacy) and details data protection measures to ensure compliance with the Privacy Act 1988 (Cth). |
Training and Awareness | Commits to providing ongoing training for all staff on the benefits, risks, and responsible use of AI, as well as the policy itself. |
Monitoring and Compliance | Establishes processes for continuously monitoring AI systems and outlines consequences for non-compliance, including a schedule for regular policy reviews. |
Conducting AI Risk Assessments & Maintaining Human Oversight
To implement responsible AI practices, your organisation should adopt a robust risk assessment system or integrate AI into your existing risk management framework, for which a strong ACNC risk register is an essential tool. This process involves conducting regular assessments of AI projects to identify and address potential issues before they cause harm.
An oversight committee can help establish clear roles and responsibilities for AI initiatives and ensure the organisation stays current with changing legal requirements.
Key areas to address in a risk assessment include:
- Potential for biases in algorithms and data sets
- Data security and privacy vulnerabilities
- Lack of transparency in AI decision-making processes
- The risk of misuse or unintended consequences
A critical component of this framework is maintaining meaningful human oversight. AI should be used as a tool to support and enhance human decision-making, not replace it entirely.
Staff must be empowered to critically review, question, and override AI-generated outputs, particularly in high-risk scenarios where decisions could significantly impact stakeholders.
The Importance of Staff Training & Education on AI Ethics
Educating staff and volunteers is fundamental to ensuring they use AI responsibly and ethically. Training should build a clear understanding of AI’s capabilities and limitations, the specific risks involved, and how to apply the technology in a way that aligns with your organisation’s purpose and values.
Effective training programs should provide clear guidelines and procedures for using AI tools for tasks like content creation or problem-solving. This education ensures that everyone in the organisation understands their responsibilities.
By promoting a culture of responsible experimentation and learning, you can empower your team to leverage AI safely while upholding the highest ethical standards in all their work.
The Board’s Role in Ethical AI Governance & Strategy
Fulfilling Director Duties & Responsibilities for AI Use
The board, committee members, and office-holders have the ultimate responsibility for overseeing risk management within an organisation as part of their corporate governance duties, which extends to the use of AI. This responsibility is rooted in legal duties, including the duty to act with reasonable care and diligence. For charities, these obligations are outlined in standards like the ACNC Governance Standard 5.
In practice, fulfilling these duties is a key part of the risks and responsibilities for directors of Australian charities, requiring them to be actively engaged in the organisation’s AI strategy. They must:
- Understand how the organisation is using AI and manage the potential risks associated with its implementation
- Ensure that any use of AI is ethical and aligns with the organisation’s core purpose and values
- Act honestly and fairly in the best interests of the organisation and for its charitable purposes
- Oversee AI innovations to ensure they are managed by individuals with the necessary skills and knowledge
Asking the Right Strategic Questions About AI Ethics
A critical role for the board is to guide the ethical implementation of AI by asking management the right strategic questions. One of the biggest mistakes a board can make is delegating decisions about AI to frontline staff without providing strategic oversight, as governance decisions must be made at the board level.
To ensure AI is adopted responsibly, boards should lead conversations around key ethical considerations, including:
Topic Area | Key Strategic Question |
---|---|
Data and Security | How will the organisation protect sensitive client or organisational information from being exposed through AI tools? What is the process for informing stakeholders about how their data is used and stored? |
Bias and Fairness | What steps are being taken to ensure AI systems do not exclude, disadvantage, or perpetuate unfair discrimination against certain individuals or groups? |
Human Oversight | How will the organisation ensure that human judgment and experience remain central to any decision-making process involving AI? Is AI being used to fill gaps that should be filled by people, and does this align with the organisation’s values? |
Governance Framework | Does the organisation have an acceptable use policy or a formal AI governance framework in place to guide its implementation and manage risks? |
Environmental and Societal Impact | What is the environmental impact of the AI tools being considered, and how do those climate costs weigh against the organisation’s societal purpose and values? |
Conclusion
Adopting AI presents significant opportunities for the NFP sector, but it must be managed through a robust ethical governance framework to mitigate serious risks related to data privacy, bias, and legal compliance. The board holds the ultimate responsibility for this oversight, ensuring that any use of AI aligns with the organisation’s core purpose and values.
Understanding the complexities of AI governance requires specialised legal insight to ensure your organisation remains compliant and protected. For trusted expertise tailored to the unique needs of the NFP sector, contact our team of not-for-profit & charity lawyers at LawBridge today to ensure your organisation can innovate confidently and ethically.
Frequently Asked Questions
What are the main legal risks of using AI for an NFP organisation?
The main legal risks for an NFP using AI include spreading misleading information, defamation, infringing on intellectual property, breaching data privacy laws, and perpetuating unlawful discrimination. These risks arise from existing laws like the ACL, privacy legislation, and anti-discrimination acts that apply to the use of AI.
How can our organisation ensure its use of AI is not biased?
Your organisation can ensure its use of AI is not biased by conducting regular bias assessments of its algorithms and maintaining meaningful human oversight to review and challenge AI-generated outcomes. It is also crucial to include diverse human experiences in the planning and governance processes to identify and mitigate potential discrimination.
Does our NFP organisation need a formal AI policy?
Yes, your NFP organisation needs a formal AI policy to establish clear guidelines for acceptable use and to manage the associated risks. This policy should define responsibilities, outline risk management procedures, and ensure all AI applications align with your organisation’s core purpose and values.
What is the board’s role in overseeing AI?
The board holds the ultimate responsibility for overseeing AI risk management, which involves understanding how the organisation uses AI and ensuring its application is ethical. Directors must manage the associated risks and approve a formal governance framework that aligns with the organisation’s charitable purpose.
How can we protect sensitive client data when using AI tools?
You can protect sensitive client data by establishing clear rules that prohibit staff from entering confidential or personal information into unsecured public AI platforms. Your organisation should also use AI products with strong data protections and be transparent with stakeholders about how their data is being used and stored.
Should we tell stakeholders and donors that we are using AI?
Yes, you should inform stakeholders and donors that you are using AI, as transparency is crucial for maintaining public trust. Your organisation can do this by developing a plain-language statement that explains which tools are used and how they help to further your charitable mission.
What are Australia’s AI Ethics Principles?
Australia’s AI Ethics Principles are a set of eight voluntary guidelines designed to promote the safe, secure, and reliable use of AI. They cover human wellbeing, human-centred values, fairness, privacy protection, reliability, transparency, contestability, and accountability.
Can AI replace human decision-making in our organisation?
No, AI should be used to support and enhance human decision-making, not replace it entirely. Staff must be trained to critically review AI-generated outputs and empowered to question or override them to ensure human judgment remains central to important decisions.
What is the environmental impact of using AI?
The environmental impact of using AI can be significant, as generative models consume vast amounts of energy and produce substantial carbon emissions, with some estimates suggesting these could rival those of the entire aviation sector. Boards should factor these environmental costs into their procurement decisions and consider more sustainable alternatives when available.