The AI Effect – ensuring good things happen and bad things don't
Artificial Intelligence (AI) presents significant opportunities to improve how government agencies conduct contract and procurement processes. However, it also poses risks. That is why a foundational guide to the use of AI is important.
The National Framework
In June 2024, the Department of Industry, Science and Resources published the National Framework for the Assurance of Artificial Intelligence in Government, setting out an approach to the safe and responsible use of AI by the Australian, state and territory governments. It not only lays a foundation for the ethical and responsible use of AI but also identifies the risks to be managed.
Understanding the National Framework for the Assurance of AI in Government
The framework is designed to ensure that AI systems used by government agencies are ethical, transparent and reliable, and that they are aligned with public values and laws. It establishes a national foundation on which jurisdictions can develop specific policies and guidance.
The framework is based on Australia's AI Ethics Principles and encompasses the following:
- Transparency: Government agencies should be transparent about their use of AI, ensuring disclosure to those who may be impacted by it. Government should comply with all laws, policies and standards for keeping reliable records of decisions, testing, information and data used in an AI system. Where AI is used in decision-making, government should provide plain explanations of how an AI system produces an outcome.
- Accountability: Government agencies must take responsibility for the AI systems they procure and deploy. This includes being accountable for any biases or errors that may arise from the system, as well as ensuring proper oversight and review.
- Fairness: Government agencies must ensure AI systems operate in a non-discriminatory manner, delivering fair outcomes and avoiding bias against any individual or group.
- Reliability and Safety: AI systems must perform as expected, ensuring they are reliable and safe to use in critical government functions. These systems must undergo rigorous testing and validation to ensure their integrity.
- Data Privacy and Security: Protecting the data used by AI systems is crucial, particularly given the sensitive nature of government data. The framework places strong emphasis on ensuring that data privacy and security measures are in place to prevent unauthorised access to or misuse of information.
- Ethical Use: Government agencies should ensure that AI systems they use are aligned with ethical guidelines and public values, are used for the benefit of society, and do not cause harm.
Impact of the framework: more rigour, greater accountability
The rise of AI has brought increasing demand for AI-powered solutions in healthcare, law enforcement, education, infrastructure management and elsewhere. Integrating AI into these systems introduces unique challenges for procurement and contract negotiation.
Below are key areas where the framework is reshaping government procurement processes:
1. More stringent vendor selection criteria
The framework requires government agencies to thoroughly assess the AI systems they procure to ensure they align with ethical and legal standards. This means applying more stringent vendor selection criteria: companies bidding for government contracts must demonstrate compliance with the framework's principles where the contract is for an AI system or encompasses its use.
Vendors should provide detailed documentation on how their AI systems address transparency, fairness, and accountability. For example, they may need to show how their algorithms are designed to avoid bias, how decision-making processes are documented and explainable, and how they plan to address potential ethical concerns.
This shift ensures that only those AI vendors with robust assurance mechanisms can secure government contracts. It also encourages vendors to invest in ethical AI systems, thus raising the overall standards of AI development.
2. Emphasis on transparency and explainability
Government agencies should prioritise AI systems that provide clear, explainable decision-making processes. This is particularly important for high-stakes government functions, such as law enforcement, social services, or healthcare, where decisions made by AI systems can significantly impact people's lives.
Government agencies may require vendors to demonstrate how their AI systems work, explaining how decisions are reached and how data is used. This ensures AI systems are not "black boxes" and that government officials can understand and oversee their operation.
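To make this concrete, a minimal sketch follows of one way a vendor might demonstrate explainability: training an inherently interpretable model whose decision rules can be printed and reviewed. The model choice, feature names and data are illustrative assumptions, not requirements of the framework.

```python
# Illustrative only: a shallow decision tree whose learned rules can be
# exported as plain text, so reviewers can trace how an outcome is reached.
# The feature names and synthetic data are hypothetical placeholders.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for application data (four hypothetical features).
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "tenure_years", "num_dependants", "prior_claims"]

# A depth-limited tree remains directly inspectable, unlike a "black box".
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Print the decision rules so officials can read how outcomes are produced.
print(export_text(model, feature_names=feature_names))
```

Artefacts of this kind, or post-hoc tools that attribute a prediction to its inputs, are the sort of evidence an agency can reasonably request during evaluation.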
Moreover, this fosters greater trust between government and the public. When AI systems are explainable, people are more likely to trust that decisions made by these systems are fair and unbiased, particularly in areas like social welfare, criminal justice, and public policy.
3. Accountability and liability in contracts
The framework's emphasis on accountability makes it important to incorporate specific accountability clauses into contracts with AI vendors. These clauses should clearly define who is responsible for any errors, biases or malfunctions that may arise from the AI system's use.
Contracts may also include provisions for regular audits of AI systems, requiring vendors to submit their algorithms for external review to ensure they remain compliant with ethical guidelines. Clauses may also be included that allow government agencies to terminate agreements if the AI system is found to be discriminatory or otherwise harmful to public interests.
4. Focus on bias and fairness in procurement processes
Addressing bias is one of the most pressing concerns in the deployment of AI in government services. The framework's emphasis on fairness requires government agencies to carefully evaluate AI systems for potential biases that could lead to unfair outcomes for certain individuals or groups.
In the procurement process, this means agencies must assess the datasets used to train AI models, ensuring they are representative and free from bias. Vendors are often required to demonstrate how their AI systems are tested for fairness and what measures they have in place to mitigate any bias that may emerge during deployment.
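As an illustration of what such a test can look like, the sketch below computes outcome rates by group and a simple demographic parity gap. The group labels, data and tolerance are hypothetical assumptions; the framework does not prescribe a specific fairness metric or threshold.

```python
# Illustrative only: comparing outcome rates across groups, a common
# fairness check ("demographic parity"). Groups, outcomes and the 0.05
# tolerance are hypothetical examples, not framework-mandated values.
import numpy as np

rng = np.random.default_rng(0)
group = rng.choice(["group_a", "group_b"], size=1000)  # protected attribute
# Simulated decisions with a deliberately unequal approval probability.
approved = rng.random(1000) < np.where(group == "group_a", 0.62, 0.55)

rates = {g: approved[group == g].mean() for g in np.unique(group)}
parity_gap = max(rates.values()) - min(rates.values())

print(f"approval rates by group: {rates}")
print(f"demographic parity gap:  {parity_gap:.3f}")
if parity_gap > 0.05:  # illustrative tolerance only
    print("gap exceeds tolerance - investigate for bias before deployment")
```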
5. Data privacy and security provisions in contracts
The framework places strong emphasis on data privacy and security. Government agencies must ensure that AI systems they procure adhere to strict data privacy regulations, particularly when dealing with sensitive personal information.
As a result, government contracts should include detailed provisions on how vendors must handle data, including requirements for data anonymisation, secure storage, and limits on data access. Vendors may also be required to conduct regular security audits to ensure their systems are protected against cyberattacks or data breaches.
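As a simplified illustration of one such provision, the sketch below pseudonymises a direct identifier with a salted hash before a record is passed to an AI system. The field names and salt handling are assumptions for illustration; real contracts would also cover key management, access controls and secure storage.

```python
# Illustrative only: pseudonymisation, one building block of the data
# handling provisions described above. Record fields and salt handling
# are simplified placeholders.
import hashlib
import secrets

SALT = secrets.token_bytes(16)  # in practice, managed via a secrets store

def pseudonymise(identifier: str) -> str:
    """Return a salted SHA-256 digest standing in for a direct identifier."""
    return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()

record = {"name": "Jane Citizen", "postcode": "2600", "case_notes": "..."}

# Strip the direct identifier but keep a stable pseudonym for record linkage.
anonymised = {
    "person_id": pseudonymise(record["name"]),
    "postcode": record["postcode"],  # quasi-identifiers need separate review
    "case_notes": record["case_notes"],
}
print(anonymised)
```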
Conclusion
The National Framework for the Assurance of Artificial Intelligence in Government is a significant step forward in ensuring the responsible use of AI in public sector operations. By emphasising principles such as transparency, accountability, fairness, and data privacy, the framework provides a solid foundation for governments to procure and deploy AI systems that align with public values and ethical standards.
This framework is reshaping how governments evaluate AI vendors and systems. By setting higher standards for AI development and implementation, it ensures that government agencies can harness the benefits of AI while minimising the risk of bias, discrimination, and unethical use.
Author: Rebecca Hegarty