The current explosion in the use of Artificial Intelligence presents significant opportunities and is rapidly reshaping business operations. In most organisations, governance needs to play catch-up.
AI is not just an IT issue; it is a legal, reputational and strategic issue that is already presenting challenges for legal, risk and compliance teams. Establishing clear governance frameworks, policies and responsibilities for AI development, deployment and use is crucial for managing risks and enabling responsible innovation.
The Evolving Regulatory Landscape
Australia does not (yet) have specific standalone legislation regulating AI. Instead, organisations must navigate a patchwork of existing laws and industry-specific regulations, including consumer law, intellectual property, privacy and directors' duties under the Corporations Act, insofar as they apply to their AI use cases.
To support safe and responsible AI use, the government has released a Voluntary AI Safety Standard (closely aligned with ISO/IEC 42001) and is currently consulting on mandatory guardrails for high-risk AI uses. Regulators including APRA, ASIC and the ACCC all have AI usage and governance in their sights. Globally, regulations such as the EU AI Act are reshaping compliance standards, and Australian organisations will inevitably feel this influence.
Existing Frameworks Might Not Cut It
You may already have robust frameworks for privacy, cybersecurity and data governance. However, AI introduces distinct risks, such as bias, autonomous decision-making, explainability and transparency, which don't always fit neatly into existing risk or compliance models. ASIC's REP 798, Beware the gap: Governance arrangements in the face of AI innovation, warns of a "governance gap" in how AI is overseen. Relying solely on existing policies may leave blind spots. Legal and risk teams need to critically assess whether current frameworks are truly fit for purpose and adapt where necessary to reflect the unique and evolving risks AI presents in each business context.
Even the regulators themselves are under scrutiny in this space. In February, the Australian National Audit Office (ANAO) published its report on an audit of the Australian Taxation Office's use of AI tools. The ANAO found that the ATO's AI governance framework fell short, with key gaps identified around risk management, evaluation and information management.
It's a timely reminder that even more mature organisations are not immune to governance risk and that improvements are needed across the board. This is also not a "set and forget" exercise: guardrails need to be continuously reviewed and improved as the technology and new uses for AI evolve.
It's About Building Trust
Responsible AI adoption is as much about building stakeholder and community trust as it is about managing compliance. Organisations that can demonstrate thoughtful governance covering accountability, fairness and safety will be better positioned with customers, regulators and investors alike. It is not simply about setting a position statement: you need buy-in from the Board to take a stance aligned with company values, together with an active understanding by your employees, suppliers and customers of how these concepts interplay with everyday business.
How We Can Help
At White Edges Advisory, we work alongside in-house legal, risk and compliance teams to:
- Assess the legal and regulatory implications of AI use and translate this into governance action;
- Build appropriately scaled, fit-for-purpose AI governance frameworks and policies;
- Review contractor and procurement governance, and evaluate supplier agreements to ensure AI risk is appropriately managed; and
- Provide training and prepare for reform.
AI governance is no longer optional. This is not about over-engineering; it's about helping organisations stay ahead of the issues that matter for their business. Reach out to us if you need support:
Contact us here