
Australia-first online tool for safe AI use in healthcare


Australia is set to pioneer a safer integration of Artificial Intelligence (AI) into healthcare with the launch of a first-of-its-kind online tool.

The Digital Health Cooperative Research Centre (DHCRC), in partnership with the Department of Health and Aged Care and AI specialists at the University of Technology Sydney (UTS), is adapting the Organisation for Economic Co-operation and Development (OECD) AI Classification Framework to the unique needs of Australia’s healthcare sector.

The tool, expected to be ready for initial testing by mid-2025, will help healthcare providers, policymakers and developers classify and assess the risks of AI technologies. It addresses key concerns such as bias, explainability and robustness, helping to ensure AI solutions are deployed responsibly while improving patient outcomes.

Originally developed for global application, the OECD AI Classification Framework offers a baseline for evaluating AI systems. The Australian adaptation will reflect national policies, including emerging mandatory guardrails for AI, providing consistent guidance for developers, deployers and end users.

The framework considers various dimensions, such as stakeholder involvement, the economic context of AI use, data quality, AI model design and the system’s specific tasks. Once implemented, the tool will allow healthcare organisations to evaluate AI technologies more transparently and align with existing medical device regulations.
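To make the multi-dimensional classification concrete, the sketch below shows one hypothetical way an assessment record spanning those dimensions could be represented. The dimension names, risk scale and the conservative "highest risk dominates" roll-up are illustrative assumptions only, not the design of the DHCRC/UTS tool.

```python
from dataclasses import dataclass
from enum import Enum


class Risk(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class AISystemClassification:
    """Illustrative record covering the dimensions described above (assumed structure)."""
    name: str
    stakeholders: Risk       # who is affected and how they are involved
    economic_context: Risk   # setting and purpose of the AI use
    data_quality: Risk       # provenance, representativeness, consent
    model_design: Risk       # transparency, explainability, robustness
    task_and_output: Risk    # what the system does and how outputs are used

    def overall_risk(self) -> Risk:
        """Conservative roll-up: the highest-risk dimension determines the rating."""
        dimensions = (
            self.stakeholders,
            self.economic_context,
            self.data_quality,
            self.model_design,
            self.task_and_output,
        )
        return max(dimensions, key=lambda r: r.value)


# Example: a hypothetical triage-support chatbot assessed by a healthcare provider
triage_bot = AISystemClassification(
    name="triage-support-chatbot",
    stakeholders=Risk.HIGH,
    economic_context=Risk.MEDIUM,
    data_quality=Risk.MEDIUM,
    model_design=Risk.HIGH,
    task_and_output=Risk.HIGH,
)
print(triage_bot.overall_risk())  # Risk.HIGH
```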

The importance of responsible AI

DHCRC CEO Annette Schmiede underscored the need for a standardised approach to managing AI’s rapid adoption across sectors.

“The availability and adoption of AI is without doubt moving at a rapid pace across all sectors, including healthcare. The challenge is building clear and consistent guidance and tools, ensuring these are effective for the diverse range of audiences and AI solutions across healthcare, including developers, healthcare providers, and consumers,” Schmiede said.

Department of Health and Aged Care Assistant Secretary of Digital and Service Design Sam Peascod stressed the importance of building trust in AI technology.

“As Government looks to build community trust and promote AI adoption, we need to provide guidance on how to use AI safely and responsibly. Having a tool that can assist in classifying and performing a risk assessment of AI technologies will support the adoption of AI solutions by healthcare organisations and providers, ultimately leading to better health outcomes for consumers,” said Peascod.

The project team at UTS, including the Human Technology Institute and innovation hub UTS Rapido, will focus on making the tool user-friendly while leveraging cutting-edge AI research. Professor Adam Berry, Deputy Director of the UTS Human Technology Institute, emphasised the broader impact of consistent AI practices.

“For AI to realise its tremendous promise for all, it depends upon responsible practice. A critical first step to realising that practice is to be consistent in the documentation of how individual AI systems are used, function, and deliver impact across diverse stakeholders,” Berry said.

The tool represents a significant step in navigating the transformative impact of AI in Australian healthcare while maintaining safety and accountability.


Ritchelle is a Content Producer for Healthcare Channel, Australia’s premier resource of information for healthcare.
