Abstract
In Australian intensive care units (ICUs), Artificial Intelligence (AI) promises to enhance efficiency and improve patient outcomes. However, ethical concerns surrounding AI must be addressed before widespread adoption. We examine the ethical challenges of AI using the framework of the four pillars of biomedical ethics (beneficence, nonmaleficence, autonomy, and justice) and discuss the need for a fifth pillar of explicability. We consider the risks of perpetuating inequities, privacy breaches, and unintended harms, particularly in disadvantaged populations such as First Nations people. We advocate for a national strategy for ICUs to guide the ethical implementation of AI that aligns with existing National AI Frameworks. Our recommendations for implementing safe and ethical AI in the ICU include education, developing guidelines, and ensuring transparency in AI decision-making. A coordinated strategy is essential to balance AI's benefits with the ethical responsibility to protect patients and healthcare providers in critical care settings.