The proliferation of Artificial Intelligence (AI) has had a profound psychological impact on society, reshaping how we view ourselves and others, our relationship with technology, and how we communicate with one another. The personification of AI and overreliance on its output can blur the boundaries between humans and machines, in turn impairing empathy, reducing real-world interaction, and worsening mental health. AI literacy is therefore essential if humans are to leverage the strengths of this rapidly accelerating technology.
Discussions around AI literacy inevitably involve AI ethics: What should AI do – and not do – in order to preserve human communication and protect mental health? The design and behaviour of AI systems are tied to their potential psychological consequences, making mental health an ethical issue central to ongoing discussions of AI and its use. Current theories of AI ethics are predominantly Western, built either on utility maximisation or on prioritising intrinsic values such as fairness, dignity, and human agency. While teaching these principles is already challenging in culturally homogeneous contexts, it becomes far more complex in multicultural settings: how does one define and measure fairness, dignity, or the public good when culturally diverse students understand these concepts differently? How can we train AI algorithms that operate across borders to accommodate multicultural definitions of ethics?
Addressing these challenges requires introducing a multicultural approach to ethics that counters what scholars call ‘Western colonialism in the technical and conceptual architecture of AI’. At SEACE2025, IAFOR invites delegates to engage in critical discussions on the intersection of AI, mental health, and education in multicultural contexts.
