Between Efficiency and Empathy: Evidence of Ethical Ambivalence and the Residual Moral Subject in AI-Mediated Decision-Making
Abstract
The rapid integration of artificial intelligence (AI) into domains involving ethical judgment has generated renewed concern about the status of human moral agency. While much of the existing literature focuses on either technical optimization or normative prescription, far fewer studies empirically investigate how human subjects themselves perceive moral responsibility under algorithmic mediation. This article presents an empirical–theoretical analysis of moral ambivalence in the age of AI, drawing on survey data (N = 146) collected from university students. The findings suggest that contemporary subjects neither fully reject nor uncritically accept AI as a moral authority. Instead, they occupy an intermediate position characterized by ethical ambivalence: technical efficiency increasingly competes with empathy, contextual judgment, and personal responsibility. We argue that this condition gives rise to a “residual moral subject,” whose agency is not eliminated but progressively reshaped under algorithmic governance.
Copyright (c) 2026 Aljula Gjeloshi, Anila Boshnjaku, Ledia Thoma

This work is licensed under a Creative Commons Attribution 4.0 International License.


