Between Efficiency and Empathy: An Exploratory Study of Albanian Students' Attitudes Towards AI in Ethical Decision-Making

  • Aljula Gjeloshi, Agricultural University of Tirana, Albania
  • Anila Boshnjaku, Agricultural University of Tirana, Albania
  • Ledia Thoma, Agricultural University of Tirana, Albania
Keywords: moral agency, artificial intelligence, algorithmic governance, empathy, Albania, higher education, ethical decision-making

Abstract

This article presents an exploratory study of how university students in Albania (N = 146) perceive the role of artificial intelligence (AI) in ethical decision-making. While AI is increasingly integrated into everyday choices, little is known about how individuals in non-Western, digitally emerging contexts negotiate trust, empathy, and moral responsibility in relation to algorithmic systems. Drawing on survey data, we find that respondents maintain a strong attachment to empathy as a moral reference point (M = 3.12) and express significant reservations about delegating emotionally consequential decisions to AI (M = 3.41 for routine vs. emotional delegation). Trust in AI's ethical competence is moderate and conditional, and strongly associated with perceived safety (r = .54) and transparency. We interpret these findings through the lens of what we term the "residual moral subject", a concept describing the persistence of human moral agency in technologically mediated environments, albeit in a reconfigured form. The study contributes empirical nuance to debates on algorithmic governance and highlights the need for culturally situated research on AI ethics.

Published
2026-03-31
How to Cite
Gjeloshi, A., Boshnjaku, A., & Thoma, L. (2026). Between Efficiency and Empathy: An Exploratory Study of Albanian Students’ Attitudes Towards AI in Ethical Decision-Making. European Scientific Journal, ESJ, 22(7), 155. https://doi.org/10.19044/esj.2026.v22n7p155
Section
ESJ Social Sciences