Publications

Gladiš, Mesarčík, and Slosiarová on the ethical and fundamental rights risks of using wearable sleep monitoring devices

Gladiš, M., Mesarčík, M., & Slosiarová, N. (2024). Advising AI assistant: ethical risks of Oura smart ring. AI and Ethics, 1-13. Available at: https://link.springer.com/article/10.1007/s43681-024-00544-0

Abstract: Wearable devices with monitoring and recommendation functions are designed to provide personalised feedback and support to help individuals manage their health and well-being. One of the most widespread uses of these wearable devices is in the area of sleep monitoring. For users, this means they can make more informed decisions, and the insights from the device allow them to better influence the quality of their sleep. However, with the use of these devices, certain values such as privacy and autonomy may be at stake. This is particularly true for new artificial intelligence technologies that can provide an unprecedented level of detail about their users. According to the European regulation on artificial intelligence, these wearable assistants will be classified as high-risk and thus will have to undergo a demanding conformity assessment. This is why we chose one of the most popular wearables that provide recommendations to their users, the Oura Smart Ring, and conducted a Human Rights, Ethical and Social Impact Assessment of it. This choice was made in part due to the wealth of publicly available information about this device. We have found that it can pose a high risk to the user from several ethical and legal perspectives that can easily be overlooked in the design and use of these technologies. We have also proposed countermeasures that could, in theory, reduce their potential harmful effects if implemented. This article contributes to a better understanding of the ethical and fundamental rights risks of using wearable sleep monitoring devices and thus helps to improve the safety of their use.

Kosterec on moral agents in a vat

Kosterec, M. (2024). Moral Responsibility in a Vat. Acta Analytica, 1-8. Available at: https://link.springer.com/article/10.1007/s12136-024-00602-6

Abstract: This paper investigates an ingenious argument by Andrew Khoury which, if valid, could shed new light on some of the most relevant discussions within the field of moral philosophy. The argument is based on the idea that if we deny the phenomenon of resultant moral luck, then the proper objects of moral responsibility must be internal willings. I analyse the argument and find it unsound. The argument does not adequately account for the positions of all relevant moral actors when it comes to the moral evaluation of agents and their actions.

Gavorník, Podroužek, Oreško, Slosiarová, and Grmanová on ethical issues of smart metering and non-intrusive load monitoring

Gavorník, A., Podroužek, J., Oreško, Š., Slosiarová, N., & Grmanová, G. (2024). Beyond privacy and security: Exploring ethical issues of smart metering and non-intrusive load monitoring. Telematics and Informatics, 90, 102132. Available at: https://www.sciencedirect.com/science/article/pii/S0736585324000364

Abstract: Artificial intelligence is believed to facilitate cost-effective and clean energy by optimizing consumption, reducing emissions, and enhancing grid reliability. Approaches such as non-intrusive load monitoring (NILM) offer energy efficiency insights but raise ethical concerns. In this paper, we identify the most prominent ethical and societal issues by surveying relevant literature on smart metering and NILM. We combine these findings with empirical insights gained from qualitative workshops conducted with an electricity supplier piloting the use of AI for power load disaggregation. Utilizing the requirements for trustworthy AI, we show that while issues related to privacy and security are the most widely discussed, there are many other equally important ethical and societal issues that need to be addressed, such as algorithmic bias, uneven access to infrastructure, or loss of human control and autonomy. In total, we identify 19 such overarching themes and explore how they align with practitioners' perspectives and how they embody the seven core requirements for trustworthy AI systems defined by the Ethics Guidelines for Trustworthy AI.

Vacek on our research in AI ethics

Kosterec on Transparent Intensional Logic

Kosterec, M. (2024). Transparent Logics: Small Differences with Huge Consequences. Leiden/Boston: Brill. ISBN: 978-90-04-70333-9. Available at: https://brill.com/display/title/70237

The book presents Transparent Intensional Logic in several of its latest realisations, making a case for the system and demonstrating how the theory can be applied to a wide range of cases. The work strikes a good balance between the philosophical-conceptual and the logical-formal. Transparent Logics prioritises depth over breadth and focuses on advanced formal semantics and philosophical logic, going beyond a mere introduction to the subject and delving into the details instead.

Vacek on future AI (media, 18 July 2024, in Slovak)

Sambrotta on whether LLMs can be responsible for language production

Sambrotta, M. (2023). If God Looked Into AIs, Would He Be Able To See There Whom They Are Speaking Of? Philosophica Critica, 9, 42-54. Available at: https://philosophicacritica.ukf.sk/uploads/1/3/9/8/13980582/philosophica_critica_2_2023_final.pdf#page=42

Abstract: Can Large Language Models (LLMs), such as ChatGPT, be considered genuine language users without being held responsible for their language production? Affirmative answers hinge on recognizing them as capable of mastering the use of words and sentences through adherence to inferential rules. However, the ability to follow such rules can only be acquired through training that transcends mere formalism. Yet, LLMs can be trained in this way only to the extent that they are held accountable for their outputs and results, that is, for their language production.

Vacek on AI control

Vacek, D. (2023). Two remarks on the new AI control problem. AI and Ethics, 1-6. Available at: https://link.springer.com/article/10.1007/s43681-023-00339-9

Abstract: This paper examines the new AI control problem and the control dilemma recently formulated by Sven Nyholm. It puts forth two remarks that may be of help in (dis)solving the problem and resolving the corresponding dilemma. First, the paper suggests that the idea of complete control should be replaced with the notion of considerable control. Second, the paper casts doubt on what seems to be assumed by the dilemma, namely that control over another human being is, by default, morally problematic. I suggest that there are some contexts (namely, relations of vicarious responsibility and vicarious agency) where having considerable control over another human being is morally unproblematic, if not desirable. If this is the case, control over advanced humanoid robots could well be another instance of morally unproblematic control. Alternatively, what makes it a problematic instance remains an open question insofar as the representation of control over another human being is not sufficient for wrongness, since even considerable control over another human being is often not wrong.

Mesarčík, Slosiarová, Podroužek, and Bieliková on the regulation of generative artificial intelligence

Stance on the regulation of Generative Artificial Intelligence: Position on selected aspects of the regulation of general-purpose AIs, foundation models, and generative AI systems, as proposed in the positions of the European Parliament and the Council on the AI Act. Available at:

https://kinit.sk/publication/stance-on-the-regulation-of-generative-artificial-intelligence/

https://zenodo.org/records/10185168

Podroužek on AI in healthcare (media, 18 July 2023, in Slovak)
