Class readings (Last update: 2024/05/01)
Week 1 (4/10) Introduction
Week 2 (4/17) Superintelligence and AI alignment [KM]
Week 3 (4/24) Making moral machines [KM]
Week 4 (5/1) Fairness in AI [KM]
Week 5 (5/8) Accountability in AI [MM]
5/15 No Class
Week 6 (5/22) Trust in AI
Week 7 (5/29) Law and regulation (1) [MM]
Week 8 (6/5) Law and regulation (2) [KM; Professor Hin-Yan Liu]
6/12 No Class
Week 9 (6/19) Living with artificial others [KM]
Week 10 (6/26) Ontology of chatbots [KM]
Week 11 (7/3) Robot rights [MM] Online
Week 12 (7/10) Flourishing with AI
Week 13 (7/17) Guest Lecture: Dr. Timo Speith (University of Bayreuth)
Week 14 (7/24) Essay writing session [KM]
Week 15 (7/31) General discussion [KM/MM]
Week 2 (4/17) Superintelligence and AI alignment [KM]
- Bostrom, N. (2012). The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents. Minds & Machines 22: 71–85.
Supporting materials:
- Russell, S. (2020). Artificial Intelligence: A binary approach. In S. M. Liao (ed.) Ethics of Artificial Intelligence (pp. 327–340). Oxford University Press.
- Yudkowsky, E. (2016). The AI Alignment Problem: Why It’s Hard, and Where to Start. The 26th Annual Symbolic Systems Distinguished Speaker series. [PDF] [YouTube]
- Coeckelbergh, M. (2020) AI Ethics. Chapter 2 Superintelligence, monsters, and the AI apocalypse (pp. 11–29), MIT Press.
- Gebru, T., & Torres, E. P. (2024) The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence. First Monday, 29(4). https://doi.org/10.5210/fm.v29i4.13636
Week 3 (4/24) Making moral machines [KM]
- Wallach, W., & Vallor, S. (2020). Moral Machines: From Value Alignment to Embodied Virtue. In S. M. Liao (ed.) Ethics of Artificial Intelligence (pp. 383–412). Oxford University Press.
Supporting materials:
- Wallach, W. (2010). Robot minds and human ethics: the need for a comprehensive model of moral decision making. Ethics Inf Technol 12, 243–250. https://doi.org/10.1007/s10676-010-9232-8 [PDF]
- Van Wynsberghe, A., & Robbins, S. (2019). Critiquing the reasons for making artificial moral agents. Science and engineering ethics, 25, 719–735. https://doi.org/10.1007/s11948-018-0030-8
- Formosa, P., & Ryan, M. (2021). Making moral machines: why we need artificial moral agents. AI & Society, 36(3), 839–851. [PDF]
- Crook, N., & Corneli, J. (2021). The Anatomy of moral agency: A theological and neuroscience inspired model of virtue ethics. Cognitive Computation and Systems, 3(2), 109–122. https://doi.org/10.1049/ccs2.12024
Week 4 (5/1) Fairness in AI [KM]
- Gebru, T. (2020). Race and Gender. In M. D. Dubber et al. (eds.) The Oxford Handbook of Ethics of AI (pp. 253–270). Oxford University Press.
Supporting materials:
- Buolamwini, J., & Gebru, T. (2018, January). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on fairness, accountability and transparency (pp. 77–91). PMLR. [PDF]
- Conitzer, V., Hadfield, G. K., & Vallor, S. (2022). Technical Perspective: The Impact of Auditing for Algorithmic Bias. Communications of the ACM, 66(1), 100. [PDF]
- Birhane, A., Steed, R., Ojewale, V., Vecchione, B., & Raji, I. D. (2024). AI auditing: The broken bus on the road to AI accountability. arXiv preprint arXiv:2401.14462. [PDF]
- Binns, R. (2018, January). Fairness in machine learning: Lessons from political philosophy. In Conference on fairness, accountability and transparency (pp. 149–159). PMLR. [PDF]
Week 5 (5/8) Accountability in AI [MM]
- Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and information technology, 6, 175–183.
Supporting materials:
- Binns, R. (2018). Algorithmic accountability and public reason. Philosophy & technology, 31(4), 543–556. [PDF]
- Hevelke, A., & Nida-Rümelin, J. (2015). Responsibility for crashes of autonomous vehicles: An ethical analysis. Science and engineering ethics, 21, 619–630. [PDF]
- Sparrow, R. (2007). Killer robots. Journal of applied philosophy, 24(1), 62–77. [PDF]
- Danaher, J. (2016). Robots, law and the retribution gap. Ethics and Information Technology, 18(4), 299–309. [PDF]
- Novelli, C., Taddeo, M. & Floridi, L. (2023) Accountability in artificial intelligence: what it is and how it works. AI & Society. https://doi.org/10.1007/s00146-023-01635-y
- Vallor, S., & Ganesh, B. (2023). Artificial intelligence and the imperative of responsibility: Reconceiving AI governance as social care. In The Routledge Handbook of Philosophy of Responsibility (pp. 395–406). Routledge.
- Liu, H. Y. (2016). Refining responsibility: differentiating two types of responsibility issues raised by autonomous weapons systems. In Nehal Bhuta et al. (eds.) Autonomous weapons systems: Law, ethics, policy (pp. 325–344). Cambridge University Press.
- Baum, K., Mantel, S., Schmidt, E., & Speith, T. (2022). From responsibility to reason-giving explainable artificial intelligence. Philosophy & Technology, 35(1), 12. https://doi.org/10.1007/s13347-022-00510-w
5/15 No Class
Week 6 (5/22) Trust in AI
- Simion, M., & Kelp, C. (2023). Trustworthy artificial intelligence. Asian Journal of Philosophy, 2(1), 8. https://doi.org/10.1007/s44204-023-00063-5
Supporting materials:
- Ryan, M. (2020). In AI we trust: ethics, artificial intelligence, and reliability. Science and Engineering Ethics, 26(5), 2749–2767. https://doi.org/10.1007/s11948-020-00228-y
- Alvarado, R. (2022). Should we replace radiologists with deep learning? Pigeons, error and trust in medical AI. Bioethics, 36(2), 121–133.
- Nyrup, R. (2023). Trustworthy AI: a plea for modest anthropocentrism. Asian Journal of Philosophy, 2(2), 40. https://doi.org/10.1007/s44204-023-00096-w
Week 7 (5/29) Law and regulation (1) [MM]
- Bryson, J. (2020). The Artificial Intelligence of the Ethics of Artificial Intelligence: An introductory overview for law and regulation. In M. D. Dubber et al. (eds.) The Oxford Handbook of Ethics of AI (pp. 3–27). Oxford University Press.
Supporting material:
- UNESCO (2021). Recommendation on the Ethics of Artificial Intelligence. [PDF]
Week 8 (6/5) Law and regulation (2) [KM; Professor Hin-Yan Liu]
- Liu, H. Y., Maas, M., Danaher, J., Scarcella, L., Lexer, M., & Van Rompaey, L. (2020). Artificial intelligence and legal disruption: a new model for analysis. Law, Innovation and Technology, 12(2), 205–258.
Supporting materials:
- Hagendorff, T. (2020) The Ethics of AI Ethics: An Evaluation of Guidelines. Minds & Machines 30, 99–120. https://doi.org/10.1007/s11023-020-09517-8
- Munn, L. (2023). The uselessness of AI ethics. AI and Ethics, 3(3), 869–877. https://link.springer.com/article/10.1007/s43681-022-00209-w
- Lundgren, B. (2023). In defense of ethical guidelines. AI and Ethics, 3(3), 1013–1020. https://link.springer.com/article/10.1007/s43681-022-00244-7
- Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature machine intelligence, 1(9), 389–399. [PDF]
6/12 No Class
Week 9 (6/19) Living with artificial others [KM]
- Kureha, M. (2023) On the moral permissibility of robot apologies. AI & Society. https://doi.org/10.1007/s00146-023-01782-2
Supporting materials:
- Danaher, J. (2019). The philosophical case for robot friendship. Journal of Posthuman Studies, 3(1), 5–24. [PDF]
- Ryland, H. (2021). It’s friendship, Jim, but not as we know it: A degrees-of-friendship view of human–robot friendships. Minds and Machines, 31(3), 377–393. [PDF]
- Turkle, S. (2011). Alone Together: Why we expect more from technology and less from each other. Chapter 3 True Companions (pp. 53–66). Basic Books.
Week 10 (6/26) Ontology of chatbots [KM]
- Mallory, F. (2023). Fictionalism about chatbots. Ergo: an Open Access Journal of Philosophy, 10: 38. [PDF]
Supporting materials:
- Grodniewicz, J. P., & Hohol, M. (2024). Therapeutic Chatbots as Cognitive-Affective Artifacts. Topoi, 1–13. https://link.springer.com/article/10.1007/s11245-024-10018-x
Week 11 (7/3) Robot rights [MM] Online
- Gunkel, D. J. (2018). The other question: can and should robots have rights? Ethics and Information Technology, 20, 87–99.
Supporting materials:
- Bryson, J. J., Diamantis, M. E., & Grant, T. D. (2017). Of, for, and by the people: the legal lacuna of synthetic persons. Artificial Intelligence and Law, 25(3), 273–291. [PDF]
- Danaher, J. (2020). Welcoming robots into the moral circle: a defence of ethical behaviourism. Science and engineering ethics, 26(4), 2023–2049. [PDF]
- Darling, K. (2016). Extending legal protection to social robots: The effects of anthropomorphism, empathy, and violent behavior towards robotic objects. In R. Calo, A. M. Froomkin, & I. Kerr (eds.), Robot law (pp. 213–232). Edward Elgar Publishing. [PDF]
- Shevlin, H. (2021). How Could We Know When a Robot was a Moral Patient? Cambridge Quarterly of Healthcare Ethics, 30(3), 459–471. doi:10.1017/S0963180120001012 [PDF]
Week 12 (7/10) Flourishing with AI
- Vallor, S. (2017). AI and the Automation of Wisdom. Philosophy and Computing: Essays in Epistemology, Philosophy of Mind, Logic, and Ethics, 161–178.
Supporting materials:
- Giubilini, A., & Savulescu, J. (2018). The artificial moral advisor. The “ideal observer” meets artificial intelligence. Philosophy & technology, 31, 169–188. [PDF]
- Danaher, J. (2018). Toward an ethics of AI assistants: An initial framework. Philosophy & Technology, 31(4), 629–653. [PDF]
- Vallor, S. (2015). Moral deskilling and upskilling in a new machine age: Reflections on the ambiguous future of character. Philosophy & Technology, 28, 107–124. [PDF]
Week 13 (7/17) Guest Lecture: Dr. Timo Speith (University of Bayreuth)
Week 14 (7/24) Essay writing session [KM]
Week 15 (7/31) General discussion [KM/MM]
Topics not covered in this course
Conscious AI
AI and mental health
AI and science
Conscious AI
- Chalmers, D. (2023). Could a language model be conscious? https://philpapers.org/rec/CHACAL-3
- David Chalmers, "Are Large Language Models Sentient?" https://youtu.be/-BcuCmf00_Y
- Schneider, S. (2020). How to Catch an AI Zombie. In S. M. Liao (ed.) Ethics of Artificial Intelligence (pp. 439–458). Oxford University Press. [PDF]
AI and mental health
- Grodniewicz, J. P., & Hohol, M. (2024). Therapeutic Chatbots as Cognitive-Affective Artifacts. Topoi, 1–13. https://link.springer.com/article/10.1007/s11245-024-10018-x
- Fulmer, R., Joerin, A., Gentile, B., Lakerink, L., & Rauws, M. (2018). Using psychological artificial intelligence (Tess) to relieve symptoms of depression and anxiety: randomized controlled trial. JMIR Mental Health, 5(4), e64. https://www.ncbi.nlm.nih.gov/pubmed/30545815
- Neuman, Y., Cohen, Y., Assaf, D., & Kedma, G. (2012). Proactive screening for depression through metaphorical and automatic text analysis. Artificial Intelligence in Medicine, 56(1), 19–25. https://www.ncbi.nlm.nih.gov/pubmed/22771201
- Krueger, J., & Osler, L. (2022). Communing with the dead online: chatbots, grief, and continuing bonds. Journal of Consciousness Studies, 29(9–10), 222–252. [PDF]
- Fabry, R. E., & Alfano, M. (2024). The Affective Scaffolding of Grief in the Digital Age: The Case of Deathbots. Topoi, 1–13.
AI and science
- Messeri, L., & Crockett, M. J. (2024). Artificial intelligence and illusions of understanding in scientific research. Nature, 627, 49–58. https://doi.org/10.1038/s41586-024-07146-0
- Nakadai, R., Nakawake, Y. & Shibasaki, S. (2023) AI language tools risk scientific diversity and innovation. Nat. Hum. Behav. 7, 1804–1805.
- Science News Staff (2017) AI is changing how we do science. Get a glimpse. Science. https://www.science.org/content/article/ai-changing-how-we-do-science-get-glimpse
- Birhane, A., Kasirzadeh, A., Leslie, D., & Wachter, S. (2023). Science in the age of large language models. Nature Reviews Physics, 5(5), 277–280. www.nature.com/articles/s42254-023-00581-4
- Wang, H., Fu, T., Du, Y., Gao, W., Huang, K., Liu, Z., ... & Zitnik, M. (2023). Scientific discovery in the age of artificial intelligence. Nature, 620(7972), 47–60. https://www.nature.com/articles/s41586-023-06221-2
- Duede, E. (2023). Deep learning opacity in scientific discovery. Philosophy of Science, 90(5), 1089–1099. https://doi.org/10.1017/psa.2023.8
- Kureha, M., & Kukita, M. (2020). "AI and Scientific Research" (in Japanese). In Artificial Intelligence and Human Society (『人工知能と人間・社会』), pp. 122–169.