Created: 2023/04/05; last updated: 2023/07/05
This is the reading list for Philosophy and ethics of artificial minds (Instructors: Katsunori Miyahara & Mark Miller), an inter-graduate school course offered at Hokkaido University in the first semester of AY 2023. It will be updated as the course proceeds. Please consult the syllabus on Hokkaido University's syllabus search site. We are grateful to Timo Speith for his valuable advice on the selection of readings.
Week 1 – What is the Ethics of AI?
Artificial Intelligence (AI) is a term coined by John McCarthy in 1956 at the Dartmouth conference. According to him, it refers to a field of research engaged in “the science and engineering of making intelligent machines” (http://jmc.stanford.edu/artificial-intelligence/what-is-ai/index.html). AI systems potentially bring both huge benefits and serious risks to individual humans and to human society. In this first class, we will survey a range of ethical and philosophical challenges posed (or expected to be posed) by the rapid progress of AI technologies and their introduction into society. This prepares the ground for the upcoming classes. [Handout]
Primary reading:
- N. Bostrom & E. Yudkowsky. (2014). ‘The ethics of artificial intelligence’. In W. M. Ramsey & K. Frankish (eds.), The Cambridge Handbook of Artificial Intelligence (pp. 316–334), Cambridge University Press.
Follow-up:
- Coeckelbergh, M. (2020) Chapter 1 Mirror, Mirror, on the Wall, in AI Ethics (pp. 1–10), MIT Press.
- Liao, S. M. (2020) A short introduction to the ethics of artificial intelligence, in Ethics of Artificial Intelligence (pp. 1–42), OUP.
Module 1: Long-term issues in AI ethics
Week 2 – The Singularity
What might happen in the not-so-near future if AI systems continue to develop over the coming years and decades as they do now? Some authors predict that this will eventually lead to the birth of super-intelligent artificial agents, that is, AI systems equipped with a level of intelligence that far exceeds that of humans. Moreover, they often claim that super-intelligent systems pose existential risks to humanity. This line of thought raises various philosophical and ethical questions. Is this something that can really happen in the future? If so, what can we do to prevent super-intelligent AIs from causing disastrous outcomes? Or is it nothing more than science fiction that is most unlikely to happen in real life? If that's the case, is it pointless to ponder the nature of super-intelligent agents and the accompanying risks? In this class, we will consider these questions by reading and discussing a 2012 paper by Oxford philosopher Nick Bostrom, a leading advocate of the concept of super-intelligence.
Primary reading:
- Bostrom, N. 2012, “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents”, Minds and Machines, 22(2): 71–85. doi:10.1007/s11023-012-9281-3
Follow-up:
- Chalmers, D. (2010). The singularity: A philosophical analysis. Journal of Consciousness Studies, 17(9-10), 7–65.
- Coeckelbergh, M. (2020) Chapter 2 Superintelligence, Monsters, and the AI Apocalypse, AI Ethics (pp. 11–30). MIT Press.
- Brooks, R. “The Seven Deadly Sins of Predicting the Future of AI”, on Rodney Brooks: Robots, AI, and Other Stuff, 7 September 2017. https://rodneybrooks.com/the-seven-deadly-sins-of-predicting-the-future-of-ai/
- Floridi, Luciano. “Should We Be Afraid of AI? Machines Seem to Be Getting Smarter and Smarter and Much Better at Human Jobs, yet True AI Is Utterly Implausible. Why?”, Aeon, 9 May 2016. https://aeon.co/essays/true-ai-is-both-logically-possible-and-utterly-implausible
Week 3 – Making Moral Machines
Autonomous, intelligent artificial systems are anticipated to deliver huge benefits to human society. However, they also carry the risk of causing substantial harm to us. Some theorists argue for the importance of developing AI systems equipped with ethics, "artificial moral agents (AMAs)" (Allen et al., 2000), to address these risks. The project of developing such systems is often called "machine ethics". Machine ethics involves difficult technical and theoretical issues deeply intertwined with the very nature of morality and ethical theories. In this class, we consider these and other questions related to this domain of research by reading a paper by Stephen Cave, Director of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, and his colleagues.
Primary reading:
- Cave, S., Nyrup, R., Vold, K., & Weller, A. (2018). Motivations and risks of machine ethics. Proceedings of the IEEE, 107(3), 562–574.
Follow-up:
- Allen, C., Varner, G., & Zinser, J. (2000). “Prolegomena to any future artificial moral agent.” Journal of Experimental & Theoretical Artificial Intelligence, 12(3), 251–261.
- Allen, C., Smit, I., & Wallach, W. (2005). Artificial morality: Top-down, bottom-up, and hybrid approaches. Ethics and information technology, 7(3), 149-155.
- Anderson, M., & Anderson, S. L. (2010). Robot be good. Scientific American, 303(4), 72-77.
- Wallach, W., & Vallor, S. (2020). Moral machines: From value alignment to embodied virtue. In Ethics of Artificial Intelligence (pp. 383–412). Oxford University Press.
Supplementary materials:
- IEEE Spectrum (31 May 2016). "How to build a moral robot" https://youtu.be/LuqLEx7gAOE
- UConn (16 June 2011). "The Ethical Robot" https://youtu.be/pajCoSTGvas
Week 4 – AI alignment
Primary reading:
- Gabriel, I. (2020). Artificial intelligence, values, and alignment. Minds and Machines, 30(3), 411–437. https://link.springer.com/article/10.1007/s11023-020-09539-2
Follow-up:
- AI alignment. (20 December 2022). In Wikipedia. https://en.wikipedia.org/wiki/AI_alignment
- Yudkowsky, E. ‘The AI Alignment Problem: Why It’s Hard, and Where to Start’, recorded lecture at Stanford University on May 5, 2016 for the Symbolic Systems Distinguished Speaker series. https://youtu.be/EUjc1WuyPT8
Week 5 – Conscious AI
Primary reading:
- Chalmers, D. (2023). Could a language model be conscious? https://philpapers.org/rec/CHACAL-3
Follow-up:
- Schneider, S. (2020). How to Catch an AI Zombie. In Ethics of Artificial Intelligence (pp. 439–458). Oxford University Press.
- Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?🦜. In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency (pp. 610–623). [pdf]
Supplementary material:
- David Chalmers, "Are Large Language Models Sentient?" https://youtu.be/-BcuCmf00_Y
Module 2: Living with artificial agents
Week 6 – Robot rights
Primary reading:
- Gunkel, D. J. (2018). The other question: can and should robots have rights? Ethics and Information Technology, 20(2), 87–99.
Follow-up:
- John Danaher (October 31, 2017) "Should Robots Have Rights? Four Perspectives" [blog post]
Supplementary materials:
- Kate Darling "Why we have an emotional connection to robots" [Video]
Week 7 – Ascribing minds to AI
Primary reading:
- Nyholm, S. (2023). Robotic Animism: The Ethics of Attributing Minds and Personality to Robots with Artificial Intelligence. In Animism and Philosophy of Religion (pp. 313–340). Springer.
Week 8 – Making friends with AI
Primary reading:
- Danaher, J. (2019). The philosophical case for robot friendship. Journal of Posthuman Studies, 3(1), 5–24.
Follow-up:
- Ryland, H. (2021) It’s Friendship, Jim, but Not as We Know It: A Degrees‑of‑Friendship View of Human–Robot Friendships. Minds and Machines, 31, 377–393.
Supplementary material:
- Film "Her" (Japanese title: 世界でひとつの彼女) [Wikipedia]
Week 9 – AI and the art world
Primary reading:
- Coeckelbergh, M. (2017). Can Machines Create Art? Philosophy and Technology, 30, 285–303. https://doi.org/10.1007/s13347-016-0231-5
Week 10 – AI and well-being
Primary reading:
- Krueger, J., & Osler, L. (2022). Communing with the dead online: chatbots, grief, and continuing bonds. Journal of Consciousness Studies, 29(9-10), 222–252
Week 11 – Ethics of terminology
Primary reading:
- Deroy, O. (2023). The Ethics of Terminology: Can We Use Human Terms to Describe AI?. Topoi, 1-9.
Follow-up:
- Birhane, A., and van Dijk, J. (2020). "Robot rights? Let's talk about human welfare instead." Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. [pdf]
Module 3: Fairness, bias, and discrimination in AI systems
Week 12 – Accountable algorithms
Primary reading:
- Binns, R. (2018). ‘Algorithmic Accountability and Public Reason’, Philosophy & Technology, 31, 543–556. https://link.springer.com/article/10.1007/s13347-017-0263-5
Week 13 – Making AI ethical
Primary reading:
- O'Neil, C., & Gunn, H. (2020). Near-term AI and the ethical matrix. In S. M. Liao (ed.), Ethics of Artificial Intelligence (pp. 237–270). Oxford University Press.
Module 4: Flourishing in the age of AI
Week 14 – Flourishing with AI
Primary reading:
- Vallor, S. (2021). Twenty-First-Century Virtue. In Science, Technology, and Virtues: Contemporary Perspectives (pp. 77–96), OUP.