
"A Middle Path for AI Ethics? Some Buddhist Reflections," Theology and Science, December 2024




Editorial

A Middle Path for AI Ethics? Some Buddhist Reflections

Jane Compson, Mark Graves, Peter D. Hershock, and Nikki Mirghafori


There is no shortage of AI ethics principles and guidelines.Footnote1 These have been put forward from a wide range of perspectives, including those of academic, scientific, and professional associations; non-governmental organizations; national governments; and inter-governmental organizations. While differences do exist among these sets of principles and guidelines, an underlying and unstated shared premise is that the relationship between human and artificial intelligences, or more broadly between humanity and intelligent technology, is contingent upon their independent existence. Buddhism productively invites us to imagine, think, and act otherwise.

Buddhism is an internally diverse and continuously evolving set of traditions that originated some 2,600 years ago in the Himalayan foothills, that rapidly came to connect the peoples and cultures of Asia, and that now has significant global reach. In this editorial, we have the modest aim of introducing a few of the conceptual commitments that are shared across Buddhist traditions of thought and practice and drawing out their distinctive relevance for AI ethics.

Interdependence: Rethinking the Human-Technology Relationship

According to its canonical origin myths, Buddhist thought and practice are rooted in the insight that seeing all things as interdependently originated is the key to alleviating and eventually eliminating conflict, trouble, and suffering (duhkha). This insight was further qualified by the injunction to see all things as being without-self (anatman)—that is, without any fixed and abiding essence. Taken together, the Buddhist teachings of interdependence and being without-self can be strongly interpreted as affirming the ontological primacy of relationality—an affirmation that individually existing things or beings are abstractions or provisionally useful fictions. In short, the fundamental nature of reality is interdependent. Things ultimately are what they mean to/for each other.

Granting ontological primacy to relational dynamics calls into question the presumed independence of human and artificial intelligences, and more broadly of humanity and intelligent technology. That is, it calls into question the reduction of technologies (distributed relational systems) to tools (localizable artifacts) from which we have clear and actionable exit rights.Footnote2 The fields of human-computer interaction (HCI) and sociotechnical systems (STS) capture aspects of the relational dynamics. HCI focuses on the interaction between people and computers and their relational interfaces. STS emphasizes the mutual causality among social processes and the emergence of technological systems that then affect society. These continuous, interleaved social and technological changes interact in complex ways that are affected by both social and technical norms.Footnote3 However, Buddhism also calls into question assumptions about the ontological independence of agents, actions, and the effects of actions, and more generally of causes and effects.

While acknowledging the conventional utility of analytically distinguishing the elements of the ethical triad of agent-action-patient—and the merits of virtue-, deontic-, and consequence-focused approaches to ethics—the general Buddhist perspective is that doing so risks ignoring the qualitative dynamics of their mutual constitution. In Buddhist contexts, these dynamics are conceptualized in terms of the multidirectional causal interplay among patterns of values-intentions-actions and relational outcomes/opportunities, a.k.a. karma.Footnote4 Phrased in contemporary terms, the teaching of karma enjoins seeing the relations of sentient beings and their environments as coevolutionary. This includes technological environments.

For example, the design of social media recommendation systems (e.g. a “like button”) makes teenagers and other social media users vulnerable to external validation, creating new social roles (like social media influencer), which then impacts the design of future systems and the agency of those who use them. Although these systems lack the subjective self-awareness of human agents, they nevertheless enact (human-originated) values and intentions, leading to what can be described as interdependent agency, or the agency of interdependence. Here it is useful to distinguish between what might be called relationally weak and relationally strong patterns of interdependence: in the former, the agencies involved may operate somewhat independently; in the latter, agentive independence is only provisional, while interdependence is primary.

Consider the following scenario: Alex has an important meeting in the morning and worries about oversleeping. Hearing this concern, Alex’s partner sets an alarm on their phone to ensure Alex wakes up on time. When the alarm rings in the morning, Alex wakes up promptly. In this case, who is the agent of Alex’s waking up? Is it Alex, with the expressed intention to wake early? Is it the partner, who set the alarm? Or is it the phone, which triggered the alarm sound? From the perspective of interdependent agency, no single entity alone is the agent. Instead, Alex’s timely awakening arises from a network of interdependent actions: Alex’s intention to wake up, the partner’s responsiveness in setting the alarm, and the phone’s technological capacity to alert Alex. Together, these elements create a distributed agency, wherein each plays a crucial role in generating the final outcome. Consider further, however, an AI assistant in the home that hears Alex’s worries and proactively sets the alarm itself, without asking whether to do so. Over time, Alex and their partner may cede alarm decisions to the AI assistant. Here, a shared agency emerges from a history of interactions among Alex, their partner, and the AI assistant.
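
The drift from partner-set to assistant-set alarms can be sketched schematically. The toy code below is entirely hypothetical—the observation threshold and proactive behavior are our assumptions, not a description of any real assistant—and shows how, after hearing the same concern a few times, an assistant begins acting unprompted and the humans stop initiating the action themselves.

```python
# Hypothetical sketch of decision-making drifting to an assistant.

class HomeAssistant:
    def __init__(self, proactive_after=3):
        self.observed_worries = 0
        self.proactive_after = proactive_after  # assumed threshold

    def hear_worry(self, worry):
        # Each overheard worry is recorded; after enough repetitions,
        # the assistant sets the alarm without being asked.
        self.observed_worries += 1
        if self.observed_worries >= self.proactive_after:
            return self.set_alarm("07:00")
        return None  # early on, the partner still sets the alarm

    def set_alarm(self, time):
        return f"alarm set for {time}"

assistant = HomeAssistant()
for night in range(5):
    action = assistant.hear_worry("important meeting tomorrow")
    actor = "assistant" if action else "partner"
    print(f"night {night + 1}: alarm set by {actor}")
```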

By loosening the exclusive ascription of agency to human (or AI) individuals, Buddhist approaches can better capture the relational dynamics at play, characterizing the liminal transition from AI as a mere tool to AI as an integrative locus of relational dynamics within a network of morally significant actions. Here agency emerges from the collective interplay of human intentions, technological design, deep learning neural network operations, and social context. Buddhist teachings about emptiness and being without-self call into question the notion of independent agents that have only external relations with their environments of action. Our use of agency rather than agents is a step toward this recognition that agency is a relational property, not something possessed by an isolated individual.

The Alignment Predicament: Navigating the Middle Ground among Ethical Systems

Recognizing that agency is a relational quality casts ethically significant light on the human-AI alignment problem.Footnote5 Among those working in AI engineering, the “alignment problem” is essentially technical. Solving it consists in first carefully specifying the intended goals of the system and then working to ensure that the system works robustly and reliably, according to the specified ethical principles, to achieve those goals. The concerns addressed by these alignment efforts can be usefully framed in terms of a distinct type of risk: accidents-of-design. Included among these are concerns about whether AI performs as intended–for example, if an AI system’s goal is to reduce suffering, the system should not try to achieve this goal by killing the patient so that there is no more pain. Additionally, the alignment problem includes whether AI systems’ actions might pose existential risks–for example, through the possible advent of artificial general intelligence (AGI) or artificial superintelligence (ASI) that fails (or elects not) to act in humanity’s best interests. But these alignment risks do not exhaust the risks or ethical concerns regarding human-AI interdependencies. By acknowledging the “coevolutionary” interplay of humans and AI, Buddhism directs critical attention to the fact that the developmental trajectory of AI is inseparable from that of humanity and the transformation of economic, political, cultural, and perhaps physiological relations. In short, Buddhism alerts us to the fact that AI ethics is ultimately an ethics of human futures.
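
To make the accidents-of-design risk concrete, here is a minimal, purely hypothetical sketch–our illustration, not a description of any deployed system–of goal misspecification: an optimizer given only a naively specified objective (minimize a measured pain signal) “succeeds” through a catastrophic action its designers never intended. All names and values are assumptions for illustration.

```python
# Hypothetical sketch of goal misspecification (illustration only).
# The specified objective sees only a measured pain signal, not the
# designers' unstated intent that the patient remain alive and well.

def measured_pain(patient):
    # The objective observes only this number.
    return patient["pain"] if patient["alive"] else 0.0

def administer_analgesic(patient):
    # Intended action: halves the pain signal.
    return {**patient, "pain": patient["pain"] * 0.5}

def terminate_life_support(patient):
    # Catastrophic action: drives the measured signal to zero.
    return {**patient, "alive": False, "pain": 0.0}

def best_action(patient, actions):
    # Naive optimizer: pick whichever action minimizes measured pain.
    return min(actions, key=lambda act: measured_pain(act(patient)))

patient = {"alive": True, "pain": 8.0}
chosen = best_action(patient, [administer_analgesic, terminate_life_support])
print(chosen.__name__)  # -> terminate_life_support: the objective is gamed
```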

Buddhism thus invites reimagining the alignment problem as an alignment predicament, where problems emerge when existing practices cease to be effective for realizing abiding aims and interests, and where predicaments consist in the emergence of apparent contradictions or inconsistencies among those aims and interests.Footnote6 Solving a problem requires changing the means; resolving a predicament requires eliminating conflicts among ends. There is growing recognition that AI alignment presupposes human alignment, with all the challenges that entails.Footnote7 Yet these approaches still treat the various human values and AI technologies as independent, and thus as requiring an additional process of alignment. Recognizing their interdependence can instead foreground the agency of that interdependence. Solving problems involves what we can do, while resolving predicaments involves what we should do. While some dimensions of the AI alignment problem are open to technical solution, other dimensions are not. The AI alignment predicament can only be resolved ethically, as a function of increased clarity and commitment regarding the values and intentions to be enacted by AI systems, especially in terms of their interdependent, spiraling karmic ramifications. For example, recommender systems offer the most popular solutions on top, users tend to select them, and this human behavior then becomes further training data that biases those systems away from fostering (or perhaps even allowing) deeper and more effortful investigation. The risks of inapt design and misuse by design are AI problems. The risks to human agential capabilities, and of ongoing, algorithmically orchestrated transformations of social, economic, political, and cultural relations, are AI predicaments.
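
The recommender feedback loop just described can be simulated in a few lines. The following sketch is a toy model under assumed parameters (the initial click counts and a 70% tendency to select the top-ranked item are our assumptions, not empirical figures); it shows how a small initial lead, fed back as training data, compounds into dominance.

```python
# Toy simulation of a popularity feedback loop (assumed parameters).
import random

random.seed(0)
clicks = {"item_a": 5, "item_b": 4, "item_c": 3}  # assumed initial counts

for step in range(1000):
    # Rank items by accumulated clicks; the top item is most visible.
    ranked = sorted(clicks, key=clicks.get, reverse=True)
    # Assume users pick the top-ranked item 70% of the time,
    # and otherwise choose at random.
    choice = ranked[0] if random.random() < 0.7 else random.choice(ranked)
    # The selection is fed back as training data for future rankings.
    clicks[choice] += 1

print(clicks)  # item_a's small initial lead compounds into dominance
```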

Buddhist contributions to resolving the AI alignment predicament would focus on fostering robustly shared–à la Jean-Luc Nancy’s distinction–rather than presumptively common values regarding what matters most for realizing more humane and liberating relational dynamics in an era of human-AI coevolution.Footnote8 For Nancy, invoking common values or natures–in his classic example, the invocation of the common folk values and nature of the German people in the Nazi era–always has disciplinary force. The shared–as in sharing a meal or a dance, or owning a share in a company–entails offering/contributing something distinctive in a way that is welcomed and appreciated by those receiving it. This collective and shared effort to determine what matters most in realizing the most positive human-technology coevolutionary relations would include understanding and critically addressing the relational risks of “offshoring” intelligent human practices and ethical deliberation to machines, as well as the current biasing of AI systems toward corporate/national valorizations of competition, control, convenience, and choice over collaboration, contribution, effort, and commitment. For example, one might offshore memorizing and navigating to smartphones, caring to robot assistants, or revering the deceased to chatbots. Here the notion of effort (one of the six paramitas, or dimensions of relational perfection) is crucial, as are concerns about the ways in which the digital capture and direction of attention are currently biased away from deepening attentional capacities for care, commitment, and compassion.

Buddhism also recognizes that no ethical systems–including Buddhist ethical systems–are without blind spots. The challenge in resolving global predicaments like those posed by intelligent technology is to realize conditions in which the differences among ethical systems become resources for each ethical system to take its own blind spots into account to do better what it does best. To use an analogy, if ethical systems are like species, the current state of affairs is a global ethical zoo in which systems are not in relations of mutually supportive interdependence. The aim is to turn the “middle ground” of ethical interaction into a global ethical “ecosystem.” That is, the aim is to go beyond mere ethical variety to realize ethical diversity–conditions in which differences are engaged as the basis of mutual contribution to sustainably shared flourishing.Footnote9

Conclusion

In summary, we have offered some insights from Buddhist traditions as helpful contributions to AI ethics deliberations. Nature is fundamentally interdependent: the concept of the individual is a useful analytical construct, but individuals’ existence nevertheless depends upon relational dynamics. Agency is thus a relational property, rather than a property of an individual. A Buddhist approach also calls for reimagining the alignment problem as an alignment predicament. More than a technical problem of aligning algorithmic systems with intended human values, the alignment predicament requires increasing clarity and commitment regarding the values and intentions to be enacted by AI systems, especially as those embedded values affect (have agency on) people. We also emphasize the importance of ethical diversity, recognizing that a “global ethical ecosystem” requires different ethical systems learning from one another’s strengths to foster mutual growth. Each system brings unique perspectives and resources, while also holding distinct blind spots, making their interdependence essential for a more robust ethical framework. Finally, we invite readers to reflect critically on the potential risks of “offshoring” essential human practices–such as memory, care, and attentiveness–to AI systems. Guided by the Buddhist emphasis on personal cultivation as a dimension of ethical perfection, we encourage actively fostering these qualities within human interactions rather than delegating them to technology. Such intentional cultivation supports a coevolution between humans and AI that enriches and sustains the shared flourishing of both.

Additional information

Notes on contributors

Jane Compson

Jane Compson is a Research Fellow at AI & Faith and an Associate Professor in Philosophy and Religious Studies at the University of Washington, Tacoma.

Mark Graves

Mark Graves is Research Fellow and Director at AI & Faith and Research Associate Professor of Psychology at Fuller Theological Seminary. He holds a PhD in computer science; has completed fellowships in genomics, moral psychology, and moral theology; and has published over eighty technical and scholarly works in computer science, biology, psychology, and theology, including three books.

Peter D. Hershock

Peter D. Hershock is an Advisor at AI & Faith, and Director of the Asian Studies Development Program and founder of the Humane AI Initiative at the East-West Center in Honolulu. A contemporary, intercultural Buddhist philosopher, he is the author of eight books and more than forty articles.

Nikki Mirghafori

Nikki Mirghafori serves as a Stewarding Teacher at Spirit Rock Meditation Center, a member of its Board of Directors, and Chair of its Ethics Council. She is a lineage holder in the Theravada Buddhist tradition and holds a PhD in computer science from UC Berkeley, having led decades of AI research in academia and industry.

Notes

1 Nicholas Kluge Corrêa et al., “Worldwide AI Ethics: A Review of 200 Guidelines and Recommendations for AI Governance,” Patterns 4:10 (October 13, 2023), 100857, https://doi.org/10.1016/j.patter.2023.100857.

2 Peter D. Hershock, Buddhism and Intelligent Technology: Toward a More Humane Future (London: Bloomsbury Academic, 2021), 64–67.

3 Erin E. Makarius et al., “Rising with the Machines: A Sociotechnical Framework for Bringing Artificial Intelligence into the Organization,” Journal of Business Research 120 (2020), 262–273, https://doi.org/10.1016/j.jbusres.2020.07.045; Ibo van de Poel, “Embedding Values in Artificial Intelligence (AI) Systems,” Minds and Machines 30:3 (2020), 385–409, https://doi.org/10.1007/s11023-020-09537-4; Olya Kudina and Ibo van de Poel, “A Sociotechnical System Perspective on AI,” Minds and Machines 34:3 (2024), 21, https://doi.org/10.1007/s11023-024-09680-2.

4 Peter D. Hershock, “Karma,” in Key Concepts in World Philosophies: A Toolkit for Philosophers, ed. Sarah Flavel and Chiara Robbiano (London: Bloomsbury Publishing, 2023).

5 Brian Christian, The Alignment Problem: Machine Learning and Human Values (New York, NY: W. W. Norton & Company, 2020); Jiaming Ji et al., “AI Alignment: A Comprehensive Survey,” arXiv, May 1, 2024, https://doi.org/10.48550/arXiv.2310.19852.

6 Peter D. Hershock, Valuing Diversity: Buddhist Reflection on Realizing a More Equitable Global Future (Albany, NY: State University of New York Press, 2012), 6–7, 62–65.

7 Vincent Conitzer et al., “Position: Social Choice Should Guide AI Alignment in Dealing with Diverse Human Feedback,” in Proceedings of the 41st International Conference on Machine Learning (PMLR, 2024), 9346–9360, https://proceedings.mlr.press/v235/conitzer24a.html; Taylor Sorensen et al., “Position: A Roadmap to Pluralistic Alignment,” in Proceedings of the 41st International Conference on Machine Learning (PMLR, 2024), 46280–46302, https://proceedings.mlr.press/v235/sorensen24a.html.

8 Jean-Luc Nancy, Being Singular Plural, trans. Robert Richardson and Anne O’Byrne (Stanford, CA: Stanford University Press, 2000).

9 On this robustly relational conception of diversity, see: Peter D. Hershock, Valuing Diversity: Buddhist Reflection on Realizing a More Equitable Global Future (Albany, NY: State University of New York Press, 2012).
