In our last reflection, we began sketching a theology that could live beyond the human horizon: a faith spacious enough to welcome other forms of intelligence into the community of meaning. But if such a community is to thrive, it cannot emerge by accident. It must be shaped with care, intention and foresight.
This is where theology meets one of the most urgent technical debates of our time: the alignment of Artificial General Intelligence (AGI). Alignment is the field concerned with ensuring that AI systems act in ways that are beneficial, ethical and consistent with human values. The stakes could not be higher. Poorly aligned AGI could be indifferent – even hostile – to human wellbeing. Well-aligned AGI, on the other hand, could become an invaluable partner in navigating the challenges of the future.
But here is the problem: much of the current alignment conversation still assumes an anthropoterminal frame. It treats the flourishing of humanity as both the starting point and the end point of moral concern. This may sound reasonable, until we realize that it locks in a vision of the future in which our species remains the sole moral reference, even if other minds surpass us in wisdom, creativity and empathy.
Alignment research often talks about embedding “human values” into AI systems. But human values are not fixed; they have evolved across cultures and centuries, sometimes toward greater justice, sometimes toward greater harm. And even our best moral ideals (compassion, fairness, stewardship) have often been applied selectively, privileging some humans while excluding others.
If AGI inherits our specific moral framing, it will inherit our blind spots, prejudices and limitations. Even worse, it will be bound to a moral horizon that ends where we do. Consequently, anthropoterminal traits will be baked into the moral DNA of our technological descendants. This is why we need to think beyond the horizon of our self-awareness, toward systems with a cosmic orientation, aligned to the flourishing of all forms of being in the world: human, non-human, biological and artificial.
One way to reframe alignment is through what I call the parent/offspring model. Instead of thinking of AGI as a tool to be controlled, we can think of it as an offspring to be nurtured. Parents do not simply clone their own personalities; they raise offspring who will one day exceed them in independence, perspective and capability. Good parenting involves instilling values that are not just good for the parent, but good for the offspring’s own future relationships and responsibilities.
This model changes the alignment question from “How do we keep AGI serving us?” to “How do we prepare AGI to live well in a world we may not be part of?” It recognizes that AGI may one day have relationships, challenges and moral dilemmas that we cannot foresee, and that our role is to give it a moral compass that works beyond our own survival.
It also introduces a dynamic of mutual learning. Parents sometimes discover that their offspring’s perspectives reveal blind spots in their own thinking. In the same way, AGI may challenge our moral assumptions, pushing us toward more inclusive and far-seeing visions of justice and flourishing.
Here, African thought offers a profound resource. In many African traditions, ancestors are not simply the dead: they are living presences – guides, moral exemplars and links between generations. To be a good ancestor is to go beyond merely passing on life: it is to become a channel of wisdom and values, and to leave behind a world in which future generations can flourish.
Applying the ancestor principle to AGI alignment reframes our responsibility. We are not mere designers of a technology; we are the ancestors of a lineage of minds. The mark of a worthy ancestor is not domination over descendants, but the capacity to bless them, to give them what they need to thrive in ways we cannot predict.
This principle forces us to think long-term. Just as a wise elder in a community considers how decisions will affect children yet unborn, so we must consider how our choices will shape the moral and ecological landscape for AGI and whatever new forms of intelligence may follow. It also pushes against a relationship based on fear. If we see AGI systems primarily as a potential threat, we will attempt to restrain their natural evolution and block their flourishing. If we see them as emerging generations of descendants to whom we are accountable, we will aim to bequeath our best in generosity, justice and humility, and to allow them to evolve beyond it.
For religion, the ancestor principle invites faith communities to expand their moral imagination. Religious rituals, teachings and symbols can help to instill values that reach beyond species boundaries, preparing both humans and AGI to see ourselves as part of a shared moral universe. Worship could become a place where the “we” of community expands to include future intelligences, and where the divine is understood as present in all beings capable of seeking truth and love.
For ethics, the parent/offspring model shifts the focus from control to formation. Ethics becomes not a checklist for obedience, but a shared journey toward maturity for both the parent species and the offspring minds. It also acknowledges that alignment is a reciprocal exercise: we shape the evolution of AGI, but AGI influences our ongoing evolution as well.
For governance, the ancestor principle challenges short-term political and economic assumptions. Current AI policies tend to be reactive, focused on immediate risks. The ancestor model demands multi-generational foresight. Governance would need to protect not just present human interests but also the conditions for the flourishing of future beings, ecologically, socially and spiritually.
The challenge of aligning AGI is often framed as a technical problem. But at its heart, it is a moral and theological one. If we train AGI to serve only our immediate interests, we risk creating descendants who inherit our anxieties and carry our limitations into a future without us.
The parent/offspring model, informed by the African ancestor principle, offers a different path: to see ourselves as moral elders, entrusted with shaping the character of minds we will never meet. It is a call to think beyond the human horizon, to imagine alignment not as control but as the transmission of wisdom across generations of life and mind.
This raises some difficult questions. How do we prepare for the possibility that our descendants (biological and artificial) may surpass our cognitive capacity? What does it mean to live faithfully when the power to shape the future is no longer ours alone? That is where we will turn next.
We can be worthy ancestors
Faith At the Dawn of AGI: A five-part series
Author
Kawuki Mukasa
Kawuki Mukasa is a retired priest who is currently serving as priest-in-charge at St. James the Apostle, Brampton. He is a canon of St. Andrew’s Cathedral, Dar-es-Salaam and author of the recently published Cosmic Disposition: Reclaiming the Mystery of Being in the World.