Fraternity in the age of AI
Our global appeal for peaceful human coexistence and shared responsibility
Rome, September 12, 2025
to His Holiness Pope Leo XIV
to all Global Leaders
to all People of Good Will.
English version (also available in العربية-الإمارات, Deutsch, Español-España, Français-Canada, Français, עברית-ישראל, हिन्दी-भारत, Italiano, 日本語, 한국어, Magyar, Português-Brasil, Svenska, 简体中文)
Moved by a deep desire for a future in which humans continue to shape society and its decisions, we, an independent roundtable of experts, technology leaders, thought leaders, and scholars from many nations, backgrounds and faiths, make the following appeal for a future in which AI is developed responsibly, by and for the people.
The choices we make today about AI will fundamentally shape the world we leave to future generations. AI is already causing significant harm, widening inequalities, concentrating power in the hands of a few, and damaging the environment. Vast and rapidly growing sums are devoted to creating agentic technologies with the potential to surpass human intelligence – what many in the AI research community refer to as “superintelligence”. These challenges call for moral leadership and urgent, concrete actions.
Artificial intelligence presents significant opportunities to advance scientific discovery and mutual human understanding, transform healthcare, improve governance and, more broadly, foster sustainable, inclusive prosperity. However, it also poses serious risks, as described in the International Scientific Report on AI Safety, including job displacement, reduction of individual freedoms, concentration of power, warfare, disinformation and manipulation, mass surveillance, environmental impacts, and threats to human welfare.
To harness legitimate opportunities while mitigating the costs and risks, it is essential to establish the foundations for human flourishing, as well as well-defined boundaries rooted in respect for dignity, community, human and environmental rights, and accountability.
In this spirit of fraternity, hope and caution, we call upon your leadership to uphold the following principles and red lines to foster dialogue and reflection on how AI can best serve our entire human family:
- Human life and dignity: AI must never be developed or used in ways that threaten, diminish, or disqualify human life, dignity, or fundamental rights. Human intelligence – our capacity for wisdom, moral reasoning, and orientation toward truth and beauty – must never be devalued by artificial processing, however sophisticated.
- AI must be used as a tool, not an authority: AI must remain under human control. Building uncontrollable systems or over-delegating decisions is morally unacceptable and must be legally prohibited. Therefore, development of superintelligent AI technologies (as described above) should not be allowed until there is broad scientific consensus that it will be done safely and controllably, and there is clear and broad public consent.
- Accountability: only humans have moral and legal agency, and AI systems are and must remain legal objects, never subjects. Responsibility and liability reside with developers, vendors, companies, deployers, users, institutions, and governments. AI cannot be granted legal personhood or “rights”.
- Life-and-death decisions: AI systems must never be allowed to make life-or-death decisions, whether in military applications (in armed conflict or peacetime), law enforcement, border control, healthcare, or judicial proceedings.
- Safe and ethical development: Developers must design AI with safety, transparency, and ethics at its core, not as an afterthought. Deployers must consider the context of use and potential harms and are subject to the same safety and ethical principles as developers. Independent testing and adequate risk assessment must be required before deployment and throughout the entire lifecycle.
- Stewardship: Governments, corporations, and all other actors must not weaponize AI for any kind of domination, illegal wars of aggression, coercion, manipulation, social scoring, or unwarranted mass surveillance.
- Responsible design: AI should be designed and independently evaluated to avoid unintentional and catastrophic effects on humans and society, for example through design giving rise to deception, delusion, addiction, or loss of autonomy.
- No AI monopoly: the benefits of AI – economic, medical, scientific, social – should not be monopolized.
- No human devaluation: design and deployment of AI should make humans flourish in their chosen pursuits, not render humanity redundant, disenfranchised, devalued or replaceable.
- Ecological responsibility: our use of AI must not endanger our planet and ecosystems. Its vast demands for energy, water, and rare minerals must be managed responsibly and sustainably across the whole supply chain.
- No irresponsible global competition: We must avoid an irresponsible race between corporations and countries towards ever more powerful AI.
Upholding these principles will not be easy. It demands moral courage, meaningful accountability mechanisms, farsighted leadership from all sectors of society, and a binding international treaty establishing red lines and an independent oversight institution with enforcement powers. We therefore call for moral leadership in the age of AI. Since the dangers presented by AI are often indirect, we call on scientists, civil society and rights groups, and other stakeholders to make a greater effort to articulate – and amplify public awareness of – AI’s limitations and dangers. We call on scientists, technology industry leaders and policymakers to listen to the voices, experiences and research of data workers and of the communities and peoples bearing the material costs of AI, and to center their work on the protection and benefit of the most vulnerable, because the legitimacy of moral and legal rules in a society relies on how it treats its most vulnerable members.
We also appeal to scientists, civil society groups, and independent auditors to develop and propose new objectives and metrics to train, optimize, and evaluate learning algorithms in terms of veracity, balance and human good, throughout the entire lifecycle, not only task performance and engagement.
We encourage policymakers, technology industry leaders, and global communities to collaborate in developing comprehensive frameworks for the governance of AI that serve the common good. This includes the right of humans to live free of AI. The advancement of genuine human fraternity in the age of artificial intelligence requires the establishment of universal ethical and legal standards.
Finally, we appeal to all people of good will: let us unite to ensure that AI serves all of humanity rather than a narrow few.
By coming together across nations, cultures, and creeds, prioritizing dialogue over competition, we can shape a future that uplifts human dignity and fosters a more just and peaceful world.
We call upon all stakeholders – including citizens, scientists, business leaders, faith leaders, community representatives, and policymakers – to participate in this initiative. Collectively, we reiterate the essential principle that machines are to serve the interests of humanity.
Members of the working group that drafted the Global Appeal
1. Paolo Benanti (Scientific Coordinator)
2. Yoshua Bengio
3. Ernesto Belisario
4. Abeba Birhane
5. Cornelius Boersch
6. Yuval Noah Harari
7. Geoffrey Hinton
8. Lorena Jaume-Palasí
9. Antal Kuthy
10. Riccardo Luna (Coordinator)
11. Nnenna Nwakanma
12. Valerie Pisano
13. Stuart Russell
14. Max Tegmark
15. Marco Trombetti
16. Jimena Sofía Viveros Álvarez
17. Alexander Waibel
18. will.i.am
Also signed by
● Miguel Benasayag
● Giorgio Parisi
● Maria Ressa