An AI alignment foundation focused on developing systems that navigate complex ethical scenarios while respecting diverse human values.
Value pluralism presents a critical challenge for AI alignment. Current methods can enforce straightforward safety constraints, but they fail when AI systems must navigate contested domains where no single resolution is objectively correct.
Based in Amsterdam, we're a Dutch foundation with a global perspective, committed to bringing diverse voices into the AI value conversation.
Developing AI systems that navigate competing values and acknowledge multiple valid perspectives without imposing singular hierarchies.
Building AI systems that adapt ethical decision-making to cultural, social, and situational contexts.
Creating transparent methodologies for AI reasoning through complex alignment dilemmas and inherent tradeoffs.
Developing frameworks for AI operation in morally complex environments where simple optimization proves insufficient.
We seek researchers and engineers passionate about AI systems that navigate complex ethical landscapes.
Our interdisciplinary approach welcomes expertise from philosophy, computer science, ethics, and anthropology.
As a foundation, we value transparency. Here you can find our official policy documents and other important information.
Our official policy plan describes our mission, vision, strategic objectives, and our approach to AI alignment research.