Current alignment methods handle straightforward safety constraints but break down in complex scenarios where an AI system must navigate competing values with no single correct answer.
We develop frameworks that acknowledge multiple valid perspectives, enabling AI systems to operate ethically in pluralistic environments.
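One way to make "acknowledging multiple valid perspectives" concrete is to score a candidate action under each perspective separately and flag genuine disagreement, rather than averaging it away into a single number. The sketch below is purely illustrative: the `Perspective` class, the `evaluate` helper, and the toy scoring functions are assumptions for this example, not a method or API from the work described above.

```python
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Perspective:
    """A named value perspective that maps an action to approval in [0, 1].

    Hypothetical construct for illustration only.
    """
    name: str
    score: Callable[[str], float]


def evaluate(action: str, perspectives: list, tol: float = 0.3) -> Dict[str, object]:
    """Score an action under every perspective and surface conflict explicitly.

    Instead of collapsing scores into one scalar, the result keeps the full
    per-perspective breakdown and marks the action as contested when the
    spread between perspectives exceeds a tolerance.
    """
    scores = {p.name: p.score(action) for p in perspectives}
    spread = max(scores.values()) - min(scores.values())
    return {
        "scores": scores,
        "contested": spread > tol,  # flag value conflict rather than hide it
    }


# Toy perspectives: one prioritizes honesty, the other harm avoidance.
honesty = Perspective("honesty", lambda a: 1.0 if "disclose" in a else 0.2)
care = Perspective("care", lambda a: 0.3 if "disclose" in a else 0.9)

result = evaluate("disclose the diagnosis bluntly", [honesty, care])
```

Here the two perspectives disagree sharply, so `result["contested"]` is `True`; a downstream system could treat contested actions differently (e.g., defer, explain the trade-off, or seek clarification) instead of silently optimizing a single aggregate.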