We research how AI systems can be aligned with human values in complex scenarios where different contexts reflect diverse, and sometimes competing, value preferences.
OUR APPROACH
Value pluralism presents a critical challenge for current alignment methodologies. The field has made significant progress on "thin" alignment: one-dimensional standards such as harmlessness. However, these approaches prove inadequate for "thick" alignment problems: situations where AI must navigate inherently contestable domains with no objectively correct resolution.
We aim to develop AI systems that can navigate these contestable scenarios while respecting diverse value perspectives, reasoning explicitly through alignment dilemmas and acknowledging their inherent tradeoffs.
JOIN OUR MISSION
We're looking for researchers and engineers who are passionate about building AI systems that engage seriously with diverse, and sometimes competing, value perspectives.