Join our team
We're a team of researchers and engineers passionate about AI alignment and pluralistic values.
Open roles:

Research Lead: Lead technical research on AI safety and alignment, developing novel approaches to value learning, preference modeling, and robustness in AI systems.

Research Engineer: Build and maintain experimental infrastructure for AI safety research, focusing on scalable systems for testing alignment theories and value learning approaches.

Research Scientist: Investigate foundational questions in AI alignment and safety, combining theoretical analysis with empirical insights to develop new frameworks for value learning and preference modeling.