Join our mission to advance AI alignment research grounded in pluralistic values. We're looking for passionate researchers and professionals who want to help ensure AI benefits all of humanity.
Lead technical research on AI safety and alignment, developing novel approaches to value learning, preference modeling, and robustness in AI systems.
Amsterdam, Netherlands or Remote
Build and maintain experimental infrastructure for AI safety research, focusing on scalable systems for testing alignment theories and value learning approaches.
Amsterdam, Netherlands or Remote
Investigate foundational questions in AI alignment and safety, combining theoretical analysis with empirical insights to develop new frameworks for value learning and preference modeling.
Amsterdam, Netherlands or Remote
We're always interested in hearing from talented individuals who are passionate about AI alignment research. Send us your CV and tell us how you'd like to contribute.
Get in Touch