About the LSE AI Research Lab: We are hosted by the Data Science Institute at LSE and publish in partnership with Stanford. We focus on the interpretability of ML models, LLMs in particular, and their application to domains such as Data Science, Finance, Politics, and Education.
This field is growing fast: OpenAI just announced a $10M grant for interpretability research. Interpretability is also part of AI alignment, the research effort to ensure that AI ends up benefiting humanity. This makes it a field that is both professionally rewarding and socially impactful.
Role Overview: We are seeking two highly motivated Core AI Researchers to join our team. Successful candidates will receive training to the exceptional standard needed to push the boundaries of AI.
You will co-author AI papers published at the world's top conferences (NeurIPS, ICLR, ICML, etc.), which will open doors to both PhD programmes and industry roles (e.g. £200k base salary at Google DeepMind, Anthropic, OpenAI).
Key Qualification:
Self-motivated and capable of thriving in this highly complex field. This is the most important requirement: we will provide as much training as possible, but the dynamic and complex nature of AI research will require core team members to push the boundaries of knowledge on their own.
Optional Qualifications:
Proficiency in Python
Strong background in LLMs, transformers, and machine learning interpretability.
Proven track record of research in relevant areas.
Excellent analytical, problem-solving, and communication skills.