David Krueger is an academic researching AI alignment. A professor directing a lab focused on AI safety could create a steady pipeline of newly trained safety researchers.
Funding new AI safety labs and researchers is a high-impact area. With more labs working on AI safety, there is a greater chance of important breakthroughs and progress on reducing AI risks.
Krueger’s lab appears to have a significant shortage of computing resources, with fewer than two GPUs per group member and no state-of-the-art GPUs.
Backing one of the first independent AI alignment research groups in Europe could raise the profile and perceived importance of AI safety across the continent.
Outcomes: Krueger’s research group used this grant to support work that led to a number of publications in technical AI safety that are widely regarded as pioneering: