We conduct research to improve the quality of life for all. Our current focus areas are outlined below.

AI governance and policy

Advanced AI could bring enormous benefits but could also pose catastrophic risks. We’re particularly interested in areas such as strategic analysis, compute governance, and information security.

Selected publications

Please note that our non-public research reports in this area are shared directly with AI companies and other stakeholders for internal consumption.

Cooperative AI

We’re interested in promoting cooperation among advanced AI systems.

Selected publications


Societal long-term risks

Protecting secular democracies

We’re concerned about increasing tribalism, polarization, political dysfunction, and the erosion of secular norms that form the foundation of Western democracies. We support projects upholding core Enlightenment values—such as reason, science, liberty, impartiality, free speech, pluralism, and compassion—against authoritarian and regressive ideologies from both sides of the political spectrum.

Selected publications

Reducing risks from fanatical ideologies and malevolent actors

Fanatical ideologies—such as totalitarian communism, fascism, and religious fundamentalism—often championed by authoritarian tyrants, have contributed to many of history’s most catastrophic conflicts. We explore how to reduce the risks posed by such ideologies and by malevolent actors.

New directions

Improving the world—especially from a long-term perspective—is fraught with extreme uncertainty. It’s entirely possible that our current efforts are misguided.

We actively seek external input, including critiques of our current work, to refine our approach to reducing long-term risks. This may lead us to pursue different research areas.