Existential Risk: The AI Safety Movement in the United States and China
Kalish, Daniel
Abstract
This thesis examines the global AI safety movement through the lens of epistemic communities theory, asking why a transnational coalition of researchers, scientists, and advocates has failed to achieve lasting policy influence in the United States and China. Drawing on case studies of both countries between 2015 and 2024, it argues that while the AI safety community established many of the attributes of epistemic communities, its impact has been constrained by the realist dynamics of great power competition. In the United States, initial receptiveness to safety regulation under the Biden administration gave way to political fragmentation, deregulatory pressures, and a return to strategic deployment logics. In China, despite state-led support for AI ethics principles and institutional involvement by epistemic actors, policy has prioritized ideological conformity and national security imperatives. Through a comparative analysis, the thesis extends existing theories of epistemic influence by showing how even robust communities of expertise can be structurally sidelined when their agendas conflict with state-driven conceptions of power, competition, and technological dominance.
Description
Date: 2025-04-01
Department: Government
