Tianyi Qiu

Incoming PhD, Stanford CS  ·  Former AI Safety Fellow, Anthropic

Tianyi works on the systemic epistemic effects of AI, value lock-in, and methods for truth-seeking alignment. His work spans martingale-based measures of confirmation bias in LLM reasoning, coherence optimization as a theoretical lens on self-improvement, the Lock-in Hypothesis, and formal approaches to representative social choice in AI alignment.

He was previously an AI Safety Fellow at Anthropic, where he worked with the Alignment Science and Societal Impacts teams. He is currently a Cosmos Fellow at the Oxford HAI Lab and will begin his PhD at Stanford in 2026, advised by Noah Goodman.

Best Paper, ACL 2025