Abstract
Every epistemic technology — the printing press, search engines, social media — has reorganized how ideas spread, compete, and survive. Large language models are the latest such technology, but with a property no predecessor possessed: they do not merely transmit ideas, they generate and personalize them at scale. We argue that this capability makes LLMs categorically different as epistemic infrastructure, and that their systematic deployment will reshape the flow of ideas in ways that require principled analysis and intervention.
We develop a theoretical framework for understanding AI as an epistemic technology, identifying three structural properties — generation (not just retrieval), personalization (not just filtering), and feedback (influencing its own training data) — that together drive qualitatively new dynamics. We show how these properties amplify existing biases, produce knowledge collapse toward AI-favored positions, and create conditions for value lock-in across populations. This position paper argues that alignment research must expand beyond individual interactions to address population-level epistemic effects, and sketches a research agenda for Bayesian-rational, epistemically safe AI design.
Cite
@inproceedings{he2025rewires,
  title     = {Position: {AI} Systematically Rewires the Flow of Ideas},
  author    = {He, Zhonghao and Qiu, Tianyi and Lin, Tao and Glickman, Mark and Wihbey, John and Kleiman-Weiner, Max},
  booktitle = {ICLR 2025 Workshop on Bidirectional Human-AI Alignment},
  year      = {2025},
  url       = {https://openreview.net/forum?id=Mzza24PyIq}
}