It’s subtle, but look closely: a growing share of the newest information entering the public domain—whether in journalism, academic research, market analysis, or policy briefs—no longer comes directly from human minds. Instead, it’s produced by AI. Meanwhile, we humans have started outsourcing our reading as well. Instead of encountering source materials firsthand, we let AI read and summarize them for us. Then, when we respond, we use AI to generate our replies. The result is a strange loop: our writing isn’t really ours anymore, and we’re barely reading at all.
The process that once defined thought—absorbing complex ideas, wrestling with nuance, crafting our own interpretations—has begun to collapse into a neat little cycle of machine-generated text. AI drafts content, AI summarizes it, AI crafts our responses, and so on. Humans are nudged offstage, reduced to spectators occasionally clicking a button. We’ve replaced active reading and critical thinking with passive consumption of AI-curated digests.
This is worse than just laziness. When we no longer write in our own words or read others’ words directly, we lose the friction that sparks original thought. Critical thinking isn’t some optional academic exercise; it’s how we move knowledge forward, how we notice that something’s amiss, how we generate fresh ideas. Without it, we’re letting our intellectual muscles atrophy, handing over not just the labor of writing, but the very essence of what it means to think.
And it gets more insidious. Future AI models will be trained on the data of today, which already includes vast streams of AI-generated text. Eventually, these systems won’t just learn from human-created material—they’ll learn from output that AI itself produced. With every training cycle, the true human signal gets weaker, diluted by a growing ocean of automated prose. Instead of building on a foundation of human reasoning and evidence, AI learns from its own reflections, its own rehashes of old material. The result is a gradual descent into a hall of mirrors. With each iteration, the distinction between human-originated insight and machine-generated mimicry becomes harder to detect.
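The dilution described above can be made concrete with a toy model. Suppose each training generation’s corpus blends a small share of fresh human writing with AI-generated text trained on the previous corpus, and each AI pass preserves only part of the human signal it ingested. The sketch below is illustrative, not a measurement: the `fresh` and `fidelity` parameters are assumptions chosen only to show the shape of the decline.

```python
# Toy model of "human signal" dilution across training generations.
# All parameter values are illustrative assumptions, not measurements.

def human_signal(generations, fresh=0.10, fidelity=0.80):
    """Fraction of a training corpus traceable to original human writing.

    Each generation's corpus blends a share `fresh` of new human text
    with AI-generated text trained on the previous corpus; each AI pass
    preserves only `fidelity` of the human signal it ingested.
    """
    h = 1.0  # generation 0: an entirely human-written corpus
    series = [h]
    for _ in range(generations):
        h = fresh + (1 - fresh) * fidelity * h
        series.append(h)
    return series

# The fraction declines each generation toward a floor set by the
# trickle of fresh human writing: fresh / (1 - (1 - fresh) * fidelity).
print([round(x, 3) for x in human_signal(10)])
```

Under these assumptions the human-originated fraction falls steadily from 1.0 toward roughly 0.36—never to zero, because some fresh writing still enters each cycle, but far below where it started. The point of the sketch is qualitative: unless the inflow of genuinely human material keeps pace, recursion does the diluting on its own.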
That’s the real danger here: a spiral of self-reference in which the source code of our culture, knowledge, and sense-making becomes synthetic, spinning ever further from the raw reality human minds once grappled with directly.
If we continue down this path, we risk ending up in a world where the supposed “knowledge” we rely on is nothing but a refined echo of echoes, a smooth but empty narrative that’s never challenged, never grounded. The cost isn’t just stylistic or aesthetic—it’s the collapse of intellectual progress. Without true reading, we stop engaging deeply. Without true writing, we stop articulating genuine insights. Without both, we stop thinking in any meaningful sense.
The hopeful note is that we still have a choice. We can notice what’s happening and decide that, however tempting the convenience, it’s worth keeping a foothold in genuine understanding. We can keep reading challenging texts ourselves, keep writing in our own words, keep pushing back against the creep of synthetic consensus. If we do, perhaps we can preserve what made human thought so powerful in the first place—the messy, effortful, and ultimately irreplaceable process of thinking for ourselves.