TL;DR: As generative AI engines take the wheel in deciding what we see, read, and remember, the question isn’t just how to optimize for algorithms. The real question is whose stories survive the next wave of digital memory, and whose fade away.
When AI becomes the librarian
AI systems like ChatGPT and Gemini are the new librarians of the web. They decide which facts are surfaced, which sources get cited, and which voices echo across the digital landscape. But these models don’t make their choices in a vacuum. They learn from what’s already online, what’s popular, and what’s easy to parse.
This means the loudest, most-linked, and most-optimized content rises to the top. Niche expertise, local knowledge, or unconventional perspectives risk getting buried. If you’re not in the training data, you’re not in the answer.
The power and the problem of citation
Reference rate is the new currency. If your work is cited by AI, you get noticed. If not, you might as well not exist. This changes the game for everyone: big publishers, indie creators, and communities working on the margins.
When a user asks an AI a question, the model pulls from its training data and the live web. But not all sources are weighted equally: content that is well structured and frequently cited dominates the answers, while less-linked sources fade into the background.
But here’s the catch. AI models are trained on what’s available and visible. If your content isn’t structured for AI, or if you don’t play by the rules of generative engine optimization (GEO), your knowledge can slip through the cracks. The danger isn’t just being ignored. It’s being forgotten.
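In practice, "structured for AI" usually means machine-readable metadata such as schema.org markup embedded as JSON-LD, which tells crawlers and retrieval pipelines explicitly who wrote a page and what it is about. A minimal sketch, with a hypothetical local-history article (the headline, author, and field values here are invented placeholders, not a prescription):

```html
<!-- Hypothetical example: schema.org Article markup in JSON-LD,
     embedded in the page <head> so parsers can read authorship
     and topic without scraping the prose. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "An Oral History of the Riverside Mill",
  "author": { "@type": "Person", "name": "A Local Historian" },
  "datePublished": "2024-05-01",
  "about": "local history",
  "inLanguage": "en"
}
</script>
```

The point isn’t any one field; it’s that explicitly labeled content is far easier for a retrieval system to surface and cite than the same knowledge buried in unmarked prose.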
Whose knowledge gets preserved?
The web was supposed to democratize information. Now, we’re at risk of letting algorithms decide whose knowledge gets preserved and whose gets left out. Smaller voices, niche communities, and non-mainstream perspectives have always had to fight for visibility. With generative AI, the stakes are even higher.
If a local historian’s blog is never cited in generative search results, does that knowledge survive? If a community’s lived experience never gets summarized in an AI answer, does it fade from collective memory? These aren’t just technical questions. They’re about the future of what we know and who we listen to.
What do we owe each other as stewards of knowledge?
If you’re a writer, editor, or builder of digital spaces, you’re part of this story. What you publish, how you structure it, and who you cite all shape what gets remembered.
We should ask hard questions:
- How do we make sure underrepresented voices aren’t erased by algorithmic convenience?
- What can we do to keep the web weird, diverse, and surprising?
- How do we balance optimization with authenticity?
The stakes
GEO gives us tools to surface knowledge, but it also gives us a responsibility. We can use our skills to lift up stories that might otherwise get lost. We can design for discoverability without flattening everything into the same template.
If AI is the new librarian, let’s make sure its shelves aren’t missing the best, strangest, and most vital books.
Who gets to be remembered? The answer isn’t written yet. It’s up to us to make sure it’s not just the loudest voices, but the ones that matter most.