For more than twenty years, Google was the map of online knowledge.
It was the place where almost every search began, the invisible guardian of our curiosity. Asking Google something became a reflex, a way of thinking out loud.
But that habit is changing. With the rise of ChatGPT, Perplexity, and Gemini, users no longer want to explore; they want to understand. They don't want links, they want answers.
Artificial intelligence isn't perfecting search. It's replacing it.
In this new scenario, knowledge ceases to be an open territory and becomes a closed conversation. One where the model doesn't lead you to information, but rather returns it already digested.
1. The end of the click: from links to answers
For years, Google thrived thanks to a simple principle: the more clicks, the more data, the more ads.
That chain supported an entire economy of visibility, positioning, and optimized content. Every click was a form of consent.
Conversational models broke that logic.
ChatGPT, Perplexity, and Gemini no longer show you routes, but conclusions. They eliminate the click. And with it, a good part of the economic and cultural fabric that sustained the web.
The user no longer browses; they consult. The experience becomes cleaner, more immediate, but also more opaque. The generated answers lock knowledge inside a black box: they reveal no sources, no processes, no intermediate errors.
What used to be a journey—following a thread, comparing, contrasting—is now reduced to a direct delivery.
Total efficiency, but without context.
Knowledge ceases to be a quest and becomes a finished product. And that, for a network that grew from exchange and curiosity, is almost a paradox: the more accessible knowledge becomes, the more its origins are blurred.
2. Google responds, but it no longer dominates
Google understood the danger. Its response was Gemini, a conversational model integrated into its products, fusing the classic search engine with generative AI. But there is a clear irony in that move: to stay relevant, Google had to adopt the very model that threatens its hegemony.
The result is a hybrid that talks to you, but doesn't always tell you where its voice comes from.
Transparency, which was once the hallmark of the web, is becoming a luxury.
Now, every response can come from a human source, an AI model, or an ad block disguised as context.
In this new landscape, the incentive is no longer to create the best content, but to train the most compelling model.
Information becomes a training resource, not a discovery resource.
And Google, which built its empire by organizing the chaos of the web, is forced to reinvent itself in an environment where order is no longer given by the search algorithm, but by the model that decides for us.
3. The knowledge economy enters a crisis
For decades, SEO was the invisible grammar of the Internet.
Behind every article, recipe, or analysis, there was language designed to please search engines.
Today, that language is beginning to run out of people to talk to.
If users stop visiting the sites, the cycle breaks down: less traffic means less revenue, less data, and less new content.
Artificial intelligence feeds on that knowledge, but returns it encapsulated, without any return paths. It's a circular paradox: the models are trained on the web… while eroding the ecosystem that feeds them.
The impact is not only economic. It's also cultural.
Because if everything is filtered through a model, the diversity of voices is flattened, and knowledge ceases to be conversation and becomes synthesis.
Perplexity or ChatGPT may offer brilliant answers, but they rarely show the dissonances, doubts, or errors that make an idea evolve.
The user gains clarity, yes, but at the cost of losing the friction that previously made one think.
Search is dying, but not the desire to understand.
Google changed the way we access information; artificial intelligence is changing who controls it.
Efficiency wins, but transparency fades away.
What was once an open network of references is transformed into an interface that decides what is worth knowing. And there, in that elegant comfort, arises a question bigger than the algorithm itself:
Do we want to know quickly, or do we want to know well?
The future of online knowledge will not depend on the most powerful model, but on our ability to keep curiosity alive in an environment that seeks to answer everything.
Because the danger is not that AI will kill the search.
The danger is that, with it, we forget how to search.