Artificial Intelligence and Education: Learning to live with machines.

The old dream of thinking machines

Ever since Ada Lovelace imagined an “analytical engine” capable of manipulating symbols, humanity has wondered if a machine can think. Two centuries later, that question remains: what distinguishes a human mind from an artificial one?

In the 19th century, Lovelace understood that machines could perform complex operations, but not create anything on their own. For her, a machine's intelligence would always depend on the human intelligence that designs it. Her vision—that of a symbolic mind, capable of reasoning within the limits we set for it—anticipated the debates we still have today about creativity and artificial autonomy.

From neurons to algorithms: when the brain inspired machines

In 1943, Warren McCulloch and Walter Pitts proposed a mathematical model of the brain. Their “logic neurons” demonstrated that mental processes could be represented by equations. Thus, the seed of artificial neural networks was planted.
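Their idea can be sketched in a few lines of Python. The function name, weights, and thresholds below are illustrative choices, not taken from the 1943 paper: a unit fires (outputs 1) when the weighted sum of its binary inputs reaches a threshold, and simply changing the threshold turns the same unit into a different logic gate.

```python
# A minimal sketch of a McCulloch-Pitts "logic neuron": it fires (outputs 1)
# when the weighted sum of its binary inputs reaches a fixed threshold.
def mcculloch_pitts(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With unit weights and a threshold of 2, the neuron computes logical AND:
print(mcculloch_pitts([1, 1], [1, 1], 2))  # -> 1
print(mcculloch_pitts([1, 0], [1, 1], 2))  # -> 0

# Lowering the threshold to 1 turns the very same neuron into logical OR:
print(mcculloch_pitts([1, 0], [1, 1], 1))  # -> 1
```

That a mental operation like "and" or "or" reduces to a sum and a comparison is precisely what made the model so suggestive: thought, it hinted, might be computation.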

Decades later, Alan Turing posed a question that would change everything: can machines think? Instead of philosophizing about consciousness, he proposed an empirical test—the “imitation game”—which gave rise to the famous Turing Test. If a machine can converse indistinguishably from a human being, isn't that, in practice, a form of thinking?

Winters and summers of artificial intelligence

The initial enthusiasm for AI led to cycles of euphoria and disillusionment.
In 1956, John McCarthy coined the term artificial intelligence at the historic Dartmouth meeting, giving rise to the symbolic era: a time of confidence that rules and formal logic would be enough to replicate thought.

Later, Frank Rosenblatt's Perceptron (1957) paved the way for neural networks, although Minsky and Papert's 1969 critique, which showed that a single perceptron cannot learn non-linearly-separable functions such as XOR, brought on the field's first major winter.
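What made the Perceptron remarkable was that it learned its weights from examples rather than having them set by hand. A minimal sketch of the learning rule, with illustrative names and a toy dataset, looks like this: each misclassified example nudges the weights toward the correct answer until a linearly separable function, here OR, is learned.

```python
# A minimal sketch of Rosenblatt's perceptron learning rule: weights are
# nudged toward each misclassified example until the data are separated.
def train_perceptron(samples, epochs=10, lr=1.0):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - pred  # -1, 0, or +1
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Learning OR, a linearly separable function the perceptron can solve:
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data])
# -> [0, 1, 1, 1]
```

Run on XOR instead of OR, the same loop never settles: no single line separates the two classes, which is exactly the limitation Minsky and Papert exposed.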

After a new boom in the 1980s with expert systems—capable of solving specific problems—came another period of stagnation: the second winter. AI seemed to have reached its limit, until a paradigm shift revived everything.

The rebirth of deep learning

In 2012, the AlexNet neural network marked the breakthrough of deep learning. Machines could now recognize images, process language, and learn from enormous volumes of data with previously unimaginable accuracy.

A decade later, the emergence of ChatGPT democratized access to AI. Suddenly, millions of people could converse, write, and create with advanced language models. Artificial intelligence ceased to be an academic promise and became an everyday tool.

Today we are experiencing a new “AI summer”, in which technology not only assists human beings but also challenges them: what does it mean to think, understand, or create in the age of algorithms?

Simulation or understanding: Searle's dilemma

In 1980, philosopher John Searle formulated his famous Chinese Room experiment to question the idea that a machine can truly understand.
A person who does not know Chinese could respond to messages in that language if they follow precise instructions, but without understanding their content.
Similarly, language models manipulate symbols without accessing their meaning.

This distinction between syntax and semantics remains essential: current systems don't understand the world; they only learn patterns of how humans talk about it. Their “intelligence” is, at its core, a statistical imitation of human thought.
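The point can be made concrete with a toy sketch. Real language models are vastly more sophisticated, and the corpus and names below are invented for illustration, but the principle is the same: a bigram model counts which word follows which and predicts the most frequent continuation, capturing patterns of usage without any access to meaning.

```python
from collections import Counter, defaultdict

# A toy illustration of "statistical imitation": count which word follows
# which in a corpus, then predict the most frequent continuation.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    # Return the word most often seen after `word` in the corpus.
    return follows[word].most_common(1)[0][0]

print(predict_next("on"))  # -> "the"
```

The model "knows" that "the" tends to follow "on" only because it counted it; it has no notion of cats, mats, or prepositions. Scaled up by many orders of magnitude, that is the gap Searle's argument points at.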

The Promethean risk: technology without limits

The philosopher Günther Anders warned that technological power has outpaced our ability to imagine its consequences. He called this imbalance the Promethean gap.

Artificial intelligence embodies this paradox today: the very leaders who warn of its risks are driving its unchecked progress. It is not the machines that act alone, but rather we who have delegated decisions to them without fully understanding their implications.

The danger lies not in AI as an autonomous entity, but in our lack of ethical reflection on what we build.

Educating in the age of automation

Universities have always known how to reinvent themselves. In the Middle Ages, teachers taught by reading aloud. Centuries later, written essays and personal assessments transformed the way we learn. Today, with AI-powered automated writing, the challenge remains the same: to preserve what makes us human.

Learning is not about repetition or producing text, but about active thinking. Therefore, the future of education lies in restoring direct interaction: tutoring, oral exams, debates, and in-person writing.
It's not about going back, but about strengthening critical thinking in the face of digital immediacy.

Technology as an ethical mirror

Every technological innovation reflects our priorities and values. Artificial intelligence raises urgent questions about sustainability, working conditions, privacy, and social justice.

As Brain and Code reminds us, smart technology isn't about optimizing processes, but about improving human life. It requires responsible design: assessing its ecological footprint, anticipating side effects, and avoiding harmful uses such as information manipulation or indiscriminate surveillance.

Only in this way can AI be a means to serve humanity, not an end in itself.


Conclusion: coexist, don't compete

Artificial intelligence is not here to replace us, but to challenge us to redefine the meaning of learning and creativity.
The real challenge is not teaching machines to think, but learning to live with them without losing our critical awareness.

Education plays a vital role here: training citizens capable of using technology without being subservient to it. Because AI not only transforms what we do, but also who we are.

The future will not belong to machines that imitate human beings, but to people who learn to live with the intelligence they themselves have created.
