What technology asks of you

The AI debate maps onto a question the Enlightenment never resolved. Rousseau said tools corrupt us. Condorcet said they perfect us. Nietzsche said both got it wrong. Technology is a test of character, and the only honest answer is what you become in the encounter.


A conversation about an earlier essay I wrote on AI skepticism made me realize the argument I was making has roots that go much further back than I'd considered. Not just about AI. About technology itself. About what tools do to the people who use them. The "faith in humans" I described there sounds Kantian on the surface — will people choose to engage? But what I really meant was something Nietzsche got to first: showing up is the prerequisite, not the answer. The deeper question is what you become when you do.

The question of whether technology improves or corrupts the human condition is one of the oldest unresolved debates in modern thought. AI is just the latest arena where we're replaying it. And the Enlightenment thinkers who shaped this argument all got part of it right while missing what matters most.

Rousseau: the fall

In 1750, Jean-Jacques Rousseau won a prize from the Academy of Dijon for an essay arguing that the restoration of the sciences and arts had not purified morals but degraded them. Five years later, in his Discourse on Inequality, he went further. He traced the origin of social ills to invention itself. Metallurgy and agriculture created property. Property created inequality. Inequality created the social contracts that locked it all in.

Rousseau's position was clear: humans in their natural state were free, compassionate, and whole. Civilization, driven by technology, pulled them away from that wholeness. The more tools we built, the further we fell.

This is the AI skeptic's position, restated. Each new tool distances us from authentic human experience. AI will erode our ability to think critically, remember for ourselves, create without assistance. The wise response is restraint. Limit exposure. Preserve what we have. Stay close to the natural state.

Condorcet: the ascent

The opposite position came from the Marquis de Condorcet. Writing in 1794, hiding from the authorities who would soon execute him, Condorcet composed his Sketch for a Historical Picture of the Progress of the Human Mind. He argued that human progress through reason, science, and education was unlimited. Each generation builds on the discoveries of the last. Problems are real but solvable because human ingenuity compounds over time.

Where Rousseau saw corruption, Condorcet saw accumulation. The printing press didn't weaken thought. It spread it. Medicine didn't make us weaker. It gave us decades of life our ancestors never had. The tools weren't the problem. Human capacity to improve them was the constant.

This is the AI builder's position, restated with inevitability. The people working with AI see its failures clearly, but they also watch the rate of improvement with their own hands in the work. They trust that humans can make these tools serve us because that is what humans have always done.

Hobbes: the leash

Between Rousseau's pessimism and Condorcet's optimism sits a position that looks like progress but isn't. In Leviathan (1651), Hobbes argued that life without organized society was "solitary, poor, nasty, brutish, and short." We need civilization not because humans are great, but because we're terrible without someone holding the leash.

Hobbes represents a different kind of AI skepticism: not "stay away from the technology" but "regulate it heavily because humans can't be trusted with powerful tools." It shares the skeptic's pessimism about human nature but channels it into institutional control rather than personal retreat.

This is the position behind most calls for AI regulation. It's not Rousseauian restraint. It's Hobbesian constraint. One asks you to step back from the tool. The other asks the state to step in.

Kant: the collective choice

Kant came closer than any of them to framing the problem correctly. In 1784 he wrote a short essay called "What is Enlightenment?" His answer: it is humanity's emergence from self-imposed immaturity. The courage to use one's own understanding without direction from another.

Kant's point was that the question isn't whether the tools are good or bad. It's whether people choose to engage with them using their own judgment, or whether they defer: to authority, to fear, to the assumption that someone else will figure it out.

In Kantian terms, the AI skeptic chooses immaturity. Not because the concerns are wrong. The concerns about bias, environmental cost, cognitive dependency, and military misuse are legitimate. But retreating from the technology, deciding it's someone else's problem, is a decision not to participate in shaping the outcome.

Kant gets a lot right. But his framing has a ceiling. He frames the question as a collective one: will humanity choose maturity? Will people show up? That's important. It's also not enough.

Nietzsche: what you become

The thinker who gets past that ceiling is Nietzsche. And he gets there by asking a question none of the others asked.

Rousseau asks: does the tool corrupt us? Condorcet asks: does the tool advance us? Hobbes asks: can the tool be controlled? Kant asks: will we choose to engage? Nietzsche asks: what does the encounter with the tool reveal about who you are, and who you're becoming?

In Thus Spoke Zarathustra, Nietzsche draws a line between the last man and the overman. The last man is comfortable. He has found his small happiness. He blinks. "We have invented happiness," the last men say. They avoid difficulty because difficulty is unpleasant. They avoid risk because risk threatens comfort. They have opinions about everything and convictions about nothing.

The overman is the opposite. Not a superhero. A person engaged in perpetual self-overcoming. Someone who takes what is difficult and uses it as material. Who treats obstacles not as reasons to retreat but as the substance out of which a stronger self gets built.

Nietzsche would look at the AI debate and side with neither camp.

The skeptics who retreat from AI because it's flawed, because it might erode something they value, because someone might misuse it: Nietzsche would recognize them as the last men. Not wrong in their observations. But choosing comfort over encounter. Choosing the safety of critique over the vulnerability of creation. They have found their small happiness and they want the new tool to leave it undisturbed.

But Nietzsche would not side with the naive optimists either. The people who adopt AI uncritically, who outsource their thinking, who let the tool do the creative work they should be doing themselves: they're also last men. They've traded one form of comfort for another. Instead of avoiding the technology, they let it carry them. Either way, the self stays small.

The Nietzschean position is harder than either of those. It says: the technology is here. It will change what it means to think, to create, to work, to be human. That change is not a threat to run from and not a gift to passively receive. It is material. What matters is what you make of it.

Self-overcoming means using AI in the places where it forces you to become better. A writer who uses AI to research faster, then writes with more depth and honesty than before, is overcoming. A teacher who uses AI to automate grading, then spends the freed time on the parts of teaching that demand real human presence, is overcoming. A programmer who uses AI to generate boilerplate, then focuses on the architecture and design that require judgment, is overcoming.

The person who avoids AI to preserve a skill is preserving, not overcoming. The person who surrenders to AI and stops developing the skill is declining, not overcoming. Nietzsche would say both paths lead to the same place: a smaller self.

The will to power is not what you think

Nietzsche's concept of the will to power is widely misunderstood. It is not domination over others. It is the drive to grow, to create, to impose form on chaos. It is the instinct that makes an artist paint, a founder build, a researcher push into the unknown. The will to power is self-directed. It wants more from you, not more for you.

Technology tests this will. Every major tool in history has asked the same question: will you use this to become more, or will you use it to become less? Print could make you a reader or a passive consumer of pamphlets. The car could expand your world or shrink it to a commute. The internet could connect you to minds across the planet or seal you in an algorithmic bubble.

AI is the most intense version of this test yet. It can think for you. It can write for you. It can create images, compose music, generate strategies. The question is not whether the technology works. The question is whether you use it as a tool for your own creative will or as a substitute for it.

No pity, no resentment

There's one more piece of the Nietzschean lens that matters here. Nietzsche despised what he called ressentiment: the impulse to devalue what you can't reach. Aesop told the version with a fox and sour grapes. Nietzsche saw it running through entire civilizations.

A lot of AI skepticism carries this flavor. Not all of it. Some is principled and grounded. But the undertone in much of the criticism is: "this technology threatens something I have, so the technology must be bad." The writer who fears AI will devalue prose. The artist who fears AI will devalue illustration. The knowledge worker who fears AI will devalue expertise. The critique presents as ethical concern. But underneath, it's often a defense of position, dressed in the language of values.

Nietzsche would say: if AI can do what you do, that isn't an argument against AI. It's a signal that you need to go deeper. Find the layer of your work that no tool can replicate. If that layer doesn't exist, the problem isn't the tool. The problem is that you stopped developing before you reached it.

That's not cruelty. It's honesty. And Nietzsche valued honesty above comfort.

The eternal recurrence of the same question

Nietzsche's thought experiment of eternal recurrence asks: if you had to live your life over again, identically, forever, would you affirm it? Would you say yes to every choice, every encounter, every difficulty?

Applied to technology, the question becomes: if this exact moment, where AI is new and uncertain and full of risk and potential, recurred forever, would you choose engagement or retreat? Would you choose the difficulty of working with a tool that changes the ground under your feet, or would you choose the comfort of refusing it?

Every major technology has forced this same question. The printing press. The railroad. The telephone. Radio. Television. The internet. Social media. Each concern turned out to be partially right. Memory did change after print. Communities did reorganize around rail. Social media did erode shared reality. The tools always brought real costs.

But the costs were never resolved by the people who stayed away. They were resolved, slowly and imperfectly, by the people who engaged. And the people who engaged were changed by the encounter. That's the point. They didn't just fix the technology. They became different people in the process of working with it.

Rousseau bet against human capacity. Condorcet bet on it. Hobbes wanted to constrain it. Kant said it was a choice.

Nietzsche would say the question itself is wrong. There is no "human capacity" in the abstract. There is only what you do next. The technology is here. It will test you. What you become in response is the only answer that matters.
