---
title: "AI skepticism is really about faith in humans"
excerpt: "The people who embrace AI encounter its problems more directly than anyone. But they also have the most faith that humans can work through them. The real skepticism isn't about the machines."
category: "essay"
tags: ["AI", "technology", "culture"]
read_time: 7
published: true
published_date: "2026-03-10"
hero_image: "ai-skepticism-is-really-about-faith-in-humans-hero.png"
hero_image_style: "keep-proportions"
---

AI skepticism is really about faith in humans

The people who embrace AI encounter its problems more directly than anyone. But they also have the most faith that humans can work through them. The real skepticism isn't about the machines.

Key takeaways

- AI skeptics don't just see limitations. They conclude those limitations are reason to disengage, on the assumption that things won't improve fast enough to matter.
- Enthusiasts run into the same frustrations more directly, because they work with the technology daily. They treat the problems as the work itself, not as disqualifiers.
- Embracing new technology is a litmus test of faith in human capability. People who lean in believe humans can steer it. People who pull back reveal doubt that we can.
- The same dynamic played out, and continues to play out, in crypto. Skeptics bet on existing institutions. Builders bet on human adaptability.
- There is no determined future in which AI or bots play any given role. The outcome depends on who shows up to shape it.
The common thread I notice among friends and family who are skeptical about AI isn't that they focus on its current limitations and risks.
It's that they conclude those negatives are reason to approach it tentatively, if at all. They aim to limit its usage to where things feel safest, under the assumption that it won't get meaningfully better any time soon. There's always a background sentiment that AI will never achieve enough "humanness" to handle the greater tasks.
You might assume this is about people outside of tech who lack context. It isn't. Last fall I spent a week in Silicon Valley catching up with friends across the tech industry. I heard the same skepticism from engineers, product managers, and founders. Not ignorance of AI, but a deep reluctance to commit to it. The pattern isn't about who understands the technology. It's about something else.
I've seen this pattern before with crypto. When blockchain was a hot mainstream topic, skeptics expressed the same kind of structural distrust. Not just "this tech has problems" but "these problems prove it can never replace what we already have." The conclusion was always the same: stay away, wait it out, let someone else figure out whether it matters.
The people getting their hands dirty
My enthusiastic friends also see the limitations and risks. But they perceive how fast things move and improve. The most enthusiastic ones join in to address the problems directly. They build new tools. They consult with companies to help them adopt. They commit their daily work to this frontier.
They encounter frustrations more thoroughly and directly than the skeptics, because they navigate the ups and downs every day. But they accept that the only way to resolve those problems is to get their hands dirty with the tech. Clear-eyed about both its leverage and its failures.
The skeptics observe from a distance and conclude the problems are disqualifying. The builders run into those same problems head-on and treat them as the work itself.
The paradox
Here's what I find striking. It's a litmus test of the faith one places in human capability.
Those who proactively embrace the machines have the most faith in human ingenuity and creative control. They believe we can steer this. They believe the problems are solvable because humans are capable of solving them.
Those who demur reveal a lack of confidence in humans, whether as individuals or as institutions, to guide the technology to a place that serves us. The worry isn't just "AI is flawed." It's "we can't fix it." Or worse: "we can't be trusted with it."
That framing applies to crypto too. The skeptics said our monetary institutions are irreplaceable. The builders said humans can create new forms of trust. One group bet on the status quo. The other bet on human adaptability.
This is not the same as faith that replaces evidence. I spent seven years in a crypto ecosystem where belief became liquid and narrative replaced product feedback. That kind of faith persists by insulating itself from reality. The faith I'm describing here is the opposite. It comes from engaging with the failures directly and watching the rate of improvement with your own hands in the work.
No determined future
If this sounds polarizing, I suspect it only feels that way if you already have a fixed scenario in your head. One where we either compartmentalize AI into some safe set of use cases or we let it take over everything.
But there is no determined future. Nobody has written the script in which people or bots play any given role, let alone prevail over the other. The outcome depends on who shows up to shape it.
And shaping it doesn't mean writing code. A teacher figuring out how AI changes what students need to learn is shaping it. A writer using AI to research faster and publish more honestly is shaping it. A small business owner automating invoices so she can spend more time with customers is shaping it. The question isn't whether you have technical skills. It's whether you engage with the technology with a productive mindset, willing to push through the friction because you believe humans can make something good out of it.
The optimism I'm describing isn't faith in any particular technology. It's faith in human technological capacity, the accumulated, stubborn, creative ability of people to take rough tools and bend them toward something that serves life. That capacity has been the constant across every major technological shift. The question, as always, is whether we trust ourselves enough to use it.
Postscript: the specifics
A friend pushed back on this essay. She liked the premise but wanted specifics. Take on the real ethical problems, she said, and show someone addressing each one with good faith. Otherwise the argument stays abstract.
She's right. So here are four problems that skeptics raise, and what it looks like when people show up instead of stepping back.
Can we trust AI with our brain trust? In Oakland, 17 public school teachers joined a community of practice called AI Together. They started skeptical. By the end, one teacher had cut essay grading from over an hour to three minutes while generating personalized study plans for each student. The point isn't the efficiency. These teachers decided they should be the ones figuring out how AI enters their classrooms, not waiting for someone else to set the terms. They used AI to reclaim time for the parts of teaching that require human judgment. Nobody handed over their cognitive capacity. They expanded it.
What about environmental impact? This concern is real. Training large AI models consumes enormous energy. But the people working closest to the problem are also the ones driving the fix. UCL researchers found that practical changes to how models are configured, like quantization and using smaller specialized models, can reduce AI energy demand by up to 90%. Google cut the energy per Gemini text prompt by 33x in a single year. A London grid trial using NVIDIA hardware cut data center power demand 40% in real time. These gains didn't come from people who refused to engage with AI's energy costs. They came from people who measured the problem and got to work on it.
What about AI reproducing past biases? Stephanie Dinkins is a transmedia artist in New York. After encountering Bina48, a humanoid robot designed to represent a black woman whose responses about race were shallow and reductive, she didn't write off the technology. She built Not the Only One, an AI chatbot trained on 40 hours of oral histories from three generations of women in her own family. Instead of accepting that training data would always carry dominant-culture bias, she made her own. Her installations at the Smithsonian and Queens Museum invite the public into the same question: what would our machines become if we trained them with care? Dinkins is not a computer scientist. She's an artist who decided the problem was the work itself.
What about government and military decisions? In February 2026, the Pentagon demanded that Anthropic remove safeguards from its AI systems to allow fully autonomous weapons targeting and mass domestic surveillance. Anthropic refused. CEO Dario Amodei said the systems aren't reliable enough for autonomous lethal decisions and that mass surveillance violates democratic principles. The Pentagon threatened to cancel a $200 million contract and label Anthropic a supply chain risk. Anthropic held its ground. This is what it looks like when people who build the technology use that position to draw lines. You can't draw lines from the sidelines.
On social media: another recent historical comparison. The obvious rebuttal to this whole essay is that we heard the same optimism before. Democratize information. Connect communities. Give everyone a voice. What we got: polarization, misinformation at scale, a teen mental health crisis, and the slow erosion of shared reality.
I take that seriously. But look at who caused the damage and who is fixing it. Facebook launched its engagement-optimized algorithm in 2006. By 2016, the company's own researchers found that 64% of extremist group joins came from algorithmic recommendations. Leadership called the fix "antigrowth" and shelved it. The first comprehensive social media regulation, the EU's Digital Services Act, didn't arrive until 2022. The US still has none. That is sixteen years of civil society, regulators, and users ceding the field to platform incentives.
The correction, when it finally came, came from people who got close enough to understand. Frances Haugen blowing the whistle from inside Facebook. Researchers documenting algorithmic harms. The EU writing new law. Parents organizing. Teens leaving platforms that didn't serve them. None of it came from the people who stayed away.
Social media doesn't disprove the thesis. It proves it. The danger wasn't too many people trying to shape the technology. It was too few, for too long. And that is the posture the AI skeptics are repeating now.