AI without intentionality is augmented stupidity
A recent MIT study found that when people relied on LLMs to help write essays, their brain activity dropped, and they had trouble remembering what they had just written. It's not that the AI is bad. The problem is how people use it. If you let the tool do all the work, you stop learning. And if you stop learning, you lose the ability to think clearly on your own.
LLMs are fast. They are impressive. They can generate text, code, and answers in seconds. But speed isn't the same as insight or intelligence. And volume isn't the same as understanding. If we use LLMs without purpose or awareness, we're just copying outputs we don't control or even fully grasp.
"The risk isn't that AI will become too smart. The risk is that people will become too passive, too confident and too arrogant." - Zak Allal.
When we offload our thinking to systems that don't think, we train ourselves out of deliberate, sharp thinking. We lose the habit of effort. Humanity has already shed many of its effort-driven instincts: first physical effort, then social effort. Now we're heading toward the loss of cognitive effort. The result will be worse decisions, shallow reasoning, and fragile knowledge.
"After the physical and social impacts of technology, the next wave is cognitive." - Zak Allal.
LLMs can help. But only if we treat them as tools, not as brains. Without human intentionality, using AI is not smart. It's just a faster way to be wrong.