The Double-Edged Sword of Technology

Is AI Making Us Dumber?

By Patrick Seaman | CEO @ SportsBug™, Author of Streaming Wars

The recent Bloomberg article (https://www.bloomberg.com/news/articles/2025-08-12/ai-eroded-doctors-ability-to-spot-cancer-within-months-in-study?) about doctors losing their cancer-detection abilities after just a few months of AI assistance struck a nerve with me. The study found that when AI tools were removed, doctors’ ability to spot precancerous growths in the colon dropped by about 20% compared to their performance before they ever used AI assistance. This isn’t just a medical issue. It’s the latest chapter in a story of technological dependency that I’ve watched unfold throughout my lifetime and that, indeed, we’ve seen over and over again throughout history.

I remember the heated debates when handheld calculators first appeared in classrooms, replacing slide rules. Teachers and parents worried that students would lose their fundamental arithmetic skills, becoming helpless without their electronic crutches. How many of us can now effortlessly perform long division, calculate square roots, or work through other scientific calculations by hand? Even now I barely remember how to use a slide rule. If you don’t know what a slide rule is, well, it did take us to the moon.

Then came spreadsheets, and suddenly accountants and analysts found themselves freed from the tedious manual calculations that had defined their professions. The efficiency gains were undeniable, but something subtle was lost too. And, of course, how many of us would really and truly want to go back to typewriters and whiteout?

My brother, a mathematician, once made an observation that has stayed with me over the years. He noted how the growing dependence on computers to brute-force mathematical proofs was reducing many mathematicians’ ability to craft elegant, non-numerical proofs. The art of mathematical reasoning (that beautiful dance of logic and insight that had characterized the discipline for centuries) was giving way to computational power. Problems that once required creative thinking and mathematical intuition were increasingly solved by throwing processing power at them until they yielded.

This pattern repeats across industries and decades and even centuries. Go back to when, oh, say, Johannes Gutenberg invented the printing press around 1440. It introduced the world to the idea of machines “stealing jobs” from workers. Before Gutenberg’s press, the creation of books was a painstaking process, primarily carried out by skilled scribes in monasteries and universities. These professional copyists had dedicated their lives to meticulously transcribing texts by hand, developing extraordinary skills in penmanship, illumination, and textual accuracy. By the late 15th century, the printing press had rendered their unique skillset all but obsolete. The scribes didn’t just lose their jobs; they lost an entire way of thinking about and interacting with text that had taken centuries to develop.

Similarly, the advent of the magnetic compass and later GPS navigation systems fundamentally changed our relationship with spatial awareness. For millennia, humans developed sophisticated celestial navigation skills, learning to read the stars, understand seasonal patterns, and maintain an intimate awareness of their position in the natural world. The United States Naval Academy announced that it was discontinuing its course on celestial navigation in 1998, a course that had been taught since the Academy’s founding in 1845. Today, GPS navigation systems have left many of us spatially illiterate, unable to read maps or develop the internal compass that previous generations took for granted. In recent years, over-reliance on GPS has led to a growing number of collisions and accidents at sea and elsewhere.

Spell-check and autocorrect have weakened our grasp of language mechanics. Search engines have diminished our capacity for deep research and critical evaluation of sources. Each technological advance brings tremendous benefits while simultaneously eroding some fundamental human capability.

The medical AI study is particularly troubling because the stakes are so high. The researchers suggest that doctors became “over-reliant” on AI recommendations, “leading to clinicians becoming less motivated, less focused, and less responsible when making cognitive decisions without AI assistance.” This isn’t just about forgetting how to do something. It’s about the atrophy of clinical judgment, pattern recognition, and the hard-won diagnostic intuition that takes years to develop.

Yet I don’t want to sound like a Luddite railing against progress. AI tools, when properly implemented, represent a tremendous force multiplier for business, medicine, research, and countless other endeavors. The same AI that caused skill deterioration in those doctors also helped them detect cancer more effectively while it was in use. The efficiency gains from calculators, spreadsheets, and GPS are real and valuable. The question isn’t whether we should embrace these technologies—that ship has sailed—but how we can harness them without losing essential human capabilities.

Using AI tools for search instead of traditional search engines has cut my book-research time in half or more. And think what it was like before search engines, when research meant driving to libraries and even scrolling through microfiche.

The key lies in conscious, intentional use. We need to recognize that every technological assistance comes with a hidden cost: the gradual erosion of the very skills it replaces. This awareness should inform how we integrate new tools into our lives and professions. Perhaps medical schools need to design curricula that maintain diagnostic skills even as they teach AI integration. Maybe business schools should require students to perform financial modeling by hand before introducing them to automated spreadsheet functions.

I’m reminded of how pilots, despite having sophisticated autopilot systems, still undergo extensive manual flight training. The aviation industry learned early that over-reliance on automation could be catastrophic when systems failed. They built in safeguards, regular manual practice, and a culture that maintains core piloting skills alongside technological proficiency.

What gives me hope is humanity’s remarkable capacity for adaptation and creativity. Throughout history, we’ve used technological advances not just as replacements for human capability, but as platforms for achieving greater heights of innovation and civilization. We stand on the shoulders of giants: not just the great thinkers of the past, but their tools and creations as well. The challenge is ensuring that we use technology to amplify our innate abilities rather than replace them entirely.

The doctors in that study experienced skill erosion, but they also demonstrated something remarkable: human abilities can be trained, maintained, and recovered with conscious effort. The solution isn’t to reject AI assistance but to structure its implementation thoughtfully, with built-in safeguards that preserve and strengthen human expertise.

As we hurtle toward an increasingly automated future, we have a choice. We can passively allow our capabilities to atrophy, becoming ever more dependent on our digital crutches. Or we can consciously cultivate our uniquely human skills (creativity, intuition, critical thinking, and wisdom) using technology as a tool to enhance rather than replace these abilities.

I choose to believe we’re smart enough to navigate this path successfully. After all, the same human ingenuity that created these remarkable technologies can surely figure out how to use them wisely. The question is whether we’ll have the wisdom and discipline to do so before we discover what essential capabilities we’ve lost along the way.

I can dream, can’t I?