AI vs Programmers

Hello everyone, I hope you are doing well.

Today I want to share a concern that has been on my mind for months: AI, programmers, and the future of software work.

I am not writing this as someone who hates AI. I use AI every week, and in many cases it helps me move faster. But I keep asking myself one uncomfortable question:

At what point does acceleration stop being progress?

I have worked as a programmer since 2020, just before the pandemic hit. Back then, the rhythm was different. When I got blocked, I opened Stack Overflow, joined Discord threads, and talked with people in WhatsApp or Telegram groups. Sometimes we spent hours discussing one bug, one architecture decision, or one trade-off. It was slower, but it was deeply human, and I learned a lot from those conversations.

In the last year, especially in the last six months, the daily workflow changed a lot. AI became the default answer to almost everything:

  • "Send it to AI and see what it says."
  • "Ask AI if this process makes sense."
  • "Use AI to question it first, then come back to me."

On paper, that sounds productive. In practice, I have seen teams become faster and more disconnected at the same time.

What changed in the day-to-day

The biggest change is not technical. It is cultural.

The old flow was collaborative by default. The current flow is often isolated by default. People ask a model first, then return with an answer that may look polished but lacks team context. Real discussion happens less often. Pair debugging happens less often. Even code ownership gets blurry.

One difficult pattern I have experienced is this:

  1. A manager asks me to analyze a process with AI support.
  2. I return with a structured answer.
  3. The answer gets rejected, but with unclear feedback.
  4. I am asked again, "Are you sure about what you are saying?"

After repeating this cycle, you start wondering: what is the real source of truth here? The model? The manager? The process document? Nobody is fully aligned, and yet everyone is moving quickly.

That speed can look impressive in dashboards, while confidence and clarity quietly drop.

The benefits are real, and we should keep them

To be fair, AI helps a lot when used intentionally:

  • Faster prototypes and first drafts
  • Better support for repetitive tasks
  • Quicker exploration of unfamiliar libraries
  • Easier creation of tests and documentation
  • Lower friction for junior developers

These gains matter. Ignoring them would be naive.

The goal is not "AI or humans". The goal is "AI plus humans, with clear boundaries and shared accountability".

The risks we should not ignore

The concern is not that AI exists. The concern is that many teams now treat AI as a replacement for thinking together.

When this happens, the hidden costs appear:

  • Shallow understanding: code is accepted before the author can explain key decisions.
  • Weak collaboration: fewer technical debates mean fewer shared mental models.
  • Lower requirement quality: vague prompts replace clear problem framing.
  • False confidence: clean-looking output hides wrong assumptions.
  • Human fatigue: developers feel replaceable and less connected to the craft.

And this is the part that hurts me the most: even when AI is used correctly, many companies still do not fix the root issue, which is unclear communication.

If leaders cannot define context, constraints, and business goals, no model can solve that by itself.

A simple graph: speed vs human learning

I like to visualize the trade-off like this:

Impact
^
| Human learning and collaboration
|  \
|   \
|    \
|     \
|      \________
|
| Delivery speed               _________
|                           __/         \___
|                        __/
|                     __/
|____________________/_________________________> AI dependence
                    low         medium        high

This is simplified, but useful.

  • At low to medium AI dependence, speed increases and learning stays healthy.
  • At high dependence, speed may still rise in the short term, but learning and collaboration drop.
  • Over time, maintainability and product quality can suffer because the team does not deeply understand what it ships.

The sweet spot is not "maximum AI usage". It is balanced usage.

A practical operating model for teams

If we want better software and healthier teams, we need rules that protect both delivery and human development.

Here is a simple model that works in many environments.

Work Type                       | AI Role                 | Human Role
--------------------------------|-------------------------|--------------------------------
Boilerplate and repetitive code | Generate first draft    | Validate quality and edge cases
Tests and docs                  | Suggest structure       | Ensure intent and clarity
Architecture decisions          | Provide options         | Decide trade-offs and ownership
Business rules                  | Summarize existing docs | Confirm with domain experts
Incident response               | Organize signals        | Lead decisions under pressure

Team playbook (actionable)

  • Define AI-safe zones: list tasks where AI is encouraged and tasks where human-first work is required.
  • Adopt AI-free blocks: run one AI-free half day per sprint for design reviews, pairing, and deep code reading.
  • Require explain-before-merge: if AI helped write code, the author must explain assumptions and failure modes.
  • Upgrade requests from managers: replace "ask AI" with clear briefs (problem, constraints, non-goals, acceptance criteria).
  • Measure quality, not only throughput: track rework, escaped defects, and time-to-understand changed code.
  • Protect collaboration rituals: keep regular architecture huddles, debug sessions, and knowledge-sharing moments.
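The explain-before-merge rule above can even be automated as a lightweight CI-style check. A minimal sketch in Python, assuming a team convention of an "ai-assisted" label and required explanation sections in the PR description (both names are my own assumptions, not an existing standard):

```python
# Hypothetical explain-before-merge gate: if a PR is labeled "ai-assisted",
# its description must contain the explanation sections the team agreed on.
# The label name and section headings are assumptions for illustration.

REQUIRED_SECTIONS = ("## Assumptions", "## Failure modes")

def explain_before_merge_ok(labels: set[str], pr_body: str) -> bool:
    """Pass unless the PR is AI-assisted and missing an explanation section."""
    if "ai-assisted" not in labels:
        return True  # human-written PRs are not gated by this rule
    return all(section in pr_body for section in REQUIRED_SECTIONS)

# An AI-assisted PR without the required sections fails the check.
body = "## Summary\nAdded retry logic drafted with an assistant."
print(explain_before_merge_ok({"ai-assisted"}, body))  # False
```

The point is not the tooling; it is that the author, not the model, remains accountable for stating assumptions and failure modes before the code merges.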

Some people may disagree with the AI-free day idea. That is okay. You can start smaller: one AI-free morning every two weeks. The exact format is less important than the message: human reasoning is still a core part of engineering.

So, what is "right" in this new reality?

I still do not have a perfect answer.

What I know is this: if we use AI to write code, review code, validate code, and even decide priorities without strong human checkpoints, we are not just optimizing workflow. We are outsourcing judgment.

And judgment is exactly where experienced engineers create long-term value.

If AI is truly powerful, the correct response is not to remove humans from the loop. The correct response is to raise the quality of human decisions around the loop.

For me, this is the real question behind "AI vs programmers":

Are we using AI to become better engineers, or using it to avoid the hard parts of engineering?

I would love to hear your view:

  • In your team, are you shipping faster but learning less?
  • Are technical conversations improving or disappearing?
  • Do your leaders provide clearer context now, or just more pressure?
  • Would an AI-free collaboration block help your team?
  • What balance has worked for you so far?

If we can answer these questions honestly, we have a chance to keep progress without losing the human side of this profession.