Artificial intelligence

The impact of LLMs on your company’s engineering culture

How do we ensure organizational knowledge and learning systems keep evolving as engineers adopt AI?

by Grégoire Mielle · Last updated on Jan 12, 2026

Your company’s engineering culture might be changing, and not for the better.

Engineers are happy. You gave them access to Claude Code or Cursor, and they feel empowered to ship more pull requests and deliver more value to users than ever before.

But something subtle changed. Your #frontend or #engineering Slack channels went quiet. Questions didn't disappear; they moved. Sure, we can celebrate that some of the noise, like code-style debates and TypeScript shenanigans, is gone. But what about the deeper questions? Architectural debates? The "why do we do it this way?" conversations?

Something valuable was traded away for individual speed: organizational learning.

In well-functioning teams, those questions drove developer-experience improvements, documentation updates, and mentorship conversations. Developers used to learn from indirect exposure to knowledge: debugging sessions that revealed how systems actually behave, rabbit holes that connected seemingly unrelated concepts.

When LLM-generated code works on the first try, engineers miss the feedback that would reveal gaps in their understanding. When they start offloading more than routine tasks to AI (which is totally fine), they risk losing the productive struggle that builds expertise, and are left with a feeling of mastery without the underlying mental models to support it.

Of course, organizational knowledge is not gone, whether it lives in Notion pages, internal wikis, or elsewhere. In fact, many teams have poured significant energy into making this knowledge AI-accessible: AGENTS.md files, codebase indexing, custom instructions, and so on.
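For readers who haven't met one: an AGENTS.md is just a Markdown file checked into the repo that coding agents read for project context before making changes. A minimal sketch, with entirely illustrative conventions, paths, and commands:

```markdown
# AGENTS.md

## Conventions
- TypeScript strict mode everywhere; avoid `any` without a justifying comment.
- API handlers live in `src/api/`, shared domain logic in `src/core/`.

## Commands
- `pnpm test` runs the unit suite; run it before proposing changes.

## Context agents tend to miss
- We use soft deletes: never hard-delete user data.
```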

Optimizing the input to those systems is great, but almost no one is systematically learning from the interactions themselves. We’re ignoring the most valuable data source we have for understanding the following (a rough sketch of what mining it could look like comes after the list):

  • Where our documentation is being supplemented or contradicted by LLMs
  • Which parts of our codebase generate the most confused questions
  • What concepts newcomers ask AI about repeatedly in their first 90 days
  • Which architectural decisions or tooling engineers are constantly working around
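To make that concrete, here is a minimal sketch of what mining these interactions could look like. Everything in it is an assumption: the `ai_interactions.jsonl` export, its `"prompt"` field, and the keyword-based topic map stand in for whatever logs and taxonomy your tools actually expose.

```python
import json
import re
from collections import Counter
from pathlib import Path

# Hypothetical export format: one JSON object per line with a "prompt" field.
# Most AI coding tools don't ship this out of the box; adapt to whatever logs
# your organization can actually collect.
LOG_FILE = Path("ai_interactions.jsonl")

# Example topic map: recurring themes mapped to keywords you care about.
TOPICS = {
    "auth": ["oauth", "token", "session", "login"],
    "build tooling": ["webpack", "vite", "bundler", "tsconfig"],
    "deployment": ["kubernetes", "helm", "rollout", "deploy"],
    "data layer": ["migration", "orm", "schema", "query"],
}

def classify(prompt: str) -> list[str]:
    """Return every topic whose keywords appear in the prompt."""
    lowered = prompt.lower()
    return [
        topic
        for topic, keywords in TOPICS.items()
        if any(re.search(rf"\b{re.escape(kw)}\b", lowered) for kw in keywords)
    ]

def main() -> None:
    counts: Counter[str] = Counter()
    with LOG_FILE.open() as f:
        for line in f:
            record = json.loads(line)
            counts.update(classify(record.get("prompt", "")))

    # The topics engineers ask AI about most are candidates for better docs,
    # onboarding material, or architectural cleanup.
    for topic, count in counts.most_common():
        print(f"{topic}: {count} questions")

if __name__ == "__main__":
    main()
```

A keyword map is crude, and a real pipeline would need consent and anonymization, but even a crude aggregation starts to show where engineers repeatedly lean on AI instead of your documentation.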

I’m not against LLMs; they genuinely expand what’s possible. The question is whether your organization is adapting its learning systems or just enjoying the speed boost while its knowledge quietly atrophies.

We don’t want to merely train engineers to be fast reviewers, but to be well calibrated about when to trust, when to verify, and when to push back.

What patterns are you seeing in how AI tools are changing knowledge-sharing in your engineering teams?