Markus Andrezak, who says he is currently experiencing his fourth technological revolution – after the internet, Agile, and mobile – will speak at the VibeKode conference in Munich (June 22-26) about exactly this misunderstanding. Markus is a software architect focused on product development; his correction is precise: “You can generate code for free, but not the software.”
AI software development at full speed – in the wrong direction
For decades, code production was the dominant bottleneck between what companies needed and what software engineers could deliver. A few years ago, various experts declared the so-called software crisis over: engineering had finally reached a point where it could deliver value on par with business demand.
Then something unexpected happened. Software engineers became the ones waiting: for user stories from the business side, for reviews, for feedback.
Now, with radical automation through AI, the balance shifts completely. Software is produced so quickly that new challenges emerge both upstream and downstream.
AI-generated code often looks convincing but is surprisingly superficial in critical areas: architecture, runtime behavior, deployment interactions. Exactly where systems need to be stable, the machine lacks real understanding – there it is not just limited, but often imprecise. Speed does not replace judgment.
The more dangerous bottleneck, however, sits upstream. Markus Andrezak: “If the machine runs on the wrong information, it’s nice that it produces massive amounts of code. But if none of that aligns with what the company actually needs, it’s useless.”
Imagine a corporation where executives meet every three to six months. Then it takes another month to turn decisions into polished PowerPoint slides. By the time the new direction reaches the team, two to three months have passed. Markus: “These processes are not accidentally slow – they are designed for that kind of speed. As long as implementation was the bottleneck, that worked. Now that same logic becomes the problem.”
A slow machine running in the wrong direction causes limited damage. A fast one multiplies it – all that speed is worthless when it points the wrong way.
Markdown files as core infrastructure
If the problem sits upstream, speed alone no longer helps. Something else becomes critical: context. The question is whether the machine actually knows what it is supposed to do.
Where does a company’s knowledge live? Traditionally, in thousands of neglected Confluence pages – written at some point, by someone, with good intentions. Accuracy and relevance? We know the answer. As long as humans work with it, that’s tolerable. AI does not tolerate it. It needs stable context – current, valid, maintained.
From this, Markus draws a conclusion that initially sounds surprising. He describes strategic documents as infrastructure. Yes, infrastructure. These documents are not optional documentation or strategy papers that can be skimmed.
When Markus sets up a new system for a client, he starts with two files: Company.md and Strategy.md. They define who the company is, what it does, and why – including customers, compliance requirements, and what “good” even means in this context. “I need at least two levels of reasoning in there so the AI can build properly – just like humans used to need to understand what makes their boss, and their boss’s boss, happy.”
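The article does not show these files, so the skeleton below is purely hypothetical – an illustration of the structure Markus describes (identity, purpose, customers, compliance, a definition of “good”), with each section carrying a why as well as a what, so the AI gets the two levels of reasoning he asks for:

```markdown
<!-- Company.md – hypothetical skeleton for illustration, not a real client file -->
# Who we are
A B2B payments platform for mid-sized European retailers.

# Why this, why us
Retailers lose margin to fragmented payment providers; consolidation is our wedge.

# Customers
Finance teams at retailers with 50–500 stores. They value auditability over feature count.

# Compliance
PCI DSS and GDPR apply. Anything touching cardholder data needs review before merge.

# What "good" means here
Correctness over delivery speed: one failed settlement costs more trust
than a week of delay. Optimize accordingly.
```

The point is not the specific sections but that each statement is paired with its reasoning, so the machine can resolve trade-offs the way the company would.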
These files live in a GitHub repository, versioned, reviewed, and shared across teams. Treated like code, not like vague prose.
In the age of AI, these markdown files form the core of a company and act as the interface between humans and machines. Markus: “I want to make context a first-class citizen, just like APIs and API documentation.”
This is where a new discipline emerges: context engineering. It is no longer about writing good prompts, but about defining context so precisely that the machine can work meaningfully at all. Those who can do this get surprisingly good results. Those who cannot will experience the same technology as unreliable or “dumb.”
For the people writing these documents, the implications are equally significant. What used to be considered a soft skill – analytical clarity, precise thinking, sharp wording – now directly affects the quality of AI output. Vague input gives the machine degrees of freedom you cannot control.
It is striking what this means for a topic long considered boring: standardization. What used to be dismissed as bureaucratic overhead is now critical. Things once avoided as “too much process” now determine how well the machine can operate. Only when context is clear and consistent can the machine work reliably.
Build first, decide later – a new AI development model
Boris Cherny runs ten terminals in parallel. In between, he walks his dogs and manages everything from his phone. He builds 20 to 30 feature ideas at the same time – not sequentially, not prioritized, but simultaneously. The decision comes afterward.
Cherny is not some random frontier developer. He is the key figure behind Claude Code at Anthropic – he built the tool he uses.
What he describes is not a working style, but a different way of thinking. Traditionally: think first, then build. With Cherny: build in order to think.
Markus Andrezak calls this “option storming.” “You explore all options. It’s a very lightweight process. You barely decide anything upfront – you let everything be built, and only afterward decide what to keep. You can skip all that overthinking.”
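Cherny’s ten terminals aside, the pattern itself – build many options concurrently, curate afterward – can be sketched in a few lines. Everything below is a stand-in for illustration: the builder represents an AI agent producing a draft, and the toy score represents whatever curation criteria a team actually applies.

```python
from concurrent.futures import ThreadPoolExecutor

def build_option(idea: str) -> dict:
    """Stand-in for an AI agent producing a working draft of one idea.
    The 'score' is a toy placeholder for real curation criteria."""
    artifact = f"prototype of {idea}"
    return {"idea": idea, "artifact": artifact, "score": len(idea) % 5}

# 20 feature ideas – not sequenced, not prioritized
ideas = [f"feature-{i}" for i in range(20)]

# Option storming: build everything in parallel, decide nothing upfront
with ThreadPoolExecutor(max_workers=10) as pool:
    options = list(pool.map(build_option, ideas))

# The decision comes afterward: curate the finished options, keep a few
keepers = sorted(options, key=lambda o: o["score"], reverse=True)[:3]
```

The inversion is visible in the structure: prioritization appears only in the last line, after everything has already been built.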
The closest analogy comes from photography. In the past, taking a photo required deliberation. You chose your subject carefully before pressing the shutter; a roll of 36 exposures might yield five usable shots if you were good. Today, you shoot 200 photos without hesitation. The skill has shifted: from careful planning to deliberate curation.
Software development is following the same path.
Feed in twenty customer interviews – three minutes later, you have a product requirements document. Not to replace the product manager, but to start working immediately. The document will not be perfect. It is a starting point.
Markus: “My point is not how to automate this so that I no longer need humans. On the contrary: the role of humans shifts – away from writing code, toward the decisions and discussions around it: what gets built, what gets discarded, and what counts as quality in the first place.”
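Mechanically, “feed in twenty interviews” largely reduces to context assembly. The sketch below is hypothetical – `llm_client` stands for any model API, and the file layout is assumed – but it shows why the context files matter: they ride along with every request.

```python
from pathlib import Path

def build_prd_prompt(interview_dir: str,
                     context_files=("Company.md", "Strategy.md")) -> str:
    """Assemble company context plus raw interviews into one prompt.
    The model call itself is deliberately left out - any LLM API would do."""
    context = "\n\n".join(
        Path(f).read_text() for f in context_files if Path(f).exists()
    )
    interviews = "\n\n---\n\n".join(
        p.read_text() for p in sorted(Path(interview_dir).glob("*.txt"))
    )
    return (
        "You draft product requirements documents.\n\n"
        f"## Company context\n{context}\n\n"
        f"## Customer interviews\n{interviews}\n\n"
        "Draft a PRD: problem, target users, requirements, open questions.\n"
        "Flag every inference that no interview directly supports."
    )

# draft = llm_client.complete(build_prd_prompt("interviews/"))  # hypothetical client
```

Note the last instruction: the draft is explicitly a starting point, with unsupported inferences marked for the humans who curate it.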
Implementation is no longer the problem
How are people affected by this shift? On one side are juniors who – in Markus’s words – have “eaten AI for breakfast.” They grow up in a world where generated code is the norm. On the other side are highly experienced engineers who have built, stabilized, and taken responsibility for systems over many years. For them, AI becomes leverage – not because they type faster, but because they know what to build.
The middle – solid developers, functioning teams, organizations that are “doing fine” – is calibrated for a world where speed was the bottleneck. That world no longer exists. Those who learned to execute requirements cleanly now face a different task: clarifying what should be built – and why.
What matters is no longer building, but judgment. What gets built – and what counts as good? Functionally, but also technically: stability, security, maintainability.
Many organizations are not prepared for this. Their processes are not just slow – they are designed for slowness. Coordination loops, committees, safety mechanisms all made sense when implementation was the limiting factor.
Now it becomes clear where the real inertia lies. Not in the code. Not in the teams. But in the structures that decide what gets built in the first place.
What past shifts in software development can teach us about AI
What we are describing may feel unfamiliar. It does not align with the roles and processes organizations were built on. But we have seen shifts like this before.
I remember a keynote by an Etsy engineer at our Java Enterprise conference W-JAX in 2015. At a time when many teams were releasing once per quarter, someone stood on stage talking about deploying 50 times a day – not as a vision, but as everyday practice. The gap felt enormous.
Today, we understand how that works. With AI, we are at a similar point again. The question is not whether this way of working will become standard. It will. The question is whether organizations can think it through ahead of time – and prepare accordingly.
🔍 Frequently Asked Questions (FAQ)
1. What does “you can generate code for free, but not the software” mean?
The article argues that AI can produce code very quickly, but that does not automatically create useful software. Software still depends on architecture, runtime behavior, deployment fit, and alignment with actual business needs.
2. Why does AI-driven software development create new bottlenecks?
For years, implementation speed was the main constraint in software delivery. Now that AI can accelerate code production dramatically, slower upstream activities like decision-making, prioritization, and feedback become the real bottlenecks.
3. Why is AI-generated code not enough on its own?
According to the article, AI-generated code can look convincing while remaining weak in critical areas such as architecture, runtime behavior, and deployment interactions. These are exactly the areas where stable systems require judgment rather than speed alone.
4. Why is context more important than prompts in AI software development?
The article positions context as the foundation for meaningful AI output. If the machine receives incomplete or incorrect context, it can generate large amounts of code that still fail to match what the company actually needs.
5. What are Company.md and Strategy.md used for?
Markus Andrezak describes these markdown files as core infrastructure for AI-driven development. They define who the company is, what it does, why it exists, who its customers are, and which compliance and quality expectations matter.