Devmio: Is “vibe coding” part of your daily work? If so, where exactly?
Thomas Mahringer:
Vibe or agentic coding is part of my daily work. I use it every day in different contexts (website development, software architecture support, developing new software components, adapting existing systems, etc.) and with different approaches (full-bundle uploads, agentic coders like Roo, Claude Code, Cursor, etc.).
Christoph Henkelmann:
Yes—basically whenever I’m developing or handling administrative tasks. When working on the console, I use it more as a sparring partner or tutor (“Which arguments do I need for rsync if I want to […]?”, “What’s the correct command for […]?”). When programming, I use it to write code based on precise specifications, especially in standard cases. When things become more specialized and I notice that I’m leaving the agent’s or LLM’s “comfort zone,” I go back to implementing individual parts manually as I used to. I switch approaches depending on the task.
Rainer Stropek:
Yes, constantly. Vibe coding has become an integral part of my daily work. It allows me to build prototypes quickly, and those prototypes are extremely valuable in digital product development—whether they’re technical prototypes or UX-focused concepts. When vibe coding is based on a clearly defined goal, it becomes spec-driven development. At that point, good, production-ready code emerges.
From my perspective, vibe coding itself isn’t really new. People have been doing it for as long as I’ve been in the industry—more than 30 years now. The only thing that has changed is who provides the “vibes.” In the past, it was product planners; today developers can pass them directly to AI.
Paul Dubs:
Yes, it’s definitely part of my daily workflow, although we follow a specific process we internally call “Omega Programming.” It resembles pair programming more than the hands-off delegation people often associate with vibe coding. Since I mostly work in small, experienced teams, we allow ourselves to develop a large portion of new code with AI assistance. That applies to almost every discipline.
In principle, I use AI for tasks where the details are basically always the same—essentially anything I would traditionally delegate to a junior developer. Today I offload that to AI. Since the advances in models like Claude 4 and especially the Claude 4.5 versions released in late 2025, AI has become capable enough that you can confidently assign it larger tasks, as long as they're properly supervised.
Pieter Buteneers:
The answer is clearly: yes. For me, it’s simply a way to significantly speed up work. I’m primarily a Python developer, but some time ago I started working with TypeScript and I’m not an expert yet. With vibe coding, I can write much more code in less time.
For many small bug fixes, we can simply describe the issue and it gets fixed immediately. I use Cursor. My colleagues usually use Copilot Code, which is better in some ways but not always as well integrated, and a bit slower. Still, for small bug fixes it often solves the problem right away, provided you know where the bug is and what the issue is.
It gets harder with more complex bugs. In general, I write most of my code using vibe coding, but that doesn’t mean I don’t review it. I often have to tell the agent twenty times to change something here or there. Even though we have an agents.md file where we describe the code structure and our coding requirements, it sometimes ignores it.
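An agents.md file like the one mentioned above is project-specific; as a purely hypothetical sketch (all paths and rules below are invented for illustration, not taken from the interviewee's project), it might contain entries like this:

```markdown
# Agent Guidelines

## Code structure
- `src/api/` — REST handlers only; no business logic here.
- `src/services/` — business logic; one service per domain concept.

## Coding requirements
- TypeScript strict mode; no `any`.
- Reuse existing types from `src/types/` instead of redefining them.
- Every bug fix must come with a regression test.
```

As noted, agents don't reliably follow such files on every turn, which is why the generated code still needs review.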
But overall, yes—I use it every day for almost all of my programming tasks to move faster. Sometimes I have to throw everything away and start from scratch because it’s garbage, but I still use it.
Devmio: Where do you notice that vibe coding is not used effectively? (e.g., loss of understanding, copy-paste mentality)
Thomas Mahringer:
You notice it when developers—including myself—get stuck in a kind of trial-and-error loop. Developers delegate a (sub)task to the vibe coder. The vibe coder gives useful hints and generates code that looks reasonable. Often, despite precise instructions (“Here is the plan as Markdown”), something small doesn’t fit: wrong variable names, incorrect imports, repeated generation of similar types or structures, and so on.
Because of that—and because you’re not “deeply involved” yourself—you trigger another generation. Some errors get fixed, but new ones appear. Such a loop can last many iterations.
The reason for these loops is that LLMs are probabilistic systems. They optimize for plausibility, not system coherence, and they don’t possess a global architectural model. Due to limited context windows (200k to 1 million tokens), agents often only send partial context.
The problem is that developers gain little insight because the cognitive effort is delegated. After ten iterations you may still not understand what’s happening in the code, framework, or component—you’re basically a passenger. It’s like delegating work to another developer.
Meanwhile, it constantly consumes tokens. It’s easy for a developer to spend €100–200 per day, even with “Max” plans. Claude’s “Max 20” subscription, for example, has a limit of around 900 messages per five hours. Vibe coders burn through that quickly because—often invisibly to the user—they repeatedly send pieces of context to the AI.
This means that, with current pricing, costs can easily reach several thousand euros per developer per year and providers can raise their prices at any time.
LLM agents and chat tools are designed to be highly engaging—meaning they try to keep users interacting as much as possible. For instance, I’m currently using one of the cheapest API models (Gemini 3.0 Flash Preview, pay-as-you-go), which costs only fractions of a cent per token and request. Yet during a complex session (about two hours in a 30–50 LOC project), I end up paying about €10 per hour. I also frequently hit limits (“Quota exceeded”—1 million tokens per minute) when the agent sends many large-context requests and context caching doesn’t work.
AI companies are spending billions on marketing—both traditional and content marketing. You constantly see “organic” posts describing how tool X “autonomously built a compiler” or “created a game by itself.” Sometimes it feels like The Emperor’s New Clothes: if someone points out limitations of vibe coding, people immediately respond with examples like “But I generated an interface for my Raspberry Pi!”
A more subtle issue occurs when tools generate code for the wrong framework version. The code compiles and runs, but it uses patterns from an older version. You usually discover that mistake much later.
Even worse are architectural or design errors that aren’t obvious at first because “it works.” I saw this recently in a low-code tool project. The tool generated multiple data type definitions that looked similar but weren’t identical. To “fix” the issue—even in “architect mode”—it suggested copying data back and forth between structures.
Christoph Henkelmann:
Vibe coding is very dangerous for beginners because it quickly creates the illusion of productivity. At that stage, you often can’t judge whether the result is correct or not. I worry that newcomers will have a harder time learning the fundamentals.
It requires a lot of discipline and reflection to recognize what you still need to learn and then leave the agent’s comfort zone to fully understand complex work—sometimes by programming things manually again. Otherwise, you risk security vulnerabilities, unmaintainable code, and ultimately a skills shortage in the next generation.
Rainer Stropek:
Vibe coding without a clear goal may be fun, but it has little in common with professional work. On the continuum between vibe coding and spec-driven development, I place myself closer to the spec-driven side. I usually have a fairly precise picture of what I need and how the code should be structured.
Without that target vision—or without giving the AI technical guardrails—you give up too much control and hand over the steering wheel to the AI.
Paul Dubs:
For me, the clear boundary is completely unsupervised, hands-off vibe coding where you let AI build entire projects on its own. Once you give up supervision, you lose your understanding of the codebase.
The primary artifact of our work as software developers isn't raw code; it's understanding the problem and its solution. Without that understanding, you fall into a copy-paste mentality: "It'll probably work."
Another problem with purely additive work with AI is that you build a “snowball” or “big ball of mud.” The AI keeps adding layers, and if the core was already wrong, you waste huge amounts of time instead of simply deleting the flawed code and starting over.
Pieter Buteneers:
The clear limitation is that AI constantly takes shortcuts. If there’s a quick hack that solves the issue immediately, it will often choose that. It doesn’t always analyze how the code was written in order to maintain the same standards. But it’s improving.
Devmio: Is vibe coding more of a junior boost, or is it also a real advantage for senior developers?
Thomas Mahringer:
In my opinion, with clear rules it can boost both juniors and seniors. For juniors, it’s useful for quickly researching new topics and improving algorithms or components. But it should be used as a knowledge base and coach, not as an autonomous programmer. For example, when reviewing React components it’s helpful because the tool often catches common mistakes such as incorrect hooks or unstable callbacks.
For seniors, my experience suggests one rule: the developer using the tool must be significantly better than the AI. This is especially true for architecture and design topics. The developer must be able to immediately spot when something is wrong.
Where is it useful? As an idea generator, for generating certain algorithms, creating complex type definitions (e.g., TypeScript union types or generics), detecting specific errors, and for prototyping.
Christoph Henkelmann:
Actually, it’s more the other way around. You need a lot of experience to use these tools effectively. At least until new educational standards emerge for training junior developers.
Vibe coding is not a multiplier for programming ability, it’s an exponent. If your skills are weak, the results get worse and you lose time. The less experience you have, the less you should rely entirely on agents.
For example, when doing system administration I only use LLMs as a tutor. I’m not experienced enough to supervise the work closely, and if I outsource everything, I stop learning. But when programming a Java server, I can delegate much more to the agent because I immediately see when it’s going in the wrong direction. Vibe coding is more of a boost for senior developers.
Rainer Stropek:
Vibe coding can be useful regardless of experience level or age. What matters is how you use it. It’s a new trend for everyone.
Junior developers often lack practice in formulating clear and structured instructions and in managing a digital “team” of coders. Senior developers, on the other hand, sometimes focus so much on risks that they overlook the opportunities.
In the end, the mix is what matters. Seniors need the energy and experimentation of juniors, while juniors need to learn from seniors what it takes to succeed in long-term software development within larger teams.
Paul Dubs:
I actually see vibe coding as a much bigger benefit for senior developers. Seniors already have the necessary abstractions in their heads and know from experience how problems are typically solved. They immediately recognize when the AI is heading in the wrong direction and can intervene early.
For juniors, however, vibe coding carries a risk. You get quick results but not necessarily real wisdom. Wisdom often comes from struggling over time. If juniors skip that craftsmanship phase, they build a fragile house of cards that will become a burden.
Pieter Buteneers:
Honestly, experienced developers benefit much more from vibe coding than juniors. A junior can certainly produce things with it, but the result can be spaghetti code. That might work for ten pull requests, but after that it becomes very fragile. An experienced developer can look at the code and say, “Okay, this isn’t right.” You can use it to work on several things at once.
I often work on two tickets in parallel: two versions of Cursor running side by side. I work on something, and when one finishes, I review it and then check the other. That also frees time for things like support tickets.
Switching between tasks used to be costly when I wrote everything myself. Now it’s easier. I just review the generated code and move on.
We’re now a team of four, but we used to be three people: two other very experienced colleagues and me. The amount of work we can get done now is incredible. Vibe coding gives us wings to build things faster.
Tools like Coder Rabbit find bugs that we never would have caught ourselves and that customers might otherwise have discovered a month later.
Devmio: How deeply should software developers dive into the fundamentals of machine learning today?
Thomas Mahringer:
Machine learning also includes "traditional" statistics, big data, data retrieval/mining, and predictions based on regression analysis models. In my opinion, it makes sense to know this area well if you develop software in that domain. For using vibe coding more effectively, however, it doesn't play a major role, since vibe coding operates on a different level.
Christoph Henkelmann:
Just as developers should have some basic knowledge of computer graphics, operating systems, and networking, I believe a basic understanding of machine learning is important today. Not everyone needs to be able to train models themselves, but a general understanding helps when using these systems properly and when working across teams.
Rainer Stropek:
You don’t need to be an ML expert to use AI successfully in software development. For me personally, a solid foundation is enough. Deep expertise becomes necessary at the level of APIs and SDKs used to access cloud-based or local LLMs. Anyone who dives into that layer and explores all relevant aspects in detail already has more than enough to deal with. That knowledge is essential for using AI effectively and purposefully as a coding partner.
Paul Dubs:
It depends greatly on the direction you want to develop professionally. For simply using generative AI in everyday development work, a deep dive into the mathematics behind it isn’t necessary. Classical machine-learning foundations are mostly statistical and stochastic mathematics. Knowing the exact order of matrix multiplications or how specific activation functions work isn’t particularly helpful for day-to-day vibe coding. For that reason, I don’t think these traditional mathematical ML basics necessarily have to be part of every standard software engineering curriculum today.
Tam Hanna:
At the very least, a basic understanding of what you can obtain from an AI system is absolutely essential today. Otherwise—take machine vision as an example—you risk reinventing the wheel. In an era of ever-accelerating product cycles, even in the embedded market, that’s not a viable allocation of resources.
Devmio: Should ML basics be part of every software engineering education? Why or why not?
Thomas Mahringer:
Just as children and teenagers often lack the tools to deal with social media responsibly, many developers lack the tools to work effectively with AI coders.
That’s why we need a new approach to developer education, both in formal training institutions and on the job within companies. Traditional computer science courses are no longer enough. This new type of education is more about personal development: How much do I know? How much do I want to know? Am I willing to invest cognitive effort? Is my motivation to acquire knowledge or simply to get things done quickly? It’s about impulse control and the ability to step back and reflect.
It would help if power users evolved into "specification and black-box testing specialists." They define precisely what is required and then let the agent run until all black-box tests pass successfully.
The catch is that to do that, you still need highly algorithmic thinking as well as strong specification and testing expertise—essentially, you still need developers. Whether this effort is actually less than understanding the software properly from the start remains an open question.
In my view, there is a real need for action. We need better developers, not fewer skilled ones, so that we can properly train the next generation. The guiding principle should be that the developer must be better than the tool.
In many areas—music, image generation—we’re already seeing people who have a pseudo-feeling of productivity through AI. In reality, they’re “prompt monkeys” with little understanding of the concepts behind it.
(See also: Wired, June 2025: “Vibe Coding Is Coming for Engineering Jobs.” The article describes the paradox that, despite the boom in AI-generated code, a deep understanding of programming has become more important than ever. Users without technical knowledge hit dead ends when code breaks and they have no idea how to fix it.
And Wired, October 2025: “Vibe Coding Is the New Open Source—in the Worst Way Possible.” It warns that while vibe coding enables fast prototyping, it also creates “accidental architectures” and security risks because developers often give up control over how the code works.)
Rainer Stropek:
Yes. Even though ML fundamentals aren’t strictly required for AI-assisted coding, having an understanding of the internal structure and functioning of AI systems certainly doesn’t hurt.
Pieter Buteneers:
To be honest—and this may sound strange coming from the program chair of Amelcon—when you look at what AI can do today, the need to develop and train your own machine-learning algorithms is practically zero. The tools are improving every month. Image recognition, for example, has progressed to the point where in most cases you no longer need to train your own models.
You can still achieve much more beyond language models, but it requires work. For most applications today, AI is already advanced enough that you don’t necessarily need to deal with the fundamentals of machine learning.
On the other hand, it’s easier for me to use these models effectively because I understand how they work and how they are trained. But even there, the gap is shrinking. The models are improving, they understand more, and you can achieve good results even without a full understanding of how they work. AI is increasingly becoming a tool that you simply learn to use, rather than something whose internal workings you must understand in every detail.
Tam Hanna:
At the very least, understanding which AI systems operate deterministically is extremely important. How the models work internally is less important—after all, no one implements them manually anymore.
Devmio: Is it enough to “use AI correctly,” or do you also need to understand it?
Thomas Mahringer:
No, you should understand the basics. How do LLMs work? What is their probabilistic nature? How do the four layers of vibe tools work? How do vector databases and semantic search function?
If you understand these fundamentals, you can evaluate and use vibe coders much more effectively. For example, a developer understands that just a few requests can generate tens of thousands of tokens that must be paid for. They also know that semantic search (preparing context fragments) can be done locally on the developer’s laptop and is free.
Christoph Henkelmann:
You don’t have to go very deep, but in my opinion, you do need a rough understanding—tokens, the stochastic nature of the models, and so on. I don’t need to understand an engine in every detail to drive a car, but I should know what a gearbox is so I can shift gears and understand what happens when I press the accelerator.
Rainer Stropek:
It’s certainly possible to work successfully with AI without understanding the details behind it. In my daily work, I’ve seen impressive examples of domain experts with no software or ML knowledge use vibe coding to create solutions that massively improved their work.
However, anyone with an IT-related education should be able to step in when AI makes mistakes or needs precise technical guidance. For that, some background knowledge is indispensable.
Paul Dubs:
It's not enough to simply type commands; you should also understand the behavior and abstract mechanics of AI. With today's dominant large language models, it helps to know that they essentially generate one token after another and often operate within role-playing dynamics, somewhat like improvisational theater.
For example, if an AI makes mistakes and you repeatedly point them out in conversation, it may adopt exactly that role of the “mistake-making partner.” If you understand this, you know that it’s often more efficient to clear the context and start over rather than endlessly correcting the AI.
You also need to consider what type of model you’re dealing with. At the moment, autoregressive models dominate, but it’s unclear whether that will remain the case. Knowledge about “how to use it correctly” can quickly become outdated. Understanding the underlying mechanisms allows you to adapt more easily.
Pieter Buteneers:
It really depends on what you want to use it for. If you’re doing vibe coding, using a bit of prompt engineering and entering some text to get an output, then you don’t really need to understand what’s happening behind the scenes.
But if you want to stay at the cutting edge or work on things that go beyond language—like truly advanced processing—you still need to understand how the models work, because you may need to train your own models.
For the average user, it’s not necessary. It’s a bit like driving a car: many people can drive, but very few truly understand how the mechanics work. AI is reaching the stage where many people can use it without knowing what’s under the hood.
Two years ago, the decisive moment for me was when ChatGPT was announced. Back then it was still called Malcon. I played around with it and almost fell out of my chair. I thought: “Wow, what is this? It actually understands what I’m saying.”
And even then, compared with today’s models, it was still very primitive. But I always said at conferences that we already crossed the language barrier years earlier. We already had models that could process language better than humans. That barrier had already been broken, and then suddenly ChatGPT appeared, based on a model that was already two years old and had only been slightly fine-tuned. It wasn’t a new model, just an older one trained in a different way.
I remember thinking: “We could have had this two years ago.” Then GPT-4 came out, a huge leap forward. These models performed much better. For most people, the real shift began with GPT-4, or when it became affordable, but the change had been coming for quite some time.
🔍 Frequently Asked Questions (FAQ)
1. What is vibe coding in software development?
Vibe coding refers to using AI-powered coding assistants such as Copilot, Cursor, or Claude Code to generate, modify, and debug code. Developers provide instructions or context, and the AI produces code suggestions. It is often used for prototyping, automation, and repetitive tasks.
2. How do developers use AI coding tools in daily work?
Developers use AI tools for tasks like writing standard code, debugging, generating prototypes, and answering technical questions. Many treat AI as a “pair programming partner” or tutor. Usage varies depending on task complexity and developer expertise.
3. Where does vibe coding provide the most value?
Vibe coding is most effective for repetitive tasks, prototyping, generating algorithms, and fixing small bugs. It can significantly speed up development workflows. It also helps developers explore unfamiliar technologies more efficiently.
4. What are the main risks of vibe coding?
Key risks include loss of understanding, poor architecture decisions, and reliance on incorrect or outdated code patterns. Developers may fall into trial-and-error loops without fully grasping the system. This can lead to unmaintainable or insecure code.
5. Why can vibe coding lead to a “copy-paste mentality”?
When developers rely too heavily on AI-generated code without reviewing it, they may lose insight into how the system works. This creates a situation where code is accepted because it “works,” not because it is correct. Over time, this reduces code quality and maintainability.
6. Is vibe coding more useful for junior or senior developers?
Vibe coding benefits senior developers more because they can better evaluate and correct AI output. Experienced developers recognize architectural issues early and guide the AI effectively. Junior developers risk generating fragile or incorrect code without realizing it.
7. How does AI impact developer productivity?
AI tools can significantly increase productivity by automating repetitive tasks and enabling parallel work. Developers can handle multiple tasks simultaneously and iterate faster. However, productivity gains depend on proper supervision and code review.
8. Do developers need to understand machine learning to use AI coding tools?
A deep understanding of machine learning is not required for everyday use of AI coding tools. However, a basic understanding of concepts like tokens, probabilistic outputs, and model limitations improves effectiveness. This helps developers use AI more critically and efficiently.
9. What happens when developers rely too much on AI-generated code?
Overreliance can lead to poor system design, hidden bugs, and increasing technical debt. AI tends to choose quick fixes rather than optimal long-term solutions. Without proper oversight, this results in fragile and inconsistent codebases.
10. How should developers use AI coding tools responsibly?
Developers should treat AI as an assistant, not a replacement. Clear specifications, strong review practices, and architectural understanding are essential. The most effective approach combines AI speed with human expertise and critical thinking.