How I Saved $70,000 a Year for My Company Using Vibe Programming

The term vibe programming is pretty new. It means that instead of writing code yourself, you use natural language to instruct an AI to do the coding for you. "Coding for you" is actually an understatement: AI can now handle end-to-end software development, from development to deployment, pretty neatly today. I don't even want to speculate about what it will do tomorrow.
Let me share my story about how this disrupts the whole software development ecosystem.
Context
I work at Brain Station 23. Due to the nature of my work, I usually end up trying out new technologies as they come along, and over the last 7–8 years I've managed to pick up skills that aren't exactly my primary skill set but do assist me as a Software Engineer.
I always believed in
“Jack of all trades but Master of One”
So I always learned anything that seemed to have future implications. Then one day our CTO came to me during a meeting: "We have this software we've been using to run surveys within the company, but it's getting expensive over time, since the licensing model is $7/user/month and we now have 800+ employees. That means almost 70K USD per year. Can't we build something like this on our own?"
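For reference, the licensing math works out like this (assuming a flat $7 per seat with no volume discount, which the vendor may or may not actually offer):

```typescript
// Annual licensing cost from the figures above:
// $7 per user per month, 800+ employees (flat per-seat pricing assumed).
const users = 800;
const costPerUserPerMonth = 7; // USD
const annualCost = users * costPerUserPerMonth * 12;
console.log(annualCost); // 67200 — "almost 70K USD" per year
```

With headcount still growing past 800, the real bill only goes up from there.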
Planning and Execution
I had been hoping to try out my vibe programming skills for a while, and I wanted to evaluate how far AI-engineered software can actually go. It may sound fine in theory, but we still need to run a full software lifecycle with AI to make sure it's viable.
First, I made sure we had a product owner who wasn't me, someone who could validate the product's quality. Since HR runs our surveys, we got someone from HR to give me the requirements for the system. (I was trying to simulate the client requirements-gathering process.) Once that was done, I needed to track my hours so that we could benchmark later.
After 30–40 work hours I had the first deployable version of the product, so I put it up for client (HR) review. They tested it and came back with a few issues, which I was able to fix in about 10–20 minutes each.
Once HR was happy, I focused on polishing the code and tying up loose ends, to make it a product that could be deployed anywhere. Overall, I logged around 80–90 work hours.
The product has been handed over, HR has run their first survey, and they are pretty happy. So, in short, that went well. But as an engineer, that's just one parameter I wanted to measure. We also need to know the viability of the entire development pipeline, and, of course, its efficiency.
Traditional Effort vs Vibe Programming Effort
I went to multiple Business Unit Heads of our company and showed them what I had built: the survey platform, its features, and what can be done with it, in a bare-minimum fashion (their time is valuable). Then I asked: how much do you think developing a system like this would cost us, and how long would it take? Give me an estimate.
All three gave me similar cost and timeline figures, which I already expected but needed external validation for, to make sure my data wasn't something I had made up. According to them, building such a system would cost something like 20k–30k USD and take around 6 person-months of effort.
Let me try to show what this means.
- 6 person-months' worth of work was done in less than 100 hours.
- 20k–30k USD worth of work was done for less than 100 USD in tooling costs [plus my hourly rate, of course, which still isn't anywhere near the 20k–30k range].
And just a reminder: this estimate was a minimalistic one. They didn't see the entirety of the system.
Code Quality?
Okay, we need some more critical validation. It's AI, and I don't trust AI to code properly. What about quality? Is the code maintainable? How does it fare against a human codebase? Luckily, I had just the tool and the case to draw a contrast: another product of ours has been in development for a while, built the traditional way. I ran both repos through SonarCloud static analysis and got this report.

Open Office Survey is the repo generated almost entirely by AI (99% of the code was written by AI).
Tracker-23 is the repo developed by multiple human beings.
Interestingly, both are TypeScript projects with similar lines-of-code counts, as you can see in the screenshot, so the comparison gives us a reasonably good sense of the difference.
Based on this reference point, the AI-generated code is:
- 10x more reliable
- 2x more maintainable
- 2x lower in duplication
[Of course, these numbers would vary with the reference point.]
Safety?
I think there are multiple questions to answer here:
- Is the AI respecting the licensing terms of the libraries it uses?
- Is it pushing secrets to the repository?
- How secure is the system compared to traditionally developed software?
OSS Review Toolkit
The way I ensured the AI wasn't using libraries it shouldn't was by setting up a GitHub workflow using ORT (OSS Review Toolkit). It's an amazing tool that does a lot of things, but I only needed the action that checks every package license, and its dependencies' licenses, against our policy. If the AI had pulled in any package with a restrictive license, the workflow would have failed.
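A workflow along these lines can look roughly like the sketch below. To be clear, this is not the author's actual file: the action name and its `run` input are my reading of the ORT CI GitHub Action's documentation, and you should verify both against the current ORT docs before using it.

```yaml
# .github/workflows/license-check.yml — a hedged sketch, not the exact setup used.
name: License policy check
on: [push, pull_request]
jobs:
  ort:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # The ORT CI action analyzes declared dependencies and runs the
      # evaluator against your license policy rules; a policy violation
      # fails the job (the "gatekeeper" behavior described above).
      - name: ORT analyzer + evaluator
        uses: oss-review-toolkit/ort-ci-github-action@v1
        with:
          run: 'cache-dependencies, analyzer, evaluator'
```

The key idea is simply that the evaluator step turns your license policy into a required CI check, so a restrictive license can never land on the main branch unnoticed.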

As a result, I always knew if the AI was using libraries it shouldn't be using; the workflow acts as a gatekeeper that enforces our licensing policy.
GitGuardian
To make sure the AI wasn't pushing secrets to the repo, I set up another workflow that scans the codebase for secrets using GitGuardian.
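A minimal version of such a workflow might look like this. This is a sketch based on GitGuardian's published `ggshield-action`, not the author's exact configuration; check GitGuardian's documentation for the full set of environment variables their action expects.

```yaml
# .github/workflows/secret-scan.yml — a sketch, assuming the ggshield GitHub Action.
name: Secret scan
on: [push, pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # ggshield scans commit history, so it needs full depth
      # Scans the pushed commits for hardcoded credentials; a detected
      # secret fails the job before it reaches the default branch.
      - name: ggshield secret scan
        uses: GitGuardian/ggshield-action@v1
        env:
          GITGUARDIAN_API_KEY: ${{ secrets.GITGUARDIAN_API_KEY }}
```

The API key itself lives in the repository's encrypted secrets, so the scanner never becomes the thing that leaks a credential.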

And yes, the AI did try to push secrets a few times, although they were hallucinated duds: it made up imaginary secrets and pushed the code anyway. My protection system triggered immediately and let me know our AI wasn't following the rules.
Security Audit
I've sent the AI-generated code to our internal audit team for a report on how good or bad it is. I don't have the report at hand yet, but I'm actually quite confident, because we get some security by default: we mostly followed established standards.
Convention over Configuration
For example, we use OAuth 2.0 with Microsoft Entra login, so the auth system isn't really a concern here; at least, the AI doesn't have much room to get it wrong.
The backend runs entirely on Supabase, and we have RLS (row-level security) policies enabled for each table (or at least most of them). Without a proper role token you can't get at the data even if you know the table names, and you only get that token through authentication. On top of that, search and most other functionality go through stored procedures.
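The RLS pattern described above can be sketched in plain Postgres SQL. The table name and the policy are illustrative, not the real schema; the `authenticated` role is Supabase's standard role for logged-in users.

```sql
-- Illustrative sketch of the RLS setup described above ("surveys" is a
-- made-up table name, not the actual schema).
alter table public.surveys enable row level security;

-- Only requests carrying a valid authenticated-role JWT can read rows;
-- anonymous requests get nothing back even if they guess the table name.
create policy "authenticated users can read surveys"
  on public.surveys
  for select
  to authenticated
  using (true);
```

Because the policy is enforced inside the database, it holds no matter what the AI-generated application code does; a forgotten check in the frontend or API layer can't expose the data.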
I have GitHub's automatic security alerts enabled, so any package that might be vulnerable is flagged immediately.
And interestingly, I can show the difference in the security issues GitHub found for the AI-generated software versus the human-developed one.


Aside from that, the Sonar analysis didn't report any known security issues either.
So overall it's pretty secure. But once we get the manual audit report, we'll be more confident. (And we can always polish things if an issue turns up.)
Summary
- User Acceptance Testing — Passed
- Development Cost — Significantly low
- Development Time — Significantly low
- Code Quality — Acceptable
- Security Quality — Acceptable
- Overall Status — Ready for Production
AI engineering tools
Now it's time for a shout-out to the tools that let us supercharge our development. There are plenty on the market already, and you probably know a few, but here are my recommendations.
- Lovable.dev: This is probably at the top of the food chain. It does a lot of things right; however, it's focused only on the React + Supabase combo.
- https://bolt.new/: This is probably the next best thing. It covers almost everything that's possible.
- https://bolt.diy: This is the self-hosted version of Bolt. Eventually we'll have more tools like this, and everyone will have their own AI engineer at home.
Conclusion
Software engineering as we know it is reaching its end. This is a major paradigm shift; whether you like it or not, it's happening, and everyone has been handed a cannon.
AI won't replace you. Someone with AI certainly will.