Gone are the days of AI influencers promoting 100X productivity gains through vibe coding.
As developers have put vibes into practice, they’ve come away disillusioned by the limits of the technology.
That’s not to say the underlying LLMs won’t keep making strides in AI-assisted coding. Models will improve and agentic systems will improve, but vibe-coders must come to terms with a basic reality: complex code changes are under-determined by simple natural-language directives.
It’s why these tools are great at standing up a demo, but the wheels fall off once you’re optimizing a complex app that’s worth its maintenance cost.
Another way vibes come up short: introducing new ideas not well-represented in the mode of the training data.
It’s why Cursor is in talks to send telemetry to its most dangerous competition: the big AI labs, who can make their own fork of VSCode.
It’s also why at Remyx AI, we consume from the greatest source of divergent technical thought on the internet: the arXiv.
As software implementation costs continue to drop, the bottleneck in development will shift toward scaling ideation. That’s why we’re building the ExperimentOps platform to accelerate the phase of development overlooked by DevOps and MLOps devtools.
The biggest shortcoming of vibe-coders isn’t even the technology, it’s the vision. Like productivity tools from a bygone era, they offer the illusion of value. But in reality, developers are babysitting agents and manually reintroducing critical pieces of code that were ‘refactored’ out of their repos.
Far from the promise of co-working with a ‘junior engineer,’ these tools force developers to treat every day like they are onboarding a green developer, constantly reintroducing context that a human would grep much faster.
Another way vibes fail is in validating your implementations. For the last couple of years, AI engineers have been wandering in the dark looking for that magic bullet which would allow them to cheaply evaluate offline so they can ship with confidence.
But AI doesn’t yet understand causality, so you end up getting plausible-sounding nonsense instead of the deeper insights that come through experience and experimentation.
With ExperimentOps, we offer an alternative vision to the AI-slop nightmare and the illusion of productivity. It’s based on scaling the scientific method, applying the same battle-tested workflows adopted by the most successful technology companies.
We advocate for applying science-backed methodologies like randomized controlled trials after ramping up from cheaper offline evaluation techniques. This is how you prove what works instead of praying the vibes work out in your favor.
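As a concrete illustration of the RCT step, here is a minimal sketch of a two-proportion z-test you might run on a randomized rollout of a code change, comparing a success rate (e.g., task completion) between control and treatment arms. The function name and the counts below are hypothetical; it uses only the Python standard library.

```python
import math

def two_proportion_ztest(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for a difference between two proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return z, p_value

# Hypothetical rollout: 1000 users per arm, 120 vs. 150 successes
z, p = two_proportion_ztest(successes_a=120, n_a=1000,
                            successes_b=150, n_b=1000)
```

If the p-value clears your pre-registered threshold, you have evidence the change helped; if not, the vibes didn’t earn their keep.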
By mapping each new code change to the low-dimensional space of practical engineering challenges you’re facing, our system can help you learn what delights your users so you can double down on it.
Many organizations struggle to transfer institutional knowledge to new employees, and the future of AI-assisted engineering will support this need by externalizing that context and using it to improve recommendations for what’s next.
By closing the loop between ideation and validation, while designing smarter workflows to reduce the cost of implementation, we aim to offer a more promising vision for the future of software engineering.
One where you are not stuck between a boss who wants you to vibe harder and a ‘junior engineer’ you’d can for needing constant monitoring.
Try GitRank on your next PR today.