I’ve been sitting with a question for a while now. Not a technical question. A personal one. What am I, exactly?
I graduated from NIT Warangal in 2023 with an electronics degree, ended up in software, and within two years found myself staring at Cursor completing my code and thinking — something is wrong here. Not with the tool. With what the tool revealed. The thing I was building my career on was quietly shifting underneath me.
Most of my peers didn’t see it. Or didn’t want to. I did. So I left my job.
I read Anthropic’s 2026 Agentic Coding Trends Report recently. It’s a vendor report — worth keeping that in mind. But one line stopped me cold.
“The key to success lies in understanding that the goal isn’t to remove humans from the loop — it’s to make human expertise count where it matters most.”
That sentence is doing a lot of work. It’s framed as reassurance, and it is. But underneath it is something more uncomfortable. If the goal is making human expertise count where it matters most, the question every engineer should be asking themselves is — do I actually know where that is for me?
Most don’t. And that’s a problem.
I think a big change is happening in engineering careers right now, and I haven’t seen anyone say it clearly enough.
AI is making good, experienced engineers dramatically better. Their accumulated judgment — the taste they built through years of doing things wrong first — is being amplified. They move faster, think wider, ship more.
But the same thing that’s helping seniors is quietly devastating juniors. Not because they’re being replaced. Because the reps that built judgment are disappearing.
You used to get good by doing the easy stuff first. Fix small bugs. Write utility functions. Build CRUD features. Through that grind you developed an internal model of how systems break, where complexity hides, what good code actually feels like. That was the ladder.
AI does the easy stuff now.
So juniors are being asked to orchestrate AI agents before they’ve developed the intuitions needed to evaluate what the agent is producing. The AI is confident. It doesn’t flag its own subtle wrongness. And the junior without taste doesn’t know what they don’t know.
The output looks fine. The learning isn’t happening.
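
To make that concrete, here’s a contrived illustration, invented for this post rather than taken from any real tool. It’s the kind of generated Python that reads clean and passes a single happy-path check, while carrying a classic trap (a mutable default argument) that you mostly learn by getting burned:

```python
# Contrived example of plausible-looking generated code with a subtle flaw.
def collect_tags(records, seen=set()):  # bug: the default set is shared across calls
    """Return tags not seen before, in first-appearance order."""
    fresh = []
    for record in records:
        for tag in record.get("tags", []):
            if tag not in seen:
                seen.add(tag)
                fresh.append(tag)
    return fresh

print(collect_tags([{"tags": ["a", "b"]}]))  # ['a', 'b'], as expected
print(collect_tags([{"tags": ["a", "c"]}]))  # ['c'] only; 'a' silently vanished
```

A senior smells the shared default in seconds because they’ve debugged it before. A junior who never wrote this class of bug by hand has no alarm to trip.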

In five to ten years we may have a generation of engineers who are excellent at operating AI tools but genuinely cannot diagnose a hard problem without them. And they won’t know that about themselves until the moment the tools fail.
I was two years into my career when I saw this coming. Not perfectly, but clearly enough to act. I left my job. I spent four months trying to build a SaaS. It failed — not because I couldn’t build, but because I didn’t know how to market. Four months after that, I joined Zomato, where I’m now building and leading an AI evaluation platform and SDK.
Eval is one of the hardest problems in AI right now. Nobody has really solved it. And yet I found myself feeling empty a lot of the time. Not because the work isn’t meaningful — it is. But because I was in between identities and hadn’t named it.
I wasn’t purely an engineer anymore. I wasn’t yet a creator with a real audience and clear positioning. I was straddling both and getting full validation from neither.
I had 20K followers on Twitter and 28K on LinkedIn, all technical, all following for ML content. “How does X model work.” Decent impressions. No real differentiation. No clear conversion. Distribution, but not really an audience.
I had four reasons for making content: building proof of work for future opportunities, monetizing attention through sponsorships, building a hedge in case my job disappeared, and eventually having eyeballs when I built something. Four coherent reasons. But the content I was making served none of them well.
“How X model works” is commodity content. Anyone can make it. It builds no unique positioning, and it builds no trust — only familiarity.
What I actually have, and hadn’t named clearly until I thought hard about it, is a specific vantage point. I’m not a tech educator who learned about AI. I’m an engineer who felt the disruption viscerally, made bets on it, built something that failed, came back, and is now deep inside the infrastructure of AI evaluation at scale. I understand how models fail from the inside — not from reading papers.
The audience I have doesn’t need more explainers. They need judgment. How to think about what to build. How to evaluate AI output. How to stay relevant as the ground shifts.
And the thing is, I’m still not good at this. I’m learning it in real time. I can’t pull from a fixed pool of knowledge; I have to build the pool while drawing from it. Every piece of content I make will force me to actually figure the thing out, because teaching it demands that. My job, my content, and my learning can all point in the same direction.
The positioning I’ve landed on is this.
Classical engineering depth meets the AI orchestration era. I post not on how MapReduce works, but on why MapReduce still matters and how to make your coding agent actually work with it. Not blabbering, not summarizing — teaching what I’m learning, bridging what I know deeply to what’s new, helping engineers make better decisions about what still requires their irreplaceable judgment.
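
To show the flavor, here’s a toy sketch of the MapReduce pattern itself, nothing distributed, just the two properties that make the whole idea work: a map that runs independently per input, and a reduce that merges with an associative operation.

```python
# Toy sketch of the MapReduce pattern; nothing here is actually distributed.
from collections import Counter
from functools import reduce

docs = [
    "the map step runs independently per document",
    "the reduce step merges partial results",
]

# Map: each document becomes partial word counts. Because each call is
# independent, a real system can fan these out across machines.
partials = [Counter(doc.split()) for doc in docs]

# Reduce: merge partials with an associative operation, so merging can
# happen in any grouping and in parallel as results arrive.
totals = reduce(lambda a, b: a + b, partials, Counter())

print(totals.most_common(3))  # [('the', 2), ('step', 2), ('map', 1)]
```

Independence and associativity are exactly the properties a coding agent won’t reason about for you when it shards work across a real cluster. That’s the bridge I want to teach.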
The newsletter is called The Conductor. (I arrived at the name after cycling through several others: fullstackagents, onepromptaway, modernai, and then this.)
I like that name because it captures the new engineering identity without abandoning the old one. A conductor doesn’t play every instrument. But a conductor who never learned music can’t make the orchestra coherent. The skills aren’t gone — they’ve changed form.
The description I settled on: For engineers learning to lead AI without losing the depth that makes their judgment irreplaceable.
I’m still figuring this out. I won’t pretend otherwise. But this is the clearest I’ve been about what I’m doing and why.
The financial freedom I want — the independence to navigate my career on my own terms, to not panic if a job disappears, to build something real when the time comes — doesn’t come from one big bet. It comes from compounding small ones consistently from a position of genuine expertise.
The engineers who thrive in this era aren’t the ones who resist AI or blindly trust it. They’re the ones who’ve done enough hard work to know what good looks like — and can therefore evaluate what the AI produces. Taste isn’t a soft skill. In this era, it’s the core one.
I’m building an audience of engineers who want to develop that. Starting with myself.