Using AI Coding Tools as a Junior Engineer

February 16, 2026 · 6 min read
AI · Career · Developer Tools · Opinion

The Skeptic Phase

When AI coding tools started gaining traction, I wanted nothing to do with them. My reasoning was straightforward — I was early in my career, still building intuition for how software systems fit together, and leaning on an AI felt like it would short-circuit that process. If the tool writes the code for me, what am I actually learning?

There was also a part of me that just felt like it was cheating. I'd spent years in school where using unauthorized help on assignments could get you expelled, and that instinct doesn't disappear overnight. Writing code yourself was the point. Outsourcing it to a model felt like skipping the part that mattered.

So I kept my distance. And honestly, the early results didn't challenge my position. Whenever I threw a genuinely complex problem at a coding agent — something involving multiple services, tricky state management, or a non-obvious architecture — the output would fall short. It would hallucinate APIs that didn't exist, lose track of context across files, or produce something that technically ran but was structured in a way that made no sense. Every time I solved something an agent couldn't, it reinforced my belief that these tools weren't ready for real work.

What Resistance Taught Me

Looking back, I'm actually glad I didn't rely on coding agents early on. That period of doing everything by hand gave me something I don't think I would have developed otherwise — a feel for how systems work at the big-picture level. When you're the one wiring up services, writing middleware, debugging deployment pipelines, and tracing requests through layers of abstraction, you build a mental model that sticks.

That mental model is what lets me use AI tools effectively now. I know what good architecture looks like, so I can tell when an agent produces something structurally wrong. I understand the tradeoffs between approaches, so I can steer a conversation with a model toward the right solution instead of blindly accepting whatever it generates. The foundation I built by coding without assistance turned out to be the thing that made assistance useful.

How I Actually Use Them Now

My workflow has settled into a pattern, especially for personal projects. I start by designing the system myself — thinking through the components, the data flow, how things should be organized. Then I lay out the project structure manually.

For example, when I built an MCP server recently, I created the skeleton first: an empty server.py, then tools/get_templates.py, and so on — files named and arranged in a way that made sense to me. Some had simple comments describing what each component should do, others were just blank files marking where functionality would live. Only after I had the architecture laid out did I bring in coding agents to fill in the implementation.
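To make that concrete, here's a hypothetical version of what one of those stub files might look like before an agent touches it. The path comes from the project above, but the function name, docstring, and spec comments are illustrative, not the actual code:

```python
# tools/get_templates.py — a placeholder marking where functionality will live.
# The docstring acts as the spec a coding agent later implements; everything
# here is a sketch, not the real project code.

def get_templates():
    """Return the templates this MCP server should expose as a tool.

    TODO(agent): load template definitions and return them as a list of
    dicts, e.g. [{"name": ..., "body": ...}].
    """
    raise NotImplementedError
```

The contents barely matter — the point is that the file names, layout, and docstrings encode the architecture before any generated code shows up.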

This approach works because it keeps the design decisions in my hands while offloading the parts where AI is genuinely faster — writing boilerplate, implementing well-understood patterns, translating a clear specification into code. The agent isn't making architectural choices; I am. It's just writing the code I already know I need.

What I Got Wrong

For all my early skepticism, there's one thing I'll admit I got wrong: I didn't give these tools a fair chance. When a coding agent didn't nail the output on the first try, I'd write it off and go back to doing things manually. But here's the thing — even when the first result isn't perfect, prompting again and refining is almost always faster than typing the code from scratch. I was treating a bad first attempt as proof the tool couldn't do it, when really I just needed to iterate.

I also made the mistake of thinking of these tools as static — like they'd always be roughly as capable as they were when I first tried them. That assumption aged poorly. For me, the inflection point was Claude Opus 4.5. That was the first time I consistently got results in one shot that I would have been happy writing myself. The gap between what I expected and what I got collapsed almost overnight. These models are improving on a curve that makes any fixed opinion about their capabilities outdated within months.

Letting Go of "Cheating"

The deeper shift for me wasn't about the tools at all — it was about mindset. In school, you're trained to value process above everything. The grade is supposed to reflect what you learned, not just what you produced, so any shortcut feels like a violation. That framing made sense in an academic context, but it doesn't translate cleanly to professional work.

In the real world, what matters is the result. Did the system ship? Does it work? Is it maintainable? Nobody audits whether you typed every character yourself. The value you bring as an engineer isn't your typing speed — it's your judgment, your taste in architecture, your ability to know what to build and why. AI tools don't replace any of that. They just make the execution faster.

I look back at my initial resistance now and feel a mix of appreciation and amusement. Appreciation because that stubbornness forced me to learn fundamentals I still rely on every day. Amusement because I was so convinced that using these tools meant giving something up, when in reality they gave me back the most valuable thing I have — time to think about harder problems.

Where I Stand

I'm not an AI evangelist. I don't think these tools are magic, and I don't think they replace the need to actually understand what you're building. But I do think that resisting them out of principle — out of a lingering sense that real engineers write every line by hand — is a mindset worth examining. The engineers who will build the most in the next decade won't be the ones who type the fastest. They'll be the ones who think the clearest and use every tool available to turn that thinking into working software.

If you're early in your career and on the fence, my honest advice is this: learn the fundamentals first. Build things the hard way until you understand why things work, not just how. And then let the tools accelerate you — because once you have the foundation, that's exactly what they do.

Written by

Ashesh Nepal