7 Surprising Ways AI Is Supercharging My Coding Workflow

Coding with AI Isn’t Magic. It’s Mostly Messy.

Getting started with AI coding tools feels like trying to vibe with a DJ who’s playing five different songs at once. You keep hearing about people doing full-stack builds in an afternoon with nothing but vibes and a chatbot—but when you try it, it feels like you’re just stringing together mismatched code and praying it compiles.

This is part of an ongoing exploration—if you want to see where I first started messing with these tools, check out my earlier post on picking up AI tooling.

At first, I thought I was doing something wrong. Everyone made it sound so smooth. What am I missing? Why isn’t this working for me? The tools seemed powerful, almost magical, but only if you already knew how to wield them. I wasn’t sure where to start, what to ask, or even what was possible.

What I found instead was a foggy maze of half-answers, hallucinations, and trial-by-fire.

I’ve been in that mess. Still am, kind of. But I’m starting to find a way through it. Here’s what I’ve learned so far.


YouTube Is Mostly Noise. Editors Are Still a Toss-Up

First off, trying to learn this stuff from YouTube is like asking TikTok for investing advice. Everyone’s optimizing for the algo, not your workflow. So I’ve been testing tools myself.

Cursor has been solid so far—AI integration feels native, and the built-in docs are decent. That said, Zed AI has been surprising. It’s fast, really fast. Built with Rust and GPU acceleration, it feels responsive in a way few editors do. And since it supports Vim motions, I’m not constantly fighting muscle memory just to try something new.

Zed might overtake Visual Studio Code for me. I only started using VS Code because of its debugger support and its nice Git interface. Otherwise, I would have stuck with Vim and probably upgraded along the Neovim path.

That said, I think Cursor is doing more for agentic coding, giving us the tools to assign context and docs through configuration. It might be the way.


The Workflow Isn’t the Same—You Are Now a Spec Writer

If you’re thinking AI is going to take your half-baked idea and turn it into clean, shippable code, I’ve got bad news. You’re not outsourcing the thinking. You’re just changing the format.

Vibe coding doesn’t work unless you give the AI something to vibe off of. You need specs. Rules. Context. Constraints. All the stuff we usually hate doing is now your job, if you want anything usable out of your agent.

You’re no longer just a developer. You’re closer to a systems thinker, or maybe a project manager who can code. You’ve got to keep the whole shape of the thing in your head, define the edges, and then walk the AI through each step like a temp on their first day.

You’re:

  • A spec author, writing down the details you used to keep in your head
  • A test writer, because you can’t assume anything just “works”
  • A context engineer, always pruning and feeding just the right info
  • A code reviewer, catching weird guesses and hallucinated imports
  • A debugger, not just for your code—but for the AI’s output too

It’s a shift. It’s more overhead. But it’s also more leverage—if you can wield it.


Write Small. Test Often. Feed It Like a Junior Dev.

The smaller the context, the better the results. If I try to have the agent build an entire module in one go, it collapses under the weight of its own optimism. Like a junior dev that’s really confident but has no idea what an edge case is.

So I break things into atomic tasks:

  • One file or function at a time
  • With clear constraints
  • And tests—always tests

The tests give the agent something to validate against, and they give me a sanity check before I glue anything together.
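
To make that concrete, here’s the shape of one atomic task. This is a hypothetical example, not from a real project: one small function, the constraints spelled out up front, and a test the agent (and I) can validate against.

```python
# Hypothetical atomic task: one function, explicit constraints, one test.
# Constraints handed to the agent: lowercase output, hyphens for whitespace,
# strip anything that isn't alphanumeric or a hyphen, collapse repeats.
import re


def slugify(title: str) -> str:
    """Turn an arbitrary title into a URL-safe slug."""
    slug = title.strip().lower()
    slug = re.sub(r"\s+", "-", slug)        # whitespace -> hyphens
    slug = re.sub(r"[^a-z0-9-]", "", slug)  # drop anything unsafe
    slug = re.sub(r"-{2,}", "-", slug)      # collapse repeated hyphens
    return slug.strip("-")


def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  Spaces   everywhere  ") == "spaces-everywhere"
    assert slugify("Already-a-slug") == "already-a-slug"
```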

Make the AI Write Specs

Something I feel like we shouldn’t need to do, but for some reason do: make the LLM write the specs and docs for you. I prompt the basis for the docs/specs, then it gives them the structure an LLM wants for explicit context. Why it doesn’t do this with the initial prompt, hell if I know. But here we are, having AI write specs and docs for itself because it’s incapable of taking a prompt and running it through a multi-step decision tree on its own (but it really can? wtf??).

To be fair, I usually start with a rough draft prompt, and the LLM will shape it into a markdown doc. I’ll review, tweak, and sometimes prompt again to tighten it into something more like a real spec. Once that’s done, I can reuse it as repeatable context. It acts like an anchor for the AI—helping keep its responses focused, consistent, and (usually) less likely to rewrite things it already understood.
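
For what it’s worth, here’s a minimal sketch of what “reuse it as repeatable context” looks like, assuming the reviewed spec lives as a markdown file in the repo. The path and the task below are made up for illustration:

```python
# Minimal sketch: reuse a reviewed, AI-drafted spec as repeatable context.
# The spec path and the example task are hypothetical.
from pathlib import Path

SPEC_PATH = Path("docs/specs/slugify.md")


def build_prompt(task: str) -> str:
    """Prepend the reviewed spec so the agent stays anchored to it."""
    spec = SPEC_PATH.read_text()
    return (
        f"{spec}\n\n---\n\n"
        f"Task: {task}\n"
        "Follow the spec above. Don't rewrite behavior it already defines."
    )


if __name__ == "__main__":
    print(build_prompt("Add an optional max_length parameter to slugify()."))
```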


Tools That Actually Help (Sometimes)

A few things that have made a difference:

  • Repo Prompt: Kind of a meta tool. You select which files/folders matter, then generate a structured prompt to feed Claude, Cursor, or Zed AI. It helps aim the context laser a little better (see the sketch after this list).
  • Claude 3.7 Sonnet: My go-to LLM for now. Seems to follow logic better than some others.
  • Context7 + Cursor Docs: Context7 runs an MCP server and feeds helpful docs into the mix. Cursor’s built-in docs are hit-or-miss, but decent when they work.
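
To show the idea behind that first bullet, here’s a toy sketch of the select-files-then-build-a-structured-prompt concept. To be clear, this is my own illustration, not Repo Prompt’s actual code or output format:

```python
# Toy sketch of the "pick the files that matter, build a structured prompt"
# idea behind tools like Repo Prompt. Not their code or output format.
from pathlib import Path


def build_repo_prompt(files: list[str], instructions: str) -> str:
    """Bundle only the selected files into one structured prompt."""
    sections = []
    for name in files:
        body = Path(name).read_text()
        sections.append(f"<file path={name!r}>\n{body}\n</file>")
    sections.append(f"<instructions>\n{instructions}\n</instructions>")
    return "\n\n".join(sections)


# Hypothetical paths; point these at whatever files actually matter.
prompt = build_repo_prompt(
    ["src/slugify.py", "tests/test_slugify.py"],
    "Refactor slugify() per the spec. Keep all existing tests passing.",
)
```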

And yes, as I mentioned, I have the agent write docs for me. Not because it’s perfect, but because it gets me 80% there. I review, tweak, then reuse those docs as context next time around. It works shockingly well.


Don’t Prompt for Every Damn Thing

Here’s the trap: once you get comfortable prompting, it’s tempting to use the agent for everything, even simple things like rewriting a function signature.

Don’t. Just because you can use an agent for everything doesn’t mean you should.

Rely on it too much, and you’ll end up forgetting how to code, just like we all forgot phone numbers the second they were saved in our contacts. The muscle fades. The instincts dull. Prompting should augment your skills, not replace them.

You’re still a developer. Use the tool when it’s faster or better, not just because it’s there.

Sometimes it’s quicker to just… code.


I’m Not There Yet, but I’m Getting Closer

I’m still fumbling through this like everyone else. Haven’t found “the way,” and honestly, I’m not sure there is one. Some days it feels like I’m making progress. Other days, it feels like I’m just rearranging the same blocks and hoping they click differently.

Here’s what I’m holding onto for now:

  • Build small
  • Be specific
  • Test everything
  • Document aggressively (even if the AI writes the first draft)

I don’t think AI is replacing devs anytime soon. But devs who figure out how to collaborate with AI, who build muscle around prompts, structure, and feedback loops? They might just be building something entirely new. I’m not there yet, but I’m still trying.

