Coding with Jesse

Coding with LLMs can still be fun

Do you love reviewing AI-generated code? Do you get a tickle of pure joy from finding and criticizing the mistakes and problems in hallucinatory slop? Me neither.

You know what I do love? I love pouring my creativity and insight and empathy into a project. I love designing architectures and solutions that actually make things better for users. I love getting in the flow state, cranking away at a problem, building brick upon brick until the creation comes to life.

If you're not careful, AI tools will have you spending all your time doing code review. It's very hard to get into a flow state when you're waiting on your agents and reviewing what they do. Fortunately, I've found a way to use LLMs to do a lot more of what I love.

Curiosity and excitement

Ever since I was a child, I've wondered how humans manage to get a computer to do so many things. It felt like magic, and I wanted to be that magician. Forty years later, that feeling still guides me through my software development career.

While LLMs stir that curiosity and excitement, they also threaten to take it away. If you're like me, you have very mixed feelings about it all: excited and worried about the potential in equal measure.

I've found LLMs to be very helpful ever since Copilot came out in 2021, and they have definitely made me more productive. My views on coding with LLMs haven't changed much along the way. Five years later, I wanted to share with you how I'm using LLMs today, and how they're making my job more fun than ever.

A workflow that lets you work in flow

The secret is to find a workflow that works for you, one that keeps you engaged and in a state of flow. The following workflow works really well for me. I use it every day for almost every multi-step coding task I work on. It basically starts with the following context:

When the user gives you a task specification:

1. Explore the codebase to find relevant files and patterns
2. Break the task into a small number of steps. Each step should include:
    a. a brief, high-level summary of the step
    b. a list of specific, relevant files
    c. quotes from the specification to be specific about what each step is for
3. Present the steps and get out of the way.

When the user says "done", "how's this", etc.:

1. Run git status and git diff to see what they changed
2. Review the changes and identify any potential problems
3. Compare changes against the steps and identify which steps are complete
4. Present a revised set of steps and get out of the user's way.

Important:
- Be concise and direct, don't give the user a lot to read
- Allow the user to make all technical, architectural and engineering decisions
- Present possible solutions but don't make any assumptions
- Don't write code - just guide
- Be specific about files and line numbers
- Trust them to figure it out
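The review phase of this prompt leans on nothing more than plain git. As a rough sketch of what the agent runs when you say "done" (fabricating a throwaway repo here so the example is self-contained):

```shell
# Set up a disposable repo with one committed file (stand-in for your project)
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
echo "step one" > notes.txt
git add notes.txt
git commit -qm "initial"

# Simulate the user's uncommitted work since the last review
echo "step two" >> notes.txt

# What the agent runs to see what you changed
git status --short   # lists notes.txt as modified
git diff             # shows the added "step two" line
```

The agent then compares that diff against the step list, rather than against code it generated itself, which is why everything it needs is already in context.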

You can paste this into a new chat, or set up a "custom agent" or "skill" if you want to be fancy. I use a Claude Code skill for this, but it'll work with any LLM coding tool. Ideally it'll have access to your codebase, so it can work as a search engine to point you to the right place. I find this really speeds me up, especially on new codebases, or code I haven't touched in a long time.
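If you do go the Claude Code skill route, a skill is just a SKILL.md file with YAML frontmatter. As a hedged sketch (the skill name `guide` is made up, and the exact fields may change, so check the current Agent Skills docs), a file at `.claude/skills/guide/SKILL.md` might look like:

```markdown
---
name: guide
description: Break the user's task into steps, review their changes, and guide without writing code
---

When the user gives a task specification, explore the codebase, then present
a small number of steps with specific files and quotes from the spec, and get
out of the way. When the user says "done", run git status and git diff,
review the changes against the steps, and present a revised set of steps.
Don't write code - just guide.
```

The body is simply the workflow prompt from above; nothing about it is specific to one tool.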

With this workflow, I spend hardly any time waiting on the LLM to generate content. I find this approach uses the LLM for what it does best. Hallucinations are almost non-existent, because everything it needs is available in the present context. There's very little sycophancy, and not much back-and-forth chatting at all.

I specifically don't want to be reading through pages of markdown. If it makes a mistake at a high level, I can easily spot and correct it, or just choose to do something different.

Often I'll still decide to ask the LLM to write code for me, but I keep it limited to one small step in this process at a time. I sometimes have it scaffold out some empty modules while I work on a different step. Often the steps are simple or mechanical, so it's actually easier to have the LLM complete them for me. I'll use it where LLM-generated code can speed me up, and where I'm not wasting time babysitting it or directing it to do better. I'm in full control of choosing which parts I want to work on, and which I don't. Either way, I'm staying fully engaged, and I know what's happening at any given moment. It's like wearing a jetpack that I'm steering, rather than managing a team of minions.

This workflow is easy to modify. You can change it to suit your preferences, and add rules that make it work better for you. You can do less of the coding if you want, or even use it as a planning step in your vibe coding.

It's also dead simple to understand, with nothing special you have to learn. You're not messing around with prompt engineering. There are no MCP servers to install, no special plugins. It works with all models, even cheap or local models. You don't have to keep up with the latest techbro videos to make a workflow that works for you.

Find your own workflow

I invite you to find a workflow that empowers you to do your best work while staying out of your way. Stay engaged, lose track of time, reduce friction, and solve problems.

Please share what works for you, and if you have any suggestions to make coding with LLMs even more fun. I hope we can all learn from and inspire each other to make coding more fun than ever.

Published on January 17th, 2026. © Jesse Skinner

About the author

Jesse Skinner

I'm Jesse Skinner. I'm a self-employed web developer. I love writing code, and writing about writing code. Sometimes I make videos too. Feel free to email me if you have any questions or comments, or just want to share your story or something you're excited about.