AI Coding Journey: What Works for Me in 2026

Lessons from a year of building with AI assistants

January 13, 2026 · coding · 8 min read

I’ve been experimenting with AI coding tools for about a year now.

For most of that time, the results were promising but frustrating, always falling short of what I needed. Then something clicked towards the end of 2025 and the beginning of this year.

The models got genuinely capable.

This is not a comprehensive guide. It’s not scripture. It’s what works for me after months of trial, error, and deleted projects. Your mileage will vary. But if you’re still on the fence about AI-assisted development, or struggling to make it productive, maybe my experience helps.

The tools that actually deliver

Claude Code paired with Opus 4.5 rocks.

I’ve tried the alternatives: Sonnet, GPT, Gemini earlier on. They all have their moments. But Claude Code with a Max subscription has become my daily driver for serious work. The investment pays for itself in the first week, when you ship something that would have taken you a month.

Yes, it costs money. Yes, you’ll hit rate limits. But the productivity gains are real enough that I factor the subscription into my project costs now, the same way I budget for hosting or domain names.

This may well change as new models are released and new tools ship every week. But for now, you don’t have to chase the latest and greatest tooling.

Just go with Claude Code & Opus, and get building.

Start with thinking, not typing

Don’t open your IDE first. Open a conversation.

When I have an idea, my first move is to use Claude’s chat interface to brainstorm and organise the concept into something coherent. This helps me crystallise what I actually want to build. The model acts as a sounding board, asking questions I hadn’t considered and identifying gaps in my thinking.

Once the idea has shape, I create a Project to organise it further. I take my initial brainstorm and ask Claude to create a proper specification document, for example a PRD, technical spec, or project outline, depending on what I’m building. This becomes the foundation everything else builds on.

When collaborating on the initial specification, explicitly ask Claude not to write code yet. Focus on high-level idea mapping, feature descriptions, business logic, and systems. The technical spec and implementation will come later, don’t worry.
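
A prompt along these lines works well for me (the wording is illustrative, not magic):

```text
Help me turn this brainstorm into a PRD. Don't write any code yet.
Focus on feature descriptions, business logic, and the systems
involved. Output the document in markdown format.
```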

One hard-won lesson: always specify markdown format. I don’t know why, but Opus has developed a tendency to generate Word documents if you mention “document” anywhere in your prompt. Ask for “markdown format” in your chats. Markdown is more flexible, easier to edit, works everywhere, and won’t corrupt your context with binary noise. Tell Claude to update its memory and NEVER use Word DOCX unless explicitly asked.
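
In Claude Code you can make that preference stick; something like this in your CLAUDE.md does the job (the exact wording is up to you):

```text
Always write documents in markdown (.md) format. NEVER generate
Word/DOCX files unless I explicitly ask for one.
```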

Keep your specs tight

It’s dangerously easy to let specifications balloon out of control.

You start with a simple idea. Then you think “oh, it should also do this”, or “this would be a cool feature”. Before you know it, your modest project has become an enterprise platform with authentication, role-based permissions, real-time collaboration, and an admin dashboard for analytics.

I’ve made this mistake more times than I’d like to admit. The project spec blows up, implementation becomes a nightmare, and the agent gets lost in complexity it can’t hold in its context window.

Be ruthless. Cut features. Ship something small first. You can always add complexity later; removing it is much harder.

Once the feature spec is settled, create another document: a technical architecture spec that goes deeper into the implementation.

Bootstrapping the project

When starting actual development, I use the framework’s recommended tooling to create a clean project structure: npx create-next-app@latest, rails new, or whatever the standard approach is for your stack.
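
For example, depending on your stack (project names here are placeholders):

```bash
# Next.js
npx create-next-app@latest my-app

# Rails
rails new my-app
```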

Why not let the AI do this? Because framework scaffolding tools are tested, maintained, and create exactly what developers expect. The AI might generate something that works but differs subtly from community standards, causing confusion when you look up documentation or troubleshoot issues later.

Once I have the skeleton, I immediately copy my specification documents into a docs/ folder, initialize Claude Code, and ask it to update the README based on the docs. Now the codebase itself documents what we’re building and why. Commit this initial state to git.
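
The whole setup is a minute of shell work. A rough sketch, assuming your spec files are wherever you exported them from the chat:

```bash
mkdir docs
cp ~/Downloads/prd.md ~/Downloads/tech-spec.md docs/   # your paths will differ
claude        # inside the session: run /init to generate CLAUDE.md,
              # then ask it to update the README based on docs/
git add -A
git commit -m "Initial scaffold with specs"
```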

Context is everything

Fresh sessions for each feature. This works for me so far.

Your context window is precious real estate. When you work on feature after feature in the same session, context accumulates. Old code snippets, abandoned approaches, irrelevant discussions: they all stick around, degrading the model’s ability to focus on what matters now.

For larger projects, I create an epic skeleton document listing all the features I need to build. Then for each feature, I start a completely fresh session.

Often I’ll use nearly identical prompts, just swapping out which epic I’m implementing.
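
Mine look roughly like this, with only the epic reference swapped between sessions (the file names are whatever your project actually uses):

```text
Read docs/epics.md and the specs in docs/. We're implementing
Epic 3: user onboarding. Scan the relevant parts of the codebase,
then propose a plan before writing any code.
```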

The Claude Code planning mode (Shift+Tab to switch) is invaluable here. Before any implementation, ask the agent to scan the codebase and review your docs. Let it create a plan. For complex features, I sometimes ask it to “ultrathink” to get the approach right before writing any code.

If your requirements are vague, lean into that. Claude tends to ask clarifying questions in planning mode anyway, but you can explicitly request an interview to flesh out the details, or ask it to use the AskUserQuestionTool to interview you.

Git commits as your safety net

Commit often. Compulsively, even.

Every time you reach a stable point (something works, tests pass, the UI looks right), commit it. You can let Claude write the commit messages if you want; they’re usually decent.

These frequent commits are your escape hatch. When the agent goes down a wrong path, you have a clean state to revert to. When you want to try a different approach, you don’t lose your previous work. When context gets confused after a long session, you can start fresh knowing your progress is saved.
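
The escape hatch itself is plain git. A minimal version:

```bash
git add -A && git commit -m "Checkpoint: feature X green"   # stable point

# Agent went down a wrong path? Discard tracked changes since the last
# commit (add `git clean -fd` to also remove untracked files).
git reset --hard HEAD

# Want to try a different approach without losing this one?
git switch -c experiment/alt-approach
```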

Actually write tests

I know. Tests feel like busywork when you’re vibing and the code is flowing.

Tests are the feedback mechanism that makes AI-assisted development genuinely productive. They let the model verify its own work. They catch regressions when you add new features. They’re documentation for what the code should do.

My most recent Rails project, vibe-coded in large part over two weeks, has almost 7,000 test assertions. When I ask Claude to implement something new, it can run the tests and see immediately whether the change broke anything.
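
That feedback loop is nothing exotic: it’s the standard test runner. In a Rails project like mine (the test file path below is just an example):

```bash
bin/rails test                            # run the full suite
bin/rails test test/models/user_test.rb   # or a focused subset while iterating
```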

Don’t say it’s not possible. It is. You just have to ask for it.

Trust but verify

Agents are better than they’ve ever been. They’re still not infallible.

Occasionally peek into the codebase and inspect the output. Sometimes models rely on outdated training data. Sometimes they make architectural decisions that seem fine in isolation but become problems as the codebase grows. A weird pattern or odd approach can proliferate through your project if you never catch it early.

You don’t need to review every line. But periodic sanity checks save future pain. If something looks strange, ask the model why it chose that approach. Often there’s a good reason. Sometimes there isn’t, and you catch a problem before it spreads.

Use multiple tools

If you have multiple AI subscriptions, put them all to work.

When I’m waiting for my Claude quota to refresh, I’ll use ChatGPT’s Codex for code reviews, minor bugfixes, or parallel tasks. Different models have different strengths. Sometimes a fresh perspective from another AI spots issues or suggests approaches the first one missed.

I also use non-coding AI sessions for brainstorming tangential ideas while primary development continues.

Frontend design skill

If you’re doing any frontend or user-facing work, install the Frontend Design skill. The difference in output quality is dramatic. The skill guides Claude toward distinctive, production-grade interfaces that avoid generic “AI slop” aesthetics; no more purple gradients, no more Inter font everywhere, no more cookie-cutter layouts. Instead you get intentional design choices that actually look like someone cared about them.

Find it here: Frontend Design Skill

Use your human element

Test as a human. Constantly.

AI agents are extraordinarily capable at implementing specifications. What they lack is human judgement about what “feels” right. The spacing that’s technically correct but visually awkward. The flow that follows the spec but confuses real users. The feature that works perfectly but nobody actually needs.

This is tricky territory. You can waste enormous amounts of time obsessing over minor details the agent could fix in seconds. The goal isn’t micromanagement; it’s maintaining a connection with your product as a user experiences it.

Don’t expect the final result to be great if you’ve never actually used what you’re building along the way.

Have fun!

Most importantly: have fun with this.

Explore ideas. Build games. Create tools. Ship projects. Start websites.

The barrier between “I wish this existed” and “I built this” has never been lower.

Things that would have required weeks, months, or entire teams are now within reach of a single developer with the right setup. I’ve built more in the last three months than in the previous year.

Don’t waste this potential. The window of cheap, accessible, incredibly capable AI assistance might not last forever. The models will improve, yes, but the pricing, the access, the wild-west experimentation phase: this moment is unique.

So go build something.


Small note: this article was first posted on X and LinkedIn. If you enjoyed it, consider giving it a like or share there to help others find it!
