Looking for vibe-coding guides (best practices, etc.)
Decided I wanted to try vibe-coding some stuff. It's been a very long time since I coded anything, and it was all very amateurish, but as the tooling has become better I wanted to give some silly ideas a shot. I got tired of writing about random teaching- and AI-related stuff and decided I wanted to build things instead, to get more acquainted with agentic tooling.
I have gathered some sparse links here and there, but I was hoping the community here may know of some more "definitive" guides. My plan is to use Claude Code, but if people want to share guides for other coding agents (Codex, etc.) please feel free.
Very interested in iOS app development if that helps, but I feel that best practices can likely look very similar across platforms and tools.
In the underwater survival video game Subnautica, you can eventually get access to the PRAWN Suit, a mecha that lets you go far deeper into the ocean than you could before. When you first construct it, the in-game computer tells you that it's normal to feel a sense of limitless power when first putting the suit on, and that the months of training suit operators usually get is not to learn how to pilot the thing, but to understand that you're not invincible in it. Claude Code works the same way.
Boris Cherny, the creator of Claude Code, was on a podcast the other day and said that Claude Code had "largely solved coding". He's not wrong. It has. The code Claude writes is better than the code I could write, in a fraction of the time. But that doesn't mean it's good, or better than me, at software engineering. Software engineering isn't just telling the computer what to do; it encompasses design, figuring out what needs to be done or what you'd like to do, thinking about problems abstractly, laying out possible ways to solve those problems, learning why some attempts don't work, and taking all that knowledge with you to the next problem. This is largely conceptual, and it's what STEM education tries to impart.
If you sit down with Claude and tell it to make an app, you probably won't have a great time. Like every tool, you need to know how to use it properly, and in the case of an agentic tool that writes code for you, you need to know (in broad terms) how to solve the problems you present to the agent.
Specifics here: plan, plan, plan. I mean this literally. I have great success taking paper and sketching out my app: figuring out what should go where, what features it should have, and, most notably, why it should have them. Claude is best when it thinks like you do, and you have to get it to that place of understanding. If possible, and you know enough about the platform to do this yourself, write the plan, pitch and CLAUDE.md files yourself, in as much detail as you can, including your reasoning, and of course you have to (name drop) show your work.
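To make that concrete, here's a minimal sketch of what a project-level `CLAUDE.md` might capture. The app, features, and reasoning below are invented purely for illustration:

```markdown
# Project: recipe-box (hypothetical iOS app)

## Goal
A single-screen recipe list with search. No folders: early paper sketching
showed search matters more than organisation, and recording that reasoning
here keeps Claude from "helpfully" adding a folder feature later.

## Features
- Add/edit/delete recipes
- Full-text search across titles and ingredients

## Rules for Claude
- Ask before adding any feature not listed above.
```

The point isn't the exact headings; it's that the file records the "why" alongside the "what", so the agent's decisions stay in sync with yours.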
Claude won't do a good job at graphic design most of the time since that can't really be done with just text, so use images to your advantage. Wireframe and mock up the app you want, feed it screenshots, inspiration. Remember: To work correctly, your thought processes must be in sync with the "thought processes" of the LLM.
If you encounter nasty bugs, I've had great success with asking Claude to tell me how I can help. It'll then put in debug logging, which might mean little to you but gives it proper context, especially because it can't access things like a browser console or a debugger in most cases.
Documentation is of course your friend. Ideally, write the docs yourself, but if that's not practical, ask Claude to use its memory system to save insights about quirks in the codebase or stubborn bugs.
On a new platform this might be difficult, but it's generally best practice to at least understand the code it generates. Not line by line, necessarily, but roughly have an idea of what parts are where and what they do. Again, being in sync with your agent works both ways.
If you don't have much experience in software engineering as a whole, I suggest reading about it. You don't need to have a CS degree, but reading the basics and maybe playing games that gamify programming (like Shenzhen I/O, EXAPUNKS and TIS-100 by Zachtronics) will help you sharpen those logic skills. Yes, no programming language will work like those games do exactly, but they will help in the conceptualisation of a problem, how to split it up, and how to (generally) solve it.
And please, don't jump into a large project. Make a wordle clone first or something, just to dip your toes in. Start slow.
~~PRAWN Suit~~ Claude Code operators receive weeks of training to counteract the feeling of invincibility that comes with the tool. You will have to make do with self-discipline.

Thanks for this. I was going to start something easy and just play around with it from there. Nothing too serious.
I’ve been getting into it more too, with Codex since I already had a ChatGPT Plus sub which comes with a limited number of weekly tokens anyway. I’m not paying for more, so when I run out, the vibe’s over until Monday rolls around again.
Sorry I don’t have any links to guides. Most of what I’ve learned has been firsthand. I can share some basic “what works and doesn’t work for me” notes here if they’re helpful.
Always be mindful of your context window. Auto-compaction has gotten better but can still cause things to go sideways. I suggest using a separate task/thread for each feature you work on, and aim to complete that work before the context fills up.
Give the AI a formal spec (it can write that spec itself, btw). This is a Markdown doc in the project root that details the goals and implementation details of the project. Declare in `CLAUDE.md` that this doc is the authoritative source of truth for all requirements, and that Claude must (a) always read it before beginning work; (b) never implement functionality that isn't described there without your express approval in the chat; and (c) always keep the document updated after completing work so it remains current. If your code doesn't exist yet, tell Claude to write a separate plan document based on that spec, and refer to that plan for the initial buildout.

Don't rely on the AI to verify its own work. In my projects I usually want strict TypeScript typing, tests passing, and style guide adherence. This is a guard against the project drifting into slop. The AI will do great at implementing a feature but drop the ball on these "extra" things, even if you explicitly say they're required. Better to encapsulate them in a more traditional, non-AI build pipeline that throws actionable errors if anything isn't up to snuff, and just say in your `CLAUDE.md`: "Never consider any work complete until a full build completes successfully." The wordier and more complex your instructions are in that file, the less effectively Claude will follow them, so keep it simple.

It's awesome that the AI can keep the spec doc updated as it makes changes, but it will tend to do so in ways that refer to old implementation details that don't exist anymore. It'll write phrases like "feature must do X instead of Y" where Y is now out of date and irrelevant. The more of this junk pollutes your spec, the less effective Claude will be. Periodically tell it to clean itself up, so the spec reflects the codebase as it is, without counterfactuals or obsolete references.
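One way to sketch that non-AI gate in a TypeScript project, as `package.json` scripts. The specific tools here (ESLint, Vitest) are my assumption; the thread only calls for strict typing, tests, and style checks, so swap in whatever your stack uses:

```json
{
  "scripts": {
    "typecheck": "tsc --noEmit",
    "lint": "eslint . --max-warnings 0",
    "test": "vitest run",
    "build": "npm run typecheck && npm run lint && npm run test"
  }
}
```

Then the single instruction in `CLAUDE.md` ("never consider work complete until `npm run build` succeeds") points at a gate the AI can't talk its way past: it either exits zero or it doesn't.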
I also periodically prompt it to do a full-project audit for dead code, irrelevant comments, inaccurately named components or files, etc. because it’s not always great at cleaning up after itself. Regular housekeeping can help a lot.
This is what people forget about AI. I've had some fun getting Claude to write its own book by having it draft a whole bunch of planning documents for how the book will work, both plot-wise and in terms of how we'll make sure we're writing properly. It's certainly not 100% smooth, and I suspect that the book won't be anything incredible, but it's still very interesting.
It knows more about planning these things than anyone else in the world, right? So getting it to properly implement what it knows and coordinate it all in a way that allows a human to prompt it to make its own stories with its own decisions is a neat learning process. Sometimes it surprises me with what it does, too.
I'm sure it's different with coding, but this has been a fun problem to "solve" with this book. Claude has given itself objective criteria by which it evaluates each section and each chapter to see if it has written the section properly or achieved its goals with the chapter. I'm sure there's some degree of hallucination and whatnot, but it's been nice. It's going much better now that they let Claude see how many words it's writing so that it doesn't promise 1000 words and only write 300.
I haven't really read much of the planning documents or the chapters it has drafted so far, but I'm looking forward to seeing what it comes up with. The premise is actually very interesting, though most likely too convoluted, but it's the way Claude "wanted" it to be.
Use version control.
Nobody has yet mentioned devcontainers. I would strongly recommend isolating Claude to one, and following as best you can the principle of least access for cloud services, file systems, etc.
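For example, a stripped-down `.devcontainer/devcontainer.json` along these lines (the name and image are illustrative; devcontainer.json is JSONC, so comments are allowed):

```json
{
  "name": "claude-sandbox",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  // Least access: no extra mounts, no cloud credentials passed in.
  "mounts": [],
  "remoteEnv": {}
}
```

The agent then only sees the workspace folder and whatever you deliberately add, which limits the blast radius when it does something surprising.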
Maintaining a strong separation between dev and live environments is important once a project is live. Claude will, at times, try to burn everything down and start over. It’s going to be a tough day if that happens to your live data.
Git is indispensable, and so is some kind of hosted git solution: GitHub, GitLab, whatever. Continuous integration, linting, and test suites should be guaranteed to run.
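As a sketch, a GitHub Actions workflow that guarantees lint and tests run on every push and pull request. Node and the npm script names are assumptions here; substitute your own toolchain:

```yaml
# .github/workflows/ci.yml
name: ci
on: [push, pull_request]
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
      - run: npm ci
      - run: npm run lint
      - run: npm test
```

With branch protection requiring this check, nothing the agent writes can land unverified, no matter how confidently it declares the work done.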
It can be easy to fall into a sort of flow state where you are asking Claude to do things and you aren’t checking code. Things can derail a bit during these periods. I find using pull requests to be a good way of checking this behaviour. Forcing yourself to do code review pays dividends.
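The review habit can be wired into the workflow itself: one branch per agent task, then a forced stop to read the diff before merging. A self-contained sketch (the repo path, file, and branch names are invented; in a real project the review would happen on a hosted PR, but the local flow is the same):

```shell
set -e
# Throwaway demo repo so this can be run as-is.
repo="${TMPDIR:-/tmp}/vibe-demo-repo"
rm -rf "$repo" && mkdir -p "$repo" && cd "$repo"
git init -q -b main
git config user.email "you@example.com"
git config user.name "you"
git commit -q --allow-empty -m "initial commit"

# Let Claude work on its own branch, one feature per branch.
git checkout -q -b feature/wordle-grid
echo "// grid logic here" > grid.ts
git add grid.ts && git commit -q -m "add wordle grid"

# Forced review point: read everything the agent changed before merging.
git diff --stat main...feature/wordle-grid

# Merge with --no-ff so the reviewed unit stays visible in history.
git checkout -q main
git merge -q --no-ff -m "merge: wordle grid" feature/wordle-grid
```

Even solo, opening a PR against yourself and reading it like someone else's code is the cheapest way to break the "flow state" autopilot.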
Always use the most common version of a library or service. The better known the system, the better your experience will be. This applies to pretty much everything: Claude can read documentation live, but it shines when it has already read the documentation in its training and has an intuitive grasp of the material.
First, I don't vibecode in the sense of "never look at the code", but, I do use Claude Code a lot. I found this article enlightening: https://addyosmani.com/blog/ai-coding-workflow/ It's a bit long, but you can skim headings, then cherry-pick what to read in depth. My own personal notes and tips:
`~/.claude/CLAUDE.md` for cross-project instructions, `~/projectdir/CLAUDE.md` for project-specific stuff. The former is much bigger, because it makes sense to be consistent in all your work across all projects. In `~/.claude/CLAUDE.md`, I have three levels of instructions: 1) global rules that apply to every task; 2) coding-specific rules; 3) language-specific rules.

You can `/clear` a conversation, but I prefer to exit out (of the Claude Code CLI) and restart. Each such conversation gets an id (which you can `/rename` if you want), in case you want to resume later.

I try to keep the task size/challenge medium-sized at most. I'm not too ambitious with how much I challenge it in one undertaking, half because I don't trust it yet, and half because I want to understand its output.

If you want to get something running quickly and with minimal effort, Lovable is nice. You can get something similar (and likely superior in some ways) by using Claude and hooking it up to Supabase and Chrome directly. The deep integration makes it easier for the agent to explore issues and test stuff out to validate fixes. Still, for larger projects it tends to mess things up...