Effective Prompting for Code Generation
This is part 3 of the series Building with AI Tools
The quality of code you get from an AI assistant is directly proportional to the quality of your prompts. Vague instructions produce vague code. Specific, context-rich prompts produce code that’s ready to ship.
The Specificity Principle
Compare these two prompts:
Vague: “Add a search feature to my site.”
Specific: “Add client-side search using Lunr.js. Build the search index at Jekyll build time from post titles, tags, and content. Create a /search/ page with an input field and results list. Style with Bootstrap and BEM classes matching the existing theme.”
The second prompt gives the AI everything it needs: the technology choice, the data source, the output location, and the styling conventions. Less back-and-forth, better results.
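To make the difference concrete, here is roughly what the second prompt could produce for the browser side. This is a minimal sketch under stated assumptions, not the actual output: it assumes Jekyll emits a hypothetical /search.json file at build time containing the indexed post data, and hypothetical search__input / search__results BEM classes on the /search/ page.

```javascript
// Sketch of a /search/ page script using Lunr.js.
// Assumes /search.json (hypothetical, generated at Jekyll build time) is an
// array of { url, title, tags, content } objects, with tags joined into a string.
document.addEventListener('DOMContentLoaded', async () => {
  const response = await fetch('/search.json');
  const posts = await response.json();

  // Build the Lunr index from the build-time data.
  const index = lunr(function () {
    this.ref('url');
    this.field('title');
    this.field('tags');
    this.field('content');
    posts.forEach((post) => this.add(post));
  });

  const input = document.querySelector('.search__input');       // hypothetical BEM class
  const resultsList = document.querySelector('.search__results'); // hypothetical BEM class

  // Re-run the search on every keystroke and render a simple results list.
  input.addEventListener('input', () => {
    const results = input.value ? index.search(input.value) : [];
    resultsList.innerHTML = results
      .map((result) => {
        const post = posts.find((p) => p.url === result.ref);
        return `<li class="search__result"><a href="${post.url}">${post.title}</a></li>`;
      })
      .join('');
  });
});
```

The point isn't this exact code; it's that every decision in it (Lunr, the JSON data source, the page, the class names) was already settled by the prompt, so there's nothing left for the AI to guess at.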
Context Is Everything
AI assistants work best when they understand:
- What exists — the current codebase, patterns, and conventions
- What you want — the specific outcome, not just the general direction
- What to avoid — constraints, anti-patterns, and things you’ve already tried
The CLAUDE.md file handles the first point passively. Your prompts handle the other two actively.
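As a reminder of what that passive context can look like, here is a hypothetical CLAUDE.md excerpt for a site like the one in the search example above. It is invented for illustration, not this project's actual file:

```markdown
<!-- Hypothetical CLAUDE.md excerpt, for illustration only -->
## Conventions
- Jekyll static site: content in Markdown, layouts in Liquid
- Styling: Bootstrap plus BEM class names (block__element--modifier)
- Client-side JavaScript only; no bundler or framework
- New pages get their own directory with pretty permalinks (e.g. /search/)
```

With a file like this in place, prompts no longer need to restate the stack or the naming scheme; they can focus on the outcome and the constraints.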
The Iteration Loop
Even good prompts rarely produce perfect code on the first try. The real skill is in the iteration:
1. Start broad — describe the feature at a high level
2. Review the approach — check the AI's plan before it writes code
3. Refine incrementally — fix issues one at a time rather than re-prompting from scratch
4. Capture decisions — document why you chose one approach over another
This series itself was built using exactly this workflow. Each feature started as a spec, went through research and planning, then implementation — with the AI as a collaborator, not an autopilot.
Common Mistakes
- Over-prompting: Including irrelevant details that confuse the model
- Under-constraining: Not specifying conventions, leading to inconsistent code
- Ignoring context: Not telling the AI about existing code it should integrate with
- Skipping review: Accepting generated code without reading it
Up Next
In the next part, we’ll look at real examples of AI-assisted feature development — the wins, the failures, and the lessons learned along the way.