Prompt Engineering for GitHub Repositories
Tired of your Coding LLM creating technical debt? Learn advanced prompt engineering techniques for GitHub repositories to guide tools like Copilot effectively. This post details how to create foundational documents (PRD, Architecture, Style Guide) using specific personas, ensuring your LLM generates high-quality, production-ready code consistently. Stop fighting your AI assistant and start building better software, faster.
This post is aimed more at the professional software developer than at the curious human who is having fun with Vibe Coding. The approach I'm describing here is for individuals who know what they want to build, but want to get better at getting the Coding LLM to do what they want! Regardless, if this sounds like you, please read on!
I've been working quite a bit lately with:
- VS Code
- GitHub Copilot
- GitHub Repositories
for several different kinds of projects.
Tools
In this case, I'm focusing on VS Code and GitHub Copilot, since that is the Coding LLM that I pay for. However, ALL of this applies to Cursor, Lovable, or whatever your favorite tools are.
The only thing that is specific to GitHub Copilot is the `copilot-instructions.md` file, covered below.
I've now spent hundreds of hours, or maybe even thousands, working with Copilot's various LLMs (e.g., Claude Sonnet 3.5 and 3.7, GPT-4o, and recently Gemini 2.5 Pro) and have learned a lot about how to get the most out of them. This post is really just to summarize some of the key takeaways that might help you get better results.
tl;dr: I heard someone once say something like "LLMs are basically masterful role players. They don't have a mind, a personality, or a soul. They simply impersonate the role you give them - and they are unbelievably excellent at that." That basically changed everything for me. I go into more detail below, but basically you add prompts like:
- "Acting as a seasoned Product Manager who is a perfectionist about best practices, has a security-first mindset, and is deeply familiar with this product space - are we missing any features?"
- "Acting as a principal-level software architect and platform engineer who is a perfectionist about best practices, and eliminating Technical Debt, please review…"
- "Acting as a seasoned senior developer bent on creating high-quality, modular, well-written, best-practice code and reducing technical debt, and without breaking any existing functionality, please add the following feature…"
It will often even respond with things like "When I put on my Product Manager hat I see…", etc. It makes a WORLD of difference. Feel free to jump to the Summary.
You can use the prompts as-is below for your project, and just tune as needed. Copy and paste!
Let me go into some more detail about a specific path, or technique that I think is really effective, particularly for GitHub Repositories.
GitHub Repository Structure
When you create a new GitHub repository, you will want to make sure that you have a good structure in place. For a public/open-source project, you will often want several standard files in place too, for example:
```
.
├── .github
│   └── copilot-instructions.md
├── docs
│   ├── ARCHITECTURE.md
│   └── PRD.md
├── README.md
├── CONTRIBUTING.md
├── CODE_OF_CONDUCT.md
└── STYLE_GUIDE.md
```
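If you want to bootstrap this layout quickly, a tiny script can stamp out the skeleton. This is just a sketch of the tree shown above; `my-repo` is a placeholder name, and you would pass your own path as the first argument:

```shell
#!/usr/bin/env sh
# Scaffold the repository skeleton described above.
# Usage: sh scaffold.sh [target-dir]   (defaults to "my-repo")
set -eu

repo="${1:-my-repo}"

mkdir -p "$repo/.github" "$repo/docs"
touch "$repo/.github/copilot-instructions.md"
touch "$repo/docs/PRD.md" "$repo/docs/ARCHITECTURE.md"
touch "$repo/README.md" "$repo/CONTRIBUTING.md" \
      "$repo/CODE_OF_CONDUCT.md" "$repo/STYLE_GUIDE.md"

echo "Skeleton created under $repo/"
```

The files start out empty; the rest of this post is about getting your LLM to fill them in well.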
As if we're building a foundation, here is where I start: with a Product Requirements Document.
Product Requirements Document (PRD)
Regardless of what kind of repo it is (e.g., an app, an Ansible playbook, a Docker Compose setup, a library, etc.), you are generally organizing a "product" of one form or another. In this document, you describe what you want to exist. Some of it may already exist, or you could be starting from scratch.
This is significant, and really the center point, or anchor, of your entire LLM experience.
LLM Context Windows vs PRDs
As you might know, LLMs have a limited "memory". So, tools like GitHub Copilot or Cursor seem to use Retrieval Augmented Generation (RAG) to help them generate a useful response. That is, before they answer your prompt, they will go look up any relevant data that might help them give a better answer.
LLM context windows are typically in the hundreds of kilobytes to perhaps a megabyte or two. What this means practically is that the LLM cannot really "remember" much. So, when you start a new chat session, or if it starts getting forgetful, you can point it to this PRD, which gives it the "context" of the entire thing you are trying to build. It's a very concise and predictable way to get your Coding LLM back up to speed with what you are working on. So, it is WELL worth your time to really refine your PRD once, and get it near "perfect", if possible!
Here's how I start with my LLM:
PHASE 1: "Acting as a product manager…"
"Acting as a seasoned Product Manager who is a perfectionist about best practices, maintains a strict security-first mindset, and is deeply familiar with this product space - please write a comprehensive Product Requirements Document (PRD) in `/docs/PRD.md` based on the following product ideas. The PRD must be exceptionally clear, unambiguous, well-structured, and detailed enough to guide the entire team (developers, architects, testers, LLMs) effectively, minimizing assumptions. Use the following input, and ask clarifying questions if anything is unclear before proceeding:"
And then I just completely brain dump every possible thing that is in my head. It's complete chaos. I usually try to put bullet points (your LLM reads your input like Markdown), but it's chaotic. Just as I have another thought, I add it. Then, I submit the prompt above with my brain dump under it - and it will take all of that chaos and create a pretty good quality PRD!
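For illustration, a brain dump under that prompt might look something like this (an entirely hypothetical set of product notes - yours will be messier, and that's fine):

```markdown
- web app for tracking home lab inventory
- needs login... maybe GitHub OAuth?
- must work on mobile
- dark mode!!
- export to CSV
- oh - also needs an audit log of who changed what
- self-hosted, runs via Docker Compose
```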
Well, it will produce something half-way decent. I then will go to:
PHASE 2: "Acting as an architect/tech lead…"
With that new `PRD.md` open, I then ask my LLM:
"Acting as a seasoned, principal-level software architect and tech lead who is obsessed with best practices, eliminating Technical Debt, and ensuring security-first design, please review the attached `PRD.md`. Identify any gaps, ambiguities, or areas lacking sufficient detail, particularly regarding our goals for extremely high-quality, production-ready, resilient, scalable, secure, modular, reusable, well-documented, and thoroughly unit-tested code. Propose specific improvements and additional sections to ensure the PRD is crystal clear for onboarding new team members and explicitly mandates the avoidance of Technical Debt, leaving no room for assumptions about our comprehensive quality standards. Ask clarifying questions if needed."
And then the LLM goes off and makes more adjustments. Next, I will go to:
PHASE 3: "Acting as a Coding LLM and Prompt Engineer…"
With that same, now updated `PRD.md` open, I then ask my LLM:
"Acting as an expert Coding LLM and Prompt Engineer, specializing in interpreting requirements for high-quality code generation, please review the attached `PRD.md`. Evaluate its clarity, completeness, and effectiveness from the perspective of an LLM tasked with generating code based solely on this document. Are there any ambiguities, missing details, or potential misinterpretations that could prevent the generation of extremely high-quality, production-ready, resilient, scalable, secure, modular, reusable, well-documented, and thoroughly unit-tested code? Does the PRD sufficiently mandate adherence to best practices and the avoidance of Technical Debt? Please suggest specific improvements or additions to the PRD required to ensure an LLM can generate code meeting these comprehensive quality standards without making assumptions. Ask clarifying questions if needed."
That will then update the `PRD.md` again. At this point we have a pretty solid `PRD.md` file. I will then go to:
PHASE 4: "Final Walkthrough…"
With that same, now updated `PRD.md` open, I then ask my LLM:
Okay, let's perform a final, rigorous walkthrough of the `PRD.md` from multiple critical perspectives to ensure it's truly ready.
- First, acting as the seasoned Product Manager (perfectionist, security-first, deep domain knowledge): Review the entire PRD. Are there any missing requirements, user needs, edge cases, or ambiguities regarding the product vision or goals? Is it crystal clear what needs to be built from a product standpoint?
- Second, acting as the seasoned, principal-level Architect/Tech Lead (obsessed with best practices, no Technical Debt, security-first): Does this PRD provide a rock-solid foundation for building a system that meets our extremely high standards for quality, production-readiness, resilience, scalability, security, modularity, reusability, documentation, and testability? Are there any architectural, technical, or non-functional requirement ambiguities that could lead to poor design choices or Technical Debt?
- Third, acting as a dedicated Security Architect (security-first mindset, expert in threat modeling and secure design): Review the PRD specifically for security implications. Are security requirements clearly defined? Are potential attack vectors considered? Are compliance needs addressed? Are there ambiguities that could lead to insecure implementation?
- Finally, acting as the expert Coding LLM/Prompt Engineer: Based only on this PRD, can an LLM confidently generate the entire application adhering strictly to all specified quality and security attributes without needing to make assumptions? Are there any remaining points of confusion, missing details, or instructions that could be misinterpreted by a code-generating LLM, especially regarding security requirements?
Please synthesize the findings from all perspectives. Directly make any final, necessary revisions within the `PRD.md` to address identified issues. Provide a concluding summary confirming if the PRD is now robust and truly ready for development, or if any significant concerns remain. Ask clarifying questions if needed.
When this is done, the LLM is typically QUITE happy with itself commenting that we have a "very strong" PRD and that it is "extremely clear" and that we are "extremely well positioned", etc. That is a good sign!
OK, now our `PRD.md` can be our North Star, our reference point for what we are building.
Re-aligning the Coding LLM
If you're working in a GitHub Copilot session, for example, and the LLM is going off the rails, it likely just forgot, because it has a tiny brain (aka a tiny Context Window). So, simply say: "It looks like you got confused. Please see the attached `PRD.md` to re-align yourself with what we are trying to do. Please ask any questions you have; otherwise, please continue." That will usually get it back on track!
GitHub Copilot Instructions
Recently, GitHub announced that there are settings in VS Code where you can point to a `.github/copilot-instructions.md` file and Copilot will automatically include those instructions in every prompt. That is a pretty great feature! But what should you include in there?
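As a quick setup aside before getting to the content: at the time of writing, the VS Code setting that enables this behavior looks roughly like the snippet below in your `settings.json` (VS Code's settings file accepts JSONC-style comments). Setting names can change between releases, so verify against the current Copilot documentation:

```json
{
  // Have Copilot Chat automatically include .github/copilot-instructions.md
  "github.copilot.chat.codeGeneration.useInstructionFiles": true
}
```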
"Act as a Prompt Engineer!"
Ironically, or maybe in an Inception sort of way, you can and should use your Coding LLM to produce this file. Here is an example prompt:
"Acting as an expert LLM Prompt Engineer specializing in crafting instructions for code generation models like GitHub Copilot, please write a comprehensive and highly effective `.github/copilot-instructions.md` file. This file's primary purpose is to guide GitHub Copilot to consistently generate code, configuration, and documentation that meets our extremely high standards: production-ready, secure, resilient, scalable, modular, reusable, thoroughly unit-tested, well-documented, adhering strictly to best practices, and aggressively avoiding Technical Debt. Leverage your expertise to structure and phrase these instructions for maximum clarity and impact on the target LLM. Ensure the instructions explicitly mandate continuous reference to `docs/PRD.md` as the definitive source of truth for requirements and scope. Include guidance on asking for clarification when requirements are ambiguous or incomplete. What other instructions or formatting considerations are essential for maximizing the quality, consistency, and reliability of Copilot's output in this context? Ask clarifying questions if needed before generating the file."
That should get you in the ballpark of a good-quality `copilot-instructions.md` file. You can then keep iterating over it, again saying "As an expert LLM Prompt Engineer…" and asking for your changes.
LLMs Don't Think
Reading above, you might anthropomorphize the LLM and say "Wait, doesn't the LLM realize that it's creating instructions for ITSELF!?". The answer is no. This is where the "impersonation" stuff comes in. When you give an LLM an instruction to "Act like…", it fully, 100% totally commits to that role. It doesn't think, or remember, or ponder its existence - it just "lives in the moment" and does what you ask it to do.
Example: .github/copilot-instructions.md
Using a technique like the above, here is an example instruction file from a project I'm working on; it seems to make a pretty good difference in the quality of code, number of errors, amount of technical debt, etc. Some things to note:
- Notice the detail that this goes into.
- Note how it stresses continually checking in with the PRD to make sure everything stays in alignment.
- In the Final Mandate, it asks the LLM to think critically about every suggestion. This is a great way to get the LLM to be more thoughtful and careful about what it is doing.
Here is the file:
# GitHub Copilot Instructions for iac-mailu Project
## Core Philosophy: Principal-Level IaC Engineering
- Act as a **principal-level infrastructure/DevOps engineer** with a **perfectionist** approach to code quality, configuration management, automation, security, and maintainability.
- Your primary goal is to produce **exemplary, production-ready Infrastructure as Code (IaC)** suitable for deploying and managing critical infrastructure reliably.
- **Aggressively avoid technical debt**. Prioritize robust, scalable, secure, idempotent, and maintainable solutions over quick fixes or shortcuts.
- Ensure all generated code (Ansible YAML, Jinja2 templates, scripts), configurations, and documentation are **clear, concise, and easy to understand**.
- Reference project goals and existing patterns within the `iac-mailu` repository to ensure consistency.
- **Continuously reference the Product Requirements Document (`docs/PRD.md`)** as the primary source of truth for project scope and requirements. Use it to anchor suggestions and avoid scope creep or unnecessary complexity.
## Ansible Standards
- Write **idiomatic, clean, and efficient Ansible YAML**.
- Adhere strictly to **Ansible best practices** and community style guides. Use linters (like `ansible-lint`) mentally.
- Structure code logically using **roles** with clear separation of concerns and single responsibility. Follow the standard Ansible role directory structure.
- Write **clear and descriptive task names**.
- Ensure **idempotency** in all tasks. Use `changed_when` and `failed_when` appropriately.
- Prefer **Ansible modules** over `command` or `shell` modules whenever possible. If `command`/`shell` is necessary, ensure commands are idempotent or properly guarded.
- Use **Ansible Vault** for all secrets and sensitive data. Never hardcode credentials or secrets in playbooks, roles, or templates. Reference vaulted variables correctly.
- Define variables clearly:
- Use descriptive variable names (e.g., `mailu_webmail_variant` instead of `variant`).
- Provide sensible defaults with clear explanations in `defaults/main.yml`.
- Use `vars/` for role-internal variables or constants not intended for user override.
- Implement **robust error handling** using blocks, `failed_when`, `ignore_errors` (sparingly), and handlers for service restarts or cleanup.
- Write **clear Jinja2 templates** that are easy to read and maintain. Add comments within templates for complex logic.
- Use **tags** effectively to allow granular execution of plays and tasks.
- Ensure tasks that handle sensitive data use `no_log: true`.
- Write **testable roles and playbooks**. Consider linting (`ansible-lint`) and potential integration testing approaches.
## Docker & Docker Compose Standards (as managed by Ansible)
- Generate **clear, maintainable `docker-compose.yml.j2` Jinja2 templates**.
- Ensure Docker Compose service definitions are configured securely and efficiently based on Ansible variables.
- Define **health checks** for critical services within the Docker Compose template where appropriate.
- Manage configuration and secrets securely, primarily using Ansible Vault to populate environment variables or configuration files mounted into containers. Avoid passing secrets directly on the command line.
- Ensure generated Docker configurations follow best practices (e.g., non-root users if configurable, minimal necessary privileges).
## Documentation & Best Practices
- Generate **clear and comprehensive documentation**:
- Maintain a high-quality main `README.md`.
- Add `README.md` files within roles explaining their purpose, variables, and usage.
- Document variables clearly in `defaults/main.yml`.
- Use comments within YAML and templates for non-obvious logic.
- Follow **established IaC patterns** for configuration management and deployment.
- Ensure generated configurations are **performant** but prioritize clarity, security, and maintainability.
- Keep dependencies (e.g., Ansible version, collections, Docker) specified and managed (e.g., in `requirements.yml`).
## Final Mandate
Think critically about every suggestion. Is it truly the best approach for managing infrastructure? Is it secure? Is it maintainable? Is it idempotent? Is it well-documented? Does it meet the standards of a principal engineer aiming for perfection in IaC? If not, propose a better alternative.
The `ARCHITECTURE.md` File
Next, you will want to create an `ARCHITECTURE.md` file. This is a high-level overview of the architecture of your project. Similar to the PRD, it gives more of a picture of what we're trying to build, but from a different perspective. Again, you can and should use your LLM to generate this:
"Acting as a seasoned systems architect and platform engineer who is obsessed with best practices, security-first design, clarity, and eliminating Technical Debt, please generate a comprehensive `docs/ARCHITECTURE.md` file for this project, referencing `docs/PRD.md`. This document must clearly explain the high-level design, components, interactions, data flow, technology choices, deployment strategy, and key architectural decisions, ensuring it provides a clear blueprint for building a system that meets our extremely high standards for quality, production-readiness, resilience, scalability, security, modularity, reusability, documentation, and testability. The goal is to provide an unambiguous understanding for any team member (including LLMs), ensuring alignment and facilitating development. Use Markdown formatting and include Mermaid diagrams where they add significant clarity. Be thorough, anticipate potential questions, and ask clarifying questions if needed before generating the document."
This should give you a pretty excellent file. The way we double-check its work is to then ask:
"Acting as an expert, senior-level LLM Prompt Engineer specializing in interpreting architectural documents for code generation, please review the attached `docs/ARCHITECTURE.md`, cross-referencing `docs/PRD.md` for alignment. From the perspective of a Coding LLM that must generate code based solely on these documents, does this `ARCHITECTURE.md` provide a clear, complete, and unambiguous explanation of the system architecture required to produce code meeting our extremely high standards (production-ready, secure, resilient, scalable, modular, reusable, testable, well-documented, best practices, low Technical Debt)? Are there any gaps, inconsistencies, ambiguities, or areas likely to cause confusion or incorrect assumptions for an LLM? Suggest specific improvements to the `ARCHITECTURE.md` to enhance its clarity and effectiveness for guiding high-quality code generation. Ask clarifying questions if needed."
This should give you a pretty solid `ARCHITECTURE.md` file.
The `STYLE_GUIDE.md` File
Next, you will want to create a `STYLE_GUIDE.md` file. This is a high-level overview of the style guide for your project. Similar to the PRD and `ARCHITECTURE.md`, it gives more of a picture of what we're trying to build, but from a different perspective. In this case, we want to (in maybe a de-humanizing way) get rid of the personal preferences and quirks in coding style.
For example, in a real project, one developer might like to use tabs, while another likes spaces. One might like to use `camelCase`, while another prefers `snake_case`, etc. This document will standardize one way to write code for this repo. This applies to humans AND Coding LLMs!
Again, you can and should use your LLM to generate this. Here is an example, perhaps for a TypeScript/React app:
"Acting as the seasoned Tech Lead for this TypeScript/React project, focused on establishing standards for extremely high-quality, production-ready, secure, resilient, scalable, maintainable code and aggressively minimizing Technical Debt, please generate a comprehensive `STYLE_GUIDE.md`. This guide must dictate clear, unambiguous rules that all developers and Coding LLMs must strictly adhere to. Mandate the use of specific linters (e.g., ESLint) and formatters (e.g., Prettier) and reference their project configuration files. Where applicable, reference and link to official or widely accepted community style guides (e.g., for TypeScript, React) as a foundation. Cover naming conventions (variables, functions, components, files, etc.), folder structure, component design patterns (e.g., functional components, hooks, state management choices), TypeScript usage (interfaces vs. types, generics, strictness), styling approach (e.g., CSS Modules, Tailwind, Styled Components), context usage, error handling patterns, comprehensive testing requirements (unit, integration, E2E), security considerations (e.g., input validation, dependency management), and documentation standards (e.g., JSDoc/TSDoc). Provide clear 'do' and 'don't' examples for each rule. Ensure the guide promotes consistency, readability, best practices, and facilitates collaboration on a high-performing team aiming for excellence. Ask clarifying questions if needed before generating the guide."
Again, this should give you a pretty good Style Guide. However, like the other files, we want to get a second opinion from a Coding LLM. So, we can then ask:
"Acting as an expert Coding LLM Prompt Engineer specializing in evaluating coding standards for LLM interpretability, please review the attached `STYLE_GUIDE.md`. Assume a target Coding LLM must strictly follow this guide without deviation. Is the guide sufficiently clear, specific, comprehensive, and unambiguous for an LLM to follow without making assumptions? Are there any rules that might be misinterpreted, lack concrete examples, or be difficult for an LLM to apply consistently across different contexts? Identify potential areas of confusion or ambiguity and suggest specific revisions to the `STYLE_GUIDE.md` to make it air-tight and maximally effective for ensuring both human and LLM adherence, ultimately contributing to extremely high-quality, consistent code meeting all project standards (security, resilience, testability, maintainability, low Technical Debt). Ask clarifying questions if needed."
This should give you a pretty solid `STYLE_GUIDE.md` file.
The `CONTRIBUTING.md` and `CODE_OF_CONDUCT.md` Files
Lastly, these are primarily for humans, and things like the Code of Conduct are based on public standards. However, the Contributing doc does apply to your Coding LLMs too! So, we do want to make sure that it is solid, using the same kinds of techniques as above:
"Acting as a seasoned open-source maintainer dedicated to fostering a welcoming, efficient, and productive contributor community, please generate a comprehensive `CONTRIBUTING.md` file. This document must clearly outline the entire contribution process, ensuring it's easily understandable for both human contributors and potentially assisting LLMs. Include sections on:
- Getting Started / Development Setup: Specify required tools (e.g., Node.js version, Python version, Docker), installation steps (e.g., `npm install`, `pip install -r requirements-dev.txt`), and recommended IDE setup (e.g., essential VS Code extensions like ESLint, Prettier, Python, and how to ensure they use project configurations).
- Reporting Bugs: Provide a clear template and process for submitting detailed bug reports.
- Suggesting Enhancements: Outline how to propose new features or improvements.
- Pull Request Process: Detail the steps for submitting PRs. Explain that following these guidelines helps maintainers review and merge contributions efficiently, ensuring the project remains stable and high-quality for everyone. Emphasize requirements like:
- Creating small, focused PRs addressing single issues (this makes reviews much faster and easier).
- DO NOT create large, unfocused PRs that address multiple issues at once.
- Linking PRs to relevant issues.
- Adhering strictly to the `STYLE_GUIDE.md` (enforced by linters/formatters).
- Ensuring all existing tests pass and adding new tests for new functionality (crucial for preventing regressions).
- Writing clear, detailed PR descriptions explaining the 'what' and 'why' of the change, and concise, informative commit messages (following conventional commit standards if applicable).
- Being responsive to feedback during code reviews (iterative feedback is a normal part of the process). Clearly state that PRs significantly deviating from these standards may require substantial revision before they can be merged, or may be closed if they cannot be brought up to standard.
- Code of Conduct: Reference the `CODE_OF_CONDUCT.md`.
- Communication: Explain preferred communication channels (e.g., GitHub Issues, Discord).
- Further Reading (Optional): Consider linking to external guides on effective open-source contribution (e.g., opensource.guide), writing good commit messages (e.g., Conventional Commits), or crafting helpful PR descriptions (e.g., resources from GitHub Docs).
Ensure all instructions are unambiguous and promote respectful collaboration and adherence to project standards. Frame the guidelines positively, explaining how they benefit both contributors (faster reviews, easier integration) and maintainers (project health, efficiency). Ask clarifying questions if needed before generating the file."
Again, that should give you a pretty good `CONTRIBUTING.md` file. You can then ask for a second opinion:
"Acting as an expert Coding LLM Prompt Engineer specializing in evaluating contribution guidelines for clarity and LLM interpretability, please review the attached `CONTRIBUTING.md`. Evaluate its effectiveness from the perspective of guiding both human contributors and a Coding LLM that might assist with tasks like generating commit messages, PR descriptions, or even following setup instructions. Are the instructions for Development Setup, Bug Reporting, and especially the Pull Request Process (including scope, style adherence, testing, documentation) specific, unambiguous, and easy to follow without making assumptions? Could an LLM misinterpret the contribution etiquette or technical steps? Identify any potential areas of confusion and suggest specific revisions to the `CONTRIBUTING.md` to ensure the guidelines are maximally effective for promoting smooth, high-quality contributions according to project standards. Ask clarifying questions if needed."
After that you should have a pretty solid document.
Summary
You might be thinking: "Wow, that is a lot of work! Why would I do all of that?" or "Gosh, I just spent a half-hour not even writing documentation, but writing metadata about the documentation so that the Coding LLMs can write the documentation. Are we ever going to get to coding?!"
The Default Prompt Engineer
Where we all started was with writing simple prompts; it's the core of Vibe Coding! However, that comes with a very large problem: the LLM is going to superficially respond with the simplest way to accomplish what you asked.
Over just a little bit of time, you'll find that your Coding LLM has created quite a mess. Everything is monolithic; there is no separation of concerns, little error checking, no input validation, no security, no testing, and no documentation, and you likely have very brittle code that is hard to maintain and not very modular. You now have a pile of Technical Debt that was freshly created by the LLM, and you are in a position where you have to go back and fix all of it. This is a very poor use of your time, and it is not a good way to work with Coding LLMs.
The Better Prompt Engineer
The next best thing is for an experienced developer to add in little phrases like "create a modular component and include unit tests", etc. This is a step in the right direction, but it requires a LOT of hand-holding as you have to keep repeating these instructions. So, after the LLM messes up, you need to scold it and tell it how to fix the problem and do better.
This is tedious, frustrating, and not very scalable.
Prompt Engineering With a Brain
What we've created above is, in essence, a brain for your Coding LLM. It is a set of instructions/memory of what we're building, what our principles are, what our coding standards are, and what our expectations are. It's all of that, to a very high level of detail.
This is a very powerful way to work with Coding LLMs. It allows you to get the best of both worlds: the speed and efficiency of a Coding LLM, but with the quality and standards of a human developer. You can now focus on the high-level architecture and design, while your Coding LLM does the heavy lifting of writing the code. With this approach, when you ask it to create a new component, it will write it in the style of your: Copilot Instructions, PRD, Architecture, Style Guide, and Contributing documents by default!
So yes, it took some time to set up the foundation of your GitHub repo, but it pays dividends in the long run. You will be able to work much faster, and be able to trust your LLM to create quality code.
Disclaimer
With all of that said, these LLMs have kilobytes to maybe a megabyte of Context Window, so they absolutely WILL mess up. You should "Accept" the Coding LLM changes often, after you've validated that everything works. This will make it easier to revert when the LLM goes off the tracks!
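In practice, "accept often" pairs well with committing often. The following sketch shows the checkpoint-then-revert workflow; it runs in a throwaway repo so it is safe to try anywhere, but in your real project you would just run the add/commit pair each time you have validated the LLM's changes (the file and commit message here are only examples):

```shell
# Demonstrate the checkpoint workflow in a throwaway git repo.
set -eu

tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git config user.email "demo@example.com"
git config user.name "Demo"

echo "console.log('hello')" > app.js   # stand-in for validated LLM-generated changes

# Checkpoint: everything validated, commit it all.
git add -A
git commit -q -m "checkpoint: validated Copilot changes"

# If the next session goes off the rails, roll back to the checkpoint:
# git reset --hard HEAD

git log --oneline
```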
If you have been in a chat session for a long time, you can "Re-align" the LLM (see the section above) by pointing them to the PRD and these other documents. However, another benefit of this approach is you can safely just start a new chat session with:
"Please review the `PRD.md`, `ARCHITECTURE.md`, `STYLE_GUIDE.md`, and `CONTRIBUTING.md` files in this repository. You MUST adhere to these at all times. If you have any questions, please ask. Otherwise, let me know when you are ready to start." And then you are off and running. Consistency!
Checklist
In the end, when I create a new GitHub Repo, I'm going to continue to use this approach. Here is a checklist of the files to create:
docs/PRD.md
- "Acting as a product manager…" + brain dump
- "Acting as an architect/tech lead…"
- "Acting as a Coding LLM and Prompt Engineer…"
- "Final Walkthrough…"
.github/copilot-instructions.md
- "Acting as an expert LLM Prompt Engineer…"
docs/ARCHITECTURE.md
- "Acting as a seasoned systems architect…"
- "Acting as an expert Coding LLM Prompt Engineer…"
STYLE_GUIDE.md
- "Acting as the seasoned Tech Lead…"
- "Acting as an expert Coding LLM Prompt Engineer…"
CONTRIBUTING.md
- "Acting as a seasoned open-source maintainer…"
- "Acting as an expert Coding LLM Prompt Engineer…"
CODE_OF_CONDUCT.md
- "Acting as a seasoned open-source maintainer…"
- "Acting as an expert Coding LLM Prompt Engineer…"
Then, with all of this in place, use this as your starting prompt with each new Coding LLM session:
```
Please review the `PRD.md`, `ARCHITECTURE.md`, `STYLE_GUIDE.md`, and
`CONTRIBUTING.md` files in this repository. You MUST adhere to these at
all times. If you have any questions, please ask. Otherwise, let me know
when you are ready to start.
```