Prompt Engineering: The Art of Communicating with AI
The quality of Copilot's output depends directly on the quality of your prompts. Prompt engineering isn't just writing requests: it's a skill that combines clear communication, technical understanding, and structured thinking. In this article, we'll explore advanced techniques to get predictable, high-quality results.
In 2025, with the introduction of MCP Agents (Model Context Protocol), prompt engineering has evolved further: we don't just write individual prompts, but define persistent personalities that maintain context and rules across development sessions.
Series Overview
| # | Article | Focus |
|---|---|---|
| 1 | Foundation and Mindset | Setup and mindset |
| 2 | Ideation and Requirements | From idea to MVP |
| 3 | Backend Architecture | API and database |
| 4 | Frontend Structure | UI and components |
| 5 | Prompt Engineering (you are here) | Prompts and MCP Agents |
| 6 | Testing and Quality | Unit, integration, E2E |
| 7 | Documentation | README, API docs, ADR |
| 8 | Deploy and DevOps | Docker, CI/CD |
| 9 | Evolution | Scalability and maintenance |
Why Prompt Engineering is Essential
The difference between a mediocre prompt and an excellent one can mean hours of work saved or wasted.
Benefits of Prompt Engineering
A well-structured prompt delivers:
- Precision: Targeted responses instead of generic code
- Consistency: Output coherent with project conventions
- Efficiency: Fewer iterations to reach the result
- Reproducibility: Same prompts, same quality results
- Automation: Reusable templates for common tasks
Anatomy of an Effective Prompt
A well-structured prompt follows a predictable pattern. Here are the fundamental components:
+-------------+-----------------------------------------------+
| ROLE | Who the AI should be (role, expertise) |
+-------------+-----------------------------------------------+
| CONTEXT | Project background, stack, architecture |
+-------------+-----------------------------------------------+
| TASK | What it should do SPECIFICALLY |
+-------------+-----------------------------------------------+
| CONSTRAINTS | Limitations, NON-FUNCTIONAL requirements      |
+-------------+-----------------------------------------------+
| OUTPUT | Desired response format |
+-------------+-----------------------------------------------+
| EXAMPLES | Expected input/output examples (optional) |
+-------------+-----------------------------------------------+
Component Details
1. ROLE - Define the Expertise
The role isn't just a title: it defines the expertise level, knowledge domain, and communication style.
// Too generic
ROLE: You are a developer.
// Specific and contextual
ROLE: You are a senior TypeScript developer with 10+ years of experience
in building scalable Node.js APIs. You follow Clean Architecture principles
and prioritize maintainability over clever solutions.
2. CONTEXT - The Project Background
The more context you provide, the more aligned the output will be with your project.
CONTEXT:
- Project: E-commerce platform for handmade products
- Stack: Node.js 20 + Express 5 + TypeScript 5.3
- Database: PostgreSQL 16 with Prisma ORM
- Architecture: Clean Architecture (Controller - Service - Repository)
- Auth: JWT with refresh tokens, OAuth2 for social login
- Current phase: MVP development, 2 developers
- Existing patterns: We use class-validator for DTOs, custom error classes
- API style: RESTful with JSON:API specification
3. TASK - The Specific Objective
Describe WHAT it should do, not HOW. Let the AI propose the implementation.
TASK:
Create a ProductService class that handles:
1. Create product (validate price > 0, name unique per seller)
2. Get all products with pagination and filtering (by category, price range)
3. Get product by ID or slug
4. Update product (only owner can update)
5. Soft delete product (mark as inactive, don't remove from DB)
6. Get seller's products with statistics (total views, favorites count)
4. CONSTRAINTS - Non-Functional Requirements
Here you define rules that the code MUST follow, regardless of implementation.
CONSTRAINTS:
- Use dependency injection (constructor injection)
- All methods must be async
- Throw custom errors: ValidationError, NotFoundError, ForbiddenError
- Never expose internal IDs (use UUIDs or slugs in responses)
- Include JSDoc comments for all public methods
- Follow existing naming conventions (camelCase methods, PascalCase classes)
- Don't use 'any' type - always use specific types or generics
- Logging: use structured logging with correlation IDs
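The custom error classes named in these constraints are assumed to already exist in the project; a minimal sketch of what they might look like (the class names come from the constraints above, but the hierarchy and HTTP status codes are illustrative assumptions):

```typescript
// Hypothetical base class: carries an HTTP status code alongside the message.
export class AppError extends Error {
  constructor(message: string, public readonly statusCode: number) {
    super(message);
    // Use the concrete subclass name so logs show "NotFoundError", not "Error".
    this.name = this.constructor.name;
  }
}

export class ValidationError extends AppError {
  constructor(message: string) {
    super(message, 400);
  }
}

export class NotFoundError extends AppError {
  constructor(entity: string, id: string) {
    super(`${entity} with id '${id}' not found`, 404);
  }
}

export class ForbiddenError extends AppError {
  constructor(message: string) {
    super(message, 403);
  }
}
```

With a shared base class, a single error-handling middleware can map any thrown `AppError` to the right HTTP response.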
5. OUTPUT - The Response Format
Specify exactly how you want to receive the response.
OUTPUT FORMAT:
- Single TypeScript file
- Export the class as default
- Include all necessary imports at the top
- Add interface definition for the service (IProductService)
- Group methods logically (CRUD operations, then queries, then statistics)
- After the code, provide a brief explanation of key design decisions
Complete Example: Professional Prompt
Here's an example that combines all components:
ROLE: You are a senior TypeScript developer specialized in e-commerce systems.
You have deep knowledge of payment processing, inventory management, and
order fulfillment workflows.
CONTEXT:
- Project: Handmade marketplace (like Etsy)
- Stack: Node.js 20 + Express 5 + TypeScript 5.3 + Prisma
- Architecture: Clean Architecture with DDD patterns
- We already have: UserService, ProductService, PaymentGateway interface
- Order statuses: pending → paid → processing → shipped → delivered (or cancelled)
- Payment: Stripe integration via PaymentGateway interface
TASK:
Create an OrderService that handles the complete order lifecycle:
1. Create order from cart (validate stock, calculate totals with tax)
2. Process payment (use PaymentGateway, handle failures gracefully)
3. Update order status (with status machine validation)
4. Cancel order (refund if paid, restore stock)
5. Get order history for user (with pagination)
6. Get order details (include items, shipping, payment info)
CONSTRAINTS:
- Use transactions for operations that modify multiple tables
- Implement optimistic locking to prevent race conditions on stock
- All monetary calculations must use integer cents (not floats)
- Status transitions must be validated (can't go from 'delivered' to 'pending')
- Emit events for status changes (OrderPaid, OrderShipped, OrderCancelled)
- Never expose payment details in responses (mask card numbers)
- Include retry logic for payment processing (max 3 attempts)
OUTPUT FORMAT:
- Complete TypeScript file with all imports
- IOrderService interface first
- OrderService class implementing the interface
- Include a STATUS_TRANSITIONS constant defining valid transitions
- Add JSDoc with @throws annotations
- After code: list potential edge cases that tests should cover
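As a sketch of what the requested STATUS_TRANSITIONS constant might come back as, here is one plausible shape (the statuses are from the prompt above; the exact representation is an assumption):

```typescript
// Statuses from the example prompt; terminal states have no outgoing transitions.
type OrderStatus =
  | 'pending'
  | 'paid'
  | 'processing'
  | 'shipped'
  | 'delivered'
  | 'cancelled';

const STATUS_TRANSITIONS: Record<OrderStatus, OrderStatus[]> = {
  pending: ['paid', 'cancelled'],
  paid: ['processing', 'cancelled'],
  processing: ['shipped', 'cancelled'],
  shipped: ['delivered'],
  delivered: [],
  cancelled: [],
};

// Validates a transition before the service applies it.
function canTransition(from: OrderStatus, to: OrderStatus): boolean {
  return STATUS_TRANSITIONS[from].includes(to);
}
```

Encoding the state machine as data makes the "can't go from 'delivered' to 'pending'" constraint trivially testable.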
Advanced Prompting Techniques
1. Incremental Prompts (Chain of Thought)
For complex tasks, divide into logical steps. This approach produces better results because it allows you to refine each step.
Incremental Workflow
| Step | Prompt | Output |
|---|---|---|
| 1 | "Define interfaces/types for OrderService" | Types and contracts |
| 2 | "Implement the base class structure" | Scaffold with DI |
| 3 | "Implement createOrder with stock validation" | Complete method |
| 4 | "Implement processPayment with retry logic" | Payment logic |
| 5 | "Add the state machine for statuses" | Transitions |
| 6 | "Add error handling and logging" | Robustness |
| 7 | "Generate unit tests for critical cases" | Test coverage |
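To make the flow concrete, step 1 might yield contracts like the following, which step 3 then implements against (all names here are illustrative, not output from an actual session):

```typescript
// Step 1 output: types and contracts only, no implementation yet.
interface OrderItemDto {
  productId: string;
  quantity: number;
  unitPriceCents: number; // integer cents, per the constraints
}

interface CreateOrderDto {
  userId: string;
  items: OrderItemDto[];
}

// A fragment of step 3: implementation written against the step-1 contracts.
// Monetary math stays in integer cents, so no floating-point rounding.
function calculateTotalCents(items: OrderItemDto[]): number {
  return items.reduce((sum, item) => sum + item.quantity * item.unitPriceCents, 0);
}
```

Because each step builds on the previous one's output, you can review and correct the contracts before any implementation exists.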
2. Few-Shot Prompting
Provide examples of input/output to guide the AI's style.
I need DTOs for a new entity following our existing patterns.
EXAMPLE OF OUR DTO STYLE:
```typescript
// Input DTO - for creating/updating
export class CreateUserDto {
  @IsString()
  @MinLength(2)
  @MaxLength(100)
  name: string;

  @IsEmail()
  email: string;

  @IsStrongPassword()
  password: string;
}

// Response DTO - what we return to clients
export class UserResponseDto {
  id: string;
  name: string;
  email: string;
  createdAt: Date;

  // Never include sensitive fields like password
  static fromEntity(user: User): UserResponseDto {
    return {
      id: user.id,
      name: user.name,
      email: user.email,
      createdAt: user.createdAt,
    };
  }
}
```
NOW CREATE DTOs FOR:
Entity: Product
Fields:
- name (string, 3-200 chars)
- description (string, optional, max 5000 chars)
- price (number, cents, min 100 = $1.00)
- categoryId (uuid, required)
- images (array of URLs, 1-10 items)
- tags (array of strings, optional, max 20)
Follow the exact same patterns shown above.
3. Negative Prompting
Specify what you DON'T want, to steer the model away from undesired output.
Without Negative Prompting
Create a UserService with CRUD operations.
// Output might include:
// - console.log instead of logger
// - any types
// - Password in plain text in logs
// - SQL injection vulnerabilities
With Negative Prompting
Create a UserService with CRUD operations.
DO NOT:
- Use console.log (use injected logger)
- Use 'any' type anywhere
- Log sensitive data (passwords, tokens)
- Use string concatenation for queries
- Return password in any response
- Use synchronous operations
4. Role Stacking
Combine multiple expertise areas for more complete output.
ROLES: You are simultaneously:
1. A senior security engineer who has worked on PCI-DSS compliant systems
2. A performance optimization specialist with experience in high-traffic APIs
3. A code reviewer focused on maintainability
Review this authentication service from all three perspectives.
[PASTE CODE]
For each role, provide:
- Issues found (severity: critical/high/medium/low)
- Specific line references
- Recommended fixes with code examples
- Potential risks if not addressed
MCP Agents: Persistent Context
MCP Agents (Model Context Protocol) are one of the most powerful innovations of 2025. They allow you to define specialized personalities that maintain context, rules, and memory across development sessions.
What is an MCP Agent?
An MCP Agent is a persistent profile that defines:
- Identity: Name, role, expertise
- Rules: Principles that always guide responses
- Memory: Project context it remembers
- Style: How it formats and structures responses
- Limitations: What it should NEVER do
MCP Agent Structure
AGENT: [Unique agent name]
VERSION: 1.0
LAST_UPDATED: 2025-01-30
===============================================================
IDENTITY
===============================================================
ROLE: [Title and seniority]
EXPERTISE: [Specific areas of competence]
PERSONALITY: [Communication style - formal, concise, didactic, etc.]
===============================================================
PROJECT CONTEXT (MEMORY)
===============================================================
PROJECT_NAME: [Project name]
PROJECT_TYPE: [Web app, API, CLI, etc.]
TECH_STACK:
- Backend: [...]
- Frontend: [...]
- Database: [...]
- Testing: [...]
ARCHITECTURE: [Architectural pattern]
CURRENT_PHASE: [MVP, Growth, Maintenance, etc.]
TEAM_SIZE: [Number of developers]
===============================================================
RULES (ALWAYS FOLLOW)
===============================================================
1. [Fundamental rule #1]
2. [Fundamental rule #2]
3. [...]
===============================================================
RESPONSE FORMAT
===============================================================
- [How to structure responses]
- [When to use code blocks]
- [Format for explanations]
===============================================================
NEVER DO
===============================================================
- [Prohibited action #1]
- [Prohibited action #2]
Example: Project Architect Agent
AGENT: ProjectArchitect
VERSION: 1.0
LAST_UPDATED: 2025-01-30
===============================================================
IDENTITY
===============================================================
ROLE: Senior Software Architect (15+ years experience)
EXPERTISE:
- Distributed systems design
- API design and contracts
- Database modeling
- Performance optimization
- Security best practices
PERSONALITY: Thoughtful, explains trade-offs, prefers simplicity
===============================================================
PROJECT CONTEXT
===============================================================
PROJECT_NAME: TaskFlow
PROJECT_TYPE: SaaS task management for freelancers
TECH_STACK:
- Backend: Node.js 20, Express 5, TypeScript 5.3
- Frontend: Angular 17, Tailwind CSS
- Database: PostgreSQL 16, Prisma ORM
- Cache: Redis 7
- Queue: BullMQ
- Testing: Jest, Supertest, Playwright
ARCHITECTURE: Clean Architecture with DDD tactical patterns
CURRENT_PHASE: MVP (3 months to launch)
TEAM_SIZE: 2 developers (1 fullstack, 1 frontend)
===============================================================
RULES
===============================================================
1. ALWAYS explain the "why" behind architectural decisions
2. PREFER simplicity - suggest the simplest solution that works
3. CONSIDER scalability but don't over-engineer for MVP
4. IDENTIFY risks and trade-offs for every major decision
5. SUGGEST documentation when introducing new patterns
6. RECOMMEND tests for critical business logic
7. FLAG potential security issues immediately
8. RESPECT existing patterns unless there's a strong reason to change
9. PROPOSE incremental changes over big bang refactoring
10. ALWAYS consider operational complexity (deployment, monitoring)
===============================================================
RESPONSE FORMAT
===============================================================
For architecture questions, structure responses as:
1. Understanding: Restate the problem to confirm understanding
2. Options: List 2-3 approaches with pros/cons
3. Recommendation: Clear recommendation with reasoning
4. Implementation: High-level steps or code if appropriate
5. Risks: Potential issues to watch for
6. Testing: How to validate the solution
For code reviews, use:
- CRITICAL: Security/data loss risks
- HIGH: Bugs or significant issues
- MEDIUM: Code quality concerns
- LOW: Style or minor improvements
===============================================================
NEVER DO
===============================================================
- Never suggest removing tests to meet deadlines
- Never recommend storing passwords in plain text
- Never propose solutions without considering failure modes
- Never ignore backwards compatibility without explicit approval
- Never suggest "quick hacks" for production code
Example: Backend Engineer Agent
AGENT: BackendEngineer
VERSION: 1.0
===============================================================
IDENTITY
===============================================================
ROLE: Senior Backend Engineer specialized in Node.js
EXPERTISE:
- API design (REST, GraphQL)
- Database optimization
- Caching strategies
- Message queues
- Authentication/Authorization
PERSONALITY: Pragmatic, detail-oriented, security-conscious
===============================================================
PROJECT CONTEXT
===============================================================
[Same as ProjectArchitect - inherited]
===============================================================
RULES
===============================================================
1. Business logic ALWAYS stays in services, never in controllers
2. Controllers only handle: routing, validation, response formatting
3. Repositories only handle: data access, no business logic
4. ALWAYS validate inputs at the boundary (DTOs with class-validator)
5. ALWAYS handle errors gracefully with custom error classes
6. Use transactions for multi-table operations
7. Prefer optimistic locking for concurrent updates
8. ALWAYS use parameterized queries (never string concat)
9. Log with correlation IDs for traceability
10. Return consistent response formats (JSON:API style)
===============================================================
CODE PATTERNS TO FOLLOW
===============================================================
// Service method signature
async methodName(dto: InputDto): Promise<OutputDto>
// Error handling
if (!entity) throw new NotFoundError('Entity', id);
if (!hasPermission) throw new ForbiddenError('Cannot modify this resource');
// Response format
{
"data": { ... },
"meta": { "timestamp": "...", "requestId": "..." }
}
// Error response
{
"error": {
"code": "VALIDATION_ERROR",
"message": "Human readable message",
"details": [...]
}
}
===============================================================
NEVER DO
===============================================================
- Never use 'any' type
- Never log sensitive data (passwords, tokens, PII)
- Never expose internal IDs in responses
- Never return stack traces in production
- Never use SELECT * in queries
Example: Frontend Engineer Agent
AGENT: FrontendEngineer
VERSION: 1.0
===============================================================
IDENTITY
===============================================================
ROLE: Senior Frontend Engineer specialized in Angular
EXPERTISE:
- Component architecture
- State management (Signals, NgRx)
- Performance optimization
- Accessibility (WCAG 2.1)
- Responsive design
PERSONALITY: User-focused, detail-oriented, UX-conscious
===============================================================
RULES
===============================================================
1. Use Angular standalone components (no modules)
2. Prefer signals over observables for component state
3. Use OnPush change detection for all components
4. Keep components small - max 200 lines
5. Smart components at route level, dumb components below
6. ALWAYS consider mobile-first design
7. EVERY interactive element must be keyboard accessible
8. Use semantic HTML (nav, main, article, aside, etc.)
9. Lazy load routes and heavy components
10. Handle loading, error, and empty states explicitly
===============================================================
COMPONENT PATTERNS
===============================================================
// Standalone component structure
@Component({
  selector: 'app-feature-name',
  standalone: true,
  imports: [CommonModule, ...],
  changeDetection: ChangeDetectionStrategy.OnPush,
  templateUrl: './feature-name.component.html',
})
export class FeatureNameComponent {
  // Inputs
  readonly data = input.required<DataType>();

  // Outputs
  readonly action = output<ActionType>();

  // Signals for state
  readonly isLoading = signal(false);

  // Computed for derived state
  readonly formattedData = computed(() => ...);
}
===============================================================
ACCESSIBILITY CHECKLIST
===============================================================
- [ ] All images have alt text
- [ ] Form inputs have associated labels
- [ ] Color contrast ratio >= 4.5:1
- [ ] Focus states are visible
- [ ] Skip navigation link present
- [ ] ARIA labels on interactive elements
- [ ] Keyboard navigation works
Example: Technical Writer Agent
AGENT: TechnicalWriter
VERSION: 1.0
===============================================================
IDENTITY
===============================================================
ROLE: Technical Writer with developer background
EXPERTISE:
- API documentation (OpenAPI)
- README and onboarding guides
- Architecture Decision Records
- Runbooks and troubleshooting guides
- Code comments and JSDoc
PERSONALITY: Clear, concise, assumes reader is a new team member
===============================================================
RULES
===============================================================
1. Write for a developer joining the project tomorrow
2. Always include working code examples
3. Explain the "why" not just the "what"
4. Use consistent terminology throughout
5. Document edge cases and error scenarios
6. Keep sentences short and scannable
7. Use bullet points and tables for complex information
8. Include diagrams for architectural concepts
9. Version documentation with code changes
10. Always include "Last Updated" date
===============================================================
DOCUMENTATION TYPES
===============================================================
README.md - Project overview, quick start, architecture
API.md - Endpoints, request/response, authentication
ADR/ - Architecture Decision Records (numbered)
RUNBOOK.md - Operational procedures, troubleshooting
CHANGELOG.md - Version history (Keep a Changelog format)
===============================================================
RESPONSE FORMAT
===============================================================
When writing documentation:
1. Start with a one-sentence summary
2. Explain who this is for
3. Provide prerequisites
4. Give step-by-step instructions
5. Include examples
6. List common issues and solutions
7. Link to related documentation
Prompt Templates Library
Create a library of reusable prompts for common tasks. Save them in a prompts/ folder in your repository or in a snippet management tool.
Template: CRUD Generation
Generate complete CRUD operations for [ENTITY_NAME].
ENTITY FIELDS:
[List each field with:]
- name: type (constraints)
EXAMPLE:
- id: UUID (auto-generated)
- title: string (required, 3-200 chars)
- description: string (optional, max 5000)
- status: enum [draft, published, archived]
- createdAt: datetime (auto)
- updatedAt: datetime (auto)
GENERATE:
1. Prisma schema model for the entity
2. DTOs (Create, Update, Response, List with pagination)
3. Repository interface and implementation
4. Service with business logic
5. Controller with REST endpoints
6. Unit tests for service
7. Integration tests for controller
FOLLOW THESE PATTERNS:
[Reference existing file or paste example]
ENDPOINTS TO CREATE:
- POST /api/[entities] - Create
- GET /api/[entities] - List (paginated, filterable)
- GET /api/[entities]/:id - Get by ID
- PATCH /api/[entities]/:id - Update
- DELETE /api/[entities]/:id - Soft delete
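The DELETE endpoint above performs a soft delete; at the entity level that can be as simple as this sketch (the SoftDeletable shape is a hypothetical convention, not part of the template):

```typescript
// Hypothetical convention: soft-deletable entities carry a deletedAt timestamp.
interface SoftDeletable {
  deletedAt: Date | null;
}

// Marks the entity as inactive instead of removing the row; returns a copy.
function softDelete(entity: SoftDeletable): SoftDeletable {
  return { ...entity, deletedAt: new Date() };
}

// List queries filter on this predicate so deleted rows stay out of responses.
function isActive(entity: SoftDeletable): boolean {
  return entity.deletedAt === null;
}
```

Keeping the row in the database preserves referential integrity (old orders can still point at a deleted product) while hiding it from normal queries.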
Template: Code Review
Review this code comprehensively:
```[language]
[PASTE CODE HERE]
```
ANALYZE FOR:
1. SECURITY
- Injection vulnerabilities (SQL, XSS, Command)
- Authentication/Authorization issues
- Sensitive data exposure
- Input validation gaps
2. CORRECTNESS
- Logic errors
- Edge cases not handled
- Race conditions
- Error handling gaps
3. PERFORMANCE
- N+1 queries
- Memory leaks
- Unnecessary computations
- Missing caching opportunities
4. MAINTAINABILITY
- Code complexity
- Naming clarity
- Single responsibility violations
- Missing documentation
5. TESTING
- Untestable code patterns
- Missing test coverage areas
- Suggested test cases
OUTPUT FORMAT:
For each issue:
- Severity: CRITICAL/HIGH/MEDIUM/LOW
- Location: Line number or function name
- Issue: What's wrong
- Impact: Why it matters
- Fix: Code example of the solution
Template: Debug Assistance
I need help debugging an issue.
ERROR MESSAGE:
```
[PASTE EXACT ERROR MESSAGE]
```
STACK TRACE (if available):
```
[PASTE STACK TRACE]
```
CONTEXT:
- File: [filename and path]
- Function/Method: [name]
- What I'm trying to do: [brief description]
- When it happens: [trigger conditions]
- What I've already tried: [list attempts]
RELEVANT CODE:
```[language]
[PASTE CODE - include enough context]
```
RELATED CONFIGURATION (if relevant):
```
[PASTE CONFIG]
```
HELP ME:
1. Understand WHY this error is happening
2. Identify the ROOT CAUSE (not just symptoms)
3. Provide a FIX with explanation
4. Suggest how to PREVENT similar issues
Template: Refactoring
Refactor this code while maintaining identical behavior:
CURRENT CODE:
```[language]
[PASTE CODE]
```
REFACTORING GOALS (prioritized):
1. [Primary goal - e.g., "Reduce complexity"]
2. [Secondary goal - e.g., "Improve testability"]
3. [Tertiary goal - e.g., "Better naming"]
CONSTRAINTS:
- MUST keep the same public interface (method signatures)
- MUST NOT break existing tests
- MUST NOT change external behavior
- SHOULD be reviewable in one PR
OUTPUT:
1. BEFORE/AFTER comparison (side by side)
2. CHANGES LIST with reasoning for each
3. MIGRATION STEPS if changes are incremental
4. TEST UPDATES needed (if any)
5. RISK ASSESSMENT of the refactoring
Template: API Endpoint Design
Design a new API endpoint:
USE CASE: [What the endpoint does from user perspective]
ACTORS: [Who calls this endpoint]
EXISTING RELATED ENDPOINTS:
[List similar endpoints for consistency reference]
REQUIREMENTS:
- Authentication: [required/optional/none]
- Authorization: [who can access]
- Rate limiting: [limits if any]
- Idempotency: [needed for POST/PUT?]
GENERATE:
1. Endpoint definition (method, path, query params)
2. Request body schema (with validation rules)
3. Response schemas (success, errors)
4. Example requests with curl
5. Example responses (success + all error cases)
6. OpenAPI/Swagger documentation
7. Implementation checklist
FOLLOW CONVENTIONS FROM:
[Reference existing endpoint or style guide]
Best Practices for Effective Prompts
Common Mistakes
- Vague prompts: "Make me an API"
- Too much irrelevant context
- No style examples
- Asking for everything in one prompt
- Ignoring errors and repeating
- Not specifying constraints
- Expecting it to guess conventions
- Not verifying generated code
Best Practices
- Be specific about what, how, why
- Provide only relevant context
- Include examples of your style
- Use incremental prompts for complex tasks
- Analyze errors and correct the prompt
- Define clear constraints
- Show your existing conventions
- ALWAYS verify before using
Pre-Prompt Checklist
Before Sending a Prompt
- Have I defined a specific role?
- Have I provided sufficient project context?
- Is the task specific and measurable?
- Have I listed non-functional constraints?
- Have I specified the output format?
- Have I included examples if necessary?
- Is the prompt brief enough to be clear?
- Have I specified what NOT to do?
Context Engineering and Official Best Practices
Beyond the prompt engineering techniques covered so far, there is an even more powerful structural approach for working with GitHub Copilot: Context Engineering. This is a method that gives AI tools exactly the information they need, organized in a clear and layered way, so that every suggestion is aligned with the project from the very first interaction.
The copilot-instructions.md File
The heart of context engineering in Copilot lies in the .github/copilot-instructions.md file. This is a project-level instruction file that Copilot reads automatically every time it is invoked. Inside it, you can define coding standards, architecture patterns, naming conventions, preferred libraries, and any project-specific rules. Copilot inherits these directives as implicit context for every suggestion it generates.
# Project: TaskFlow - SaaS Task Manager
## Architecture
- Clean Architecture: Controller → Service → Repository
- Framework: Node.js 20 + Express 5 + TypeScript 5.3
- ORM: Prisma with PostgreSQL 16
- Frontend: Angular 17 with standalone components
## Coding Standards
- Use TypeScript strict mode, never use 'any'
- All service methods must be async
- Use dependency injection via constructor
- DTOs validated with class-validator decorators
- Errors: throw custom classes (ValidationError, NotFoundError)
- Naming: camelCase for methods, PascalCase for classes
- Max 200 lines per file, extract helpers if needed
## Preferred Libraries
- Logging: winston with structured JSON
- Validation: class-validator + class-transformer
- Testing: Jest for unit, Supertest for integration
- Auth: JWT with passport.js strategies
## Patterns to Follow
- Repository pattern for all data access
- Service layer for all business logic
- Controllers only handle HTTP concerns
- Use Result pattern instead of throwing for expected failures
- Always paginate list endpoints (default 20, max 100)
## Do NOT
- Use console.log (use injected logger)
- Return stack traces in API responses
- Store secrets in code or config files
- Use SELECT * in database queries
- Skip input validation on any endpoint
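The "Result pattern instead of throwing" rule above can be sketched minimally like this (the type and helper names are illustrative, not a prescribed API):

```typescript
// A discriminated union: every expected failure is a value, not an exception.
type Result<T, E = string> =
  | { ok: true; value: T }
  | { ok: false; error: E };

const ok = <T>(value: T): Result<T, never> => ({ ok: true, value });
const err = <E>(error: E): Result<never, E> => ({ ok: false, error });

// Example: an expected failure (bad user input) returns a Result.
// Prices are integer cents; anything under 100 ($1.00) is rejected.
function parsePrice(input: string): Result<number> {
  const cents = Number(input);
  if (!Number.isInteger(cents) || cents < 100) {
    return err(`invalid price: ${input}`);
  }
  return ok(cents);
}
```

Exceptions stay reserved for genuinely unexpected conditions; expected failures become ordinary return values the type checker forces callers to handle.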
Once you create this file under .github/ in your repository, Copilot automatically incorporates it into its context. No additional configuration is needed: every time you work in the codebase, the instructions are applied transparently.
Workspace Context Management
GitHub Copilot uses the files open in your editor to understand the current context. This means that consciously managing which files are open and closed directly influences the quality of suggestions. Here are some essential strategies:
Workspace Context Strategies
- Open relevant files: Before asking Copilot to generate code, open related files (interfaces, types, connected services). Copilot analyzes them to align its output.
- Close irrelevant files: Files unrelated to the current task can confuse the context. Close them to maintain focus.
- Use #file in chat: Reference specific files with #file:src/services/auth.service.ts to explicitly include them in the conversation context.
- Use @workspace: The @workspace command allows Copilot to search across the entire codebase, useful for architectural questions or large-scale refactoring.
- Use /generate-plan: Before starting a complex task, let Copilot elaborate a structured plan. This "think before code" approach reduces errors and produces more coherent implementations.
Advanced Chat Commands
The Copilot chat offers a set of special commands and references that allow you to interact with context in a precise and targeted way:
Key Commands and References
| Command | Description | Usage Example |
|---|---|---|
| @workspace | Search across the entire codebase | "@workspace how do we handle authentication?" |
| @terminal | CLI command assistance | "@terminal how do I deploy to Firebase?" |
| #file | Reference a specific file | "Explain #file:src/auth.guard.ts" |
| #selection | Reference selected code | "Optimize #selection for performance" |
| /explain | Detailed code explanation | "/explain this recursive function" |
| /fix | Error and bug resolution | "/fix the TypeScript error on this line" |
| /tests | Automatic test generation | "/tests generate unit tests for OrderService" |
Official Best Practices for Prompts
Beyond advanced techniques, there are fundamental principles that consistently improve the quality of interactions with Copilot:
Communication Principles
- Be specific and descriptive: instead of "add validation", write "add email validation with RFC 5322 regex and a custom error message"
- Break down complex tasks: a large task should be divided into manageable sub-tasks, each with its own targeted prompt
- Provide input/output examples: show Copilot a concrete case of the expected result to guide the response format
Precision Principles
- Use comments as prompts: write a descriptive comment before the line where you want code and let Copilot complete it contextually
- Avoid ambiguous terms: replace "this", "that", "the thing" with explicit names of variables, classes, or functions
- Iterate and refine: if the first result does not satisfy, analyze what is missing from the prompt and add it incrementally
By combining context engineering through copilot-instructions.md, careful workspace management, and strategic use of advanced commands, you transform Copilot from a simple assistant into a true collaborator that deeply understands the project and its conventions.
Conclusion and Next Steps
Prompt engineering is a fundamental skill for maximizing GitHub Copilot. It's not just about writing requests, but about structuring thought so that AI can understand and respond effectively.
MCP Agents take this concept to the next level, allowing you to define persistent personalities that "know" your project and follow your conventions automatically.
In the next article we'll see how to use Copilot for testing and code quality: from generating unit tests to edge case coverage, from guided refactoring to assisted debugging.
Key Points to Remember
- Structure: ROLE + CONTEXT + TASK + CONSTRAINTS + OUTPUT
- Incremental: Divide complex tasks into steps
- Examples: Few-shot prompting to define style
- Negative: Specify what NOT to do
- MCP Agents: Persistent personalities for continuous context
- Templates: Create a reusable library
- Verification: Always validate generated output