Overview: Eight Advanced Servers for the MCP Ecosystem
In the previous articles we analyzed MCP servers dedicated to project management, productivity and DevOps. With this eleventh article in the series we tackle the eight advanced servers that complete the architecture of the Tech-MCP project: incident management, quality gates, workflow orchestration, access control, decision logs, cross-server analytics, service registry and aggregated dashboards.
These servers represent the most sophisticated layer of the ecosystem. They do not operate in isolation but are designed to collaborate with each other through the EventBus and the ClientManager, creating an integrated system capable of reacting automatically to events, enforcing security policies, verifying code quality and orchestrating complex workflows.
What You Will Learn in This Article
- How to manage the complete lifecycle of an incident with 6 dedicated tools
- How to record and track Architecture Decision Records (ADR) with states and links
- How to implement RBAC access control with allow/deny policies
- How to define and evaluate metric-based quality gates with comparison operators
- How to orchestrate event-driven workflows with cross-server execution via ClientManager
- How to aggregate analytics and metrics from all servers with the insight engine
- How to implement service discovery and health monitoring with the registry
- How to expose aggregated dashboards with caching and multi-source data
Advanced Server Map
Before analyzing each server in detail, here is a concise overview of the roles and relationships among the eight advanced servers:
Summary of the 8 Advanced Servers
| Server | Role | Tools | EventBus | ClientManager |
|---|---|---|---|---|
| incident-manager | Incident lifecycle management | 6 | Produces events | No |
| decision-log | Architecture Decision Records | 5 | Produces events | No |
| access-policy | RBAC/ABAC access control | 5 | Produces events | No |
| quality-gate | Metric-based quality gates | 4 | Produces events | No |
| workflow-orchestrator | Event-driven workflow orchestration | 5 | Produces + Consumes | Yes |
| insight-engine | Cross-server analytics and correlation | 4 | No | Yes |
| mcp-registry | Service discovery and health checks | 4 | Produces events | No |
| dashboard-api | Aggregated multi-server dashboard | 4 | No | Yes |
1. Incident Manager: Incident Lifecycle Management
The incident-manager server manages the entire lifecycle of an incident: from opening to resolution, through investigation, mitigation and automatic post-mortem generation. Every state transition is tracked in a detailed timeline, and critical events are published to the EventBus to enable automatic reactions from other servers.
The Incident Lifecycle
An incident goes through five distinct states, each with a precise meaning in the management process:
[open] ──> [investigating] ──> [mitigating] ──> [resolved] ──> [postmortem]
   │             │                  │               │
   │             │                  │               └── Generates post-mortem report
   │             │                  └── Impact reduced
   │             └── Root cause under analysis
   └── Incident just opened
Published events:
incident:opened → When an incident is opened
incident:escalated → When severity changes
incident:resolved → When the incident is resolved
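The five-state progression above can be enforced with a simple transition guard. Here is a minimal sketch — the type and map names are illustrative, not the project's actual code:

```typescript
// Hypothetical transition guard for the incident lifecycle described above
type IncidentStatus =
  | 'open' | 'investigating' | 'mitigating' | 'resolved' | 'postmortem';

const allowedTransitions: Record<IncidentStatus, IncidentStatus[]> = {
  open: ['investigating'],
  investigating: ['mitigating'],
  mitigating: ['resolved'],
  resolved: ['postmortem'],
  postmortem: [], // terminal state
};

function canTransition(from: IncidentStatus, to: IncidentStatus): boolean {
  return allowedTransitions[from].includes(to);
}
```

A store could call such a guard inside update-incident to reject invalid jumps, for example going straight from open to resolved without investigation.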
The 6 Incident Manager Tools
Available Tools
| Tool | Description | Main Parameters |
|---|---|---|
| open-incident | Opens a new incident | title, severity, description, affectedSystems |
| update-incident | Updates status and/or adds a note | id, status, note |
| add-timeline-entry | Adds an entry to the timeline | incidentId, description, source |
| resolve-incident | Resolves with summary and root cause | id, resolution, rootCause |
| generate-postmortem | Generates a formatted post-mortem report | id |
| list-incidents | Lists incidents with filters | status, severity, limit |
Example: Opening and Resolving an Incident
Here is the complete flow from creation to resolution of a critical incident, with event publishing to the EventBus:
// 1. Opening the incident
server.tool('open-incident', 'Open a new incident...', {
title: z.string().describe('Short title of the incident'),
severity: z.enum(['critical', 'high', 'medium', 'low'])
.describe('Incident severity level'),
description: z.string().describe('Detailed description'),
affectedSystems: z.array(z.string()).optional()
.describe('List of affected systems or services'),
}, async ({ title, severity, description, affectedSystems }) => {
const incident = store.openIncident({
title, severity, description, affectedSystems
});
// Publishing event to the EventBus
eventBus?.publish('incident:opened', {
incidentId: String(incident.id),
title: incident.title,
severity: incident.severity,
affectedSystems: incident.affectedSystems,
});
return {
content: [{ type: 'text', text: JSON.stringify(incident, null, 2) }]
};
});
When the incident is resolved, the resolve-incident tool automatically calculates
the incident duration and generates post-mortem data:
// 2. Resolution with root cause analysis
server.tool('resolve-incident', '...', {
id: z.number().int().positive(),
resolution: z.string().describe('Summary of how it was resolved'),
rootCause: z.string().optional().describe('Root cause analysis'),
}, async ({ id, resolution, rootCause }) => {
const resolved = store.resolveIncident(id, resolution, rootCause);
const postmortem = store.generatePostmortemData(id);
const durationMinutes = postmortem?.durationMinutes ?? 0;
eventBus?.publish('incident:resolved', {
incidentId: String(id),
title: resolved.title,
resolution,
durationMinutes,
});
return {
content: [{ type: 'text', text: JSON.stringify(resolved, null, 2) }]
};
});
Data Model: Incident and Timeline
The SQLite store manages two tables: incidents for main data and
incident_timeline for tracking every event during the lifecycle:
interface Incident {
id: number;
title: string;
severity: string; // 'critical' | 'high' | 'medium' | 'low'
description: string;
status: string; // 'open' | 'investigating' | 'mitigating' | 'resolved' | 'postmortem'
affectedSystems: string[];
resolution: string | null;
rootCause: string | null;
createdAt: string;
resolvedAt: string | null;
}
interface TimelineEntry {
id: number;
incidentId: number;
description: string;
source: string | null; // e.g. 'monitoring', 'engineer', 'automated'
timestamp: string;
}
Collaboration with Other Servers
The incident-manager also listens for events from other servers through collaboration handlers.
For example, a perf:bottleneck-found event from the performance profiler or a
cicd:build-failed event from the CI/CD monitor can automatically add entries
to the timeline of a related incident.
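To make the produce/consume mechanics concrete, here is a minimal in-memory pub/sub sketch in the spirit of the EventBus. The real bus API may differ; MiniEventBus is purely illustrative:

```typescript
// Minimal pub/sub sketch illustrating how a collaboration handler
// could react to events from other servers
type Handler = (payload: Record<string, unknown>) => void;

class MiniEventBus {
  private handlers = new Map<string, Handler[]>();

  subscribe(event: string, handler: Handler): void {
    const list = this.handlers.get(event) ?? [];
    list.push(handler);
    this.handlers.set(event, list);
  }

  publish(event: string, payload: Record<string, unknown>): void {
    // Deliver the payload to every handler registered for this event
    for (const handler of this.handlers.get(event) ?? []) handler(payload);
  }
}
```

With such a bus, the incident-manager could subscribe to cicd:build-failed and append a timeline entry whenever the payload references an open incident.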
2. Decision Log: Architecture Decision Records
The decision-log server implements the management of Architecture Decision Records (ADR), a well-established pattern for documenting the architectural decisions of a project. Each decision is recorded with its context, alternatives considered, expected consequences and current status.
Decision States
A decision goes through a lifecycle with four possible states:
[proposed] ──> [accepted] ──┬──> [deprecated]
                            │
                            └──> [superseded] ← (replaced by a new decision)
Published events:
decision:created → When a new decision is recorded
decision:superseded → When a decision is replaced
The 5 Decision Log Tools
Available Tools
| Tool | Description | Main Parameters |
|---|---|---|
| record-decision | Records a new ADR | title, context, decision, alternatives, status |
| list-decisions | Lists recorded decisions | Optional filters |
| get-decision | Retrieves a decision by ID | id |
| supersede-decision | Replaces a decision with a new one | id, supersededBy |
| link-decision | Links to a ticket, commit or impact | decisionId, linkType, targetId |
Example: Recording and Linking an ADR
An architectural decision is recorded with context, alternatives and consequences, then linked to related tickets and commits:
// Recording a decision
server.tool('record-decision', '...', {
title: z.string().describe('Short title, e.g. "Use PostgreSQL for analytics"'),
context: z.string().describe('Context and problem statement'),
decision: z.string().describe('The decision that was made'),
alternatives: z.array(z.string()).optional()
.describe('Alternative options considered'),
consequences: z.string().optional()
.describe('Expected consequences'),
status: z.enum(['proposed', 'accepted', 'deprecated', 'superseded'])
.optional().default('proposed'),
relatedTickets: z.array(z.string()).optional()
.describe('Related ticket IDs, e.g. ["PROJ-123"]'),
}, async (args) => {
const record = store.recordDecision(args);
eventBus?.publish('decision:created', {
decisionId: String(record.id),
title: record.title,
status: record.status,
});
return { content: [{ type: 'text', text: JSON.stringify(record, null, 2) }] };
});
// Linking to tickets and commits
server.tool('link-decision', '...', {
decisionId: z.number().int().positive(),
linkType: z.enum(['ticket', 'commit', 'impact', 'related']),
targetId: z.string().describe('Ticket ID, commit hash, etc.'),
description: z.string().optional(),
}, async ({ decisionId, linkType, targetId, description }) => {
const link = store.linkDecision({ decisionId, linkType, targetId, description });
return { content: [{ type: 'text', text: JSON.stringify(link, null, 2) }] };
});
Data Model: Decision and Links
interface Decision {
id: number;
title: string;
context: string;
decision: string;
alternatives: string[];
consequences: string;
status: string; // 'proposed' | 'accepted' | 'deprecated' | 'superseded'
relatedTickets: string[];
supersededBy: number | null;
createdAt: string;
updatedAt: string;
}
interface DecisionLink {
id: number;
decisionId: number;
linkType: string; // 'ticket' | 'commit' | 'impact' | 'related'
targetId: string;
description: string | null;
createdAt: string;
}
3. Access Policy: RBAC Access Control
The access-policy server implements an
RBAC/ABAC (Role-Based / Attribute-Based Access Control) system that operates across
all servers and tools in the MCP ecosystem. It allows defining access policies
with allow or deny effects, assigning roles to users and verifying
permissions before tool execution. Every check is recorded in an audit log.
The 5 Access Policy Tools
Available Tools
| Tool | Description | Main Parameters |
|---|---|---|
| create-policy | Creates a new access policy | name, effect, rules[] |
| check-access | Checks whether a user has access | userId, server, tool |
| list-policies | Lists active policies | None |
| assign-role | Assigns a role to a user | userId, roleName |
| audit-access | Retrieves the audit log | userId, server, limit |
Example: Defining an RBAC Policy
A policy defines which roles can access specific servers and tools.
The effect can be allow (permit) or deny (reject).
Rules use the * wildcard to indicate all servers or tools:
// Creating a policy: only admins can open incidents
server.tool('create-policy', '...', {
name: z.string().describe('Unique name for the policy'),
effect: z.enum(['allow', 'deny']),
rules: z.array(z.object({
server: z.string().describe('Server name or "*" for all'),
tool: z.string().optional().describe('Tool name, omit for all'),
roles: z.array(z.string()).describe('Roles this rule applies to'),
})),
}, async ({ name, effect, rules }) => {
const policy = store.createPolicy({ name, effect, rules });
eventBus?.publish('access:policy-updated', {
policyId: String(policy.id),
name: policy.name,
effect: policy.effect,
});
return { content: [{ type: 'text', text: JSON.stringify(policy, null, 2) }] };
});
Policy Example
Here is how a policy granting access to senior developers for specific servers and tools is structured:
{
"name": "senior-dev-policy",
"effect": "allow",
"rules": [
{
"server": "scrum-board",
"roles": ["senior-dev", "tech-lead"]
},
{
"server": "incident-manager",
"tool": "open-incident",
"roles": ["senior-dev", "tech-lead", "ops"]
},
{
"server": "quality-gate",
"tool": "evaluate-gate",
"roles": ["senior-dev"]
}
]
}
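Matching a request against such rules reduces to three checks: server, tool and role. Here is a minimal sketch of that logic, assuming the rule shape shown above — ruleMatches is an illustrative name, not the server's actual implementation:

```typescript
// Hypothetical wildcard rule matcher for the policy rules shown above
interface Rule {
  server: string;       // server name or '*' for all
  tool?: string;        // tool name, or omitted/'*' for all
  roles: string[];      // roles this rule applies to
}

function ruleMatches(
  rule: Rule,
  server: string,
  tool: string,
  userRoles: string[],
): boolean {
  const serverOk = rule.server === '*' || rule.server === server;
  const toolOk = rule.tool === undefined || rule.tool === '*' || rule.tool === tool;
  const roleOk = rule.roles.some((role) => userRoles.includes(role));
  return serverOk && toolOk && roleOk;
}
```

Under an allow effect, at least one matching rule grants access; under deny, a match rejects the request.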
Access Verification and Audit Log
The check-access tool verifies permissions and automatically records every
attempt in the audit log. If access is denied, an access:denied event
is published to the EventBus:
// Access check with automatic audit
server.tool('check-access', '...', {
userId: z.string(),
server: z.string(),
tool: z.string().optional(),
}, async ({ userId, server: targetServer, tool }) => {
const result = store.checkAccess(userId, targetServer, tool);
// Automatic audit logging
store.logAccess(userId, targetServer, tool ?? '*',
result.allowed ? 'allowed' : 'denied', result.reason);
// Event for denied access
if (!result.allowed) {
eventBus?.publish('access:denied', {
userId, server: targetServer,
tool: tool ?? '*', reason: result.reason,
});
}
return { content: [{ type: 'text', text: JSON.stringify(result, null, 2) }] };
});
Data Model: Policies, Roles and Audit
The SQLite store manages four tables: policies for access policies,
roles for role definitions, role_assignments for user-role
associations and audit_log for tracking every access check.
This structure enables complete reconstruction of access history for compliance and debugging.
4. Quality Gate: Metric-Based Quality Checks
The quality-gate server allows defining and evaluating quality gates based on quantitative metrics. Each gate contains a list of checks with a metric name, comparison operator and threshold. When a gate is evaluated, the result (passed/failed) is published to the EventBus to trigger automated workflows.
The 4 Quality Gate Tools
Available Tools
| Tool | Description | Main Parameters |
|---|---|---|
| define-gate | Defines a new quality gate | name, projectName, checks[] |
| evaluate-gate | Evaluates a gate with actual metrics | gateId, metrics |
| list-gates | Lists defined gates | None |
| get-gate-history | Evaluation history | gateId |
Defining a Quality Gate with Rules
A quality gate defines a series of checks, each with a metric, a comparison
operator and a threshold. Supported operators are: >=, <=,
>, <, == and !=.
// Defining a quality gate
server.tool('define-gate', '...', {
name: z.string().describe('Name of the quality gate'),
projectName: z.string().optional(),
checks: z.array(z.object({
metric: z.string().describe('Metric name, e.g. "coverage"'),
operator: z.enum(['>=', '<=', '>', '<', '==', '!=']),
threshold: z.number().describe('Threshold value'),
})),
}, async ({ name, projectName, checks }) => {
const gate = store.defineGate({ name, projectName, checks });
return { content: [{ type: 'text', text: JSON.stringify(gate, null, 2) }] };
});
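The per-check comparison behind evaluate-gate can be sketched as a small evaluator over the six supported operators — evaluateCheck is an illustrative name, not the store's actual method:

```typescript
// Evaluates a single quality-gate check against an actual metric value
type Operator = '>=' | '<=' | '>' | '<' | '==' | '!=';

function evaluateCheck(value: number, operator: Operator, threshold: number): boolean {
  switch (operator) {
    case '>=': return value >= threshold;
    case '<=': return value <= threshold;
    case '>':  return value > threshold;
    case '<':  return value < threshold;
    case '==': return value === threshold;
    case '!=': return value !== threshold;
  }
}
```

A gate passes only when every check evaluates to true; any false result contributes a failure entry to the evaluation.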
Example: Configuring a Quality Gate for Deployment
{
"name": "deploy-readiness",
"projectName": "Tech-MCP",
"checks": [
{ "metric": "coverage", "operator": ">=", "threshold": 80 },
{ "metric": "complexity", "operator": "<=", "threshold": 15 },
{ "metric": "bugs", "operator": "==", "threshold": 0 },
{ "metric": "duplication", "operator": "<=", "threshold": 3 },
{ "metric": "buildTime", "operator": "<", "threshold": 300 }
]
}
Evaluation and Events
When the gate is evaluated against the actual project metrics, the result determines
which event is published. If all metrics satisfy their checks, quality:gate-passed
is emitted; otherwise quality:gate-failed with failure details:
// Gate evaluation
server.tool('evaluate-gate', '...', {
gateId: z.number().int().positive(),
metrics: z.record(z.string(), z.number())
.describe('Map of metric names to current values'),
}, async ({ gateId, metrics }) => {
const evaluation = store.evaluateGate(gateId, metrics);
const gate = store.getGate(gateId);
if (evaluation.passed) {
eventBus?.publish('quality:gate-passed', {
gateName: gate?.name ?? String(gateId),
project: gate?.projectName ?? '',
results: evaluation.results,
});
} else {
eventBus?.publish('quality:gate-failed', {
gateName: gate?.name ?? String(gateId),
project: gate?.projectName ?? '',
failures: evaluation.failures,
});
}
return { content: [{ type: 'text', text: JSON.stringify(evaluation, null, 2) }] };
});
Collaboration with CI/CD
The quality-gate listens for events from the CI/CD monitor. When a cicd:pipeline-completed
event is published, the server can automatically evaluate the quality gates associated with the project.
Similarly, a test:coverage-report event can trigger the evaluation
of coverage-related gates.
5. Workflow Orchestrator: Event-Driven Automation
The workflow-orchestrator is the most sophisticated server in the entire ecosystem.
It implements an event-driven orchestration engine that allows defining
workflows composed of sequential steps, each of which invokes a tool on a different MCP server
through the ClientManager. Workflows are triggered automatically
by events published to the EventBus, or manually through the trigger-workflow tool.
The 5 Workflow Orchestrator Tools
Available Tools
| Tool | Description | Main Parameters |
|---|---|---|
| create-workflow | Creates an event-driven workflow | name, triggerEvent, steps[] |
| list-workflows | Lists defined workflows | None |
| trigger-workflow | Manually triggers a workflow | workflowId, payload |
| get-workflow-run | Details of a workflow execution | runId |
| toggle-workflow | Enables/disables a workflow | workflowId, active |
Defining a Workflow with Cross-Server Steps
A workflow defines a trigger event, optional conditions and an ordered sequence
of steps. Each step specifies the target server, the tool to invoke and the arguments.
Arguments support {{payload.field}} and
{{steps[N].result.field}} templates for passing data between steps:
// Creating a workflow
server.tool('create-workflow', '...', {
name: z.string(),
description: z.string().optional(),
triggerEvent: z.string()
.describe('Event that triggers this workflow, e.g. "incident:opened"'),
triggerConditions: z.record(z.unknown()).optional()
.describe('Conditions to match against the event payload'),
steps: z.array(z.object({
server: z.string().describe('Target MCP server name'),
tool: z.string().describe('Tool to invoke'),
arguments: z.record(z.unknown()).optional()
.describe('Arguments with {{payload.field}} template support'),
})),
}, async (args) => {
const workflow = store.createWorkflow(args);
return { content: [{ type: 'text', text: JSON.stringify(workflow, null, 2) }] };
});
Example: Automated Workflow for Critical Incidents
Here is a complete workflow that triggers when an incident with severity
critical is opened. The workflow executes three steps on three different servers:
{
"name": "critical-incident-response",
"description": "Automated response for critical incidents",
"triggerEvent": "incident:opened",
"triggerConditions": {
"severity": "critical"
},
"steps": [
{
"server": "decision-log",
"tool": "record-decision",
"arguments": {
"title": "Emergency: {{payload.title}}",
"context": "Critical incident opened affecting {{payload.affectedSystems}}",
"decision": "Activating emergency response protocol",
"status": "accepted"
}
},
{
"server": "quality-gate",
"tool": "evaluate-gate",
"arguments": {
"gateId": 1,
"metrics": {
"activeIncidents": 1,
"severity": 4
}
}
},
{
"server": "incident-manager",
"tool": "add-timeline-entry",
"arguments": {
"incidentId": "{{payload.incidentId}}",
"description": "Automated response workflow triggered",
"source": "workflow-orchestrator"
}
}
]
}
The WorkflowEngine: Execution and Template Resolution
At the heart of the workflow-orchestrator is the WorkflowEngine, which handles template resolution, trigger condition evaluation and sequential step execution through the ClientManager:
class WorkflowEngine {
constructor(
private store: WorkflowStore,
private clientManager?: McpClientManager,
private eventBus?: EventBus,
) {}
// Resolves templates like {{payload.title}} and {{steps[0].result.id}}
resolveTemplates(
template: Record<string, unknown>,
context: {
payload: Record<string, unknown>;
steps: Array<{ result: Record<string, unknown> | null }>;
}
): Record<string, unknown>;
// Evaluates trigger conditions: every key must match the payload
evaluateTrigger(
conditions: Record<string, unknown>,
payload: Record<string, unknown>
): boolean;
// Executes the workflow step by step via ClientManager
async executeWorkflow(
workflow: Workflow,
triggerPayload: Record<string, unknown>
): Promise<WorkflowRunRecord>;
// Handles events: finds active workflows and executes them
async handleEvent(event: string, payload: unknown): Promise<void>;
}
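A minimal sketch of how resolveTemplates and evaluateTrigger could behave for the {{payload.field}} case — simplified on purpose: the real engine also resolves {{steps[N].result.field}} references and nested objects:

```typescript
// Simplified template resolution: only {{payload.field}} placeholders
function resolveString(
  template: string,
  context: { payload: Record<string, unknown> },
): string {
  return template.replace(/\{\{payload\.(\w+)\}\}/g, (_, key) =>
    String(context.payload[key] ?? ''),
  );
}

// Trigger evaluation: every condition key must equal the payload value
function evaluateTrigger(
  conditions: Record<string, unknown>,
  payload: Record<string, unknown>,
): boolean {
  return Object.entries(conditions).every(([key, value]) => payload[key] === value);
}
```

With these two pieces, the critical-incident workflow above triggers only when payload.severity equals "critical", and its first step title resolves to "Emergency: " followed by the incident title.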
EventBus Wildcard Pattern
The workflow-orchestrator collaboration handler uses the * pattern to
subscribe to all events on the EventBus. For each received event,
the engine searches for active workflows whose triggerEvent matches the event
and whose triggerConditions are satisfied by the payload. This enables
creating reactive automations without modifying the server code.
Data Model: Workflow, Run and Steps
interface Workflow {
id: number;
name: string;
description: string | null;
triggerEvent: string;
triggerConditions: Record<string, unknown>;
steps: {
server: string;
tool: string;
arguments: Record<string, unknown>;
}[];
active: boolean;
createdAt: string;
updatedAt: string;
}
interface WorkflowRunRecord {
id: number;
workflowId: number;
status: string; // 'running' | 'completed' | 'failed'
triggerPayload: Record<string, unknown>;
error: string | null;
startedAt: string;
completedAt: string | null;
durationMs: number | null;
}
6. Insight Engine: Cross-Server Analytics
The insight-engine server represents the intelligence layer of the ecosystem. It uses the ClientManager to gather data from all available servers, correlate metrics from different sources and expose an aggregated view of the project state. The internal CorrelationEngine maps metrics to source servers and handles graceful degradation when a server is unavailable.
The 4 Insight Engine Tools
Available Tools
| Tool | Description | Main Parameters |
|---|---|---|
| query-insight | Natural language query | question, forceRefresh |
| correlate-metrics | Correlates metrics from different servers | metrics[], period |
| explain-trend | Explains a trend for a metric | metric, direction |
| health-dashboard | Project health dashboard | forceRefresh |
The CorrelationEngine: Metric-to-Server Mapping
At the core of the insight-engine is the CorrelationEngine, which maintains an internal
map of available metrics and the servers that provide them. When a correlation is requested,
the engine calls the appropriate servers through the ClientManager
and aggregates the results:
class CorrelationEngine {
// Metric -> server/tool mapping
private metricMap = {
'velocity': { server: 'agile-metrics', tool: 'calculate-velocity' },
'time-logged': { server: 'time-tracking', tool: 'get-timesheet' },
'budget-spent': { server: 'project-economics', tool: 'get-budget-status' },
};
// Safe call: returns null if the server is unavailable
async safeCall(server: string, tool: string, args = {})
: Promise<Record<string, unknown> | null>;
// Multi-metric correlation
async correlateMetrics(metrics: string[])
: Promise<{ metrics: Record<string, unknown>; dataSources: Record<string, string>; analyzedAt: string }>;
// Project health dashboard with health score
async getProjectHealth()
: Promise<{ healthScore: number; velocity: unknown; timeTracking: unknown; budget: unknown; dataSources: Record<string, string> }>;
// Keyword-based query
async queryInsight(question: string)
: Promise<{ question: string; relevantMetrics: string[]; data: Record<string, unknown>; generatedAt: string }>;
}
Intelligent Caching
The insight-engine uses an internal caching system through the InsightStore.
Responses are stored with a composite key of analysis type and query.
The forceRefresh parameter allows bypassing the cache when necessary.
The health-dashboard tool uses a TTL of 300 seconds to balance
data freshness and performance.
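The caching behavior described above can be sketched as a TTL map keyed by analysis type and query. The real InsightStore persists entries in SQLite; TtlCache here is an in-memory illustration:

```typescript
// In-memory TTL cache keyed by analysis type + query
class TtlCache {
  private entries = new Map<string, { value: unknown; expiresAt: number }>();

  set(type: string, query: string, value: unknown, ttlSeconds: number): void {
    this.entries.set(`${type}:${query}`, {
      value,
      expiresAt: Date.now() + ttlSeconds * 1000,
    });
  }

  // Returns null on a miss or when the entry has expired
  get(type: string, query: string): unknown | null {
    const entry = this.entries.get(`${type}:${query}`);
    if (!entry || Date.now() > entry.expiresAt) return null;
    return entry.value;
  }
}
```

Passing forceRefresh simply skips the get call and overwrites the entry with fresh data.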
7. MCP Registry: Service Discovery and Health Monitoring
The mcp-registry server functions as a service registry for the MCP ecosystem. Each server can register itself with its URL, transport protocol and list of capabilities. The registry supports periodic health checks and publishes events when a server becomes unavailable.
The 4 MCP Registry Tools
Available Tools
| Tool | Description | Main Parameters |
|---|---|---|
| register-server | Registers a new server | name, url, transport, capabilities |
| discover-servers | Discovers registered servers | status, transport |
| health-check | Performs a health check | serverId |
| get-capabilities | Gets a server's capabilities | serverId |
Server Registration and Discovery
// Registering a server in the ecosystem
server.tool('register-server', '...', {
name: z.string().describe('Unique name of the MCP server'),
url: z.string().describe('URL or endpoint'),
transport: z.enum(['stdio', 'http', 'in-memory'])
.optional().default('stdio'),
capabilities: z.array(z.string()).optional()
.describe('List of capabilities'),
}, async ({ name, url, transport, capabilities }) => {
const record = store.registerServer({ name, url, transport, capabilities });
eventBus?.publish('registry:server-registered', {
serverName: record.name,
url: record.url,
capabilities: record.capabilities,
});
return { content: [{ type: 'text', text: JSON.stringify(record, null, 2) }] };
});
// Discovery with status and transport filters
server.tool('discover-servers', '...', {
status: z.enum(['healthy', 'unhealthy', 'unknown']).optional(),
transport: z.enum(['stdio', 'http', 'in-memory']).optional(),
}, async ({ status, transport }) => {
const servers = store.listServers({ status, transport });
return { content: [{ type: 'text', text: JSON.stringify(servers, null, 2) }] };
});
Health Checks and Monitoring
The health-check tool performs a status verification on a registered server.
If the server is found to be unavailable, a registry:server-unhealthy event
is published to the EventBus:
// Health check with event publishing
server.tool('health-check', '...', {
serverId: z.number().int().positive(),
}, async ({ serverId }) => {
const srv = store.getServer(serverId);
// Response-time measurement elided in the original; probeServer is an illustrative helper
const responseTimeMs = await probeServer(srv.url);
const status = responseTimeMs > 450 ? 'unhealthy' : 'healthy';
const healthCheck = store.recordHealthCheck({
serverId, status, responseTimeMs,
error: status === 'unhealthy' ? 'Timeout' : undefined,
});
if (status === 'unhealthy') {
eventBus?.publish('registry:server-unhealthy', {
serverName: srv.name,
lastHealthy: srv.lastHealthCheck ?? srv.createdAt,
error: 'Health check failed',
});
}
return { content: [{ type: 'text', text: JSON.stringify(healthCheck, null, 2) }] };
});
8. Dashboard API: Multi-Server Aggregation
The dashboard-api server acts as a unified access point for obtaining an aggregated view of the entire ecosystem. It uses the ClientManager to simultaneously query multiple servers (scrum-board, agile-metrics, time-tracking, project-economics, decision-log, incident-manager, quality-gate, retrospective-manager) and present data in a consistent format with integrated caching.
The 4 Dashboard API Tools
Available Tools
| Tool | Description | Queried Servers |
|---|---|---|
| get-overview | Aggregated project overview | scrum-board, agile-metrics, time-tracking, project-economics |
| get-server-status | Status of registered servers | mcp-registry |
| get-recent-activity | Aggregated recent activity | decision-log, incident-manager, retrospective-manager |
| get-project-summary | Complete project summary | All available servers (7+) |
Example: Aggregated Overview
The get-overview tool gathers data from four servers simultaneously
using Promise.all to maximize performance. For each server,
the availability status is indicated, ensuring
graceful degradation:
server.tool('get-overview', '...', {
forceRefresh: z.boolean().optional(),
}, async ({ forceRefresh }) => {
// Check cache (TTL 120 seconds)
if (!forceRefresh) {
const cached = store.getCached('overview', 'main');
if (cached) return { content: [{ type: 'text',
text: JSON.stringify({ ...cached, fromCache: true }, null, 2) }] };
}
// Parallel aggregation from 4 servers
const [scrumBoard, velocity, timesheet, budget] = await Promise.all([
safeCall(clientManager, 'scrum-board', 'list-sprints', {}),
safeCall(clientManager, 'agile-metrics', 'calculate-velocity', {...}),
safeCall(clientManager, 'time-tracking', 'get-timesheet', {}),
safeCall(clientManager, 'project-economics', 'get-budget-status', {...}),
]);
const overview = {
velocity: velocity ?? { status: 'unavailable' },
scrumBoard: scrumBoard ?? { status: 'unavailable' },
timeTracking: timesheet ?? { status: 'unavailable' },
budget: budget ?? { status: 'unavailable' },
dataSources: {
scrumBoard: scrumBoard ? 'available' : 'unavailable',
velocity: velocity ? 'available' : 'unavailable',
timeTracking: timesheet ? 'available' : 'unavailable',
budget: budget ? 'available' : 'unavailable',
},
generatedAt: new Date().toISOString(),
};
store.setCache('overview', 'main', overview, 120);
return { content: [{ type: 'text', text: JSON.stringify(overview, null, 2) }] };
});
Project Summary: Complete View
The get-project-summary tool represents the most comprehensive aggregation,
simultaneously querying all available servers to generate a
360-degree view of the project:
// Parallel aggregation from 7 servers
const [velocity, timesheet, budget, decisions,
incidents, qualityGates, retros] = await Promise.all([
safeCall(clientManager, 'agile-metrics', 'calculate-velocity', {...}),
safeCall(clientManager, 'time-tracking', 'get-timesheet', {}),
safeCall(clientManager, 'project-economics', 'get-budget-status', {...}),
safeCall(clientManager, 'decision-log', 'list-decisions', {}),
safeCall(clientManager, 'incident-manager', 'list-incidents', {}),
safeCall(clientManager, 'quality-gate', 'list-gates', {}),
safeCall(clientManager, 'retrospective-manager', 'list-retros', {}),
]);
Safe Call Pattern and Graceful Degradation
Both the dashboard-api and the insight-engine use the safeCall pattern
for cross-server communication. If a server is unavailable, the call
returns null instead of throwing an exception. The dataSources field
in the response indicates the availability status of each server, allowing the client
to know which data is real and which is missing.
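The essence of the pattern is a try/catch that converts failures into null. A minimal sketch — the real safeCall receives the ClientManager plus server and tool names, while here the call is abstracted into a thunk:

```typescript
// Sketch of the safeCall pattern: an unavailable server degrades to null
async function safeCall<T>(call: () => Promise<T>): Promise<T | null> {
  try {
    return await call();
  } catch {
    return null; // graceful degradation instead of propagating the error
  }
}
```

Aggregators can then build the dataSources field with a simple null check per result, exactly as get-overview does above.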
Ecosystem Event Map
The eight advanced servers produce and consume events through the EventBus. Here is the complete map of events produced by these servers:
Events Published by Advanced Servers
| Event | Source Server | Main Payload |
|---|---|---|
| incident:opened | incident-manager | incidentId, title, severity, affectedSystems |
| incident:escalated | incident-manager | incidentId, previousSeverity, newSeverity |
| incident:resolved | incident-manager | incidentId, title, resolution, durationMinutes |
| decision:created | decision-log | decisionId, title, status |
| decision:superseded | decision-log | decisionId, supersededBy, title |
| access:policy-updated | access-policy | policyId, name, effect |
| access:denied | access-policy | userId, server, tool, reason |
| quality:gate-passed | quality-gate | gateName, project, results |
| quality:gate-failed | quality-gate | gateName, project, failures |
| workflow:triggered | workflow-orchestrator | workflowId, name, triggeredBy |
| workflow:completed | workflow-orchestrator | workflowId, runId, name, durationMs |
| workflow:failed | workflow-orchestrator | workflowId, runId, name, error |
| registry:server-registered | mcp-registry | serverName, url, capabilities |
| registry:server-unhealthy | mcp-registry | serverName, lastHealthy, error |
Architecture: Autonomous vs Integration Servers
The eight advanced servers fall into two distinct architectural categories based on how they interact with the rest of the ecosystem:
Architectural Comparison
| Category | Servers | Characteristic |
|---|---|---|
| Autonomous | incident-manager, decision-log, access-policy, quality-gate | Manage their own domain, produce events, do not require ClientManager |
| Integration | workflow-orchestrator, insight-engine, dashboard-api, mcp-registry | Aggregate data from other servers, require ClientManager for cross-server communication |
Autonomous servers can function even in isolation: they manage their own SQLite database and publish events to the EventBus without depending on other servers. Integration servers are designed to operate within the context of the complete ecosystem, using the ClientManager to query other servers and aggregate data.
Conclusions
The eight advanced servers complete the architecture of the Tech-MCP project, bringing the ecosystem to an enterprise level of sophistication. From incident management to quality assurance, from access control to workflow orchestration, each server contributes a specific domain to the overall platform.
The pattern that emerges is that of an event-driven distributed system: autonomous servers manage their own domains and publish events, the workflow-orchestrator reacts to events by executing cross-server automations, and the aggregation servers (insight-engine, dashboard-api) offer unified views of the entire ecosystem.
In the next article we will explore inter-server collaboration: how the 31 MCP servers in the ecosystem communicate with each other through the EventBus, how the ClientManager enables cross-server calls and how collaboration patterns allow building complex automations from simple components.
The complete code for all servers is available in the Tech-MCP repository on GitHub.