Introduction: MCP for DevOps
Modern DevOps requires container orchestration, continuous application log analysis, and CI/CD pipeline monitoring. These activities, when performed manually, consume valuable time and are prone to errors. With the Model Context Protocol, we can automate these operations directly from the IDE, without context-switching between the terminal, GitHub Actions dashboards, and monitoring web interfaces.
In this article, we analyze the three MCP servers in the DevOps category of the
Tech-MCP project:
docker-compose for Docker stack management, log-analyzer for
automated log file analysis, and cicd-monitor for GitHub Actions pipeline monitoring.
What You Will Learn in This Article
- How the docker-compose server parses YAML and analyzes Dockerfiles for best practices
- How log-analyzer automatically detects the log format and groups error patterns
- How cicd-monitor integrates with GitHub Actions via the gh CLI
- The Zod schemas that validate each tool's parameters
- The cicd:pipeline-completed and cicd:build-failed events published to the Event Bus
- The complementary interactions between the three DevOps servers
1. docker-compose Server: Docker Stack Management
The docker-compose server provides tools for parsing, analyzing, monitoring,
and generating Docker Compose configurations. The problem it solves is the growing complexity
of docker-compose.yml files and Dockerfiles, where configuration
errors, bad practices, and lack of standardization can cause production issues.
Server Architecture
The server operates on two main channels: the filesystem for reading YAML and Dockerfile files,
and child_process for executing Docker commands. It does not publish or subscribe
to events on the Event Bus: all operations are stateless and on-demand.
+------------------------------------------------------------+
| docker-compose server |
| |
| +-------------------------------------------------------+ |
| | Tool Layer | |
| | | |
| | parse-compose analyze-dockerfile | |
| | list-services generate-compose | |
| +-------------------------------------------------------+ |
| | | |
| v v |
| +------------+ +------------------+ |
| | fs | | child_process | |
| | readFile | | execSync | |
| | (YAML/ | | (docker compose | |
| | Docker) | | ps --format | |
| +------------+ | json) | |
| +------------------+ |
+------------------------------------------------------------+
Tool Table
docker-compose Server Tools
| Tool | Description | Parameters |
|---|---|---|
| parse-compose | Parses and validates a docker-compose.yml file, extracting services, networks, volumes, and issues | filePath (string) |
| analyze-dockerfile | Analyzes a Dockerfile for best practices, security, and optimization | filePath (string) |
| list-services | Lists running Docker services, with optional Compose project filter | composePath? (string) |
| generate-compose | Generates a docker-compose.yml file from an array of service definitions | services (array of objects) |
Zod Schema: parse-compose
The parse-compose tool accepts a single filePath parameter pointing
to the docker-compose.yml file to analyze. The internal parser is line-based and does not
depend on external YAML libraries.
// Zod schema for parse-compose
const ParseComposeSchema = z.object({
filePath: z.string().describe("Path to the docker-compose.yml file")
});
Line-Based YAML Parser
The parser analyzes the file line by line, identifying top-level sections
(services, networks, volumes) and for each service
extracts properties: image, build, ports,
environment, and volumes. The parser also performs automatic validations:
- Image without a specific tag (e.g., nginx without :1.25-alpine)
- Use of privileged: true (security risk)
- Use of network_mode: host (exposes all ports)
- Service without image or build defined
- Presence of the deprecated version field
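The line-based approach can be sketched in a few lines of TypeScript. This is an illustrative reconstruction, not the actual Tech-MCP code: the names parseCompose and ComposeSummary are assumptions, and only service detection plus the untagged-image check are shown.

```typescript
// Hypothetical sketch of a line-based compose parser: top-level sections are
// detected by indentation, and images without an explicit tag are flagged.
interface ComposeSummary {
  services: string[];
  issues: string[];
}

function parseCompose(yaml: string): ComposeSummary {
  const services: string[] = [];
  const issues: string[] = [];
  let inServices = false;
  let currentService = "";

  for (const raw of yaml.split("\n")) {
    if (/^\S/.test(raw)) {
      // A non-indented line starts a new top-level section.
      inServices = raw.startsWith("services:");
      continue;
    }
    const line = raw.trimEnd();
    // A two-space-indented "name:" line under services: declares a service.
    if (inServices && /^  \w[\w-]*:\s*$/.test(line)) {
      currentService = line.trim().replace(/:$/, "");
      services.push(currentService);
    }
    const image = line.match(/^\s+image:\s*(\S+)/);
    if (image && !image[1].includes(":")) {
      issues.push(
        `Service "${currentService}" uses image "${image[1]}" without a specific tag.`
      );
    }
  }
  return { services, issues };
}
```

The same pass can accumulate ports, environment, and volumes per service; the sketch keeps only enough state to reproduce the untagged-image validation from the response example below.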
Example request and response:
// Request
{
"tool": "parse-compose",
"arguments": {
"filePath": "/home/user/project/docker-compose.yml"
}
}
// Response
{
"filePath": "/home/user/project/docker-compose.yml",
"services": ["web", "db", "redis"],
"serviceCount": 3,
"networks": ["backend"],
"volumes": ["db_data"],
"validationIssues": [
"Service \"web\" uses image \"nginx\" without a specific tag."
],
"hasIssues": true
}
Zod Schema: analyze-dockerfile
The analyze-dockerfile tool examines a Dockerfile by applying a set of best
practice rules with three severity levels: error, warning,
and info.
// Zod schema for analyze-dockerfile
const AnalyzeDockerfileSchema = z.object({
filePath: z.string().describe("Path to the Dockerfile to analyze")
});
Dockerfile Analysis Rules
| Rule | Severity | Description |
|---|---|---|
| no-latest-tag | warning | Base image with :latest tag |
| no-tag | warning | Base image without an explicit tag |
| large-base-image | info | Use of full distributions (ubuntu, debian, centos) |
| consecutive-run | warning | Consecutive RUN instructions that increase layers |
| apt-no-recommends | info | apt-get install without --no-install-recommends |
| apt-update-alone | warning | apt-get update without install in the same RUN |
| pipe-to-shell | warning | Download with curl/wget piped to shell |
| use-copy-over-add | info | ADD used where COPY would suffice |
| missing-healthcheck | info | No HEALTHCHECK instruction present |
| running-as-root | warning | No USER instruction (container runs as root) |
Example response from a Dockerfile analysis:
{
"filePath": "/home/user/project/Dockerfile",
"baseImage": "node:latest",
"stages": 1,
"totalInstructions": 12,
"issues": [
{
"line": 1,
"severity": "warning",
"rule": "no-latest-tag",
"message": "Base image \"node:latest\" uses the :latest tag.",
"suggestion": "Pin to a specific version tag (e.g., node:20-alpine)."
},
{
"line": 0,
"severity": "warning",
"rule": "running-as-root",
"message": "No USER instruction found.",
"suggestion": "Add a USER instruction to run as non-root."
}
],
"summary": { "errors": 0, "warnings": 2, "info": 1 }
}
Zod Schema: generate-compose
The generate-compose tool receives an array of service definitions and produces
a valid docker-compose.yml file. It automatically adds
restart: unless-stopped to every service and declares named volumes in the
volumes: section.
// Zod schema for generate-compose
const GenerateComposeSchema = z.object({
services: z.array(z.object({
name: z.string().describe("Service name"),
image: z.string().describe("Docker image to use"),
ports: z.array(z.string()).optional()
.describe("Array of port mappings (e.g., '3000:3000')"),
environment: z.record(z.string()).optional()
.describe("Environment variables as key-value pairs"),
volumes: z.array(z.string()).optional()
.describe("Array of mounts (bind mount or named volume)")
})).describe("Array of service definitions")
});
Generation example:
// Request
{
"tool": "generate-compose",
"arguments": {
"services": [
{
"name": "api",
"image": "node:20-alpine",
"ports": ["3000:3000"],
"environment": { "NODE_ENV": "production" },
"volumes": ["./src:/app/src"]
},
{
"name": "postgres",
"image": "postgres:16-alpine",
"ports": ["5432:5432"],
"environment": { "POSTGRES_PASSWORD": "secret" },
"volumes": ["pg_data:/var/lib/postgresql/data"]
}
]
}
}
# Generated output
services:
api:
image: node:20-alpine
ports:
- "3000:3000"
environment:
NODE_ENV: "production"
volumes:
- ./src:/app/src
restart: unless-stopped
postgres:
image: postgres:16-alpine
ports:
- "5432:5432"
environment:
POSTGRES_PASSWORD: "secret"
volumes:
- pg_data:/var/lib/postgresql/data
restart: unless-stopped
volumes:
pg_data:
Zod Schema: list-services
The list-services tool lists running Docker services. Internally it executes
docker compose ps --format json and, on failure, falls back to the legacy
command docker-compose ps --format json.
// Zod schema for list-services
const ListServicesSchema = z.object({
composePath: z.string().optional()
.describe("Optional path to docker-compose.yml for filtering")
});
// Response format
interface ServiceInfo {
name: string;
status: string;
image: string;
ports: string;
}
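The modern-to-legacy fallback can be sketched as follows. The injectable run parameter is an assumption added here so the control flow can be shown (and exercised) without Docker installed; the real server presumably calls execSync directly.

```typescript
import { execSync } from "node:child_process";

// Sketch of the fallback: try the `docker compose` plugin syntax first,
// then the standalone legacy `docker-compose` binary.
type Runner = (cmd: string) => string;

function listServicesJson(
  run: Runner = (cmd) => execSync(cmd, { encoding: "utf8" })
): string {
  try {
    return run("docker compose ps --format json");
  } catch {
    // Older installations only ship the standalone docker-compose binary.
    return run("docker-compose ps --format json");
  }
}
```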
2. log-analyzer Server: Automated Log Analysis
The log-analyzer server addresses a universal problem: log files grow rapidly, contain thousands of lines, and identifying error patterns, anomalies, or trends requires time and specific expertise. This server automates the analysis, supporting both plain text logs and structured JSON logs (NDJSON).
Server Architecture
Like docker-compose, log-analyzer is a stateless server
that does not publish or subscribe to events. It operates directly on the filesystem via
fs/promises.
+------------------------------------------------------------+
| log-analyzer server |
| |
| +-------------------------------------------------------+ |
| | Tool Layer | |
| | | |
| | analyze-log-file find-error-patterns | |
| | tail-log generate-summary | |
| +-------------------------------------------------------+ |
| | |
| v |
| +-------------------------------------------------------+ |
| | fs/promises (readFile) | |
| | | |
| | Supported formats: | |
| | - Plain text (syslog, Apache, custom) | |
| | - JSON lines (NDJSON, structured logging) | |
| | - Auto-detection based on sampling | |
| +-------------------------------------------------------+ |
+------------------------------------------------------------+
Tool Table
log-analyzer Server Tools
| Tool | Description | Parameters |
|---|---|---|
| analyze-log-file | Analyzes a log file: level counts, error extraction, time range | filePath (string); format? (auto, json, plain) |
| find-error-patterns | Finds recurring error patterns by grouping similar messages | filePath (string); minCount? (number, default: 2) |
| tail-log | Returns the last N lines of a log file with optional filter | filePath (string); lines? (number, default: 50); filter? (string) |
| generate-summary | Generates a readable summary with counts, health indicators, and top errors | filePath (string) |
Format Auto-Detection
The server samples the first 10 lines of the file to automatically determine whether the
log is in JSON or plain text format. If every line starts with { and parses
correctly as JSON, the format is set to json. Otherwise, the server uses regex
to extract log levels and timestamps.
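The sampling heuristic is simple enough to sketch directly (detectFormat is a hypothetical name; the real server may differ in details such as how empty lines are handled):

```typescript
// Sketch of format auto-detection: sample up to the first 10 non-empty
// lines; the format is "json" only if every sampled line starts with "{"
// and parses successfully as JSON.
function detectFormat(content: string, sampleSize = 10): "json" | "plain" {
  const sample = content
    .split("\n")
    .filter((l) => l.trim().length > 0)
    .slice(0, sampleSize);
  if (sample.length === 0) return "plain";
  const allJson = sample.every((line) => {
    if (!line.trimStart().startsWith("{")) return false;
    try {
      JSON.parse(line);
      return true;
    } catch {
      return false;
    }
  });
  return allJson ? "json" : "plain";
}
```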
Supported Timestamp Patterns
| Format | Example |
|---|---|
| ISO 8601 | 2024-01-15T10:30:00.000Z |
| Common Log Format | 15/Jan/2024:10:30:00 +0000 |
| Syslog | Jan 15 10:30:00 |
| Generic date-time | 2024-01-15 10:30:00 |
Recognized log levels are: ERROR, WARN (WARNING),
INFO, DEBUG, TRACE, FATAL,
CRITICAL, and NOTICE.
Zod Schema: analyze-log-file
// Zod schema for analyze-log-file
const AnalyzeLogFileSchema = z.object({
filePath: z.string().describe("Path to the log file"),
format: z.enum(["auto", "json", "plain"]).optional()
.default("auto")
.describe("Log format: auto-detect, JSON lines, or plain text")
});
Example response:
{
"totalLines": 5420,
"levels": {
"INFO": 5100,
"WARN": 250,
"ERROR": 65,
"DEBUG": 5
},
"topErrors": [
{ "message": "Connection refused to redis:6379", "count": 30 },
{ "message": "Request timeout after 30000ms", "count": 20 }
],
"timeRange": {
"earliest": "2024-06-15T08:00:01.234Z",
"latest": "2024-06-15T18:45:30.567Z"
}
}
Zod Schema: find-error-patterns
The find-error-patterns tool normalizes error messages to group occurrences
of the same problem with different data. UUIDs, IP addresses, numbers, URLs, and paths
are replaced with generic placeholders.
// Zod schema for find-error-patterns
const FindErrorPatternsSchema = z.object({
filePath: z.string().describe("Path to the log file"),
minCount: z.number().optional().default(2)
.describe("Minimum number of occurrences to consider a pattern")
});
Error Pattern Normalization
| Element | Replacement | Example |
|---|---|---|
| Timestamp | &lt;TIMESTAMP&gt; | 2024-01-15T10:30:00Z becomes &lt;TIMESTAMP&gt; |
| UUID | &lt;UUID&gt; | 550e8400-e29b-... becomes &lt;UUID&gt; |
| Hex hash | &lt;HEX&gt; | a1b2c3d4e5f6 becomes &lt;HEX&gt; |
| IP address | &lt;IP&gt; | 192.168.1.1:3000 becomes &lt;IP&gt; |
| URL | &lt;URL&gt; | https://api.example.com/v1 becomes &lt;URL&gt; |
| File path | &lt;PATH&gt; | /var/log/app/error.log becomes &lt;PATH&gt; |
| Numbers | &lt;NUM&gt; | 42, 1024 become &lt;NUM&gt; |
| Quoted strings | "&lt;STR&gt;" | "user not found" becomes "&lt;STR&gt;" |
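A plausible implementation of this normalization is a chain of regex replacements. The patterns below are illustrative approximations, not the server's exact regexes; note that ordering matters, since timestamps, URLs, and IPs must be replaced before bare numbers.

```typescript
// Sketch of error-message normalization: replace variable data with
// generic placeholders so identical failures group together.
function normalizeMessage(message: string): string {
  return message
    .replace(/\d{4}-\d{2}-\d{2}T[\d:.]+Z?/g, "<TIMESTAMP>")
    .replace(/[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}/gi, "<UUID>")
    .replace(/https?:\/\/\S+/g, "<URL>")           // before <PATH> and <NUM>
    .replace(/\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}(:\d+)?\b/g, "<IP>")
    .replace(/\b[0-9a-f]{12,}\b/gi, "<HEX>")
    .replace(/(\/[\w.-]+){2,}/g, "<PATH>")
    .replace(/"[^"]*"/g, '"<STR>"')
    .replace(/\d+/g, "<NUM>");                     // last: catch-all numbers
}
```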
Example response with normalized patterns:
{
"totalPatternsFound": 4,
"minCount": 3,
"patterns": [
{
"pattern": "Connection refused to <IP>",
"count": 30,
"examples": [
"Connection refused to 10.0.0.5:6379",
"Connection refused to 10.0.0.5:6380"
]
},
{
"pattern": "Request timeout after <NUM>ms for <URL>",
"count": 20,
"examples": [
"Request timeout after 30000ms for https://api.external.com/v1/users"
]
}
]
}
Zod Schema: tail-log
// Zod schema for tail-log
const TailLogSchema = z.object({
filePath: z.string().describe("Path to the log file"),
lines: z.number().optional().default(50)
.describe("Number of lines to return from the end of the file"),
filter: z.string().optional()
.describe("Filter string to select only matching lines")
});
Example with error filter:
// Request
{
"tool": "tail-log",
"arguments": {
"filePath": "/var/log/app/application.log",
"lines": 20,
"filter": "ERROR"
}
}
// Response
"2024-06-15T18:30:00Z ERROR Connection refused to redis:6379\n
2024-06-15T18:35:12Z ERROR Request timeout after 30000ms\n
..."
Zod Schema: generate-summary
The generate-summary tool produces a comprehensive text report that includes
level counts, time range, health indicators (error rate, warning rate), and the top 5
errors by frequency.
// Zod schema for generate-summary
const GenerateSummarySchema = z.object({
filePath: z.string().describe("Path to the log file to summarize")
});
Example of formatted output:
Log File Summary: /path/to/file.log
==================================================
Total lines: 15234
Log Level Breakdown:
ERROR: 42 (0.3%)
WARN: 156 (1.0%)
INFO: 14836 (97.4%)
DEBUG: 200 (1.3%)
Time Range:
Earliest: 2024-01-15T00:00:01Z
Latest: 2024-01-15T23:59:58Z
Health Indicators:
Error rate: 0.3%
Warning rate: 1.0%
Top Error Messages:
1. [15x] Connection timeout to database
2. [12x] Failed to parse request body
3. [8x] Authentication token expired
3. cicd-monitor Server: CI/CD Pipeline Monitoring
The cicd-monitor server is the bridge between the MCP suite and Continuous
Integration and Continuous Deployment pipelines. It integrates natively with
GitHub Actions via the gh CLI, enabling workflow run monitoring,
build log viewing, and flaky test identification without leaving the development environment.
cicd-monitor Prerequisites
- GitHub CLI (gh) installed and authenticated via gh auth login
- Access to the target GitHub repository (owner or collaborator)
Server Architecture
Unlike the two previous servers, cicd-monitor publishes events
to the Event Bus. When the get-pipeline-status tool detects a completed or
failed workflow, it emits the corresponding events that can be consumed by other servers
such as standup-notes and agile-metrics.
+------------------------------------------------------------+
| cicd-monitor server |
| |
| +-------------------------------------------------------+ |
| | Tool Layer | |
| | | |
| | list-pipelines get-pipeline-status | |
| | get-build-logs get-flaky-tests | |
| +-------------------------------------------------------+ |
| | |
| v |
| +-------------------------------------------------------+ |
| | child_process.execSync | |
| | Commands: gh run list / gh run view | |
| +-------------------------------------------------------+ |
| | |
| v |
| +-------------------------------------------------------+ |
| | Event Bus | |
| | cicd:pipeline-completed | |
| | cicd:build-failed | |
| +-------------------------------------------------------+ |
+------------------------------------------------------------+
Tool Table
cicd-monitor Server Tools
| Tool | Description | Parameters |
|---|---|---|
| list-pipelines | Lists recent GitHub Actions workflow runs | repo? (string, owner/repo); limit (number, default: 10) |
| get-pipeline-status | Details of a specific workflow run with jobs and steps | runId (string); repo? (string) |
| get-build-logs | Downloads the last N lines of logs from a workflow run | runId (string); repo? (string); lines (number, default: 100) |
| get-flaky-tests | Analyzes recent runs to find tests that pass and fail intermittently | repo? (string); branch? (string); runs (number, default: 20) |
Zod Schema: list-pipelines
// Zod schema for list-pipelines
const ListPipelinesSchema = z.object({
repo: z.string().optional()
.describe("Repository in owner/repo format"),
limit: z.number().optional().default(10)
.describe("Maximum number of runs to return")
});
Internally the tool executes the command
gh run list --json databaseId,displayTitle,headBranch,event,status,conclusion,createdAt,updatedAt,url,workflowName
and maps the result to a simplified format.
// Response
{
"total": 5,
"runs": [
{
"id": 12345678,
"title": "feat: add user authentication",
"branch": "feature/auth",
"event": "push",
"status": "completed",
"conclusion": "success",
"workflow": "CI",
"createdAt": "2024-06-15T10:00:00Z",
"url": "https://github.com/my-org/my-project/actions/runs/12345678"
}
]
}
Zod Schema: get-pipeline-status
The get-pipeline-status tool is the most important in the server, because it not
only provides workflow run details, but also triggers Event Bus event publication.
// Zod schema for get-pipeline-status
const GetPipelineStatusSchema = z.object({
runId: z.string().describe("GitHub Actions workflow run ID"),
repo: z.string().optional()
.describe("Repository in owner/repo format")
});
The tool executes two gh commands in sequence: one for run details and one
for jobs and steps. Based on the result, it publishes the appropriate events:
- If conclusion == 'success': publishes cicd:pipeline-completed with status success
- If conclusion == 'failure': publishes both cicd:pipeline-completed with status failed and cicd:build-failed with the failing job name
// Response with job and step details
{
"id": 12345678,
"title": "feat: add user authentication",
"branch": "feature/auth",
"sha": "abc123def456",
"conclusion": "failure",
"jobs": [
{
"name": "build",
"conclusion": "success",
"steps": [
{ "name": "Checkout", "conclusion": "success", "number": 1 },
{ "name": "Install", "conclusion": "success", "number": 2 },
{ "name": "Build", "conclusion": "success", "number": 3 }
]
},
{
"name": "test",
"conclusion": "failure",
"steps": [
{ "name": "Run tests", "conclusion": "failure", "number": 1 }
]
}
]
}
Zod Schema: get-build-logs
// Zod schema for get-build-logs
const GetBuildLogsSchema = z.object({
runId: z.string().describe("Workflow run ID"),
repo: z.string().optional()
.describe("Repository in owner/repo format"),
lines: z.number().optional().default(100)
.describe("Number of log lines to return")
});
The tool executes gh run view <id> --log with a 60-second timeout
(higher than other commands, since logs can reach 10MB) and returns the last N lines.
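This step can be sketched with the tail logic split into a separate helper (tailLines and getBuildLogs are hypothetical names; splitting them out keeps the line-slicing testable without the gh CLI):

```typescript
import { execSync } from "node:child_process";

// Keep only the last N non-empty lines of a log dump.
function tailLines(text: string, n: number): string {
  const lines = text.split("\n").filter((l) => l.length > 0);
  return lines.slice(-n).join("\n");
}

// Sketch of the log-fetch step: a generous timeout and buffer, since
// workflow logs can reach several megabytes.
function getBuildLogs(runId: string, lines = 100): string {
  const raw = execSync(`gh run view ${runId} --log`, {
    encoding: "utf8",
    timeout: 60_000,
    maxBuffer: 50 * 1024 * 1024,
  });
  return tailLines(raw, lines);
}
```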
Zod Schema: get-flaky-tests
Flaky test detection is the most complex operation of the server. The algorithm analyzes the last N runs, groups by branch and workflow, and identifies steps that have both passes and failures.
// Zod schema for get-flaky-tests
const GetFlakyTestsSchema = z.object({
repo: z.string().optional()
.describe("Repository in owner/repo format"),
branch: z.string().optional()
.describe("Specific branch to analyze"),
runs: z.number().optional().default(20)
.describe("Number of recent runs to analyze")
});
Flaky Test Detection Algorithm
- Retrieve the last N runs (default 20) via gh run list
- Group by branch + workflow
- For each group with mixed results (success + failure): sample up to 5 runs
- For each sampled run, retrieve jobs and steps via gh run view
- For each step, track pass/fail counts
- Identify steps with both passCount > 0 and failCount > 0
- Calculate the flakiness rate: min(passCount, failCount) / totalRuns * 100
- Sort by flakiness rate in descending order
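The final steps of the algorithm, filtering, scoring, and ranking, can be sketched as follows (the StepStats shape and function name are illustrative):

```typescript
// Per-step pass/fail tallies accumulated from the sampled runs.
interface StepStats {
  step: string;
  passCount: number;
  failCount: number;
  totalRuns: number;
}

// Keep only steps observed both passing and failing, compute the
// flakiness rate min(pass, fail) / total * 100, and rank descending.
function rankFlakySteps(stats: StepStats[]) {
  return stats
    .filter((s) => s.passCount > 0 && s.failCount > 0)
    .map((s) => ({
      ...s,
      flakinessRate: (Math.min(s.passCount, s.failCount) / s.totalRuns) * 100,
    }))
    .sort((a, b) => b.flakinessRate - a.flakinessRate);
}
```

A step that always passes or always fails is excluded by the filter: it may be broken, but it is not flaky.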
// get-flaky-tests response
{
"analyzedRuns": 20,
"branchesAnalyzed": 1,
"flakyStepsFound": 2,
"flaky": [
{
"branch": "main",
"workflow": "CI",
"job": "test",
"step": "Run integration tests",
"passCount": 3,
"failCount": 2,
"totalRuns": 5,
"flakinessRate": 40.0
}
]
}
Event Bus Events
Among the three DevOps servers, only cicd-monitor publishes events to the
Event Bus. The docker-compose and log-analyzer servers are
completely stateless and do not interact with the event system.
Events Published by cicd-monitor
| Event | Emitted by | Payload | Condition |
|---|---|---|---|
| cicd:pipeline-completed | get-pipeline-status | { pipelineId, status, branch, duration } | Always, after retrieving run details |
| cicd:build-failed | get-pipeline-status | { pipelineId, error, stage, branch } | Only if conclusion == 'failure' |
The pipeline duration is calculated as the difference in milliseconds between
updatedAt and createdAt. Events are primarily consumed by:
- standup-notes: receives notification of completed and failed builds for automatic daily report generation
- agile-metrics: aggregates pipeline speed metrics and failure rates for the project dashboard
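The duration calculation is a one-line difference over the two ISO 8601 timestamps returned by gh (the function name is illustrative):

```typescript
// Pipeline duration in milliseconds: updatedAt minus createdAt.
function pipelineDurationMs(createdAt: string, updatedAt: string): number {
  return new Date(updatedAt).getTime() - new Date(createdAt).getTime();
}
```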
Interactions Between DevOps Servers
The three DevOps servers are designed to work complementarily. Although each can operate independently, combining all three covers the entire DevOps cycle:
Integrated Workflow
- docker-compose validates Docker configuration before deployment
- cicd-monitor monitors the pipeline execution that includes the Docker build
- log-analyzer analyzes container logs after deployment
- If cicd-monitor detects a failure, the build logs downloaded with get-build-logs can be analyzed with log-analyzer
Interaction Matrix
| Server | Interacts with | Type | Description |
|---|---|---|---|
| docker-compose | log-analyzer | Complementary | Docker container logs can be analyzed |
| docker-compose | cicd-monitor | Complementary | Docker builds monitored as part of the pipeline |
| cicd-monitor | log-analyzer | Complementary | Build logs downloaded and analyzed in detail |
| cicd-monitor | standup-notes | Via Event Bus | Notifies completed/failed builds for reports |
| cicd-monitor | agile-metrics | Via Event Bus | Pipeline speed metrics and failure rates |
Claude Desktop Configuration
To use the three DevOps servers with Claude Desktop, add the following entries to the configuration file:
{
"mcpServers": {
"docker-compose": {
"command": "node",
"args": ["path/to/Tech-MCP/servers/docker-compose/dist/index.js"],
"env": {}
},
"log-analyzer": {
"command": "node",
"args": ["path/to/Tech-MCP/servers/log-analyzer/dist/index.js"],
"env": {}
},
"cicd-monitor": {
"command": "node",
"args": ["path/to/Tech-MCP/servers/cicd-monitor/dist/index.js"],
"env": {}
}
}
}
Future Developments
The roadmap for the DevOps servers includes significant improvements:
- docker-compose: Docker Swarm support, network validation, multi-stage Dockerfile analysis, templates for common stacks (LAMP, MEAN), Event Bus integration for container start/stop/crash events
- log-analyzer: real-time streaming with fs.watch, multi-file correlation, anomaly detection, support for compressed .gz and .bz2 files, ASCII histogram output
- cicd-monitor: multi-platform support (GitLab CI, Bitbucket Pipelines, Jenkins), intelligent caching, historical trend analysis, DORA metrics (lead time, cycle time, deployment frequency)
Conclusions
The three DevOps servers in Tech-MCP cover the fundamental DevOps automation needs:
docker-compose ensures Docker configuration quality through YAML parsing
and Dockerfile analysis, log-analyzer automates application log analysis
with format auto-detection and error pattern normalization, and cicd-monitor
brings GitHub Actions pipeline monitoring directly into the IDE with automatic flaky
test detection.
The most interesting aspect is the complementarity: build logs downloaded
by cicd-monitor can be analyzed with log-analyzer, while Docker
configurations validated by docker-compose feed the pipelines monitored by
cicd-monitor. Events published to the Event Bus connect the DevOps cycle
to project management, closing the loop.
In the next article, we will analyze the Database and Testing servers:
db-schema-explorer for schema exploration, data-mock-generator
for test data generation, test-generator for automatic test creation, and
performance-profiler for performance profiling.
The complete code for all servers is available in the Tech-MCP GitHub repository.