Introduction: Database and Testing Servers
In a development workflow managed through the Model Context Protocol, understanding the data structure and ensuring code quality are two fundamental pillars. When working with an existing database, you need to quickly understand which tables exist, how they are connected, and which indexes are missing. When writing code, you need to automatically generate tests, identify edge cases, and measure performance.
In this ninth article of the MCP series, we analyze four servers from the Tech-MCP project that cover these two critical areas: two servers dedicated to database operations (db-schema-explorer and data-mock-generator) and two servers dedicated to testing (test-generator and performance-profiler).
What You'll Learn in This Article
- How to explore SQLite database schemas with PRAGMA introspection
- Automatic ERD diagram generation in Mermaid syntax
- Automatic index suggestion for unindexed foreign keys
- Realistic mock data generation with 16 built-in generators
- Automatic unit test skeleton creation from source code
- Static performance analysis and anti-pattern identification
- Typed events: db:index-suggestion, test:generated, test:coverage-report, perf:bottleneck-found
Server 1: db-schema-explorer
The db-schema-explorer server enables interactive exploration of SQLite database schemas. It solves the problem of understanding data structure: when working with an existing database, you need to quickly understand which tables exist, how they are connected, which indexes are missing, and how to visualize relationships.
The server provides read-only access to the database, ensuring that no operation can
modify data or schema. It uses the better-sqlite3 library with the readonly: true
flag for direct, synchronous, and performant access.
Server Architecture
+------------------------------------------------------------+
| db-schema-explorer server |
| |
| +--------------------------------------------------------+ |
| | Tool Layer | |
| | | |
| | explore-schema describe-table | |
| | suggest-indexes generate-erd | |
| +--------------------------------------------------------+ |
| | |
| v |
| +--------------------------------------------------------+ |
| | better-sqlite3 (readonly: true) | |
| | | |
| | PRAGMA table_info PRAGMA index_list | |
| | PRAGMA index_info PRAGMA foreign_key_list | |
| | sqlite_master SELECT COUNT(*) | |
| +--------------------------------------------------------+ |
| | |
| v |
| +--------------------------------------------------------+ |
| | Event Bus: db:index-suggestion | |
| +--------------------------------------------------------+ |
+------------------------------------------------------------+
Available Tools
Tool Table - db-schema-explorer
| Tool | Description | Parameters |
|---|---|---|
| explore-schema | Explores the database schema, returning all tables and their columns | dbPath (string) - path to SQLite file |
| describe-table | Table detail: columns, indexes, foreign keys, and row count | dbPath (string); tableName (string) |
| suggest-indexes | Analyzes tables and suggests missing indexes for better performance | dbPath (string) |
| generate-erd | Generates an entity-relationship diagram in Mermaid erDiagram syntax | dbPath (string) |
Core Data Types
The server defines four TypeScript interfaces that structure the information returned from database introspection:
// TableInfo: table information
interface TableInfo {
name: string;
columns: ColumnInfo[];
}
// ColumnInfo: column detail
interface ColumnInfo {
name: string;
type: string; // INTEGER, TEXT, REAL, BLOB, BOOLEAN, DATETIME
nullable: boolean;
primaryKey: boolean;
defaultValue: string | null;
}
// IndexInfo: index information
interface IndexInfo {
name: string;
unique: boolean;
columns: string[];
}
// ForeignKeyInfo: foreign key information
interface ForeignKeyInfo {
id: number;
table: string; // referenced table
from: string; // local column
to: string; // referenced column
onUpdate: string;
onDelete: string;
}
PRAGMA Queries Used
All analysis is based on standard SQLite PRAGMA commands (plus the sqlite_master catalog table), which provide metadata about the database structure without modifying any data:
PRAGMA and Returned Information
| PRAGMA | Usage | Returned Information |
|---|---|---|
| sqlite_master | Table list | Table names (excluding sqlite_*) |
| PRAGMA table_info | Table columns | cid, name, type, notnull, dflt_value, pk |
| PRAGMA index_list | Index list | seq, name, unique, origin, partial |
| PRAGMA index_info | Index columns | seqno, cid, name |
| PRAGMA foreign_key_list | Foreign keys | id, seq, table, from, to, on_update, on_delete |
Example: Schema Exploration
// Request
{
"tool": "explore-schema",
"arguments": {
"dbPath": "/home/user/data/app.sqlite"
}
}
// Response
{
"dbPath": "/home/user/data/app.sqlite",
"tableCount": 3,
"tables": [
{
"name": "users",
"columns": [
{ "name": "id", "type": "INTEGER", "nullable": false, "primaryKey": true },
{ "name": "email", "type": "TEXT", "nullable": false, "primaryKey": false },
{ "name": "name", "type": "TEXT", "nullable": true, "primaryKey": false }
]
},
{
"name": "posts",
"columns": [
{ "name": "id", "type": "INTEGER", "nullable": false, "primaryKey": true },
{ "name": "user_id", "type": "INTEGER", "nullable": false, "primaryKey": false },
{ "name": "title", "type": "TEXT", "nullable": false, "primaryKey": false }
]
}
]
}
Example: Detailed Table Description
// Request
{
"tool": "describe-table",
"arguments": {
"dbPath": "/home/user/data/app.sqlite",
"tableName": "posts"
}
}
// Response
{
"tableName": "posts",
"rowCount": 1523,
"columns": [
{ "name": "id", "type": "INTEGER", "nullable": false,
"primaryKey": true, "defaultValue": null },
{ "name": "user_id", "type": "INTEGER", "nullable": false,
"primaryKey": false, "defaultValue": null },
{ "name": "title", "type": "TEXT", "nullable": false,
"primaryKey": false, "defaultValue": null }
],
"indexes": [
{ "name": "idx_posts_user_id", "unique": false, "columns": ["user_id"] }
],
"foreignKeys": [
{ "id": 0, "table": "users", "from": "user_id", "to": "id",
"onUpdate": "NO ACTION", "onDelete": "CASCADE" }
]
}
Automatic Index Suggestion
The suggest-indexes tool analyzes all database tables and identifies two types of
performance issues: foreign keys without a dedicated index and tables with many rows lacking user-defined
indexes. For each foreign key whose from field is not indexed, the server automatically
generates the corresponding CREATE INDEX SQL command.
// suggest-indexes response
{
"tablesAnalyzed": 5,
"suggestionsCount": 2,
"suggestions": [
{
"table": "comments",
"column": "post_id",
"reason": "Foreign key column referencing \"posts\"(\"id\") is not indexed.",
"suggestedSql": "CREATE INDEX idx_comments_post_id ON \"comments\"(\"post_id\");"
}
]
}
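The core of this check can be sketched as a pure function over the PRAGMA results: for each foreign key whose local column is not already covered by an index, emit a suggestion. The suggestFkIndexes name and signature are assumptions based on the article's description.

```typescript
interface ForeignKey {
  table: string; // referenced table
  from: string;  // local column
  to: string;    // referenced column
}

interface IndexSuggestion {
  table: string;
  column: string;
  reason: string;
  suggestedSql: string;
}

// Hypothetical sketch: a foreign key whose local column is not indexed
// produces a CREATE INDEX suggestion in the format shown above.
function suggestFkIndexes(
  table: string,
  foreignKeys: ForeignKey[],
  indexedColumns: Set<string>,
): IndexSuggestion[] {
  return foreignKeys
    .filter((fk) => !indexedColumns.has(fk.from))
    .map((fk) => ({
      table,
      column: fk.from,
      reason: `Foreign key column referencing "${fk.table}"("${fk.to}") is not indexed.`,
      suggestedSql: `CREATE INDEX idx_${table}_${fk.from} ON "${table}"("${fk.from}");`,
    }));
}
```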
ERD Diagram Generation
The generate-erd tool produces entity-relationship diagrams in Mermaid erDiagram
syntax, automatically converting SQLite types to Mermaid types: INTEGER becomes
int, TEXT/CHAR becomes string, REAL/FLOAT becomes
float, BLOB becomes blob, and DATE/TIME becomes
datetime.
erDiagram
users {
int id PK
string email
string name
datetime created_at
}
posts {
int id PK
int user_id
string title
string body
}
users ||--o{ posts : "id -> user_id"
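The type conversion behind this diagram can be sketched as a small mapping function. This is a sketch based on the rules listed above; the substring matching and the fallback for unknown types are assumptions.

```typescript
// Map a SQLite column type to a Mermaid erDiagram type.
function toMermaidType(sqliteType: string): string {
  const t = sqliteType.toUpperCase();
  if (t.includes("INT")) return "int";
  if (t.includes("CHAR") || t.includes("TEXT")) return "string";
  if (t.includes("REAL") || t.includes("FLOA") || t.includes("DOUB")) return "float";
  if (t.includes("BLOB")) return "blob";
  if (t.includes("DATE") || t.includes("TIME")) return "datetime";
  return "string"; // assumed fallback for unrecognized types
}
```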
Event: db:index-suggestion
Every time suggest-indexes identifies an unindexed foreign key, it publishes the
db:index-suggestion event on the Event Bus with the payload:
{
database: string, // path to SQLite file
table: string, // table name
columns: string[], // columns to index
reason: string // suggestion reason
}
This event can be subscribed to by servers such as agile-metrics or
standup-notes to automatically track suggested optimizations.
Server 2: data-mock-generator
The data-mock-generator server generates realistic mock data for testing, development,
and prototyping. To test an application you need data that resembles real data, but creating it manually
does not scale. This server offers 16 generator types, supports JSON and CSV output,
and has no external dependencies for data generation: it exclusively uses Math.random()
and crypto.randomUUID().
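A miniature of this design, showing the registry pattern with three of the sixteen generators, might look like the following. The names and pools are illustrative, not the server's actual datasets.

```typescript
import { randomUUID } from "node:crypto";

// Each generator is registered by name with a description and a
// zero-argument generate function.
interface GeneratorInfo {
  description: string;
  generate: () => string | number | boolean;
}

const FIRST_NAMES = ["Jennifer", "Robert", "Linda", "James"]; // sample pool

function pick<T>(pool: T[]): T {
  return pool[Math.floor(Math.random() * pool.length)];
}

const generators: Record<string, GeneratorInfo> = {
  firstName: { description: "Random first name", generate: () => pick(FIRST_NAMES) },
  uuid: { description: "UUID v4", generate: () => randomUUID() },
  integer: { description: "Integer 0-10000", generate: () => Math.floor(Math.random() * 10001) },
};

// One row of generate-mock-data: apply the generator named by each schema entry.
function generateRow(schema: { field: string; type: string }[]): Record<string, unknown> {
  return Object.fromEntries(schema.map((s) => [s.field, generators[s.type].generate()]));
}
```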
Available Tools
Tool Table - data-mock-generator
| Tool | Description | Parameters |
|---|---|---|
| generate-mock-data | Generates data rows based on a field/type schema | schema (array of {field, type}); count (1-10000, default: 10) |
| generate-json | Generates JSON objects from a JSON Schema with properties and format hints | jsonSchema (object with properties); count (1-10000, default: 10) |
| generate-csv | Generates data in CSV format with header and configurable delimiter | columns (array of {name, type}); count (1-10000); delimiter (default: ",") |
| list-generators | Lists all available generator types with descriptions | None |
The 16 Built-in Generators
The service layer generators.ts contains base datasets (first names, last names, streets,
cities, companies, lorem words, domains) and 16 generator functions, each registered in a
Record<string, GeneratorInfo> registry:
Available Generators
| Name | Output Type | Description | Example |
|---|---|---|---|
| firstName | string | Random first name from pool of 40 | "Jennifer" |
| lastName | string | Random last name from pool of 40 | "Martinez" |
| email | string | Email combining name+surname+domain | "jennifer_martinez42@example.com" |
| phone | string | US phone format (XXX) XXX-XXXX | "(415) 555-1234" |
| address | string | Address: number + street + city | "4521 Oak Ave, Seattle" |
| company | string | Company name from pool of 20 | "Stark Industries" |
| date | string | Date YYYY-MM-DD between 2000 and 2025 | "2018-07-23" |
| integer | number | Integer between 0 and 10000 | 4287 |
| float | number | Decimal with 2 digits, 0-10000 | 3456.78 |
| boolean | boolean | true or false (50/50) | true |
| uuid | string | UUID v4 via crypto.randomUUID() | "550e8400-e29b-..." |
| sentence | string | Lorem ipsum sentence of 5-15 words | "Lorem ipsum dolor sit amet." |
| paragraph | string | Paragraph of 3-7 lorem sentences | "Lorem ipsum... Dolor sit..." |
| url | string | URL with protocol, domain, and path | "https://app.demo.io/docs" |
| ipv4 | string | Valid IPv4 address | "192.168.42.1" |
| hexColor | string | Hexadecimal color with # | "#a3f2c1" |
Example: Schema-based Data Generation
// Request
{
"tool": "generate-mock-data",
"arguments": {
"schema": [
{ "field": "id", "type": "uuid" },
{ "field": "name", "type": "firstName" },
{ "field": "surname", "type": "lastName" },
{ "field": "email", "type": "email" },
{ "field": "active", "type": "boolean" }
],
"count": 3
}
}
// Response
[
{
"id": "550e8400-e29b-41d4-a716-446655440000",
"name": "Jennifer",
"surname": "Martinez",
"email": "jennifer.martinez@example.com",
"active": true
},
{
"id": "6ba7b810-9dad-11d1-80b4-00c04fd430c8",
"name": "Robert",
"surname": "Smith",
"email": "robert_smith42@test.org",
"active": false
}
]
Example: JSON Schema Generation
The generate-json tool accepts a JSON Schema with properties and automatically resolves
the correct generator: first it checks the format field (email, uri, uuid, ipv4, date),
then falls back to type (string becomes sentence, number becomes float, integer becomes
integer, boolean becomes boolean).
// Request
{
"tool": "generate-json",
"arguments": {
"jsonSchema": {
"properties": {
"userId": { "type": "string", "format": "uuid" },
"email": { "type": "string", "format": "email" },
"registeredAt": { "type": "string", "format": "date" },
"score": { "type": "integer" }
}
},
"count": 2
}
}
// Response
[
{ "userId": "abc-123-...", "email": "james.lee@mock.dev",
"registeredAt": "2019-03-15", "score": 7842 },
{ "userId": "def-456-...", "email": "mary_white@demo.net",
"registeredAt": "2022-11-08", "score": 1256 }
]
Example: CSV Generation
The generate-csv tool correctly handles escaping of the delimiter, quotes, and newline
characters. If a value contains the delimiter, quotes, or \n, it is wrapped in double
quotes with internal quotes doubled.
// Request with delimiter: ";"
Name;Surname;IP
James;Smith;192.168.1.42
Linda;Johnson;10.0.0.15
Robert;Williams;172.16.5.200
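The escaping rule described above (wrap in double quotes when the value contains the delimiter, a quote, or a newline; double any internal quotes) is the standard RFC 4180 behavior and can be sketched in a few lines; the escapeCsvValue name is an assumption.

```typescript
// Escape a single CSV value for the given delimiter.
function escapeCsvValue(value: string, delimiter = ","): string {
  if (value.includes(delimiter) || value.includes('"') || value.includes("\n")) {
    return `"${value.replace(/"/g, '""')}"`;
  }
  return value;
}
```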
Integration with db-schema-explorer
The two database servers are complementary: the schema explored with db-schema-explorer
guides the schema definition for mock data. The AI can explore a database structure, extract column
types, and pass them directly to generate-mock-data to populate tables with realistic
test data.
Server 3: test-generator
The test-generator server is dedicated to automatic unit test generation, edge case identification, and code coverage analysis. It is the starting point for any testing workflow within the MCP Suite, allowing you to go from source code to test skeletons in seconds.
The server is completely stateless: it has no database or internal store. It receives source code as input, analyzes it through function signature parsing, and returns structured output without maintaining state between invocations.
Available Tools
Tool Table - test-generator
| Tool | Description | Parameters |
|---|---|---|
| generate-unit-tests | Generates unit test skeletons by analyzing function signatures | code (string); language (default: typescript); framework (vitest \| jest \| mocha) |
| find-edge-cases | Analyzes code and suggests edge cases classified by severity | code (string) |
| analyze-coverage | Compares functions in source with references in tests and calculates coverage | sourceCode (string); testCode (string) |
Automatic Test Generation
The generate-unit-tests tool parses source code looking for two declaration patterns:
classic functions (export function name(params)) and arrow functions
(export const name = (params) =>). For each function found, it generates a
describe/it block with three predefined tests.
// Input: source code
export function calculateTotal(items: Item[], tax: number): number {
return items.reduce((sum, item) => sum + item.price, 0) * (1 + tax);
}
export const formatCurrency = (amount: number): string => {
return new Intl.NumberFormat('it-IT',
{ style: 'currency', currency: 'EUR' }
).format(amount);
};
// Generated output (framework: vitest)
import { describe, it, expect } from 'vitest';
// import { calculateTotal, formatCurrency } from './source';
describe('calculateTotal', () => {
it('should exist and be callable', () => {
expect(calculateTotal).toBeDefined();
expect(typeof calculateTotal).toBe('function');
});
it('should return expected result with valid input', () => {
const result = calculateTotal(/* items */ [], /* tax */ 0);
expect(result).toBeDefined();
// TODO: Add specific assertions
});
it('should handle edge cases', () => {
// TODO: Test with null/undefined inputs
// TODO: Test with empty values
// TODO: Test with boundary values
});
});
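The signature scan behind this generation step can be sketched with two regexes, one per declaration pattern. The exact patterns and the extractFunctionNames helper name are assumptions based on the article's description.

```typescript
const FN_PATTERNS: RegExp[] = [
  /export\s+(?:async\s+)?function\s+(\w+)\s*\(/g, // export function name(params)
  /export\s+const\s+(\w+)\s*=\s*(?:async\s*)?\(/g, // export const name = (params) =>
];

// Collect the names of all exported functions found in the source code.
function extractFunctionNames(code: string): string[] {
  const names: string[] = [];
  for (const pattern of FN_PATTERNS) {
    for (const match of code.matchAll(pattern)) {
      names.push(match[1]);
    }
  }
  return names;
}
```

Each extracted name then becomes one describe block with the three predefined tests shown above.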
Edge Case Identification
The find-edge-cases tool performs static code analysis to identify potential edge cases.
Checks are classified by severity (high, medium, low)
and cover numerous categories:
Detected Edge Case Categories
| Category | Severity | Detection Condition |
|---|---|---|
| null/undefined | high | Presence of function parameters |
| empty-string | high | Usage of .length, .trim(), .split() |
| empty-array | high | Usage of .map(), .filter(), .reduce() |
| zero | high | Usage of parseInt, parseFloat, Math.* |
| division-by-zero | high | Presence of the / operator |
| async-rejection | high | Presence of async, await, Promise |
| error-propagation | high | Presence of try/catch blocks |
| file-not-found | high | Usage of readFile, writeFile, fs.* |
| whitespace-string | medium | Usage of string methods |
| negative-numbers | high | Usage of numeric functions |
| nested-null | medium | Usage of ?., \|\|, && |
| regex-edge-cases | medium | Usage of RegExp, .match(), .test() |
Coverage Analysis
The analyze-coverage tool compares source code with test code to determine which functions
have coverage. The process involves three phases: extracting function names from the source, searching
for references in tests (describe() blocks, it()/test() blocks, direct calls),
and calculating the percentage.
Source Tests
+------------------+ +------------------+
| function add() | ------> | describe('add') | COVERED
| function sub() | ------> | test('sub ...') | COVERED
| function mul() | --X | | UNCOVERED
| function div() | --X | | UNCOVERED
+------------------+ +------------------+
Coverage: 2/4 = 50%
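The matching phase sketched in the diagram can be expressed as a pure function: given the function names extracted from the source, look for each name in the test code and compute the percentage. The matching rules and the analyzeCoverage signature here are assumptions.

```typescript
// Check which source functions are referenced in the test code and
// compute the coverage percentage.
function analyzeCoverage(sourceNames: string[], testCode: string) {
  const covered = sourceNames.filter(
    (name) =>
      testCode.includes(`describe('${name}'`) || // describe() block
      testCode.includes(`test('${name}`) ||      // test() block
      testCode.includes(`it('${name}`) ||        // it() block
      testCode.includes(`${name}(`),             // direct call
  );
  const coverage =
    sourceNames.length === 0 ? 100 : Math.round((covered.length / sourceNames.length) * 100);
  return {
    covered,
    uncovered: sourceNames.filter((n) => !covered.includes(n)),
    coverage,
  };
}
```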
test-generator Events
The server publishes two events on the Event Bus:
- test:generated: emitted by generate-unit-tests with payload { filePath: string, testCount: number, framework: string }
- test:coverage-report: emitted by analyze-coverage with payload { filePath: string, coverage: number, uncoveredLines: number[] }
These events can be subscribed to by agile-metrics to include coverage metrics in
sprint analysis, or by standup-notes to automatically log generated tests.
Server 4: performance-profiler
The performance-profiler server provides static performance analysis tools for JavaScript and TypeScript code. Unlike a runtime profiler, this server analyzes source code without executing it, identifying known performance anti-patterns, heavy dependencies, and generating executable benchmark templates.
A fundamental principle of this server is security by design: eval()
is never executed, nor is arbitrary code. The benchmark-compare tool generates a Node.js
template that the user can execute separately in their own environment.
Available Tools
Tool Table - performance-profiler
| Tool | Description | Parameters |
|---|---|---|
| analyze-bundle | Analyzes JS/TS files for bundle size issues and heavy imports | filePath (string) - path to file or directory |
| find-bottlenecks | Static code analysis for performance anti-patterns | code (string); language (typescript \| javascript \| jsx \| tsx) |
| benchmark-compare | Generates a benchmark template to compare two code snippets | codeA (string); codeB (string); iterations (default: 1000) |
Bundle Analysis
The analyze-bundle tool reads JavaScript and TypeScript files (single files or entire
directories, excluding node_modules and dist) and analyzes imports looking
for known heavy dependencies. For each heavy package found, the server provides the estimated size
and a lighter alternative.
Recognized Heavy Packages
| Package | Estimated Size | Suggested Alternative |
|---|---|---|
| moment | ~300KB with locales | date-fns (~20KB) or dayjs (~2KB) |
| lodash | ~70KB minified | lodash-es with tree shaking |
| rxjs | ~50KB+ (full import) | Specific imports: rxjs/operators |
| aws-sdk | >100MB | @aws-sdk/client-* v3 modular |
| jquery | ~85KB minified | Native DOM APIs |
| three | ~600KB+ | Import from three/examples/jsm/ |
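The import scan itself can be sketched as a regex over the file contents plus a lookup in a known-size table. Only two entries are shown; the sizes are the estimates from the table above, and the findHeavyImports name is an assumption.

```typescript
const HEAVY_PACKAGES: Record<string, { size: string; alternative: string }> = {
  moment: { size: "~300KB with locales", alternative: "date-fns (~20KB) or dayjs (~2KB)" },
  lodash: { size: "~70KB minified", alternative: "lodash-es with tree shaking" },
};

// Match import statements and flag any package found in the size table.
function findHeavyImports(code: string): { pkg: string; size: string; alternative: string }[] {
  const results: { pkg: string; size: string; alternative: string }[] = [];
  for (const match of code.matchAll(/import\s+.*?from\s+['"]([^'"]+)['"]/g)) {
    const pkg = match[1].split("/")[0]; // "lodash/fp" -> "lodash"
    if (HEAVY_PACKAGES[pkg]) {
      results.push({ pkg, ...HEAVY_PACKAGES[pkg] });
    }
  }
  return results;
}
```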
Bottleneck Identification
The find-bottlenecks tool performs line-by-line static analysis of the code, detecting
performance anti-patterns with severity classified as critical, warning,
or info:
Detected Performance Anti-patterns
| Type | Severity | Detected Pattern |
|---|---|---|
| nested-loop | critical | Nested for/while loops (O(n^2)) |
| sync-io | warning | readFileSync, writeFileSync, execSync |
| linear-search-in-loop | warning | .indexOf(), .includes(), .find() inside a loop |
| dom-query-in-loop | critical | document.querySelector inside a loop |
| sequential-await | warning | await inside a for/while loop |
| json-in-loop | warning | JSON.parse/stringify inside a loop |
| missing-pagination | warning | .findAll(), SELECT * FROM without LIMIT |
| string-concat-in-loop | info | String concatenation += inside a loop |
| recursion | info | Function calling itself without visible memoization |
For each bottleneck found, the output includes the type, severity, line number, a problem description, and a fix suggestion:
{
"type": "nested-loop",
"severity": "critical",
"line": 42,
"description": "Nested loop detected - potential O(n^2)...",
"suggestion": "Consider using a Map/Set for lookups...",
"pattern": "for (const item of items) {"
}
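To illustrate the line-by-line approach, here is a sketch of two of the checks: sync I/O calls anywhere, and a nested loop (a second for/while opened before the first one closes). The brace tracking is deliberately rough; the real server's heuristics are not shown in the article.

```typescript
interface Bottleneck {
  type: string;
  severity: "critical" | "warning" | "info";
  line: number;
  pattern: string;
}

function findBottlenecks(code: string): Bottleneck[] {
  const found: Bottleneck[] = [];
  let loopDepth = 0;
  code.split("\n").forEach((text, i) => {
    const line = i + 1;
    if (/\b(readFileSync|writeFileSync|execSync)\b/.test(text)) {
      found.push({ type: "sync-io", severity: "warning", line, pattern: text.trim() });
    }
    if (/\b(for|while)\s*\(/.test(text)) {
      loopDepth++;
      if (loopDepth > 1) {
        found.push({ type: "nested-loop", severity: "critical", line, pattern: text.trim() });
      }
    }
    // Rough assumption: a closing brace on its own line ends the innermost loop.
    if (/^\s*}\s*$/.test(text) && loopDepth > 0) loopDepth--;
  });
  return found;
}
```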
Benchmark Generation
The benchmark-compare tool generates a complete Node.js file to compare the performance
of two code snippets. The template includes four phases: warmup (10% of iterations), measurement
with performance.now(), statistics calculation (mean, median, min, max, p95, p99, stdDev,
ops/sec), and a final comparison indicating which snippet is faster and by how much.
// Request
{
"tool": "benchmark-compare",
"arguments": {
"codeA": "const result = arr.filter(x => x > 0).map(x => x * 2);",
"codeB": "const result = []; for (const x of arr) { if (x > 0) result.push(x * 2); }",
"iterations": 5000
}
}
The output is a complete Node.js template ready to execute with node benchmark.js.
The server never executes the generated code: security is guaranteed by design.
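The statistics phase of the generated template can be sketched as a pure function over the per-iteration timings. A simple nearest-rank rule stands in for the percentile calculation; the summarize name is an assumption.

```typescript
// Summarize an array of per-iteration timings (in milliseconds) into the
// metrics listed above: mean, median, min, max, p95, p99, ops/sec.
function summarize(samples: number[]) {
  const sorted = [...samples].sort((a, b) => a - b);
  const mean = samples.reduce((sum, v) => sum + v, 0) / samples.length;
  // Nearest-rank percentile over the sorted samples.
  const percentile = (p: number) =>
    sorted[Math.min(sorted.length - 1, Math.floor((p / 100) * sorted.length))];
  return {
    mean,
    median: percentile(50),
    min: sorted[0],
    max: sorted[sorted.length - 1],
    p95: percentile(95),
    p99: percentile(99),
    opsPerSec: 1000 / mean, // timings are in milliseconds
  };
}
```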
performance-profiler Events
The server publishes two events on the Event Bus:
- perf:bottleneck-found: emitted by find-bottlenecks only for critical severity bottlenecks, with payload { location: string, metric: string, value: number, threshold: number }
- perf:profile-completed: emitted by benchmark-compare with payload { target: string, durationMs: number, results: object }
Interactions Between the Four Servers
The four servers analyzed in this article do not operate in isolation: they complement each other through shared workflows and through the Event Bus:
+---------------------+ db:index-suggestion +-------------------+
| db-schema-explorer | ------------------------> | agile-metrics |
| | | standup-notes |
+---------------------+ +-------------------+
^
| (dbPath as input)
|
+---------------------+
| data-mock-generator | generates data that can
| | populate the database
+---------------------+
+-------------------+ test:generated +-------------------+
| test-generator | --------------------------> | standup-notes |
| | test:coverage-report +-------------------+
| | --------------------------> | agile-metrics |
+-------------------+ +-------------------+
+----------------------+ perf:bottleneck-found +-------------------+
| performance-profiler | -----------------------> | standup-notes |
| | perf:profile-completed +-------------------+
| | -----------------------> | agile-metrics |
+----------------------+ +-------------------+
Published Events Summary
| Event | Source Server | Emitting Tool | Potential Subscribers |
|---|---|---|---|
| db:index-suggestion | db-schema-explorer | suggest-indexes | agile-metrics, standup-notes |
| test:generated | test-generator | generate-unit-tests | standup-notes |
| test:coverage-report | test-generator | analyze-coverage | agile-metrics |
| perf:bottleneck-found | performance-profiler | find-bottlenecks | standup-notes, agile-metrics |
| perf:profile-completed | performance-profiler | benchmark-compare | agile-metrics |
Typical Combined Workflow
A typical workflow combining all four servers might look like this:
- Schema exploration: the AI uses explore-schema to understand the database structure
- Index suggestion: suggest-indexes identifies missing optimizations and publishes db:index-suggestion
- Data generation: generate-mock-data creates test fixtures based on the explored schema
- Test generation: generate-unit-tests creates test skeletons for application code and publishes test:generated
- Edge case analysis: find-edge-cases identifies edge cases to cover in tests
- Profiling: find-bottlenecks analyzes code for anti-patterns and publishes perf:bottleneck-found
- Benchmarking: benchmark-compare generates templates to compare alternative implementations
Conclusions
The four servers analyzed in this article demonstrate how MCP can cover two critical phases of the
development cycle: understanding data structure and ensuring code quality.
The db-schema-explorer server offers complete introspection with PRAGMA, index suggestion,
and ERD diagram generation. The data-mock-generator server provides 16 generators for
creating realistic data in JSON and CSV. The test-generator server automates unit test
creation with edge case identification. The performance-profiler server performs static
analysis to identify bottlenecks and generate benchmarks.
In the next article, we will analyze the Project Management Servers:
scrum-board, agile-metrics, time-tracking,
project-economics, and retrospective-manager, five servers that bring
agile project management directly into the AI interface.
The complete code for all servers is available in the Tech-MCP GitHub repository.