Introduction: Why the Model Context Protocol
The Model Context Protocol (MCP) is an open standard developed by Anthropic that defines how AI applications communicate with external tools. MCP solves a fundamental problem: giving language models the ability to act in the real world, not just generate text.
Before MCP, every integration between an AI and an external tool required a custom implementation, with proprietary protocols and fragile integrations. MCP standardizes this communication, creating an ecosystem where any tool can be made accessible to any compatible AI, just as USB-C unified hardware connectors.
In this 14-article series, we'll explore MCP from the foundations to advanced implementation, analyzing in detail the open source project Tech-MCP, a TypeScript monorepo with 31 MCP servers, 85+ tools, and 6 shared packages.
What You'll Learn in This Series
- MCP protocol fundamentals and its Host/Client/Server architecture
- The three primitives: Tools, Resources, and Prompts with Zod validation
- How to create MCP servers and clients in TypeScript from scratch
- Architectural patterns: monorepo, EventBus, shared packages
- 31 real MCP servers for productivity, DevOps, database, project management
- Inter-server collaboration with typed events and cross-server invocation
- Testing, best practices, and production deployment
The Problem MCP Solves
Large Language Models (LLMs) are powerful at reasoning and text generation, but they have fundamental limitations that restrict their practical usefulness:
- Cannot access real-time data: training has a cutoff date, so the AI doesn't know about recent events or updated data
- Cannot execute actions: creating files, calling APIs, interacting with databases, and managing deployments are all impossible without external integrations
- Lack local context: the AI doesn't know your project, your tasks, your codebase structure, or your development environment
MCP bridges this gap by providing a bidirectional communication protocol between the AI and the external world, enabling language models to become true operational assistants.
The N x M Integration Problem
Without a standard, connecting N AI applications to M external tools requires N x M custom integrations. Each combination has its own protocol, authentication, and data format. This approach doesn't scale.
MCP reduces complexity to N + M: each AI application implements an MCP client, each tool implements an MCP server, and all communicate through the same standard protocol.
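The arithmetic is easy to check. A minimal sketch (the numbers are illustrative):

```typescript
// Integration count without a standard: every AI app needs a
// custom adapter for every tool (N x M point-to-point integrations).
function customIntegrations(apps: number, tools: number): number {
  return apps * tools;
}

// With MCP: each app implements one client, each tool one server (N + M).
function mcpIntegrations(apps: number, tools: number): number {
  return apps + tools;
}

// Example: 5 AI applications and 20 tools.
console.log(customIntegrations(5, 20)); // 100 adapters to write and maintain
console.log(mcpIntegrations(5, 20));    // 25 protocol implementations
```

The gap widens as the ecosystem grows: adding a 21st tool costs one new server, not five new adapters.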
Analogy: MCP as USB-C for AI
Before USB-C, every device had its own proprietary connector. Today USB-C unifies everything: charging, data, video. MCP does the same for AI: a single standard protocol that connects any language model to any external tool, eliminating custom integrations.
MCP Architecture: Host, Client, and Server
The MCP architecture is based on three fundamental components that collaborate to enable interaction between AI and external tools:
1. Host
The Host is the AI application that the user interacts with directly. It manages the user session, connects MCP servers, and orchestrates tool calls. Examples of Hosts include:
- Claude Desktop: Anthropic's desktop application
- Cursor IDE: the IDE with integrated AI
- VS Code with MCP extensions
- Custom applications using the MCP SDK
2. Client
The Client is the software component within the Host that implements the client side of the MCP protocol. It maintains a 1:1 connection with an MCP server and manages:
- Capability negotiation during initialization
- Message routing via JSON-RPC between Host and Server
- Connection lifecycle management
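Capability negotiation happens in the very first exchange. A sketch of the shape of the `initialize` request a client sends (field names follow the MCP specification; the protocol version string and client name are illustrative):

```typescript
// JSON-RPC envelope used by every MCP message.
interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
}

// First message a client sends: declare the protocol version, its own
// capabilities, and identify itself. The server replies with the
// capabilities it supports (tools, resources, prompts, ...).
const initializeRequest: JsonRpcRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2025-03-26", // illustrative spec revision
    capabilities: {},
    clientInfo: { name: "my-host-app", version: "1.0.0" },
  },
};

console.log(initializeRequest.method); // initialize
```

Only after this handshake completes does the client start issuing `tools/list` and `tools/call` requests.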
3. Server
The Server is the component that exposes capabilities to the AI. An MCP server can offer three types of primitives:
The Three MCP Primitives
| Primitive | Description | Control | Example |
|---|---|---|---|
| Tools | Functions the AI can invoke | AI decides when to call them | create-sprint, analyze-diff |
| Resources | Data the application can read | Application controls access | Files, database, API responses |
| Prompts | Reusable predefined templates | User selects which to use | "Analyze this code for bugs" |
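To make the Tools row concrete, here is a dependency-free sketch of what a tool amounts to: a name, an input schema, and a handler. Tech-MCP validates inputs with Zod; a hand-rolled check stands in for the schema here so the sketch is self-contained, and the tool name and fake id are illustrative:

```typescript
// A tool = name + input validation + handler returning MCP-shaped content.
interface ToolResult {
  content: { type: "text"; text: string }[];
}

interface Tool {
  name: string;
  description: string;
  validate(args: unknown): args is { name: string };
  handler(args: { name: string }): ToolResult;
}

const createSprint: Tool = {
  name: "create-sprint",
  description: "Create a new sprint on the scrum board",
  validate(args): args is { name: string } {
    return typeof args === "object" && args !== null &&
      typeof (args as Record<string, unknown>).name === "string";
  },
  handler(_args) {
    // A real server would persist the sprint; we fake the id.
    return { content: [{ type: "text", text: "Sprint created with id 42" }] };
  },
};

// The call path: validate the arguments, then execute.
const args: unknown = { name: "Sprint-15" };
if (createSprint.validate(args)) {
  console.log(createSprint.handler(args).content[0].text);
}
```

With Zod, the `validate` function is replaced by a declared schema (`z.object({ name: z.string() })`), which also serves as the machine-readable parameter description the client discovers.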
Transport: How Client and Server Communicate
The Transport is the physical communication mechanism between Client and Server. MCP supports two main modes:
STDIO (Standard Input/Output)
The most common transport for local servers. Client and Server communicate via the process's stdin and stdout; each message is a JSON-RPC object separated by a newline. The stderr channel remains available for logging.
- Pros: simple setup, no network configuration, secure (local)
- Cons: local only, one process per connection
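The newline-delimited framing is simple enough to sketch. stdin delivers arbitrary byte chunks, so the reader splits complete lines into JSON-RPC messages while buffering any incomplete remainder (a simplified version of what an SDK transport does internally):

```typescript
// Splits a stream of text chunks into newline-delimited JSON-RPC
// messages. Incomplete trailing data stays buffered until the next chunk.
class LineFramer {
  private buffer = "";

  push(chunk: string): object[] {
    this.buffer += chunk;
    const lines = this.buffer.split("\n");
    this.buffer = lines.pop() ?? ""; // last piece may be incomplete
    return lines.filter((l) => l.trim() !== "").map((l) => JSON.parse(l));
  }
}

const framer = new LineFramer();
// A message arriving split across two stdin chunks:
const first = framer.push('{"jsonrpc":"2.0","id":1,"met');
const second = framer.push('hod":"tools/list"}\n');
console.log(first.length, second.length); // 0 1
```

This is also why servers must never write logs to stdout: any non-JSON line would corrupt the frame stream. Use stderr instead.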
HTTP + SSE (Streamable HTTP)
For remote servers or when multiple connections are needed. The client sends HTTP POST requests, the server responds via Server-Sent Events for streaming. Supports session IDs to maintain state between requests.
- Pros: remote support, multiple connections, scalable
- Cons: more complex to configure, requires an HTTP server
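On the wire, each client request is an HTTP POST whose `Accept` header admits both a plain JSON reply and an SSE stream, with a session id header to correlate stateful requests. A sketch of building such a request (header names follow the Streamable HTTP transport spec; the session id value is illustrative):

```typescript
// Minimal shape of an HTTP request, kept local to avoid DOM typings.
interface HttpRequestOptions {
  method: string;
  headers: Record<string, string>;
  body: string;
}

// Build the POST options for a Streamable HTTP call: the client must
// accept either a JSON response or an SSE stream, and echoes the
// session id so the server can keep state between requests.
function buildMcpRequest(body: object, sessionId?: string): HttpRequestOptions {
  const headers: Record<string, string> = {
    "Content-Type": "application/json",
    "Accept": "application/json, text/event-stream",
  };
  if (sessionId) headers["Mcp-Session-Id"] = sessionId;
  return { method: "POST", headers, body: JSON.stringify(body) };
}

const req = buildMcpRequest(
  { jsonrpc: "2.0", id: 1, method: "tools/list" },
  "session-abc",
);
console.log(req.headers["Mcp-Session-Id"]); // session-abc
```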
How a Tool Call Works
To understand concretely how MCP operates, let's follow the flow of a tool call step by step:
User: "Create a sprint called Sprint-15"
|
v
[1] The AI reasons and decides to call the "create-sprint" tool
|
v
[2] The MCP Client serializes the request as JSON-RPC:
{ "jsonrpc": "2.0", "id": 1,
  "method": "tools/call",
  "params": { "name": "create-sprint",
              "arguments": { "name": "Sprint-15" } } }
 |
 v
[3] The MCP Server receives, executes the logic, returns the result:
{ "jsonrpc": "2.0", "id": 1,
  "result": { "content": [{ "type": "text", "text": "Sprint created with id 42" }] } }
|
v
[4] The AI receives the result and presents it to the user:
"I've created Sprint-15 (ID: 42)"
Detailed Protocol Flow
- Discovery: at startup, the Client asks the Server for the list of available tools (tools/list)
- Schema: each tool declares its parameters with a validatable JSON/Zod schema
- Invocation: the AI autonomously decides when and which tool to call based on conversation context
- Execution: the Server executes the business logic and returns the result
- Composition: the AI can combine multiple tool calls to complete complex tasks
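The five steps above can be condensed into a toy dispatcher: a map of tools, a `tools/list` answer built from their declared parameters, and a `tools/call` path that routes by name (all names are illustrative):

```typescript
type Handler = (args: Record<string, unknown>) => string;

// Toy registry: maps tool name -> declared parameters + handler.
const tools = new Map<string, { params: string[]; run: Handler }>([
  ["create-sprint", {
    params: ["name"],
    run: (args) => `Sprint ${args.name} created`,
  }],
]);

// Discovery: what the server returns for a "tools/list" request.
function listTools(): { name: string; params: string[] }[] {
  return [...tools.entries()].map(([name, t]) => ({ name, params: t.params }));
}

// Invocation: routing a "tools/call" request by tool name.
function callTool(name: string, args: Record<string, unknown>): string {
  const tool = tools.get(name);
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  return tool.run(args);
}

console.log(listTools()[0].name);                              // create-sprint
console.log(callTool("create-sprint", { name: "Sprint-15" })); // Sprint Sprint-15 created
```

A real server layers JSON-RPC parsing, schema validation, and error reporting on top of this dispatch, but the discovery/invocation split is the same.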
MCP vs REST API vs AI Plugins
MCP is not the only way to integrate tools with AI, but it offers significant advantages over traditional alternatives:
Integration Approaches Comparison
| Feature | REST API | AI Plugin | MCP |
|---|---|---|---|
| Standardized | Yes (HTTP) | No (vendor-specific) | Yes (open protocol) |
| Auto-discovery | No | Partial | Yes |
| Parameter typing | OpenAPI | Varies | Zod / JSON Schema |
| Bidirectional | No | No | Yes |
| Native streaming | No | Varies | Yes (SSE) |
| Vendor lock-in | No | Yes | No |
| Composability | Manual | Limited | Native |
Why MCP Matters for Developers
MCP represents a paradigm shift in the interaction between developers and AI. Here are the main advantages that make it fundamental:
- Interoperability: an MCP server works with Claude Desktop, Cursor, VS Code, and any other compatible client without modifications. Write once, works everywhere.
- Composability: the AI can combine tools from different servers in a single workflow. For example: analyze the code (code-review), generate tests (test-generator), and log time (time-tracking) - all in a single conversation.
- Security: the protocol clearly defines what a server can do. The user always maintains control over approving sensitive actions.
- Open ecosystem: anyone can create an MCP server. No vendor permission needed, no marketplace dependencies.
- Persistent context: unlike stateless APIs, MCP supports stateful sessions, allowing the server to maintain context between successive calls.
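Composability in practice means the AI chains calls across servers, feeding one result into the next. A stubbed sketch, where three plain functions stand in for tools on three different MCP servers:

```typescript
// Stubs for tools that would live on three separate MCP servers.
const codeReview = (diff: string) => `2 issues found in ${diff}`;
const testGenerator = (review: string) => `3 tests generated for: ${review}`;
const timeTracking = (minutes: number) => `Logged ${minutes} minutes`;

// One conversation turn: the AI composes the calls, piping the
// review output into test generation, then logging the time spent.
const review = codeReview("feature-branch.diff");
const tests = testGenerator(review);
const log = timeTracking(25);

console.log([review, tests, log].join(" | "));
```

The orchestration lives in the model, not in glue code: no tool needs to know the others exist.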
The Tech-MCP Project: A Complete Suite
To demonstrate MCP's power and flexibility, I developed Tech-MCP, an open source project implementing 31 MCP servers organized in a TypeScript monorepo. These servers cover the entire software development lifecycle:
Tech-MCP by the Numbers
| Metric | Value |
|---|---|
| MCP Servers | 31 |
| Available Tools | 85+ |
| Shared Packages | 6 |
| Typed Events | 29 |
| Language | TypeScript 5.7 |
| Runtime | Node.js 20+ |
| Build System | Turborepo + pnpm |
Server Categories
The 31 servers are organized into functional categories covering every aspect of development:
- Productivity (3): code-review, dependency-manager, project-scaffolding
- DevOps (3): docker-compose, log-analyzer, cicd-monitor
- Database (2): db-schema-explorer, data-mock-generator
- Documentation (2): api-documentation, codebase-knowledge
- Testing (2): test-generator, performance-profiler
- Utility (3): regex-builder, http-client, snippet-manager
- Project Management (5): scrum-board, agile-metrics, time-tracking, project-economics, retrospective-manager
- Communication (2): standup-notes, environment-manager
- Advanced (9): incident-manager, decision-log, access-policy, quality-gate, workflow-orchestrator, insight-engine, mcp-registry, dashboard-api
Development Environment Setup
To follow this series and start developing MCP servers, make sure you have installed:
Prerequisites
- Node.js 20+: JavaScript/TypeScript runtime (node --version)
- pnpm 9+: package manager for monorepo (npm install -g pnpm)
- TypeScript 5.7+: typed language (npx tsc --version)
- Claude Desktop: to test MCP servers (download from claude.ai)
- Git: to clone the Tech-MCP repository
Clone and Start Tech-MCP
To explore the complete project, clone the repository and install dependencies:
```shell
# Clone the repository
git clone https://github.com/fedcal/Tech-MCP.git
cd Tech-MCP

# Install dependencies with pnpm
pnpm install

# Build all packages and servers
pnpm build

# Verify everything works
pnpm test
```
Configure Claude Desktop
To test an MCP server with Claude Desktop, modify the configuration file:
```jsonc
// macOS:   ~/Library/Application Support/Claude/claude_desktop_config.json
// Windows: %APPDATA%\Claude\claude_desktop_config.json
// Linux:   ~/.config/Claude/claude_desktop_config.json
{
  "mcpServers": {
    "scrum-board": {
      "command": "node",
      "args": ["path/to/Tech-MCP/servers/scrum-board/dist/index.js"],
      "env": {}
    }
  }
}
```
Series Structure
This 14-article series follows a progressive path, from theoretical foundations to advanced implementation:
Article Roadmap
| # | Topic | Level |
|---|---|---|
| 01 | Introduction to Model Context Protocol | Beginner |
| 02 | MCP Primitives: Tools, Resources, and Prompts | Beginner |
| 03 | Monorepo Architecture and Project Patterns | Intermediate |
| 04 | Building Your First MCP Server in TypeScript | Intermediate |
| 05 | Building an MCP Client and HTTP Transport | Intermediate |
| 06 | Shared Packages: Core, EventBus, and Database | Intermediate |
| 07-11 | Servers by Category (Productivity, DevOps, DB, PM, Advanced) | Intermediate/Advanced |
| 12 | Inter-Server Collaboration and EventBus | Advanced |
| 13 | Testing, Best Practices, and Production | Advanced |
| 14 | Tech-MCP: Complete Open Source Project Overview | Advanced |
Conclusions
The Model Context Protocol represents a qualitative leap in integrating AI with development tools. It's not just a simple wrapper around API calls, but a structured protocol that enables automatic discovery, strong typing, bidirectional communication, and native composability.
In the next article, we'll dive deep into the three MCP primitives (Tools, Resources, and Prompts), analyzing in detail how to define them, validate them with Zod, and manage their lifecycle. We'll start writing code, exploring the structure of a tool handler and the response format.
The complete code for all examples is available in the Tech-MCP GitHub repository.