I create modern web applications and custom digital tools to help businesses grow through technological innovation. My passion is combining computer science and economics to generate real value.
My passion for computer science was born at the Technical Commercial Institute of Maglie, where I discovered the power of programming and the fascination of creating digital solutions. From the start, I understood that computer science was not just code, but an extraordinary tool for turning ideas into reality.
During my studies in Business Information Systems, I began to interweave computer science and economics, understanding how technology can be the engine of growth for any business. This vision accompanied me to the University of Bari, where I obtained my degree in Computer Science, deepening my technical skills and passion for software development.
Today I put this experience at the service of businesses, professionals, and startups, creating tailor-made digital solutions that automate processes, optimize resources, and open up new business opportunities. True innovation begins when technology meets people's real needs.
My Skills
Data Analysis & Predictive Models
I transform data into strategic insights with in-depth analysis and predictive models for informed decisions
Process Automation
I create custom tools that automate repetitive operations and free up time for value-added activities
Custom Systems
I develop tailor-made software systems, from platform integrations to customized dashboards
I firmly believe that computer science is the most powerful tool for turning ideas into reality and improving people's lives.
Democratizing Technology
My mission is to make computer science accessible to everyone: from small local businesses to innovative startups, to professionals who want to digitalize their work. Every organization deserves to harness the potential of digital technology.
Combining Computer Science and Economics
It's not just about writing code: it's about understanding how technology can generate real value. By weaving together technical skills and economic vision, I help businesses grow, optimize processes, and reach new levels of efficiency and profitability.
Creating Tailor-Made Solutions
Every business is unique, and so should its solutions be. I develop customized tools that address each client's specific needs, automating repetitive processes and freeing up time for what really matters: growing the business.
Transform Your Business with Technology
Whether you run a shop, a professional practice, or a company, I can help you harness the potential of computer science to work better, faster, and smarter.
Bari, Puglia, Italy · Hybrid
Analysis and development of software systems using Java and Quarkus in the Health and Public Sector. Ongoing training on modern technologies and on agents, aimed at creating customized and efficient software solutions.
💼
06/2022 - 12/2024
Software Analyst and Back-End Developer, Associate Consultant
Links Management and Technology SpA
Experience analyzing as-is software systems and ETL flows using PowerCenter. Completed Spring Boot training for developing modern, scalable backend applications. Backend developer specialized in Spring Boot, with experience in database design and in the analysis, development, and testing of assigned tasks.
💼
02/2021 - 10/2021
Software programmer
Adesso.it (formerly WebScience srl)
Experience in AS-IS and TO-BE analysis, SEO improvements, and website enhancements aimed at improving user performance and engagement.
🎓
2018 - 2025
Degree in Computer Science
University of Bari Aldo Moro
Bachelor's degree in Computer Science, focusing on software engineering, algorithms, and modern development practices.
📚
2013 - 2018
Diploma - Corporate Information Systems
Technical Commercial Institute of Maglie
Technical diploma specializing in Business Information Systems, combining IT knowledge with business management.
Contact Me
Have a project in mind? Let's talk! Fill out the form below and I'll get back to you as soon as possible.
* Required fields. Your data will be used only to respond to your request.
Introduction: The 6 Shared Packages of Tech-MCP
In a monorepo with 22 MCP servers, code duplication would be a devastating problem.
Each server would need its own factory, its own logger, its own database management, multiplying
maintenance complexity by 22. The architectural solution adopted by the
Tech-MCP project
is shared packages: 6 internal libraries that centralize all cross-cutting logic
in a single point, reusable by every server.
These packages are not simple utilities: they represent the fundamental infrastructure
upon which every single server is built. The factory creates servers in 4 lines of code, the EventBus
enables collaboration between servers with 29 typed events, the database provides SQLite persistence
with automatic migrations, the testing package allows in-process testing without network, the CLI
manages servers from the terminal, and the Client Manager orchestrates synchronous server-to-server
communication.
The 6 Shared Packages
Package                    | Responsibility                                             | Key Dependencies
---------------------------|------------------------------------------------------------|---------------------------
@mcp-suite/core            | Server factory, configuration, logger, errors, types       | @modelcontextprotocol/sdk, zod
@mcp-suite/event-bus       | 29 typed events, LocalEventBus, pattern matching           | micromatch
@mcp-suite/database        | createDatabase, WAL mode, incremental migrations           | better-sqlite3
@mcp-suite/testing         | TestHarness, MockEventBus, InMemoryTransport               | @modelcontextprotocol/sdk
@mcp-suite/cli             | list, start, status commands for server management         | commander
@mcp-suite/client-manager  | Client pool, 3 transports, server-to-server communication  | @modelcontextprotocol/sdk
@mcp-suite/core: The Heart of the Suite
The @mcp-suite/core package is the foundation upon which every server is built.
It provides five critical modules: the server creation factory, the
configuration system with Zod validation, a structured logger
that writes to stderr, a typed error hierarchy, and the domain types
shared across all 22 servers.
At the core of @mcp-suite/core is the pair of interfaces that define how a server
is created and what it represents once instantiated. CreateServerOptions describes
the input options, while McpSuiteServer represents the central contract
of the architecture: every server implements it, every tool registration function receives it.
export interface CreateServerOptions {
name: string; // Unique server name (e.g., 'scrum-board')
version: string; // Semantic version (e.g., '0.1.0')
description?: string; // Human-readable description
config?: Partial<ServerConfig>; // Configuration override
eventBus?: EventBus; // Optional EventBus for collaboration
}
export interface McpSuiteServer {
name: string; // Unique server name
server: McpServer; // MCP server instance from official SDK
config: ServerConfig; // Configuration loaded and validated with Zod
logger: Logger; // Structured logger (writes to stderr)
eventBus?: EventBus; // Optional EventBus (undefined if not provided)
httpServer?: Server; // HTTP server reference (if HTTP transport active)
}
createMcpServer(): The Main Factory
The createMcpServer() function is the factory that every server invokes to create
itself. In just a few lines it orchestrates four fundamental operations: loads the configuration
from environment variables, creates the logger with the configured level, instantiates the
McpServer from the official SDK, and returns the complete McpSuiteServer bundle.
export function createMcpServer(options: CreateServerOptions): McpSuiteServer {
// 1. Load configuration from env + overrides
const config = loadConfig(options.name, options.config);
// 2. Create logger with configured level
const logger = new Logger(options.name, config.logLevel);
logger.info(`Initializing ${options.name} v${options.version}`);
// 3. Instantiate McpServer from official SDK
const server = new McpServer({
name: options.name,
version: options.version,
});
// 4. Return the complete bundle
return { name: options.name, server, config, logger, eventBus: options.eventBus };
}
Once the McpSuiteServer is created, the startServer() function launches
it by automatically selecting the transport based on configuration. The STDIO transport uses
stdin/stdout for JSON-RPC communication, while the HTTP transport
uses the Streamable HTTP protocol from the MCP SDK in stateful
mode (each session gets a UUID).
export async function startStdioServer(suite: McpSuiteServer): Promise<void> {
const transport = new StdioServerTransport();
suite.logger.info('Starting server with STDIO transport');
await suite.server.connect(transport);
}
export async function startHttpServer(suite: McpSuiteServer): Promise<void> {
const port = suite.config.port ?? 3000;
const app = createMcpExpressApp();
const transport = new StreamableHTTPServerTransport({
sessionIdGenerator: () => randomUUID(),
});
await suite.server.connect(transport);
app.post('/mcp', async (req, res) => {
await transport.handleRequest(req, res, req.body);
});
app.get('/mcp', async (req, res) => {
await transport.handleRequest(req, res);
});
app.get('/health', (_req, res) => {
res.json({ status: 'ok', server: suite.name });
});
suite.httpServer = app.listen(port);
}
export async function startServer(suite: McpSuiteServer): Promise<void> {
if (suite.config.transport === 'http') {
await startHttpServer(suite);
} else {
await startStdioServer(suite);
}
}
config.ts: Configuration with Zod Validation
The configuration system follows a priority cascade: server-specific environment
variable, then global variable, then programmatic override, finally the Zod schema default.
The ServerConfigSchema defines valid parameters and their default values.
The loadConfig() function converts the server name into environment variable format,
allowing granular per-server configurations:
# Global (all servers)
MCP_SUITE_TRANSPORT=http
# Specific to scrum-board
MCP_SUITE_SCRUM_BOARD_LOG_LEVEL=debug
MCP_SUITE_SCRUM_BOARD_TRANSPORT=stdio
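The naming convention and priority cascade above can be sketched as follows. This is an illustrative re-implementation, not the actual loadConfig() from @mcp-suite/core; the helper names envKey and resolveSetting are invented for the example:

```typescript
// Hypothetical sketch of the configuration cascade described above.
// 'scrum-board' + 'logLevel' -> 'MCP_SUITE_SCRUM_BOARD_LOG_LEVEL'
function envKey(serverName: string, setting: string): string {
  const server = serverName.toUpperCase().replace(/-/g, '_');
  const snake = setting.replace(/([A-Z])/g, '_$1').toUpperCase();
  return `MCP_SUITE_${server}_${snake}`;
}

function resolveSetting(
  serverName: string,
  setting: string,
  override: string | undefined,
  schemaDefault: string,
  env: Record<string, string | undefined> = process.env,
): string {
  const globalKey = `MCP_SUITE_${setting.replace(/([A-Z])/g, '_$1').toUpperCase()}`;
  return (
    env[envKey(serverName, setting)] ?? // 1. server-specific env var
    env[globalKey] ??                   // 2. global env var
    override ??                         // 3. programmatic override
    schemaDefault                       // 4. Zod schema default
  );
}
```

With the example variables above, resolveSetting('scrum-board', 'logLevel', ...) would pick up MCP_SUITE_SCRUM_BOARD_LOG_LEVEL, while a server with no specific variable would fall back to the global MCP_SUITE_TRANSPORT.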
logger.ts: Structured Logger on stderr
The Logger writes structured JSON logs to stderr. This choice is critical: the MCP protocol uses stdout exclusively for JSON-RPC messages between client and server, so if the logger wrote to stdout, log lines would mix with the protocol stream and corrupt communication, leaving the MCP client unable to talk to the server. stderr is the designated channel for diagnostics, maintaining a clean separation between data and logs.
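Under this constraint, a minimal structured logger might look like the following. This is a sketch consistent with the description above; the real Logger in @mcp-suite/core may differ in field names and features:

```typescript
// Minimal sketch of a structured JSON logger that writes only to stderr,
// keeping stdout free for JSON-RPC. Field names are illustrative.
type LogLevel = 'debug' | 'info' | 'warn' | 'error';
const LEVELS: LogLevel[] = ['debug', 'info', 'warn', 'error'];

class StderrLogger {
  constructor(private name: string, private level: LogLevel = 'info') {}

  // One JSON object per line: easy to parse, never touches stdout
  format(level: LogLevel, message: string): string {
    return JSON.stringify({
      timestamp: new Date().toISOString(),
      level,
      server: this.name,
      message,
    });
  }

  log(level: LogLevel, message: string): void {
    if (LEVELS.indexOf(level) < LEVELS.indexOf(this.level)) return; // level filter
    process.stderr.write(this.format(level, message) + '\n');
  }

  debug(message: string): void { this.log('debug', message); }
  info(message: string): void { this.log('info', message); }
  warn(message: string): void { this.log('warn', message); }
  error(message: string): void { this.log('error', message); }
}
```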
errors.ts: Typed Error Hierarchy
The package defines a typed error hierarchy that allows servers to handle different failure
types uniformly. Each error has a machine-readable code and optional details for debugging.
Error Hierarchy
Class               | Code                  | Usage
--------------------|-----------------------|----------------------------------------
McpSuiteError       | -                     | Base class for all errors
ConfigError         | CONFIG_ERROR          | Invalid or missing configuration
ConnectionError     | CONNECTION_ERROR      | Database or service connection problems
ToolExecutionError  | TOOL_EXECUTION_ERROR  | Error during MCP tool execution
NotFoundError       | NOT_FOUND             | Requested resource not found
ValidationError     | VALIDATION_ERROR      | Invalid user input
import { NotFoundError, ToolExecutionError } from '@mcp-suite/core';
// In a store
const sprint = db.prepare('SELECT * FROM sprints WHERE id = ?').get(id);
if (!sprint) {
throw new NotFoundError('Sprint', String(id));
// Message: "Sprint with id '42' not found"
}
// In a tool handler
try {
const result = store.createSprint(input);
return { content: [{ type: 'text', text: JSON.stringify(result) }] };
} catch (error) {
if (error instanceof NotFoundError) {
return { content: [{ type: 'text', text: error.message }], isError: true };
}
throw new ToolExecutionError('Failed to create sprint', error);
}
types.ts: Shared Domain Types
The types.ts module defines over 30 domain types shared across servers,
organized by functional area. This ensures all servers use the same data structures
for common concepts like tasks, sprints, metrics, and tool results.
The index.ts file re-exports everything in an organized way, allowing servers
to import any component from a single entry point:
import {
createMcpServer,
type McpSuiteServer,
type EventBus,
Logger,
NotFoundError
} from '@mcp-suite/core';
@mcp-suite/event-bus: Inter-Server Collaboration with Typed Events
The @mcp-suite/event-bus package implements a typed event system for asynchronous
inter-server collaboration. It defines 29 events organized by domain,
a generic EventBus interface, and a local implementation based on Node.js
EventEmitter with pattern matching support via the
micromatch library.
packages/event-bus/
+-- package.json
+-- tsconfig.json
+-- src/
+-- index.ts # Re-export of public modules
+-- bus.ts # EventBus interface, handler types
+-- events.ts # EventMap with all 29 typed events
+-- local-bus.ts # LocalEventBus (in-process implementation)
EventBus Interface
The EventBus interface defines the contract for any event bus implementation.
It exposes four methods: publish to emit typed events, subscribe
to listen to a specific event, subscribePattern to listen to events matching
a wildcard pattern, and clear to remove all subscriptions.
export interface EventBus {
publish<E extends EventName>(
event: E, payload: EventPayload<E>
): Promise<void>;
subscribe<E extends EventName>(
event: E, handler: EventHandler<E>
): () => void;
subscribePattern(
pattern: string, handler: PatternHandler
): () => void;
clear(): void;
}
// Handler for specific event (typed payload)
export type EventHandler<E extends EventName> = (
payload: EventPayload<E>
) => void | Promise<void>;
// Handler for pattern (event name + generic payload)
export type PatternHandler = (
event: string, payload: unknown
) => void | Promise<void>;
EventMap: The 29 Events Organized by Domain
The EventMap is a TypeScript interface that maps each event name to its typed
payload. Events follow the domain:action convention and cover 11 functional
domains, from code review to retrospectives, from CI/CD to standup.
Derived types from the EventMap enable strong compile-time typing:
// Name of any valid event (union of 29 literal strings)
export type EventName = keyof EventMap;
// Typed payload for a specific event
export type EventPayload<E extends EventName> = EventMap[E];
// EventPayload<'scrum:sprint-started'>
// => { sprintId: string; name: string; startDate: string; endDate: string }
LocalEventBus: In-Process Implementation
The LocalEventBus is the default implementation based on Node.js
EventEmitter. When an event is published, the bus first emits for direct
subscribers, then checks all pattern subscribers using micromatch for
wildcard matching.
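The publish flow just described can be sketched as follows. This is a simplified re-implementation for illustration: a tiny glob-to-regex helper that handles only '*' stands in for micromatch, and plain Maps replace Node's EventEmitter:

```typescript
// Simplified sketch of the LocalEventBus publish flow: direct subscribers
// first, then pattern subscribers. Not the actual implementation.
type Handler = (payload: unknown) => void | Promise<void>;
type PatternHandler = (event: string, payload: unknown) => void | Promise<void>;

// Supports only '*' wildcards (unlike micromatch, which also handles braces)
function globToRegex(pattern: string): RegExp {
  const escaped = pattern
    .replace(/[.+?^$()|[\]\\]/g, '\\$&')
    .replace(/\*/g, '[^:]*');
  return new RegExp(`^${escaped}$`);
}

class MiniEventBus {
  private handlers = new Map<string, Set<Handler>>();
  private patterns: Array<{ regex: RegExp; handler: PatternHandler }> = [];

  subscribe(event: string, handler: Handler): () => void {
    if (!this.handlers.has(event)) this.handlers.set(event, new Set());
    this.handlers.get(event)!.add(handler);
    return () => { this.handlers.get(event)!.delete(handler); };
  }

  subscribePattern(pattern: string, handler: PatternHandler): () => void {
    const entry = { regex: globToRegex(pattern), handler };
    this.patterns.push(entry);
    return () => { this.patterns = this.patterns.filter((p) => p !== entry); };
  }

  async publish(event: string, payload: unknown): Promise<void> {
    for (const h of this.handlers.get(event) ?? []) await h(payload);   // 1. direct
    for (const p of this.patterns) {                                    // 2. patterns
      if (p.regex.test(event)) await p.handler(event, payload);
    }
  }
}
```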
Tools publish events using the optional chaining operator (?.)
to handle the case where the EventBus is not present. This fire-and-forget
pattern ensures the tool never waits for handler completion, and handler errors never
propagate to the calling tool.
// In the tool handler: fire-and-forget
eventBus?.publish('scrum:sprint-started', {
sprintId: String(sprint.id),
name: sprint.name,
startDate: sprint.startDate,
endDate: sprint.endDate,
});
The micromatch library supports advanced wildcard patterns for subscriptions:
Supported Wildcard Patterns
Pattern         | Matches                | Example
----------------|------------------------|-----------------------------------------------
scrum:*         | All scrum events       | scrum:sprint-started, scrum:task-updated
*:*-completed   | All completion events  | scrum:sprint-completed, retro:completed
*:*-alert       | All alerts             | code:dependency-alert, economics:budget-alert
{scrum,time}:*  | Scrum or time events   | scrum:task-updated, time:entry-logged
// Subscribe to all scrum domain events
eventBus.subscribePattern('scrum:*', (eventName, payload) => {
console.log(`Scrum event: ${eventName}`, payload);
});
// Subscribe to all completion events
eventBus.subscribePattern('*:*-completed', (eventName, payload) => {
console.log(`Completion: ${eventName}`, payload);
});
// Cancel a subscription
const unsubscribe = eventBus.subscribe('scrum:task-updated', handler);
unsubscribe(); // removes the listener
@mcp-suite/database: SQLite Persistence with WAL and Migrations
The @mcp-suite/database package provides an abstraction for creating and managing
SQLite databases. It offers two main features: a factory for creating database
connections (createDatabase) and an incremental migration system
(runMigrations). It uses better-sqlite3 as a high-performance
synchronous native driver.
The createDatabase function automatically handles directory creation,
WAL mode configuration, and foreign key enablement.
It also supports in-memory mode for testing.
export interface DatabaseOptions {
serverName: string; // Server name (becomes .db filename)
dataDir?: string; // Custom directory (default: ~/.mcp-suite/data/)
inMemory?: boolean; // If true, in-memory database (for tests)
}
export function createDatabase(options: DatabaseOptions): Database.Database {
if (options.inMemory) {
return new Database(':memory:');
}
const dataDir = options.dataDir || DEFAULT_DATA_DIR;
if (!existsSync(dataDir)) {
mkdirSync(dataDir, { recursive: true });
}
const dbPath = join(dataDir, `${options.serverName}.db`);
const db = new Database(dbPath);
// Optimal SQLite configuration
db.pragma('journal_mode = WAL'); // Write-Ahead Logging
db.pragma('foreign_keys = ON'); // Referential integrity
return db;
}
Why WAL Mode?
Write-Ahead Logging (WAL) is a SQLite journal mode that offers significant
advantages: concurrent reads without locks, non-blocking writes,
better I/O performance, and automatic crash recovery. Readers read from the
main file while the writer writes to the WAL file, without mutual interference. Periodically
a checkpoint synchronizes the WAL into the main file.
By default, databases live under ~/.mcp-suite/data/, with one .db file per server that needs persistence.
The migration system allows servers to evolve their database schema incrementally and safely.
Each migration has a progressive version number, a human-readable description, and the SQL
to execute. The _migrations table tracks already-applied migrations.
export interface Migration {
version: number; // Progressive number
description: string; // Human-readable description
up: string; // SQL to execute
}
export function runMigrations(db: Database.Database, migrations: Migration[]): void {
// 1. Create _migrations table if not exists
db.exec(`
CREATE TABLE IF NOT EXISTS _migrations (
version INTEGER PRIMARY KEY,
description TEXT NOT NULL,
applied_at TEXT NOT NULL DEFAULT (datetime('now'))
)
`);
// 2. Read already-applied migrations
const applied = db
.prepare('SELECT version FROM _migrations ORDER BY version')
.all() as Array<{ version: number }>;
const appliedVersions = new Set(applied.map((m) => m.version));
// 3. Sort and apply only new migrations
const sorted = [...migrations].sort((a, b) => a.version - b.version);
const insertMigration = db.prepare(
'INSERT INTO _migrations (version, description) VALUES (?, ?)',
);
for (const migration of sorted) {
if (appliedVersions.has(migration.version)) continue;
db.exec(migration.up);
insertMigration.run(migration.version, migration.description);
}
}
How Servers Use the Database
Each server that needs persistence defines its own migrations and applies them
in its Store constructor:
// In servers/scrum-board/src/services/scrum-store.ts
const migrations: Migration[] = [
{
version: 1,
description: 'Create sprints, stories, and tasks tables',
up: `
CREATE TABLE IF NOT EXISTS sprints (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT NOT NULL,
startDate TEXT NOT NULL,
endDate TEXT NOT NULL,
goals TEXT NOT NULL DEFAULT '[]',
status TEXT NOT NULL DEFAULT 'planning',
createdAt TEXT NOT NULL DEFAULT (datetime('now'))
);
CREATE TABLE IF NOT EXISTS stories (
id INTEGER PRIMARY KEY AUTOINCREMENT,
title TEXT NOT NULL,
storyPoints INTEGER NOT NULL DEFAULT 0,
status TEXT NOT NULL DEFAULT 'todo',
sprintId INTEGER REFERENCES sprints(id),
createdAt TEXT NOT NULL DEFAULT (datetime('now'))
);
`,
},
];
export class ScrumStore {
private db: Database.Database;
constructor(options?: { inMemory?: boolean }) {
this.db = createDatabase({
serverName: 'scrum-board',
inMemory: options?.inMemory,
});
runMigrations(this.db, migrations);
}
}
@mcp-suite/testing: TestHarness and MockEventBus
The @mcp-suite/testing package provides utilities for testing MCP servers in
isolation, without requiring a real client or network connection. It offers
two main components: a TestHarness based on the MCP SDK's
InMemoryTransport and a MockEventBus for verifying event
emission in tests.
TestHarness: In-Process Testing
The TestHarness creates a client-server pair connected in memory using
InMemoryTransport.createLinkedPair(). The test client can call tools,
list resources, and verify results exactly like a real client, but without external
processes, network ports, or STDIO files.
export interface TestHarness {
client: Client; // MCP client connected to the server
close: () => Promise<void>; // Function to close the connection
}
export async function createTestHarness(server: McpServer): Promise<TestHarness> {
// 1. Create a pair of linked in-memory transports
const [clientTransport, serverTransport] = InMemoryTransport.createLinkedPair();
// 2. Create a test client
const client = new Client({
name: 'test-client',
version: '1.0.0',
});
// 3. Connect server and client to their respective transports
await server.connect(serverTransport);
await client.connect(clientTransport);
// 4. Return client and cleanup function
return {
client,
close: async () => {
await client.close();
await server.close();
},
};
}
The MockEventBus implements the EventBus interface, recording all
published events to enable assertions in tests. It offers utility methods such as
wasPublished() and getPublishedEvents().
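Based on the methods named above, the mock might look roughly like this. This is a sketch, not the actual @mcp-suite/testing implementation, which may record more metadata:

```typescript
// Sketch of a mock EventBus that records published events instead of
// dispatching them, so tests can assert on what a tool emitted.
interface PublishedEvent {
  event: string;
  payload: unknown;
}

class SimpleMockEventBus {
  private published: PublishedEvent[] = [];

  async publish(event: string, payload: unknown): Promise<void> {
    this.published.push({ event, payload }); // record, don't dispatch
  }

  wasPublished(event: string): boolean {
    return this.published.some((e) => e.event === event);
  }

  getPublishedEvents(event?: string): PublishedEvent[] {
    return event
      ? this.published.filter((e) => e.event === event)
      : [...this.published];
  }

  clear(): void {
    this.published = [];
  }
}
```

In a test, the mock is injected in place of the LocalEventBus, so an assertion like wasPublished('scrum:sprint-started') verifies that a tool emitted the expected event without any real subscribers running.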
@mcp-suite/cli: Server Management from the Terminal
The @mcp-suite/cli package provides a command-line interface built with
Commander.js for managing MCP Suite servers. It offers three commands:
list to enumerate all available servers, start to launch a specific
server, and status to check which servers are built.
# List all available servers
npx @mcp-suite/cli list
# Start a server with STDIO transport (default)
npx @mcp-suite/cli start scrum-board
# Start a server with HTTP transport
npx @mcp-suite/cli start scrum-board --transport http
# Check build status
npx @mcp-suite/cli status
The CLI scans the servers/ directory to automatically discover all available servers
and verifies the presence of the compiled dist/index.js file before starting a server.
The start command launches the Node.js process as a subprocess, forwarding
stdin/stdout/stderr and setting the appropriate environment variables.
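The subprocess invocation described above can be sketched as follows. The helper name buildStartCommand and the exact env handling are illustrative assumptions, not the actual CLI internals:

```typescript
// Hypothetical sketch of how the start command might assemble the subprocess
// invocation: resolve the compiled entry point and set the transport env var.
import { join } from 'node:path';

interface StartCommand {
  cmd: string;
  args: string[];
  env: Record<string, string>;
}

function buildStartCommand(
  serverName: string,
  transport: 'stdio' | 'http' = 'stdio',
  serversDir = 'servers',
): StartCommand {
  // The CLI checks that this compiled file exists before starting
  const entry = join(serversDir, serverName, 'dist', 'index.js');
  return {
    cmd: 'node',
    args: [entry],
    env: {
      ...(process.env as Record<string, string>),
      MCP_SUITE_TRANSPORT: transport, // picked up by loadConfig() at startup
    },
  };
}
```

The actual launch would then forward stdio to the parent terminal, e.g. with spawn(cmd, args, { env, stdio: 'inherit' }) from node:child_process.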
npx @mcp-suite/cli status
MCP Suite Status:
Total servers: 22
Built: 20
Not built: 2
Built servers:
+ agile-metrics
+ api-documentation
+ scrum-board
...
Not built:
x docker-compose
x performance-profiler
@mcp-suite/client-manager: Client Pool for Server-to-Server Communication
The @mcp-suite/client-manager package manages a pool of MCP clients
for synchronous server-to-server communication. It allows a server to call tools exposed by
other servers as if they were local functions, completely abstracting transport details.
Unlike the EventBus (fire-and-forget, asynchronous), the Client Manager provides
synchronous request/response communication.
ServerRegistryEntry and McpClientManager
The Client Manager works with a registry + lazy connection + pool pattern:
target servers are registered, connections are created only on first use, and are reused
across subsequent calls.
The Client Manager creates clients lazily: the connection is established
only on the first callTool() call. Subsequent calls reuse the client from the
cache. If a target server is unreachable, calling tools use the graceful degradation
pattern: they continue to work without the cross-server enrichment.
// Complete example: usage in a server
const clientManager = new McpClientManager();
clientManager.registerMany([
{
name: 'scrum-board',
transport: 'http',
url: process.env.MCP_SUITE_SCRUM_BOARD_URL || 'http://localhost:3018/mcp',
},
{
name: 'time-tracking',
transport: 'http',
url: process.env.MCP_SUITE_TIME_TRACKING_URL || 'http://localhost:3022/mcp',
},
]);
// Cross-server call with graceful degradation
if (enrichFromExternal && clientManager) {
try {
const result = await clientManager.callTool('scrum-board', 'get-sprint',
{ sprintId: 1 });
// use the result to enrich data
} catch (error) {
logger.warn('Cross-server call failed, continuing without enrichment');
}
}
// Cleanup at the end
await clientManager.disconnectAll();
Summary: How the Packages Collaborate
The 6 shared packages form a layered architecture where every server can be created,
configured, tested, and orchestrated with just a few lines of code:
MCP Server Lifecycle
Creation: createMcpServer() from @mcp-suite/core creates the server with configuration, logger, and EventBus
Persistence: createDatabase() from @mcp-suite/database creates the SQLite database with WAL mode
Migrations: runMigrations() applies the incremental schema
Events: LocalEventBus from @mcp-suite/event-bus enables fire-and-forget collaboration
Cross-server: McpClientManager from @mcp-suite/client-manager enables synchronous RPC calls
Startup: startServer() launches with the appropriate transport
Testing: createTestHarness() from @mcp-suite/testing enables in-process testing
Management: @mcp-suite/cli manages servers from the terminal
Conclusions
The shared packages represent the architectural core of Tech-MCP. Without them, each server
would need to reimplement the factory, logger, database management, event system, and
cross-server communication -- multiplying complexity by 22.
Thanks to this architecture, creating a new server is a matter of a few lines: just import
from @mcp-suite/core, define migrations with @mcp-suite/database,
and register the tools.
In the next article, we will move from shared packages to concrete servers,
exploring the productivity servers: code-review for code analysis,
dependency-manager for dependency management, and project-scaffolding for automatic
scaffolding of new projects. We will see how these servers leverage the shared package
infrastructure to offer advanced functionality with minimal code.