I create modern web applications and custom digital tools to help businesses grow through technological innovation. My passion is combining computer science and economics to generate real value.
My passion for computer science was born at the Technical Commercial Institute of Maglie, where I discovered the power of programming and the appeal of building digital solutions. From the start, I understood that computer science was not just code, but an extraordinary tool for turning ideas into reality.
During my studies in Business Information Systems, I began to interweave computer science and economics, understanding how technology can be the engine of growth for any business. This vision accompanied me to the University of Bari, where I obtained my degree in Computer Science, deepening my technical skills and passion for software development.
Today I put this experience at the service of businesses, professionals and startups, creating tailor-made digital solutions that automate processes, optimize resources and open new business opportunities. Because true innovation begins when technology meets the real needs of people.
My Skills
Data Analysis & Predictive Models
I transform data into strategic insights with in-depth analysis and predictive models for informed decisions
Process Automation
I create custom tools that automate repetitive operations and free up time for value-added activities
Custom Systems
I develop tailor-made software systems, from platform integrations to customized dashboards
I firmly believe that computer science is the most powerful tool for turning ideas into reality and improving people's lives.
Democratizing Technology
My mission is to make computer science accessible to everyone: from small local businesses to innovative startups, to professionals who want to digitize their work. Every organization deserves to harness the potential of digital technology.
Combining Computer Science and Economics
It's not just about writing code: it's about understanding how technology can generate real value. By weaving together technical skills and an economic vision, I help businesses grow, optimize processes, and reach new levels of efficiency and profitability.
Creating Tailor-Made Solutions
Every business is unique, and so should its solutions be. I develop customized tools that meet each client's specific needs, automating repetitive processes and freeing up time for what really matters: growing the business.
Transform Your Business with Technology
Whether you run a shop, a professional practice, or a company, I can help you harness the potential of computing to work better, faster, and smarter.
Bari, Puglia, Italy · Hybrid
Analysis and development of software systems using Java and Quarkus in the Health and Public Sector. Continuous training on modern technologies for building customized, efficient software solutions, and on agents.
💼
06/2022 - 12/2024
Software analyst and Back End Developer Associate Consultant
Links Management and Technology SpA
Experience analyzing as-is software systems and ETL flows using PowerCenter. Completed Spring Boot training for developing modern and scalable backend applications. Backend developer specialized in Spring Boot, with experience in database design, analysis, development and testing of assigned tasks.
💼
02/2021 - 10/2021
Software programmer
Adesso.it (formerly WebScience srl)
Experience in AS-IS and TO-BE analysis, SEO improvements, and website enhancements to improve user performance and engagement.
🎓
2018 - 2025
Degree in Computer Science
University of Bari Aldo Moro
Bachelor's degree in Computer Science, focusing on software engineering, algorithms, and modern development practices.
📚
2013 - 2018
Diploma - Corporate Information Systems
Technical Commercial Institute of Maglie
Technical diploma specializing in Business Information Systems, combining IT knowledge with business management.
Contact Me
Have a project in mind? Let's talk! Fill out the form below and I'll get back to you as soon as possible.
* Required fields. Your data will only be used to respond to your request.
Designing Architecture with Copilot
Good architecture is the foundation of a sustainable project. In this phase,
we'll use Copilot to design both the backend and the frontend,
defining how they communicate through a clear API contract. Architecture
isn't just "where to put files": it's a set of decisions that shape the
scalability, testability, and maintainability of the project.
Fundamental Architectural Principles
Separation of Concerns: Each component has a single responsibility
Loose Coupling: Modules depend on abstractions, not implementations
High Cohesion: Related code stays together
Testability: Each part can be tested in isolation
Maintainability: Code is understandable and modifiable
Series Overview
This is the third article of the series "GitHub Copilot: AI-Driven Personal Project".
| # | Module | Status |
|---|--------|--------|
| 1 | Foundation - Copilot as Partner | Completed |
| 2 | Ideation and Requirements | Completed |
| 3 | Backend and Frontend Architecture | You are here |
| 4 | Code Structure | Next |
| 5 | Prompt Engineering and MCP Agents | - |
| 6 | Testing and Quality | - |
| 7 | Documentation | - |
| 8 | Deploy and DevOps | - |
| 9 | Evolution and Maintenance | - |
Choosing Backend Architecture
For personal projects, there are different architectural patterns. The choice depends
on domain complexity and team skills (you!).
Architectural Pattern Comparison
| Pattern | Complexity | When to Use | Pros | Cons |
|---------|------------|-------------|------|------|
| Simple MVC | Low | CRUD apps, prototypes | Fast to implement | Doesn't scale well |
| Layered/N-Tier | Medium | Standard business apps | Clear separation | Can become rigid |
| Clean Architecture | High | Complex domains | Maximum testability | Overengineering for MVP |
| Hexagonal | High | Many external integrations | Ports & Adapters | High boilerplate |
Advice for Personal Projects
For an MVP, use a Simplified Layered Architecture: enough
structure to be maintainable, but not so complex it slows you down.
You can always refactor toward Clean Architecture when the project grows.
Backend: Layered Architecture
Prompt - Backend Architecture Design
Design a backend architecture for my project.
PROJECT: TaskFlow (time tracking + task management)
STACK: Node.js + Express + TypeScript + PostgreSQL
TEAM SIZE: Solo developer
EXPECTED SCALE: 1000 users, 100k requests/day
Requirements:
- Clean separation between layers
- Easy to test each layer independently
- Scalable for future features
- Simple enough for a solo developer to maintain
- Support for both REST API and future GraphQL
Include:
1. Layer diagram with responsibilities
2. Complete folder structure with explanations
3. Data flow example for a typical request
4. Error handling strategy across layers
5. Dependency injection approach
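A prompt like this typically yields a Controller → Service → Repository layout. The sketch below shows the core idea in plain TypeScript; all names (`TaskService`, `TaskRepository`, the in-memory implementation) are illustrative, not Copilot's actual output, and a real project would split these into separate files per layer.

```typescript
// Minimal sketch of a layered flow: service depends on a repository
// abstraction, so persistence can be swapped (Postgres, in-memory, ...).

interface Task {
  id: string;
  title: string;
  status: "TODO" | "IN_PROGRESS" | "DONE";
}

// Repository layer: only knows about persistence.
interface TaskRepository {
  findByUser(userId: string): Promise<Task[]>;
}

// In-memory implementation, handy for testing the service in isolation.
class InMemoryTaskRepository implements TaskRepository {
  constructor(private tasks: Map<string, Task[]> = new Map()) {}
  async findByUser(userId: string): Promise<Task[]> {
    return this.tasks.get(userId) ?? [];
  }
}

// Service layer: business rules, depends only on the abstraction.
class TaskService {
  constructor(private repo: TaskRepository) {}
  async listOpenTasks(userId: string): Promise<Task[]> {
    const tasks = await this.repo.findByUser(userId);
    return tasks.filter((t) => t.status !== "DONE");
  }
}

// A controller layer would translate HTTP <-> service calls (omitted here).
```

Because `TaskService` never touches a database directly, each layer can be tested in isolation, which is exactly the testability requirement in the prompt.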
Frontend: Feature-Based Architecture
For the frontend, a feature-based structure scales better than the traditional
type-based structure (components, services, etc.).
Prompt - Frontend Architecture Design
Design a frontend architecture for my project.
PROJECT: TaskFlow (time tracking + task management)
STACK: Angular 21 with standalone components
TEAM SIZE: Solo developer
Requirements:
- Feature-based folder structure
- Lazy loading per route for performance
- State management strategy (signals vs NgRx)
- API communication layer with interceptors
- Reusable component library (design system)
- Offline support consideration
Include:
1. Detailed folder structure with explanations
2. State management approach
3. Data flow diagram
4. Component communication patterns
5. API service architecture
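On state management, the idea behind a signals-based feature store can be shown framework-agnostically. The sketch below is plain TypeScript, not Angular's actual `signal()`/`computed()` API; it only illustrates the pattern of one store per feature holding source-of-truth state plus derived state.

```typescript
// Framework-agnostic sketch of a feature-scoped store (names illustrative).
// In the real app, Angular signals would hold `tasks` and a computed()
// would derive `openCount`; here a getter plays that role.

type TaskStatus = "TODO" | "IN_PROGRESS" | "DONE";
interface Task { id: string; title: string; status: TaskStatus; }

class TasksStore {
  private tasks: Task[] = [];

  // Single write path keeps state changes traceable.
  setTasks(tasks: Task[]): void {
    this.tasks = tasks;
  }

  // Derived state: recomputed from the source of truth on each read.
  get openCount(): number {
    return this.tasks.filter((t) => t.status !== "DONE").length;
  }
}
```

Keeping the store inside the feature folder (rather than a global state tree) is what makes lazy loading per route straightforward.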
API Contract First
A clear API contract avoids communication issues between frontend and backend.
Define it BEFORE implementing, not during.
Prompt - API Contract Design
Design a complete API contract for my project.
PROJECT: TaskFlow
RESOURCES: Users, Tasks, TimeEntries, Reports
For each resource include:
1. All endpoints (REST conventions)
2. Request/Response schemas (TypeScript interfaces)
3. Standard error response format
4. Authentication requirements per endpoint
5. Pagination strategy for lists
6. Rate limiting considerations
Use OpenAPI/Swagger style descriptions.
Include examples for each endpoint.
## Tasks API v1
Base URL: `https://api.taskflow.com/v1`
Authentication: Bearer token required for all endpoints
---
### List Tasks
`GET /tasks`
**Query Parameters:**
| Param | Type | Default | Description |
|-------|------|---------|-------------|
| page | integer | 1 | Page number (1-indexed) |
| limit | integer | 20 | Items per page (max: 100) |
| status | string | - | Filter by status |
| search | string | - | Search in title/description |
**Example Request:**
```
GET /tasks?page=1&limit=10&status=TODO
Authorization: Bearer eyJhbGciOiJIUzI1...
```
**Response 200:**
```json
{
  "success": true,
  "data": [
    {
      "id": "task_abc123",
      "title": "Complete API documentation",
      "description": "Write OpenAPI specs",
      "status": "TODO",
      "dueDate": "2025-02-15T10:00:00Z",
      "createdAt": "2025-01-30T12:00:00Z",
      "updatedAt": "2025-01-30T12:00:00Z"
    }
  ],
  "meta": {
    "page": 1,
    "limit": 10,
    "total": 45,
    "totalPages": 5,
    "hasNext": true,
    "hasPrev": false
  }
}
```
**Error Responses:**
| Code | When |
|------|------|
| 401 | Missing or invalid token |
| 500 | Server error |
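The defaults in the parameter table (page 1, limit 20, capped at 100) are easy to get wrong per-endpoint, so it's worth centralizing them in one helper. A sketch, with an illustrative function name:

```typescript
// Normalize raw pagination query params to the documented defaults:
// page defaults to 1 (1-indexed), limit defaults to 20 and is capped at 100.
function parsePagination(query: { page?: string; limit?: string }): {
  page: number;
  limit: number;
} {
  const page = Math.max(1, Number(query.page) || 1);
  const limit = Math.min(100, Math.max(1, Number(query.limit) || 20));
  return { page, limit };
}
```

Every list endpoint then shares the same behavior for missing, zero, or out-of-range values.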
---
### Create Task
`POST /tasks`
**Request Body:**
```json
{
  "title": "Complete API documentation",
  "description": "Write OpenAPI specs for all endpoints",
  "dueDate": "2025-02-15T10:00:00Z"
}
```
**Validation Rules:**
| Field | Rules |
|-------|-------|
| title | Required, 3-100 characters |
| description | Optional, max 1000 characters |
| dueDate | Optional, must be future date |
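The validation rules above translate directly into code that produces the `details` array of the standard error format. A sketch, assuming an illustrative `validateCreateTask` helper (the `now` parameter is injected to keep the future-date rule testable):

```typescript
// Field-level errors in the same shape as the API's VALIDATION_ERROR details.
interface FieldError { field: string; message: string; }

// Enforce the documented rules: title required 3-100 chars,
// description optional up to 1000 chars, dueDate optional but in the future.
function validateCreateTask(
  body: { title?: string; description?: string; dueDate?: string },
  now: Date = new Date()
): FieldError[] {
  const errors: FieldError[] = [];
  if (!body.title || body.title.length < 3 || body.title.length > 100) {
    errors.push({ field: "title", message: "Title must be 3-100 characters" });
  }
  if (body.description && body.description.length > 1000) {
    errors.push({ field: "description", message: "Description must be at most 1000 characters" });
  }
  if (body.dueDate && new Date(body.dueDate) <= now) {
    errors.push({ field: "dueDate", message: "Due date must be in the future" });
  }
  return errors;
}
```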
**Response 201:**
```json
{
  "success": true,
  "data": {
    "id": "task_xyz789",
    "title": "Complete API documentation",
    "status": "TODO",
    ...
  }
}
```
**Error Response 400:**
```json
{
  "success": false,
  "error": {
    "code": "VALIDATION_ERROR",
    "message": "Invalid input",
    "details": [
      {
        "field": "title",
        "message": "Title must be at least 3 characters"
      },
      {
        "field": "dueDate",
        "message": "Due date must be in the future"
      }
    ]
  }
}
```
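The success and error shapes shown in these examples can be captured once in shared TypeScript interfaces, so frontend and backend agree on the envelope by construction. A sketch following the example payloads (the type names are ours):

```typescript
// Pagination block matching the "meta" object in list responses.
interface PaginationMeta {
  page: number;
  limit: number;
  total: number;
  totalPages: number;
  hasNext: boolean;
  hasPrev: boolean;
}

// Success envelope; meta is present only on paginated list endpoints.
interface ApiSuccess<T> {
  success: true;
  data: T;
  meta?: PaginationMeta;
}

// Error envelope matching the VALIDATION_ERROR example above.
interface ApiError {
  success: false;
  error: {
    code: string;
    message: string;
    details?: { field: string; message: string }[];
  };
}

// Discriminated union: checking `success` narrows to the right branch.
type ApiResponse<T> = ApiSuccess<T> | ApiError;
```

Because `success` is a literal `true`/`false` discriminant, consumers get compile-time narrowing instead of runtime guesswork.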
API Versioning
Implement versioning from the start to avoid breaking changes in the future.
Versioning Strategies
| Strategy | Example | Pros | Cons |
|----------|---------|------|------|
| URL Path (recommended) | `/api/v1/tasks` | Clear, easy to test | Longer URLs |
| Header | `Accept: application/vnd.api+json;v=1` | Clean URLs | Less visible |
| Query Param | `/api/tasks?version=1` | Easy to add | Can be forgotten |
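With URL-path versioning, the router needs to read the version segment to dispatch `/api/v1/...` and future `/api/v2/...` to the right handlers. A minimal sketch (the helper name is illustrative):

```typescript
// Extract the numeric version from a URL-path-versioned route,
// e.g. "/api/v1/tasks" -> 1; returns null when no version is present.
function extractApiVersion(path: string): number | null {
  const match = path.match(/^\/api\/v(\d+)\//);
  return match ? Number(match[1]) : null;
}
```

In Express this logic usually lives in the route prefix itself (`app.use("/api/v1", routerV1)`), but having the parser separate makes redirects and deprecation warnings for old versions easy to add.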
Database Schema Design
The database schema derives directly from requirements and defined entities.
SQL - Database Schema
```sql
-- =============================================================
-- PostgreSQL Schema for TaskFlow
-- =============================================================

-- Enable UUID extension
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

-- -------------------------------------------------------------
-- Users Table
-- -------------------------------------------------------------
CREATE TABLE users (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    email VARCHAR(255) NOT NULL UNIQUE,
    password_hash VARCHAR(255) NOT NULL,
    name VARCHAR(100) NOT NULL,
    avatar_url VARCHAR(500),
    email_verified_at TIMESTAMP,
    created_at TIMESTAMP NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMP NOT NULL DEFAULT NOW(),
    deleted_at TIMESTAMP -- Soft delete
);

CREATE INDEX idx_users_email ON users(email) WHERE deleted_at IS NULL;

-- -------------------------------------------------------------
-- Tasks Table
-- -------------------------------------------------------------
CREATE TYPE task_status AS ENUM ('TODO', 'IN_PROGRESS', 'DONE');

CREATE TABLE tasks (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
    title VARCHAR(100) NOT NULL,
    description TEXT,
    status task_status NOT NULL DEFAULT 'TODO',
    due_date TIMESTAMP,
    completed_at TIMESTAMP,
    created_at TIMESTAMP NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMP NOT NULL DEFAULT NOW(),
    deleted_at TIMESTAMP -- Soft delete
);

CREATE INDEX idx_tasks_user_status ON tasks(user_id, status)
    WHERE deleted_at IS NULL;
CREATE INDEX idx_tasks_user_due ON tasks(user_id, due_date)
    WHERE deleted_at IS NULL AND status != 'DONE';

-- -------------------------------------------------------------
-- Time Entries Table
-- -------------------------------------------------------------
CREATE TABLE time_entries (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    task_id UUID NOT NULL REFERENCES tasks(id) ON DELETE CASCADE,
    user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
    started_at TIMESTAMP NOT NULL,
    ended_at TIMESTAMP, -- NULL = timer running
    duration_seconds INTEGER, -- Calculated on stop
    notes TEXT,
    created_at TIMESTAMP NOT NULL DEFAULT NOW()
);

CREATE INDEX idx_time_entries_task ON time_entries(task_id);
CREATE INDEX idx_time_entries_user_date ON time_entries(user_id, started_at);

-- -------------------------------------------------------------
-- Trigger: Auto-update updated_at
-- -------------------------------------------------------------
CREATE OR REPLACE FUNCTION update_updated_at()
RETURNS TRIGGER AS $$
BEGIN
    NEW.updated_at = NOW();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER users_updated_at
    BEFORE UPDATE ON users
    FOR EACH ROW EXECUTE FUNCTION update_updated_at();

CREATE TRIGGER tasks_updated_at
    BEFORE UPDATE ON tasks
    FOR EACH ROW EXECUTE FUNCTION update_updated_at();
```
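In the `time_entries` table, a NULL `ended_at` marks a running timer and `duration_seconds` is filled only on stop. That stop logic belongs in the application's service layer; here's a sketch (the `stopTimer` name and `TimeEntry` shape are illustrative, mirroring the columns above):

```typescript
// Application-side mirror of the time_entries columns involved in stopping.
interface TimeEntry {
  startedAt: Date;
  endedAt: Date | null;            // null = timer still running
  durationSeconds: number | null;  // computed on stop
}

// Stop a running timer: set ended_at and compute duration_seconds.
// `now` is injected so the logic is deterministic in tests.
function stopTimer(entry: TimeEntry, now: Date = new Date()): TimeEntry {
  if (entry.endedAt !== null) {
    throw new Error("Timer already stopped");
  }
  return {
    ...entry,
    endedAt: now,
    durationSeconds: Math.round((now.getTime() - entry.startedAt.getTime()) / 1000),
  };
}
```

Storing the computed duration (instead of deriving it in every report query) trades a little redundancy for much cheaper aggregation.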
Architecture Decision Records (ADR)
Document important architectural decisions for future reference.
ADRs explain the why, not just the what.
ADR-001: PostgreSQL Choice
# ADR-001: PostgreSQL as Database Choice
## Status
Accepted (2025-01-30)
## Context
We need a database to persist application data for TaskFlow.
Key requirements:
- Support for entity relationships (users -> tasks -> time_entries)
- Complex queries for reports and analytics
- Support for semi-structured data (future metadata)
- Vertical scalability sufficient for MVP (1000 users)
## Decision
We use **PostgreSQL 16** as primary database.
## Alternatives Considered
### MongoDB
- Pros: Schema flexibility
- Pros: Native horizontal scaling
- Cons: Complex JOIN queries difficult
- Cons: Transactional consistency limited
### MySQL
- Pros: Wide adoption, familiar
- Cons: Less mature JSON support
- Cons: Fewer advanced features (CTE, window functions)
### SQLite
- Pros: Zero setup, embedded
- Cons: Not suitable for multi-instance production
- Cons: Concurrency limitations
## Consequences
### Positive
- Powerful and optimized SQL queries
- Native JSONB support
- Complete ACID transactions
- Great tooling (Prisma, pgAdmin)
- Free tier on Supabase/Neon
### Negative
- More complex setup than SQLite
- Requires managed service in production
### Risks
- Might be overkill for initial MVP
- Costly migration if we change DB later
## Notes
Re-evaluate after 6 months in production if access patterns change.
Pre-Implementation Checklist
Architecture Checklist
| Item | Status |
|------|--------|
| Architectural pattern chosen and documented (ADR) | |
| Backend folder structure defined | |
| Frontend folder structure defined | |
| API contract documented | |
| Response format standardized | |
| Error handling strategy defined | |
| Database schema designed | |
| Database indexes planned | |
| Authentication flow documented | |
| API versioning implemented | |
Conclusion
Well-designed architecture makes development smoother and code more
maintainable. Copilot can help you explore options, generate boilerplate,
and document decisions, but architectural choices require understanding
of the domain and specific project needs. Don't copy complex architectures:
choose the right level for your project.
Key Takeaways
Layered Architecture: Separation of concerns between Controller -> Service -> Repository
Feature-Based Frontend: Organize by feature, not by file type
API Contract First: Define the contract BEFORE implementing
Standard Response: Consistent format for successes and errors
Error Handling: Centralized system with typed errors
ADR: Document important architectural decisions
Next Article
In the next article "Code Structure and Organization"
we'll see implementation details: naming conventions, configuration management,
barrel exports, and best practices for a clean, maintainable codebase.