The Infrastructure Behind the Platform
Building a platform like Play The Event is not just about writing application code: it also means designing an infrastructure that guarantees reliability, reproducibility, and ease of deployment. From the local development environment to the production VPS, every infrastructure component has been designed to reduce deployment times, minimize manual errors, and ensure that services are always available.
This article explores the entire DevOps stack of Play The Event: from Docker Compose containerization to Nginx configuration with SSL, from systemd services to automation scripts, through database migrations management and Redis caching.
What You'll Learn
- Docker Compose for the development environment with MySQL, Redis, phpMyAdmin and Redis Commander
- Nginx as a reverse proxy with Let's Encrypt SSL, rate limiting, and security headers
- Systemd services for automatic startup of backend and frontend
- 18+ Bash scripts for deploy, startup, stop, restart, and maintenance
- 199 Flyway migrations for MySQL schema evolution
- Redis with a 256MB memory limit and the allkeys-lru eviction policy
- Log management and service health monitoring
Docker Compose: The Development Environment
The development environment for Play The Event is fully containerized using Docker Compose. Four orchestrated services ensure that every developer can spin up the entire stack with a single command, without worrying about manually installing and configuring MySQL, Redis, or the administration tools.
version: '3.8'
services:
# MySQL 8.4.3 - Primary database
mysql:
image: mysql:8.4.3
container_name: management-events-mysql
restart: unless-stopped
environment:
MYSQL_DATABASE: management_events_db
MYSQL_USER: events_user
TZ: Europe/Rome
ports:
- "3306:3306"
volumes:
- mysql_data:/var/lib/mysql
- ./backend/src/main/resources/db/init:/docker-entrypoint-initdb.d
command:
- --character-set-server=utf8mb4
- --collation-server=utf8mb4_unicode_ci
- --max_connections=200
- --innodb_buffer_pool_size=1G
healthcheck:
test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
interval: 10s
timeout: 5s
retries: 5
# Redis 7.4.1 - Cache and sessions
redis:
image: redis:7.4.1-alpine
container_name: management-events-redis
restart: unless-stopped
ports:
- "6379:6379"
volumes:
- redis_data:/data
command: >
redis-server --appendonly yes
--maxmemory 256mb
--maxmemory-policy allkeys-lru
# phpMyAdmin - Web-based database management
phpmyadmin:
image: phpmyadmin:5.2
container_name: management-events-phpmyadmin
ports:
- "8081:80"
depends_on:
mysql:
condition: service_healthy
# Redis Commander - Web-based cache management
redis-commander:
image: rediscommander/redis-commander:latest
container_name: management-events-redis-commander
environment:
REDIS_HOSTS: local:redis:6379
ports:
- "8082:8081"
depends_on:
redis:
condition: service_healthy
Key aspects of the Docker configuration:
- Health checks: both MySQL and Redis have built-in health checks. Dependent services (phpMyAdmin, Redis Commander) wait until the base services are ready before starting
- Persistent volumes: mysql_data and redis_data ensure that data survives container restarts
- Optimized MySQL: utf8mb4 for full Unicode support, innodb_buffer_pool_size=1G for optimal performance, and up to 200 simultaneous connections
- Redis with AOF: appendonly yes enables disk persistence. The allkeys-lru policy prevents memory exhaustion by evicting the least recently used keys
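Because the compose file declares health checks, follow-up steps (seeding, migrations, integration tests) can be gated on container health. A minimal sketch, assuming the container names from the compose file above and an illustrative retry budget:

```shell
#!/usr/bin/env bash
# Wait until Docker reports a container's health check as "healthy".
# Container names are the ones from the compose file above; the retry
# budget is an illustrative default, not taken from the real scripts.
wait_for_container() {
  local name="$1" max="${2:-30}" i status
  for ((i = 1; i <= max; i++)); do
    status=$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)
    if [ "$status" = "healthy" ]; then
      echo "$name is healthy"
      return 0
    fi
    sleep 2
  done
  echo "$name not healthy after $max attempts" >&2
  return 1
}

# Typical usage after "docker compose up -d":
#   wait_for_container management-events-mysql &&
#   wait_for_container management-events-redis
```

This is the same readiness signal Docker Compose itself uses for the `condition: service_healthy` dependencies shown above.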
Development Service Ports
- 3306 - MySQL 8.4.3 (primary database)
- 6379 - Redis 7.4.1 (cache and sessions)
- 8080 - Spring Boot Backend (REST API)
- 4200 - Angular Frontend (dev server)
- 8081 - phpMyAdmin (database management)
- 8082 - Redis Commander (cache management)
Nginx: Reverse Proxy with SSL
In production, Nginx serves as the reverse proxy in front of all Play The Event services. It handles SSL termination, request routing, gzip compression, static asset caching, and protection through rate limiting.
# Upstream for the Spring Boot backend
upstream backend_api {
server localhost:8080 fail_timeout=5s max_fails=3;
keepalive 32;
}
# Rate limiting zones
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=100r/m;
limit_req_zone $binary_remote_addr zone=login_limit:10m rate=10r/m;
server {
listen 443 ssl http2;
server_name playtheevent.com www.playtheevent.com;
# SSL Let's Encrypt
ssl_certificate /etc/letsencrypt/live/playtheevent.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/playtheevent.com/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
# Gzip compression
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_comp_level 6;
gzip_types text/plain text/css application/json
application/javascript text/xml image/svg+xml;
# Backend API with rate limiting
location /api/ {
limit_req zone=api_limit burst=20 nodelay;
proxy_pass http://backend_api;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Strict rate limiting for login
location /api/auth/login {
limit_req zone=login_limit burst=5 nodelay;
proxy_pass http://backend_api;
}
# Angular frontend (SSR or static)
location / {
try_files $uri $uri/ /index.html;
}
# Static assets with long cache
location ~* \.(js|css|png|jpg|svg|woff2)$ {
expires 1y;
add_header Cache-Control "public, immutable";
}
}
Network-Level Security
The Nginx configuration implements several protection strategies:
- Two-tier rate limiting: general APIs are limited to 100 requests per minute per IP, while authentication endpoints have a much stricter limit of 10 requests per minute, with HTTP 429 responses when exceeded
- SSL with Let's Encrypt: free certificates with automatic renewal via Certbot. The setup-ssl.sh script automates DNS verification, Certbot installation, and certificate generation
- Security headers: protection against clickjacking (X-Frame-Options), MIME sniffing (X-Content-Type-Options), and XSS (X-XSS-Protection)
- Sensitive file blocking: any request to files starting with . (such as .env, .git) is automatically blocked
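The dotfile rule can be expressed with a single location block. This fragment is illustrative rather than the exact production config:

```nginx
# Block access to hidden files such as .env and .git
location ~ /\. {
    deny all;
    access_log off;
    log_not_found off;
}
```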
Health Check Endpoint
Nginx exposes a dedicated /health endpoint for external monitoring. This endpoint
returns a simple response without hitting the backend, allowing monitoring systems to verify
that the reverse proxy is operational.
# Health check endpoint (direct Nginx response)
location /health {
access_log off;
return 200 "healthy\n";
add_header Content-Type text/plain;
}
# Nginx status for monitoring (localhost only)
location /nginx_status {
stub_status on;
access_log off;
allow 127.0.0.1;
deny all;
}
Systemd: Auto-Starting Services
The Spring Boot backend and Angular frontend are managed as systemd services on the Ubuntu 24.04 VPS. This ensures that services automatically restart after a crash or a server reboot.
[Unit]
Description=Play the Event - Backend Spring Boot API
After=network.target mysql.service
Wants=mysql.service
[Service]
Type=simple
User=federicocalo
WorkingDirectory=/home/ubuntu/managementevents/backend
# Environment
Environment="JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64"
Environment="SPRING_PROFILES_ACTIVE=prod"
EnvironmentFile=/home/ubuntu/managementevents/backend/.env.prod
# Execute with pre-compiled JAR
ExecStart=/usr/bin/java -jar target/management-events-backend.jar
# Restart policy
Restart=on-failure
RestartSec=10s
StartLimitInterval=5min
StartLimitBurst=3
# Resource limits
LimitNOFILE=65536
LimitNPROC=4096
# Logging
StandardOutput=journal
StandardError=journal
SyslogIdentifier=play-the-event-backend
# Security
NoNewPrivileges=true
PrivateTmp=true
# Graceful shutdown
TimeoutStopSec=30s
KillMode=mixed
KillSignal=SIGTERM
[Install]
WantedBy=multi-user.target
[Unit]
Description=Play the Event - Frontend Angular
After=network.target
[Service]
Type=simple
User=federicocalo
WorkingDirectory=/home/ubuntu/managementevents/frontend
# Environment
Environment="NODE_ENV=production"
# Angular serve (SSR or static)
ExecStart=/usr/bin/npm start
# Restart policy
Restart=on-failure
RestartSec=10s
StartLimitInterval=5min
StartLimitBurst=3
# Logging
StandardOutput=journal
StandardError=journal
SyslogIdentifier=play-the-event-frontend
# Security
NoNewPrivileges=true
PrivateTmp=true
# Graceful shutdown
TimeoutStopSec=15s
KillMode=mixed
KillSignal=SIGTERM
[Install]
WantedBy=multi-user.target
Key aspects of the systemd configuration:
- Dependencies: the backend declares After=mysql.service and Wants=mysql.service, ensuring MySQL is started before the Spring Boot application
- Automatic restart: Restart=on-failure with RestartSec=10s automatically restarts the service after a crash, with a limit of 3 attempts in 5 minutes to prevent infinite loops
- Environment variables: sensitive configuration is loaded from .env.prod via EnvironmentFile, keeping credentials out of the source code
- Security: NoNewPrivileges=true prevents privilege escalation, and PrivateTmp=true isolates the service's temporary directory
- Graceful shutdown: KillSignal=SIGTERM with KillMode=mixed first sends a termination signal to the main process, then force-kills remaining child processes after the timeout
Automated Deploy Scripts
The deployment of Play The Event is managed by a suite of 18+ Bash scripts that automate every aspect of the application lifecycle on the OVHcloud VPS.
Deploy Scripts
- deploy-all.sh
- deploy-backend.sh
- deploy-frontend.sh
- deploy-analytics.sh
- restart-all.sh
- start-all.sh
- stop-all.sh
Setup and Maintenance Scripts
- setup-nginx.sh
- setup-ssl.sh
- setup-mysql-databases.sh
- setup-analytics.sh
- update-nginx-ssr.sh
- update-vps-config.sh
- cleanup-logs-vps.sh
- watch-logs.sh
The Orchestrated Deploy: deploy-all.sh
The main script deploy-all.sh orchestrates the deployment of all services
with support for sequential or parallel execution, selective service skipping, and
detailed reporting with execution times.
# Full deploy (sequential)
./scripts/deploy-all.sh
# Parallel deploy (faster)
./scripts/deploy-all.sh --parallel
# Only frontend and analytics (skip backend)
./scripts/deploy-all.sh --skip-be
# Continue even if a deploy fails
./scripts/deploy-all.sh --continue
The deploy-all.sh workflow includes:
- Argument parsing: support for --parallel, --skip-be, --skip-fe, --skip-ai, and --continue
- Sequential or parallel deploy: in parallel mode, the three services are launched as background processes, with the script waiting for all of them to complete
- Health checks: after deployment, verifies that each service responds correctly through its respective health endpoint
- Final report: displays the status of each service (SUCCESS, ERROR, SKIPPED) with execution times
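The flag handling can be sketched in a few lines of Bash. The variable and function names here are illustrative, not taken from the real deploy-all.sh:

```shell
#!/usr/bin/env bash
# Illustrative sketch of deploy-all.sh-style flag parsing; the script's
# real internals are not shown in this article, so names are assumptions.
PARALLEL=false SKIP_BE=false SKIP_FE=false SKIP_AI=false CONTINUE_ON_ERROR=false

parse_args() {
  local arg
  for arg in "$@"; do
    case "$arg" in
      --parallel) PARALLEL=true ;;
      --skip-be)  SKIP_BE=true ;;
      --skip-fe)  SKIP_FE=true ;;
      --skip-ai)  SKIP_AI=true ;;
      --continue) CONTINUE_ON_ERROR=true ;;
      *) echo "Unknown option: $arg" >&2; return 1 ;;
    esac
  done
}
```

In parallel mode, the per-service deploy functions would then be launched in the background with `&` and collected with `wait`.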
Backend Deploy
The deploy-backend.sh script manages the entire backend deployment lifecycle
in 8 automated steps:
# Step 1: Verify prerequisites (Java 21, Maven, MySQL)
# Step 2: Navigate to backend directory
# Step 3: Load variables from .env.prod + test DB connection
# Step 4: Maven build (mvnw clean package -DskipTests)
# Step 5: Verify database and Flyway migrations
# Step 6: Create/update systemd service
# Step 7: Restart service with status verification
# Step 8: Health check with retry (max 15 attempts, 2s interval)
# Useful post-deploy commands:
sudo systemctl status management-events-backend
sudo journalctl -u management-events-backend -f
curl http://localhost:8080/api/health
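The retry loop in step 8 can be sketched as a small function. The 15-attempt, 2-second parameters come from the step list above; the function name is an assumption:

```shell
#!/usr/bin/env bash
# Poll a health endpoint until it answers HTTP 200 (step 8 above).
wait_for_health() {
  local url="$1" max="${2:-15}" delay="${3:-2}" i code
  for ((i = 1; i <= max; i++)); do
    code=$(curl -s -o /dev/null -w "%{http_code}" "$url" || echo 000)
    if [ "$code" = "200" ]; then
      echo "healthy after $i attempt(s)"
      return 0
    fi
    sleep "$delay"
  done
  echo "unhealthy after $max attempts" >&2
  return 1
}

# e.g. wait_for_health http://localhost:8080/api/health
```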
VPS Management on OVHcloud
The platform runs on an OVHcloud VPS with Ubuntu 24.04. The start-all.sh,
stop-all.sh, and restart-all.sh scripts manage the entire
service lifecycle on the VPS.
# Service startup order:
# [1/5] Verify and start MySQL
systemctl start mysql
mysql -u root -e "USE management_events_system;"
# [2/5] Stop existing services (cleanup)
systemctl stop management-events-backend
systemctl stop management-events-frontend
# [3/5] Start Spring Boot backend
systemctl start management-events-backend
# Wait for health check at localhost:8080/api/health
# [4/5] Start Angular frontend
systemctl start management-events-frontend
# Wait for health check at localhost:4200
# [5/5] Verify and reload Nginx
systemctl reload nginx
# Verify proxy is working
Critical Startup Order
The startup order is critical: MySQL must be fully operational before the Spring Boot
backend attempts to connect, otherwise Flyway will fail to execute the migrations. For
this reason, start-all.sh verifies not only that MySQL is active, but also
that the specific database is reachable, before proceeding with the backend.
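That readiness gate can be sketched like this; the database name matches the start-all.sh snippet above, while the retry budget is an assumption:

```shell
#!/usr/bin/env bash
# Wait until a specific database accepts queries, not just until mysqld
# is listening, so Flyway does not fail at backend startup.
wait_for_database() {
  local db="$1" max="${2:-30}" i
  for ((i = 1; i <= max; i++)); do
    if mysql -u root -e "USE ${db};" 2>/dev/null; then
      echo "database ${db} is reachable"
      return 0
    fi
    sleep 1
  done
  echo "database ${db} unreachable after ${max} attempts" >&2
  return 1
}

# start-all.sh-style usage:
#   wait_for_database management_events_system &&
#   systemctl start management-events-backend
```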
Database Migrations with Flyway
The MySQL database schema of Play The Event is managed through 199 versioned Flyway migrations. Each schema change is a numbered SQL migration that is automatically executed on backend startup.
backend/src/main/resources/db/migration/
├── V1__create_users_table.sql
├── V2__create_events_table.sql
├── V3__create_participants_table.sql
├── ...
├── V98__create_analytics_tables.sql
├── V99__create_tipologie_luogo_table.sql
├── V100__create_luoghi_table.sql
├── V101__create_festival_table.sql
├── V102__create_giornate_festival_table.sql
└── ... (199 total migrations)
Flyway tracks already-executed migrations in the flyway_schema_history table.
On startup, the backend compares migrations present in the classpath with those already executed
and applies only the new ones. This approach guarantees:
- Incremental evolution: the schema evolves gradually without ever losing data
- Reproducibility: any database instance can reach the current state by executing all migrations in order
- Traceability: every schema change is a SQL file in the Git repository, with author, date, and reason for the change
- Safety: migrations are idempotent and cannot be modified after execution (Flyway verifies the checksum)
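Flyway's bookkeeping can be inspected directly from the shell. flyway_schema_history is Flyway's default table name and the user/database are the development defaults from the compose file; the password variable is an assumption:

```shell
#!/usr/bin/env bash
# Print the most recently applied successful migration version
# (MYSQL_PASSWORD is an assumed environment variable).
latest_migration() {
  mysql -N -u events_user -p"${MYSQL_PASSWORD:?set MYSQL_PASSWORD}" \
    management_events_db -e \
    "SELECT version FROM flyway_schema_history
     WHERE success = 1
     ORDER BY installed_rank DESC
     LIMIT 1;"
}
```

On a fully migrated instance this would print the highest applied version, making it easy to compare environments at a glance.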
Redis: Caching and Session Management
Redis plays a critical role in the Play The Event architecture, managing both frequent query caching and user sessions. The configuration is optimized for a memory-constrained environment.
# Redis startup with custom configuration
redis-server \
--appendonly yes \
--maxmemory 256mb \
--maxmemory-policy allkeys-lru
# appendonly yes → AOF persistence to disk
# maxmemory 256mb → Maximum memory limit
# maxmemory-policy → Eviction of least recently used keys
The allkeys-lru (Least Recently Used) policy was chosen because in an event management context, the most recent data is generally the most relevant: active events, current sessions, and the most frequent queries take priority over old or rarely accessed cached data.
Redis Use Cases in the Platform
- Query caching: results of the most frequent queries (public event lists, event details) are cached to reduce the load on MySQL
- User sessions: session data is stored in Redis to support horizontal scaling of the backend
- Rate limiting: rate limiting counters for the APIs are managed in Redis with automatic TTL
- Temporary data: email verification tokens, OTP codes, and Stripe checkout data with automatic expiration
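The rate-limiting counters mentioned above typically follow an INCR-plus-EXPIRE pattern. This sketch uses redis-cli; the key format and 60-second window are assumptions, not the platform's actual scheme:

```shell
#!/usr/bin/env bash
# Count a hit for an IP within a fixed window; the TTL makes the
# counter reset automatically (key format and window are illustrative).
rate_limit_hit() {
  local ip="$1" window="${2:-60}" key count
  key="ratelimit:${ip}"
  count=$(redis-cli INCR "$key")
  if [ "$count" -eq 1 ]; then
    # First hit in this window: start the expiry clock
    redis-cli EXPIRE "$key" "$window" >/dev/null
  fi
  echo "$count"
}
```

A caller would compare the returned count against the limit (e.g. 10/min for login) and reject with HTTP 429 once it is exceeded.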
Log Management
The script suite includes dedicated tools for log management, essential for production debugging and for maintaining disk space on the VPS.
Real-Time Log Streaming
The watch-logs.sh script offers an interactive menu for real-time log
streaming, with 9 options covering all services:
Select which log to view:
1) Backend (systemd journal)
2) Backend (application log)
3) Backend (error log)
4) Frontend (systemd journal)
5) Frontend (application log)
6) Frontend (error log)
7) Nginx (access log)
8) Nginx (error log)
9) All backend logs (multiple)
Option 9 uses multitail (if available) to simultaneously display systemd
journal, application log, and error log with colored prefixes. When multitail is not
available, a fallback script combines multiple tail -f processes with
[SYSTEMD], [APP], and [ERROR] prefixes.
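The fallback wiring pipes each background tail -f stream through a small labelling filter like the one below (the function name and log paths are illustrative):

```shell
#!/usr/bin/env bash
# Prepend a [LABEL] prefix to every line of a stream.
prefix_stream() {
  sed "s/^/[$1] /"
}

# Fallback wiring when multitail is unavailable:
#   tail -f app.log   | prefix_stream APP   &
#   tail -f error.log | prefix_stream ERROR &
#   journalctl -u management-events-backend -f | prefix_stream SYSTEMD &
#   wait
```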
Automated Log Cleanup
The cleanup-logs-vps.sh script automates log cleanup on the VPS: it scans the configured
log directories, deletes the log files they contain, and vacuums systemd journal entries older than 7 days.
# Monitored directories for cleanup
LOG_DIRS=(
"/var/log/management-events"
"/home/ubuntu/managementevents/logs"
"/opt/management-events/logs"
)
# For each directory: count files, calculate size, delete
# Also cleans journalctl logs > 7 days
sudo journalctl --vacuum-time=7d
# Show remaining disk space
df -h /
SSL Setup with Let's Encrypt
The setup-ssl.sh script automates the entire HTTPS configuration for the
playtheevent.com domain. The process includes DNS verification, Certbot
installation, certificate generation, and automatic renewal configuration.
# Configuration
DOMAIN="playtheevent.com"
WWW_DOMAIN="www.playtheevent.com"
# Step 1: DNS Verification
# - Get VPS IP (curl ifconfig.me)
# - Verify domain points to correct IP (dig +short)
# - Check both @ and www records
# Step 2: Install Certbot
sudo apt install -y certbot python3-certbot-nginx
# Step 3: Generate certificates
sudo certbot --nginx \
-d playtheevent.com \
-d www.playtheevent.com \
--email admin@playtheevent.com \
--agree-tos
# Step 4: Verify automatic renewal (cron)
sudo certbot renew --dry-run
Monitoring and Health Checks
The monitoring system of Play The Event verifies the health of all services after every deployment and can be run on demand.
# Health checks built into deploy-all.sh:
# Backend - verify HTTP 200 response
curl -s -o /dev/null -w "%{http_code}" \
http://localhost:8080/api/health
# Frontend SSR - verify HTTP 200/301/302 response
curl -s -o /dev/null -w "%{http_code}" \
http://localhost:4200
# Analytics - verify HTTP 200 response
curl -s -o /dev/null -w "%{http_code}" \
http://localhost:8001/health
# Nginx - verify proxy is working
curl -s https://playtheevent.com/health
The final deploy report shows a clear view of the status:
╔════════════════════════════════════════════╗
║ DEPLOY REPORT ║
╚════════════════════════════════════════════╝
Service Status:
✓ Backend SUCCESS (2m 34s)
✓ Frontend SUCCESS (1m 12s)
✓ Analytics SUCCESS (45s)
Total time: 4m 31s
Access URLs:
• Frontend: https://playtheevent.com
• API: https://playtheevent.com/api
• Analytics: http://localhost:8001/docs
Infrastructure Summary
Complete Infrastructure Stack
- VPS: OVHcloud Ubuntu 24.04 LTS
- Containerization: Docker Compose for development (MySQL 8.4.3, Redis 7.4.1, phpMyAdmin, Redis Commander)
- Web Server: Nginx as reverse proxy with Let's Encrypt SSL, rate limiting, and gzip compression
- Process Manager: systemd for automatic startup and on-failure restart of backend and frontend
- Database: MySQL 8.4.3 with 199 versioned Flyway migrations
- Cache: Redis 7.4.1 with 256MB, allkeys-lru policy, AOF persistence
- Automation: 18+ Bash scripts for deploy, setup, maintenance, and monitoring
- SSL: Let's Encrypt with automatic renewal via Certbot
The infrastructure of Play The Event demonstrates how even a project managed by a single developer can achieve a level of automation and reliability comparable to larger teams, thanks to well-structured scripts, systemd services, and a solid Nginx configuration.
The source code is available on GitHub.