docker implementation

This commit is contained in:
echo 2026-02-01 13:32:37 +01:00
parent 7c7bb45446
commit 0c2433cac6
26 changed files with 2200 additions and 13 deletions

DEPLOYMENT.md Normal file

@ -0,0 +1,461 @@
# Placebo.mk Deployment Guide
## Overview
This document outlines the deployment strategy for Placebo.mk, a Macedonian news site with a sarcastic tone, built with TanStack (React), NestJS, and Strapi CMS.
## Architecture
- **Frontend**: TanStack (React 19 + Query + Router) + Vite + Tailwind CSS
- **Backend**: NestJS + TypeORM + PostgreSQL (migrated from SQLite)
- **CMS**: Strapi with PostgreSQL
- **Deployment**: Docker + Coolify on VPS
## Deployment Options
### Option 1: Coolify + VPS (Recommended)
**Pros**: Self-hosted, cost-effective, full control, good latency for a Macedonian audience
**Cons**: Requires server management
### Option 2: Platform-as-a-Service
**Pros**: No server management, automatic scaling
**Cons**: More expensive, less control
## Coolify + VPS Deployment
### Phase 1: Infrastructure Setup
#### 1.1 VPS Requirements
- **Provider**: Hetzner (Germany) or DigitalOcean (Amsterdam)
- **Specs**: 4GB RAM, 2 vCPU, 80GB SSD (€10-15/month)
- **OS**: Ubuntu 22.04 LTS
- **Domain**: placebo.mk
#### 1.2 Install Coolify
```bash
# On VPS
curl -fsSL https://cdn.coollabs.io/coolify/install.sh | bash
```
Coolify automatically installs:
- Docker and Docker Compose
- Traefik reverse proxy
- Let's Encrypt SSL
- Admin interface
#### 1.3 Domain Configuration
1. Purchase `placebo.mk` domain
2. Configure DNS A records:
- `placebo.mk` → VPS IP
- `www.placebo.mk` → VPS IP
- `api.placebo.mk` → VPS IP
- `cms.placebo.mk` → VPS IP
### Phase 2: Database Migration
#### 2.1 PostgreSQL Setup
Create PostgreSQL database via Coolify one-click apps:
- Database name: `placebomk`
- Username: `placebo`
- Password: Generate secure password
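A quick way to generate that password, assuming `openssl` is available on the host:

```shell
# Generate a random 32-byte password, base64-encoded (about 44 characters)
DB_PASSWORD=$(openssl rand -base64 32)
echo "$DB_PASSWORD"
```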
#### 2.2 Migrate from SQLite
A detailed migration plan is available in `scripts/migrate-to-postgres.md`.
Run migration script:
```bash
# Test migration locally first
docker-compose up -d postgres
./scripts/migrate-data.sh
# Production migration
cd backend
npm run migrate:postgres
```
#### 2.3 Migration Strategy
1. **Development**: Use SQLite for local development
2. **Staging**: PostgreSQL with test data
3. **Production**: PostgreSQL with migrated data
4. **Backup**: Maintain SQLite backups during transition
### Phase 3: Application Deployment
#### 3.1 Connect GitHub Repository
1. Connect your GitHub repo to Coolify
2. Create three applications:
- `placebo-frontend` (from `/frontend`)
- `placebo-backend` (from `/backend`)
- `placebo-cms` (from `/cms/cms`)
#### 3.2 Configure Applications
**Backend Configuration:**
- **Build Command**: `npm run build`
- **Start Command**: `npm run start:prod`
- **Port**: 3000
- **Environment Variables**:
```
NODE_ENV=production
DATABASE_URL=postgresql://placebo:PASSWORD@postgres:5432/placebomk
CORS_ORIGIN=https://placebo.mk
JWT_SECRET=your-secure-jwt-secret
```
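The `DATABASE_URL` above is just the individual database settings assembled into one string; a sketch of the mapping (values are the examples from this guide, not real credentials):

```shell
DB_USER=placebo
DB_PASS=PASSWORD        # replace with the generated password
DB_HOST=postgres        # service name inside the Docker network
DB_NAME=placebomk
DATABASE_URL="postgresql://${DB_USER}:${DB_PASS}@${DB_HOST}:5432/${DB_NAME}"
echo "$DATABASE_URL"
```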
**Frontend Configuration:**
- **Build Command**: `npm run build`
- **Output Directory**: `dist`
- **Environment Variables**:
```
VITE_API_URL=https://api.placebo.mk/api/v1
VITE_STRAPI_URL=https://cms.placebo.mk
```
**CMS Configuration:**
- **Build Command**: `npm run build`
- **Start Command**: `npm run start`
- **Port**: 1337
- **Environment Variables**:
```
DATABASE_CLIENT=postgres
DATABASE_HOST=postgres
DATABASE_PORT=5432
DATABASE_NAME=strapi
DATABASE_USERNAME=placebo
DATABASE_PASSWORD=PASSWORD
```
#### 3.3 Configure Domains in Coolify
- `placebo.mk` → Frontend application
- `api.placebo.mk` → Backend API
- `cms.placebo.mk` → Strapi CMS admin
### Phase 4: Security Configuration
#### 4.1 Environment Variables
Store sensitive data in Coolify environment variables:
- Database passwords
- JWT secrets
- API keys
- Strapi secrets
#### 4.2 SSL Certificates
Coolify automatically:
- Requests Let's Encrypt certificates
- Configures HTTPS redirect
- Auto-renews certificates
#### 4.3 Security Headers
Configure in Coolify:
- HSTS enabled
- CSP headers
- X-Frame-Options
- X-Content-Type-Options
### Phase 5: Monitoring & Backup
#### 5.1 Monitoring
- **Coolify built-in monitoring**: Resource usage, uptime
- **Application logs**: Access via Coolify UI
- **Health checks**: Configure endpoints
#### 5.2 Backup Strategy
1. **Database backups**: Daily via Coolify to S3-compatible storage
2. **Media files**: Strapi uploads to cloud storage
3. **Configuration**: Export Coolify settings regularly
#### 5.3 Backup Configuration
```yaml
# Coolify backup settings
backup:
  schedule: "0 2 * * *"   # Daily at 2 AM
  retention: 30           # Keep 30 days
  destination: s3://backup-bucket/placebo-mk
```
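If you also keep local dumps, the 30-day retention can be enforced with `find`; a minimal sketch (the `prune_backups` helper and its example path are our own, not a Coolify feature):

```shell
# prune_backups DIR DAYS: delete compressed SQL dumps older than DAYS days
prune_backups() {
  find "$1" -name '*.sql.gz' -mtime "+$2" -delete
}

# Example (hypothetical local backup directory):
# prune_backups /var/backups/placebo-mk 30
```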
## Docker Setup
### Docker Files Created
```
placeboMk/
├── docker-compose.yml # Production setup with PostgreSQL
├── docker-compose.dev.yml # Development setup with hot reload
├── backend/
│ ├── Dockerfile # Production Dockerfile
│ └── Dockerfile.dev # Development Dockerfile
├── frontend/
│ ├── Dockerfile # Production Dockerfile (Nginx)
│ ├── Dockerfile.dev # Development Dockerfile
│ └── nginx.conf # Nginx configuration
└── cms/cms/
├── Dockerfile # Production Dockerfile
└── Dockerfile.dev # Development Dockerfile
```
### Local Development with Docker
#### Prerequisites
- Docker Desktop
- Docker Compose
- Node.js 20+ (optional, for local development without Docker)
#### Quick Start - Development Mode
```bash
# Clone repository
git clone https://github.com/your-org/placeboMk.git
cd placeboMk
# Start development environment with hot reload
docker-compose -f docker-compose.dev.yml up -d
# Access applications:
# Frontend: http://localhost:5173
# Backend API: http://localhost:3000
# Strapi CMS: http://localhost:1337/admin
```
#### Quick Start - Production Mode (Local Testing)
```bash
# Start production environment
docker-compose up -d
# Access applications:
# Frontend: http://localhost:3001
# Backend API: http://localhost:3000
# Strapi CMS: http://localhost:1337
# PostgreSQL: localhost:5432
```
#### Test Docker Setup
```bash
# Run comprehensive test
./scripts/test-docker.sh
# Or manually test
docker-compose build
docker-compose up -d
docker-compose ps
```
### Development Commands
```bash
# Development environment
docker-compose -f docker-compose.dev.yml up -d # Start dev
docker-compose -f docker-compose.dev.yml down # Stop dev
docker-compose -f docker-compose.dev.yml logs -f # View dev logs
# Production environment (local testing)
docker-compose up -d # Start prod
docker-compose down # Stop prod
docker-compose logs -f # View prod logs
docker-compose build # Rebuild images
# Database operations
docker-compose exec postgres psql -U placebo_user -d placebo_db # PostgreSQL shell
docker-compose exec backend npm run migration:run # Run migrations
# Service management
docker-compose restart backend # Restart backend
docker-compose restart frontend # Restart frontend
docker-compose restart cms # Restart CMS
```
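Typing `-f docker-compose.dev.yml` on every command gets tedious; two optional shell wrappers (the `dev`/`prod` names are our own convention, not part of the project):

```shell
# dev: run any compose command against the development stack
dev() { docker-compose -f docker-compose.dev.yml "$@"; }

# prod: run any compose command against the production stack
prod() { docker-compose "$@"; }

# Usage: dev up -d   |   dev logs -f backend   |   prod down
```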
## Production Checklist
### Pre-Deployment
- [ ] Domain DNS propagated (allow up to 48 hours)
- [ ] SSL certificates issued
- [ ] Database migrated and tested
- [ ] Environment variables configured
- [ ] Backup system tested
- [ ] Monitoring configured
### Deployment Day
1. **Morning**: Final testing of staging environment
2. **Afternoon**: Deploy to production (low traffic time)
3. **Evening**: Monitor performance and fix issues
### Post-Deployment
- [ ] Verify all services are running
- [ ] Test critical user flows
- [ ] Check SSL certificates
- [ ] Verify backups are working
- [ ] Monitor error rates
## Troubleshooting
### Common Issues
#### 1. Database Connection Failed
```bash
# Check PostgreSQL logs
docker-compose logs postgres
# Test connection
docker-compose exec postgres psql -U placebo -d placebomk
```
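Connection failures during startup are often just ordering: the backend comes up before PostgreSQL is ready. A small retry helper, assuming nothing beyond standard shell:

```shell
# wait_for CMD...: retry a command every 2s, up to 30 attempts, until it succeeds
wait_for() {
  for _ in $(seq 1 30); do
    "$@" && return 0
    sleep 2
  done
  echo "timed out waiting for: $*" >&2
  return 1
}

# Example: block until PostgreSQL accepts connections
# wait_for docker-compose exec -T postgres pg_isready -U placebo -d placebomk
```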
#### 2. Build Failures
- Check Node.js version compatibility
- Verify package.json dependencies
- Check build logs in Coolify
#### 3. SSL Certificate Issues
- Verify DNS records
- Check Traefik logs: `docker-compose logs traefik`
- Restart Traefik to trigger re-issuance: `docker-compose restart traefik` (renewal is automatic; Traefik has no manual `cert` subcommand)
#### 4. CORS Errors
- Verify CORS_ORIGIN environment variable
- Check backend CORS configuration
- Test API calls from frontend
### Debug Commands
```bash
# Check container status
docker-compose ps
# View application logs
docker-compose logs --tail=100 backend
# Shell into container
docker-compose exec backend sh
# Check network connectivity (PostgreSQL speaks its own protocol, not HTTP)
docker-compose exec backend nc -zv postgres 5432
```
## Performance Optimization
### Frontend
- Enable Vite build optimization
- Configure CDN for static assets
- Implement code splitting
- Optimize images
### Backend
- Implement Redis caching
- Database query optimization
- Enable compression
- Configure connection pooling
### Database
- Add appropriate indexes
- Regular vacuum and analyze
- Connection pool tuning
- Read replicas for high traffic
## Scaling Strategy
### Vertical Scaling (First Step)
- Upgrade VPS: 8GB RAM, 4 vCPU
- Increase PostgreSQL memory
- Add Redis cache
### Horizontal Scaling
- Add application replicas
- Load balancing with Traefik
- Database read replicas
- CDN for static assets
### Cost Optimization
- Right-size VPS resources
- Use object storage for media
- Implement caching to reduce database load
- Monitor and optimize queries
## Maintenance Schedule
### Daily
- Check application logs
- Verify backup completion
- Monitor resource usage
- Review error rates
### Weekly
- Update dependencies (security patches)
- Review access logs
- Test restore from backup
- Clean up old logs
### Monthly
- Security audit
- Performance review
- Cost optimization review
- Update deployment documentation
## Emergency Procedures
### Database Corruption
1. Stop affected services
2. Restore from latest backup
3. Verify data integrity
4. Restart services
### Application Crash
1. Check logs for root cause
2. Rollback to previous version
3. Fix issue in development
4. Deploy fix
### DDoS Attack
1. Enable rate limiting
2. Block malicious IPs
3. Scale up resources temporarily
4. Contact hosting provider
## Support Contacts
### Technical Support
- **Coolify Documentation**: https://coolify.io/docs
- **Docker Support**: https://docs.docker.com
- **PostgreSQL Docs**: https://www.postgresql.org/docs
### Hosting Providers
- **Hetzner**: Germany-based, good latency to North Macedonia
- **DigitalOcean**: Amsterdam datacenter
- **Local providers**: check Macedonian hosting companies as well
## Cost Estimates
| Resource | Monthly Cost | Annual Cost |
|----------|-------------|-------------|
| VPS (4GB/2CPU) | €12 | €144 |
| Domain (placebo.mk) | ~€1.25 (billed €15/year) | €15 |
| Backup Storage (100GB) | €5 | €60 |
| **Total** | **~€18/month** | **€219/year** |
## Success Metrics
### Technical Metrics
- Uptime: >99.9%
- Page load time: <2 seconds
- API response time: <200ms
- Error rate: <0.1%
### Business Metrics
- Monthly visitors
- Article publication rate
- User engagement
- Revenue (if applicable)
## Next Steps
1. **Set up development environment** with Docker
2. **Test migration** from SQLite to PostgreSQL
3. **Deploy to staging** environment
4. **Perform load testing**
5. **Go live** with production deployment
## Changelog
### v1.0.0 - Initial Deployment
- Dockerized application stack
- PostgreSQL migration
- Coolify deployment configuration
- Basic monitoring and backup
### Future Improvements
- Redis caching implementation
- CDN integration
- Advanced monitoring (Prometheus/Grafana)
- Automated testing pipeline
- Macedonian language optimization

DOCKER-README.md Normal file

@ -0,0 +1,441 @@
# Placebo.mk Docker Setup
## Overview
Complete Docker configuration for Placebo.mk, a Macedonian news site with a sarcastic tone. Includes both development and production setups.
## Architecture
- **Frontend**: TanStack (React 19) + Vite + Tailwind CSS
- **Backend**: NestJS + TypeORM
- **CMS**: Strapi
- **Database**: PostgreSQL (production), SQLite (development)
- **Reverse Proxy**: Nginx (production)
## Quick Start
### Development Environment (Hot Reload)
```bash
# Start development environment
docker-compose -f docker-compose.dev.yml up -d
# Services available at:
# Frontend: http://localhost:5173
# Backend API: http://localhost:3000
# CMS Admin: http://localhost:1337/admin
# PostgreSQL: localhost:5432
```
### Production Environment (Local Testing)
```bash
# Start production environment
docker-compose up -d
# Services available at:
# Frontend: http://localhost:3001
# Backend API: http://localhost:3000
# CMS: http://localhost:1337
```
### Test Everything
```bash
# Run comprehensive test
chmod +x scripts/test-docker.sh
./scripts/test-docker.sh
```
## Docker Files Structure
### Production Dockerfiles
- `backend/Dockerfile` - NestJS API with Node.js 20 Alpine
- `frontend/Dockerfile` - React app built with Vite, served by Nginx
- `cms/cms/Dockerfile` - Strapi CMS with PostgreSQL support
- `frontend/nginx.conf` - Nginx configuration for frontend
### Development Dockerfiles
- `backend/Dockerfile.dev` - Development with hot reload
- `frontend/Dockerfile.dev` - Development with Vite dev server
- `cms/cms/Dockerfile.dev` - Strapi development mode
### Docker Compose Files
- `docker-compose.yml` - Production setup with all services
- `docker-compose.dev.yml` - Development setup with volume mounts
## Services
### 1. PostgreSQL Database
- **Image**: `postgres:16-alpine`
- **Port**: 5432
- **Database**: `placebo_db`
- **User**: `placebo_user`
- **Password**: `placebo_password`
- **Volume**: `postgres_data` (persistent storage)
### 2. Backend API (NestJS)
- **Production Port**: 3000
- **Development Port**: 3000 (hot reload)
- **Health Check**: `GET /health`
- **Environment**: See `backend/.env.example`
### 3. Frontend (TanStack React)
- **Production Port**: 3001 (via Nginx)
- **Development Port**: 5173 (Vite dev server)
- **Build Tool**: Vite
- **Environment**: See `frontend/.env.example`
### 4. CMS (Strapi)
- **Port**: 1337
- **Admin**: `/admin`
- **Health Check**: `GET /_health`
- **Environment**: See `cms/cms/.env.example`
### 5. Nginx (Production Only)
- **Port**: 80
- **Configuration**: `frontend/nginx.conf`
- **Features**: SSL, compression, security headers, React Router support
## Environment Variables
### Backend (.env)
```bash
NODE_ENV=production
DATABASE_TYPE=postgres
DATABASE_HOST=postgres
DATABASE_PORT=5432
DATABASE_USERNAME=placebo_user
DATABASE_PASSWORD=placebo_password
DATABASE_NAME=placebo_db
JWT_SECRET=your-jwt-secret-key-change-in-production
CORS_ORIGIN=http://localhost:5173,http://localhost:3001
```
### Frontend (.env)
```bash
NODE_ENV=production
VITE_API_URL=http://localhost:3000
VITE_CMS_URL=http://localhost:1337
```
### CMS (.env)
```bash
NODE_ENV=production
DATABASE_CLIENT=postgres
DATABASE_HOST=postgres
DATABASE_PORT=5432
DATABASE_NAME=placebo_db
DATABASE_USERNAME=placebo_user
DATABASE_PASSWORD=placebo_password
JWT_SECRET=your-jwt-secret-key-change-in-production
```
## Common Commands
### Development
```bash
# Start development environment
docker-compose -f docker-compose.dev.yml up -d
# View logs
docker-compose -f docker-compose.dev.yml logs -f
# Stop development
docker-compose -f docker-compose.dev.yml down
# Rebuild development images
docker-compose -f docker-compose.dev.yml build
```
### Production (Local Testing)
```bash
# Start production environment
docker-compose up -d
# View logs
docker-compose logs -f
# Stop production
docker-compose down
# Rebuild production images
docker-compose build
```
### Database Operations
```bash
# Access PostgreSQL shell
docker-compose exec postgres psql -U placebo_user -d placebo_db
# Backup database
docker-compose exec postgres pg_dump -U placebo_user placebo_db > backup.sql
# Restore database
docker-compose exec -T postgres psql -U placebo_user placebo_db < backup.sql
```
### Service Management
```bash
# Restart specific service
docker-compose restart backend
docker-compose restart frontend
docker-compose restart cms
# View service status
docker-compose ps
# View resource usage (stats is a docker command, not a compose subcommand)
docker stats
```
## Health Checks
All services include health checks:
### Backend
```bash
curl http://localhost:3000/health
```
### CMS
```bash
curl http://localhost:1337/_health
```
### Frontend
```bash
curl http://localhost:3001
```
### PostgreSQL
```bash
docker-compose exec postgres pg_isready -U placebo_user -d placebo_db
```
## Database Migration
### SQLite to PostgreSQL Migration
A detailed migration plan is in `scripts/migrate-to-postgres.md`.
```bash
# Export SQLite data
sqlite3 backend/data.db .dump > backend-data.sql
# Transform for PostgreSQL
python scripts/transform-sqlite-to-postgres.py backend-data.sql > backend-postgres.sql
# Import to PostgreSQL
docker-compose exec -T postgres psql -U placebo_user placebo_db < backend-postgres.sql
```
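The transform script is project-specific, but the usual rewrites are mechanical; a hedged `sed` sketch of the most common SQLite-to-PostgreSQL fixes (the `sqlite_to_pg` helper is our own, and real dumps typically need more care, e.g. boolean and quoting fixes):

```shell
# Rewrite common SQLite-isms into PostgreSQL syntax (order matters:
# strip AUTOINCREMENT first, then promote INTEGER PRIMARY KEY to SERIAL)
sqlite_to_pg() {
  sed -e 's/ AUTOINCREMENT//g' \
      -e 's/INTEGER PRIMARY KEY/SERIAL PRIMARY KEY/g' \
      -e 's/datetime/timestamp/g'
}

# Example:
# sqlite_to_pg < backend-data.sql > backend-postgres.sql
```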
## Troubleshooting
### Common Issues
#### 1. Port Conflicts
```bash
# Check what's using port 3000
sudo lsof -i :3000
# Or use different ports in docker-compose.yml
```
#### 2. Docker Build Failures
```bash
# Clear Docker cache
docker system prune -a
# Rebuild with no cache
docker-compose build --no-cache
```
#### 3. Database Connection Issues
```bash
# Check PostgreSQL logs
docker-compose logs postgres
# Test TCP connectivity to PostgreSQL from the backend container
docker-compose exec backend nc -zv postgres 5432
```
#### 4. Permission Issues
```bash
# Fix volume permissions
sudo chown -R $USER:$USER .
# Rebuild containers
docker-compose down && docker-compose up -d
```
### Debug Commands
```bash
# Shell into container
docker-compose exec backend sh
docker-compose exec frontend sh
docker-compose exec cms sh
# View container details
docker-compose exec backend env
docker-compose exec frontend env
# Check network connectivity (TCP check for PostgreSQL, HTTP for the API)
docker-compose exec backend nc -zv postgres 5432
docker-compose exec frontend curl http://backend:3000/health
```
## Deployment to Coolify
### 1. Prepare for Production
```bash
# Update environment variables for production
cp backend/.env.example backend/.env.production
cp frontend/.env.example frontend/.env.production
cp cms/cms/.env.example cms/cms/.env.production
# Build production images
docker-compose build
# Test locally
docker-compose up -d
./scripts/test-docker.sh
```
### 2. Coolify Configuration
1. Connect GitHub repository to Coolify
2. Create three applications:
- Frontend (from `/frontend`)
- Backend (from `/backend`)
- CMS (from `/cms/cms`)
3. Configure environment variables
4. Set up PostgreSQL database
5. Configure domains and SSL
### 3. Database Migration
1. Export production SQLite data
2. Transform for PostgreSQL
3. Import to Coolify PostgreSQL
4. Verify data integrity
## Performance Optimization
### Frontend
- Nginx gzip compression enabled
- Static asset caching (1 year)
- Security headers configured
- React Router support
### Backend
- Health checks every 30 seconds
- Connection pooling with PostgreSQL
- CORS configured for frontend domains
### Database
- PostgreSQL connection pooling
- Regular backups via volumes
- Health checks with `pg_isready`
## Security
### Implemented Security Features
1. **Non-root users**: All containers run as non-root
2. **Health checks**: Automatic service monitoring
3. **Environment variables**: Secrets stored in .env files
4. **Network isolation**: Services on internal network
5. **Security headers**: X-Frame-Options, CSP, etc.
6. **SSL ready**: Nginx configured for HTTPS
### Security Best Practices
1. Never commit `.env` files to Git
2. Use strong passwords for PostgreSQL
3. Regular security updates
4. Monitor Docker logs
5. Backup database regularly
## Monitoring
### Built-in Monitoring
```bash
# View container logs
docker-compose logs -f
# Check container status
docker-compose ps
# Monitor resource usage (stats is a docker command, not a compose subcommand)
docker stats
# View health check status
docker inspect --format='{{.State.Health.Status}}' placebo-backend
```
### Custom Monitoring
Add to `docker-compose.yml`:
```yaml
  monitoring:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
```
## Backup and Recovery
### Database Backup
```bash
# Daily backup script
docker-compose exec postgres pg_dump -U placebo_user placebo_db > backup-$(date +%Y%m%d).sql
# Compress backup
gzip backup-$(date +%Y%m%d).sql
```
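The two steps above can be folded into one helper that writes a dated, compressed dump; the `backup_db` function is our own sketch, using this README's user and database names:

```shell
# backup_db OUTDIR: dump placebo_db and gzip it under a dated filename
backup_db() {
  out="$1/placebo_db-$(date +%Y%m%d).sql.gz"
  docker-compose exec -T postgres pg_dump -U placebo_user placebo_db | gzip > "$out"
  echo "$out"
}

# Example:
# backup_db ./backups
```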
### Volume Backup
```bash
# Backup PostgreSQL volume
docker run --rm -v postgres_data:/source -v $(pwd)/backups:/backup alpine tar czf /backup/postgres-$(date +%Y%m%d).tar.gz -C /source .
```
### Recovery
```bash
# Restore database
docker-compose exec -T postgres psql -U placebo_user placebo_db < backup.sql
# Restore volume
docker run --rm -v postgres_data:/target -v $(pwd)/backups:/backup alpine tar xzf /backup/postgres-backup.tar.gz -C /target
```
## Contributing
### Adding New Services
1. Create Dockerfile in service directory
2. Add service to `docker-compose.yml`
3. Configure environment variables
4. Add health check
5. Test locally
6. Update documentation
### Updating Dependencies
```bash
# Rebuild with updated dependencies
docker-compose build --no-cache
# Update package.json in mounted volumes
docker-compose exec backend npm update
docker-compose exec frontend npm update
docker-compose exec cms npm update
```
## Support
### Documentation
- `DEPLOYMENT.md` - Complete deployment guide
- `scripts/migrate-to-postgres.md` - Database migration plan
- `AGENTS.md` - Development guidelines
### Troubleshooting Resources
- Docker Documentation: https://docs.docker.com
- PostgreSQL Documentation: https://www.postgresql.org/docs
- Strapi Documentation: https://docs.strapi.io
- Coolify Documentation: https://coolify.io/docs
### Macedonian Resources
- Local hosting providers
- Macedonian domain registration
- GDPR compliance for Macedonian users

backend/Dockerfile Normal file

@ -0,0 +1,49 @@
# Backend Dockerfile for Placebo.mk NestJS API
# Build stage
FROM node:20-alpine AS builder
WORKDIR /app
# Copy package manifests (package*.json already matches package-lock.json)
COPY package*.json ./
# Install all dependencies; dev dependencies are needed for the TypeScript build
RUN npm ci
# Copy source code
COPY . .
# Build TypeScript
RUN npm run build
# Drop dev dependencies so only runtime packages reach the final stage
RUN npm prune --omit=dev
# Production stage
FROM node:20-alpine
WORKDIR /app
# Create non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001
# Copy built application from builder stage
COPY --from=builder --chown=nodejs:nodejs /app/dist ./dist
COPY --from=builder --chown=nodejs:nodejs /app/node_modules ./node_modules
COPY --from=builder --chown=nodejs:nodejs /app/package*.json ./
# Copy environment configuration
COPY --chown=nodejs:nodejs .env.example .env
# Switch to non-root user
USER nodejs
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD node -e "require('http').get('http://localhost:3000/health', (r) => {if(r.statusCode !== 200) throw new Error()})"
# Expose port
EXPOSE 3000
# Start application
CMD ["node", "dist/main.js"]

backend/Dockerfile.dev Normal file

@ -0,0 +1,28 @@
# Backend Development Dockerfile for Placebo.mk NestJS API
FROM node:20-alpine
WORKDIR /app
# Copy package manifests (package*.json also matches package-lock.json)
COPY package*.json ./
# Clear npm cache and install dependencies
RUN npm cache clean --force && \
    npm install
# Copy source code
COPY . .
# Fix permissions - use node user that exists in base image
RUN chown -R node:node /app
# Switch to non-root user that exists in base image
USER node
# Expose port
EXPOSE 3000
# Start development server
CMD ["npm", "run", "start:dev"]

Binary file not shown.

backend/database.sqlite.old Normal file

Binary file not shown.

@ -18,6 +18,7 @@
"@nestjs/typeorm": "^11.0.0",
"class-transformer": "^0.5.1",
"class-validator": "^0.14.3",
"pg": "^8.18.0",
"reflect-metadata": "^0.2.2",
"rxjs": "^7.8.1",
"sqlite3": "^5.1.7",
@ -9135,6 +9136,96 @@
"node": ">=8"
}
},
"node_modules/pg": {
"version": "8.18.0",
"resolved": "https://registry.npmjs.org/pg/-/pg-8.18.0.tgz",
"integrity": "sha512-xqrUDL1b9MbkydY/s+VZ6v+xiMUmOUk7SS9d/1kpyQxoJ6U9AO1oIJyUWVZojbfe5Cc/oluutcgFG4L9RDP1iQ==",
"license": "MIT",
"peer": true,
"dependencies": {
"pg-connection-string": "^2.11.0",
"pg-pool": "^3.11.0",
"pg-protocol": "^1.11.0",
"pg-types": "2.2.0",
"pgpass": "1.0.5"
},
"engines": {
"node": ">= 16.0.0"
},
"optionalDependencies": {
"pg-cloudflare": "^1.3.0"
},
"peerDependencies": {
"pg-native": ">=3.0.1"
},
"peerDependenciesMeta": {
"pg-native": {
"optional": true
}
}
},
"node_modules/pg-cloudflare": {
"version": "1.3.0",
"resolved": "https://registry.npmjs.org/pg-cloudflare/-/pg-cloudflare-1.3.0.tgz",
"integrity": "sha512-6lswVVSztmHiRtD6I8hw4qP/nDm1EJbKMRhf3HCYaqud7frGysPv7FYJ5noZQdhQtN2xJnimfMtvQq21pdbzyQ==",
"license": "MIT",
"optional": true
},
"node_modules/pg-connection-string": {
"version": "2.11.0",
"resolved": "https://registry.npmjs.org/pg-connection-string/-/pg-connection-string-2.11.0.tgz",
"integrity": "sha512-kecgoJwhOpxYU21rZjULrmrBJ698U2RxXofKVzOn5UDj61BPj/qMb7diYUR1nLScCDbrztQFl1TaQZT0t1EtzQ==",
"license": "MIT"
},
"node_modules/pg-int8": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/pg-int8/-/pg-int8-1.0.1.tgz",
"integrity": "sha512-WCtabS6t3c8SkpDBUlb1kjOs7l66xsGdKpIPZsg4wR+B3+u9UAum2odSsF9tnvxg80h4ZxLWMy4pRjOsFIqQpw==",
"license": "ISC",
"engines": {
"node": ">=4.0.0"
}
},
"node_modules/pg-pool": {
"version": "3.11.0",
"resolved": "https://registry.npmjs.org/pg-pool/-/pg-pool-3.11.0.tgz",
"integrity": "sha512-MJYfvHwtGp870aeusDh+hg9apvOe2zmpZJpyt+BMtzUWlVqbhFmMK6bOBXLBUPd7iRtIF9fZplDc7KrPN3PN7w==",
"license": "MIT",
"peerDependencies": {
"pg": ">=8.0"
}
},
"node_modules/pg-protocol": {
"version": "1.11.0",
"resolved": "https://registry.npmjs.org/pg-protocol/-/pg-protocol-1.11.0.tgz",
"integrity": "sha512-pfsxk2M9M3BuGgDOfuy37VNRRX3jmKgMjcvAcWqNDpZSf4cUmv8HSOl5ViRQFsfARFn0KuUQTgLxVMbNq5NW3g==",
"license": "MIT"
},
"node_modules/pg-types": {
"version": "2.2.0",
"resolved": "https://registry.npmjs.org/pg-types/-/pg-types-2.2.0.tgz",
"integrity": "sha512-qTAAlrEsl8s4OiEQY69wDvcMIdQN6wdz5ojQiOy6YRMuynxenON0O5oCpJI6lshc6scgAY8qvJ2On/p+CXY0GA==",
"license": "MIT",
"dependencies": {
"pg-int8": "1.0.1",
"postgres-array": "~2.0.0",
"postgres-bytea": "~1.0.0",
"postgres-date": "~1.0.4",
"postgres-interval": "^1.1.0"
},
"engines": {
"node": ">=4"
}
},
"node_modules/pgpass": {
"version": "1.0.5",
"resolved": "https://registry.npmjs.org/pgpass/-/pgpass-1.0.5.tgz",
"integrity": "sha512-FdW9r/jQZhSeohs1Z3sI1yxFQNFvMcnmfuj4WBMUTxOrAyLMaTcE1aAMBiTlbMNaXvBCQuVi0R7hd8udDSP7ug==",
"license": "MIT",
"dependencies": {
"split2": "^4.1.0"
}
},
"node_modules/picocolors": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/picocolors/-/picocolors-1.1.1.tgz",
@ -9253,6 +9344,45 @@
"node": ">= 0.4"
}
},
"node_modules/postgres-array": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/postgres-array/-/postgres-array-2.0.0.tgz",
"integrity": "sha512-VpZrUqU5A69eQyW2c5CA1jtLecCsN2U/bD6VilrFDWq5+5UIEVO7nazS3TEcHf1zuPYO/sqGvUvW62g86RXZuA==",
"license": "MIT",
"engines": {
"node": ">=4"
}
},
"node_modules/postgres-bytea": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/postgres-bytea/-/postgres-bytea-1.0.1.tgz",
"integrity": "sha512-5+5HqXnsZPE65IJZSMkZtURARZelel2oXUEO8rH83VS/hxH5vv1uHquPg5wZs8yMAfdv971IU+kcPUczi7NVBQ==",
"license": "MIT",
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/postgres-date": {
"version": "1.0.7",
"resolved": "https://registry.npmjs.org/postgres-date/-/postgres-date-1.0.7.tgz",
"integrity": "sha512-suDmjLVQg78nMK2UZ454hAG+OAW+HQPZ6n++TNDUX+L0+uUlLywnoxJKDou51Zm+zTCjrCl0Nq6J9C5hP9vK/Q==",
"license": "MIT",
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/postgres-interval": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/postgres-interval/-/postgres-interval-1.2.0.tgz",
"integrity": "sha512-9ZhXKM/rw350N1ovuWHbGxnGh/SNJ4cnxHiM0rxE4VN41wsg8P8zWn9hv/buK00RP4WvlOyr/RBDiptyxVbkZQ==",
"license": "MIT",
"dependencies": {
"xtend": "^4.0.0"
},
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/prebuild-install": {
"version": "7.1.3",
"resolved": "https://registry.npmjs.org/prebuild-install/-/prebuild-install-7.1.3.tgz",
@ -10081,6 +10211,15 @@
"node": ">=0.10.0"
}
},
"node_modules/split2": {
"version": "4.2.0",
"resolved": "https://registry.npmjs.org/split2/-/split2-4.2.0.tgz",
"integrity": "sha512-UcjcJOWknrNkF6PLX83qcHM6KHgVKNkV62Y8a5uYDVv9ydGQVwAHMKqHdJje1VTWpljG0WYpCDhrCdAOYH4TWg==",
"license": "ISC",
"engines": {
"node": ">= 10.x"
}
},
"node_modules/sprintf-js": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/sprintf-js/-/sprintf-js-1.0.3.tgz",


@ -31,6 +31,7 @@
"@nestjs/typeorm": "^11.0.0",
"class-transformer": "^0.5.1",
"class-validator": "^0.14.3",
"pg": "^8.18.0",
"reflect-metadata": "^0.2.2",
"rxjs": "^7.8.1",
"sqlite3": "^5.1.7",


@ -20,8 +20,12 @@ import {
isGlobal: true,
}),
TypeOrmModule.forRoot({
type: 'sqlite',
database: process.env.DATABASE_PATH ?? './database.sqlite',
type: 'postgres',
host: process.env.DATABASE_HOST || 'localhost',
port: parseInt(process.env.DATABASE_PORT || '5432', 10),
username: process.env.DATABASE_USERNAME || 'placebo_user',
password: process.env.DATABASE_PASSWORD || 'placebo_password',
database: process.env.DATABASE_NAME || 'placebo_backend_db',
entities: [Article, Author, Category, LiveBlog, LiveBlogUpdate],
synchronize: process.env.NODE_ENV !== 'production',
logging: process.env.NODE_ENV === 'development',


@ -15,6 +15,11 @@ async function bootstrap() {
});
app.setGlobalPrefix('api/v1');
await app.listen(process.env.PORT ?? 3000);
const port = process.env.PORT ?? 3000;
const host = '0.0.0.0'; // Bind to all interfaces for Docker
await app.listen(port, host);
console.log(`Application is running on: http://${host}:${port}`);
}
void bootstrap();


@ -223,7 +223,7 @@ export class LiveBlogUpdate {
@Column({ nullable: true })
authorId: string;
@Column({ type: 'datetime', nullable: true })
@Column({ type: 'timestamp', nullable: true })
scheduledAt: Date;
@Column({ nullable: true })


@ -376,8 +376,19 @@ export class LiveBlogService implements OnModuleInit {
      blogId: liveBlogId,
    });
    // Send periodic keep-alive messages to prevent timeout
    const keepAliveInterval = setInterval(() => {
      try {
        response.write(`: keep-alive\n\n`);
      } catch (error) {
        // Client disconnected, stop sending keep-alive
        clearInterval(keepAliveInterval);
      }
    }, 15000); // Send keep-alive every 15 seconds
    // Handle client disconnect
    response.on('close', () => {
      clearInterval(keepAliveInterval);
      this.sseClients.delete(clientId);
      this.logger.log(
        `Client ${clientId} disconnected from live blog ${liveBlogId}`,

cms/cms/Dockerfile Normal file

@ -0,0 +1,57 @@
# CMS Dockerfile for Placebo.mk Strapi CMS
# Build stage
FROM node:20-alpine AS builder
WORKDIR /app
# Copy package manifests (package*.json already matches package-lock.json)
COPY package*.json ./
# Install all dependencies; dev dependencies are needed for the Strapi build
RUN npm ci
# Copy source code
COPY . .
# Build Strapi
RUN npm run build
# Drop dev dependencies so only runtime packages reach the final stage
RUN npm prune --omit=dev
# Production stage
FROM node:20-alpine
WORKDIR /app
# Create non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001
# Install SQLite for development (will use PostgreSQL in production)
RUN apk add --no-cache sqlite
# Copy built application from builder stage
COPY --from=builder --chown=nodejs:nodejs /app/dist ./dist
COPY --from=builder --chown=nodejs:nodejs /app/public ./public
COPY --from=builder --chown=nodejs:nodejs /app/node_modules ./node_modules
COPY --from=builder --chown=nodejs:nodejs /app/package*.json ./
# Copy environment configuration
COPY --chown=nodejs:nodejs .env.example .env
# Create data directory for SQLite
RUN mkdir -p /app/.tmp && \
    chown -R nodejs:nodejs /app/.tmp
# Switch to non-root user
USER nodejs
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD node -e "require('http').get('http://localhost:1337/_health', (r) => {if(r.statusCode !== 200) throw new Error()})"
# Expose port
EXPOSE 1337
# Start Strapi
CMD ["npm", "run", "start"]

cms/cms/Dockerfile.dev Normal file

@ -0,0 +1,35 @@
# CMS Development Dockerfile for Placebo.mk Strapi CMS
FROM node:20-slim
WORKDIR /app
# Install Python and build tools for any native modules
RUN apt-get update && apt-get install -y \
python3 \
make \
g++ \
&& rm -rf /var/lib/apt/lists/*
# Install dependencies with better error handling
COPY package*.json ./
# Fix permissions before npm install
RUN chown -R node:node /app
# Switch to non-root user for npm install
USER node
# Clear npm cache and install dependencies
RUN npm cache clean --force && \
npm install
# Copy source code as node user
COPY --chown=node:node . .
# Expose port
EXPOSE 1337
# Start development server
CMD ["npm", "run", "develop"]

@@ -18,7 +18,7 @@
"@strapi/plugin-cloud": "5.33.0",
"@strapi/plugin-users-permissions": "5.33.0",
"@strapi/strapi": "5.33.0",
"better-sqlite3": "12.4.1",
"pg": "^8.13.3",
"react": "^18.0.0",
"react-dom": "^18.0.0",
"react-router-dom": "^6.0.0",

docker-compose.dev.yml Normal file

@@ -0,0 +1,124 @@
version: '3.8'

services:
  # PostgreSQL database for development
  postgres:
    image: postgres:16-alpine
    container_name: placebo-postgres-dev
    environment:
      POSTGRES_USER: placebo_user
      POSTGRES_PASSWORD: placebo_password
    volumes:
      - postgres_data_dev:/var/lib/postgresql/data
      - ./scripts/init-postgres-dev.sql:/docker-entrypoint-initdb.d/init-postgres-dev.sql
    ports:
      - "5432:5432"
    networks:
      - placebo-network-dev

  # Backend API (NestJS) - Development with hot reload
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile.dev
    container_name: placebo-backend-dev
    environment:
      NODE_ENV: development
      DATABASE_HOST: postgres
      DATABASE_PORT: 5432
      DATABASE_USERNAME: placebo_user
      DATABASE_PASSWORD: placebo_password
      DATABASE_NAME: placebo_backend_db
      DATABASE_SYNCHRONIZE: "true"
      DATABASE_LOGGING: "true"
      JWT_SECRET: dev-jwt-secret
      JWT_EXPIRATION: 3600
      CORS_ORIGIN: http://localhost:5173
      PORT: 3000
      STRAPI_URL: http://cms:1337
      STRAPI_API_TOKEN: 578d628f62df967ff95f95bedb205b5d10bbf792340519c8c467d6473208e16b3918151a97b49fa2285a53df0ec8e340a9ca555b01a654bd22152847840e6a368ee626a6f1338ce2f23790c171013b263ec80fbaf116e2b459d3663b234d08855fd0eb631991ed15bb94f7dbb0b80f190352965c72c7fd327c73629ceff38fbb
    ports:
      - "3000:3000"
    depends_on:
      - postgres
    volumes:
      - ./backend/src:/app/src
      - ./backend/package.json:/app/package.json
      - ./backend/package-lock.json:/app/package-lock.json
      - ./backend/.env:/app/.env
    command: npm run start:dev
    networks:
      - placebo-network-dev

  # Frontend (TanStack React) - Development with hot reload
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile.dev
    container_name: placebo-frontend-dev
    environment:
      NODE_ENV: development
      VITE_API_URL: http://localhost:3000/api/v1
      VITE_CMS_URL: http://localhost:1337
    ports:
      - "5173:5173"
    depends_on:
      - backend
    volumes:
      - ./frontend/src:/app/src
      - ./frontend/public:/app/public
      - ./frontend/package.json:/app/package.json
      - ./frontend/package-lock.json:/app/package-lock.json
      - ./frontend/index.html:/app/index.html
      - ./frontend/vite.config.ts:/app/vite.config.ts
      - ./frontend/tsconfig.json:/app/tsconfig.json
      - ./frontend/tsconfig.node.json:/app/tsconfig.node.json
      - ./frontend/tailwind.config.js:/app/tailwind.config.js
      - ./frontend/.env:/app/.env
    command: npm run dev
    networks:
      - placebo-network-dev

  # CMS (Strapi) - Development with hot reload
  cms:
    build:
      context: ./cms/cms
      dockerfile: Dockerfile.dev
    container_name: placebo-cms-dev
    environment:
      NODE_ENV: development
      DATABASE_CLIENT: postgres
      DATABASE_HOST: postgres
      DATABASE_PORT: 5432
      DATABASE_NAME: placebo_cms_db
      DATABASE_USERNAME: placebo_user
      DATABASE_PASSWORD: placebo_password
      DATABASE_SSL: "false"
      JWT_SECRET: dev-jwt-secret
      ADMIN_JWT_SECRET: dev-admin-jwt-secret
      APP_KEYS: dev-app-keys
      API_TOKEN_SALT: dev-api-token-salt
      TRANSFER_TOKEN_SALT: dev-transfer-token-salt
      CORS_ORIGIN: http://localhost:5173
      PORT: 1337
    ports:
      - "1337:1337"
    depends_on:
      - postgres
    volumes:
      - ./cms/cms/src:/app/src
      - ./cms/cms/config:/app/config
      - ./cms/cms/public:/app/public
      - ./cms/cms/package.json:/app/package.json
      - ./cms/cms/package-lock.json:/app/package-lock.json
    command: npm run develop
    networks:
      - placebo-network-dev

volumes:
  postgres_data_dev:
    driver: local

networks:
  placebo-network-dev:
    driver: bridge

docker-compose.yml Normal file

@@ -0,0 +1,151 @@
version: '3.8'

services:
  # PostgreSQL database for production
  postgres:
    image: postgres:16-alpine
    container_name: placebo-postgres
    environment:
      POSTGRES_DB: placebo_db
      POSTGRES_USER: placebo_user
      POSTGRES_PASSWORD: placebo_password
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./scripts/init-db.sql:/docker-entrypoint-initdb.d/init-db.sql
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U placebo_user -d placebo_db"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - placebo-network

  # Backend API (NestJS)
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    container_name: placebo-backend
    environment:
      NODE_ENV: production
      DATABASE_TYPE: postgres
      DATABASE_HOST: postgres
      DATABASE_PORT: 5432
      DATABASE_USERNAME: placebo_user
      DATABASE_PASSWORD: placebo_password
      DATABASE_NAME: placebo_db
      DATABASE_SYNCHRONIZE: "false"
      DATABASE_LOGGING: "false"
      JWT_SECRET: ${JWT_SECRET:-your-jwt-secret-key-change-in-production}
      JWT_EXPIRATION: 3600
      CORS_ORIGIN: http://localhost:5173,http://localhost:3001
      PORT: 3000
    ports:
      - "3000:3000"
    depends_on:
      postgres:
        condition: service_healthy
    volumes:
      - ./backend/.env:/app/.env:ro
      - ./backend/uploads:/app/uploads
    healthcheck:
      test: ["CMD", "node", "-e", "require('http').get('http://localhost:3000/health', (r) => {if(r.statusCode !== 200) throw new Error()})"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    networks:
      - placebo-network

  # Frontend (TanStack React)
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
    container_name: placebo-frontend
    environment:
      NODE_ENV: production
      VITE_API_URL: http://localhost:3000/api/v1
      VITE_CMS_URL: http://localhost:1337
    ports:
      - "3001:80"
    depends_on:
      - backend
    volumes:
      - ./frontend/.env:/app/.env:ro
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:80/"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    networks:
      - placebo-network

  # CMS (Strapi)
  cms:
    build:
      context: ./cms/cms
      dockerfile: Dockerfile
    container_name: placebo-cms
    environment:
      NODE_ENV: production
      DATABASE_CLIENT: postgres
      DATABASE_HOST: postgres
      DATABASE_PORT: 5432
      DATABASE_NAME: placebo_db
      DATABASE_USERNAME: placebo_user
      DATABASE_PASSWORD: placebo_password
      DATABASE_SSL: "false"
      JWT_SECRET: ${JWT_SECRET:-your-jwt-secret-key-change-in-production}
      ADMIN_JWT_SECRET: ${ADMIN_JWT_SECRET:-your-admin-jwt-secret-key-change-in-production}
      APP_KEYS: ${APP_KEYS:-your-app-keys-change-in-production}
      API_TOKEN_SALT: ${API_TOKEN_SALT:-your-api-token-salt-change-in-production}
      TRANSFER_TOKEN_SALT: ${TRANSFER_TOKEN_SALT:-your-transfer-token-salt-change-in-production}
      CORS_ORIGIN: http://localhost:5173,http://localhost:3001
      PORT: 1337
    ports:
      - "1337:1337"
    depends_on:
      postgres:
        condition: service_healthy
    volumes:
      - ./cms/cms/.env:/app/.env:ro
      - ./cms/cms/public/uploads:/app/public/uploads
      - ./cms/cms/.tmp:/app/.tmp
    healthcheck:
      test: ["CMD", "node", "-e", "require('http').get('http://localhost:1337/_health', (r) => {if(r.statusCode >= 400) throw new Error()})"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s
    networks:
      - placebo-network

  # Nginx reverse proxy (for production)
  nginx:
    image: nginx:alpine
    container_name: placebo-nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/ssl:/etc/nginx/ssl:ro
      - ./frontend/dist:/usr/share/nginx/html:ro
    depends_on:
      - frontend
      - backend
      - cms
    networks:
      - placebo-network

volumes:
  postgres_data:
    driver: local

networks:
  placebo-network:
    driver: bridge

frontend/Dockerfile Normal file

@@ -0,0 +1,45 @@
# Frontend Dockerfile for Placebo.mk TanStack React App
# Build stage
FROM node:20-alpine AS builder
WORKDIR /app
# Copy package files
COPY package*.json ./
# Install dependencies
RUN npm ci
# Copy source code
COPY . .
# Build application
RUN npm run build
# Production stage
FROM nginx:alpine
# nginx:alpine already provides an unprivileged nginx user; the master
# process keeps root so it can bind port 80, and worker processes run as nginx
# Copy built application from builder stage
COPY --from=builder --chown=nginx:nginx /app/dist /usr/share/nginx/html
# Copy nginx configuration
COPY nginx.conf /etc/nginx/nginx.conf
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD wget --no-verbose --tries=1 --spider http://localhost:80/ || exit 1
# Expose port
EXPOSE 80
# Start nginx
CMD ["nginx", "-g", "daemon off;"]

frontend/Dockerfile.dev Normal file

@@ -0,0 +1,28 @@
# Frontend Development Dockerfile for Placebo.mk TanStack React App
FROM node:20-alpine
WORKDIR /app
# Install dependencies with better error handling
COPY package*.json ./
# Clear npm cache and install dependencies
RUN npm cache clean --force && \
npm install
# Copy source code
COPY . .
# Fix permissions - use node user that exists in base image
RUN chown -R node:node /app
# Switch to non-root user that exists in base image
USER node
# Expose port
EXPOSE 5173
# Start development server with host flag
CMD ["npm", "run", "dev", "--", "--host"]

frontend/nginx.conf Normal file

@@ -0,0 +1,107 @@
worker_processes auto;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Logging
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
error_log /var/log/nginx/error.log warn;
# Basic settings
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
client_max_body_size 100M;
# Gzip compression
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css text/xml text/javascript application/javascript application/xml+rss application/json;
gzip_disable "MSIE [1-6]\.";
# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
# Frontend server
server {
listen 80;
server_name localhost;
root /usr/share/nginx/html;
index index.html;
# Security headers for frontend
add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval'; style-src 'self' 'unsafe-inline'; img-src 'self' data: https:; font-src 'self' data:; connect-src 'self' http://localhost:3000 http://localhost:1337;" always;
# Handle React Router
location / {
try_files $uri $uri/ /index.html;
expires -1;
}
# Cache static assets
location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
expires 1y;
add_header Cache-Control "public, immutable";
}
# API proxy
location /api/ {
# No URI on proxy_pass: keep the /api prefix when forwarding, since the backend serves its routes under /api/v1
proxy_pass http://backend:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
proxy_read_timeout 300;
proxy_connect_timeout 300;
}
# CMS proxy
location /cms/ {
proxy_pass http://cms:1337/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
proxy_read_timeout 300;
proxy_connect_timeout 300;
}
# Health check endpoint
location /health {
access_log off;
return 200 "healthy\n";
add_header Content-Type text/plain;
}
# Error pages
error_page 404 /index.html;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
}

@@ -16,6 +16,13 @@ export function useLiveBlogStream(
liveBlogId: string,
options: LiveBlogStreamOptions = {}
) {
const defaultOptions: LiveBlogStreamOptions = {
autoReconnect: true,
reconnectInterval: 3000,
maxReconnectAttempts: 10,
};
const mergedOptions = { ...defaultOptions, ...options };
const [isConnected, setIsConnected] = useState(false);
const [lastEvent, setLastEvent] = useState<LiveBlogStreamEvent | null>(null);
const [connectionError, setConnectionError] = useState<string | null>(null);
@@ -24,13 +31,13 @@
const eventSourceRef = useRef<EventSource | null>(null);
const reconnectTimeoutRef = useRef<ReturnType<typeof setTimeout> | null>(null);
const lastEventIdRef = useRef<string | null>(null);
const optionsRef = useRef(options);
const optionsRef = useRef(mergedOptions);
const reconnectAttemptsRef = useRef(reconnectAttempts);
// Update refs when props change
useEffect(() => {
optionsRef.current = options;
}, [options]);
optionsRef.current = mergedOptions;
}, [mergedOptions]);
useEffect(() => {
reconnectAttemptsRef.current = reconnectAttempts;
@@ -59,7 +66,7 @@
eventSourceRef.current.close();
}
const url = new URL(`${import.meta.env.VITE_API_URL}/api/v1/live-blogs/${liveBlogId}/stream`, window.location.origin);
const url = new URL(`${import.meta.env.VITE_API_URL}/live-blogs/${liveBlogId}/stream`, window.location.origin);
if (lastEventIdRef.current) {
url.searchParams.set('last-event-id', lastEventIdRef.current);
@@ -117,14 +124,14 @@
setConnectionError('Connection to live blog lost');
// Attempt reconnection if enabled and within limits
if (optionsRef.current.autoReconnect && reconnectAttemptsRef.current < (optionsRef.current.maxReconnectAttempts || 5)) {
if (optionsRef.current.autoReconnect && reconnectAttemptsRef.current < optionsRef.current.maxReconnectAttempts) {
const nextAttempt = reconnectAttemptsRef.current + 1;
setReconnectAttempts(nextAttempt);
reconnectTimeoutRef.current = setTimeout(() => {
console.log(`Attempting reconnection (${nextAttempt}/${optionsRef.current.maxReconnectAttempts || 5})`);
console.log(`Attempting reconnection (${nextAttempt}/${optionsRef.current.maxReconnectAttempts})`);
createConnection();
}, optionsRef.current.reconnectInterval || 3000);
} else if (reconnectAttemptsRef.current >= (optionsRef.current.maxReconnectAttempts || 5)) {
}, optionsRef.current.reconnectInterval);
} else if (reconnectAttemptsRef.current >= optionsRef.current.maxReconnectAttempts) {
setConnectionError('Failed to reconnect after multiple attempts');
}
};
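The changed hunks above merge caller options with defaults and retry on a fixed interval until a cap is reached. The same decision logic, as a small language-neutral sketch in Python (illustrative only; `on_stream_error` is a hypothetical name, and the keys mirror `LiveBlogStreamOptions`):

```python
DEFAULTS = {"autoReconnect": True, "reconnectInterval": 3000, "maxReconnectAttempts": 10}

def on_stream_error(state, options=None):
    """Decide what the stream does after a connection error.

    state holds an 'attempts' counter; returns ('reconnect', delay_ms)
    while under the cap, ('give_up', None) otherwise.
    """
    cfg = {**DEFAULTS, **(options or {})}
    if cfg["autoReconnect"] and state["attempts"] < cfg["maxReconnectAttempts"]:
        state["attempts"] += 1
        return ("reconnect", cfg["reconnectInterval"])
    return ("give_up", None)
```

With the defaults, the eleventh consecutive error gives up, matching `maxReconnectAttempts: 10`.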

@@ -32,6 +32,7 @@ export function useLiveBlogUpdates(liveBlogId: string, page = 1, limit = 50) {
queryKey: ['liveBlogUpdates', liveBlogId, page, limit],
queryFn: () => api.fetchLiveBlogUpdates(liveBlogId, page, limit),
enabled: !!liveBlogId,
refetchInterval: 10000, // Poll every 10 seconds as fallback
});
}

@@ -14,4 +14,9 @@ export default defineConfig({
'@': path.resolve(__dirname, './src'),
},
},
server: {
host: true, // Listen on all addresses
port: 5173,
strictPort: true,
},
})

@@ -0,0 +1,44 @@
-- PostgreSQL initialization script for Placebo.mk development
-- Creates separate databases for backend and CMS
-- Create databases if they don't exist
SELECT 'CREATE DATABASE placebo_backend_db'
WHERE NOT EXISTS (SELECT FROM pg_database WHERE datname = 'placebo_backend_db')\gexec
SELECT 'CREATE DATABASE placebo_cms_db'
WHERE NOT EXISTS (SELECT FROM pg_database WHERE datname = 'placebo_cms_db')\gexec
-- Create user if it doesn't exist
DO $$
BEGIN
    IF NOT EXISTS (SELECT FROM pg_catalog.pg_roles WHERE rolname = 'placebo_user') THEN
        CREATE USER placebo_user WITH PASSWORD 'placebo_password';
    END IF;
END
$$;
-- Grant privileges
GRANT ALL PRIVILEGES ON DATABASE placebo_backend_db TO placebo_user;
GRANT ALL PRIVILEGES ON DATABASE placebo_cms_db TO placebo_user;
-- Create extensions (useful for some applications)
\c placebo_backend_db
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE EXTENSION IF NOT EXISTS "pgcrypto";
\c placebo_cms_db
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE EXTENSION IF NOT EXISTS "pgcrypto";
-- Set search path for convenience
ALTER DATABASE placebo_backend_db SET search_path TO public;
ALTER DATABASE placebo_cms_db SET search_path TO public;
-- Output status
SELECT 'PostgreSQL initialization complete' AS status;
-- Note: the script is connected to one database at a time, so a per-database
-- table count would be misleading here; report only name and size
SELECT
    datname AS database,
    pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database
WHERE datname IN ('placebo_backend_db', 'placebo_cms_db');

@@ -0,0 +1,297 @@
# Database Migration Plan: SQLite → PostgreSQL
## Overview
This document outlines the migration strategy for moving from SQLite (development) to PostgreSQL (production) for both the Backend API and CMS.
## Current State
- **Backend**: Uses SQLite with TypeORM entities
- **CMS**: Uses SQLite with Strapi's internal database
- **Development**: Both use local SQLite files
- **Production**: Need PostgreSQL for scalability and reliability
## Migration Steps
### Phase 1: Database Schema Preparation
#### 1.1 Update Backend Database Configuration
```typescript
// backend/src/config/database.config.ts
export default () => ({
  database: {
    type: process.env.DATABASE_TYPE || 'sqlite',
    host: process.env.DATABASE_HOST || 'localhost',
    port: parseInt(process.env.DATABASE_PORT ?? '5432', 10),
    username: process.env.DATABASE_USERNAME || 'placebo_user',
    password: process.env.DATABASE_PASSWORD || 'placebo_password',
    database: process.env.DATABASE_NAME || 'placebo_backend_db',
    synchronize: process.env.NODE_ENV !== 'production',
    logging: process.env.NODE_ENV !== 'production',
    entities: [__dirname + '/../**/*.entity{.ts,.js}'],
    migrations: [__dirname + '/../migrations/*{.ts,.js}'],
    cli: {
      migrationsDir: 'src/migrations',
    },
  },
});
```
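The fallback chain above (environment variable, else a development default) is easy to get wrong, so it helps to eyeball what each environment resolves to. A minimal Python sketch of the same resolution (`database_config` is a hypothetical helper; variable names mirror the config above):

```python
import os

def database_config(env=None):
    """Resolve DB settings with the same fallbacks as database.config.ts."""
    env = os.environ if env is None else env
    return {
        "type": env.get("DATABASE_TYPE", "sqlite"),
        "host": env.get("DATABASE_HOST", "localhost"),
        "port": int(env.get("DATABASE_PORT", "5432")),
        "username": env.get("DATABASE_USERNAME", "placebo_user"),
        "database": env.get("DATABASE_NAME", "placebo_backend_db"),
        # synchronize must stay off in production; migrations own the schema there
        "synchronize": env.get("NODE_ENV") != "production",
    }
```

With an empty environment this yields the SQLite development defaults; setting `NODE_ENV=production` and the `DATABASE_*` variables flips it to PostgreSQL with `synchronize` disabled.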
#### 1.2 Create TypeORM Migrations
```bash
# Generate migration for existing schema
cd backend
npm run typeorm:generate-migration --name=InitialSchema
# Create migration files for PostgreSQL compatibility
npm run typeorm:create-migration --name=AddPostgresSupport
```
#### 1.3 Update CMS Database Configuration
```javascript
// cms/cms/config/database.js
module.exports = ({ env }) => ({
  connection: {
    client: env('DATABASE_CLIENT', 'sqlite'),
    connection: {
      host: env('DATABASE_HOST', 'localhost'),
      port: env.int('DATABASE_PORT', 5432),
      database: env('DATABASE_NAME', 'placebo_cms_db'),
      user: env('DATABASE_USERNAME', 'placebo_user'),
      password: env('DATABASE_PASSWORD', 'placebo_password'),
      ssl: env.bool('DATABASE_SSL', false),
    },
    debug: false,
  },
});
```
### Phase 2: Data Migration
#### 2.1 Export SQLite Data
```bash
# Export Backend SQLite data
sqlite3 backend/data.db .dump > backend-data.sql
# Export CMS SQLite data
sqlite3 cms/cms/.tmp/data.db .dump > cms-data.sql
```
#### 2.2 Transform SQL for PostgreSQL
Create transformation scripts:
```python
# scripts/transform-sqlite-to-postgres.py
import re
def transform_sqlite_to_postgres(sqlite_sql):
    # Order matters: rewrite the combined form before the bare keywords,
    # otherwise 'INTEGER PRIMARY KEY AUTOINCREMENT' would become
    # 'SERIAL PRIMARY KEY SERIAL'
    sqlite_sql = re.sub(r'INTEGER PRIMARY KEY AUTOINCREMENT', 'SERIAL PRIMARY KEY', sqlite_sql)
    sqlite_sql = re.sub(r'INTEGER PRIMARY KEY', 'SERIAL PRIMARY KEY', sqlite_sql)
    sqlite_sql = re.sub(r'\bBLOB\b', 'BYTEA', sqlite_sql)
    sqlite_sql = re.sub(r'\bDATETIME\b', 'TIMESTAMP', sqlite_sql)
    return sqlite_sql
```
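Applied to a typical SQLite table definition, the rewrite looks like this (the function is repeated inline, with the combined AUTOINCREMENT form handled first, so the snippet is self-contained):

```python
import re

def transform_sqlite_to_postgres(sqlite_sql):
    # Combined form first, so the two substitutions cannot stack
    sqlite_sql = re.sub(r'INTEGER PRIMARY KEY AUTOINCREMENT', 'SERIAL PRIMARY KEY', sqlite_sql)
    sqlite_sql = re.sub(r'INTEGER PRIMARY KEY', 'SERIAL PRIMARY KEY', sqlite_sql)
    sqlite_sql = re.sub(r'\bBLOB\b', 'BYTEA', sqlite_sql)
    sqlite_sql = re.sub(r'\bDATETIME\b', 'TIMESTAMP', sqlite_sql)
    return sqlite_sql

ddl = "CREATE TABLE articles (id INTEGER PRIMARY KEY AUTOINCREMENT, body BLOB, created_at DATETIME);"
print(transform_sqlite_to_postgres(ddl))
# CREATE TABLE articles (id SERIAL PRIMARY KEY, body BYTEA, created_at TIMESTAMP);
```

Regex-based rewriting like this handles the common cases only; dumps with triggers, partial indexes, or SQLite-specific pragmas still need manual review.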
#### 2.3 Import to PostgreSQL
```bash
# Create databases
psql -U placebo_user -h localhost -d placebo_db -c "CREATE DATABASE placebo_backend_db;"
psql -U placebo_user -h localhost -d placebo_db -c "CREATE DATABASE placebo_cms_db;"
# Import transformed data
psql -U placebo_user -h localhost -d placebo_backend_db -f transformed-backend-data.sql
psql -U placebo_user -h localhost -d placebo_cms_db -f transformed-cms-data.sql
```
### Phase 3: Application Updates
#### 3.1 Update Environment Variables
Create `.env.production` files:
```bash
# backend/.env.production
DATABASE_TYPE=postgres
DATABASE_HOST=postgres
DATABASE_PORT=5432
DATABASE_USERNAME=placebo_user
DATABASE_PASSWORD=${POSTGRES_PASSWORD}
DATABASE_NAME=placebo_backend_db
DATABASE_SYNCHRONIZE=false
# cms/cms/.env.production
DATABASE_CLIENT=postgres
DATABASE_HOST=postgres
DATABASE_PORT=5432
DATABASE_NAME=placebo_cms_db
DATABASE_USERNAME=placebo_user
DATABASE_PASSWORD=${POSTGRES_PASSWORD}
DATABASE_SSL=false
```
#### 3.2 Update Docker Configuration
Update `docker-compose.yml` to use PostgreSQL for both services.
### Phase 4: Testing
#### 4.1 Local Testing with Docker Compose
```bash
# Start services with PostgreSQL
docker-compose up -d postgres backend cms frontend
# Run data migration
./scripts/migrate-data.sh
# Test endpoints
curl http://localhost:3000/health
curl http://localhost:1337/_health
```
#### 4.2 Data Validation
```sql
-- Verify data counts match
SELECT 'Backend Articles', COUNT(*) FROM articles
UNION ALL
SELECT 'CMS Content Types', COUNT(*) FROM strapi_content_types;
-- Verify relationships
SELECT a.title, COUNT(c.id) as comment_count
FROM articles a
LEFT JOIN comments c ON a.id = c.article_id
GROUP BY a.id;
```
### Phase 5: Production Deployment
#### 5.1 Create Migration Script
```bash
#!/bin/bash
# scripts/deploy-migration.sh
set -e
echo "Starting database migration..."
# Backup existing PostgreSQL data
pg_dump -U placebo_user -h ${PRODUCTION_DB_HOST} -d placebo_backend_db > backup-$(date +%Y%m%d).sql
# Run migrations
npm run typeorm:migration:run
# Verify migration (TypeORM records applied migrations in the "migrations" table by default)
npm run typeorm:query "SELECT name FROM migrations ORDER BY timestamp DESC LIMIT 1;"
echo "Migration completed successfully!"
```
#### 5.2 Rollback Plan
```bash
#!/bin/bash
# scripts/rollback-migration.sh
set -e
echo "Starting rollback..."
# Restore from backup
psql -U placebo_user -h ${PRODUCTION_DB_HOST} -d placebo_backend_db -f backup-${BACKUP_DATE}.sql
# Revert environment variables
export DATABASE_TYPE=sqlite
export DATABASE_NAME=./data.db
echo "Rollback completed!"
```
## Migration Tools
### 1. SQLite to PostgreSQL Converter
```python
# scripts/sqlite_to_postgres.py
import sqlite3
import psycopg2

def migrate_table(sqlite_conn, pg_conn, table_name):
    # Read from SQLite
    cursor_sqlite = sqlite_conn.cursor()
    cursor_sqlite.execute(f"SELECT * FROM {table_name}")
    rows = cursor_sqlite.fetchall()

    # Get column names
    cursor_sqlite.execute(f"PRAGMA table_info({table_name})")
    columns = [col[1] for col in cursor_sqlite.fetchall()]

    # Insert into PostgreSQL
    cursor_pg = pg_conn.cursor()
    placeholders = ', '.join(['%s'] * len(columns))
    columns_str = ', '.join(columns)
    for row in rows:
        cursor_pg.execute(
            f"INSERT INTO {table_name} ({columns_str}) VALUES ({placeholders})",
            row
        )
    pg_conn.commit()
```
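Because `migrate_table` only touches the DB-API cursor interface, it can be smoke-tested without a PostgreSQL server by pointing it at a second SQLite database through a thin shim that rewrites psycopg2-style `%s` placeholders to SQLite's `?`. The shim classes (`PgStyleConn`, `PgStyleCursor`) are a test-harness trick only, not part of the migration; the function is repeated inline so the snippet is self-contained:

```python
import sqlite3

class PgStyleCursor:
    """Translate psycopg2-style %s placeholders to sqlite's ? on execute."""
    def __init__(self, cur):
        self._cur = cur
    def execute(self, sql, params=()):
        return self._cur.execute(sql.replace("%s", "?"), params)

class PgStyleConn:
    def __init__(self, conn):
        self._conn = conn
    def cursor(self):
        return PgStyleCursor(self._conn.cursor())
    def commit(self):
        self._conn.commit()

def migrate_table(sqlite_conn, pg_conn, table_name):
    # Same logic as above: read every row from SQLite, replay into the target
    cursor_sqlite = sqlite_conn.cursor()
    cursor_sqlite.execute(f"SELECT * FROM {table_name}")
    rows = cursor_sqlite.fetchall()
    cursor_sqlite.execute(f"PRAGMA table_info({table_name})")
    columns = [col[1] for col in cursor_sqlite.fetchall()]
    cursor_pg = pg_conn.cursor()
    placeholders = ', '.join(['%s'] * len(columns))
    columns_str = ', '.join(columns)
    for row in rows:
        cursor_pg.execute(
            f"INSERT INTO {table_name} ({columns_str}) VALUES ({placeholders})", row)
    pg_conn.commit()

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT)")
src.executemany("INSERT INTO articles VALUES (?, ?)", [(1, "a"), (2, "b")])
dst_raw = sqlite3.connect(":memory:")
dst_raw.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT)")
migrate_table(src, PgStyleConn(dst_raw), "articles")
print(dst_raw.execute("SELECT COUNT(*) FROM articles").fetchone()[0])  # 2
```

For large tables, replacing the per-row `execute` loop with `executemany` (or psycopg2's `execute_values`) is the usual next step.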
### 2. Data Validation Script
```python
# scripts/validate_migration.py
def validate_counts(sqlite_conn, pg_conn, table_name):
    cursor_sqlite = sqlite_conn.cursor()
    cursor_sqlite.execute(f"SELECT COUNT(*) FROM {table_name}")
    sqlite_count = cursor_sqlite.fetchone()[0]

    cursor_pg = pg_conn.cursor()
    cursor_pg.execute(f"SELECT COUNT(*) FROM {table_name}")
    pg_count = cursor_pg.fetchone()[0]

    return sqlite_count == pg_count
```
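Since `validate_counts` also sticks to the cursor interface, it can be exercised end to end with two in-memory SQLite databases standing in for source and target (illustrative harness only; in the real run `pg_conn` is a psycopg2 connection):

```python
import sqlite3

def validate_counts(sqlite_conn, pg_conn, table_name):
    # The check passes only when row counts match exactly
    cursor_src = sqlite_conn.cursor()
    cursor_src.execute(f"SELECT COUNT(*) FROM {table_name}")
    src_count = cursor_src.fetchone()[0]
    cursor_dst = pg_conn.cursor()
    cursor_dst.execute(f"SELECT COUNT(*) FROM {table_name}")
    dst_count = cursor_dst.fetchone()[0]
    return src_count == dst_count

src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
for conn in (src, dst):
    conn.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY)")
    conn.executemany("INSERT INTO articles VALUES (?)", [(1,), (2,)])
print(validate_counts(src, dst, "articles"))  # True
dst.execute("DELETE FROM articles WHERE id = 2")
print(validate_counts(src, dst, "articles"))  # False
```

Row counts are a necessary but not sufficient check; spot-checking a sample of rows and key relationships (as in the SQL in Phase 4.2) catches silent type or encoding corruption that counts miss.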
## Timeline
### Week 1: Preparation
- Update database configurations
- Create migration scripts
- Set up PostgreSQL locally
### Week 2: Testing
- Test migration locally
- Validate data integrity
- Performance testing
### Week 3: Staging Deployment
- Deploy to staging environment
- User acceptance testing
- Fix any issues
### Week 4: Production Deployment
- Schedule maintenance window
- Execute migration
- Monitor performance
- Rollback if needed
## Risk Mitigation
### High Risks:
1. **Data loss**: Maintain multiple backups
2. **Downtime**: Schedule during low-traffic hours
3. **Performance issues**: Monitor closely after migration
### Mitigation Strategies:
1. Complete backup before migration
2. Gradual rollout with canary deployment
3. Performance monitoring for 48 hours post-migration
## Success Criteria
1. All data migrated without loss
2. Application functionality unchanged
3. Performance equal or better than SQLite
4. Zero downtime during migration
5. All tests passing post-migration
## Post-Migration Tasks
1. Update documentation
2. Remove SQLite dependencies
3. Set up PostgreSQL monitoring
4. Schedule regular backups
5. Update deployment scripts

scripts/test-docker.sh Executable file

@@ -0,0 +1,147 @@
#!/bin/bash
# Test script for Placebo.mk Docker setup
set -e
echo "🧪 Testing Placebo.mk Docker setup..."
# Check if docker-compose is available
if ! command -v docker-compose &> /dev/null; then
echo "⚠️ docker-compose not found, using docker compose"
DOCKER_COMPOSE="docker compose"
else
DOCKER_COMPOSE="docker-compose"
fi
# Create environment files if they don't exist
echo "📝 Creating environment files..."
# Backend .env
if [ ! -f backend/.env ]; then
cat > backend/.env << EOF
# Backend Environment Variables
NODE_ENV=production
DATABASE_TYPE=postgres
DATABASE_HOST=postgres
DATABASE_PORT=5432
DATABASE_USERNAME=placebo_user
DATABASE_PASSWORD=placebo_password
DATABASE_NAME=placebo_db
DATABASE_SYNCHRONIZE=false
DATABASE_LOGGING=false
JWT_SECRET=your-jwt-secret-key-change-in-production
JWT_EXPIRATION=3600
CORS_ORIGIN=http://localhost:5173,http://localhost:3001
PORT=3000
EOF
echo "✅ Created backend/.env"
fi
# Frontend .env
if [ ! -f frontend/.env ]; then
cat > frontend/.env << EOF
# Frontend Environment Variables
NODE_ENV=production
VITE_API_URL=http://localhost:3000/api/v1
VITE_CMS_URL=http://localhost:1337
EOF
echo "✅ Created frontend/.env"
fi
# CMS .env
if [ ! -f cms/cms/.env ]; then
cat > cms/cms/.env << EOF
# CMS Environment Variables
NODE_ENV=production
DATABASE_CLIENT=postgres
DATABASE_HOST=postgres
DATABASE_PORT=5432
DATABASE_NAME=placebo_db
DATABASE_USERNAME=placebo_user
DATABASE_PASSWORD=placebo_password
DATABASE_SSL=false
JWT_SECRET=your-jwt-secret-key-change-in-production
ADMIN_JWT_SECRET=your-admin-jwt-secret-key-change-in-production
APP_KEYS=your-app-keys-change-in-production
API_TOKEN_SALT=your-api-token-salt-change-in-production
TRANSFER_TOKEN_SALT=your-transfer-token-salt-change-in-production
CORS_ORIGIN=http://localhost:5173,http://localhost:3001
PORT=1337
EOF
echo "✅ Created cms/cms/.env"
fi
# Create database initialization script
mkdir -p scripts
cat > scripts/init-db.sql << EOF
-- Create separate databases for backend and CMS
CREATE DATABASE placebo_backend_db;
CREATE DATABASE placebo_cms_db;
-- Grant privileges
GRANT ALL PRIVILEGES ON DATABASE placebo_backend_db TO placebo_user;
GRANT ALL PRIVILEGES ON DATABASE placebo_cms_db TO placebo_user;
EOF
echo "✅ Created database initialization script"
echo "🐳 Building Docker images..."
$DOCKER_COMPOSE build
echo "🚀 Starting services..."
$DOCKER_COMPOSE up -d
echo "⏳ Waiting for services to be healthy..."
sleep 30
echo "📊 Checking service status..."
$DOCKER_COMPOSE ps
echo "🔍 Testing service connectivity..."
# Test PostgreSQL
echo "Testing PostgreSQL..."
if docker exec placebo-postgres pg_isready -U placebo_user -d placebo_db; then
echo "✅ PostgreSQL is healthy"
else
echo "❌ PostgreSQL health check failed"
exit 1
fi
# Test Backend
echo "Testing Backend API..."
if curl -f http://localhost:3000/health > /dev/null 2>&1; then
echo "✅ Backend API is healthy"
else
echo "❌ Backend API health check failed"
exit 1
fi
# Test CMS
echo "Testing CMS..."
if curl -f http://localhost:1337/_health > /dev/null 2>&1; then
echo "✅ CMS is healthy"
else
echo "❌ CMS health check failed"
exit 1
fi
# Test Frontend
echo "Testing Frontend..."
if curl -f http://localhost:3001 > /dev/null 2>&1; then
echo "✅ Frontend is serving"
else
echo "❌ Frontend health check failed"
exit 1
fi
echo ""
echo "🎉 All tests passed! Docker setup is working correctly."
echo ""
echo "Services are running at:"
echo " Frontend: http://localhost:3001"
echo " Backend API: http://localhost:3000"
echo " CMS Admin: http://localhost:1337/admin"
echo ""
echo "To stop services: docker-compose down"
echo "To view logs: docker-compose logs -f"
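One fragile spot in the script above is the fixed `sleep 30`: fast machines waste time and slow machines race the health checks. A bounded retry loop is more robust; a minimal sketch of the pattern (`wait_until_healthy` is a hypothetical helper; `probe` is any callable that returns True once the service answers, e.g. a `curl -f` against the health endpoint):

```python
import time

def wait_until_healthy(probe, attempts=30, delay=1.0):
    """Poll probe() up to `attempts` times, `delay` seconds apart."""
    for _ in range(attempts):
        if probe():
            return True
        time.sleep(delay)
    return False
```

The same shape works directly in bash with an `until`/`sleep` loop around `curl -f`, replacing the fixed sleep before the connectivity tests.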