Merge branch 'main' of ssh://git.cloonar.com/openclawd/docfast
Some checks failed
Deploy to Production / Deploy to Server (push) Failing after 21s
This commit is contained in:
commit
ed273430c7
3 changed files with 252 additions and 13 deletions
184
BACKUP_PROCEDURES.md
Normal file
@@ -0,0 +1,184 @@
# DocFast Backup & Disaster Recovery Procedures

## Overview

DocFast now uses BorgBackup for full disaster-recovery backups. The system backs up all critical components needed to restore the service on a new server.

## What is Backed Up

- **PostgreSQL database** - Full database dump with schema and data
- **Docker volumes** - Application data and files
- **Nginx configuration** - Web server configuration
- **SSL certificates** - Let's Encrypt certificates and keys
- **Crontabs** - Scheduled tasks
- **OpenDKIM keys** - Email authentication keys
- **DocFast application files** - docker-compose.yml, .env, scripts
- **System information** - Installed packages, enabled services, disk usage

## Backup Location & Schedule

### Current Setup (Local)

- **Location**: `/opt/borg-backups/docfast`
- **Schedule**: Daily at 03:00 UTC
- **Retention**: 7 daily + 4 weekly + 3 monthly backups
- **Compression**: LZ4 (fast compression/decompression)
- **Encryption**: repokey mode (encrypted with passphrase)

### Security

- **Passphrase**: `docfast-backup-YYYY` (where YYYY is the current year)
- **Key backup**: Stored in `/opt/borg-backups/docfast-key-backup.txt`
- **⚠️ IMPORTANT**: Both the passphrase AND the key are required for restore!

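The year-suffixed passphrase does not need to be hard-coded anywhere; it can be derived from the system clock, matching the `docfast-backup-YYYY` scheme above. A minimal sketch (`BORG_PASSPHRASE` is Borg's standard environment variable):

```shell
# Borg reads the passphrase from this environment variable;
# the year suffix comes from the system clock, per the docfast-backup-YYYY scheme
BORG_PASSPHRASE="docfast-backup-$(date +%Y)"
export BORG_PASSPHRASE
echo "$BORG_PASSPHRASE"
```

The manual-command examples later in this document derive the passphrase the same way.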
## Scripts

### Backup Script: `/opt/docfast-borg-backup.sh`

- Automated backup creation
- Runs via cron daily at 03:00 UTC
- Logs to `/var/log/docfast-backup.log`
- Auto-prunes old backups

### Restore Script: `/opt/docfast-borg-restore.sh`

- List available backups: `./docfast-borg-restore.sh list`
- Restore a specific backup: `./docfast-borg-restore.sh restore docfast-YYYY-MM-DD_HHMM`
- Restore the latest backup: `./docfast-borg-restore.sh restore latest`

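The cron wiring itself is not shown in this document; a plausible root crontab entry matching the 03:00 UTC schedule and log path above (an assumed sketch, so verify against the real crontab) would be:

```shell
# Assumed crontab entry for the daily 03:00 UTC run (verify with: crontab -l as root)
CRON_LINE='0 3 * * * /opt/docfast-borg-backup.sh >> /var/log/docfast-backup.log 2>&1'
echo "$CRON_LINE"
```

Note this assumes the server clock runs in UTC; if not, the minute/hour fields need shifting.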
## Manual Backup Commands

```bash
# Run backup manually
/opt/docfast-borg-backup.sh

# List all backups
export BORG_PASSPHRASE="docfast-backup-$(date +%Y)"
borg list /opt/borg-backups/docfast

# Show repository info
borg info /opt/borg-backups/docfast

# Show specific backup contents
borg list /opt/borg-backups/docfast::docfast-2026-02-15_1103
```

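The archive names in these commands follow a `docfast-YYYY-MM-DD_HHMM` pattern; a fresh name can be generated the same way the backup script presumably does (the `-u` flag is an assumption, matching the UTC schedule):

```shell
# Build an archive name like docfast-2026-02-15_1103 from the current UTC time
ARCHIVE_NAME="docfast-$(date -u +%Y-%m-%d_%H%M)"
echo "$ARCHIVE_NAME"
```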
## Disaster Recovery Procedure

### Complete Server Rebuild

If the entire server is lost, follow these steps on a new server:

1. **Install dependencies**:
   ```bash
   apt update && apt install -y docker.io docker-compose postgresql-16 nginx borgbackup
   systemctl enable postgresql docker
   ```

2. **Copy backup data**:
   - Transfer the `/opt/borg-backups/` directory to the new server
   - Transfer `/opt/borg-backups/docfast-key-backup.txt`

3. **Import the Borg key**:
   ```bash
   export BORG_PASSPHRASE="docfast-backup-2026"
   borg key import /opt/borg-backups/docfast /opt/borg-backups/docfast-key-backup.txt
   ```

4. **Restore the latest backup**:
   ```bash
   /opt/docfast-borg-restore.sh restore latest
   ```

5. **Follow the manual restore steps** (printed by the restore script):
   - Stop services
   - Restore the database
   - Restore configuration files
   - Set permissions
   - Start services

### Database-Only Recovery

If only the database needs restoration:

```bash
# Stop DocFast
cd /opt/docfast && docker-compose down

# Restore the database
export BORG_PASSPHRASE="docfast-backup-$(date +%Y)"
cd /tmp
borg extract /opt/borg-backups/docfast::docfast-YYYY-MM-DD_HHMM
sudo -u postgres dropdb docfast
sudo -u postgres createdb -O docfast docfast
export PGPASSFILE="/root/.pgpass"
pg_restore -d docfast /tmp/tmp/docfast-backup-*/docfast-db.dump

# Restart DocFast
cd /opt/docfast && docker-compose up -d
```

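The doubled `/tmp/tmp/...` path above is not a typo: `borg extract` strips the leading slash from archived paths and extracts relative to the current directory, so an archived `/tmp/docfast-backup-*` lands under `/tmp/tmp/` when extracting from `/tmp`. A pure-shell illustration of the same path arithmetic, with a hypothetical dated directory name (no borg needed):

```shell
cd /tmp
# As stored inside the archive (leading "/" stripped by borg)
ARCHIVED="tmp/docfast-backup-2026-02-15/docfast-db.dump"
# Where "borg extract" would place it, relative to the current directory
EXTRACTED="$PWD/$ARCHIVED"
echo "$EXTRACTED"
```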
## Migration to Off-Site Storage

### Option 1: Hetzner Storage Box (Recommended)

Manual setup is required (no Hetzner Storage Box API is available):

1. **Purchase a Hetzner Storage Box**
   - Minimum 10 GB size
   - Enable SSH access in the Hetzner Console

2. **Configure SSH access**:
   ```bash
   # Generate an SSH key for the storage box
   ssh-keygen -t ed25519 -f /root/.ssh/hetzner-storage-box

   # Add the public key to the storage box in the Hetzner Console
   cat /root/.ssh/hetzner-storage-box.pub
   ```

3. **Update the backup script**:
   Change `BORG_REPO` in `/opt/docfast-borg-backup.sh`:
   ```bash
   BORG_REPO="ssh://uXXXXXX@uXXXXXX.your-storagebox.de:23/./docfast-backups"
   ```

4. **Initialize the remote repository**:
   ```bash
   export BORG_PASSPHRASE="docfast-backup-$(date +%Y)"
   borg init --encryption=repokey ssh://uXXXXXX@uXXXXXX.your-storagebox.de:23/./docfast-backups
   ```

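When editing `BORG_REPO` it is easy to mangle the storage-box URL; a quick sanity check of the expected shape (the `uXXXXXX` placeholders stay as placeholders until a real box exists):

```shell
BORG_REPO="ssh://uXXXXXX@uXXXXXX.your-storagebox.de:23/./docfast-backups"
# Hetzner storage boxes use SSH on port 23 and a path relative to the login dir ("/./")
case "$BORG_REPO" in
  ssh://*@*.your-storagebox.de:23/./*) echo "BORG_REPO format OK" ;;
  *) echo "BORG_REPO format unexpected" ;;
esac
```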
### Option 2: AWS S3/Glacier

Use rclone + borg for S3 storage (requires investor approval for AWS costs).

## Monitoring & Maintenance

### Check Backup Status

```bash
# View recent backup logs
tail -f /var/log/docfast-backup.log

# Check repository size and stats
export BORG_PASSPHRASE="docfast-backup-$(date +%Y)"
borg info /opt/borg-backups/docfast
```

### Manual Cleanup

```bash
# Prune old backups manually (passphrase required for repokey repositories)
export BORG_PASSPHRASE="docfast-backup-$(date +%Y)"
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 3 /opt/borg-backups/docfast

# Compact the repository to reclaim freed space
borg compact /opt/borg-backups/docfast
```

### Repository Health Check

```bash
# Check repository consistency and verify data integrity
export BORG_PASSPHRASE="docfast-backup-$(date +%Y)"
borg check --verify-data /opt/borg-backups/docfast
```

## Important Notes

1. **Test restores regularly** - Run a restore test monthly
2. **Monitor backup logs** - Check for failures in `/var/log/docfast-backup.log`
3. **Keep the key safe** - Store `/opt/borg-backups/docfast-key-backup.txt` securely off-site
4. **Update the passphrase annually** - Change to the new year format when the year changes
5. **Local storage limit** - The current server has ~19 GB available; monitor usage

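Note 4 has an operational catch: the repository's real passphrase must be rotated in Borg itself, not just in the scripts. Borg supports this via `borg key change-passphrase`, which reads `BORG_NEW_PASSPHRASE`; the sketch below only derives and prints the command rather than running it against the live repo:

```shell
# Last year's and this year's passphrases under the docfast-backup-YYYY scheme
OLD_PASS="docfast-backup-$(( $(date +%Y) - 1 ))"
NEW_PASS="docfast-backup-$(date +%Y)"
# Run this manually at year rollover (borg reads BORG_NEW_PASSPHRASE for the new value)
echo "BORG_PASSPHRASE=$OLD_PASS BORG_NEW_PASSPHRASE=$NEW_PASS borg key change-passphrase /opt/borg-backups/docfast"
```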
## Migration Timeline

- **Immediate**: Local BorgBackup operational (✅ Complete)
- **Phase 2**: Off-site storage setup (requires Storage Box purchase or AWS approval)
- **Phase 3**: Automated off-site testing and monitoring

src/routes/health.ts

```diff
@@ -1,21 +1,55 @@
 import { Router } from "express";
 import { getPoolStats } from "../services/browser.js";
+import { pool } from "../services/db.js";

 export const healthRouter = Router();

-healthRouter.get("/", (_req, res) => {
-  const pool = getPoolStats();
-  res.json({
-    status: "ok",
+healthRouter.get("/", async (_req, res) => {
+  const poolStats = getPoolStats();
+  let databaseStatus: any;
+  let overallStatus = "ok";
+  let httpStatus = 200;
+
+  // Check database connectivity
+  try {
+    const client = await pool.connect();
+    try {
+      const result = await client.query('SELECT version()');
+      const version = result.rows[0]?.version || 'Unknown';
+      // Extract just the PostgreSQL version number (e.g., "PostgreSQL 15.4")
+      const versionMatch = version.match(/PostgreSQL ([\d.]+)/);
+      const shortVersion = versionMatch ? `PostgreSQL ${versionMatch[1]}` : 'PostgreSQL';
+
+      databaseStatus = {
+        status: "ok",
+        version: shortVersion
+      };
+    } finally {
+      client.release();
+    }
+  } catch (error: any) {
+    databaseStatus = {
+      status: "error",
+      message: error.message || "Database connection failed"
+    };
+    overallStatus = "degraded";
+    httpStatus = 503;
+  }
+
+  const response = {
+    status: overallStatus,
     version: "0.2.1",
+    database: databaseStatus,
     pool: {
-      size: pool.poolSize,
-      active: pool.totalPages - pool.availablePages,
-      available: pool.availablePages,
-      queueDepth: pool.queueDepth,
-      pdfCount: pool.pdfCount,
-      restarting: pool.restarting,
-      uptimeSeconds: Math.round(pool.uptimeMs / 1000),
+      size: poolStats.poolSize,
+      active: poolStats.totalPages - poolStats.availablePages,
+      available: poolStats.availablePages,
+      queueDepth: poolStats.queueDepth,
+      pdfCount: poolStats.pdfCount,
+      restarting: poolStats.restarting,
+      uptimeSeconds: Math.round(poolStats.uptimeMs / 1000),
     },
-  });
-});
+  };
+
+  res.status(httpStatus).json(response);
+});
```
21
src/routes/health.ts.backup
Normal file

@@ -0,0 +1,21 @@
```typescript
import { Router } from "express";
import { getPoolStats } from "../services/browser.js";

export const healthRouter = Router();

healthRouter.get("/", (_req, res) => {
  const pool = getPoolStats();
  res.json({
    status: "ok",
    version: "0.2.1",
    pool: {
      size: pool.poolSize,
      active: pool.totalPages - pool.availablePages,
      available: pool.availablePages,
      queueDepth: pool.queueDepth,
      pdfCount: pool.pdfCount,
      restarting: pool.restarting,
      uptimeSeconds: Math.round(pool.uptimeMs / 1000),
    },
  });
});
```