Automated PostgreSQL backups using pg_dump, scheduled via cron in a tiny Go service. Ships as a Docker image for easy, reliable daily backups to a host-mounted folder.
- Runs a daily backup at 03:00 (container timezone) using `pg_dump`.
- Writes `.sql` dump files into `./backups` on your host.
- Appends to a `backup.log` with success/failure entries.
- Works with any reachable Postgres (local, in Docker, or from a cloud provider).
- Optional: uploads backups to UploadThing and (optionally) deletes the local file after a successful upload.
Backup filename format: `{APP_NAME}_backup_{DB_NAME}_{YYYYMMDD_HHMMSS}.sql`

Example: `KOVA_backup_railway_20250829_110913.sql`
- Docker Desktop (with Docker Compose v2: `docker compose`)
- A reachable PostgreSQL instance

Optional (for local, non-Docker runs):

- Go 1.21+ and `pg_dump` in PATH
- Create a `.env` file next to `docker-compose.yml`:

  ```env
  APP_NAME=backup
  DB_HOST=your-postgres-host
  DB_PORT=5432
  DB_NAME=your_database
  DB_USER=your_user
  DB_PASSWORD=your_password
  # Inside the container we write to /app/backups; mapped to ./backups on the host
  BACKUP_DIR=/app/backups
  # Optional: set container timezone for the cron schedule
  TZ=UTC
  # Optional: UploadThing (cloud upload)
  UPLOADTHING_ENABLED=false
  UPLOADTHING_API_KEY=your_uploadthing_api_key
  # If true, delete the local .sql file after a successful upload
  UPLOADTHING_DELETE_LOCAL=false
  ```
- Build and start:

  ```sh
  docker compose build
  docker compose up -d
  ```

- Verify the service and logs:

  ```sh
  # Follow container logs
  docker compose logs -f postgres-backup-scheduler
  ```

- Check created files on your host:
  - Backups: `./backups/*.sql`
  - Log file: `./backups/backup.log`
The first backup runs immediately on start, then daily at 03:00.
The app reads environment variables (no implicit .env loading when running the binary directly):

- `DB_HOST` (required)
- `DB_PORT` (default: `5432`)
- `DB_NAME` (required)
- `DB_USER` (required)
- `DB_PASSWORD` (required)
- `BACKUP_DIR` (default: `./backups`; in Docker use `/app/backups`)
- `APP_NAME` (used in the filename; e.g., `backup`)
- `TZ` (optional; container timezone for the cron schedule)
- UploadThing (optional cloud upload):
  - `UPLOADTHING_ENABLED` (`true`/`false`; default disabled)
  - `UPLOADTHING_API_KEY` (required if enabled)
  - `UPLOADTHING_DELETE_LOCAL` (`true`/`false`; delete the local file after a successful upload)
Compose mounts:

- `./backups` -> `/app/backups` (read/write)
- `./logs` -> `/app/logs` (optional; the current app writes `backup.log` to `/app/backups`)
- Cron expression in code: `0 0 3 * * *` (with seconds enabled) = every day at 03:00.
- Timezone comes from the container (`TZ`). Set `TZ` in `.env` if you want local time (e.g., `TZ=Europe/London`).
To change the schedule, update main.go and rebuild:

```go
_, err = c.AddFunc("0 0 3 * * *", func() {
    runBackupJob(config)
})
```

Use psql to restore a .sql dump (adjust host/port/db/user as needed):

```sh
psql -h <host> -p 5432 -U <user> -d <db> -f backups\<file.sql>
```

Make sure pg_dump is available in your PATH (from PostgreSQL client tools).
- Set env vars in the current shell:

  ```bat
  set APP_NAME=backup
  set DB_HOST=localhost
  set DB_PORT=5432
  set DB_NAME=your_database
  set DB_USER=your_user
  set DB_PASSWORD=your_password
  set BACKUP_DIR=backups
  ```

- Build and run:

  ```bat
  go build -o backup-scheduler.exe
  backup-scheduler.exe
  ```

- Backups and `backup.log` will appear under `%CD%\backups` (or whatever `BACKUP_DIR` you set).
- The first backup runs immediately, then daily at 03:00.
- `main.go` reads config from environment variables.
- `performBackup` calls `pg_dump` with `PGPASSWORD` injected via the environment.
- Dumps are written to `BACKUP_DIR`; `backup.log` records successes/errors with timestamps and file sizes.
- Cron from `github.com/robfig/cron/v3` schedules the job with second precision.
- The Dockerfile builds a static Go binary, then runs it in a `postgres:17-alpine` image (which includes `pg_dump`).
- If UploadThing is enabled, `performBackup` loads config (`LoadUploadConfig`) and calls `UploadWithRetry` from `upload.go`:
  - Up to 3 retry attempts with a 30s delay are performed on failures.
  - On success, an upload entry is added to `backup.log`, and the local file is optionally deleted based on `UPLOADTHING_DELETE_LOCAL`.
When `UPLOADTHING_ENABLED=true`, the app uploads each completed backup to UploadThing:

- Required: `UPLOADTHING_API_KEY` (from your UploadThing dashboard).
- Upload timeout: 10 minutes (suitable for large dumps; adjust in code if needed).
- Logging: success/failure is appended to `backup.log` in the same directory as local backups.
- Local cleanup: enable `UPLOADTHING_DELETE_LOCAL=true` to remove the local `.sql` file after a successful upload.
- Retries: 3 attempts with 30 seconds between attempts.
- API: uses UploadThing's v6 presigned URL flow for reliable uploads.
- No files in `./backups`:
  - Ensure the service is running (`docker compose ps`) and check logs.
  - Confirm `DB_HOST`, credentials, and network reachability.
  - On Windows, verify the `./backups` folder exists and is shared with Docker Desktop.
- Permission denied writing backups:
  - The container runs as a non-root user. The bind-mounted `./backups` must allow writes.
- Timezone confusion:
  - Set `TZ` to your desired zone so 03:00 matches your expectation.
- `pg_dump` not found (local run only):
  - Install PostgreSQL client tools and add them to PATH.
- UploadThing issues:
  - "Not found" error: verify your `UPLOADTHING_API_KEY` is correct and active in your UploadThing dashboard.
  - Missing credentials: ensure `UPLOADTHING_ENABLED=true` only when `UPLOADTHING_API_KEY` is set.
  - Network issues: check outbound HTTPS access from the container/host to `api.uploadthing.com`.
  - Large files: the client uses a 10-minute timeout per request; very large dumps may need adjustment.
  - Debugging: review `backup.log` for detailed upload error messages and retry status.
  - API limits: check your UploadThing plan limits for file size and monthly usage.
- There is no built-in retention policy; old backups accumulate. Rotate or prune files as needed (e.g., a host cron job or a simple script).
- The `./logs` mount is optional; the app currently logs to `backup.log` inside `BACKUP_DIR`.

If you need an env-configurable schedule, retention, compression, or S3 uploads, those can be added; open to enhancements.