45 Commits

CI = Build and Push Docker Image / build (push) unless noted otherwise.

| SHA | Message | CI | Date |
|---|---|---|---|
| 482cc46476 | work on README.md (learning to use shields.io) | failing (10s) | 2025-10-26 22:42:45 -07:00 |
| 5e33afb807 | q | failing (10s) | 2025-10-23 17:41:46 -07:00 |
| c1a7ffd9e8 | try 4 | successful (45s) | 2025-10-23 17:24:45 -07:00 |
| b528097b00 | step4 | failing (8s) | 2025-10-23 17:16:43 -07:00 |
| 035811115e | try 3 | failing (8s) | 2025-10-23 17:08:05 -07:00 |
| f1ddee1b6e | step2 | failing (8s) | 2025-10-23 16:02:40 -07:00 |
| 1b0631d4e8 | test 2 | failing (8s) | 2025-10-23 15:43:10 -07:00 |
| 6192b18a49 | learning something new | failing (8s) | 2025-10-23 15:27:46 -07:00 |
| fd222fc92a | one two buckle my show | Docker Image CI / build-and-push-image: failing (8s) | 2025-10-23 15:16:54 -07:00 |
| bbc5245793 | testing new workflow | Docker Image CI / build-and-push-image: cancelled | 2025-10-23 14:57:42 -07:00 |
| cacd0086c1 | badges are fun | successful (43s) | 2025-10-23 13:44:30 -07:00 |
| d13c54c8df | learning badges | failing (6s) | 2025-10-23 13:32:20 -07:00 |
| 874adb4e2e | worked on README | failing (6s) | 2025-10-23 13:24:05 -07:00 |
| f5d9f0e458 | fixed the repo | failing (8s) | 2025-10-23 13:16:08 -07:00 |
| 3a838a92f9 | and then it hit me | failing (7s) | 2025-10-23 12:59:50 -07:00 |
| 1d8c32eac3 | im at a loss | — | 2025-10-23 12:58:50 -07:00 |
| dd9d6feb57 | first commit of bot config | — | 2025-10-23 12:57:22 -07:00 |
| 6f72c80511 | added a README | failing (7s) | 2025-10-23 12:51:51 -07:00 |
| 81845f4b72 | and then it hit me | failing (6s) | 2025-10-23 12:35:33 -07:00 |
| e0f8fa47b9 | im at a loss | — | 2025-10-23 12:35:31 -07:00 |
| c40db791f4 | lets see if it reads the vars | — | 2025-10-23 12:35:31 -07:00 |
| a6a459dcd9 | layering issues | — | 2025-10-23 12:35:08 -07:00 |
| 416d2ab3e5 | wrong directory being made | — | 2025-10-23 12:30:51 -07:00 |
| 8bc1ae86e7 | first commit of bot config | — | 2025-10-23 12:28:07 -07:00 |
| 0b7f731764 | Merge branch 'develop' | failing (7s) | 2025-10-23 12:14:51 -07:00 |
| 733f5e2504 | added the official docker cli (switched to user 1000 for security; added user to docker group; properly mounted btrfs drive on host allows users to create snapshots) | successful (39s) | 2025-10-22 16:44:48 -07:00 |
| 2e2211b26e | added timezone dropdown selector | — | 2025-10-22 10:42:18 -07:00 |
| ece058e9da | debugging | — | 2025-10-22 10:17:06 -07:00 |
| 6e157cf8ee | added gitignore and added .env as an ignored type | — | 2025-10-22 10:11:21 -07:00 |
| bf4e850655 | redid some code | successful (49s) | 2025-10-22 09:58:24 -07:00 |
| 981da971cd | one more time | successful (48s) | 2025-10-22 01:33:50 -07:00 |
| 8fc3c0ae37 | so close | successful (47s) | 2025-10-22 00:52:43 -07:00 |
| 2289f7f400 | and then it hit me | successful (45s) | 2025-10-22 00:31:49 -07:00 |
| 5feac4aa32 | revert to dh image | successful (45s) | 2025-10-21 13:33:24 -07:00 |
| a81f66fa6b | switched source to proxy | successful (52s) | 2025-10-21 12:17:09 -07:00 |
| 8320490b21 | im at a loss | successful (49s) | 2025-10-21 11:58:04 -07:00 |
| a2d8f10c2a | lets see if it reads the vars | successful (48s) | 2025-10-21 01:45:12 -07:00 |
| d1677f92c6 | typos | successful (50s) | 2025-10-21 01:05:14 -07:00 |
| 300ad5b49c | docker layers are my enemy | successful (48s) | 2025-10-21 01:02:47 -07:00 |
| 7fbeb3aa26 | build fix | successful (53s) | 2025-10-21 00:46:22 -07:00 |
| 31921e5a40 | env file mismatch | successful (47s) | 2025-10-20 16:30:22 -07:00 |
| 14c0a4f522 | layering issues | successful (1m14s) | 2025-10-20 16:21:56 -07:00 |
| 12773ef7e7 | wrong directory being made | successful (1m9s) | 2025-10-20 16:07:27 -07:00 |
| c252250db0 | add develop to build workflow | failing (49s) | 2025-10-20 16:01:41 -07:00 |
| 39d5ad4b52 | first commit of bot config | — | 2025-10-20 15:47:20 -07:00 |
10 changed files with 706 additions and 59 deletions


@@ -4,11 +4,12 @@ on:
push:
branches:
- main
pull_request:
tags:
- "v*"
jobs:
build:
runs-on: prodesk
steps:
- name: Checkout repository
uses: actions/checkout@v4
@@ -23,13 +24,49 @@ jobs:
username: ${{ secrets.REGISTRY_USER }}
password: ${{ secrets.REGISTRY_TOKEN }}
- name: Log in to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Gitea metadata
id: meta-gitea
uses: docker/metadata-action@v5
with:
images: gitea.calahilstudios.com/calahilstudios/${{ github.event.repository.name }}
tags: |
type=ref,event=branch
type=semver,pattern={{version}}
type=sha
type=raw,value=latest,enable={{is_default_branch}}
- name: Docker Hub metadata
id: meta-dockerhub
uses: docker/metadata-action@v5
with:
images: ${{ secrets.DOCKERHUB_USERNAME }}/${{ github.event.repository.name }}
tags: |
type=ref,event=branch
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=sha
type=raw,value=latest,enable={{is_default_branch}}
- name: Build and push to Gitea
uses: docker/build-push-action@v6
with:
context: .
file: ./Dockerfile
push: true
tags: ${{ steps.meta-gitea.outputs.tags }}
labels: ${{ steps.meta-gitea.outputs.labels }}
- name: Build and push to Docker Hub
uses: docker/build-push-action@v6
with:
context: .
file: ./Dockerfile
push: true
tags: ${{ steps.meta-dockerhub.outputs.tags }}
labels: ${{ steps.meta-dockerhub.outputs.labels }}

.gitignore (vendored, +1)

@@ -0,0 +1 @@
.env

Dockerfile

@@ -1,37 +1,49 @@
FROM linuxserver/duplicati:2.1.0
ENV DEBIAN_FRONTEND=noninteractive
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
# Install Docker CLI, bash, python3, btrfs support and all the app directories
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
ca-certificates \
curl \
gnupg \
lsb-release \
cron \
bash \
python3 \
python3-pip \
btrfs-progs \
&& mkdir -p /etc/apt/keyrings \
&& curl -fsSL "https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg" \
| gpg --dearmor -o /etc/apt/keyrings/docker.gpg \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
$(lsb_release -cs) stable" \
| tee /etc/apt/sources.list.d/docker.list > /dev/null \
&& apt-get update \
&& apt-get install -y --no-install-recommends \
docker-ce-cli \
postgresql-client \
&& groupadd -f docker \
&& usermod -aG docker abc \
&& rm -rf /var/lib/apt/lists/* \
&& mkdir -p /usr/local/bin /config /etc/services.d/backupbot
# Copy backup script
COPY backup.sh /usr/local/bin/backup.sh
RUN chmod +x /usr/local/bin/backup.sh
# Copy the environment variables for backupbot
COPY backupbot.conf /defaults/backupbot.conf
RUN chown www-data:www-data /defaults/backupbot.conf \
&& chmod 644 /defaults/backupbot.conf
# Copy s6 service for backupbot
COPY services/backupbot/run /etc/services.d/backupbot/run
RUN chmod +x /etc/services.d/backupbot/run
# Copy web frontend
COPY web /app
RUN chmod +x /app/cgi-bin/backupbot.cgi
# Expose web frontend port
EXPOSE 8080
# Keep duplicati entrypoint
ENTRYPOINT ["/init"]

README.md (+303)

@@ -1 +1,302 @@
Docker backup system for configs and databases
# BackupBot 🤖
![Docker Image Version (tag)](https://img.shields.io/docker/v/calahil/backupbot/latest?arch=amd64&style=for-the-badge&logo=docker&logoColor=white&labelColor=blue&color=green)
![Docker Pulls](https://img.shields.io/docker/pulls/calahil/backupbot?style=for-the-badge&logo=docker&logoColor=white&labelColor=blue&color=green)
![Docker Stars](https://img.shields.io/docker/stars/calahil/backupbot?style=for-the-badge&logo=docker&logoColor=white&labelColor=blue&color=green)
![GitHub License](https://img.shields.io/github/license/calahil/backupbot?style=social&logo=github&link=https%3A%2F%2Fraw.githubusercontent.com%2Fcalahil%2Fbackupbot%2Frefs%2Fheads%2Fmain%2FLICENSE)
![GitHub Release](https://img.shields.io/github/v/release/calahil/backupbot?display_name=release&style=for-the-badge&logo=github&labelColor=blue&color=green)
> **Automated Docker backup system for PostgreSQL databases and application configurations with Duplicati integration**
BackupBot is a comprehensive backup solution that automatically discovers and backs up PostgreSQL containers, creates btrfs snapshots of your application data, and provides a web-based configuration interface. Built on top of LinuxServer.io's Duplicati image, it combines database backups with flexible cloud storage options.
---
## ✨ Features
- 🔍 **Auto-Discovery**: Automatically detects PostgreSQL containers by image patterns
- 📊 **Multi-Database Support**: Backs up all databases within each PostgreSQL container using `pg_dumpall`
- 📸 **Filesystem Snapshots**: Creates read-only btrfs snapshots of application data
- 🔄 **Automated Scheduling**: Configurable backup times with retry logic
- 🌐 **Web Interface**: Simple configuration UI accessible on port 8080
- 🔔 **Gotify Integration**: Optional push notifications for backup failures
- 🗄️ **Duplicati Integration**: Full access to Duplicati for cloud backup destinations
- 🧹 **Retention Management**: Automatic cleanup of old backups based on retention policy
- 🐳 **Docker-Native**: Designed to run in containerized environments
---
## 🚀 Quick Start
### Prerequisites
- Docker Engine 20.10+
- Docker Compose 2.0+
- Btrfs filesystem for snapshot functionality (optional but recommended)
- Running PostgreSQL containers you want to back up
### Installation
1. **Clone the repository:**
```bash
git clone https://gitea.calahilstudios.com/owner/backupbot.git
cd backupbot
```
2. **Create environment file:**
```bash
cp .env.example .env
# Edit .env with your settings
nano .env
```
3. **Start the container:**
```bash
docker-compose up -d
```
4. **Access the interfaces:**
- BackupBot Config: http://localhost:8201
- Duplicati Web UI: http://localhost:8200
---
## 📋 Configuration
### Environment Variables
Create a `.env` file in the project root:
```env
# Duplicati encryption key (required)
KEY=your_encryption_key_here
# Duplicati web password (required)
PASSWORD=your_secure_password
# User/Group IDs (optional)
PUID=1000
PGID=1000
# Timezone (optional)
TZ=America/Los_Angeles
```
### BackupBot Configuration
BackupBot settings are managed through the web interface at `http://localhost:8201` or via the config file at `/config/backupbot.conf`:
```bash
TZ=America/Los_Angeles
BACKUP_DIR=/backups/postgres
LOG_FILE=/config/log/pgbackup.log
MAX_RETRIES=3
GOTIFY_URL=http://gotify.example.com
GOTIFY_TOKEN=your_gotify_token_here
BACKUP_HOUR=03
BACKUP_MINUTE=00
RETENTION_DAYS=7
```
### Supported PostgreSQL Images
BackupBot automatically detects containers running these images:
- `postgres:17.0-alpine`
- `postgres:17`
- `postgres:14.0-alpine`
- `postgres` (any version)
- `ghcr.io/immich-app/postgres:*`
Additional patterns can be added by modifying the `KNOWN_IMAGES` list in `backup.sh`.
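Conceptually, the match is a glob comparison of each container's image name against the known patterns. A minimal sketch of that idea (the `matches_known_image` helper is illustrative, not a function from `backup.sh`):

```shell
# Hypothetical helper illustrating the glob-style image match;
# the real logic lives in the KNOWN_IMAGES loop inside backup.sh.
matches_known_image() {
  case "$1" in
    postgres|postgres:*|ghcr.io/immich-app/postgres:*) return 0 ;;
    *) return 1 ;;
  esac
}

matches_known_image "postgres:17.0-alpine" && echo "match"     # match
matches_known_image "redis:7"              || echo "no match"  # no match
```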
---
## 🗂️ Volume Mappings
```yaml
volumes:
# Duplicati configuration
- /srv/appdata/duplicati/config:/config
# Backup storage (where dumps are stored)
- /srv/backups:/backups:rshared
# Docker socket (for container discovery)
- /var/run/docker.sock:/var/run/docker.sock:ro
# Source data for snapshots (optional)
- /srv/appdata:/source/appdata:ro
```
---
## 🔧 Usage
### Manual Backup
Trigger a backup manually:
```bash
docker exec backupbot /usr/local/bin/backup.sh
```
### View Logs
Monitor backup operations:
```bash
docker logs -f backupbot
```
### Check Backup Files
Backups are organized by container name:
```bash
ls -lh /srv/backups/postgres_dumps/
```
Example structure:
```
/srv/backups/
├── postgres_dumps/
│ ├── myapp_db/
│ │ ├── 2024-10-23_03-00-00.sql
│ │ └── 2024-10-24_03-00-00.sql
│ └── another_db/
│ └── 2024-10-23_03-00-00.sql
└── snapshots/
├── hostname-2024-10-23/
└── hostname-2024-10-24/
```
---
## 🎯 How It Works
1. **Discovery Phase**: BackupBot scans running Docker containers and identifies PostgreSQL instances
2. **Extraction**: For each database, credentials are extracted from environment variables
3. **Backup**: `pg_dumpall` creates a complete SQL dump of all databases
4. **Snapshot**: A read-only btrfs snapshot is created of `/srv/appdata`
5. **Retention**: Old backups exceeding the retention period are automatically deleted
6. **Notification**: On failure after retries, Gotify notifications are sent (if configured)
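Steps 3 and 5 above can be sketched as a self-contained illustration. The directory layout and the `find`-based cleanup mirror `backup.sh`, but this sketch uses a temporary directory with stand-in dump files; no real `pg_dumpall` runs (GNU `date`/`find`/`touch` assumed):

```shell
# Illustrative sketch of the per-container dump path and retention step.
BACKUP_DIR="$(mktemp -d)"         # stand-in for /backups/postgres_dumps
RETENTION_DAYS=7
NAME="myapp_db"                   # hypothetical container name
TIMESTAMP=$(date +'%Y-%m-%d_%H-%M-%S')

CONTAINER_BACKUP_DIR="$BACKUP_DIR/$NAME"
mkdir -p "$CONTAINER_BACKUP_DIR"

# Step 3 (stand-in): where the pg_dumpall output would land
FILE="$CONTAINER_BACKUP_DIR/$TIMESTAMP.sql"
: > "$FILE"

# Simulate a dump older than the retention window
OLD="$CONTAINER_BACKUP_DIR/2020-01-01_03-00-00.sql"
touch -d "30 days ago" "$OLD"

# Step 5: delete dumps older than RETENTION_DAYS
find "$CONTAINER_BACKUP_DIR" -type f -mtime +"$RETENTION_DAYS" -name '*.sql' -delete

ls "$CONTAINER_BACKUP_DIR"        # only the fresh dump remains
```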
---
## 🔐 Security Notes
- **Privileged Mode**: Required for btrfs snapshot functionality
- **Docker Socket**: Read-only access needed for container discovery
- **Credentials**: Database passwords are extracted from container environment variables
- **Network**: BackupBot runs in bridge mode by default
### Best Practices
- Use strong encryption keys for Duplicati
- Restrict access to the web interfaces using a reverse proxy with authentication
- Regularly test backup restoration procedures
- Store encryption keys securely outside the container
---
## 🛠️ Development
### Building from Source
```bash
docker build -t backupbot:latest .
```
### CI/CD Pipeline
BackupBot uses Gitea Actions for automated builds:
- **Trigger**: Push to `main` or `develop` branches
- **Registries**: `gitea.calahilstudios.com` and Docker Hub
- **Tags**: branch name, commit SHA, semver on `v*` tags, and `latest` on the default branch
---
## 📊 Monitoring
### Web Interfaces
- **BackupBot Config**: `http://localhost:8201`
- Configure backup schedules
- Set retention policies
- Manage Gotify notifications
- **Duplicati**: `http://localhost:8200`
- Configure cloud storage destinations
- Schedule remote backups
- Restore from backups
### Log Levels
Set via `BACKUPBOT_WEB_LOGGING` environment variable:
- `DEBUG`: Verbose logging with exception traces
- `INFO`: Standard operational logs (default)
- `WARN`: Warnings and errors only
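The gating works as a numeric comparison (warn=1 < info=2 < debug=3): a message prints only when its level is at or below the configured verbosity. A shell sketch of the same idea (the `emit` helper is hypothetical; the real implementation is the Python `log()` function in the CGI script):

```shell
# Messages print only when their level number is <= the configured level,
# matching the WARN < INFO < DEBUG ordering described above (bash).
declare -A LOG_LEVELS=([debug]=3 [info]=2 [warn]=1)
BACKUPBOT_WEB_LOGGING="$(echo "${BACKUPBOT_WEB_LOGGING:-info}" | tr '[:upper:]' '[:lower:]')"

emit() {
  local level="$1" message="$2"
  if (( LOG_LEVELS[$level] <= LOG_LEVELS[$BACKUPBOT_WEB_LOGGING] )); then
    echo "[${level^^}] $message"
  fi
}

emit info "backup started"    # printed at the default INFO level
emit debug "container env"    # suppressed unless BACKUPBOT_WEB_LOGGING=DEBUG
```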
---
## 🤝 Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request on Gitea
---
## 📝 License
This project is licensed under the GNU Affero General Public License v3.0 - see the [LICENSE](LICENSE) file for details.
**AGPL-3.0 Key Points:**
- ✅ Free to use, modify, and distribute
- ✅ Source code must be made available
- ✅ Network use is considered distribution
- ✅ Modifications must also be AGPL-3.0
---
## 🙏 Acknowledgments
- Built on [LinuxServer.io Duplicati](https://github.com/linuxserver/docker-duplicati)
- PostgreSQL backup functionality inspired by community best practices
- Web interface uses vanilla JavaScript for minimal dependencies
---
## 📞 Support
- 🐛 **Issues**: [Report bugs on Gitea](https://gitea.calahilstudios.com/owner/backupbot/issues)
- 📚 **Documentation**: This README and inline code comments
- 💬 **Discussions**: Open an issue for questions
---
## 🗺️ Roadmap
- [ ] MySQL/MariaDB support
- [ ] MongoDB backup integration
- [ ] Advanced scheduling options (multiple backup windows)
- [ ] Backup verification and integrity checks
- [ ] Prometheus metrics export
- [ ] Email notifications
- [ ] Backup compression options
---
**Made with ❤️ by Calahil Studios**
[![Gitea](https://img.shields.io/badge/View%20on-Gitea-609926?style=for-the-badge&logo=gitea&logoColor=white)](https://gitea.calahilstudios.com)

backup.sh

@@ -4,7 +4,6 @@
# Author: Calahil Studios
# === CONFIGURATION ===
BACKUP_DIR="/backups/postgres_dumps"
RETENTION_DAYS="${RETENTION_DAYS:-7}" # Keep 7 days of backups
@@ -19,12 +18,12 @@ ghcr.io/immich-app/postgres:14-vectorchord0.4.3-pgvectors0.2.0
EOF
)
echo "[BACKUPBOT_INFO] Starting PostgreSQL backup service..."
mkdir -p "$BACKUP_DIR"
TIMESTAMP=$(date +'%Y-%m-%d_%H-%M-%S')
echo "[BACKUPBOT_INFO] $(date) - Starting backup cycle ($TIMESTAMP)"
echo "[BACKUPBOT_INFO] Checking for running Postgres containers..."
# Find running containers matching known image names
MATCHING_CONTAINERS=$(
@@ -41,7 +40,7 @@ MATCHING_CONTAINERS=$(
)
if [ -z "$MATCHING_CONTAINERS" ]; then
echo "[BACKUPBOT_WARN] No Postgres containers found."
else
for container in $MATCHING_CONTAINERS; do
NAME=$(docker inspect --format '{{.Name}}' "$container" | sed 's#^/##')
@@ -54,16 +53,16 @@ else
PG_USER=$(docker inspect --format '{{range .Config.Env}}{{println .}}{{end}}' "$container" | grep POSTGRES_USER | cut -d= -f2)
PG_PASS=$(docker inspect --format '{{range .Config.Env}}{{println .}}{{end}}' "$container" | grep POSTGRES_PASSWORD | cut -d= -f2)
if docker exec -e PGPASSWORD="$PG_PASS" "$container" pg_dumpall -U "$PG_USER" -h 127.0.0.1 >"$FILE" 2>/tmp/pg_backup_error.log; then
echo "[BACKUPBOT_SUCCESS] Backup complete for $NAME -> $FILE"
else
echo "[BACKUPBOT_ERROR] Backup failed for $NAME (check /tmp/pg_backup_error.log)"
fi
# Retention cleanup
find "$CONTAINER_BACKUP_DIR" -type f -mtime +$RETENTION_DAYS -name '*.sql' -delete
done
fi
echo "[BACKUPBOT_INFO] Creating a snapshot of /srv/appdata"
btrfs subvolume snapshot -r /source/appdata /backups/snapshots/$(hostname)-$(date +%F)
echo "[BACKUPBOT_INFO] Backup cycle complete."

backupbot.conf (new file, +9)

@@ -0,0 +1,9 @@
TZ=America/Los_Angeles
BACKUP_DIR=/backups/postgres
LOG_FILE=/config/log/pgbackup.log
MAX_RETRIES=3
GOTIFY_URL=http://gotify.example.com
GOTIFY_TOKEN=your_gotify_token_here
BACKUP_HOUR=03
BACKUP_MINUTE=00
BACKUPBOT_WEB_LOGGING=DEBUG


@@ -1,11 +1,11 @@
services:
backupbot:
build: .
container_name: backupbot
privileged: true
environment:
- PUID=1000
- PGID=1000
- TZ=Etc/UTC
- SETTINGS_ENCRYPTION_KEY=${KEY}
- CLI_ARGS= #optional
@@ -14,12 +14,10 @@ services:
# Config dir for duplicati
- /srv/appdata/duplicati/config:/config
# Backup folder to store dumps/backups
- /srv/backups:/backups:rshared
# Local docker config dirs
- /srv/appdata:/source/appdata:rshared
# Docker socket to list containers
- /var/run/docker.sock:/var/run/docker.sock:ro
ports:
- 8200:8200
- 8201:8080
restart: unless-stopped

services/backupbot/run

@@ -1,27 +1,84 @@
#!/usr/bin/with-contenv bash
set -e
# Source env if available
if [[ -f /config/backupbot.conf ]]; then
set -a
source /config/backupbot.conf
set +a
else
echo "[INFO] copying config vars from defaults..."
cp -r /defaults/backupbot.conf /config/
set -a
source /config/backupbot.conf
set +a
fi
# Initialize default web interface if missing
if [ ! -d /config/web ]; then
echo "[INFO] Populating /config/web from defaults..."
cp -r /defaults/web /config/
fi
echo "[BACKUPBOT_INFO] Starting PostgreSQL backup loop service..."
# Start Python HTTP server for web config in background
cd /app
nohup python3 -m http.server 8080 --cgi 2>&1 &
# Start backup scheduler
STATE_FILE="/config/last_backup_date"
LOG_FILE="/config/log/pgbackup.log"
mkdir -p "$(dirname "$STATE_FILE")" "$(dirname "$LOG_FILE")"
# TZ
: "${TZ:=UTC}"
export TZ
# Retry config
RETRIES=3
GOTIFY_URL="${GOTIFY_URL:-}"
GOTIFY_TOKEN="${GOTIFY_TOKEN:-}"
# Helper: seconds until next 3AM
seconds_until_next_3am() {
local now next_3am
now=$(date +%s)
next_3am=$(date -d "today 03:00" +%s)
((now >= next_3am)) && next_3am=$(date -d "tomorrow 03:00" +%s)
echo $((next_3am - now))
}
# Run backup with retries
run_backup() {
local attempt=1
while ((attempt <= RETRIES)); do
echo "[INFO] Backup attempt $attempt"
if /usr/local/bin/backup.sh; then
echo "[SUCCESS] Backup completed"
return 0
else
echo "[WARN] Backup failed on attempt $attempt"
((attempt++))
sleep 5
fi
done
# Send Gotify notification if configured
if [[ -n "$GOTIFY_URL" && -n "$GOTIFY_TOKEN" ]]; then
curl -s -X POST "$GOTIFY_URL/message?token=$GOTIFY_TOKEN" \
-F "title=Backup Failed" \
-F "message=PostgreSQL backup failed after $RETRIES attempts" \
-F "priority=5"
fi
return 1
}
# Main loop
while true; do
TODAY=$(date +%F)
# Check if a backup already ran today
if [[ -f "$STATE_FILE" && "$(cat "$STATE_FILE")" == "$TODAY" ]]; then
echo "[INFO] Backup already done for $TODAY"
else
echo "[INFO] Running backup for $TODAY"
if run_backup; then
echo "$TODAY" >"$STATE_FILE"
fi
fi
SECONDS_TO_WAIT=$(seconds_until_next_3am)
sleep "$SECONDS_TO_WAIT"
done

web/cgi-bin/backupbot.cgi (new file, +115)

@@ -0,0 +1,115 @@
#!/usr/bin/env python3
import cgi
import cgitb
import os
import json
import sys
import traceback
import tempfile
cgitb.enable()
print("Content-Type: application/json\n")
ENV_FILE = "/config/backupbot.conf"
ZONEINFO_DIR = "/usr/share/zoneinfo"
# Logging level from environment
LOG_LEVEL = os.environ.get("BACKUPBOT_WEB_LOGGING", "info").lower()
LOG_LEVELS = {"debug": 3, "info": 2, "warn": 1}
def log(level, message, exc=None):
"""
Docker-friendly logging.
level: "debug", "info", "warn"
exc: exception object (only used in debug)
"""
if LOG_LEVELS.get(level, 0) <= LOG_LEVELS.get(LOG_LEVEL, 0):
timestamp = __import__("datetime").datetime.now().strftime("%Y-%m-%d %H:%M:%S")
msg = f"[{timestamp}] [{level.upper()}] {message}"
print(msg, file=sys.stderr)
if exc and LOG_LEVEL == "debug":
traceback.print_exception(
type(exc), exc, exc.__traceback__, file=sys.stderr
)
def read_env():
env = {}
if os.path.exists(ENV_FILE):
try:
with open(ENV_FILE) as f:
for line in f:
line = line.strip()
if not line or "=" not in line:
continue
key, val = line.split("=", 1)
env[key.strip()] = val.strip()
except Exception as e:
log("warn", f"Failed to read config: {e}", e)
return env
def write_env(env):
try:
dir_name = os.path.dirname(ENV_FILE)
os.makedirs(dir_name, exist_ok=True)
# Write atomically to temp file
with tempfile.NamedTemporaryFile("w", dir=dir_name, delete=False) as tmp:
for key, val in env.items():
tmp.write(f"{key}={val}\n")
temp_name = tmp.name
os.replace(temp_name, ENV_FILE)
log("info", f"Configuration saved to {ENV_FILE}")
except Exception as e:
log("warn", f"Failed to write config: {e}", e)
raise
def list_timezones():
zones = []
for root, _, files in os.walk(ZONEINFO_DIR):
rel_root = os.path.relpath(root, ZONEINFO_DIR)
if rel_root.startswith(("posix", "right")):
continue
for file in files:
if file.startswith(".") or file.endswith((".tab", ".zi")):
continue
zones.append(os.path.join(rel_root, file) if rel_root != "." else file)
return sorted(zones)
form = cgi.FieldStorage()
action = form.getvalue("action")
try:
if action == "get":
env = read_env()
log("debug", f"Returning configuration: {env}")
print(json.dumps(env))
elif action == "set":
raw_len = os.environ.get("CONTENT_LENGTH")
length = int(raw_len) if raw_len else 0
data = json.loads(os.read(0, length))
log("debug", f"Received new configuration: {data}")
env = read_env()
env.update(data) # update existing keys, add new keys
write_env(env)
print(json.dumps({"status": "ok", "message": "Configuration saved."}))
elif action == "get_timezones":
zones = list_timezones()
log("debug", f"Returning {len(zones)} timezones")
print(json.dumps({"timezones": zones}))
else:
log("warn", f"Invalid action requested: {action}")
print(json.dumps({"status": "error", "message": "Invalid action"}))
except Exception as e:
log("warn", f"Unhandled exception: {e}", e)
print(json.dumps({"status": "error", "message": str(e)}))

web/index.html (new file, +118)

@@ -0,0 +1,118 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>BackupBot Configuration</title>
<style>
body {
font-family: sans-serif;
margin: 2rem;
background: #f4f4f4;
}
label {
display: block;
margin-top: 1rem;
}
input {
width: 200px;
}
button {
margin-top: 1rem;
padding: 0.5rem 1rem;
}
</style>
</head>
<body>
<h1>BackupBot Configuration</h1>
<form id="configForm">
<label>Timezone:
<select id="tzSelect" name="TZ">
<option value="">Loading...</option>
</select>
</label>
<label>Backup Directory:
<input type="text" name="BACKUP_DIR" id="backupDir" placeholder="/backups">
<button type="button" onclick="chooseBackupDir()">Browse</button>
</label>
<label>Log File:
<input type="text" name="LOG_FILE" id="logDir" placeholder="/config/log">
<button type="button" onclick="chooseLogDir()">Browse</button>
</label>
<label>Backup Hour:
<input type="number" name="BACKUP_HOUR" min="0" max="23">
</label>
<label>Backup Minute:
<input type="number" name="BACKUP_MINUTE" min="0" max="59">
</label>
<label>Max Retries:
<input type="number" name="MAX_RETRIES" min="1" max="10">
</label>
<label>Gotify URL:
<input type="text" name="GOTIFY_URL">
</label>
<label>Gotify Token:
<input type="text" name="GOTIFY_TOKEN">
</label>
<button type="submit">Save Configuration</button>
</form>
<p id="status"></p>
<script>
async function loadTimezones() {
const res = await fetch('/cgi-bin/backupbot.cgi?action=get_timezones');
const data = await res.json();
const select = document.getElementById('tzSelect');
select.innerHTML = '';
data.timezones.forEach(tz => {
const opt = document.createElement('option');
opt.value = tz;
opt.textContent = tz;
select.appendChild(opt);
});
}
function chooseBackupDir() {
const base = prompt("Enter or confirm your backup directory path:", "/backups");
if (base) document.getElementById('backupDir').value = base;
}
function chooseLogDir() {
const base = prompt("Enter or confirm your log directory path:", "/config/log");
if (base) document.getElementById('logDir').value = base;
}
async function loadConfig() {
const res = await fetch('/cgi-bin/backupbot.cgi?action=get');
const data = await res.json();
const form = document.getElementById('configForm');
Object.keys(data).forEach(key => {
if (form.elements[key]) form.elements[key].value = data[key];
});
}
async function saveConfig(e) {
e.preventDefault();
const formData = new FormData(document.getElementById('configForm'));
const obj = Object.fromEntries(formData.entries());
const res = await fetch('/cgi-bin/backupbot.cgi?action=set', {
method: 'POST',
headers: {'Content-Type': 'application/json'},
body: JSON.stringify(obj)
});
const result = await res.json();
document.getElementById('status').innerText = result.message;
}
document.getElementById('configForm').addEventListener('submit', saveConfig);
loadTimezones().then(loadConfig);
</script>
</body>
</html>