Initial commit: Fuji photo processor pipeline
Automatic photo processing: Fuji X-H2 → FTP → Synology NAS → resize → Immich

- PollingObserver watches /incoming/ for new JPEGs
- Moves originals to /originals/YYYY/MM/
- Creates resized copies (1080x1920 @ 85%) with EXIF preserved
- SQLite tracking to prevent duplicate processing
- Deploy script for Synology NAS (docker run)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
.gitignore (vendored, new file)
@@ -0,0 +1,12 @@
__pycache__/
*.pyc
*.pyo
.env
.env.*
*.egg-info/
dist/
build/
.venv/
venv/
*.db
.DS_Store
Dockerfile (new file)
@@ -0,0 +1,6 @@
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY processor.py .
CMD ["python", "-u", "processor.py"]
README.md (new file)
@@ -0,0 +1,111 @@
# Fuji Photo Processor

Automatic photo processing pipeline for Fuji X-H2 photos: the camera uploads via FTP to a Synology NAS, files are automatically resized and organized, then picked up by Immich for library management.

## Architecture

```
┌─────────────┐     FTP     ┌──────────────────┐    watchdog   ┌─────────────┐
│  Fuji X-H2  │ ──────────→ │   Synology NAS   │ ────────────→ │  Processor  │
│  (camera)   │   port 21   │ /volume2/photos/ │    polling    │  container  │
└─────────────┘             │     incoming/    │               └──────┬──────┘
                            └──────────────────┘                      │
                                                                      │ resize + move
                                                          ┌───────────┴───────────┐
                                                          │                       │
                                                    ┌─────▼──────┐        ┌───────▼───────┐
                                                    │ /originals │        │  /processed   │
                                                    │  YYYY/MM/  │        │   YYYY/MM/    │
                                                    │ (full-res) │        │ (1080px max)  │
                                                    └────────────┘        └───────┬───────┘
                                                                                  │
                                                                          ┌───────▼───────┐
                                                                          │    Immich     │
                                                                          │  external lib │
                                                                          └───────────────┘
```
## Prerequisites

- Synology NAS with Docker (Container Manager) installed
- FTP server enabled in DSM
- Immich running (for photo management)

## Setup

### 1. FTP Server (DSM)

1. Open **Control Panel → File Services → FTP**
2. Enable the FTP service on port **21**
3. Set the passive port range: **50000-50100**
4. Create a dedicated user `fujiftp` with write access to `/volume2/photos/incoming`

### 2. Camera Configuration (Fuji X-H2)

Configure an FTP profile on the camera:

| Setting       | Value                  |
|---------------|------------------------|
| Server IP     | `192.168.175.141`      |
| Port          | `21`                   |
| Passive Mode  | **ON**                 |
| Username      | `fujiftp`              |
| Password      | *(your password)*      |
| Upload Dir    | `/incoming`            |
| Auto Transfer | ON (or manual trigger) |
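Before entering these settings on the camera, you can sanity-check the FTP server and the `fujiftp` credentials from any machine on the LAN (hypothetical local file `test.jpg`; replace the password placeholder):

```bash
# Upload a throwaway JPEG into the landing zone over passive FTP.
curl --ftp-pasv -T test.jpg "ftp://fujiftp:<password>@192.168.175.141/incoming/"
```

If the upload succeeds, the file appears in `/volume2/photos/incoming` and, once the processor is deployed, should be picked up within one poll interval.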
### 3. Deploy

```bash
./deploy.sh
```

The deploy script will:

- Create required directories on the NAS
- Transfer and build the Docker image on the NAS
- Start the container with appropriate volume mounts

### 4. Immich Integration

1. In Immich, go to **Administration → External Libraries**
2. Add a new library with import path: `/volume2/photos/processed`
3. Set a scan interval (e.g., every 15 minutes)
4. Mount `/volume2/photos/processed` into the Immich container as a read-only volume
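If Immich itself runs under docker compose, the read-only mount from step 4 could be sketched like this (the service name and surrounding compose layout are assumptions; mounting the host path to the identical container path keeps the import path from step 2 valid):

```yaml
services:
  immich-server:
    # ... existing Immich configuration ...
    volumes:
      # Read-only: Immich only scans; the processor owns this tree.
      - /volume2/photos/processed:/volume2/photos/processed:ro
```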
## Container Details

### Volumes

| Container Path | Host Path                              | Purpose                   |
|----------------|----------------------------------------|---------------------------|
| `/incoming`    | `/volume2/photos/incoming`             | FTP upload landing zone   |
| `/originals`   | `/volume2/photos/originals`            | Full-resolution originals |
| `/processed`   | `/volume2/photos/processed`            | Resized copies for Immich |
| `/data`        | `/volume2/docker/photo-processor/data` | SQLite tracking database  |

### Environment Variables

| Variable        | Default            | Description                         |
|-----------------|--------------------|-------------------------------------|
| `POLL_INTERVAL` | `30`               | Filesystem poll interval in seconds |
| `JPEG_QUALITY`  | `85`               | JPEG compression quality (1-100)    |
| `MAX_WIDTH`     | `1080`             | Maximum width for resized images    |
| `MAX_HEIGHT`    | `1920`             | Maximum height for resized images   |
| `TZ`            | `Europe/Amsterdam` | Container timezone                  |
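The resize uses Pillow's `Image.thumbnail`, which scales the image down to fit within `MAX_WIDTH × MAX_HEIGHT` while preserving aspect ratio (it never enlarges). A minimal sketch with the X-H2's 7728×5152 full-resolution frame:

```python
from PIL import Image

# Blank stand-in for a full-resolution X-H2 frame (7728x5152, 3:2).
img = Image.new("RGB", (7728, 5152))

# thumbnail() shrinks in place to fit the bounding box while keeping
# the aspect ratio -- here the 1080px width limit binds.
img.thumbnail((1080, 1920), Image.LANCZOS)
print(img.size)  # (1080, 720)
```

So despite the 1080x1920 bounding box, a landscape 3:2 frame comes out at 1080×720; the 1920 height limit only binds for portrait-orientation shots.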
## Troubleshooting

### Check container logs

```bash
ssh -i ../SynologyDocker/synology_ssh_key ssh@192.168.175.141 \
    'sudo /usr/local/bin/docker logs -f photo-processor'
```

### Common Issues

- **Files not detected**: Check that the FTP user has write permissions to `/volume2/photos/incoming`. The processor uses polling (not inotify), so there may be up to a 30-second delay.
- **Permission denied on originals/processed**: Ensure the directories exist and are writable. The deploy script creates them automatically.
- **Duplicate filenames**: The processor tracks files by filename in SQLite. If you re-upload a file with the same name, it will be skipped. Delete the entry from `/volume2/docker/photo-processor/data/processed.db` to reprocess.
- **Container keeps restarting**: Check the logs for Python errors. Common causes: missing directories or permission issues on volume mounts.
- **EXIF data lost**: The processor preserves EXIF data from the original. If EXIF is missing, the original file may not have contained it (check camera settings).
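To force a single file to be reprocessed, the tracking row can be deleted directly. A sketch, assuming the `sqlite3` CLI is available on the NAS and using a hypothetical filename:

```bash
# The processor skips any filename present in processed_files (see processor.py).
sqlite3 /volume2/docker/photo-processor/data/processed.db \
  "DELETE FROM processed_files WHERE filename = 'DSCF0001.JPG';"
```

Then drop the file back into `/volume2/photos/incoming` (or re-upload it from the camera).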
deploy.sh (new executable file)
@@ -0,0 +1,135 @@
#!/usr/bin/env bash
# Fuji Photo Processor Deployment Script for Synology NAS
# Deploys: photo-processor container (watches /incoming/ for JPEGs)
#
# This script runs from the LOCAL machine and SSHes into the NAS.
# Docker compose is not available on this Synology, so we use docker run.

set -euo pipefail

# Configuration
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
SSH_KEY="$(cd "$SCRIPT_DIR/.." && pwd)/SynologyDocker/synology_ssh_key"
NAS_HOST="ssh@192.168.175.141"
NAS_DOCKER_DIR="/volume2/docker/photo-processor"
CONTAINER_NAME="photo-processor"
IMAGE_NAME="photo-processor"

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'

# ---------------------------------------------------------------------------
# Logging helpers
# ---------------------------------------------------------------------------
log_info()  { echo -e "${GREEN}[INFO]${NC} $*"; }
log_warn()  { echo -e "${YELLOW}[WARN]${NC} $*"; }
log_error() { echo -e "${RED}[ERROR]${NC} $*"; }

# ---------------------------------------------------------------------------
# SSH / Docker wrappers
# ---------------------------------------------------------------------------
ssh_cmd() {
    ssh -i "$SSH_KEY" -o StrictHostKeyChecking=no "$NAS_HOST" "$@"
}

docker_cmd() {
    # Synology requires sudo for docker socket access
    ssh_cmd "sudo /usr/local/bin/docker $*"
}

# ---------------------------------------------------------------------------
# Functions
# ---------------------------------------------------------------------------

create_directories() {
    log_info "Creating directories on NAS..."
    ssh_cmd "mkdir -p $NAS_DOCKER_DIR/data $NAS_DOCKER_DIR/build"

    # Photos dirs may need elevated permissions - check and warn
    if ! ssh_cmd "test -d /volume2/photos/incoming" 2>/dev/null; then
        log_warn "/volume2/photos/ directories not found!"
        log_warn "Create them via DSM File Station or as admin:"
        log_warn "  sudo mkdir -p /volume2/photos/{incoming,originals,processed}"
        log_warn "  sudo chown -R fujiftp:users /volume2/photos/incoming"
        log_warn "  sudo chmod 755 /volume2/photos/{originals,processed}"
        exit 1
    fi
}

build_image() {
    log_info "Transferring build files to NAS..."
    tar czf - -C "$SCRIPT_DIR" Dockerfile requirements.txt processor.py | \
        ssh -i "$SSH_KEY" -o StrictHostKeyChecking=no "$NAS_HOST" \
            "cd $NAS_DOCKER_DIR/build && tar xzf -"

    log_info "Building $IMAGE_NAME image on NAS..."
    docker_cmd build -t "$IMAGE_NAME" "$NAS_DOCKER_DIR/build"
}

stop_existing() {
    log_info "Stopping existing container (if any)..."
    docker_cmd stop "$CONTAINER_NAME" 2>/dev/null || true
    docker_cmd rm "$CONTAINER_NAME" 2>/dev/null || true
}

start_container() {
    log_info "Starting $CONTAINER_NAME container..."
    docker_cmd run -d \
        --name "$CONTAINER_NAME" \
        --restart unless-stopped \
        --memory=256m \
        -e TZ=Europe/Amsterdam \
        -v /volume2/photos/incoming:/incoming \
        -v /volume2/photos/originals:/originals \
        -v /volume2/photos/processed:/processed \
        -v "$NAS_DOCKER_DIR/data:/data" \
        "$IMAGE_NAME"
}

show_status() {
    echo ""
    log_info "=== Container Status ==="
    docker_cmd ps --filter "name=$CONTAINER_NAME" --format "'table {{.Names}}\t{{.Status}}\t{{.Ports}}'"
}

# ---------------------------------------------------------------------------
# Main
# ---------------------------------------------------------------------------
main() {
    log_info "=== Fuji Photo Processor Deployment ==="
    echo ""

    # Verify SSH key exists
    if [ ! -f "$SSH_KEY" ]; then
        log_error "SSH key not found: $SSH_KEY"
        log_error "Expected sibling repo at: $(cd "$SCRIPT_DIR/.." && pwd)/SynologyDocker/"
        exit 1
    fi

    # Verify NAS connectivity
    log_info "Testing NAS connectivity..."
    if ! ssh_cmd "echo ok" >/dev/null 2>&1; then
        log_error "Cannot connect to NAS at $NAS_HOST"
        exit 1
    fi

    create_directories
    build_image
    stop_existing
    start_container
    show_status

    echo ""
    log_info "=== Deployment Complete ==="
    log_info "Container: $CONTAINER_NAME"
    log_info "Incoming:  /volume2/photos/incoming (FTP upload target)"
    log_info "Originals: /volume2/photos/originals/YYYY/MM/"
    log_info "Processed: /volume2/photos/processed/YYYY/MM/"
    echo ""
    log_info "Check logs: ssh -i $SSH_KEY $NAS_HOST 'sudo /usr/local/bin/docker logs -f $CONTAINER_NAME'"
}

main "$@"
processor.py (new file)
@@ -0,0 +1,222 @@
"""
Fuji Photo Processor
Watches /incoming/ for new JPEGs, moves originals and creates resized copies.
Designed for the Fuji X-H2 → FTP → Synology NAS → Immich pipeline.
"""

import logging
import os
import shutil
import sqlite3
import sys
import time
from datetime import datetime
from pathlib import Path

from PIL import Image
from watchdog.events import FileSystemEventHandler
from watchdog.observers.polling import PollingObserver

# ---------------------------------------------------------------------------
# Configuration from environment
# ---------------------------------------------------------------------------
POLL_INTERVAL = int(os.environ.get("POLL_INTERVAL", "30"))
JPEG_QUALITY = int(os.environ.get("JPEG_QUALITY", "85"))
MAX_WIDTH = int(os.environ.get("MAX_WIDTH", "1080"))
MAX_HEIGHT = int(os.environ.get("MAX_HEIGHT", "1920"))

INCOMING_DIR = Path("/incoming")
ORIGINALS_DIR = Path("/originals")
PROCESSED_DIR = Path("/processed")
DB_PATH = Path("/data/processed.db")

JPEG_EXTENSIONS = {".jpg", ".jpeg"}

# ---------------------------------------------------------------------------
# Logging
# ---------------------------------------------------------------------------
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
    stream=sys.stdout,
)
log = logging.getLogger("processor")

# ---------------------------------------------------------------------------
# Database
# ---------------------------------------------------------------------------

def init_db():
    """Initialize the SQLite tracking database."""
    DB_PATH.parent.mkdir(parents=True, exist_ok=True)
    # check_same_thread=False: the connection is shared with the watchdog
    # observer thread, which dispatches the filesystem event callbacks.
    conn = sqlite3.connect(str(DB_PATH), check_same_thread=False)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS processed_files (
            filename TEXT PRIMARY KEY,
            processed_at TEXT NOT NULL,
            original_path TEXT NOT NULL,
            resized_path TEXT NOT NULL
        )"""
    )
    conn.commit()
    return conn


def is_already_processed(conn, filename):
    """Check if a file has already been processed."""
    row = conn.execute(
        "SELECT 1 FROM processed_files WHERE filename = ?", (filename,)
    ).fetchone()
    return row is not None


def mark_processed(conn, filename, original_path, resized_path):
    """Record a file as processed."""
    conn.execute(
        "INSERT INTO processed_files (filename, processed_at, original_path, resized_path) "
        "VALUES (?, ?, ?, ?)",
        (filename, datetime.now().isoformat(), str(original_path), str(resized_path)),
    )
    conn.commit()

# ---------------------------------------------------------------------------
# Processing
# ---------------------------------------------------------------------------

def is_jpeg(path):
    """Check if a file path has a JPEG extension (case-insensitive)."""
    return Path(path).suffix.lower() in JPEG_EXTENSIONS


def process_file(filepath, conn):
    """Move the original and create a resized copy of a JPEG file."""
    filepath = Path(filepath)

    if not filepath.exists():
        return

    if not is_jpeg(filepath):
        return

    filename = filepath.name

    if is_already_processed(conn, filename):
        log.info("Skipping already processed: %s", filename)
        return

    try:
        # Determine year/month from file modification time
        mtime = filepath.stat().st_mtime
        dt = datetime.fromtimestamp(mtime)
        year_month = f"{dt.year}/{dt.month:02d}"

        # Destination paths
        original_dest = ORIGINALS_DIR / year_month / filename
        resized_dest = PROCESSED_DIR / year_month / filename

        # Create directories
        original_dest.parent.mkdir(parents=True, exist_ok=True)
        resized_dest.parent.mkdir(parents=True, exist_ok=True)

        # Resize and save with EXIF preserved
        with Image.open(filepath) as image:
            exif_data = image.info.get("exif", b"")
            image.thumbnail((MAX_WIDTH, MAX_HEIGHT), Image.LANCZOS)

            save_kwargs = {"quality": JPEG_QUALITY}
            if exif_data:
                save_kwargs["exif"] = exif_data

            image.save(str(resized_dest), "JPEG", **save_kwargs)

        # Move the original out of incoming
        shutil.move(str(filepath), str(original_dest))

        # Track in database
        mark_processed(conn, filename, original_dest, resized_dest)
        log.info("Processed: %s → originals/%s, processed/%s", filename, year_month, year_month)

    except Exception:
        log.exception("Error processing %s", filename)

# ---------------------------------------------------------------------------
# Watchdog handler
# ---------------------------------------------------------------------------

class PhotoHandler(FileSystemEventHandler):
    """Handles new JPEG files appearing in the incoming directory."""

    def __init__(self, conn):
        super().__init__()
        self.conn = conn

    def on_created(self, event):
        if event.is_directory:
            return
        if is_jpeg(event.src_path):
            log.info("New file detected: %s", event.src_path)
            # Small delay to ensure the file is fully written (FTP uploads)
            time.sleep(2)
            process_file(event.src_path, self.conn)

    def on_moved(self, event):
        """Handle FTP clients that create temp files then rename."""
        if event.is_directory:
            return
        if is_jpeg(event.dest_path):
            log.info("Renamed file detected: %s", event.dest_path)
            time.sleep(2)
            process_file(event.dest_path, self.conn)

# ---------------------------------------------------------------------------
# Main
# ---------------------------------------------------------------------------

def scan_existing(conn):
    """Process any existing files in /incoming/ (catch-up after restart)."""
    existing = list(INCOMING_DIR.glob("*"))
    jpegs = [f for f in existing if f.is_file() and is_jpeg(f)]
    if jpegs:
        log.info("Found %d existing JPEG(s) in /incoming/, processing...", len(jpegs))
        for filepath in jpegs:
            process_file(filepath, conn)
    else:
        log.info("No existing files in /incoming/")


def main():
    log.info("=== Fuji Photo Processor ===")
    log.info("Config: poll=%ds, quality=%d, max_size=%dx%d",
             POLL_INTERVAL, JPEG_QUALITY, MAX_WIDTH, MAX_HEIGHT)
    log.info("Watching: %s", INCOMING_DIR)

    conn = init_db()
    log.info("Database initialized: %s", DB_PATH)

    # Catch-up scan
    scan_existing(conn)

    # Start watching
    handler = PhotoHandler(conn)
    observer = PollingObserver(timeout=POLL_INTERVAL)
    observer.schedule(handler, str(INCOMING_DIR), recursive=False)
    observer.start()
    log.info("Watching for new files (polling every %ds)...", POLL_INTERVAL)

    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        log.info("Shutting down...")
        observer.stop()

    observer.join()
    conn.close()


if __name__ == "__main__":
    main()
requirements.txt (new file)
@@ -0,0 +1,2 @@
Pillow==11.1.0
watchdog==4.0.2