Google Photos vs Synology Photos: Breaking the Cloud Dependency (A Migration Log)

Last month, I hit the inevitable wall that every data-conscious engineer eventually faces: the storage cap. After years of treating Google Photos as an infinite dumpster for my raw camera uploads, memes, and family videos, I received the notification that my 2TB plan was 95% full. The immediate reaction was to upgrade to the next tier, but looking at the compounding costs over the next decade—and the realization that I was renting access to my own memories—forced a re-evaluation. This wasn't just about saving $10 a month; it was about the technical limitations of cloud abstraction versus the raw control of local hardware.

The Architecture Gap: Cloud Scale vs. Local Iron

To understand the trade-off, we must look beyond the UI and into the backend architecture. Google Photos is powered by Google's Bigtable database and Colossus file system, which give it unmatched durability (eleven nines) and global availability. However, the cost of this scale is compression and the loss of hierarchy: Google Photos flattens your directory structure into a chronological stream. For a casual user, this is magic. For an engineer who organizes shoots by `YYYY-MM-DD_ProjectName`, it is chaos.

Synology Photos, running on DiskStation Manager (DSM), operates fundamentally differently. It sits on top of a Linux-based OS, typically utilizing the Btrfs file system. This is critical because Btrfs supports bit-rot protection and snapshotting—features that are essential when you become the sole custodian of your data. When you move to Synology, you aren't just swapping apps; you are swapping a service for a system you must maintain. The CPU in your NAS (often an Intel Celeron or AMD Ryzen embedded chip) becomes the bottleneck for facial recognition and thumbnail generation, replacing Google's massive TPU clusters.
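To make the snapshot point concrete: on a plain Btrfs system, a snapshot is just a read-only, point-in-time copy of a subvolume. The commands below are a generic Btrfs sketch run as root over SSH, with an illustrative destination path; on DSM itself, the Snapshot Replication package is the supported way to manage this.

# Generic Btrfs sketch, not a DSM recipe: take a read-only snapshot of the photo share.
# The destination path is illustrative only.
btrfs subvolume snapshot -r /volume1/photo /volume1/photo_snap_$(date +%Y-%m-%d)

# List subvolumes/snapshots on the volume to confirm it exists
btrfs subvolume list /volume1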

Critical Warning: Unlike Google Photos, Synology provides NO geographical redundancy out of the box. If your house floods, your data is gone. You must implement the 3-2-1 backup rule manually using Hyper Backup or C2 Storage.
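Hyper Backup (to a second NAS, a USB drive, or C2) is the supported route for that off-site copy. Purely as a sketch of what the off-site leg of 3-2-1 looks like if you prefer to script it yourself, here is a minimal rsync-over-SSH job; the remote host, user, and paths are hypothetical.

# DIY off-site leg of the 3-2-1 rule (Hyper Backup is the supported DSM tool).
# The remote host, user, and paths below are hypothetical placeholders.
rsync -avz --delete -e "ssh -p 22" /volume1/photo/ backupuser@offsite.example.com:/backups/photo/
# Schedule it nightly via DSM Task Scheduler or cron once it works by hand.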

The "Experience" of switching involves a painful transition period: The Migration. I initially tried using Google Takeout. The result was 50GB zip files containing JSON metadata split from the actual images. Re-associating this metadata (EXIF dates, GPS) with the images is where most "naive" migration scripts fail.

Why the "Drag and Drop" Approach Failed

My first attempt involved mounting the NAS as a network drive and dragging the unzipped Takeout folders into the `/photo` directory. This failed spectacularly for two reasons:

  1. Creation Date Chaos: The file system "Created Date" was reset to the moment of the copy, wrecking the timeline in any viewer that is not EXIF-aware (a partial fix is sketched after this list).
  2. HEIC Incompatibility: The indexing service on my older Synology choked on thousands of iPhone HEIC files, pinning CPU usage at 99% for days and rendering the NAS unusable.
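For files that still carry correct EXIF data, the first problem is at least repairable after the fact. A minimal sketch with exiftool, assuming the copied files ended up under `/volume1/photo/GoogleArchive`:

# Rewrite the filesystem modification date from the EXIF capture date so that
# non-EXIF-aware views sort the timeline correctly again.
exiftool -r "-FileModifyDate<DateTimeOriginal" /volume1/photo/GoogleArchive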

The Solution: Automated Sync Container

Instead of manual copying, the robust engineering solution is to use a containerized synchronizer that handles the API interactions and metadata preservation correctly. I deployed `gphotos-sync` via Docker on the Synology NAS itself. This tool pulls photos directly from the Google Photos API, organizes them into folders by date, and avoids the "Takeout" zip mess.

Here is the `docker-compose.yml` configuration I used to bridge the gap between Google Photos and my local Synology Photos library.

version: '3.8'
services:
  gphotos-sync:
    image: gilesknap/gphotos-sync:3.1.2
    container_name: gphotos_migrator
    user: "1026:100" # Match PUID/PGID of your Synology user
    volumes:
      - /volume1/photo/GoogleArchive:/storage # Target folder in Synology Photos
      - ./config:/config # Stores OAuth credentials
    environment:
      - LOG_LEVEL=info
      # Set to true to skip videos if storage is tight
      - SKIP_VIDEO=false
    # Run once and exit; schedule re-runs via cron or DSM Task Scheduler instead of leaving it looping
    entrypoint: ["/bin/sh", "-c", "gphotos-sync /storage"]
    restart: "no"

# Note: You must first authenticate locally to generate the client_secret.json
# and place it in the ./config volume.
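Because `restart` is set to `"no"`, the container exits when a sync pass finishes, so running it is just a foreground `docker-compose up`. A minimal usage sketch, assuming the compose file lives in a folder such as `/volume1/docker/gphotos` (the path is an assumption):

# One-off run from the folder containing docker-compose.yml; the container
# exits when the sync pass completes because restart is "no".
cd /volume1/docker/gphotos && docker-compose up

# For ongoing syncs, schedule the same command via cron or DSM Task Scheduler
# rather than leaving the container looping, e.g. nightly at 02:00:
# 0 2 * * * cd /volume1/docker/gphotos && docker-compose up >> sync.log 2>&1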

The key parameter here is the volume mapping into `/volume1/photo`. Synology Photos monitors this directory. By injecting files directly into the indexing path with the correct user ID (`1026` in my case; check yours by running the `id` command over SSH), we ensure that the Synology indexing service picks them up immediately with the correct permissions.
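A quick way to confirm the values for the `user:` field, and to hand ownership back if an earlier run already wrote files as root (the username below is just an example):

# Print the UID/GID to use in the compose file's "user:" field
# (the username is an example; substitute your DSM account).
id your_dsm_user
# -> uid=1026(your_dsm_user) gid=100(users) groups=...

# If a previous run wrote files as root, reassign them to that user
sudo chown -R 1026:100 /volume1/photo/GoogleArchive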

Optimization Tip: If your Synology NAS supports it, install the "Advanced Media Extensions" package. This is required for hardware-accelerated HEIC decoding, which significantly speeds up thumbnail generation after the migration.

Performance & Cost Analysis

After migrating roughly 120,000 assets, I compared the performance and usability of both systems. The test environment was a Synology DS923+ (Ryzen R1600, 4GB ECC RAM) versus a Google One 2TB plan accessed via Fiber internet.

| Metric | Google Photos | Synology Photos (Local) | Synology Photos (Remote) |
| --- | --- | --- | --- |
| Thumbnail Load | Instant (<100ms) | Instant (LAN) | Variable (200ms - 1s) |
| Search (AI) | Excellent ("Dog on beach") | Good ("Dog", "Beach") | N/A |
| Video Scrubbing | Adaptive Bitrate (Smooth) | Direct Stream (High Bandwidth) | Buffering on 4G/5G |
| 5-Year Cost | $600+ (Subscription) | $550 (Hardware, One-off) | $0 (Recurring) |

The numbers reveal the truth: Google wins on remote accessibility and AI intelligence. Google's object recognition is semantic—it understands context. Synology's recognition is literal—it identifies objects but lacks the nuance. However, Synology Photos wins entirely on throughput for local management. Editing a 4K video file over 10GbE LAN from the NAS is seamless; trying to do that from the cloud is impossible without downloading first.

Edge Cases & Network Hurdles

While the hardware works well, the network configuration is where many engineers stumble. Accessing Synology Photos outside your home network requires exposing your NAS to the internet.

The QuickConnect Trap: Synology offers "QuickConnect" to bypass firewalls and CGNAT. While convenient, it relays traffic through Synology's servers, which severely caps your transfer speeds. For a Google Photos-like experience on mobile data, you should not use QuickConnect.

The Best Practice: Set up a Reverse Proxy (using Nginx or Traefik) and a DDNS service. Open port 443 only and route traffic to the Photos application. This allows direct connection speeds. If you are behind a CGNAT (common with ISPs like Starlink), you may need to use a tunneling service like Cloudflare Tunnel or Tailscale.
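If you end up behind CGNAT and go the tunnel route, Tailscale is the lower-effort option on DSM: there is a package for it in Package Center and no ports need to be opened. A minimal sketch, assuming the package is installed and the `tailscale` CLI is reachable over SSH (the binary location can differ between DSM versions):

# Join the NAS to your tailnet; the first run prints a login URL to authorize.
sudo tailscale up

# Note the NAS's tailnet IP and point the Synology Photos mobile app at it
tailscale ip -4
tailscale status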

Security Notice: Exposing your NAS to the internet increases your attack surface. Ensure you have 2FA enabled and "Auto Block" configured for failed login attempts.

Conclusion

Moving from Google Photos to Synology Photos is not just a change of app; it's a shift in philosophy. You gain absolute privacy, zero compression, and no monthly fees. In exchange, you lose the world-class AI search and the "it just works" reliability of Google's global CDN. For the general public, Google remains the king of convenience. But for those of us who want to own our digital legacy, the Synology ecosystem offers a robust, albeit hands-on, sanctuary for our memories.
