Google Photos vs Synology Photos: Why I Migrated 2TB to Local NAS

It started with a seemingly benign notification: "Account storage is 90% full." For years, I had treated Google Photos as a bottomless sink for my digital memories, uploading gigabytes of RAW images and 4K videos without a second thought. But when Google ended its unlimited free storage tier, the reality of the "SaaS tenant" model hit hard. I calculated the projected cost of storing 2TB of data on a cloud subscription over the next decade, and the numbers were staggering, not just in dollars but in what I was giving up in data sovereignty. The latency of retrieving archived footage, plus the realization that my data could be used to train AI models, forced a pivot. I needed an exit strategy that offered the convenience of cloud AI with the control of bare metal.

The Cloud Trap: Latency and Privacy Analysis

Technically, Google Photos is a marvel of distributed engineering. It abstracts away the complexity of redundancy, indexing, and availability. However, for a power user or engineer, this abstraction is a double-edged sword. When I analyzed the network traffic during a bulk export (using Google Takeout), I noticed significant throttling. My gigabit fiber connection was underutilized, capped by server-side rate limits. Furthermore, the compression algorithms used in the "Storage Saver" tier, while efficient, introduce generation loss that is unacceptable for archival purposes.
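
For reference, here is the kind of crude throughput probe I ran against a single Takeout archive. It is a minimal sketch: the URL below is a placeholder, since real Takeout download links are signed and expire quickly.

# takeout_probe.py: rough download-throughput check for one Takeout archive.
# The URL is a stand-in; paste the signed link from your own Takeout export.
import time
import urllib.request

ARCHIVE_URL = "https://takeout.example.com/archive-001.tgz"  # hypothetical placeholder

start = time.monotonic()
received = 0
with urllib.request.urlopen(ARCHIVE_URL) as resp:
    while True:
        chunk = resp.read(1024 * 1024)  # stream in 1 MiB reads
        if not chunk:
            break
        received += len(chunk)

elapsed = time.monotonic() - start
print(f"{received / 1e6:.0f} MB in {elapsed:.0f} s -> {received / 1e6 / elapsed:.1f} MB/s")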

The alternative, Synology Photos running on a dedicated NAS (Network Attached Storage), flips this architecture. Instead of relying on a distributed global CDN, you bring the compute to the data. I deployed a Synology DS923+ with 32GB of ECC RAM. The immediate benefit was local network throughput. Transferring files over SMB or the Photos mobile backup agent within the LAN saturated the 1GbE link (approx. 110MB/s), and with SMB Multichannel enabled, I saw speeds nearing 200MB/s on WiFi 6 clients. This is a fundamental shift from the "rented" model to an "owned" infrastructure.
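
If you want to sanity-check your own LAN numbers, a simple sequential-write probe against the mounted share is enough. This is a rough sketch only; the mount point and file size are placeholders, not the exact setup described above.

# nas_write_probe.py: crude sequential-write benchmark against a mounted NAS share.
# Assumes the Synology share is already mounted (e.g. over SMB) at MOUNT_POINT.
import os
import time

MOUNT_POINT = "/mnt/nas/photos"                 # hypothetical mount point
TEST_FILE = os.path.join(MOUNT_POINT, "throughput_probe.bin")
CHUNK = b"\0" * (4 * 1024 * 1024)               # 4 MiB per write
TOTAL_MIB = 2048                                # ~2 GiB, large enough to dodge caches

start = time.monotonic()
with open(TEST_FILE, "wb") as f:
    for _ in range(TOTAL_MIB // 4):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())                        # make sure the data hit the NAS, not just RAM
elapsed = time.monotonic() - start

print(f"Wrote {TOTAL_MIB} MiB in {elapsed:.1f} s -> {TOTAL_MIB / elapsed:.0f} MiB/s")
os.remove(TEST_FILE)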

Privacy Warning: When you agree to Google's TOS, you grant them a license to host, reproduce, and create derivative works from your content. With a local NAS, the data never leaves your subnet unless you explicitly tunnel it out.

Why the DIY "Nextcloud on Pi" Approach Failed

Before settling on Synology, I attempted a pure open-source route: hosting Nextcloud on a Raspberry Pi 4 with an external USB HDD. In theory, this was the most cost-effective solution. In practice, it was a maintenance nightmare.

The first bottleneck was I/O throughput. The USB 3.0 bus on the Pi, shared with other peripherals, choked during the indexing of 50,000+ photos. The database (MariaDB) grew rapidly, and the SD card hosting the OS inevitably succumbed to write-wear corruption within six months. More critically, the "AI" facial recognition plugins for Nextcloud were CPU-bound and painfully slow on ARM architecture without NPU acceleration. It would take weeks to re-index faces after an update. I realized that "free" software often costs significantly more in engineering hours. I needed a solution that decoupled storage management from the application layer, which led me to the Synology ecosystem.

The Solution: Synology Photos with Reverse Proxy

Migrating to Synology Photos solved the hardware stability issue, but accessing the NAS securely from outside the home network (to mimic the Google Photos experience) required a robust network configuration. Opening default ports (5000/5001) to the WAN is a security suicide mission.

Instead, I implemented a Reverse Proxy using Nginx (built into DSM's login portal settings, but configurable via SSH for advanced tuning). This allows me to access my photos via a clean domain (e.g., `photos.mydomain.com`) over HTTPS/TLS 1.3, while keeping the NAS management ports closed to the internet. Here is the Nginx configuration block I used to harden the connection headers.

# /etc/nginx/sites-enabled/photos-proxy.conf
# Standard Nginx reverse proxy block for Synology
server {
    listen 443 ssl http2;
    server_name photos.mydomain.com;

    # SSL certificate paths (Let's Encrypt, auto-generated by DSM)
    ssl_certificate /usr/syno/etc/certificate/system/default/fullchain.pem;
    ssl_certificate_key /usr/syno/etc/certificate/system/default/privkey.pem;

    location / {
        proxy_pass http://localhost:5000;  # forward to the internal DSM port

        # CRITICAL: pass the real client IP through to the NAS logs
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;

        # WebSocket support for live notifications
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # Security headers to prevent clickjacking and MIME sniffing
        add_header X-Frame-Options "SAMEORIGIN";
        add_header X-Content-Type-Options "nosniff";
    }
}

Let's break down the critical logic here. The `proxy_set_header X-Real-IP` directive is mandatory; without it, the Synology logs will show all traffic coming from `127.0.0.1` (localhost), making intrusion detection systems like "Auto Block" useless. If an attacker tries to brute-force your login, the NAS needs the actual WAN IP to ban it. Additionally, the `Upgrade` and `Connection` headers are essential for the WebSocket connections that Synology Photos uses to push real-time upload status updates to the web client.
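
A quick way to confirm the proxy is behaving from the outside is to check that the security headers actually come back on a TLS request. This sketch uses the third-party requests library and the placeholder domain from the config above.

# proxy_header_check.py: verify the reverse proxy returns the hardened headers.
# Uses the placeholder domain from the Nginx block; requires `pip install requests`.
import requests

resp = requests.get("https://photos.mydomain.com/", timeout=10)
print("HTTP status:", resp.status_code)
for header in ("X-Frame-Options", "X-Content-Type-Options"):
    print(f"{header}: {resp.headers.get(header, 'MISSING')}")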

Benchmark: Cost & Performance Over 5 Years

The initial capital expenditure (CapEx) for a NAS is high, but the operational expenditure (OpEx) is near zero. Conversely, cloud storage is low CapEx, high OpEx. Here is the breakdown for 2TB of data over a 5-year horizon.

| Metric | Google Photos (2TB Plan) | Synology DS923+ (2x4TB RAID 1) |
| --- | --- | --- |
| 5-Year Cost | ~$600 (Subscription) | ~$550 (Hardware + Drives) |
| Data Ownership | Tenant (Leased) | Owner (Sovereign) |
| AI Processing | Cloud-side (Privacy Risk) | Local Device (Private) |
| LAN Transfer Speed | N/A (Internet Speed Dependent) | 110MB/s - 250MB/s |
| Redundancy | Google Internal (Unknown) | RAID 1 (1-Drive Failure Tolerance) |

The table reveals the inflection point: within the five-year window, the NAS hardware pays for itself. However, the performance metrics are where the user experience really diverges. Browsing a library of 50,000 photos in Synology Photos over the LAN is instantaneous; there is no thumbnail buffering. The facial recognition, while taking about two days to initially index the entire library on the DS923+ (AMD Ryzen R1600 CPU), is surprisingly accurate. It correctly grouped family members and even distinguished between similar-looking pets, all without sending a single byte of biometric data to the cloud.
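
Here is a back-of-the-envelope check of that break-even point, using only the rough figures from the table above (electricity and eventual drive replacements are ignored, as they are in the table):

# breakeven.py: cumulative spend comparison using the table's rough numbers.
# ~$600 over 5 years works out to ~$120/yr for the 2TB subscription;
# the NAS is modeled as a one-time ~$550 outlay with no recurring cost.
SUBSCRIPTION_PER_YEAR = 600 / 5
NAS_CAPEX = 550

for year in range(1, 9):
    cloud_total = SUBSCRIPTION_PER_YEAR * year
    marker = "  <-- NAS is now cheaper" if NAS_CAPEX < cloud_total else ""
    print(f"Year {year}: cloud ${cloud_total:.0f} vs NAS ${NAS_CAPEX:.0f}{marker}")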


Edge Cases & Critical Caveats

While I advocate for this migration, it is not without risks. The "3-2-1 Backup Rule" becomes your personal responsibility. If your house burns down, your NAS goes with it. Unlike Google, you don't have geo-redundancy out of the box.

The Fix: I configured "Hyper Backup" on the Synology to encrypt and push a nightly incremental backup to a cheap S3-compatible bucket (such as Backblaze B2 or Wasabi). Because Hyper Backup deduplicates and compresses before upload, and the bucket is "cold" storage that is only touched in a disaster scenario, the running cost stays small compared to a second cloud subscription. Do not rely solely on RAID; RAID is redundancy, not backup. If you accidentally delete a photo, RAID will faithfully replicate that deletion to the mirrored drive instantly. You must enable "Snapshot Replication" (a Btrfs feature) to roll back accidental deletions.
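
For anyone curious what the off-site leg looks like programmatically, here is a minimal sketch of pushing an already-encrypted archive to an S3-compatible bucket with boto3. It illustrates the cold-storage target rather than replacing Hyper Backup; the endpoint, credentials, bucket, and paths are all placeholders.

# offsite_push.py: minimal upload of an encrypted archive to an S3-compatible bucket.
# Requires `pip install boto3`; credentials, endpoint, and paths are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-west-000.backblazeb2.com",  # hypothetical B2 endpoint
    aws_access_key_id="KEY_ID",
    aws_secret_access_key="APPLICATION_KEY",
)

# The archive is assumed to be encrypted on the NAS before it leaves the LAN,
# so the bucket only ever stores ciphertext.
s3.upload_file(
    Filename="/volume1/backup/photos-archive.hbk.gpg",       # hypothetical local path
    Bucket="nas-offsite",
    Key="photos/photos-archive.hbk.gpg",
)
print("Off-site copy uploaded.")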

Pro Tip: Enable Btrfs data scrubbing on a monthly schedule. This detects "bit rot" (silent data corruption) and auto-heals the affected blocks from the redundant copy on the mirrored drive, a capability Google Photos never exposes to the end user.

Conclusion

Migrating to Synology Photos is not just a financial decision; it is a declaration of data independence. While Google Photos offers unparalleled convenience and superior global search capabilities, the trade-off in privacy and long-term cost is substantial. For engineers and tech-savvy users, the combination of a Synology NAS, a proper Reverse Proxy, and an off-site backup strategy provides a superior, private, and lightning-fast alternative. You stop being a tenant in someone else's server farm and become the architect of your own digital legacy.
