The Problem: Large Video Files Break Everything
Video production teams work with massive files. A typical 4K video project includes raw footage (2-5 GB per 10-minute clip), master files (8-12 GB for a finished 30-minute video), and project assets (3-5 GB of B-roll, graphics, and audio).
When editors try to upload these to the cloud via a web browser, they hit walls:
Standard HTTP upload issues:
- ⏱️ Timeouts: Most servers kill connections after 30-60 seconds of inactivity
- 🔌 Network hiccups: One disconnection = start over from 0%
- 💾 Memory limits: Browsers crash trying to hold 10GB in RAM
- 🌐 Proxy limits: Corporate networks often have 1-2GB request size limits
- 😓 User experience: Upload shows 98% progress, then fails → rage quit
The specific scenario: A video editor uploads a 10GB file. It takes 45 minutes on their connection. At 43 minutes (96% complete), their WiFi hiccups for 2 seconds. The upload fails. They have to restart from scratch.
This happened to our users multiple times per day.
Why "Just Use Multipart Upload" Isn't Enough
Cloud storage providers support multipart uploads, but there are still critical problems:
Browser limitations: Reliably reading a 10GB file chunk by chunk from a web browser, and retrying failed chunks, is fragile; naive implementations pull the entire file into memory and crash the tab. Solution: Desktop application with direct file system access.
Progress tracking complexity: When uploading 20 chunks in parallel, calculating accurate total progress isn't straightforward. Solution: Weighted progress calculation based on chunk sizes.
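A minimal sketch of the weighted calculation in TypeScript (the names here are illustrative): each chunk reports how many bytes it has uploaded, and overall progress is uploaded bytes over total bytes, so larger chunks count for more.

```typescript
interface ChunkProgress {
  size: number;     // chunk length in bytes (the final chunk is usually smaller than 512MB)
  uploaded: number; // bytes confirmed uploaded so far for this chunk
}

// Overall progress is uploaded bytes over total bytes, so a half-finished
// 512MB chunk moves the bar exactly as much as it should.
function totalProgress(chunks: ChunkProgress[]): number {
  const totalBytes = chunks.reduce((sum, c) => sum + c.size, 0);
  const uploadedBytes = chunks.reduce((sum, c) => sum + Math.min(c.uploaded, c.size), 0);
  return totalBytes === 0 ? 0 : uploadedBytes / totalBytes;
}
```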
Resume after failure: If chunk 15 out of 20 fails, how do you know which chunks succeeded? Solution: Track each chunk's status in a local database.
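The per-chunk record can be as simple as this (a sketch; the field names and status values are illustrative, and the local store could be SQLite or anything else that survives a restart):

```typescript
type ChunkStatus = 'pending' | 'uploading' | 'completed' | 'failed';

// Persisted locally so that after a crash or restart the app knows exactly
// which chunks already made it to storage and which still need to go.
interface ChunkRecord {
  uploadId: string;   // server-side upload session id
  chunkIndex: number; // 0-based position of the chunk in the file
  byteOffset: number; // where this chunk starts in the source file
  size: number;       // chunk length in bytes
  status: ChunkStatus;
  etag?: string;      // identifier returned by storage on success, if the provider needs it to finalize
}
```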
Network saturation: Uploading too many chunks simultaneously can actually slow down the overall upload. Solution: Limit concurrent uploads to 3 chunks at a time (tested optimal for most connections).
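The limit itself is only a few lines: a fixed pool of workers pulling chunk indexes from a shared queue (a sketch; `uploadChunk` stands in for whatever performs a single chunk upload):

```typescript
// Upload chunks with at most `limit` in flight at once; as soon as one
// finishes, the next pending chunk starts.
async function uploadWithLimit(
  chunkIndexes: number[],
  uploadChunk: (index: number) => Promise<void>,
  limit = 3,
): Promise<void> {
  const queue = [...chunkIndexes];
  const worker = async () => {
    while (queue.length > 0) {
      const index = queue.shift()!;
      await uploadChunk(index);
    }
  };
  // Three workers over one shared queue = exactly 3 concurrent uploads.
  await Promise.all(Array.from({ length: limit }, () => worker()));
}
```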
Our Architecture: Desktop App + Chunked Upload Manager
How it works:
- User selects a file; the desktop app requests an upload session from the server
- Server creates a "pending upload" record and generates presigned URLs (one per chunk, valid for 24 hours)
- Desktop app splits the file into 512MB chunks (the sweet spot for most networks)
- App uploads 3 chunks simultaneously; as each completes, the next chunk starts immediately (see the client-side sketch after this list)
- Real-time progress: aggregate chunk progress, weighted by chunk size
- After all chunks are uploaded, the server verifies every chunk exists and marks the upload complete
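A condensed client-side sketch of steps 3-6, reusing the `uploadWithLimit` helper above. It assumes Node-style file access in the desktop app and a session object holding one presigned PUT URL per chunk; `UploadSession`, `readChunk`, and the completion step are illustrative, not our exact API.

```typescript
import { open, stat } from 'node:fs/promises';

const CHUNK_SIZE = 512 * 1024 * 1024; // 512MB

// Hypothetical session shape: the server hands back one presigned URL per chunk.
interface UploadSession {
  uploadId: string;
  presignedUrls: string[]; // presignedUrls[i] accepts a PUT of chunk i
}

// Read one 512MB slice straight from disk -- only this slice is ever in memory.
async function readChunk(filePath: string, index: number, fileSize: number): Promise<Buffer> {
  const offset = index * CHUNK_SIZE;
  const length = Math.min(CHUNK_SIZE, fileSize - offset);
  const handle = await open(filePath, 'r');
  try {
    const buffer = Buffer.alloc(length);
    await handle.read(buffer, 0, length, offset);
    return buffer;
  } finally {
    await handle.close();
  }
}

async function uploadFile(filePath: string, session: UploadSession): Promise<void> {
  const { size } = await stat(filePath);
  const chunkCount = Math.ceil(size / CHUNK_SIZE);
  const indexes = Array.from({ length: chunkCount }, (_, i) => i);

  // 3 chunks in flight at a time; each PUT goes to that chunk's presigned URL.
  await uploadWithLimit(indexes, async (i) => {
    const body = await readChunk(filePath, i, size);
    const res = await fetch(session.presignedUrls[i], { method: 'PUT', body });
    if (!res.ok) throw new Error(`Chunk ${i} failed with HTTP ${res.status}`);
  }, 3);

  // Step 6 -- asking the server to verify the chunks and mark the upload
  // complete -- happens after this resolves.
}
```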
The Technical Challenge: Graceful Failure Handling
Network disconnection scenario: User's WiFi drops while uploading chunk 8.
What happens: Chunk 8's upload fails with a network error → Desktop app detects the failure → App pauses for 2 seconds (exponential backoff) → App retries chunk 8 → Upload resumes from where it left off.
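A sketch of that retry loop (the attempt cap and the 60-second ceiling are illustrative; the text above only commits to the initial 2-second pause and backing off from there):

```typescript
// Retry one chunk with exponential backoff: wait 2s, then 4s, 8s, ...
// `uploadChunk` is whatever performs the single chunk upload and throws on failure.
async function uploadChunkWithRetry(
  uploadChunk: () => Promise<void>,
  maxAttempts = 5,
): Promise<void> {
  let delayMs = 2_000;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await uploadChunk();
      return; // success -- the progress bar just keeps moving
    } catch (err) {
      if (attempt === maxAttempts) throw err; // give up and surface the error
      await new Promise((resolve) => setTimeout(resolve, delayMs));
      delayMs = Math.min(delayMs * 2, 60_000); // back off, capped at one minute
    }
  }
}
```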
User experience: Progress bar pauses briefly, then continues. No manual intervention needed.
Application crash scenario: Desktop app crashes (power outage, forced quit, system update).
What happens: User restarts desktop app → App checks local database for incomplete uploads → App shows: "Resume upload of project_final.mp4 (6.4GB / 10GB uploaded)?" → User clicks "Resume" → App skips chunks 1-12 (already uploaded), continues with chunk 13.
User experience: Never start from scratch. Always resume from last successful chunk.
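The resume path falls out of the chunk records and the concurrency helper from earlier (a sketch; `loadChunkRecords` is a hypothetical accessor over the local database):

```typescript
// On restart, re-queue every chunk that isn't marked 'completed' in the
// local database; chunks that already made it are simply skipped.
async function resumeUpload(
  uploadId: string,
  loadChunkRecords: (uploadId: string) => Promise<ChunkRecord[]>,
  uploadChunk: (index: number) => Promise<void>,
): Promise<void> {
  const records = await loadChunkRecords(uploadId);
  const remaining = records
    .filter((record) => record.status !== 'completed')
    .map((record) => record.chunkIndex);

  // Same 3-way concurrency as a fresh upload, just over fewer chunks.
  await uploadWithLimit(remaining, uploadChunk, 3);
}
```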
This resilience is what makes the system actually work in production. Without it, large file uploads would still be a nightmare for users on unstable connections.
Real-World Performance
Before chunked uploads (standard HTTP POST):
- ⏱️ Success rate: 40% for files over 2GB (60% failed)
- 😓 User experience: "I tried uploading 4 times before it worked"
- 🔌 Network resilience: Zero tolerance for disconnections
- 💻 Browser crashes: Frequent for files over 5GB
After chunked uploads (desktop app + 512MB chunks):
- ⏱️ Success rate: 99.2% for files over 10GB
- ✨ User experience: "I can upload while working, pause for lunch, and it just works"
- 🔌 Network resilience: Automatic retry, exponential backoff
- 💻 Memory usage: Stable (only 512MB in memory at a time, not the entire 10GB)
Upload times (10GB file):
- Fast connection (100 Mbps): 15-18 minutes
- Medium connection (25 Mbps): 60-70 minutes
- Slow connection (5 Mbps): 5-6 hours (but still succeeds with resume support)
Peak usage stats:
- Largest successful upload: 47GB (4K raw footage, took 8 hours on slow connection)
- Most resumed upload: 14 resume events (user on unstable mobile hotspot, upload succeeded)
- Concurrent uploads: 120+ users uploading simultaneously during peak hours
What We Learned
1. Chunk size matters more than you think
We tested 64MB chunks (too many chunks; the per-chunk overhead adds up), 1GB chunks (too large; harder to resume), and 512MB chunks (the Goldilocks zone). Test with real files on real connections. Theory doesn't always match reality.
2. Limit concurrent uploads
An early version uploaded 10 chunks simultaneously. Result: each chunk competed for bandwidth and the overall upload was slower. Testing showed 3 concurrent uploads is optimal for most connections. More parallelism isn't always better.
3. Progress feedback changes user behavior
When we added real-time progress tracking, support tickets about "slow uploads" dropped 70%. Users didn't need uploads to be faster—they needed to know the upload was working. Visibility matters as much as performance.
Building a platform that handles large file uploads, video/media processing, or user-generated content at scale? Let's talk →
We've built chunked upload systems for video platforms, medical imaging platforms, and CAD file management. The pattern is always the same: chunk it, upload in parallel, track progress, handle failures gracefully.