# GigHive

The GigHive app uses a memory-efficient streaming upload system with real network progress tracking and proper cancellation handling.
## Upload Size Limits

File: GigHive/Sources/App/AppConstants.swift

```swift
enum AppConstants {
    static let MAX_UPLOAD_SIZE_BYTES: Int64 = 6_442_450_944 // 6 GB

    static var MAX_UPLOAD_SIZE_FORMATTED: String {
        ByteCountFormatter.string(fromByteCount: MAX_UPLOAD_SIZE_BYTES, countStyle: .file)
    }
}
```
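For illustration, a minimal pre-flight check built on these constants (the `validateFileSize` helper is hypothetical, not part of the codebase):

```swift
import Foundation

// Hypothetical helper: reject oversized files before any bytes are streamed.
func validateFileSize(at url: URL) throws {
    let size = try url.resourceValues(forKeys: [.fileSizeKey]).fileSize ?? 0
    guard Int64(size) <= AppConstants.MAX_UPLOAD_SIZE_BYTES else {
        throw NSError(domain: "Upload", code: 413, userInfo: [
            NSLocalizedDescriptionKey:
                "File exceeds \(AppConstants.MAX_UPLOAD_SIZE_FORMATTED)"
        ])
    }
}
```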
File: ~/scripts/gighive/ansible/roles/docker/files/apache/Dockerfile

```dockerfile
RUN sed -i 's/upload_max_filesize = .*/upload_max_filesize = 6144M/' /etc/php/${PHP_VERSION}/fpm/php.ini && \
    sed -i 's/post_max_size = .*/post_max_size = 6144M/' /etc/php/${PHP_VERSION}/fpm/php.ini
```
File: ~/scripts/gighive/ansible/roles/docker/templates/modsecurity.conf.j2

```apache
# Global limits
SecRequestBodyLimit 6442450944
SecRequestBodyNoFilesLimit 6442450944

# Upload endpoint specific
<LocationMatch "^/api/uploads\.php$">
    SecRequestBodyLimit 6442450944
</LocationMatch>
```
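Sanity check that the PHP and ModSecurity values describe the same limit: 6144 MB × 1024 × 1024 bytes/MB = 6,442,450,944 bytes, so `6144M` in php.ini equals the raw byte count used by ModSecurity and by `AppConstants.MAX_UPLOAD_SIZE_BYTES`.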
File: ~/scripts/gighive/ansible/roles/docker/files/apache/webroot/src/Validation/UploadValidator.php

```php
// Default to 6 GB if not specified; allow override via env UPLOAD_MAX_BYTES
$env = getenv('UPLOAD_MAX_BYTES');
$defaultMax = 6 * 1024 * 1024 * 1024; // 6 GB, calculated at runtime
$this->maxBytes = $maxBytes ?? ($env !== false && ctype_digit((string)$env) ? (int)$env : $defaultMax);
```
Use the `UPLOAD_MAX_BYTES` environment variable to override the default.

Note: All limits are now aligned at 6 GB across the iOS app, ModSecurity, PHP configuration, and the application validator.
## Filename Sequence Padding

Files: ~/scripts/gighive/ansible/roles/docker/templates/.env.j2 and ~/scripts/gighive/ansible/roles/docker/files/apache/webroot/src/Services/UploadService.php (lines 76-77)

Examples:
- `FILENAME_SEQ_PAD=5` → `stormpigs20251010_00001_mysong.mp4`
- `FILENAME_SEQ_PAD=3` → `stormpigs20251010_001_mysong.mp4`

Code:
```php
$padWidthEnv = getenv('FILENAME_SEQ_PAD');
$padWidth = is_string($padWidthEnv) && ctype_digit($padWidthEnv) ? max(1, min(9, (int)$padWidthEnv)) : 5;
$seqPadded = str_pad((string)$seq, $padWidth, '0', STR_PAD_LEFT);
```
Note: This is a deployment-specific preference and can be configured per environment without code changes.
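For illustration only, the same clamping and padding rule expressed in Swift (a hypothetical mirror, handy if the app ever needs to predict server-generated filenames):

```swift
// Hypothetical Swift mirror of the server's padding rule (not in the codebase).
func paddedSequence(_ seq: Int, padWidth: Int = 5) -> String {
    let width = max(1, min(9, padWidth))      // clamp to 1...9, matching the PHP code
    return String(format: "%0\(width)d", seq) // e.g. seq 1, width 5 -> "00001"
}
```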
## Architecture

```
UploadView.doUpload()
    ↓
UploadClient.uploadWithMultipartInputStream()
    ↓
NetworkProgressUploadClient.uploadFile() [async/await]
    ↓
URLSession delegates (streaming + progress)
```
### UploadView.swift

Role: UI Layer

Responsibilities:
- Kick off the upload inside a cancellable `Task`
- Update the progress UI from the progress callback
- Cancel the task and the network client when the user taps the button again

Key Code:
```swift
uploadTask = Task {
    let (status, data, _) = try await client.uploadWithMultipartInputStream(
        payload,
        progress: { completed, total in
            // Update UI with progress
        }
    )
}

// Cancel on button press:
uploadTask?.cancel()
currentUploadClient?.cancelCurrentUpload()
```
### UploadClient.swift

Role: Public API / Cancellation Handler

Responsibilities:
- Expose `uploadWithMultipartInputStream()` as the public entry point
- Create and retain the `NetworkProgressUploadClient` instance
- Bridge Swift task cancellation to the network layer via `withTaskCancellationHandler`

Key Code:
```swift
func uploadWithMultipartInputStream(...) async throws -> (...) {
    let networkClient = NetworkProgressUploadClient(...)
    self.currentNetworkClient = networkClient

    return try await withTaskCancellationHandler {
        try await networkClient.uploadFile(
            payload: payload,
            progressHandler: { completed, total in
                progress?(completed, total)
            }
        )
    } onCancel: {
        print("⚠️ Task cancelled - cancelling network upload")
        networkClient.cancelUpload()
    }
}
```
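Design note: the client stores the `NetworkProgressUploadClient` in `currentNetworkClient` before awaiting, so that `cancelCurrentUpload()`, invoked from the UI, can reach the same instance the suspended task is using.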
Why it exists: cancelling a Swift `Task` does not by itself stop an in-flight `URLSession` transfer. `withTaskCancellationHandler` turns task cancellation into an explicit `cancelUpload()` call on the network client, so the underlying upload task is torn down immediately.
### NetworkProgressUploadClient.swift

Role: Core Upload Implementation

Responsibilities:
- Report real network progress from the `didSendBodyData` delegate callback
- Complete or fail the upload from the `didCompleteWithError` delegate callback

Key Features:
- Streams the request body instead of buffering the file in memory
- Bridges URLSession delegate callbacks to async/await with a checked continuation
Key Code:
```swift
func uploadFile(...) async throws -> (status: Int, data: Data, requestURL: URL) {
    return try await withCheckedThrowingContinuation { continuation in
        self.continuation = continuation

        // Create MultipartInputStream
        let stream = MultipartInputStream(...)

        // Start URLSession upload task
        let task = session.uploadTask(withStreamedRequest: request)
        task.resume()

        // Delegates will resume the continuation when done
    }
}

// URLSession delegate:
func urlSession(..., didCompleteWithError error: Error?) {
    if let error = error {
        continuation?.resume(throwing: error)
    } else {
        continuation?.resume(returning: (status, data, url))
    }
}
```
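The real progress reporting mentioned above comes from the task delegate. A sketch of the two delegate methods involved (the method signatures are standard `URLSessionTaskDelegate` API; `multipartStream` and `progressHandler` are assumed property names):

```swift
// Supplies the body stream for uploadTask(withStreamedRequest:).
func urlSession(_ session: URLSession,
                task: URLSessionTask,
                needNewBodyStream completionHandler: @escaping (InputStream?) -> Void) {
    completionHandler(multipartStream)
}

// Fires as bytes actually leave the device, i.e. real network progress.
func urlSession(_ session: URLSession,
                task: URLSessionTask,
                didSendBodyData bytesSent: Int64,
                totalBytesSent: Int64,
                totalBytesExpectedToSend: Int64) {
    progressHandler?(totalBytesSent, totalBytesExpectedToSend)
}
```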
### MultipartInputStream.swift

Role: Streaming Data Source

Responsibilities:
- Implement the `InputStream` protocol so URLSession can pull multipart body bytes on demand

Phases (served in order, as in the sketch below):
1. Multipart header (boundary and part headers)
2. File body, read from disk in chunks
3. Closing boundary (footer)
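A condensed sketch of the phase-driven read loop (assumed structure and property names; the real class is ~200 lines):

```swift
// Assumed shape: serve header bytes, then file bytes, then footer bytes.
// `phase`, `copy(_:into:maxLength:)`, `headerData`, `footerData`, and
// `fileStream` are illustrative names, not the real implementation's.
override func read(_ buffer: UnsafeMutablePointer<UInt8>, maxLength len: Int) -> Int {
    switch phase {
    case .header:
        return copy(headerData, into: buffer, maxLength: len)   // multipart preamble
    case .file:
        return fileStream.read(buffer, maxLength: len)          // chunked reads from disk
    case .footer:
        return copy(footerData, into: buffer, maxLength: len)   // closing boundary
    case .done:
        return 0                                                // signals end of stream
    }
}
// Each copy/read advances `phase` when its part is exhausted.
```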
## Upload Flow (Happy Path)

```
1. User selects video.mov (6GB)
   ↓
2. UploadView creates UploadPayload
   ↓
3. UploadClient.uploadWithMultipartInputStream()
   ↓
4. NetworkProgressUploadClient.uploadFile()
   ↓
5. MultipartInputStream created
   - Calculates total content length: 6GB file + multipart headers/footers
   ↓
6. URLSession.uploadTask(withStreamedRequest:)
   - Reads from MultipartInputStream in chunks
   - Streams directly to network
   ↓
7. Progress callbacks fire:
   - didSendBodyData: 300MB / 6GB (5%)
   - didSendBodyData: 600MB / 6GB (10%)
   - ... etc
   ↓
8. UploadView updates UI: "10%..."
   ↓
9. Upload completes
   ↓
10. didCompleteWithError fires
    ↓
11. Continuation resumes with (status: 201, data: {...}, url: ...)
    ↓
12. UploadView shows success alert
```
Memory used: ~10-20MB (buffers only, not the 6GB file!)
## Cancellation Flow

```
1. User hits the "Upload" button again while it shows "Uploading..."
   ↓
2. UploadView calls:
   - uploadTask?.cancel()
   - currentUploadClient?.cancelCurrentUpload()
   ↓
3. Task cancellation detected by withTaskCancellationHandler
   ↓
4. onCancel block fires:
   - networkClient.cancelUpload()
   ↓
5. NetworkProgressUploadClient.cancelUpload():
   - currentUploadTask?.cancel()
   ↓
6. URLSession cancels the task
   ↓
7. didCompleteWithError fires with URLError.cancelled
   ↓
8. Continuation resumes: continuation?.resume(throwing: error)
   ↓
9. UploadView catches the error, logs "cancelled" in the debug log
   ↓
10. UI returns to normal state
```
Result: Clean cancellation, no continuation leak! ✅
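A sketch of the hand-off in steps 5-8 (assumed shape of `cancelUpload()`; the continuation is cleared after resuming so it can never be resumed twice):

```swift
// Assumed shape: cancelling the URLSession task routes everything through
// the normal completion path.
func cancelUpload() {
    currentUploadTask?.cancel()  // URLSession then fires didCompleteWithError
                                 // with URLError(.cancelled)
}
```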
## Why This Design

- `Data(contentsOf: url)` loads the entire 6GB file into RAM → crashes
- `MultipartInputStream` reads chunks → uses ~10MB of RAM
- `URLSession.upload(from: data)` reports fake progress (buffer fill, not network)
- The `didSendBodyData` delegate reports REAL network progress
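Side by side, the failure mode and the fix (illustrative only; `session`, `request`, and `fileURL` are assumed to be in scope):

```swift
// ❌ Buffers the whole file: a 6 GB Data allocation, and "progress" tracks
//    URLSession's internal buffer, not the network.
let body = try Data(contentsOf: fileURL)
let (data, response) = try await session.upload(for: request, from: body)

// ✅ Streams from disk: the delegate supplies an InputStream, memory stays
//    around 10 MB, and didSendBodyData reports bytes actually sent.
let task = session.uploadTask(withStreamedRequest: request)
task.resume()
```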
| File | Lines | Purpose |
|---|---|---|
| UploadView.swift | ~750 | UI and user interaction |
| UploadClient.swift | ~105 | Public API + cancellation handling |
| NetworkProgressUploadClient.swift | ~300 | Core upload implementation |
| MultipartInputStream.swift | ~200 | Streaming data source |
| UploadPayload.swift | ~15 | Data model |
| AppConstants.swift | ~15 | Max upload size constant |
Total: ~1,385 lines of upload-related code
## Single Source of Truth for the Upload Limit

Current state: the 6 GB upload limit is defined in 4 separate locations.

Goal: use an Ansible variable as the single source of truth for the server-side systems.

After defining `upload_max_bytes` and `upload_max_mb` in Ansible variables (Step 1 - DONE), update these files:
File: ~/scripts/gighive/ansible/roles/docker/files/apache/Dockerfile

```dockerfile
# Change from hardcoded values:
RUN sed -i 's/upload_max_filesize = .*/upload_max_filesize = 6144M/' ...

# To the Ansible variable:
RUN sed -i 's/upload_max_filesize = .*/upload_max_filesize = {{ upload_max_mb }}M/' ...
```
File: ~/scripts/gighive/ansible/roles/docker/templates/modsecurity.conf.j2

```apache
# Change from hardcoded values:
SecRequestBodyLimit 6442450944

# To the Ansible variable:
SecRequestBodyLimit {{ upload_max_bytes }}
```
File: ~/scripts/gighive/ansible/roles/docker/files/apache/webroot/src/Validation/UploadValidator.php

Option A: Pass via environment variable (recommended)
- The validator already reads the `UPLOAD_MAX_BYTES` env var
- Set `UPLOAD_MAX_BYTES` in `.env.j2` using the Ansible variable:

```
UPLOAD_MAX_BYTES={{ upload_max_bytes }}
```

Option B: Generate the PHP file from a template
- Convert the file to an `UploadValidator.php.j2` template

File: GigHive/Sources/App/AppConstants.swift
⚠️ This must be manually updated when the Ansible variable changes.

Add a comment linking to the Ansible variable:

```swift
// IMPORTANT: Keep in sync with Ansible variable 'upload_max_bytes'
// Location: ~/scripts/gighive/ansible/group_vars/all.yml
static let MAX_UPLOAD_SIZE_BYTES: Int64 = 6_442_450_944 // 6 GB
```
Architecture is clean, efficient, and maintainable! 🎉