# R2 Object Storage Integration

**Priority**: P0 (Immediate)
## What is R2?

Cloudflare R2 is S3-compatible object storage with zero egress fees. It is a drop-in replacement for AWS S3 using the same SDK.
## Why This Matters for Company Manager

### Current S3 Usage

The platform uses AWS S3 for file storage via `@aws-sdk/client-s3`:

- **shared-files router**: Upload/download/delete operations
- **Press center assets**: Magazine covers, article images
- **Product images**: Multi-tenant media storage
- **User uploads**: Documents, attachments
- **Tenant-scoped buckets**: Files isolated by tenantId/siteId
## Cost Comparison
| Metric | AWS S3 | Cloudflare R2 | Savings |
|---|---|---|---|
| Storage (100 GB) | $2.30/mo | $1.50/mo | 35% |
| Egress (500 GB) | $45/mo | **$0** | **100%** |
| PUT (1M ops) | $5.00 | $4.50 | 10% |
| GET (10M ops) | $4.00 | $3.60 | 10% |
| **Total** | **~$56/mo** | **~$9.60/mo** | **83%** |
The egress savings alone justify the migration.
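The table's totals can be reproduced with a quick back-of-the-envelope calculation. Unit prices are inferred from the rows above; the helper name is illustrative:

```typescript
// Monthly cost for the usage profile in the table:
// 100 GB stored, 500 GB egress, 1M PUT ops, 10M GET ops.
function monthly(
  storagePerGB: number, // $/GB-month
  egressPerGB: number,  // $/GB transferred out
  putPerM: number,      // $ per 1M PUT (Class A) ops
  getPer10M: number     // $ per 10M GET (Class B) ops
): number {
  return 100 * storagePerGB + 500 * egressPerGB + putPerM + getPer10M;
}

const s3 = monthly(0.023, 0.09, 5.0, 4.0); // ≈ $56.30
const r2 = monthly(0.015, 0.0, 4.5, 3.6);  // ≈ $9.60
const savings = Math.round((1 - r2 / s3) * 100); // ≈ 83 (%)
```

Even with identical operation counts, eliminating the $45/mo egress line drives nearly all of the saving.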
## Architecture

### Current (AWS S3)

```
Next.js App → AWS S3 Client → AWS S3 (us-east-1)
                                  ↓ egress
                              Users (global)
```

### Proposed (R2)

```
Next.js App → S3 Client → R2 (auto-region)
                              ↓ free egress
                          Users (global, via CF CDN)

Workers → R2 Binding → R2 (zero-latency from edge)
```
## Implementation

### Step 1: Create R2 Bucket

```sh
npx wrangler r2 bucket create company-manager-media
npx wrangler r2 bucket create company-manager-media-staging  # for testing
```
### Step 2: Generate S3-Compatible Credentials

```sh
# In the Cloudflare dashboard: R2 → Manage R2 API tokens
# Create a token with Object Read & Write for the company-manager-media bucket
# Get: Access Key ID + Secret Access Key
```
### Step 3: Update S3 Client Configuration

The migration is minimal because R2 is S3-compatible. Change the endpoint:

```ts
// Before (AWS S3)
import { S3Client } from "@aws-sdk/client-s3";

const s3 = new S3Client({
  region: "us-east-1",
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
  },
});

// After (R2 via the S3 API)
const s3 = new S3Client({
  region: "auto",
  endpoint: `https://${process.env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});
```
All existing S3 operations (`PutObjectCommand`, `GetObjectCommand`, `DeleteObjectCommand`, `ListObjectsV2Command`) work unchanged.
### Step 4: Update Environment Variables

```sh
# apps/app/.env
R2_ACCOUNT_ID=<cloudflare-account-id>
R2_ACCESS_KEY_ID=<r2-api-token-access-key>
R2_SECRET_ACCESS_KEY=<r2-api-token-secret>
R2_BUCKET_NAME=company-manager-media
R2_ENDPOINT=https://<account-id>.r2.cloudflarestorage.com
```
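A small startup check can catch a missing variable before the first upload fails. This is a hypothetical helper (not existing code) using the variable names from the `.env` block above:

```typescript
// Variable names from the .env block above
const REQUIRED_R2_VARS = [
  "R2_ACCOUNT_ID",
  "R2_ACCESS_KEY_ID",
  "R2_SECRET_ACCESS_KEY",
  "R2_BUCKET_NAME",
  "R2_ENDPOINT",
] as const;

// Returns the names of any missing or empty variables (empty array = config OK)
function missingR2Vars(env: Record<string, string | undefined>): string[] {
  return REQUIRED_R2_VARS.filter((name) => !env[name]);
}
```

Call `missingR2Vars(process.env)` during boot and fail fast if the result is non-empty.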
### Step 5: Worker-Native R2 Access

For Workers that need file access (current or future), use the R2 binding directly; it is faster than going through the S3 API:

```jsonc
// wrangler.jsonc
{
  "r2_buckets": [
    { "binding": "MEDIA", "bucket_name": "company-manager-media" }
  ]
}
```

```ts
// Worker code
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Object key taken from the URL path, e.g. /{tenantId}/products/{imageId}.jpg
    const key = new URL(request.url).pathname.slice(1);

    // Direct R2 binding (zero-latency from the edge)
    const object = await env.MEDIA.get(key);
    if (!object) return new Response("Not Found", { status: 404 });

    return new Response(object.body, {
      headers: {
        "content-type": object.httpMetadata?.contentType ?? "image/jpeg",
        "cache-control": "public, max-age=86400",
        "etag": object.httpEtag,
      },
    });
  },
};
```
## Multi-Tenant File Organization

### Key Structure

```
company-manager-media/
├── {tenantId}/
│   ├── products/
│   │   ├── {productId}/{filename}
│   │   └── {productId}/thumbnails/{size}/{filename}
│   ├── press-center/
│   │   ├── magazines/{magazineId}/{filename}
│   │   └── articles/{articleId}/{filename}
│   ├── shared-files/
│   │   └── {fileId}/{filename}
│   └── user-uploads/
│       └── {userId}/{filename}
```
### Tenant Isolation

```ts
// Enforce tenant scope in all R2 operations
function getR2Key(tenantId: string, path: string): string {
  // Prevent path traversal
  const sanitized = path.replace(/\.\./g, "").replace(/^\//, "");
  return `${tenantId}/${sanitized}`;
}
```
## Advanced Features

### Presigned URLs

For direct browser uploads (bypassing the server):

```ts
import { AwsClient } from "aws4fetch"; // lightweight, Workers-compatible

const r2 = new AwsClient({
  accessKeyId: env.R2_ACCESS_KEY_ID,
  secretAccessKey: env.R2_SECRET_ACCESS_KEY,
});

// Generate a presigned upload URL (valid for 1 hour);
// aws4fetch takes the expiry from the X-Amz-Expires query parameter
const url = new URL(
  `https://${env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com/${bucket}/${key}`
);
url.searchParams.set("X-Amz-Expires", "3600");

const signed = await r2.sign(
  new Request(url, { method: "PUT" }),
  { aws: { signQuery: true } }
);
// signed.url is the URL to hand to the browser for a direct PUT
```
### Event Notifications

Trigger processing when files are uploaded:

```ts
// R2 bucket notification → Worker
// Automatically resize images, generate thumbnails, extract metadata
export default {
  async queue(batch: MessageBatch<R2EventNotification>, env: Env) {
    for (const message of batch.messages) {
      const event = message.body;
      if (event.action === "PutObject" && event.object.key.includes("/products/")) {
        // Generate thumbnail
        await generateThumbnail(env, event.object.key);
      }
    }
  },
};
```
### Lifecycle Rules

```sh
# Auto-delete temporary uploads after 7 days
npx wrangler r2 bucket lifecycle set company-manager-media \
  --rule '{"id":"cleanup-temp","prefix":"temp/","expiration":{"days":7}}'

# Move old press archives to Infrequent Access after 90 days
npx wrangler r2 bucket lifecycle set company-manager-media \
  --rule '{"id":"archive-press","prefix":"press-center/archive/","transition":{"days":90,"storageClass":"InfrequentAccess"}}'
```
### Public Bucket for CDN

For public assets (product images, press photos):

```sh
# In the Cloudflare dashboard: R2 → bucket → Settings → Public access
# Enable the read-only r2.dev subdomain, or connect a custom domain
```

Access via the bucket's `pub-<hash>.r2.dev` URL or a custom domain.
## Migration Strategy

### Phase 1: Dual-Write (Week 1)

Write to both S3 and R2, read from S3:

```ts
import { PutObjectCommand } from "@aws-sdk/client-s3";

async function uploadFile(key: string, body: Buffer): Promise<void> {
  // Write to both stores so R2 stays in sync during the migration
  await Promise.all([
    s3Client.send(new PutObjectCommand({ Bucket: s3Bucket, Key: key, Body: body })),
    r2Client.send(new PutObjectCommand({ Bucket: r2Bucket, Key: key, Body: body })),
  ]);
}
```
### Phase 2: Read from R2 (Week 2)

Switch reads to R2, keep S3 as a backup:

```ts
import { GetObjectCommand } from "@aws-sdk/client-s3";

async function getFile(key: string) {
  try {
    return await r2Client.send(new GetObjectCommand({ Bucket: r2Bucket, Key: key }));
  } catch {
    // Fall back to S3 for objects not yet copied to R2
    return await s3Client.send(new GetObjectCommand({ Bucket: s3Bucket, Key: key }));
  }
}
```
### Phase 3: Bulk Copy (Week 2-3)

Use R2 Super Slurper to copy existing S3 data:

```sh
# Cloudflare dashboard → R2 → Data Migration → Create migration
# Source: AWS S3 bucket
# Destination: company-manager-media
# Super Slurper handles incremental sync
```
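After the bulk copy, it is worth verifying that every S3 key made it across. A hypothetical sketch of the comparison step, operating on key lists already fetched from both sides (e.g. via paginated `ListObjectsV2` calls):

```typescript
// Returns S3 keys absent from R2 (empty array = copy complete)
function missingFromR2(s3Keys: string[], r2Keys: string[]): string[] {
  const r2 = new Set(r2Keys);
  return s3Keys.filter((key) => !r2.has(key));
}

const gaps = missingFromR2(
  ["t1/a.jpg", "t1/b.jpg", "t1/c.jpg"],
  ["t1/a.jpg", "t1/c.jpg"]
); // → ["t1/b.jpg"]
```

Any keys reported here can be re-copied (or re-written by the dual-write path) before cutting over.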
### Phase 4: Cut Over (Week 3-4)

Remove the S3 writes; R2 is now primary.
## Limits
| Metric | Limit |
|---|---|
| Max object size | ~5 TiB |
| Max buckets | 1,000/account |
| Multipart upload parts | 10,000 |
| Min multipart part size | 5 MiB |
| Max metadata per object | 2 KiB |
| List objects per request | 1,000 |
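The multipart limits interact: a fixed 5 MiB part size caps an upload at 5 MiB × 10,000 parts ≈ 48.8 GiB, so larger objects need larger parts. A sketch (hypothetical helper) of choosing a part size from the limits above:

```typescript
const MIN_PART_BYTES = 5 * 1024 * 1024; // 5 MiB minimum part size
const MAX_PARTS = 10_000;               // max parts per multipart upload

// Smallest part size that keeps the upload within 10,000 parts
function choosePartSize(objectBytes: number): number {
  return Math.max(MIN_PART_BYTES, Math.ceil(objectBytes / MAX_PARTS));
}

const small = choosePartSize(100 * 1024 * 1024); // 100 MiB file → 5 MiB (minimum applies)
const huge = choosePartSize(1024 ** 4);          // 1 TiB file → ~105 MiB parts
```

The AWS SDK's multipart helpers accept a configurable part size, so the same calculation applies whether the target is S3 or R2.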
## Estimated Impact
- **Egress cost**: Save 100% on data transfer (biggest win)
- **Storage cost**: Save ~35%
- **Latency**: Faster for Workers (direct binding vs. S3 API call)
- **Reliability**: Cloudflare's 335+ PoP network for serving
- **Effort**: 3-5 days (S3 API compatible, minimal code changes)