by InlinexDev

Cloudflare R2 vs AWS S3: Why We Switched for Document Storage

A practical comparison of Cloudflare R2 and AWS S3 for storing shipping labels, invoices, and product images in e-commerce applications.

Cloudflare R2 · AWS S3 · storage · infrastructure · cost optimization

The Egress Cost Problem

AWS S3 is the default choice for object storage. But when your application serves files frequently — shipping labels downloaded multiple times, product images displayed on every page load, invoices accessed by multiple users — egress fees add up fast.

Cloudflare R2 charges zero egress fees. That single difference changed our storage strategy.

Cost Comparison

For a shipping platform generating 1,000 labels/month, each downloaded an average of 3 times:

| Metric | AWS S3 | Cloudflare R2 |
|--------|--------|---------------|
| Storage (10GB) | $0.23/mo | $0.15/mo |
| PUT requests (1,000) | $0.005 | $0.0045 |
| GET requests (3,000) | $0.0012 | $0.0011 |
| Egress (3GB) | $0.27 | $0.00 |
| Monthly total | $0.51 | $0.16 |

At this scale, the savings are modest. But scale to 50,000 labels/month with larger documents, and egress costs on S3 balloon to $15-50/month while R2 stays near zero for egress.
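The table above can be reproduced with a small cost model. This is a sketch using published list prices at the time of writing (assumptions: S3 Standard at ~$0.023/GB storage, $0.09/GB egress, $0.005 per 1,000 PUTs, $0.0004 per 1,000 GETs; R2 at $0.015/GB storage, $0.0045 per 1,000 Class A operations, $0.00036 per 1,000 Class B operations, $0 egress); check current pricing pages before relying on it:

```javascript
// Rough monthly-cost model; prices are assumptions, not guaranteed current.
function monthlyCost({ storageGB, puts, gets, egressGB }, prices) {
  return (
    storageGB * prices.storagePerGB +
    (puts / 1000) * prices.putPer1k +
    (gets / 1000) * prices.getPer1k +
    egressGB * prices.egressPerGB
  );
}

const s3Prices = { storagePerGB: 0.023, putPer1k: 0.005, getPer1k: 0.0004, egressPerGB: 0.09 };
const r2Prices = { storagePerGB: 0.015, putPer1k: 0.0045, getPer1k: 0.00036, egressPerGB: 0 };

// The 1,000-labels/month scenario from the table:
const usage = { storageGB: 10, puts: 1000, gets: 3000, egressGB: 3 };
console.log(monthlyCost(usage, s3Prices).toFixed(2)); // "0.51"
console.log(monthlyCost(usage, r2Prices).toFixed(2)); // "0.16"
```

Plugging in larger volumes shows where the gap opens: the egress term grows linearly on S3 and stays zero on R2.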

Migration: It's S3-Compatible

The best part about R2 is that it uses the S3-compatible API. Migration means changing configuration, not rewriting code:

Before (AWS S3)

const { S3Client } = require('@aws-sdk/client-s3');

const s3 = new S3Client({
  region: 'ap-southeast-1',
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY,
    secretAccessKey: process.env.AWS_SECRET_KEY
  }
});

After (Cloudflare R2)

const { S3Client } = require('@aws-sdk/client-s3');

const r2 = new S3Client({
  region: 'auto',
  endpoint: process.env.R2_ENDPOINT,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY,
    secretAccessKey: process.env.R2_SECRET_KEY
  }
});

The only changes are the endpoint, region, and credentials. All PutObjectCommand, GetObjectCommand, and DeleteObjectCommand calls work identically.
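Because only the configuration differs, one hypothetical pattern is to select the backend with an environment variable and keep a single code path. This sketch assumes a STORAGE_BACKEND variable (not in the original snippets); the credential variable names match the examples above:

```javascript
// Hypothetical: build the S3Client config for either backend from env vars.
function clientConfig(env) {
  if (env.STORAGE_BACKEND === 'r2') {
    return {
      region: 'auto', // R2 ignores region; the SDK still requires one
      endpoint: env.R2_ENDPOINT,
      credentials: {
        accessKeyId: env.R2_ACCESS_KEY,
        secretAccessKey: env.R2_SECRET_KEY
      }
    };
  }
  return {
    region: env.AWS_REGION || 'ap-southeast-1',
    credentials: {
      accessKeyId: env.AWS_ACCESS_KEY,
      secretAccessKey: env.AWS_SECRET_KEY
    }
  };
}

// Usage: const client = new S3Client(clientConfig(process.env));
```

This also makes rollback during migration a one-variable change rather than a redeploy of different code.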

Upload Implementation

const { PutObjectCommand, GetObjectCommand } = require('@aws-sdk/client-s3');
const { getSignedUrl } = require('@aws-sdk/s3-request-presigner');

async function uploadDocument(buffer, key, contentType) {
  await r2.send(new PutObjectCommand({
    Bucket: process.env.R2_BUCKET,
    Key: key,
    Body: buffer,
    ContentType: contentType
  }));

  return key;
}

async function getDownloadUrl(key, expiresIn = 3600) {
  const command = new GetObjectCommand({
    Bucket: process.env.R2_BUCKET,
    Key: key
  });

  return getSignedUrl(r2, command, { expiresIn });
}

Organizing Files

We use a structured key naming convention:

labels/{year}/{month}/{shipment_id}.pdf
invoices/{year}/{month}/{order_id}.pdf
images/products/{product_id}/{variant_id}.jpg
images/products/{product_id}/thumbnail.jpg

This makes it easy to:

  • List all labels for a given month
  • Clean up old files with lifecycle policies
  • Debug issues by navigating the key hierarchy
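A small helper keeps the convention consistent across the codebase. This is a hypothetical sketch (the labelKey function is not from the original code); zero-padding the month means lexicographic key listing matches chronological order:

```javascript
// Hypothetical helper implementing the labels/{year}/{month}/{shipment_id}.pdf
// convention. padStart keeps months two digits so "03" sorts before "11".
function labelKey(shipmentId, date = new Date()) {
  const year = date.getUTCFullYear();
  const month = String(date.getUTCMonth() + 1).padStart(2, '0');
  return `labels/${year}/${month}/${shipmentId}.pdf`;
}

console.log(labelKey('SHP-123', new Date(Date.UTC(2024, 2, 15))));
// "labels/2024/03/SHP-123.pdf"
```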

Presigned URLs for Security

Never expose your bucket publicly. Use presigned URLs that expire:

app.get('/api/labels/:shipmentId/download', async (req, res) => {
  const shipment = await getShipment(req.params.shipmentId);
  
  if (shipment.userId !== req.user.id) {
    return res.status(403).json({ error: 'Forbidden' });
  }

  const url = await getDownloadUrl(shipment.labelKey, 300); // 5 min expiry
  res.redirect(url);
});

When to Still Use S3

R2 isn't always the right choice:

  • Complex lifecycle policies — S3 has more granular lifecycle management
  • Cross-region replication — S3 supports automatic replication across regions
  • AWS ecosystem integration — Lambda triggers, Athena queries, etc.
  • Compliance requirements — some regulations require specific AWS certifications

Performance Comparison

In our testing with Southeast Asia-based applications:

  • Upload latency: R2 and S3 (ap-southeast-1) are comparable
  • Download latency: R2 has a slight edge due to Cloudflare's CDN network
  • Throughput: Both handle our workloads without issues
  • Availability: Both services have been 99.99%+ reliable

Migration Checklist

  1. Create an R2 bucket in the Cloudflare dashboard
  2. Generate API tokens with read/write access
  3. Update environment variables (endpoint, credentials)
  4. Test uploads and downloads in staging
  5. Migrate existing files using rclone or a custom script
  6. Update application code (usually just configuration)
  7. Monitor for errors for 48 hours before decommissioning S3
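Step 5 can be sketched with rclone, which treats R2 as an S3-compatible backend. The remote names and bucket names below are placeholders, and the config assumes remotes set up via rclone config:

```shell
# Example rclone.conf entries (placeholders, adjust to your account):
# [s3]
# type = s3
# provider = AWS
# region = ap-southeast-1
#
# [r2]
# type = s3
# provider = Cloudflare
# endpoint = https://<account_id>.r2.cloudflarestorage.com

# Dry run first to preview what would transfer, then copy with checksums:
rclone copy s3:my-bucket r2:my-bucket --dry-run
rclone copy s3:my-bucket r2:my-bucket --checksum --progress
```

Running the copy twice is cheap: rclone skips files that already match, so you can sync again just before cutover to catch stragglers.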

Conclusion

For applications that serve files to users — shipping labels, invoices, product images — Cloudflare R2's zero-egress pricing makes it a compelling default. The S3-compatible API keeps the migration mostly a configuration change, and in our testing performance has been excellent on Cloudflare's global network. For workloads that need deep AWS ecosystem integration, the trade-offs above still favor S3.

Related Project

ShipAnywhere

Smart international shipping platform offering up to 60% off FedEx rates, with instant quotes, one-click labels, electronic trade documents, and scheduled pickups to 220+ countries.