We Moved to Cloudflare - Why and What Changed
Why We Left Vercel
ToolBox ran on Vercel from day one. The developer experience was good. Push to GitHub, get a deployment. Automatic preview URLs for branches. The Next.js integration was seamless because Vercel makes Next.js.
So why move?
The answer is cost trajectory and architectural control. Vercel's pricing model scales with server-side compute. Every API route, every server-side rendered page, every image optimization call counts against your function invocation quota. For a tool site with 139+ pages and 20 API routes, those numbers climb fast.
Cloudflare Pages has a different model. Static assets are served from a global CDN at no per-request cost. API routes run as Pages Functions on Cloudflare Workers, which have a generous free tier and predictable pricing beyond it. KV storage for rate limiting is cheap. The entire infrastructure cost dropped significantly after the migration.
But the migration was not a simple deploy-and-done process. This post covers what changed, what broke, and how it all works now.
---
The Architecture Before
On Vercel, ToolBox was a standard Next.js application:
- Pages rendered server-side or statically generated depending on the route
- API routes ran as serverless functions (Node.js runtime)
- Image optimization handled by Vercel's built-in image service
- Environment variables managed through Vercel's dashboard
- Deployments triggered by GitHub push to master
The build produced a mix of static HTML, server-rendered pages, and serverless function bundles. Vercel handled the routing between them automatically.
---
The Architecture After
On Cloudflare, ToolBox is now a fully static export with edge functions:
Build output:

```
/out/           - Static HTML, CSS, JS (Next.js static export)
/functions/api/ - Pages Functions (Cloudflare Workers)
```

The next.config.ts sets output: "export", which tells Next.js to generate plain HTML files instead of a server-rendered app. Every tool page becomes a static .html file that Cloudflare serves directly from its CDN.
API routes live in the functions/ directory as Pages Functions. These are JavaScript files that export request handler functions following the Cloudflare Workers API:
```javascript
export async function onRequestPost({ request, env }) {
  const body = await request.json();
  // process the request
  return Response.json({ result: data });
}

export async function onRequestOptions() {
  return new Response(null, { status: 204, headers: corsHeaders });
}
```

This is a fundamentally different model from Vercel's serverless functions. Workers start in under 5ms (no cold start), run at the edge in 300+ data centers, and use the V8 isolate runtime instead of a full Node.js process.
---
Static Export - What Had to Change
Moving to output: "export" meant every page had to be statically renderable at build time. This broke several things.
No Server-Side Props
Any page using getServerSideProps had to be converted. In practice, ToolBox had very few of these since the tools are all client-side, but the blog system and some metadata pages needed refactoring.
No API Routes in the Next.js App
With static export, the app/api/ routes in Next.js stop working. They are still in the codebase for local development, but in production, all API traffic is handled by Cloudflare Pages Functions in the functions/ directory.
This meant rewriting every API route as a standalone Cloudflare Workers function. The logic stayed the same, but the function signatures changed:
Before (Next.js API route):

```typescript
import { NextRequest, NextResponse } from 'next/server';

export async function POST(request: NextRequest) {
  const body = await request.json();
  return NextResponse.json({ result: data });
}
```

After (Cloudflare Pages Function):

```javascript
export async function onRequestPost({ request, env }) {
  const body = await request.json();
  return Response.json({ result: data });
}
```

The differences are subtle but important:
- No NextRequest/NextResponse - use the standard Web Request/Response APIs
- Environment variables come from the env parameter, not process.env
- The function exports use onRequestPost, onRequestGet, etc. instead of named POST/GET exports
- No Node.js APIs available - Workers use the V8 runtime
No Dynamic Routes at Build Time
Static export requires that all dynamic routes are known at build time. The generateStaticParams function must return every possible parameter combination. For the 139 tool pages, this was already in place. For blog posts, it required listing every slug.
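As an illustration, a generateStaticParams for the blog could enumerate the Markdown files on disk. This is a sketch under the assumption that posts live as .md files in content/blog/; the listSlugs helper name is invented, not ToolBox's actual code:

```javascript
import fs from 'node:fs';
import path from 'node:path';

// Derive a slug for every Markdown file in a directory.
export function listSlugs(dir) {
  return fs
    .readdirSync(dir)
    .filter((file) => file.endsWith('.md'))
    .map((file) => ({ slug: file.replace(/\.md$/, '') }));
}

// Next.js calls this at build time to learn every /blog/[slug] path.
export function generateStaticParams() {
  return listSlugs(path.join(process.cwd(), 'content', 'blog'));
}
```

With output: "export", any slug missing from this list simply does not get a page, so the list must be exhaustive at build time.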
---
Bundle Size Reduction
The initial deployment to Cloudflare failed. The Worker bundle exceeded the 25MB limit. The Vercel build included dependencies that were unnecessary in a static export context.
The reduction process cut approximately 3.3MB from the bundle:
Removed Sentry
The error tracking SDK added significant weight to the bundle. Sentry's client-side SDK alone is around 70KB gzipped, but the server-side SDK that was bundled into API routes was much larger. We replaced it with a lightweight custom error handler that posts to a simple error collection endpoint.
Removed next/og
The @vercel/og package for generating Open Graph images is designed specifically for Vercel's infrastructure. It bundles the Satori layout engine and a WASM-based text renderer, adding over 2MB to the bundle. On Cloudflare, OG images are generated through a simpler approach using the Pages Functions.
Extracted Blog Content
Blog post content was originally embedded in the JavaScript bundle through imports. Moving the content to separate Markdown files in content/blog/ and reading them at build time (for static pages) or at request time (for the API) removed megabytes of string data from the main bundle.
Result
The final Worker bundle fits comfortably within Cloudflare's limits, and the static assets (HTML, CSS, JS) are served directly from the CDN without any Worker involvement.
---
Pages Functions - The API Layer
Every API endpoint is a separate file in the functions/api/ directory. Cloudflare's file-based routing maps the directory structure to URL paths:
```
functions/api/v1/json-format.js  -> /api/v1/json-format
functions/api/v1/hash.js         -> /api/v1/hash
functions/api/dns-lookup.js      -> /api/dns-lookup
functions/api/paddle/verify.js   -> /api/paddle/verify
functions/api/errors/report.js   -> /api/errors/report
```

Each function file exports handlers for the HTTP methods it supports. Most endpoints only handle POST and OPTIONS (for CORS preflight):
```javascript
const corsHeaders = {
  'Access-Control-Allow-Origin': '*',
  'Access-Control-Allow-Methods': 'POST, OPTIONS',
  'Access-Control-Allow-Headers': 'Content-Type, Authorization',
};

export async function onRequestOptions() {
  return new Response(null, { status: 204, headers: corsHeaders });
}

export async function onRequestPost({ request, env }) {
  // endpoint logic
}
```

The env parameter gives access to environment variables and KV bindings configured in the Cloudflare dashboard.
---
KV for Rate Limiting
On Vercel, rate limiting for subscription verification used in-memory tracking within the serverless function. This was unreliable because each function invocation runs in isolation - there is no shared state between invocations.
On Cloudflare, the subscription verification endpoints use KV (Key-Value) storage for device rate limiting. KV is a globally distributed, eventually consistent key-value store that persists across Worker invocations.
The rate limiting works like this:
- When a user verifies their Pro subscription, the endpoint receives their IP address from the CF-Connecting-IP header
- The IP is hashed with SHA-256 (using a salt) to create a privacy-preserving identifier
- The hash is stored in a KV key namespaced by subscription ID
- Each subscription ID can have up to 3 device hashes in a 30-day rolling window
- If a 4th device tries to verify the same subscription, the request is rejected with a 429 status
```javascript
const MAX_DEVICES = 3;
const TTL_SECONDS = 30 * 24 * 60 * 60; // 30 days

async function checkDeviceLimit(subscriptionId, clientIP, kv) {
  const key = `rl:${subscriptionId}`;
  const ipHash = await hashIP(clientIP);

  let record;
  try {
    const stored = await kv.get(key, 'json');
    record = stored || { ips: [], updated: Date.now() };
  } catch {
    return { allowed: true }; // fail open if KV is unavailable
  }

  // Already known device - allow
  if (record.ips.includes(ipHash)) {
    return { allowed: true };
  }

  // Too many devices
  if (record.ips.length >= MAX_DEVICES) {
    return { allowed: false, message: 'Device limit exceeded' };
  }

  // New device - register and allow
  record.ips.push(ipHash);
  await kv.put(key, JSON.stringify(record), { expirationTtl: TTL_SECONDS });
  return { allowed: true };
}
```

The KV namespace is bound to the Pages Function in the Cloudflare dashboard as PRO_RATE_LIMIT. The expirationTtl parameter ensures records automatically expire after 30 days without any cleanup jobs.
This was a significant improvement over the Vercel approach. KV is persistent, globally distributed, and does not require managing a separate database.
---
What Broke During Migration
Split View
The Split View feature loads two tools side-by-side in iframes. On Vercel, the iframe URLs pointed to the same domain and worked fine. On Cloudflare, the static export produced slightly different HTML structures for embedded pages, which caused the iframes to include the full site header and navigation inside each pane.
The fix was detecting embedded mode (via a URL parameter) and stripping the header, footer, and navigation when a page is loaded inside an iframe. The embedded mode check runs on the client side:
```javascript
const isEmbedded = new URL(window.location.href).searchParams.has('embed');

if (isEmbedded) {
  // Hide header, footer, navigation
}
```

Environment Variables
On Vercel, process.env.VARIABLE_NAME works everywhere - in API routes, in server-side rendering, and in build-time code. On Cloudflare Pages Functions, environment variables are not available through process.env. They come through the env parameter in the function handler.
Every API route that referenced process.env had to be updated:
```javascript
// Before (Vercel)
const apiKey = process.env.PADDLE_API_KEY;

// After (Cloudflare)
export async function onRequestPost({ request, env }) {
  const apiKey = env.PADDLE_API_KEY;
}
```

Client-side environment variables (prefixed with NEXT_PUBLIC_) still work as before since they are inlined at build time.
CORS Headers
Vercel automatically handles CORS for API routes in the same project. Cloudflare Pages Functions do not. Every endpoint needed explicit CORS headers and an OPTIONS handler for preflight requests.
This was tedious but straightforward. The CORS configuration is identical across all endpoints:
```javascript
const corsHeaders = {
  'Access-Control-Allow-Origin': '*',
  'Access-Control-Allow-Methods': 'POST, OPTIONS',
  'Access-Control-Allow-Headers': 'Content-Type, Authorization',
};
```

---
Build Process
The build process is a custom script (scripts/cf-build.mjs) that:
- Runs next build with output: "export" to generate static HTML in the out/ directory
- Copies the functions/ directory for Cloudflare Pages Functions
- Deploys to Cloudflare Pages
The pages:build script in package.json handles this:
```json
{
  "scripts": {
    "pages:build": "node scripts/cf-build.mjs",
    "cf:deploy": "node scripts/cf-build.mjs"
  }
}
```

Deployments are triggered by pushing to the master branch, same as before. Cloudflare Pages connects to the GitHub repository and runs the build script automatically.
---
Performance Comparison
Static Asset Delivery
On Vercel, static assets were served from their CDN with good performance. On Cloudflare, static assets are served from their CDN, which has significantly more edge locations (300+ data centers vs Vercel's smaller network). The practical difference is most noticeable for users in regions where Cloudflare has a nearby data center but Vercel does not.
API Response Times
This is where the biggest improvement happened. Vercel's serverless functions have cold starts - the first request after a period of inactivity takes 200-500ms to spin up a Node.js process. Subsequent requests are faster while the function stays warm.
Cloudflare Workers do not have meaningful cold starts. The V8 isolate model starts in under 5ms. Every API request, regardless of timing, gets consistent sub-50ms response times for computation-bound endpoints.
Build Times
Build times increased slightly because the static export generates HTML for all 139+ tool pages upfront. On Vercel, some pages were server-rendered on demand, so the build only had to handle static pages. On Cloudflare, every page must be generated at build time.
The trade-off is worth it. Slower builds (measured in seconds, not minutes) in exchange for faster page loads and lower hosting costs.
---
What We Gained
Predictable Costs
Cloudflare Pages has no per-request charges for static assets. Workers have a generous free tier (100,000 requests/day on the free plan, 10 million on the paid plan). KV reads are cheap. The total infrastructure cost is predictable and significantly lower than Vercel's usage-based pricing at any meaningful traffic level.
Edge Computing
All API routes now run at the edge, in the data center closest to the user. On Vercel, serverless functions run in a single region (unless you pay for Edge Functions, which have their own limitations). On Cloudflare, every request is handled by the nearest data center automatically.
IP Geolocation
Cloudflare provides the user's country through the CF-IPCountry header on every request. This is used for localized pricing on the Pro subscription page. On Vercel, this required a separate geolocation API call.
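As a sketch of how that header can drive localized pricing - the price table, values, and handler shown here are illustrative, not ToolBox's actual endpoint:

```javascript
// Hypothetical regional price table; values invented for illustration.
const REGIONAL_PRICES = { US: 5, IN: 2, BR: 3 };

function priceForCountry(country) {
  return REGIONAL_PRICES[country] ?? REGIONAL_PRICES.US;
}

// Cloudflare sets CF-IPCountry on every request reaching the function,
// so no extra geolocation API call is needed.
export async function onRequestGet({ request }) {
  const country = request.headers.get('CF-IPCountry') || 'US';
  return Response.json({ country, price: priceForCountry(country) });
}
```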
KV Storage
Having a built-in key-value store eliminated the need for an external database for simple state like rate limiting records. KV is fast, globally distributed, and requires no infrastructure management.
---
What We Lost
Vercel Analytics
Vercel's built-in analytics dashboard was useful for tracking page views and web vitals. On Cloudflare, analytics are available through Cloudflare's dashboard, but the integration is less polished for Next.js-specific metrics.
Preview Deployments
Vercel's preview deployments for pull requests are more polished than Cloudflare Pages' preview URLs. Both platforms generate preview URLs for branches, but Vercel's integration with GitHub comments and status checks is more refined.
Image Optimization
Vercel's built-in image optimization service (used by next/image) does not work on Cloudflare. Since ToolBox is primarily a tool site and does not rely heavily on dynamic image optimization, this was not a significant loss. Static images are served directly.
---
The OpenNext Adapter
The migration uses @opennextjs/cloudflare, an adapter that helps bridge the gap between Next.js and Cloudflare's platform. It handles the translation of Next.js conventions (like routing and middleware) to Cloudflare's execution model.
The adapter is listed as a dev dependency:
```json
{
  "devDependencies": {
    "@opennextjs/cloudflare": "^1.17.1"
  }
}
```

It generates the .open-next/ directory during the build process, which contains the compiled server functions and asset mappings that Cloudflare Pages expects.
---
Migration Checklist (For Other Projects)
If you are considering a similar migration for your Next.js project, here is what you need to evaluate:
Compatible without changes:
- Static pages (no getServerSideProps)
- Client-side rendering
- CSS modules, Tailwind, CSS-in-JS
- Client-side data fetching (fetch, axios, SWR, React Query)
- Service workers (PWA functionality)
- Web Workers
Requires rewriting:
- API routes (from Next.js format to Workers format)
- Server-side rendering (must switch to static export or use the OpenNext adapter)
- Environment variable access in API routes
- Any Node.js-specific APIs (fs, path, the Node crypto module)
- Database connections (replace with KV, D1, or external APIs)
Not available:
- next/image optimization (use static images or Cloudflare Images)
- Incremental Static Regeneration (ISR)
- Middleware that depends on Node.js APIs
- Server Actions
New capabilities gained:
- KV storage (key-value store)
- D1 (SQLite at the edge)
- R2 (object storage)
- Durable Objects (stateful edge compute)
- Cloudflare AI (inference at the edge)
- IP geolocation via CF-IPCountry header
- WebSocket support in Workers
---
DNS and CDN Configuration
Cloudflare is primarily known as a DNS and CDN provider, so using Cloudflare Pages means the DNS, CDN, and hosting are all managed by the same company. This eliminates a layer of indirection.
On Vercel, the DNS pointed to Vercel's nameservers, which then routed to Vercel's CDN. Cloudflare Pages eliminates the middleman - DNS resolution happens at the same edge node that serves the content.
Cache Behavior
Static assets (JS, CSS, images) are served with long-lived cache headers:

```
Cache-Control: public, max-age=31536000, immutable
```

HTML pages use a shorter cache with stale-while-revalidate:

```
Cache-Control: public, max-age=0, s-maxage=3600, stale-while-revalidate=86400
```

This means HTML is always fresh (max-age=0 for the browser), but Cloudflare's edge caches it for up to 1 hour (s-maxage=3600) and will serve stale content while revalidating for up to 24 hours. Users get fast page loads while still seeing relatively fresh content.
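On Cloudflare Pages, rules like these can be declared in a _headers file shipped alongside the build output. The paths below are illustrative, not ToolBox's exact configuration:

```
/_next/static/*
  Cache-Control: public, max-age=31536000, immutable

/*
  Cache-Control: public, max-age=0, s-maxage=3600, stale-while-revalidate=86400
```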
Security Headers
The next.config.ts defines security headers that apply to all routes:
```typescript
const securityHeaders = [
  { key: "X-Content-Type-Options", value: "nosniff" },
  { key: "X-Frame-Options", value: "SAMEORIGIN" },
  { key: "X-XSS-Protection", value: "1; mode=block" },
  { key: "Referrer-Policy", value: "strict-origin-when-cross-origin" },
  { key: "Permissions-Policy", value: "camera=(), microphone=(), geolocation=()" },
];
```

These headers were configured the same way on Vercel, but on Cloudflare they are applied at the CDN level, which means they are added to every response regardless of whether it comes from a static asset or a Pages Function.
---
The Workers Runtime vs Node.js
One of the less obvious differences between Vercel and Cloudflare is the runtime. Vercel serverless functions run on Node.js. Cloudflare Workers run on the V8 JavaScript engine directly, without the Node.js layer.
This matters in practice:
Available APIs
Workers have access to standard Web APIs - fetch, Request, Response, URL, TextEncoder, TextDecoder, crypto.subtle, crypto.getRandomValues, atob, btoa, etc. They do not have access to Node.js built-in modules like fs, path, child_process, net, or http.
For ToolBox, this was not a problem. The API endpoints are all data transformation functions that operate on strings and JSON. They do not need filesystem access, child processes, or TCP sockets.
The hash endpoint is a good example. On Vercel (Node.js), you might use crypto.createHash():
```javascript
// Node.js
const crypto = require('crypto');
const hash = crypto.createHash('sha256').update(input).digest('hex');
```

On Workers, you use the Web Crypto API:

```javascript
// Workers
const encoder = new TextEncoder();
const data = encoder.encode(input);
const hashBuffer = await crypto.subtle.digest('SHA-256', data);
const hashArray = new Uint8Array(hashBuffer);
const hex = Array.from(hashArray).map(b => b.toString(16).padStart(2, '0')).join('');
```

The Web Crypto API is asynchronous (returns a Promise) while Node.js crypto.createHash is synchronous. This required minor refactoring but does not change the functionality.
MD5 Exception
The Web Crypto API does not support MD5. It is considered insecure and was deliberately excluded from the standard. But MD5 is still widely used for non-security purposes (checksums, legacy system compatibility), so the hash endpoint includes a bundled MD5 implementation in pure JavaScript.
This is about 120 lines of code that implements RFC 1321 directly. Not elegant, but necessary for backward compatibility.
Memory and CPU Limits
Workers have a 128MB memory limit and a 30-second CPU time limit (on the paid plan). Vercel functions have higher limits (up to 1024MB memory, 5 minutes execution). For ToolBox's API endpoints, which are all short-lived data transformations, the Workers limits are more than sufficient.
The diff endpoint has an input size limit of 100KB per string specifically to stay well within the Workers CPU budget. LCS diff algorithms are O(n*m) in time complexity, so large inputs could theoretically exceed the CPU limit.
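A guard of that kind is simple to sketch. The 100KB figure is the limit described above; the function and field names here are illustrative:

```javascript
const MAX_INPUT_BYTES = 100 * 1024; // 100KB per string

// Reject oversized diff inputs before running the O(n*m) LCS algorithm,
// measuring size in UTF-8 bytes rather than string length.
function validateDiffInput(left, right) {
  const encoder = new TextEncoder();
  for (const [name, value] of [['left', left], ['right', right]]) {
    if (encoder.encode(value).length > MAX_INPUT_BYTES) {
      return { ok: false, error: `Input "${name}" exceeds the 100KB limit` };
    }
  }
  return { ok: true };
}
```

Checking byte length via TextEncoder matters because a string of 100,000 multi-byte characters can encode to far more than 100KB.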
---
Monitoring and Error Tracking
On Vercel, Sentry was integrated for error tracking. The Sentry SDK added significant bundle weight and was removed during the migration.
The replacement is a lightweight custom error handler. Errors are reported to a dedicated Pages Function endpoint (/api/errors/report) that logs them to Cloudflare's built-in logging. A separate endpoint (/api/errors/list) allows reviewing recent errors.
This is simpler than Sentry but sufficient for the current scale. If error volume increases to the point where a dedicated error tracking service is justified, it would be re-added - but as a server-side integration in the Pages Functions rather than a client-side SDK bundled into the JavaScript.
---
Deployment Pipeline
The deployment pipeline is straightforward:
- Push code to the master branch on GitHub
- Cloudflare Pages detects the push via webhook
- The build command (node scripts/cf-build.mjs) runs next build and generates the static export in out/
- Cloudflare deploys the static files and Pages Functions
- The new version goes live globally within seconds
Preview deployments work for non-master branches. Each branch gets a unique preview URL. This is useful for testing changes before merging to master.
Rollbacks are simple - Cloudflare keeps previous deployments and you can roll back to any previous version through the dashboard.
---
Lessons Learned
Test Locally with Wrangler
The wrangler pages dev command runs your Pages Functions locally against the Cloudflare Workers runtime. This catches incompatibilities with the V8 runtime before deployment. We found several issues during local testing that would have been hard to debug in production.
CORS Is Your Problem Now
On Vercel, CORS for same-origin API routes is handled automatically. On Cloudflare Pages Functions, you need to handle CORS yourself. Every endpoint needs an OPTIONS handler and appropriate CORS headers. Missing this results in browser requests failing silently with a CORS error.
KV Is Eventually Consistent
KV has a propagation delay of up to 60 seconds for writes to reach all edge locations. For rate limiting, this means a device could theoretically verify on two different edge nodes within 60 seconds and both would see an empty record. In practice this is not a problem because subscription verification is not a high-frequency operation - it happens once when you load the site.
Static Export Forces Discipline
The static export constraint is actually beneficial. It forces you to separate your concerns cleanly. If a page cannot be statically rendered, you know something in the rendering pipeline depends on runtime data that should probably be fetched on the client side instead. The result is a more predictable, more cacheable, faster site.
---
Was It Worth It
Yes. The migration took roughly a week of focused work, most of which was rewriting API routes and debugging the static export. The ongoing benefits - lower costs, better edge performance, KV storage, IP geolocation - will compound over time.
The biggest risk was stability during the transition. We deployed to Cloudflare while keeping the Vercel deployment running as a fallback, then switched DNS once everything was verified. There was zero downtime.
For a statically-generated site like ToolBox, where all the tool logic runs in the browser and the API routes are simple request-response functions, Cloudflare Pages is a better fit than Vercel. The static export model aligns naturally with the platform, and the Workers runtime gives us everything we need for the API layer without the overhead of a Node.js process.
If your Next.js app relies heavily on server-side rendering, Server Actions, or ISR, the calculus is different. Those features are tightly integrated with Vercel's platform and harder to replicate elsewhere. But for static-first applications with simple API needs, Cloudflare is hard to beat on both cost and performance.