
Cloudflare Workers: A Complete Beginner's Guide

This guide takes you from zero to deploying your first Cloudflare Worker. We’ll start with manual setup so you understand what’s happening, then show you how AI coding assistants can speed up your workflow.



Prefer learning by watching? Cloudflare publishes official tutorial videos covering the same material - watch one of those or follow the written guide below, your choice!

Ready to code? Skip to Manual Setup to start building.


Imagine you built a website and someone in Tokyo visits it. If your server is in New York, their request has to travel across the Pacific Ocean, get processed, and travel back. That’s slow.

Cloudflare Workers run your code in 300+ cities worldwide. When someone visits your site, the nearest server handles their request. Tokyo user? Tokyo server. London user? London server.

Traditional Server:
User (Tokyo) → → → → → Server (New York) → → → → → User (Tokyo)
[500ms round trip]
Cloudflare Worker:
User (Tokyo) → Server (Tokyo) → User (Tokyo)
[50ms round trip]
| Benefit | What It Means |
| --- | --- |
| Free tier | 100,000 requests/day free - enough for learning and small projects |
| No server management | No Linux commands, no updates, no security patches |
| Instant deployment | Code goes live in seconds, not minutes |
| Scales automatically | 1 user or 1 million users - same code, same price |
| Great for portfolios | Impress recruiters with deployed projects, not localhost demos |

Here are some things you can build:
  • APIs - Backend for your mobile app or website
  • Redirects - Short URLs like bit.ly
  • Authentication - Login systems
  • Chatbots - Discord/Slack bots
  • Scrapers - Fetch data from other websites
  • Full websites - With Cloudflare Pages

Before we start, you need a few things. First, create a free Cloudflare account:

  1. Go to dash.cloudflare.com
  2. Click “Sign Up”
  3. Use your email (school email works fine)
  4. Verify your email

Node.js lets you run JavaScript on your computer (not just in browsers).

Check if you have it:

node --version

If you see a version number (like v20.10.0), you’re good. If not:

  • Windows: Download from nodejs.org (LTS version)
  • Mac: brew install node (if you have Homebrew) or download from nodejs.org
  • Linux: sudo apt install nodejs npm (Ubuntu/Debian)

Any text editor works, but we recommend:

  • VS Code - Free, popular, great extensions
  • Cursor - VS Code fork with built-in AI
  • Claude Code - Terminal-based AI coding assistant (works alongside any editor)

You’ll type commands in:

  • Windows: PowerShell or Command Prompt
  • Mac: Terminal
  • Linux: Terminal

Don’t worry - we’ll show you exactly what to type.


Let’s set everything up step by step. This might seem like extra work, but understanding these steps makes debugging easier later.

Wrangler is Cloudflare’s command-line tool. It’s how you create, test, and deploy Workers.

npm install -g wrangler

Verify it worked:

wrangler --version

You should see a version number like 3.x.x or newer.

wrangler login

This opens your browser. Click “Allow” to give Wrangler permission to deploy code to your account.

mkdir my-first-worker
cd my-first-worker

npm init -y

This creates a package.json file that tracks your project’s dependencies.

Create a file called wrangler.toml (this tells Wrangler about your project):

name = "my-first-worker"
main = "src/index.js"
compatibility_date = "2024-01-01"

[observability]
enabled = true
Create the source folder:

mkdir src

Now your project structure looks like:

my-first-worker/
├── package.json
├── wrangler.toml
└── src/
    └── (your code goes here)

Create src/index.js:

export default {
  async fetch(request, env, ctx) {
    return new Response("Hello from my first Cloudflare Worker! 🎉");
  },
};

Let’s break this down:

export default {
  // This makes your code available to Cloudflare
  async fetch(request, env, ctx) {
    // "fetch" runs every time someone visits your Worker
    // "request" = info about the visitor (URL, headers, etc.)
    // "env" = your environment variables and databases
    // "ctx" = advanced stuff (ignore for now)
    return new Response("Hello from my first Cloudflare Worker! 🎉");
    // Send text back to the visitor
  },
};

Before deploying, test on your computer:

wrangler dev

You’ll see:

⎔ Starting local server...
Ready on http://localhost:8787

Open http://localhost:8787 in your browser. You should see your message!

Press Ctrl+C to stop the local server.

wrangler deploy

You’ll see something like:

Uploaded my-first-worker (1.23 sec)
Published my-first-worker (0.45 sec)
https://my-first-worker.YOUR-SUBDOMAIN.workers.dev

That URL is live! Share it with friends. It works from anywhere in the world.


Let’s make a more useful Worker that responds differently based on the URL.

Replace your src/index.js:

export default {
  async fetch(request, env, ctx) {
    // Get the URL path (e.g., "/hello" from "https://example.com/hello")
    const url = new URL(request.url);
    const path = url.pathname;

    // Route to different responses
    if (path === "/") {
      return new Response("Welcome to my API! Try /hello or /time");
    }

    if (path === "/hello") {
      // Get the name from ?name=YourName
      const name = url.searchParams.get("name") || "stranger";
      return new Response(`Hello, ${name}! 👋`);
    }

    if (path === "/time") {
      const now = new Date().toISOString();
      return Response.json({ currentTime: now });
    }

    // 404 for unknown paths
    return new Response("Not found", { status: 404 });
  },
};

Test it:

wrangler dev

Then visit:

  • http://localhost:8787/ - Welcome message
  • http://localhost:8787/hello - “Hello, stranger!”
  • http://localhost:8787/hello?name=Alex - “Hello, Alex!”
  • http://localhost:8787/time - Current time as JSON
How the new pieces work:

const url = new URL(request.url);
// Parses "https://example.com/hello?name=Alex" into parts

const path = url.pathname;
// Gets "/hello"

const name = url.searchParams.get("name");
// Gets "Alex" from "?name=Alex"

return Response.json({ currentTime: now });
// Returns JSON instead of plain text
// Automatically sets Content-Type: application/json

Cloudflare offers several storage options. Here’s when to use each:

| Service | Use Case | Example |
| --- | --- | --- |
| KV | Simple key-value data | User sessions, feature flags |
| D1 | Relational data (SQL) | User accounts, blog posts |
| R2 | Files and images | Profile pictures, uploads |

KV is like a giant dictionary: you store values by keys.

1. Create a KV namespace:

wrangler kv namespace create "MY_KV"

You’ll see:

🌀 Creating namespace with title "my-first-worker-MY_KV"
✨ Success! Add the following to your wrangler.toml:
[[kv_namespaces]]
binding = "MY_KV"
id = "abc123..."

2. Add it to wrangler.toml:

name = "my-first-worker"
main = "src/index.js"
compatibility_date = "2024-01-01"

[[kv_namespaces]]
binding = "MY_KV"
id = "abc123..." # Use YOUR actual ID from the command output

3. Use it in your code:

export default {
  async fetch(request, env, ctx) {
    const url = new URL(request.url);

    if (url.pathname === "/visit") {
      // Get current count (or 0 if not set)
      const count = parseInt(await env.MY_KV.get("visits") || "0");

      // Increment and save
      const newCount = count + 1;
      await env.MY_KV.put("visits", newCount.toString());

      return new Response(`You are visitor #${newCount}!`);
    }

    return new Response("Visit /visit to be counted!");
  },
};

4. Test locally:

wrangler dev

Visit /visit multiple times - the count increases!
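One more KV trick worth knowing before moving on: entries can expire automatically, which is handy for caching. A small sketch - the helper names are ours, but expirationTtl is the real KV option (a lifetime in seconds):

```javascript
// A small sketch of KV's built-in expiration. The helper names are ours;
// expirationTtl is the real KV option (a lifetime in seconds).
async function cacheForAnHour(kv, key, value) {
  // Cloudflare deletes the entry automatically after 3600 seconds
  await kv.put(key, value, { expirationTtl: 3600 });
}

async function getCached(kv, key, fallback) {
  // get() resolves to null when the key is missing or has expired
  return (await kv.get(key)) ?? fallback;
}
```

Inside a Worker you would call these with your binding, e.g. cacheForAnHour(env.MY_KV, "weather", data).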

D1 is SQLite at the edge - perfect for structured data.

1. Create a database:

wrangler d1 create my-database

2. Add to wrangler.toml:

[[d1_databases]]
binding = "DB"
database_name = "my-database"
database_id = "your-database-id" # From command output

3. Create a table (migration file):

Create migrations/0001_create_users.sql:

CREATE TABLE users (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  name TEXT NOT NULL,
  email TEXT UNIQUE NOT NULL,
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);

4. Run the migration:

# Locally (for testing)
wrangler d1 execute my-database --local --file=migrations/0001_create_users.sql

# Production
wrangler d1 execute my-database --remote --file=migrations/0001_create_users.sql

5. Use in your code:

export default {
  async fetch(request, env, ctx) {
    const url = new URL(request.url);

    if (url.pathname === "/users" && request.method === "GET") {
      // Get all users
      const { results } = await env.DB.prepare(
        "SELECT * FROM users ORDER BY created_at DESC"
      ).all();
      return Response.json(results);
    }

    if (url.pathname === "/users" && request.method === "POST") {
      // Add a user
      const body = await request.json();
      await env.DB.prepare(
        "INSERT INTO users (name, email) VALUES (?, ?)"
      ).bind(body.name, body.email).run();
      return new Response("User created!", { status: 201 });
    }

    return new Response("Try GET or POST /users");
  },
};

Test with curl:

# Add a user
curl -X POST http://localhost:8787/users \
  -H "Content-Type: application/json" \
  -d '{"name": "Alex", "email": "alex@example.com"}'

# List users
curl http://localhost:8787/users

Once you understand the basics, AI coding assistants can dramatically speed up your workflow.

Claude Code is Anthropic’s CLI for AI-assisted development.

Setup:

# Install (requires Claude Pro/Max subscription)
npm install -g @anthropic-ai/claude-code

# Start in your project
cd my-first-worker
claude

Example prompts for Workers:

Create a Cloudflare Worker that:
- Has a /shorten endpoint to create short URLs
- Stores them in KV
- Redirects /go/:code to the original URL
- Returns JSON errors with proper status codes

Add a D1 database to store click analytics:
- Track timestamp, IP country, and user agent
- Add a /stats/:code endpoint to view analytics

Set up wrangler.toml for this project with:
- KV namespace for URLs
- D1 database for analytics
- Environment variables for admin auth

Cursor is VS Code with built-in AI.

Workflow:

  1. Open your project in Cursor
  2. Press Cmd+K (Mac) or Ctrl+K (Windows) to open the AI prompt
  3. Describe what you want

Example:

Create a REST API for a todo list with:
- GET /todos - list all
- POST /todos - create one
- DELETE /todos/:id - delete one
Use D1 for storage. Include proper error handling.

Cursor generates the code inline. Review it, then accept or modify.

If you use VS Code with GitHub Copilot:

  1. Start typing a comment describing what you want
  2. Copilot suggests code
  3. Press Tab to accept
// Create a function that validates email addresses
// and returns true/false

Copilot will suggest the implementation.

Be specific about Cloudflare APIs:

# Good
"Use Cloudflare KV to cache API responses for 1 hour"
# Vague
"Add caching"

Mention the environment:

# Good
"This runs on Cloudflare Workers - use the fetch handler pattern
and access KV through env.MY_KV"
# Missing context
"Store data in a key-value store"

Ask for error handling:

"Add try-catch blocks that return proper JSON errors
with status codes (400 for bad input, 500 for server errors)"
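A sketch of the pattern that prompt asks for - field names here are illustrative; the shape (try/catch returning JSON with a status code) is the part that matters:

```javascript
// Illustrative handler: 400 for bad input, 500 for unexpected failures,
// always a JSON body. The "name" field is a made-up example.
async function handleRequest(request) {
  let body;
  try {
    body = await request.json(); // throws on malformed JSON
  } catch {
    return jsonError("invalid JSON body", 400);
  }
  try {
    if (!body.name) {
      return jsonError("name is required", 400);
    }
    return new Response(JSON.stringify({ ok: true, name: body.name }), {
      headers: { "Content-Type": "application/json" },
    });
  } catch (err) {
    // Anything unexpected becomes a clean 500 instead of an opaque crash
    return jsonError("internal error", 500);
  }
}

function jsonError(message, status) {
  return new Response(JSON.stringify({ error: message }), {
    status,
    headers: { "Content-Type": "application/json" },
  });
}
```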

Manual wrangler deploy works, but real projects use Git for automatic deployments.

Option 1: GitHub Actions

Every time you push code to GitHub, it automatically deploys.

1. Create a Cloudflare API token:

  1. Go to Cloudflare Dashboard → Profile → API Tokens
  2. Click “Create Token”
  3. Use the “Edit Cloudflare Workers” template
  4. Copy the token (you’ll only see it once!)

2. Add secrets to GitHub:

  1. Go to your repo → Settings → Secrets and variables → Actions
  2. Add these secrets:
    • CLOUDFLARE_API_TOKEN = your token
    • CLOUDFLARE_ACCOUNT_ID = from your Cloudflare dashboard URL

3. Create .github/workflows/deploy.yml:

name: Deploy Worker

on:
  push:
    branches:
      - main # Deploy when you push to main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
      - name: Install dependencies
        run: npm install
      - name: Deploy to Cloudflare
        uses: cloudflare/wrangler-action@v3
        with:
          apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
          accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}

4. Push and watch it deploy:

git add .
git commit -m "Add GitHub Actions deployment"
git push origin main

Go to your repo → Actions tab to watch the deployment.

Option 2: Cloudflare Pages (For Full Websites)


If you’re building a website with a Worker backend, use Cloudflare Pages:

  1. Go to Cloudflare Dashboard → Workers & Pages
  2. Click “Create” → “Pages” → “Connect to Git”
  3. Select your GitHub repo
  4. Configure build settings:
    • Build command: npm run build
    • Output directory: dist
  5. Click “Save and Deploy”

For Workers functions with Pages:

Put your Worker code in a functions/ folder:

my-project/
├── src/           # Frontend code
├── functions/     # Worker code (automatic!)
│   └── api/
│       └── hello.js
├── package.json
└── wrangler.toml

functions/api/hello.js:

export async function onRequest(context) {
  return new Response("Hello from Pages Function!");
}

This automatically creates an API at /api/hello.
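Pages Functions also support dynamic routes through file names. A hedged sketch - the path and field names below are illustrative, but context.params is the real Pages Functions API:

```javascript
// In a real project this body would live in functions/api/users/[id].js
// (an illustrative path) and be written as:
//   export async function onRequest(context) { ... }
async function onRequest(context) {
  // context.params.id holds whatever matched the [id] segment of the URL
  const id = context.params.id;
  return new Response(JSON.stringify({ userId: id }), {
    headers: { "Content-Type": "application/json" },
  });
}
```

A request to /api/users/42 would then see context.params.id === "42".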

Option 3: Direct Git Integration

Cloudflare can also deploy directly from your repo without GitHub Actions:

  1. Workers & Pages → Create → Workers
  2. Click “Connect to Git”
  3. Authorize Cloudflare to access your GitHub
  4. Select your repo and branch

Cloudflare handles the rest - pushes trigger deployments automatically.


Never put passwords or API keys in your code!

For local development, create .dev.vars:

API_KEY=your-secret-key
DATABASE_URL=postgres://localhost/mydb

For production, use the dashboard or CLI:

# Set a secret (hidden in logs)
wrangler secret put API_KEY
# Then type or paste the value when prompted

For plain (non-secret) variables, add them to wrangler.toml:

[vars]
ENVIRONMENT = "production"
MAX_ITEMS = "100"

Access secrets and variables through env in your code:

export default {
  async fetch(request, env, ctx) {
    const apiKey = env.API_KEY;
    const maxItems = parseInt(env.MAX_ITEMS);
    if (!apiKey) {
      return new Response("API key not configured", { status: 500 });
    }
    // Use the variables...
    return new Response(`Serving up to ${maxItems} items`);
  },
};

Mistake 1: “Error: No account id found”


Problem: Wrangler doesn’t know your Cloudflare account.

Fix:

wrangler login

# Or add to wrangler.toml:
account_id = "your-account-id"

Mistake 2: “ReferenceError: require is not defined”


Problem: Workers use ES modules, not CommonJS.

Wrong:

const axios = require('axios'); // ❌

Right:

import axios from 'axios'; // ✅ (but see below)

Better: Use the built-in fetch instead of axios:

const response = await fetch('https://api.example.com/data');
const data = await response.json();
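One gotcha when switching from axios: fetch only rejects on network failures, not on HTTP error statuses, so check response.ok yourself. A small wrapper (the names are ours; fetchFn is injectable just to make the sketch testable):

```javascript
// Fetch JSON and treat HTTP error statuses as failures. fetchFn defaults
// to the built-in fetch; passing a stub makes the helper easy to test.
async function getJson(url, fetchFn = fetch) {
  const response = await fetchFn(url);
  if (!response.ok) {
    // e.g. a 404 or 500 from the upstream API
    throw new Error(`Upstream returned ${response.status}`);
  }
  return response.json();
}
```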

Mistake 3: KV/D1 Returns null or undefined


Problem: You’re testing locally but didn’t create local storage.

Fix for KV:

wrangler kv namespace create "MY_KV" --preview
# Add the preview_id to wrangler.toml

Fix for D1:

# Run migrations locally first
wrangler d1 execute my-database --local --file=migrations/0001_init.sql

Mistake 4: “Error 10021: Script too large”


Problem: Your Worker exceeds the 1MB limit (compressed).

Fixes:

  • Remove unused dependencies
  • Use dynamic imports for large libraries
  • Consider splitting into multiple Workers
  • Upgrade to Workers Paid ($5/month) for 10MB limit

Mistake 5: CORS Errors

Problem: Your frontend can’t call your Worker API.

Fix: Add CORS headers:

function corsHeaders() {
  return {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type",
  };
}

export default {
  async fetch(request, env, ctx) {
    // Handle preflight requests
    if (request.method === "OPTIONS") {
      return new Response(null, { headers: corsHeaders() });
    }

    // Your normal response with CORS headers
    return new Response("Hello", {
      headers: corsHeaders(),
    });
  },
};

Mistake 6: “Error: D1_ERROR: no such table”


Problem: Migrations weren’t run on production.

Fix:

# Run on production (note: --remote not --local)
wrangler d1 execute my-database --remote --file=migrations/0001_init.sql

Build these to practice and add to your portfolio:

  1. URL shortener
    • Store short codes in KV
    • Redirect to original URLs
    • Track click counts
  2. Joke API
    • Return random jokes from a list
    • Add categories (/joke/programming, /joke/dad)
    • Let users submit jokes
  3. Webhook logger
    • Receive webhooks from services (GitHub, Stripe)
    • Log them to D1
    • Forward to Discord/Slack
  4. Image uploader
    • Accept image uploads to R2
    • Resize on request using Workers
    • Cache results in KV
  5. Blog API
    • Users, posts, comments
    • Authentication with JWTs
    • Rate limiting
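As a taste of the rate limiting mentioned above, here is a minimal KV-based sketch. The key scheme and limit are illustrative, and KV is eventually consistent, so treat this as approximate (good enough to deter abuse, not a hard guarantee):

```javascript
// Allow at most `limit` requests per client per minute, counted in KV.
// `now` is a parameter only so the logic is deterministic and testable.
async function isRateLimited(kv, clientId, limit = 60, now = Date.now()) {
  const minute = Math.floor(now / 60000);
  const key = `rate:${clientId}:${minute}`;
  const count = parseInt((await kv.get(key)) || "0", 10);
  if (count >= limit) return true;
  // Let the counter expire shortly after its one-minute window ends
  await kv.put(key, String(count + 1), { expirationTtl: 120 });
  return false;
}
```

In a Worker you might call it as isRateLimited(env.MY_KV, request.headers.get("CF-Connecting-IP")) and return a 429 response when it reports true.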

Quick command reference:
# Create new project
wrangler init my-project
# Run locally
wrangler dev
# Deploy to production
wrangler deploy
# View logs
wrangler tail
# Create KV namespace
wrangler kv namespace create "NAME"
# Create D1 database
wrangler d1 create my-db
# Run D1 migration
wrangler d1 execute my-db --remote --file=migrations/001.sql
# Set secret
wrangler secret put SECRET_NAME

Project structure:

my-worker/
├── src/
│   └── index.js      # Main Worker code
├── migrations/       # D1 database migrations
│   └── 0001_init.sql
├── wrangler.toml     # Cloudflare config
├── package.json      # Node.js dependencies
├── .dev.vars         # Local secrets (don't commit!)
└── .gitignore        # Ignore node_modules, .dev.vars

Minimal wrangler.toml:

name = "my-worker"
main = "src/index.js"
compatibility_date = "2024-01-01"

[observability]
enabled = true

Minimal Worker:

export default {
  async fetch(request, env, ctx) {
    return new Response("Hello World!");
  },
};

Want to go deeper? Check out these official tutorials:

| Video | What You’ll Learn | Duration |
| --- | --- | --- |
| Stateful Apps with Workers | KV caching, D1 databases, external APIs | ~20 min |
| Deploy React to Workers | Migrating existing React apps | ~15 min |
| Build Rust Powered Apps | Workers with Rust, global databases | ~25 min |
| Cloudflare Workflows Intro | Event-driven architecture | ~20 min |

| Platform | Course | Notes |
| --- | --- | --- |
| egghead.io | Introduction to Cloudflare Workers | Free, by Kristian Freeman |
| Cloudflare Labs | Interactive learning path | Hands-on exercises |
| FreeCodeCamp | Build an AI Chatbot Widget | Project-based tutorial |

| Resource | Description |
| --- | --- |
| Official Docs | Comprehensive reference for all Workers features |
| Workers Examples | Official templates for common use cases |
| D1 Tutorials | Database guides (Comments API, Staff Directory) |
| KV Tutorials | Key-value storage guides |
  1. Learn TypeScript - Better autocomplete and fewer bugs
  2. Try Hono - Fast framework built for Workers
  3. Use Drizzle ORM - Type-safe database queries
  4. Build something real - Best way to learn!


Questions? Found an error? Open an issue or contribute to this guide.