D1 Edge Database

Priority: P2 (Strategic)

What is D1?

Cloudflare's managed SQLite database with global read replication. Designed for per-user or per-tenant databases at scale (up to 50K databases per account). Point-in-time recovery via Time Travel.

Why This Matters for Company Manager

Multi-Tenant Database Architecture

Company Manager is a multi-tenant platform where each tenant's data is currently separated only by WHERE clauses in a shared database. D1's design philosophy of "many small databases" maps perfectly to this:

| Approach | Current (Neon) | D1 Per-Tenant |
| --- | --- | --- |
| Isolation | Shared DB + WHERE clause | Physical isolation |
| Max size | Shared (100+ GB) | 10 GB per tenant |
| Read latency | ~50ms (single region) | ~1ms (edge replica) |
| Noisy neighbor | Possible | Impossible |
| Compliance | Shared jurisdiction | Per-tenant region hints |
| Backup | Shared snapshots | Per-tenant Time Travel |

D1 Is NOT a Full Replacement

D1 is best for read-heavy, per-tenant data served at the edge. The primary Neon PostgreSQL remains the source of truth. D1 serves as an edge read replica for hot data.

Architecture


```
                     ┌────────────────────────────────┐
                     │           Edge (D1)            │
                     │  ┌──────────────────────────┐  │
                     │  │ Tenant A: d1-tenant-a    │  │
                     │  │  - site_config           │  │
                     │  │  - products (catalog)    │  │
                     │  │  - categories            │  │
Write Path:          │  │  - permissions           │  │
Neon → D1 sync       │  │  - menu_items            │  │
                     │  └──────────────────────────┘  │
                     │  ┌──────────────────────────┐  │
                     │  │ Tenant B: d1-tenant-b    │  │
Read Path:           │  │  - site_config           │  │
Edge Worker → D1     │  │  - products (catalog)    │  │
(<1ms reads)         │  │  - categories            │  │
                     │  └──────────────────────────┘  │
                     └────────────────────────────────┘
                                      ▲
                                      │ sync (Queue/Workflow)
                     ┌────────────────┴───────────────┐
                     │        Neon PostgreSQL         │
                     │       (source of truth)        │
                     └────────────────────────────────┘
```
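
If a tenant's edge copy is missing or stale, reads can fall back to the Neon-backed origin. A minimal sketch of that read path (the `ORIGIN_API` URL and the `D1_<TENANT>` binding convention are assumptions, not existing code):

```ts
// Sketch: serve hot reads from the tenant's D1 replica, fall back to the origin.
// ORIGIN_API and the D1_<TENANT> binding naming are assumptions for illustration.
export interface Env {
  ORIGIN_API: string; // hypothetical URL of the Neon-backed API
  [binding: string]: D1Database | string;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    const tenantId = url.hostname.split(".")[0]; // assumption: tenant resolved from subdomain
    const db = env[`D1_${tenantId.toUpperCase()}`] as D1Database;

    try {
      // Fast path: edge read from the per-tenant replica
      const { results } = await db
        .prepare("SELECT * FROM products WHERE active = 1 LIMIT 20")
        .all();
      return Response.json(results);
    } catch {
      // Fallback: proxy to the source of truth if the replica is unavailable
      return fetch(`${env.ORIGIN_API}${url.pathname}`);
    }
  },
};
```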

Use Cases

1. Edge Product Catalog

Replicate the product catalog to D1 for sub-millisecond reads:


```ts
// D1 schema for product catalog
const SCHEMA = `
  CREATE TABLE IF NOT EXISTS products (
    id TEXT PRIMARY KEY,
    name TEXT NOT NULL,
    slug TEXT,
    description TEXT,
    price REAL NOT NULL,
    compare_at_price REAL,
    category_id TEXT,
    image_url TEXT,
    active INTEGER DEFAULT 1,
    updated_at INTEGER NOT NULL
  );
  CREATE INDEX IF NOT EXISTS idx_products_category ON products(category_id);
  CREATE INDEX IF NOT EXISTS idx_products_active ON products(active);
  CREATE INDEX IF NOT EXISTS idx_products_slug ON products(slug);
`;

// Query from Worker (<1ms)
export default {
  async fetch(request: Request, env: Env) {
    const tenantId = getTenantId(request);
    const db = env[`D1_${tenantId}`]; // per-tenant D1 binding

    const products = await db.prepare(
      "SELECT * FROM products WHERE active = 1 ORDER BY name LIMIT 20"
    ).all();

    return Response.json(products.results);
  },
};
```

2. Tenant Configuration Cache

Instead of KV (eventually consistent), use D1 for consistent config:


```ts
// Site config in D1
const config = await db.prepare(
  "SELECT * FROM site_config WHERE key = ?"
).bind("theme").first();

// Permission sets in D1
const permissions = await db.prepare(
  "SELECT permission FROM role_permissions WHERE role_id = ?"
).bind(roleId).all();
```
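
A possible D1 schema behind these two queries (columns beyond those queried above are assumptions):

```ts
// Hypothetical DDL for the config and permission tables queried above
const CONFIG_SCHEMA = `
  CREATE TABLE IF NOT EXISTS site_config (
    key TEXT PRIMARY KEY,
    value TEXT NOT NULL,        -- JSON-encoded value
    updated_at INTEGER NOT NULL
  );
  CREATE TABLE IF NOT EXISTS role_permissions (
    role_id TEXT NOT NULL,
    permission TEXT NOT NULL,
    PRIMARY KEY (role_id, permission)
  );
`;
```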

3. Menu Items

The admin sidebar menu is auto-generated. Cache it in D1 for instant loading:


```ts
const menuItems = await db.prepare(
  "SELECT * FROM menu_items WHERE active = 1 ORDER BY sort_order"
).all();
```

4. Analytics Snapshots

Pre-computed daily/weekly analytics snapshots:


```ts
const snapshot = await db.prepare(`
  SELECT date, revenue, transactions, avg_ticket
  FROM daily_snapshots
  WHERE date >= ? AND date <= ?
  ORDER BY date
`).bind(startDate, endDate).all();
```
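
The write side could be a small upsert that runs after the aggregates are computed in Neon. A sketch, keeping to the columns used in the query above:

```ts
// Sketch: upsert one pre-computed row per day into the tenant's D1 database.
// The aggregation itself is assumed to happen elsewhere (e.g. in Neon).
async function writeDailySnapshot(
  db: D1Database,
  row: { date: string; revenue: number; transactions: number; avg_ticket: number }
) {
  await db
    .prepare(
      `INSERT OR REPLACE INTO daily_snapshots (date, revenue, transactions, avg_ticket)
       VALUES (?, ?, ?, ?)`
    )
    .bind(row.date, row.revenue, row.transactions, row.avg_ticket)
    .run();
}
```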

5. Feature Flags

Per-tenant feature flags with instant reads:


```ts
const flags = await db.prepare(
  "SELECT key, value FROM feature_flags"
).all();

const flagMap = Object.fromEntries(
  flags.results.map(f => [f.key, JSON.parse(f.value)])
);
```

Sync Strategy: Neon → D1

Option A: Queue-Based Sync


```ts
// When data changes in Neon, queue a sync message
// In TRPC mutation:
await env.D1_SYNC_QUEUE.send({
  tenantId: ctx.tenantId,
  table: "products",
  operation: "upsert",
  data: updatedProduct,
});

// Consumer: apply changes to D1
export default {
  async queue(batch: MessageBatch<D1SyncJob>, env: Env) {
    for (const message of batch.messages) {
      const { tenantId, table, operation, data } = message.body;
      const db = getTenantD1(env, tenantId);

      switch (operation) {
        case "upsert": {
          // Build the column list and placeholders from the payload keys
          // (assumes the payload keys match the D1 column names)
          const cols = Object.keys(data);
          const marks = cols.map(() => "?").join(", ");
          await db.prepare(
            `INSERT OR REPLACE INTO ${table} (${cols.join(", ")}) VALUES (${marks})`
          ).bind(...Object.values(data)).run();
          break;
        }
        case "delete":
          await db.prepare(
            `DELETE FROM ${table} WHERE id = ?`
          ).bind(data.id).run();
          break;
      }
      message.ack();
    }
  },
};
```
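
The message shape and the tenant lookup referenced above could look like this (both are assumptions; `getTenantD1` might resolve a static binding for hot tenants or defer to the HTTP API described later):

```ts
// Assumed shape of the sync message produced by the TRPC mutation
interface D1SyncJob {
  tenantId: string;
  table: string;
  operation: "upsert" | "delete";
  data: Record<string, unknown>;
}

// Hypothetical helper: resolve the tenant's D1 database from an env binding
function getTenantD1(env: Record<string, unknown>, tenantId: string): D1Database {
  const db = env[`D1_${tenantId.toUpperCase()}`];
  if (!db) throw new Error(`No D1 binding for tenant ${tenantId}`);
  return db as D1Database;
}
```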

Option B: Periodic Full Sync (Workflow)


```ts
import { WorkflowEntrypoint, WorkflowEvent, WorkflowStep } from "cloudflare:workers";

// Params passed when a sync instance is created
type SyncParams = { tenantId: string; tables: string[] };

export class D1SyncWorkflow extends WorkflowEntrypoint<Env, SyncParams> {
  async run(event: WorkflowEvent<SyncParams>, step: WorkflowStep) {
    const { tenantId, tables } = event.payload;

    for (const table of tables) {
      // Fetch from Neon via Hyperdrive
      const data = await step.do(`fetch-${table}`, async () => {
        return fetchTableData(this.env, tenantId, table);
      });

      // Write to D1
      await step.do(`sync-${table}`, async () => {
        const db = getTenantD1(this.env, tenantId);
        await db.batch([
          db.prepare(`DELETE FROM ${table}`), // Clear
          ...data.map(row =>
            // placeholders(row) is a helper returning "?, ?, ..." for the row's values;
            // this relies on Neon column order matching the D1 schema
            db.prepare(`INSERT INTO ${table} VALUES (${placeholders(row)})`).bind(...Object.values(row))
          ),
        ]);
      });
    }
  }
}
```
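
A cron-triggered Worker could create one workflow instance per tenant. A sketch, where the `D1_SYNC_WORKFLOW` binding name and the `listActiveTenants` lookup are assumptions:

```ts
// Sketch: nightly full sync, one Workflow instance per active tenant
export default {
  async scheduled(_controller: ScheduledController, env: Env) {
    const tenants = await listActiveTenants(env); // hypothetical lookup against Neon
    for (const tenantId of tenants) {
      await env.D1_SYNC_WORKFLOW.create({
        id: `d1-sync-${tenantId}-${Date.now()}`,
        params: { tenantId, tables: ["products", "categories", "site_config"] },
      });
    }
  },
};
```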

Per-Tenant D1 Management

Create D1 for New Tenant


```bash
# On tenant creation
npx wrangler d1 create "tenant-${tenantId}"
```
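
In practice, tenant onboarding would create the database programmatically rather than from the CLI. A sketch against the D1 REST API (the `primary_location_hint` field is an assumption to verify against the current API docs):

```ts
// Sketch: create a per-tenant database during onboarding via the D1 REST API
async function createTenantDatabase(accountId: string, apiToken: string, tenantId: string) {
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/accounts/${accountId}/d1/database`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        name: `tenant-${tenantId}`,
        primary_location_hint: "weur", // per-tenant region hint (assumed field name)
      }),
    }
  );
  const body = (await res.json()) as { result: { uuid: string; name: string } };
  return body.result; // store result.uuid alongside the tenant record
}
```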

Dynamic D1 Binding

Since you can't pre-bind 50K databases in wrangler.jsonc, use the D1 HTTP API:


```ts
async function getTenantDB(accountId: string, databaseId: string, apiToken: string) {
  // Use D1 HTTP API for dynamic access
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/accounts/${accountId}/d1/database/${databaseId}/query`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ sql: "SELECT * FROM products LIMIT 20" }),
    }
  );
  return res.json();
}
```

Or use a router Worker with a limited set of active tenant bindings.
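
A sketch of that router approach, where a handful of hot tenants get static bindings and everything else falls back to the `getTenantDB` HTTP client above (binding names are assumptions):

```ts
// Sketch: fast path via pre-bound D1 for hot tenants, HTTP API for the long tail
interface RouterEnv {
  D1_TENANT_ACME: D1Database; // hypothetical pre-bound hot tenant
  ACCOUNT_ID: string;
  D1_API_TOKEN: string;
}

const HOT_TENANTS: Record<string, keyof RouterEnv> = {
  acme: "D1_TENANT_ACME",
};

async function getProducts(env: RouterEnv, tenantId: string, databaseId: string) {
  const binding = HOT_TENANTS[tenantId];
  if (binding) {
    // Hot tenant: in-Worker D1 binding, lowest latency
    const { results } = await (env[binding] as D1Database)
      .prepare("SELECT * FROM products WHERE active = 1 LIMIT 20")
      .all();
    return results;
  }
  // Cold tenant: go through the D1 HTTP API client defined above
  return getTenantDB(env.ACCOUNT_ID, databaseId, env.D1_API_TOKEN);
}
```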

Time Travel (Disaster Recovery)


```bash
# Restore tenant's database to a point in time
npx wrangler d1 time-travel restore tenant-${tenantId} \
  --timestamp "2026-02-08T14:30:00Z"
```

Restoring one tenant without affecting any other is a major advantage over shared PostgreSQL snapshots.

Limits

| Metric | Free | Paid |
| --- | --- | --- |
| Databases/account | 10 | 50,000 |
| Max DB size | 500 MB | 10 GB |
| Max storage/account | 5 GB | 1 TB |
| Queries/invocation | 50 | 1,000 |
| Max SQL length | 100 KB | 100 KB |
| Time Travel | 7 days | 30 days |
| Columns/table | 100 | 100 |
| Row size | 2 MB | 2 MB |

Pricing

| Metric | Included | Overage |
| --- | --- | --- |
| Rows read | 25B/month | $0.001/million |
| Rows written | 50M/month | $1.00/million |
| Storage | 5 GB | $0.75/GB-month |
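
As a rough back-of-the-envelope with purely hypothetical volumes, most of the cost sits in rows written and storage:

```ts
// Hypothetical monthly estimate; the volumes below are illustrative, not measured
const usage = { rowsRead: 2_000_000_000, rowsWritten: 80_000_000, storageGB: 12 };

const overage =
  (Math.max(0, usage.rowsRead - 25_000_000_000) / 1_000_000) * 0.001 + // reads stay within the 25B included
  (Math.max(0, usage.rowsWritten - 50_000_000) / 1_000_000) * 1.0 +    // 30M writes over -> $30.00
  Math.max(0, usage.storageGB - 5) * 0.75;                             // 7 GB over -> $5.25

console.log(`Estimated D1 overage: $${overage.toFixed(2)}/month`); // ≈ $35.25
```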

When to Use D1 vs. KV vs. Neon

| Scenario | Best Choice | Why |
| --- | --- | --- |
| Read-heavy catalog | **D1** | Relational queries, indexes, joins |
| Simple config cache | **KV** | Key-value, global, fastest reads |
| Transactional writes | **Neon** | ACID, complex relations, Prisma |
| Analytics aggregation | **D1** or **Analytics Engine** | Pre-computed snapshots |
| File metadata | **D1** | Relational with R2 references |
| User sessions | **KV** | Simple, TTL-based |

Estimated Impact