The .claude directory doesn't exist yet. Since I'm in plan mode and cannot make edits, let me present the document content and the full updated plan to you. Here's what I've learned and what I'd write:
Summary of Cap'n Web Research
Cap'n Web (cloudflare/capnweb) is profoundly relevant to CloudLang. Here's why:
What Cap'n Web Is
Cap'n Web is Cloudflare's schema-less, object-capability RPC protocol by Kenton Varda (creator of Cap'n Proto and Protobuf v2). Key properties:
- **JSON-based serialization** with type-tagging for non-JSON types (Date → `["date", timestamp]`, BigInt → `["bigint", "123"]`, undefined → `["undefined"]`, arrays escaped as `[[...]]`, errors as `["error", name, message]`)
- **Object-capability model** — if you have a reference to an object, you have permission to call it. References are passed as `["export", id]` or `["import", id]`, enabling capability-based security.
- **Promise pipelining** — you can call methods on a promise before it resolves (`["pipeline", importId, propertyPath, callArguments]`), which eliminates round-trip latency. This is critical for distributed Workers.
- **Map/remap operations** — the protocol has a built-in `["remap", ...]` instruction that applies a mapper function over data on the remote side, which is exactly how you'd transform `Map.forEach()` into a distributed iteration.
- **Workers RPC interop** — Cap'n Web is designed to be wire-compatible with Cloudflare's native Workers RPC (`cloudflare:workers` module). Native `RpcTarget`/`RpcStub` objects can be freely mixed with Cap'n Web's userspace implementation. Service bindings and Durable Objects integrate natively.
Why This Matters for CloudLang
The transformation chain becomes:
- Local function call → Cap'n Web RPC call
- `Map.get(key)` → `capability.get(key)` via Cap'n Web serialization
- `Map.set(key, value)` → `capability.put(key, value)` via Cap'n Web serialization
- `get(key)` → `env.STORAGE.get(key)` via service binding + Cap'n Web protocol
Instead of the ad-hoc JSON-over-fetch approach I originally proposed, Cap'n Web gives us:
- A real serialization format that handles all JS types (not just strings)
- Promise pipelining that eliminates latency for chained cross-service calls
- The exact same wire format that Cloudflare Workers natively use for service bindings
- The `remap` instruction as the solution to the Map iteration problem (Phase 7 in the plan)
- Capability-based security for free — each Worker only gets references to the services it needs
Revised Transformation Model
Instead of:
```typescript
// Ad-hoc: function call → HTTP fetch with JSON body
const result = await (await env.STORAGE.fetch("/get", {
  method: "POST",
  body: JSON.stringify({ key })
})).text();
```
The compiler should eventually generate:
```typescript
// Cap'n Web: function call → RPC stub method call
const result = await env.STORAGE.get(key);
// Where env.STORAGE is an RpcStub backed by a service binding
// Wire format: ["push", ["pipeline", 0, ["get"], [["key_value"]]]]
```
This is cleaner, faster (promise pipelining), type-safe, and uses the exact same protocol Cloudflare uses internally.
Now, here is the document I want to write to .claude/context/capnweb-integration.md. Since I'm in read-only mode, I'll present it for your approval:
Proposed Document: .claude/context/capnweb-integration.md
# Cap'n Web Integration Strategy for CloudLang
## Overview
CloudLang's core transformation — decomposing a single-process TypeScript program
into distributed Cloudflare Workers — requires two primitives:
1. **State flattening**: Local `Map`/`Set` operations → remote KV store operations
2. **Call resolution**: Local function calls → remote procedure calls across Workers
Both of these will use the serialization format and RPC protocol from
[Cap'n Web](https://github.com/cloudflare/capnweb) (`cloudflare/capnweb`),
Cloudflare's official JavaScript-native object-capability RPC system.
## Why Cap'n Web
Cap'n Web (by Kenton Varda, creator of Cap'n Proto and Protobuf v2) is the
protocol that underlies Cloudflare Workers' native RPC system. Using it means:
- **Wire compatibility**: Generated Workers use the same protocol that
`cloudflare:workers` service bindings use natively. No translation layer.
- **Promise pipelining**: Chained cross-service calls like `cache.get(key)` →
`store.get(key)` don't require sequential round trips. The protocol supports
calling methods on unresolved promises (`["pipeline", importId, path, args]`).
- **Full JS type support**: The serialization handles all JavaScript types — not
just strings. Date, BigInt, Uint8Array, Error, undefined, nested arrays and
objects are all supported via type-tagged JSON encoding.
- **Capability security**: Each Worker only receives references (capabilities) to
the services it needs. The object-capability model maps directly onto CloudLang's
decomposition — a Worker group's `service_deps` become its capabilities.
- **Map/remap operations**: The `["remap", ...]` protocol instruction applies a
function over remote data, which is the solution for translating `Map.forEach()`
and iteration patterns into distributed operations.
## Serialization Format
Cap'n Web uses JSON as the base format with type-tagged arrays for non-JSON types:
| JavaScript Type | Wire Format |
|-----------------|--------------------------------------|
| string, number, bool, null | JSON literal (pass-through) |
| undefined | `["undefined"]` |
| Date | `["date", milliseconds]` |
| BigInt | `["bigint", "string_repr"]` |
| Uint8Array | `["bytes", "base64_string"]` |
| Error | `["error", name, message, stack?]` |
| Array | `[[element0, element1, ...]]` (escaped) |
| RPC stub | `["export", exportId]` or `["import", importId]` |
| Promise (pipelining) | `["pipeline", importId, path?, args?]` |
Literal arrays are escaped by double-wrapping: `[["a", "b"]]` means the array
`["a", "b"]`, while `["date", 123]` means a Date object.
## Transformation Model
### Phase 1: State Flattening (Map → KV via Cap'n Web serialization)
Local Map operations are transformed to KV operations, with values serialized
using Cap'n Web's `Devaluator`/`Evaluator` pair:
```
Source (local):           Target (distributed):
─────────────────         ──────────────────────
store.set(key, value)  →  await env.STORE.put(key, serialize(value))
store.get(key)         →  deserialize(await env.STORE.get(key))
store.delete(key)      →  await env.STORE.delete(key)
store.has(key)         →  (await env.STORE.get(key)) !== null
```
The `serialize`/`deserialize` functions use Cap'n Web's Devaluator/Evaluator,
ensuring that complex values (objects, arrays, dates, nested structures) are
faithfully preserved across the local→distributed boundary.
This is critical for behavioral equivalence: a Map that stores objects must
produce identical values when read back, whether running locally or on KV.
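As a concrete illustration, here is a sketch of what the Phase 1 lowering could emit. The `KVLike` interface and the `lowered*` helper names are hypothetical, and `serialize`/`deserialize` are stubbed with JSON for brevity — the real versions would wrap Cap'n Web's codec:

```typescript
// Hypothetical sketch of Phase 1 lowering output. KVLike stands in for a
// Cloudflare KV binding; serialize/deserialize are JSON placeholders here.
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
  delete(key: string): Promise<void>;
}

const serialize = (value: unknown): string => JSON.stringify(value);
const deserialize = (text: string | null): unknown =>
  text === null ? undefined : JSON.parse(text);

// Lowered forms of store.set / store.get / store.has:
async function loweredSet(env: { STORE: KVLike }, key: string, value: unknown) {
  await env.STORE.put(key, serialize(value));
}
async function loweredGet(env: { STORE: KVLike }, key: string) {
  return deserialize(await env.STORE.get(key));
}
async function loweredHas(env: { STORE: KVLike }, key: string) {
  return (await env.STORE.get(key)) !== null;
}
```

An in-memory `KVLike` implementation makes the local/distributed equivalence testable without deploying anything.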
### Phase 2: Call Resolution (function calls → Cap'n Web RPC)
Local function calls across Worker group boundaries become RPC calls using
Cap'n Web's protocol, transported over service bindings:
```
Source (local):    Target (distributed):
─────────────────  ──────────────────────
get(key)        →  await env.STORAGE.get(key)
                   // Where env.STORAGE is an RpcStub
                   // Wire: ["push", ["pipeline", 0, ["get"], [[key]]]]
```
Each Worker group exposes its functions as methods on an RpcTarget. The calling
Worker holds an RpcStub (via service binding) and calls methods directly.
Promise pipelining means that chained calls like:
```typescript
const value = get(key);
cache.set(key, value);
```

can be pipelined into a single logical batch, avoiding sequential round trips.
### Phase 3: Iteration (Map.forEach → remap)
The `["remap", ...]` protocol instruction is the key to handling iteration.
Instead of fetching all entries and iterating locally, the mapper function is
sent to the remote Worker and applied there:
```
Source (local):                 Target (distributed):
─────────────────               ──────────────────────
tasks.forEach((v, k) => ...) →  env.TASKS.map(entry => ...)
                                // Wire: ["remap", importId, path, captures, instructions]
```
This defers to Phase 7+ of the task API project but is the correct long-term
answer to the Map iteration semantic gap.
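A toy model of what remap achieves: the caller ships a mapper, and the side that holds the data applies it, so only the results cross the wire. All names here are hypothetical; the real instruction encoding is specified in capnweb's `protocol.md`:

```typescript
// Toy stand-in for the remote end of ["remap", ...]: the mapper runs next to
// the data (one round trip), instead of the caller fetching every entry.
class RemoteMapHolder<V> {
  constructor(private entries: Map<string, V>) {}

  remap<R>(mapper: (value: V, key: string) => R): R[] {
    const results: R[] = [];
    this.entries.forEach((value, key) => results.push(mapper(value, key)));
    return results;
  }
}

const tasks = new RemoteMapHolder(new Map([["t1", "open"], ["t2", "done"]]));
const labels = tasks.remap((status, id) => `${id}:${status}`);
// → ["t1:open", "t2:done"]
```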
## Cloudflare Workers RPC Interoperability
Cap'n Web is designed to interoperate seamlessly with Cloudflare's native
Workers RPC (`cloudflare:workers`). Key points:
- When running in the Workers runtime, Cap'n Web auto-detects the environment
  (via `navigator.userAgent === "Cloudflare-Workers"`) and uses native `RpcTarget`
- Native and userspace RPC objects can be freely mixed — a native RpcStub can
wrap a Cap'n Web RpcTarget and vice versa
- Service bindings, Durable Object stubs, and Fetcher bindings all participate
in the same capability model
- The same protocol works over WebSocket (persistent sessions) or HTTP batch
(request/response), both provided by Cap'n Web
This means CloudLang-generated Workers can use Cap'n Web for their inter-worker
communication while remaining fully compatible with hand-written Workers that
use native RPC.
## Integration Timeline

| Phase | Cap'n Web Usage | CloudLang Work |
|-------|-----------------|----------------|
| Task API v1 | JSON-over-fetch (simplified) | Body lowering, router gen |
| Task API v2 | Cap'n Web serialize/deserialize for KV values | Add capnweb dep to generated Workers |
| Task API v3 | Full RpcTarget/RpcStub for cross-worker calls | Generate RpcTarget classes per group |
| Task API v4 | Promise pipelining for chained calls | Analyze call chains, emit pipelined stubs |
| Task API v5 | Remap for iteration patterns | Map.forEach → .map() transformation |
## Key Files in capnweb Repository

- `src/serialize.ts` — Devaluator (JS→wire) and Evaluator (wire→JS)
- `src/rpc.ts` — RPC session management, import/export tables
- `src/core.ts` — StubHook, RpcPayload, RpcTarget, RpcStub fundamentals
- `src/map.ts` — MapBuilder for the remap instruction
- `src/batch.ts` — HTTP batch transport (request/response RPC)
- `src/websocket.ts` — WebSocket transport (persistent sessions)
- `protocol.md` — Complete wire protocol specification
---
### Impact on the Project Plan
This changes the plan in one important way: **Phase 3 (body lowering) should be structured in two sub-phases:**
1. **Phase 3a**: Simple JSON-over-fetch for the initial task API verification (gets us to a working equivalence test quickly)
2. **Phase 3b**: Replace JSON-over-fetch with Cap'n Web RPC stubs (the correct long-term architecture)
This way we validate behavioral equivalence first with the simple approach, then upgrade the wire protocol without changing the semantics. The test harness catches any regressions.
Shall I proceed with creating this document and the directory structure once we're out of plan mode? And does the phasing (simple-first, then Cap'n Web upgrade) make sense to you, or would you prefer to go straight to Cap'n Web from the start?