A lil' demo of a sync engine that can run fully serverless using Cloudflare primitives.
dqnamo/serverless-sync

[Interactive demo: a room ID plus live latency metrics (last / avg / max) for local state updates, local DB writes, and upstream DB writes.]
This is a local-first chat. When you send a message, it appears right away because it is written to SQLite in your browser first. From there it waits in a small outbox. A worker picks it up, writes it to the real database, and then the change shows up for everyone else in the room.
I love local-first applications. Once you try building apps that feel instant and keep working offline, you don't really want to go back. The hard part is sync. If you're new to local-first apps and sync engines, this essay is a good place to start: Sync engines are the future. I kept wondering why sync engines usually need a server that is always running. A big reason is the database: something has to watch for changes, remember its place, and tell clients what changed. But a lot of that work is really small durable state: sessions, subscriptions, cursors, queued writes. Cloudflare has primitives for those pieces, so I wanted to see how far you could get without owning a server at all.
The design splits the job into small parts. The browser keeps a local copy. Your app decides what a user is allowed to read or write. Workers move data. Durable Objects remember live sessions. The upstream database stays the source of truth.
Client
- Client SDK: Reads data, writes locally, and sends queued work later.
- Local Database (SQLite WASM): Keeps rows, cursors, and pending writes in the browser.

Server
- Query Callback: Asks your app if a read should be allowed.
- Mutation Callback: Asks your app if a queued write should be allowed.

Durable Objects
- Client Sessions: Remembers each connected client and what it can do.
- Query Subscriptions: Tells active clients when their data may have changed.

Workers
- DB Connector Worker: Checks permissions and moves changes to the database.

Upstream Database
- PostgreSQL or D1: The source of truth for rows and the change log.
The sync model is the contract. It says which tables exist locally, which reads are allowed by name, which writes are allowed by name, and what each operation must prove before it can run.
sync/model.ts
```ts
const model = createSyncModel({
  schema: defineSyncSchema({
    collections: {
      messages: {
        primaryKey: "id",
        columns: {
          id: "text",
          room_id: "text",
          author_name: "text",
          body: "text",
          created_at: "integer",
          last_modified_by: "text",
        },
      },
    },
  }),
  queries: [
    defineQuery({
      name: "roomMessages",
      rootCollection: "messages",
      requiredPredicates: [
        { arg: "roomId", column: "room_id", operator: "=" },
      ],
      normalizeArgs: (args) => ({ roomId: args.roomId }),
      buildOperation: ({ roomId }) =>
        db.selectFrom("messages").where("room_id", "=", roomId),
    }),
  ],
  mutators: [
    defineMutator({
      name: "sendMessage",
      collection: "messages",
      kind: "insert",
      requiredValues: [{ arg: "roomId", column: "room_id" }],
      normalizeArgs: ({ roomId, authorName, body }) => ({
        roomId,
        authorName,
        body,
      }),
      buildOperation: (args, { clientId }) =>
        db.insertInto("messages").values({
          room_id: args.roomId,
          author_name: args.authorName,
          body: args.body,
          last_modified_by: clientId,
        }),
    }),
  ],
})
```

In React, you do not call the database directly. You ask for a named query, and you call a named mutator. Reads stay live. Writes feel instant because they happen locally first.
RoomMessages.tsx
```tsx
function RoomMessages({ roomId }) {
  const { data: messages } = useQuery("roomMessages", { roomId })
  const sendMessage = useMutator("sendMessage")

  return (
    <form
      onSubmit={(event) => {
        event.preventDefault()
        sendMessage({
          roomId,
          authorName: "Ada",
          body: event.currentTarget.message.value,
        })
      }}
    >
      {messages.map((message) => (
        <p key={message.id}>{message.body}</p>
      ))}
      <input name="message" />
      <button type="submit">Send</button>
    </form>
  )
}
```

A query starts with the client asking for a named read, like the messages in this room. The connector checks that the read is allowed, runs it against the upstream database, stores the result in local SQLite, and keeps the client subscribed so future changes can refresh the query.
Query flow: Client → DB Connector Worker → Query Callback → Upstream Database
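The query path above can be sketched as a small in-memory pipeline. Everything here is hypothetical stand-in code, not the demo's actual implementation: `authorize` stands in for the query callback, `upstream` for the database read, and plain `Map`s for the local SQLite cache and the subscription registry.

```typescript
type QueryRequest = { clientId: string; name: string; args: Record<string, string> }

// Stand-ins for the real pieces: the query callback and the upstream read.
type Authorize = (req: QueryRequest) => Promise<{ allow: boolean; reason?: string }>
type Upstream = (name: string, args: Record<string, string>) => Promise<unknown[]>

class Connector {
  private cache = new Map<string, unknown[]>()           // local SQLite stand-in
  private subscriptions = new Map<string, Set<string>>() // query key -> client ids

  constructor(private authorize: Authorize, private upstream: Upstream) {}

  async handleQuery(req: QueryRequest): Promise<unknown[]> {
    // 1. Ask the app whether this named read is allowed.
    const verdict = await this.authorize(req)
    if (!verdict.allow) throw new Error(verdict.reason ?? "query denied")

    // 2. Run it against the upstream database.
    const rows = await this.upstream(req.name, req.args)

    // 3. Store the result locally and keep the client subscribed,
    //    so future changes can refresh this query.
    const key = `${req.name}:${JSON.stringify(req.args)}`
    this.cache.set(key, rows)
    const subs = this.subscriptions.get(key) ?? new Set<string>()
    subs.add(req.clientId)
    this.subscriptions.set(key, subs)
    return rows
  }
}
```

The point of the shape is the ordering: authorization happens before any data moves, and subscription registration happens only after a read succeeds.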
Your app still owns auth. The connector forwards the user's auth header, the query name, and the cleaned-up args. Your server can then answer a simple question: should this user be allowed to make this read?
Query Request

```json
{
  "headers": {
    "authorization": "Bearer <end-user-jwt>",
    "content-type": "application/json; charset=utf-8",
    "x-sync-engine-secret": "<callback-shared-secret>"
  },
  "body": {
    "args": { "roomId": "room_01HYS7K7NQ5VQJ4Y3BT9PZ7M0D" },
    "client_id": "client_01HYS7J9GZ8P6VA3K3Z4X5M2K1",
    "operation_name": "roomMessages",
    "operation_kind": "query",
    "session_id": "session_01HYS7K0NV8NC2D3AEX4N6V1BT"
  }
}
```

Approved Response

```json
{ "allow": true }
```

Denied Response

```json
{ "allow": false, "reason": "query operation is not allowlisted" }
```

A mutation goes the other way. The client applies the change locally, puts the operation in an outbox, and sends it to the connector when it can. The connector asks your app if the write is allowed. If it is, the connector commits it upstream and sends the confirmed version back down.
Mutation flow: Client → Local Database → DB Connector Worker → Mutation Callback → Upstream Database
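A minimal sketch of the client-side outbox half of that flow. All names here are mine, not the demo's: `enqueue` models the instant local write being queued, and `send` stands in for the round-trip to the connector, which answers with the ids the mutation callback approved.

```typescript
type OutboxEntry = {
  id: string
  operationName: string
  args: Record<string, string>
  status: "pending" | "confirmed" | "rejected"
}

// A minimal client-side outbox: writes are applied locally first, queued
// here, then drained through the connector when a connection exists.
class Outbox {
  private entries: OutboxEntry[] = []

  enqueue(operationName: string, args: Record<string, string>): OutboxEntry {
    const entry: OutboxEntry = {
      id: `outbox_${this.entries.length + 1}`,
      operationName,
      args,
      status: "pending",
    }
    this.entries.push(entry)
    return entry
  }

  // `send` stands in for the connector round-trip; it resolves to the set
  // of outbox ids the mutation callback approved.
  async drain(send: (batch: OutboxEntry[]) => Promise<Set<string>>): Promise<void> {
    const pending = this.entries.filter((e) => e.status === "pending")
    if (pending.length === 0) return
    const approved = await send(pending)
    for (const entry of pending) {
      entry.status = approved.has(entry.id) ? "confirmed" : "rejected"
    }
  }

  get pending(): number {
    return this.entries.filter((e) => e.status === "pending").length
  }
}
```

Because the outbox keeps per-operation status, a rejected write can be rolled back locally without disturbing writes that were approved in the same batch.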
Mutation auth works like query auth, except the payload is a list of queued writes. Your app can allow one write, reject one write, or reject the batch. The sync layer does not have to know your product rules.
Mutation Request

```json
{
  "headers": {
    "authorization": "Bearer <end-user-jwt>",
    "content-type": "application/json; charset=utf-8",
    "x-sync-engine-secret": "<callback-shared-secret>"
  },
  "body": {
    "client_id": "client_01HYS7J9GZ8P6VA3K3Z4X5M2K1",
    "operation_kind": "mutation",
    "operations": [
      {
        "args": {
          "authorName": "Ada",
          "body": "hello",
          "roomId": "room_01HYS7K7NQ5VQJ4Y3BT9PZ7M0D"
        },
        "collection": "messages",
        "id": "outbox_01HYS8M2Q0B5W0NY5H3A6E9D2R",
        "mutation_kind": "insert",
        "operation_name": "sendMessage",
        "record_id": "message_01HYS8M33DK68VQT6C9X4T5H0A"
      }
    ],
    "session_id": "session_01HYS7K0NV8NC2D3AEX4N6V1BT"
  }
}
```

Approved Response

```json
{
  "operations": [
    { "allow": true, "id": "outbox_01HYS8M2Q0B5W0NY5H3A6E9D2R" }
  ]
}
```

Denied Response

```json
{ "allow": false, "reason": "mutation operation is not allowlisted" }
```

Conflict resolution is where sync gets serious. This demo uses the smallest rule that can work: the latest accepted write wins. If two clients change the same thing at the same time, whichever write reaches the upstream database last becomes the version everyone sees.
This is still a small demo. It does not try to solve every sync problem. Real apps need sharper answers for merges, migrations, expired auth while offline, large files, and stranger query shapes. Those problems are not magic, but they are product decisions.
There is also one important constraint: writes have to pass through the connector. This demo is not tailing a native Postgres changelog, so the connector has to record the change, move the cursor forward, and tell live queries to refresh. On Cloudflare this feels fairly natural, because Postgres traffic often goes through Hyperdrive anyway.
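Because every write passes through the connector, it can maintain the change log itself. A minimal sketch of that bookkeeping, with all names illustrative rather than taken from the demo: append an entry, advance a monotonic cursor, and let each client ask for everything after the cursor it last saw.

```typescript
type Change = { seq: number; collection: string; recordId: string }

// Connector-maintained change log: since writes cannot bypass the
// connector, it can assign sequence numbers itself instead of tailing
// a native Postgres changelog.
class ChangeLog {
  private log: Change[] = []
  private seq = 0

  record(collection: string, recordId: string): Change {
    const change = { seq: ++this.seq, collection, recordId }
    this.log.push(change)
    return change
  }

  // A client hands back its cursor and receives everything after it,
  // which is what tells its live queries to refresh.
  since(cursor: number): Change[] {
    return this.log.filter((c) => c.seq > cursor)
  }
}
```

This is the piece an always-on sync server usually owns; here it is just durable state that a Worker and a Durable Object can carry between invocations.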