Pigeon — Building a Browser-Native P2P File Transfer App
An AirDrop-style file transfer that runs entirely in your browser. Files travel directly between devices over encrypted WebRTC Data Channels — the server only brokers a handshake and never sees your data.
1. Motivation
AirDrop is magic — tap a device, drop a file. But it only works inside Apple's walled garden. Tools like LocalSend solve this with native apps, but require installation on every device.
Pigeon takes a different approach: what if the browser itself was the app? No installs. No accounts. No file data ever hitting a server. Just open a URL on two devices and send.
The constraint — browsers cannot speak raw UDP or LAN multicast — makes this a genuinely interesting engineering problem. The solution is WebRTC: a standard that lets browsers establish encrypted peer-to-peer connections, but requires a rendezvous point (a signaling server) to bootstrap them.
2. Architecture Overview
Pigeon has two independently deployed pieces that do very different jobs:
┌─────────────────────────────────────────────────────┐
│ USER'S BROWSER (Device A) │
│ Next.js App → usePigeon hook → WebRTC DataChannel │
└──────────────────────┬──────────────────────────────┘
│ WebSocket (SDP + ICE only, ~1 KB total)
▼
┌──────────────────────┐
│ Signaling Server │ ← routes handshake metadata
│ (signal.ts / WS) │ ← never sees file bytes
└──────────────────────┘
│ WebSocket
▼
┌─────────────────────────────────────────────────────┐
│ USER'S BROWSER (Device B) │
│ Next.js App → usePigeon hook → WebRTC DataChannel │
└─────────────────────────────────────────────────────┘
↕ Direct WebRTC P2P (DTLS encrypted)
Files never leave the device-to-device path

The signaling server exchanges roughly 1 KB of metadata per connection (SDP offer/answer + ICE candidates). After that it is completely out of the picture — the actual data channel is a direct, encrypted tunnel between the two browsers.
3. Signaling Server Design
The signaling server is a standalone Node.js WebSocket server using the ws package. It intentionally does as little as possible:
- Maintains an in-memory registry of connected peers
- Broadcasts `join`/`leave` events to all peers on the same server
- Routes `offer`, `answer`, and `ice-candidate` messages between specific peer pairs
- Exposes a `/healthz` HTTP endpoint for uptime monitoring
- Validates frontend origin via an `ALLOWED_ORIGINS` env var in production
```ts
// Simplified core of signal.ts
const peers = new Map<string, PeerRecord>();

wss.on("connection", (ws) => {
  let peerId: string | null = null; // remembered on "join", used on "close"

  ws.on("message", (raw) => {
    const msg = JSON.parse(raw.toString());
    if (msg.type === "join") {
      peerId = msg.id;
      peers.set(msg.id, { ws, name: msg.name, deviceType: msg.deviceType });
      broadcast({ type: "join", from: msg.id, name: msg.name }, ws);
      ws.send(JSON.stringify({ type: "peer-list", peers: getPeerList() }));
    } else if (msg.to) {
      // Route signal directly to target peer
      peers.get(msg.to)?.ws.send(JSON.stringify({ ...msg, from: msg.from }));
    }
  });

  ws.on("close", () => {
    // Remove peer, broadcast leave
    if (!peerId) return;
    broadcast({ type: "leave", from: peerId });
    peers.delete(peerId);
  });
});
```

4. WebRTC Perfect Negotiation
WebRTC connection setup is notoriously tricky when both sides can initiate simultaneously — an "offer collision". Pigeon implements the Perfect Negotiation pattern from the WebRTC spec to handle this gracefully:
- One peer is designated polite: it rolls back its own offer if it receives one from the remote side simultaneously
- The other is impolite: it simply ignores incoming offers during a collision and lets its own offer win
- `makingOffer` and `ignoreOffer` flags guard the signaling state machine against race conditions
```ts
async handleSignal(msg: SignalMessage) {
  const { type, payload } = msg;
  if (type === "offer" || type === "answer") {
    const offerCollision =
      type === "offer" &&
      (this.makingOffer || this.pc.signalingState !== "stable");
    this.ignoreOffer = !this.isPolite && offerCollision;
    if (this.ignoreOffer) return; // impolite peer ignores during collision
    await this.pc.setRemoteDescription(new RTCSessionDescription(payload));
    if (type === "offer") {
      await this.pc.setLocalDescription();
      this.callbacks.onSignal({ type: "answer", payload: this.pc.localDescription });
    }
  }
}
```

5. Chunked File Transfer
Raw WebRTC Data Channels have a practical message size limit (~256 KB depending on the browser). For large files, Pigeon chunks at 64 KB and streams them with backpressure control:
- Sender sends a JSON `meta` frame (file name, size, MIME type, transfer ID)
- Sender streams `ArrayBuffer` binary chunks using `file.slice(offset, end).arrayBuffer()`
- When `bufferedAmount` exceeds the threshold, sending pauses and resumes on `onbufferedamountlow`
- Sender sends a JSON `done` frame to signal completion
The critical detail: file.slice(offset, end).arrayBuffer() reads only one 64 KB window from disk at a time. RAM usage is constant at ~64 KB regardless of file size — a 10 GB file transfers with no more memory than a 1 KB one.
```ts
const CHUNK_SIZE = 64 * 1024; // 64 KB

const pump = async () => {
  while (offset < total) {
    // Backpressure: pause if send buffer is filling up
    if (channel.bufferedAmount > CHUNK_SIZE * 8) {
      // With the default bufferedAmountLowThreshold of 0, this fires
      // once the send buffer has fully drained
      channel.onbufferedamountlow = () => {
        channel.onbufferedamountlow = null;
        pump(); // resume
      };
      return;
    }
    // Read only this one chunk from disk — never the whole file
    const chunk = await file.slice(offset, offset + CHUNK_SIZE).arrayBuffer();
    channel.send(chunk);
    offset += chunk.byteLength;
    onProgress(transferId, Math.round((offset / total) * 100));
  }
  channel.send(JSON.stringify({ type: "done", id: transferId }));
};
```

6. Streaming Large Files to Disk
On the receiver side, accumulating all chunks in an `ArrayBuffer[]` before triggering a download would exhaust RAM for files over ~500 MB. Pigeon solves this with the File System Access API (Chrome / Edge): chunks stream directly to disk as they arrive.
```ts
// Chrome/Edge: stream straight to disk
const handle = await window.showSaveFilePicker({
  suggestedName: meta.fileName,
  types: [{ description: "File", accept: { [meta.fileType]: [".ext"] } }],
});
const writable = await handle.createWritable();

// Each incoming chunk goes straight to disk — zero RAM accumulation
channel.onmessage = async ({ data }) => {
  if (data instanceof ArrayBuffer) {
    await writable.write(data);
  }
};

// Finalize on "done" frame
await writable.close();
```

As of 2026, Firefox and Safari don't yet support `showSaveFilePicker`. Pigeon falls back gracefully to in-memory buffering with a blob URL auto-download — the UX is identical, just limited to files that fit in RAM.
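A minimal sketch of that fallback path, with hypothetical names (`onChunk`, `assembleBlob`, and `triggerDownload` are illustrative, not Pigeon's actual identifiers):

```ts
// Fallback path: chunks accumulate in memory, then a Blob plus an object
// URL triggers an ordinary browser download.
const received: ArrayBuffer[] = [];

function onChunk(data: ArrayBuffer) {
  received.push(data); // the entire file is held in RAM on this path
}

function assembleBlob(chunks: ArrayBuffer[], mimeType: string): Blob {
  return new Blob(chunks, { type: mimeType });
}

// Browser-only: create a temporary <a download> element and click it.
function triggerDownload(blob: Blob, fileName: string) {
  const url = URL.createObjectURL(blob);
  const a = document.createElement("a");
  a.href = url;
  a.download = fileName;
  a.click();
  URL.revokeObjectURL(url); // release the blob's memory when done
}
```

The RAM ceiling is implicit in `received`: unlike the streaming path, nothing is released until the final Blob is handed to the download.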
There's a subtle race condition: the save picker dialog is async and binary chunks can start arriving from the sender before the picker resolves. Pigeon handles this with a pendingChunks[] queue that drains into the writable stream once the contextReady Promise resolves.
```ts
// Race condition fix: queue chunks arriving before picker resolves
private pendingChunks: ArrayBuffer[] = [];
private contextReady: Promise<void> | null = null;

private handleChunk(data: ArrayBuffer) {
  if (!this.receiveContext) {
    this.pendingChunks.push(data); // queue it
    return;
  }
  // ... write to stream or buffer
}

// In handleControlMessage("meta"):
this.contextReady = this.openWriteStream(meta).then((ctx) => {
  // Drain queued chunks into the now-ready context
  for (const chunk of this.pendingChunks) {
    ctx.writable.write(chunk);
  }
  this.pendingChunks = [];
  this.receiveContext = ctx;
});
```

7. Technical Challenges Solved
Hydration Mismatch
Next.js renders on the server first, then hydrates on the client. crypto.randomUUID() and peer name generation produce different values server-side vs client-side, causing React hydration errors. The fix: defer all identity generation to a useEffect and render null identity until the client is ready.
Non-Secure Context on LAN
Accessing the app at http://192.168.x.x:3000 is a non-secure context, so the browser withholds both crypto.randomUUID and showSaveFilePicker. Pigeon includes a Math.random-based UUID v4 fallback for LAN development, while a production HTTPS deployment gets the full APIs.
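A sketch of such a fallback; `uuidV4Fallback` and `safeRandomUUID` are illustrative names, not Pigeon's actual identifiers:

```ts
// RFC 4122 version-4 UUID built from Math.random, for contexts where
// crypto.randomUUID is unavailable. Not cryptographically strong; fine
// for ephemeral peer IDs during LAN development.
function uuidV4Fallback(): string {
  return "xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx".replace(/[xy]/g, (c) => {
    const r = (Math.random() * 16) | 0;
    // The "y" nibble encodes the RFC 4122 variant bits (10xx → 8..b)
    const v = c === "x" ? r : (r & 0x3) | 0x8;
    return v.toString(16);
  });
}

// Prefer the real API whenever the context provides it
const safeRandomUUID = (): string =>
  typeof crypto !== "undefined" && "randomUUID" in crypto
    ? crypto.randomUUID()
    : uuidV4Fallback();
```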
Stale React Closures Causing Double-Connect
The peer connection factory was a useCallback that closed over identity state. Every time identity changed, a new function reference was created, causing the useEffect that runs the SignalClient to re-execute — disconnecting and reconnecting mid-session with spurious join/leave signals. Fixed by reading identity from a useRef instead, breaking the dependency chain.
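The failure mode can be reproduced outside React. In this sketch, `makeReader` plays the role of a useCallback that captured state from an old render, and the plain `{ current }` object mimics what useRef returns:

```ts
// A callback that closes over a snapshot keeps returning the old value;
// reading through a mutable ref object always sees the latest one.
function makeReader(snapshot: string): () => string {
  return () => snapshot; // like useCallback with [identity] from an old render
}

let identity = "peer-old";
const identityRef = { current: identity }; // what useRef(identity) hands back

const staleReader = makeReader(identity);
const freshReader = () => identityRef.current; // stable reference, reads at call time

// identity changes (a re-render in React):
identity = "peer-new";
identityRef.current = identity;

staleReader(); // still "peer-old": the changed dependency forced a reconnect
freshReader(); // "peer-new": no dependency chain, so no reconnect
```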
Build-time vs Runtime Signal URL
NEXT_PUBLIC_* env vars are baked into the JS bundle at build time. On another LAN device, the baked-in localhost:4000 is unreachable. Pigeon resolves the signal URL at runtime from window.location.hostname when no env var is explicitly set, so any device on the network automatically reaches the right server.
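The resolution logic can be sketched as a pure function; `resolveSignalUrl` and the port-4000 default are assumptions based on the localhost:4000 mentioned above, not Pigeon's exact source:

```ts
// Resolve the signaling server URL at runtime instead of build time.
function resolveSignalUrl(hostname: string, envUrl?: string): string {
  // An explicitly configured URL (e.g. NEXT_PUBLIC_SIGNAL_URL) always wins
  if (envUrl) return envUrl;
  // Otherwise derive it from wherever the page itself was loaded, so a
  // device that opened http://192.168.1.5:3000 reaches ws://192.168.1.5:4000
  // instead of a baked-in localhost
  return `ws://${hostname}:4000`;
}

// In the browser this would be called as:
//   resolveSignalUrl(window.location.hostname, process.env.NEXT_PUBLIC_SIGNAL_URL)
```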
8. The Stack
- Frontend: Next.js (React, TypeScript)
- Signaling: standalone Node.js WebSocket server using the ws package
- Transport: WebRTC Data Channels, DTLS encrypted, with Google STUN for NAT traversal
- Receiving: File System Access API on Chrome/Edge, blob-URL fallback elsewhere
9. Deployment Architecture
The two components have different runtime requirements and are deployed independently:
- Frontend: Next.js static/SSR build served over HTTPS. The signal URL is injected as an environment variable at build time.
- Signaling server: a persistent Node.js process that runs continuously, separate from the frontend. Auto-deploys from the main branch on every push.
WebRTC peer connections use Google's public STUN servers (stun.l.google.com:19302) for NAT traversal. For deployments behind symmetric NAT, a TURN relay (e.g. coturn) can be added by extending the ICE_SERVERS array in src/lib/webrtc.ts.
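A sketch of what that extension might look like; only the Google STUN entry comes from the text above, while the TURN host and credentials are placeholders:

```ts
// Hypothetical shape of ICE_SERVERS in src/lib/webrtc.ts
type IceServer = { urls: string; username?: string; credential?: string };

const ICE_SERVERS: IceServer[] = [
  // Google's public STUN server for NAT traversal
  { urls: "stun:stun.l.google.com:19302" },
  // A coturn TURN relay for symmetric-NAT deployments (placeholder values)
  {
    urls: "turn:turn.example.com:3478",
    username: "pigeon",
    credential: "change-me",
  },
];

// Passed to the peer connection as:
//   new RTCPeerConnection({ iceServers: ICE_SERVERS })
```

TURN relays the media when no direct path exists, so unlike STUN it carries the file bytes; that is the one deployment variant where traffic transits a server, albeit still DTLS encrypted end to end.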
10. Free to Use
Pigeon is free for the community.
If you build something with this or have questions, reach out on LinkedIn.