From 04c2681009c7830e13f325968c0c1613e7389733 Mon Sep 17 00:00:00 2001 From: Axel Krapotke Date: Mon, 23 Feb 2026 15:45:41 +0100 Subject: [PATCH 01/28] Add E2EE spec for Matrix-based project replication --- specs/e2ee.md | 271 ++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 271 insertions(+) create mode 100644 specs/e2ee.md diff --git a/specs/e2ee.md b/specs/e2ee.md new file mode 100644 index 00000000..2613d148 --- /dev/null +++ b/specs/e2ee.md @@ -0,0 +1,271 @@ +# ODIN E2EE – End-to-End Encryption for Project Replication + +## Overview + +Add end-to-end encryption (E2EE) to ODIN's Matrix-based project replication. Encrypted rooms ensure that feature data exchanged between project participants cannot be read by the homeserver or unauthorized third parties. + +## Scope + +- **In scope:** E2EE for project rooms (feature data replication) +- **Out of scope:** PROJECT-LIST device (space management, invitations, permissions — no sensitive payload) + +## Background + +ODIN uses `@syncpoint/matrix-client-api` to replicate project data via Matrix rooms. Each project creates a dedicated Matrix device (`device_id: projectUUID`). The PROJECT-LIST uses `device_id: 'PROJECT-LIST'` for structural operations (sharing spaces, invitations). + +Currently, all room communication is unencrypted. The Matrix homeserver can read all replicated feature data. + +## Architecture + +### Crypto SDK + +**Package:** `@matrix-org/matrix-sdk-crypto-wasm` (already a dependency) + +The Wasm bindings run natively in Electron's renderer process (Chromium), where IndexedDB is available as a persistent store. No package change required. + +**Why not `matrix-sdk-crypto-nodejs`?** +The native Node.js bindings would require moving all crypto logic to the main process and proxying via IPC. This is a larger architectural change with no clear benefit, since the `MatrixClient` already lives in the renderer. 
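One practical wrinkle: with Webpack, the `electron-renderer` target activates the `node` export condition, which resolves the package to its Node entrypoint (`node.mjs`, incompatible with bundling) instead of the browser/wasm one. A targeted alias (applied later in this patch series) forces the browser entrypoint without disturbing the rest of module resolution:

```javascript
// webpack.config.js (renderer config): point the crypto SDK at its
// browser/wasm entrypoint so Webpack does not pick the 'node' condition.
resolve: {
  alias: {
    '@matrix-org/matrix-sdk-crypto-wasm': path.resolve(
      __dirname, 'node_modules/@matrix-org/matrix-sdk-crypto-wasm/index.mjs'
    )
  }
}
```

A blanket `conditionNames: ['browser', 'import', 'default']` override would also work, but it changes resolution for every dependency (it broke CommonJS modules such as jexl in this codebase), so the per-package alias is the safer choice.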
### Persistence

Each project gets its own IndexedDB-backed crypto store:

```
IndexedDB: 'crypto-<projectUUID>'
```

The `StoreHandle.open()` API creates and manages the IndexedDB database internally. The Wasm library controls the schema — there is no custom storage adapter needed.

### Passphrase Management

The IndexedDB crypto store is encrypted with a per-project passphrase.

**Flow:**

1. **Project is shared (first time):**
   - Generate a random passphrase: `crypto.randomBytes(32).toString('base64')`
   - Encrypt via Electron's `safeStorage.encryptString(passphrase)`
   - Store the encrypted passphrase in the project's LevelDB: `session` sublevel, key `crypto:passphrase`

2. **Project is opened:**
   - Read encrypted passphrase from LevelDB (`session['crypto:passphrase']`)
   - Decrypt via `safeStorage.decryptString(encryptedPassphrase)` (main process, exposed via preload/IPC)
   - Open crypto store: `StoreHandle.open('crypto-' + projectUUID, passphrase)`
   - Initialize OlmMachine: `OlmMachine.initFromStore(userId, deviceId, storeHandle)`

3. **Project is deleted:**
   - Delete the IndexedDB database `crypto-<projectUUID>` (via `indexedDB.deleteDatabase()`)
   - The LevelDB (including encrypted passphrase) is already deleted with the project directory

### IPC for safeStorage

`safeStorage` is a main-process-only API. The decrypted passphrase is passed to the renderer via the existing preload/IPC bridge.
This is acceptable because: + +- The passphrase protects the local IndexedDB store only +- An attacker with renderer access already has access to IndexedDB directly +- The passphrase never leaves the local machine + +**Preload addition:** + +```javascript +// preload: expose decryptPassphrase to renderer +replication: { + decryptPassphrase: (encrypted) => ipcRenderer.invoke('replication:decrypt-passphrase', encrypted) +} +``` + +```javascript +// main process: handle decryption +ipcMain.handle('replication:decrypt-passphrase', (event, encrypted) => { + return safeStorage.decryptString(Buffer.from(encrypted)) +}) +``` + +## Integration with matrix-client-api + +### CryptoManager Changes + +The existing `CryptoManager` in `matrix-client-api/src/crypto.mjs` needs to be updated: + +1. **`initialize()` → `initializeWithStore()`**: Accept a `storePath` (IndexedDB name) and passphrase instead of creating an in-memory OlmMachine. + +2. **Sync integration**: `receiveSyncChanges()` must be called with every `/sync` response to process to-device messages and update device tracking. + +3. **Outgoing request processing**: After each sync cycle, `outgoingRequests()` must be polled and sent via HTTP (key uploads, key queries, key claims, to-device messages). + +4. **Room encryption setup**: When a room has `m.room.encryption` state, call `setRoomEncryption()` to register it with the OlmMachine. + +5. **Encrypt before send**: `command-api` messages must be encrypted via `encryptRoomEvent()` before sending. + +6. **Decrypt on receive**: `timeline-api` must decrypt `m.room.encrypted` events via `decryptRoomEvent()`. + +7. **Key sharing**: When a user joins a project room, room keys must be shared via `shareRoomKey()`. + +### Enabling Encryption on Room Creation + +**Default: E2EE enabled (secure by default, opt-out)** + +When a project is shared, encryption is enabled by default. The user can explicitly opt out during the sharing dialog (e.g. 
a checkbox "Encrypt project data (recommended)" — checked by default).

The opt-out choice is stored in the project's LevelDB (`session['crypto:enabled']`):
- `true` (default) → rooms are created with encryption
- `false` (user opted out) → rooms are created without encryption

When E2EE is enabled, new project rooms (layers) are created with the `m.room.encryption` state event:

```javascript
{
  type: 'm.room.encryption',
  content: {
    algorithm: 'm.megolm.v1.aes-sha2'
  }
}
```

This is done in the `structure-api` when creating rooms for shared projects.

**Note:** Once a room is encrypted, it cannot be un-encrypted (Matrix protocol constraint). The opt-out decision applies at project-share time and affects all subsequently created rooms/layers.

## Data Flow

```
Project Open
  │
  ├─ Read encrypted passphrase from LevelDB
  ├─ Decrypt via safeStorage (IPC to main process)
  ├─ StoreHandle.open('crypto-<projectUUID>', passphrase)
  ├─ OlmMachine.initFromStore(userId, deviceId, storeHandle)
  ├─ Process outgoing requests (key upload)
  │
  ▼
Sync Loop
  │
  ├─ /sync response received
  ├─ receiveSyncChanges(toDevice, deviceLists, otkeyCounts, fallbackKeys)
  ├─ Process outgoing requests (key queries, claims, to-device)
  ├─ Decrypt m.room.encrypted events → pass to timeline-api
  │
  ▼
Send Message
  │
  ├─ shareRoomKey(roomId, userIds) if needed
  ├─ encryptRoomEvent(roomId, eventType, content)
  ├─ Send encrypted payload via command-api
  │
  ▼
Project Close
  │
  └─ OlmMachine is dropped, IndexedDB persists automatically
```

## Project Deletion Cleanup

When a project is deleted, the following must be cleaned up:

1. Project LevelDB directory (existing behavior)
2. IndexedDB database `crypto-<projectUUID>` (new: `indexedDB.deleteDatabase('crypto-' + projectUUID)`)

## Migration

Existing shared projects are unencrypted.
Migration strategy: + +- **New rooms** created after E2EE is enabled will have `m.room.encryption` state → encrypted +- **Existing rooms** remain unencrypted (no retroactive encryption possible in Matrix) +- The `timeline-api` must handle both encrypted and unencrypted events (check for `m.room.encrypted` type) +- Optional: provide a "re-share project" action that creates new encrypted rooms and migrates data + +## Acceptance Criteria + +1. New shared project rooms are created with `m.room.encryption` state event +2. Feature data sent to project rooms is encrypted (Megolm) +3. Feature data received from project rooms is decrypted transparently +4. Crypto keys persist across ODIN restarts (IndexedDB + passphrase in LevelDB) +5. `safeStorage` protects the passphrase at rest +6. Project deletion removes both LevelDB and IndexedDB crypto store +7. Existing unencrypted projects continue to work without modification +8. PROJECT-LIST remains unencrypted + +## Design Decisions + +### Key Verification + +**Decision: TOFU (Trust on First Use) for V1.** + +Devices are trusted on first contact. ODIN projects are typically shared within organizations where the homeserver is trusted. The existing `CryptoManager` already uses `TrustRequirement.Untrusted`, which is de facto TOFU. + +Cross-signing and interactive verification (emoji comparison) can be added as a future enhancement. + +### Key Backup + +**Decision: Server-side key backup is required.** + +ODIN uses delta-based replication: every change produces a separate message. There are no snapshots. When a new device joins a project room, it must **replay the entire message history** to reconstruct the local state. Without access to the Megolm session keys that encrypted those messages, the replay fails and the project is unusable. + +Therefore, encrypted server-side key backup (`m.megolm_backup.v1.curve25519-aes-sha2`) must be implemented: + +1. 
**Backup creation:** When the first E2EE project is shared, generate a backup key and store it encrypted (via `safeStorage`) in the master DB. +2. **Continuous backup:** After each Megolm session rotation, back up the new session keys to the homeserver. +3. **Key restore:** When a new device opens an existing encrypted project, download and decrypt the backed-up keys before starting the history replay. +4. **Recovery key:** Provide the user with a human-readable recovery key (e.g., base58-encoded) for disaster recovery. This can be shown once during setup and stored by the user. + +**Note:** Without key backup, E2EE would effectively break project sharing for any new device — which contradicts the core use case. + +### Megolm Session Rotation + +**Decision: Use library defaults (100 messages or 1 week).** + +ODIN produces many small state updates, so rotation will happen frequently. This is acceptable — the key-sharing overhead is minimal (one `m.room_key` to-device event per rotation per participant), and frequent rotation provides better forward secrecy. Can be tuned later if performance issues arise. + +### Multi-Device and Message Filter + +**Current limitation:** ODIN does not support the same user participating in the same project from multiple physical devices simultaneously. The timeline message filter excludes all events where the current user is the sender. + +**Impact on E2EE:** The crypto layer uses to-device messages (`m.room.encrypted` to-device events) for key exchange. These are **not** room events and are not affected by the timeline filter. However, the following must be verified: + +1. **To-device events** (key sharing, key requests) must **not** be filtered — they are processed in `receiveSyncChanges()` before the timeline filter runs. +2. **Room events from self** are currently filtered out. 
With E2EE, encrypted events from self (`m.room.encrypted` with own sender) must still be filtered the same way as unencrypted self-events — the filter should apply **after** decryption, based on the decrypted sender, not on the encrypted envelope. +3. If multi-device support is added later, the self-filter must be revisited: same user on a different device is a legitimate source of changes. + +### Sync Filter Changes (matrix-client-api/src/project.mjs) + +The Matrix sync filter is applied **server-side**, before events reach the client. With E2EE, the server only sees `m.room.encrypted` as the event type — not the original type (e.g. `io.syncpoint.odin.operation`). This means the current `types` filter would **drop all encrypted events**. + +**Current filter (two locations):** + +1. `content()` (history replay): `types: [ODINv2_MESSAGE_TYPE]` +2. `filterProvider()` (live sync): `types: [M_ROOM_NAME, M_ROOM_POWER_LEVELS, M_SPACE_CHILD, M_ROOM_MEMBER, ODINv2_MESSAGE_TYPE, ODINv2_EXTENSION_MESSAGE_TYPE]` + +**Required change:** Add `m.room.encrypted` to the `types` array when E2EE is active: + +```javascript +// In filterProvider(): +const EVENT_TYPES = [ + M_ROOM_NAME, + M_ROOM_POWER_LEVELS, + M_SPACE_CHILD, + M_ROOM_MEMBER, + ODINv2_MESSAGE_TYPE, + ODINv2_EXTENSION_MESSAGE_TYPE, + 'm.room.encrypted' // NEW: let encrypted events through for client-side decryption +] + +// In content(): +types: [ODINv2_MESSAGE_TYPE, 'm.room.encrypted'] +``` + +**Post-decryption filtering:** After decryption in the `timeline-api`, the original event type is restored. However, `m.room.encrypted` is a catch-all — it could contain any event type, including types not in the original filter list. 
Therefore, a **client-side type filter** must run after decryption to ensure only the expected event types are processed: + +```javascript +// timeline-api.mjs, after decryption block: +// Re-apply type filter on decrypted events +if (filter?.types) { + events[roomId] = roomEvents.filter(event => filter.types.includes(event.type)) +} +``` + +**`not_senders` is unaffected:** The sender is event metadata (not encrypted), so the server-side `not_senders` filter continues to work correctly with encrypted events. + +## Open Questions + +1. **Snapshot mechanism:** A snapshot/checkpoint feature would reduce dependence on full history replay and make key backup less critical for day-to-day usage. Worth considering as a separate feature. +2. **Multi-device:** If same-user multi-device support is planned, the self-message filter and device key management need to be designed accordingly from the start. From 1e3d7570f73638d18981313ed22655f8c4a1a145 Mon Sep 17 00:00:00 2001 From: Axel Krapotke Date: Sun, 15 Mar 2026 18:36:18 +0100 Subject: [PATCH 02/28] feat: E2EE integration with matrix-client-api MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Wire up E2EE in ODIN's replication layer: 1. package.json: matrix-client-api → github:syncpoint/matrix-client-api#feature/e2ee 2. ipc.js: safeStorage IPC handlers - ipc:replication/encryptPassphrase → safeStorage.encryptString() - ipc:replication/decryptPassphrase → safeStorage.decryptString() - Uses OS keychain (DPAPI/Keychain/libsecret) for at-rest protection 3. preload/modules/replication.js: expose encrypt/decryptPassphrase to renderer 4. 
Project-services.js: passphrase lifecycle - Reads crypto:enabled from session LevelDB - If enabled: loads or generates passphrase, encrypts via safeStorage - Passes encryption config to MatrixClient factory: { enabled: true, storeName: 'crypto-', passphrase } - MatrixClient handles CryptoManager init + persistent IndexedDB store --- package.json | 2 +- src/main/ipc.js | 22 ++++++++++++- src/main/preload/modules/replication.js | 6 +++- src/renderer/components/Project-services.js | 36 ++++++++++++++++++--- 4 files changed, 59 insertions(+), 7 deletions(-) diff --git a/package.json b/package.json index 74c00837..0eb76101 100644 --- a/package.json +++ b/package.json @@ -63,7 +63,7 @@ "dependencies": { "@mdi/js": "^7.0.96", "@mdi/react": "^1.6.0", - "@syncpoint/matrix-client-api": "^1.11.1", + "@syncpoint/matrix-client-api": "github:syncpoint/matrix-client-api#feature/e2ee", "@syncpoint/signal": "^1.3.0", "@syncpoint/signs": "^1.1.0", "@syncpoint/wkx": "^0.5.2", diff --git a/src/main/ipc.js b/src/main/ipc.js index 7a950c9e..b670b63b 100644 --- a/src/main/ipc.js +++ b/src/main/ipc.js @@ -1,6 +1,6 @@ import path from 'path' import { promises as fs } from 'fs' -import { app } from 'electron' +import { app, safeStorage } from 'electron' import { leveldb, sessionDB } from '../shared/level' import { initPaths } from './paths' @@ -83,4 +83,24 @@ export const ipc = (ipcMain, projectStore) => { console.error(error) } }) + + // E2EE: encrypt/decrypt passphrases via Electron's safeStorage API. + // safeStorage uses the OS keychain (DPAPI on Windows, Keychain on macOS, libsecret on Linux) + // to protect the passphrase at rest. 
+ + ipcMain.handle('ipc:replication/encryptPassphrase', (_, passphrase) => { + if (!safeStorage.isEncryptionAvailable()) { + throw new Error('safeStorage encryption is not available on this platform') + } + // Returns a Buffer; convert to base64 for storage in LevelDB + return safeStorage.encryptString(passphrase).toString('base64') + }) + + ipcMain.handle('ipc:replication/decryptPassphrase', (_, encryptedBase64) => { + if (!safeStorage.isEncryptionAvailable()) { + throw new Error('safeStorage encryption is not available on this platform') + } + const encrypted = Buffer.from(encryptedBase64, 'base64') + return safeStorage.decryptString(encrypted) + }) } diff --git a/src/main/preload/modules/replication.js b/src/main/preload/modules/replication.js index 17d72b52..23422585 100644 --- a/src/main/preload/modules/replication.js +++ b/src/main/preload/modules/replication.js @@ -6,5 +6,9 @@ module.exports = { getCredentials: (id) => ipcRenderer.invoke('ipc:get:replication/credentials', id), putCredentials: (id, credentials) => ipcRenderer.invoke('ipc:put:replication/credentials', id, credentials), delCredentials: (id) => ipcRenderer.invoke('ipc:del:replication/credentials', id), - putReplicationSeed: (id, seed) => ipcRenderer.invoke('ipc:put:project:replication/seed', id, seed) + putReplicationSeed: (id, seed) => ipcRenderer.invoke('ipc:put:project:replication/seed', id, seed), + + // E2EE: passphrase management via safeStorage (main process only) + encryptPassphrase: (passphrase) => ipcRenderer.invoke('ipc:replication/encryptPassphrase', passphrase), + decryptPassphrase: (encrypted) => ipcRenderer.invoke('ipc:replication/decryptPassphrase', encrypted) } diff --git a/src/renderer/components/Project-services.js b/src/renderer/components/Project-services.js index 2081912c..3f7d76b4 100644 --- a/src/renderer/components/Project-services.js +++ b/src/renderer/components/Project-services.js @@ -131,12 +131,40 @@ export default async projectUUID => { const isRemoteProject = 
projectTags.includes('SHARED') const credentials = await projectStore.getCredentials('default') - services.replicationProvider = (isRemoteProject && credentials) - ? MatrixClient({ + if (isRemoteProject && credentials) { + // Check if E2EE is enabled for this project + const cryptoEnabled = await sessionStore.get('crypto:enabled', false) + let encryption = null + + if (cryptoEnabled) { + let passphrase + const encryptedPassphrase = await sessionStore.get('crypto:passphrase', null) + + if (encryptedPassphrase) { + // Decrypt existing passphrase via safeStorage (main process) + passphrase = await window.odin.replication.decryptPassphrase(encryptedPassphrase) + } else { + // First time: generate random passphrase, encrypt and store it + passphrase = crypto.randomUUID() + crypto.randomUUID() // 72 chars of randomness + const encrypted = await window.odin.replication.encryptPassphrase(passphrase) + await sessionStore.put('crypto:passphrase', encrypted) + } + + encryption = { + enabled: true, + storeName: `crypto-${projectUUID}`, + passphrase + } + } + + services.replicationProvider = MatrixClient({ ...credentials, - device_id: projectUUID + device_id: projectUUID, + ...(encryption && { encryption }) }) - : { disabled: true } + } else { + services.replicationProvider = { disabled: true } + } services.signals = {} services.signals['replication/operational'] = Signal.of(false) From 69296a87aeee6b1f9726d0e6cbf2d2f941b25b73 Mon Sep 17 00:00:00 2001 From: Axel Krapotke Date: Sun, 15 Mar 2026 18:45:30 +0100 Subject: [PATCH 03/28] feat: Share dialog with E2EE opt-out checkbox Secure by default: E2EE checkbox is checked by default when sharing. User can opt out by unchecking 'Encrypt project data (recommended)'. 
Components: - ShareDialog.js: Modal with checkbox + explanatory text - ProjectList.js: Share button opens dialog instead of sharing directly doShare() passes { encrypted: true/false } to replication.share() - ProjectList.css: Dialog overlay + styling E2EE preference storage: - ipc.js: new handler ipc:put:project:crypto/enabled writes to project's session LevelDB - replication.js (preload): setCryptoEnabled() exposed to renderer - Project-services.js reads crypto:enabled on project open Layer encryption inheritance: - toolbar.js: shareLayer() inherits encrypted flag from project (checks if cryptoManager is active on the replicatedProject) --- src/main/ipc.js | 14 ++++ src/main/preload/modules/replication.js | 3 +- .../components/projectlist/ProjectList.css | 78 +++++++++++++++++++ .../components/projectlist/ProjectList.js | 28 +++++-- .../components/projectlist/ShareDialog.js | 57 ++++++++++++++ src/renderer/replication/handler/toolbar.js | 4 +- 6 files changed, 177 insertions(+), 7 deletions(-) create mode 100644 src/renderer/components/projectlist/ShareDialog.js diff --git a/src/main/ipc.js b/src/main/ipc.js index b670b63b..723c3b31 100644 --- a/src/main/ipc.js +++ b/src/main/ipc.js @@ -84,6 +84,20 @@ export const ipc = (ipcMain, projectStore) => { } }) + // E2EE: store crypto:enabled flag in project's session DB + ipcMain.handle('ipc:put:project:crypto/enabled', async (_, id, enabled) => { + try { + const uuid = id.split(':')[1] + const location = path.join(paths.databases, uuid) + const db = leveldb({ location }) + const session = sessionDB(db) + await session.put('crypto:enabled', enabled) + await db.close() + } catch (error) { + console.error('Failed to store crypto:enabled:', error) + } + }) + // E2EE: encrypt/decrypt passphrases via Electron's safeStorage API. // safeStorage uses the OS keychain (DPAPI on Windows, Keychain on macOS, libsecret on Linux) // to protect the passphrase at rest. 
diff --git a/src/main/preload/modules/replication.js b/src/main/preload/modules/replication.js index 23422585..d5ce7cbd 100644 --- a/src/main/preload/modules/replication.js +++ b/src/main/preload/modules/replication.js @@ -10,5 +10,6 @@ module.exports = { // E2EE: passphrase management via safeStorage (main process only) encryptPassphrase: (passphrase) => ipcRenderer.invoke('ipc:replication/encryptPassphrase', passphrase), - decryptPassphrase: (encrypted) => ipcRenderer.invoke('ipc:replication/decryptPassphrase', encrypted) + decryptPassphrase: (encrypted) => ipcRenderer.invoke('ipc:replication/decryptPassphrase', encrypted), + setCryptoEnabled: (id, enabled) => ipcRenderer.invoke('ipc:put:project:crypto/enabled', id, enabled) } diff --git a/src/renderer/components/projectlist/ProjectList.css b/src/renderer/components/projectlist/ProjectList.css index 6aec927f..d14e6c3f 100644 --- a/src/renderer/components/projectlist/ProjectList.css +++ b/src/renderer/components/projectlist/ProjectList.css @@ -62,3 +62,81 @@ padding: 4px; } + +/* ShareDialog */ +.share-dialog-overlay { + position: fixed; + top: 0; + left: 0; + right: 0; + bottom: 0; + background: rgba(0, 0, 0, 0.5); + display: flex; + align-items: center; + justify-content: center; + z-index: 1000; +} + +.share-dialog { + background: var(--color-bg, #fff); + border-radius: 8px; + padding: 24px; + max-width: 420px; + width: 90%; + box-shadow: 0 8px 32px rgba(0, 0, 0, 0.3); +} + +.share-dialog h3 { + margin: 0 0 12px 0; +} + +.share-dialog p { + margin: 8px 0; + line-height: 1.4; +} + +.share-dialog-checkbox { + display: flex; + align-items: center; + gap: 8px; + margin: 16px 0 4px 0; + cursor: pointer; + font-weight: 500; +} + +.share-dialog-checkbox input[type="checkbox"] { + width: 18px; + height: 18px; + cursor: pointer; +} + +.share-dialog-hint { + font-size: 0.85em; + opacity: 0.7; + margin: 4px 0 16px 26px; +} + +.share-dialog-buttons { + display: flex; + justify-content: flex-end; + gap: 8px; + margin-top: 
16px; +} + +.share-dialog-buttons button { + padding: 6px 16px; + border-radius: 4px; + border: 1px solid var(--color-border, #d9d9d9); + cursor: pointer; + font-size: 14px; +} + +.share-dialog-primary { + background: #1890ff; + color: #fff; + border-color: #1890ff !important; +} + +.share-dialog-primary:hover { + background: #40a9ff; +} diff --git a/src/renderer/components/projectlist/ProjectList.js b/src/renderer/components/projectlist/ProjectList.js index 6bedb761..f0166325 100644 --- a/src/renderer/components/projectlist/ProjectList.js +++ b/src/renderer/components/projectlist/ProjectList.js @@ -7,6 +7,7 @@ import { Card } from './Card' import { useList, useServices } from '../hooks' import { militaryFormat } from '../../../shared/datetime' import MemberManagement from './MemberManagement' +import ShareDialog from './ShareDialog' /** * @@ -120,6 +121,7 @@ export const ProjectList = () => { const [replication, setReplication] = React.useState(undefined) const [managedProject, setManagedProject] = React.useState(null) + const [shareProject, setShareProject] = React.useState(null) /* system/OS level notifications */ const notifications = React.useRef(new Set()) @@ -334,6 +336,24 @@ export const ProjectList = () => { const handleFilterChange = React.useCallback(value => setFilter(value), []) const handleCreate = () => projectStore.createProject() + const doShare = async ({ encrypted }) => { + if (!shareProject) return + const project = shareProject + const options = encrypted ? { encrypted: true } : {} + const seed = await replication.share(project.id, project.name, project.description || '', options) + await projectStore.addTag(project.id, 'SHARED') + await projectStore.putReplicationSeed(project.id, seed) + + // Store E2EE preference in the project's session DB (via IPC to main process). + // The crypto:enabled flag is read by Project-services.js when opening the project. 
+ if (encrypted) { + await window.odin.replication.setCryptoEnabled(project.id, true) + } + + setShareProject(null) + fetch(project.id) + } + /* eslint-disable react/prop-types */ const child = React.useCallback(props => { const { entry: project } = props @@ -354,11 +374,8 @@ export const ProjectList = () => { await projectStore.putReplicationSeed(project.id, seed) } - const handleShare = async () => { - const seed = await replication.share(project.id, project.name, project.description || '') - await projectStore.addTag(project.id, 'SHARED') - await projectStore.putReplicationSeed(project.id, seed) - fetch(project.id) + const handleShare = () => { + setShareProject(project) } /* const handleMembers = async () => { @@ -429,6 +446,7 @@ export const ProjectList = () => { return (
      { managedProject && <MemberManagement project={managedProject} onClose={() => setManagedProject(null)}/>}
+      { shareProject && <ShareDialog projectName={shareProject.name} onConfirm={doShare} onCancel={() => setShareProject(null)}/>}
diff --git a/src/renderer/components/projectlist/ShareDialog.js b/src/renderer/components/projectlist/ShareDialog.js
new file mode 100644
--- /dev/null
+++ b/src/renderer/components/projectlist/ShareDialog.js
@@ -0,0 +1,57 @@
+import React from 'react'
+import PropTypes from 'prop-types'
+
+const ShareDialog = ({ projectName, onConfirm, onCancel }) => {
+  const [encrypted, setEncrypted] = React.useState(true)
+
+  const handleConfirm = () => {
+    onConfirm({ encrypted })
+  }
+
+  return (
+    <div className='share-dialog-overlay'>
+      <div className='share-dialog'>
+        <h3>Share Project</h3>
+        <p>Share {projectName} with other users?</p>
+        <p>Once shared, other users can be invited to collaborate.</p>
+        <label className='share-dialog-checkbox'>
+          <input
+            type='checkbox'
+            checked={encrypted}
+            onChange={event => setEncrypted(event.target.checked)}
+          />
+          Encrypt project data (recommended)
+        </label>
+        <div className='share-dialog-hint'>
+          {encrypted
+            ? 'All project data will be end-to-end encrypted. Only project participants can read the data — not even the server.'
+            : 'Project data will be sent without encryption. The server can read all replicated data.'
+          }
+        </div>
+        <div className='share-dialog-buttons'>
+          <button onClick={onCancel}>Cancel</button>
+          <button className='share-dialog-primary' onClick={handleConfirm}>Share</button>
+        </div>
+      </div>
+    </div>
+ ) +} + +ShareDialog.propTypes = { + projectName: PropTypes.string.isRequired, + onConfirm: PropTypes.func.isRequired, + onCancel: PropTypes.func.isRequired +} + +export default ShareDialog diff --git a/src/renderer/replication/handler/toolbar.js b/src/renderer/replication/handler/toolbar.js index 8a495d20..5dd2ec93 100644 --- a/src/renderer/replication/handler/toolbar.js +++ b/src/renderer/replication/handler/toolbar.js @@ -26,7 +26,9 @@ export default ({ store, replicatedProject, CREATOR_ID }) => { } case 'share': { const { name } = await store.value(id) - const layer = await replicatedProject.shareLayer(id, name) + // Inherit encryption setting from the project (set during handleShare in ProjectList) + const cryptoEnabled = replicatedProject.cryptoManager !== null + const layer = await replicatedProject.shareLayer(id, name, '', { encrypted: cryptoEnabled }) if (!layer) { console.log('layer is already shared') return From b4add9b78c6f246a3dba9390432a30f313a5d4a2 Mon Sep 17 00:00:00 2001 From: Axel Krapotke Date: Sun, 15 Mar 2026 19:03:10 +0100 Subject: [PATCH 04/28] fix: resolve browser entrypoint for matrix-sdk-crypto-wasm MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Webpack's electron-renderer target activates the 'node' export condition, which loads node.mjs (uses fileURLToPath — incompatible with bundling). Set conditionNames to ['browser', 'import', 'default'] to force the browser/wasm entrypoint (index.mjs) which runs natively in Chromium. --- webpack.config.js | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/webpack.config.js b/webpack.config.js index 6824cd26..406aecd5 100644 --- a/webpack.config.js +++ b/webpack.config.js @@ -81,6 +81,13 @@ const rendererConfig = (env, argv) => ({ mode: mode(env), stats: 'errors-only', module: { rules: rules() }, + resolve: { + // Force the browser/wasm entrypoint for matrix-sdk-crypto-wasm. 
+ // Without this, Webpack's electron-renderer target resolves the 'node' export condition + // which loads node.mjs (uses fileURLToPath, incompatible with Webpack bundling). + // The Wasm bindings run natively in Chromium's renderer (IndexedDB available). + conditionNames: ['browser', 'import', 'default'] + }, entry: { renderer: ['./index.js'] }, From fae623f9feb21ed00365e56ebef9e16a1899ef96 Mon Sep 17 00:00:00 2001 From: Axel Krapotke Date: Sun, 15 Mar 2026 19:05:43 +0100 Subject: [PATCH 05/28] fix: use alias instead of conditionNames for crypto-wasm conditionNames override broke CommonJS modules (jexl). Use a targeted alias to point @matrix-org/matrix-sdk-crypto-wasm directly at index.mjs (browser/wasm entrypoint). --- webpack.config.js | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/webpack.config.js b/webpack.config.js index 406aecd5..bffb5e4f 100644 --- a/webpack.config.js +++ b/webpack.config.js @@ -86,7 +86,11 @@ const rendererConfig = (env, argv) => ({ // Without this, Webpack's electron-renderer target resolves the 'node' export condition // which loads node.mjs (uses fileURLToPath, incompatible with Webpack bundling). // The Wasm bindings run natively in Chromium's renderer (IndexedDB available). 
- conditionNames: ['browser', 'import', 'default'] + alias: { + '@matrix-org/matrix-sdk-crypto-wasm': path.resolve( + __dirname, 'node_modules/@matrix-org/matrix-sdk-crypto-wasm/index.mjs' + ) + } }, entry: { renderer: ['./index.js'] From fff95a19e53ad8a277d754e5b060eac39e83bcca Mon Sep 17 00:00:00 2001 From: Axel Krapotke Date: Sun, 15 Mar 2026 19:11:23 +0100 Subject: [PATCH 06/28] test: Tuwunel test environment for ODIN E2EE Docker Compose + setup script for local E2EE testing: - Server name: odin.battlefield - Alice (@alice:odin.battlefield / Alice) - Bob (@bob:odin.battlefield / Bob) - http://localhost:8008 Usage: cd test-e2e && ./setup.sh --- test-e2e/docker-compose.yml | 18 +++++++++ test-e2e/setup.sh | 74 +++++++++++++++++++++++++++++++++++++ test-e2e/tuwunel.toml | 13 +++++++ 3 files changed, 105 insertions(+) create mode 100644 test-e2e/docker-compose.yml create mode 100755 test-e2e/setup.sh create mode 100644 test-e2e/tuwunel.toml diff --git a/test-e2e/docker-compose.yml b/test-e2e/docker-compose.yml new file mode 100644 index 00000000..b15c0f9c --- /dev/null +++ b/test-e2e/docker-compose.yml @@ -0,0 +1,18 @@ +services: + homeserver: + image: jevolk/tuwunel:latest + ports: + - "8008:8008" + volumes: + - ./tuwunel.toml:/etc/tuwunel.toml:ro + - tuwunel-data:/var/lib/tuwunel + command: ["-c", "/etc/tuwunel.toml"] + healthcheck: + test: ["CMD", "wget", "-q", "--spider", "http://localhost:8008/_matrix/client/versions"] + interval: 2s + timeout: 5s + retries: 15 + start_period: 5s + +volumes: + tuwunel-data: diff --git a/test-e2e/setup.sh b/test-e2e/setup.sh new file mode 100755 index 00000000..00a92b5b --- /dev/null +++ b/test-e2e/setup.sh @@ -0,0 +1,74 @@ +#!/bin/bash +# Start Tuwunel and register test users for ODIN E2EE testing +# +# Usage: ./setup.sh +# +# After setup: +# - Alice: @alice:odin.battlefield / password: Alice +# - Bob: @bob:odin.battlefield / password: Bob +# - Server: http://localhost:8008 + +set -e +HOMESERVER="http://localhost:8008" + +echo 
"Starting Tuwunel..." +docker compose up -d + +echo "Waiting for homeserver..." +for i in $(seq 1 30); do + if curl -sf "$HOMESERVER/_matrix/client/versions" > /dev/null 2>&1; then + echo "Homeserver ready!" + break + fi + sleep 1 +done + +# Check if homeserver is up +if ! curl -sf "$HOMESERVER/_matrix/client/versions" > /dev/null 2>&1; then + echo "ERROR: Homeserver failed to start" + docker compose logs + exit 1 +fi + +echo "" +echo "Registering Alice..." +ALICE=$(curl -sf -X POST "$HOMESERVER/_matrix/client/v3/register" \ + -H 'Content-Type: application/json' \ + -d '{"username":"alice","password":"Alice","auth":{"type":"m.login.dummy"}}' 2>&1) || true + +if echo "$ALICE" | grep -q "user_id"; then + echo " ✓ @alice:odin.battlefield" +elif echo "$ALICE" | grep -q "M_USER_IN_USE"; then + echo " ✓ @alice:odin.battlefield (already exists)" +else + echo " ✗ Failed: $ALICE" +fi + +echo "Registering Bob..." +BOB=$(curl -sf -X POST "$HOMESERVER/_matrix/client/v3/register" \ + -H 'Content-Type: application/json' \ + -d '{"username":"bob","password":"Bob","auth":{"type":"m.login.dummy"}}' 2>&1) || true + +if echo "$BOB" | grep -q "user_id"; then + echo " ✓ @bob:odin.battlefield" +elif echo "$BOB" | grep -q "M_USER_IN_USE"; then + echo " ✓ @bob:odin.battlefield (already exists)" +else + echo " ✗ Failed: $BOB" +fi + +echo "" +echo "=== ODIN E2EE Test Environment Ready ===" +echo "" +echo " Homeserver: $HOMESERVER" +echo " Server name: odin.battlefield" +echo "" +echo " Alice: @alice:odin.battlefield / Alice" +echo " Bob: @bob:odin.battlefield / Bob" +echo "" +echo " In ODIN's login dialog:" +echo " Homeserver URL: $HOMESERVER" +echo " Username: @alice:odin.battlefield" +echo " Password: Alice" +echo "" +echo "To stop: cd test-e2e && docker compose down -v" diff --git a/test-e2e/tuwunel.toml b/test-e2e/tuwunel.toml new file mode 100644 index 00000000..803da347 --- /dev/null +++ b/test-e2e/tuwunel.toml @@ -0,0 +1,13 @@ +# Tuwunel config for ODIN E2EE testing +# Local only, no 
federation, open registration + +[global] +server_name = "odin.battlefield" +database_path = "/var/lib/tuwunel" +address = ["0.0.0.0"] +port = 8008 + +allow_registration = true +yes_i_am_very_very_sure_i_want_an_open_registration_server_prone_to_abuse = true +allow_federation = false +new_user_displayname_suffix = "" From b0c029c500035741a2f3cef5f469f40b0ae6f76c Mon Sep 17 00:00:00 2001 From: Axel Krapotke Date: Mon, 16 Mar 2026 09:52:43 +0100 Subject: [PATCH 07/28] fix: intercept --user-data-dir to prevent app:// protocol breakage --- src/main/main.js | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/src/main/main.js b/src/main/main.js index 2766a1b7..acd91d76 100644 --- a/src/main/main.js +++ b/src/main/main.js @@ -17,6 +17,17 @@ import * as dotenv from 'dotenv' import SelfUpdate from './SelfUpdate' import { isEnabled } from './environment' +// Override userData path before Electron/Chromium initializes. +// --user-data-dir is a Chromium flag that breaks custom protocol handlers (app://), +// so we intercept it and use Electron's app.setPath() instead. +const userDataArg = process.argv.find(a => a.startsWith('--user-data-dir=')) +if (userDataArg) { + const userDataPath = userDataArg.split('=')[1] + app.setPath('userData', userDataPath) + // Remove the flag from argv so Chromium doesn't process it. + app.commandLine.removeSwitch('user-data-dir') +} + const paths = initPaths(app) /** From 41e782c16fc4c4d5f97d5b37028183b74d7e7fdd Mon Sep 17 00:00:00 2001 From: Axel Krapotke Date: Mon, 16 Mar 2026 09:55:25 +0100 Subject: [PATCH 08/28] fix: remove --user-data-dir from argv before Chromium processes it Chromium's --user-data-dir flag changes the session context, which breaks custom protocol handlers (app://) registered on the default session. We now strip the flag from process.argv and use Electron's app.setPath('userData') instead. 
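
A runnable sketch of the argv handling in this patch, isolated from Electron (`app.setPath` is stubbed here; only the parsing and stripping mirror the actual change):

```javascript
// Stand-in for Electron's app object — just records the path.
const paths = {}
const app = { setPath: (name, value) => { paths[name] = value } }

// Simulated process.argv as Electron would see it.
const argv = ['electron', '.', '--user-data-dir=/tmp/bob2']

const userDataArgIndex = argv.findIndex(a => a.startsWith('--user-data-dir='))
if (userDataArgIndex !== -1) {
  const userDataPath = argv[userDataArgIndex].split('=')[1]
  argv.splice(userDataArgIndex, 1) // strip before Chromium reads argv
  app.setPath('userData', userDataPath)
}

console.log(paths.userData) // /tmp/bob2
console.log(argv) // → [ 'electron', '.' ]
```

Note that `split('=')[1]` truncates a path that itself contains `=`; the patch shares this limitation.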
--- src/main/main.js | 15 ++++++++------- 1 file changed, 8 insertions(+), 7 deletions(-) diff --git a/src/main/main.js b/src/main/main.js index acd91d76..da5ace9b 100644 --- a/src/main/main.js +++ b/src/main/main.js @@ -18,14 +18,15 @@ import SelfUpdate from './SelfUpdate' import { isEnabled } from './environment' // Override userData path before Electron/Chromium initializes. -// --user-data-dir is a Chromium flag that breaks custom protocol handlers (app://), -// so we intercept it and use Electron's app.setPath() instead. -const userDataArg = process.argv.find(a => a.startsWith('--user-data-dir=')) -if (userDataArg) { - const userDataPath = userDataArg.split('=')[1] +// --user-data-dir is a Chromium flag that changes the session context, +// which breaks custom protocol handlers registered on the default session. +// We must remove it from process.argv before Chromium reads it and use +// Electron's app.setPath() instead. +const userDataArgIndex = process.argv.findIndex(a => a.startsWith('--user-data-dir=')) +if (userDataArgIndex !== -1) { + const userDataPath = process.argv[userDataArgIndex].split('=')[1] + process.argv.splice(userDataArgIndex, 1) app.setPath('userData', userDataPath) - // Remove the flag from argv so Chromium doesn't process it. - app.commandLine.removeSwitch('user-data-dir') } const paths = initPaths(app) From 2ec73476dc2be0bc40799f8c7f11fefef26c7de0 Mon Sep 17 00:00:00 2001 From: Axel Krapotke Date: Mon, 16 Mar 2026 10:13:03 +0100 Subject: [PATCH 09/28] revert: remove --user-data-dir workaround (not the actual issue) --- src/main/main.js | 12 ------------ 1 file changed, 12 deletions(-) diff --git a/src/main/main.js b/src/main/main.js index da5ace9b..2766a1b7 100644 --- a/src/main/main.js +++ b/src/main/main.js @@ -17,18 +17,6 @@ import * as dotenv from 'dotenv' import SelfUpdate from './SelfUpdate' import { isEnabled } from './environment' -// Override userData path before Electron/Chromium initializes. 
-// --user-data-dir is a Chromium flag that changes the session context, -// which breaks custom protocol handlers registered on the default session. -// We must remove it from process.argv before Chromium reads it and use -// Electron's app.setPath() instead. -const userDataArgIndex = process.argv.findIndex(a => a.startsWith('--user-data-dir=')) -if (userDataArgIndex !== -1) { - const userDataPath = process.argv[userDataArgIndex].split('=')[1] - process.argv.splice(userDataArgIndex, 1) - app.setPath('userData', userDataPath) -} - const paths = initPaths(app) /** From 524d953319c31cb39182fe5229d46115b55b2f9e Mon Sep 17 00:00:00 2001 From: Axel Krapotke Date: Mon, 16 Mar 2026 11:01:54 +0100 Subject: [PATCH 10/28] fix: persist E2EE setting when joining an encrypted project When Alice joins Bob's encrypted project, the crypto:enabled flag is now written to Alice's session store. Without this, Alice's ODIN instance would not initialize a CryptoManager, causing any layers she creates to be unencrypted. --- src/renderer/components/projectlist/ProjectList.js | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/src/renderer/components/projectlist/ProjectList.js b/src/renderer/components/projectlist/ProjectList.js index f0166325..8951348f 100644 --- a/src/renderer/components/projectlist/ProjectList.js +++ b/src/renderer/components/projectlist/ProjectList.js @@ -372,6 +372,10 @@ export const ProjectList = () => { // createProject requires the id to be a UUID without prefix await projectStore.createProject(project.id.split(':')[1], project.name, ['SHARED']) await projectStore.putReplicationSeed(project.id, seed) + // Persist the project's E2EE setting so Project-services.js picks it up on open. 
+ if (seed.encrypted) { + await window.odin.replication.setCryptoEnabled(project.id, true) + } } const handleShare = () => { From 3fd51cd1d51ec9478e8cd29a67804a4430c94802 Mon Sep 17 00:00:00 2001 From: Axel Krapotke Date: Mon, 16 Mar 2026 11:39:58 +0100 Subject: [PATCH 11/28] docs: E2EE key sharing scenarios and test requirements --- docs/e2ee-key-sharing-scenarios.md | 178 +++++++++++++++++++++++++++++ 1 file changed, 178 insertions(+) create mode 100644 docs/e2ee-key-sharing-scenarios.md diff --git a/docs/e2ee-key-sharing-scenarios.md b/docs/e2ee-key-sharing-scenarios.md new file mode 100644 index 00000000..0607efe3 --- /dev/null +++ b/docs/e2ee-key-sharing-scenarios.md @@ -0,0 +1,178 @@ +# E2EE Key Sharing Scenarios + +This document describes the key sharing scenarios for ODIN's end-to-end encrypted collaboration. It covers when and how Megolm session keys must be distributed to ensure all participants can decrypt layer content. + +## Background + +ODIN uses Matrix E2EE (Megolm) for encrypted layers. Each encrypted message is encrypted with a Megolm session key. To decrypt, a participant needs the corresponding session key. Keys are distributed via `to_device` messages (encrypted per-device with Olm). + +**Critical constraint:** ODIN replays all events in a layer when a user joins ("catch-up"). Without the Megolm session keys for historical events, the replay fails and the layer appears empty or broken. + +--- + +## Scenarios + +### 1. Alice creates an encrypted layer and shares it with Bob + +**Precondition:** Alice creates a layer with content, then shares it (Bob gets invited to the layer room). + +**Flow:** +1. Alice creates layer, adds content (features on the map). +2. Each `io.syncpoint.odin.operation` is encrypted with a Megolm session. Keys are shared with current room members (only Alice at this point). +3. Alice invites Bob to the layer. +4. **At invite time, Alice MUST share all existing Megolm session keys with Bob** via `to_device`. 
Alice is guaranteed to be online (she initiated the invite). +5. Bob accepts the invitation. +6. Bob performs the replay (catches up on all events). He can decrypt because he received the keys at step 4. + +**Status:** ❌ Not implemented. Currently, keys are only shared when sending a new message (`command-api.mjs`), not at invite time. + +**Required fix:** When inviting a user to an encrypted room, proactively share all Megolm session keys for that room with the invited user. + +--- + +### 2. Alice shares an empty layer, Bob joins, then Alice adds content + +**Precondition:** Layer has no content when Bob joins. + +**Flow:** +1. Alice creates and shares an empty layer, invites Bob. +2. Bob accepts. +3. Alice adds content. The Megolm session key is shared with all room members (Alice + Bob) at send time. +4. Bob receives the events and can decrypt. + +**Status:** ✅ Works. `command-api.mjs` already shares keys with all joined members when sending. + +--- + +### 3. Bob joins a layer that already has content from multiple participants + +**Precondition:** Alice and Carol have both contributed encrypted content. Bob is invited later. + +**Flow:** +1. Alice and Carol add content to the layer over time. Multiple Megolm sessions may exist (sessions rotate periodically). +2. Alice invites Bob. +3. **Alice must share ALL Megolm session keys she holds for this room** — including sessions originally created by Carol (Alice received Carol's keys when Carol sent messages). +4. Bob accepts and replays. He can decrypt all historical content. + +**Status:** ❌ Not implemented. + +**Note:** The inviting user shares keys they possess. If Alice somehow doesn't have Carol's keys (e.g., Alice joined after Carol left), those events remain undecryptable for Bob. This is an edge case; in practice, all active participants hold all session keys for events they've received. + +--- + +### 4. Real-time collaboration (steady state) + +**Precondition:** All participants have joined. 
Content is added in real-time. + +**Flow:** +1. Any participant sends an operation. +2. `command-api.mjs` shares the Megolm session key with all room members before encrypting. +3. All participants receive the key via `to_device` and can decrypt. + +**Status:** ✅ Works. + +--- + +### 5. Role change: Alice demotes Bob to READER, then promotes back to CONTRIBUTOR + +**Precondition:** Bob was CONTRIBUTOR, gets demoted to READER, then promoted again. + +**Flow:** +1. Alice changes Bob's power level to READER. +2. Bob's ODIN instance detects `m.room.power_levels` change → layer is restricted (locked). +3. Bob cannot add content (UI enforces restriction). +4. Alice promotes Bob back to CONTRIBUTOR. +5. Bob's layer is unlocked. +6. Bob can add content again. New Megolm session keys are shared normally. + +**Status:** ⚠️ Partially works. Role changes propagate but the layer restriction/locking needs verification with Tuwunel (see power_levels state event delivery issue). + +**Note:** Demotion to READER does not require key revocation — Bob can still decrypt existing content, he just can't write. Megolm doesn't support key revocation; a new session is created when membership changes. + +--- + +### 6. Events between invite and join + +**Precondition:** Alice invites Bob. Before Bob accepts, Alice (or Carol) sends new content. + +**Flow:** +1. Alice invites Bob and shares existing keys (Scenario 1). +2. Alice sends new content. Key is shared with all room members — but Bob hasn't joined yet, so he may not be in the member list. +3. Bob accepts the invite and replays. + +**Status:** ❌ Potential gap. Events sent between invite and join may use a new Megolm session that wasn't shared with Bob. + +**Required fix:** After Bob joins, either: +- Alice detects `m.room.member` join event and re-shares all session keys, OR +- Bob sends a key request (`m.room_key_request`) for any sessions he can't decrypt. 
+ +**Recommended approach:** Combine both — proactive share on invite + reactive share on join for any gaps. + +--- + +### 7. Participant goes offline and comes back + +**Precondition:** Bob is offline while Alice sends content. + +**Flow:** +1. Bob goes offline. +2. Alice sends content. Key sharing via `to_device` is queued server-side. +3. Bob comes back online, syncs, receives `to_device` messages with keys. +4. Bob receives the encrypted events and can decrypt. + +**Status:** ✅ Works (Matrix handles `to_device` delivery when recipient comes online). + +--- + +### 8. New device / fresh user-data-dir + +**Precondition:** Bob opens the project on a new device (or with `--user-data-dir=/tmp/bob2`). + +**Flow:** +1. Bob's new device has no Megolm session keys. +2. Bob syncs and tries to replay layer content → fails, no keys. +3. Bob needs to obtain keys from somewhere. + +**Options:** +- **Server-side Key Backup:** Bob's keys are backed up encrypted. New device restores from backup. Only works for Bob's **own** keys, not keys from other sessions he received. +- **Key forwarding:** Bob's old device (if still active) forwards keys to the new device. +- **Re-share from peers:** Other room members re-share keys when they see Bob's new device. + +**Status:** ❌ Not implemented. This is a future concern (single-device model for now). + +--- + +## Implementation Priority + +| Priority | Scenario | Action | +|----------|----------|--------| +| **P0** | #1, #3 | Share all room session keys on invite | +| **P0** | #6 | Re-share keys on member join (catch gaps) | +| **P1** | #5 | Verify role changes work with Tuwunel | +| **P2** | #8 | Key backup / multi-device (future) | + +## Key Sharing Implementation Notes + +### Where to implement key share on invite + +The invite happens in `structure-api.mjs` → `invite()` which just calls `httpAPI.invite()`. After a successful invite, we need to: + +1. Get all Megolm session keys for the room from `CryptoManager` / `OlmMachine` +2. 
Export the session keys (via `OlmMachine.exportRoomKeys()` or equivalent) +3. Encrypt them for the invited user's devices (Olm) +4. Send via `to_device` + +The `OlmMachine.shareRoomKey()` in `crypto.mjs` already handles the Olm encryption and `to_device` sending. The question is whether it shares **all** historical session keys or only the current session. + +### Matrix SDK Crypto WASM API + +- `shareRoomKey(roomId, userIds)` — shares the **current** Megolm session. May NOT include historical sessions. +- `exportRoomKeys()` — exports all session keys (for backup). Could be used to get historical keys, but they'd need to be re-imported on the recipient side via a custom mechanism. +- **Alternative:** Check if `shareRoomKey` with a freshly tracked user triggers sharing of all known sessions for that room. + +### Testing + +Each scenario above should have a corresponding integration test in `test-e2e/`. Tests should run against Tuwunel (Docker) with two users (Alice, Bob) and verify: +- Events can be decrypted after the described flow +- Layer content replay works completely +- No `Failed to decrypt` errors in the log From ad2b775f6a323f2ec8326a7ae35978e29a3c5b3a Mon Sep 17 00:00:00 2001 From: Axel Krapotke Date: Mon, 16 Mar 2026 12:20:23 +0100 Subject: [PATCH 12/28] feat: share historical Megolm keys after initial layer content post --- src/renderer/replication/handler/toolbar.js | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/src/renderer/replication/handler/toolbar.js b/src/renderer/replication/handler/toolbar.js index 5dd2ec93..3f7d1855 100644 --- a/src/renderer/replication/handler/toolbar.js +++ b/src/renderer/replication/handler/toolbar.js @@ -42,7 +42,11 @@ export default ({ store, replicatedProject, CREATOR_ID }) => { const keys = await store.collectKeys([id], [ID.STYLE, ID.LINK, ID.TAGS, ID.FEATURE]) const tuples = await store.tuples(keys) const operations = tuples.map(([key, value]) => ({ type: 'put', key, value })) - 
replicatedProject.post(id, operations) + await replicatedProject.post(id, operations) + + /* Share Megolm session keys with all project members so they can + decrypt this layer's content even if they join later (offline). */ + await replicatedProject.shareHistoricalKeys(id) break } case 'leave': { From 8a46d5307439447d811786f96b611690eda61166 Mon Sep 17 00:00:00 2001 From: Axel Krapotke Date: Mon, 16 Mar 2026 12:36:18 +0100 Subject: [PATCH 13/28] feat: defer content loading to selfJoined event MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Content is no longer loaded immediately after joinLayer() in the toolbar handler. Instead, the upstream handler's new selfJoined callback loads content when the sync stream confirms our own membership join — ensuring the server has processed the join before we request content. This fixes empty content on initial join, especially with federated servers or slow homeservers like Tuwunel. --- src/renderer/replication/Replication.js | 2 +- src/renderer/replication/handler/toolbar.js | 11 ++++------- src/renderer/replication/handler/upstream.js | 15 ++++++++++++++- 3 files changed, 19 insertions(+), 9 deletions(-) diff --git a/src/renderer/replication/Replication.js b/src/renderer/replication/Replication.js index ad512a6f..b7a3f891 100644 --- a/src/renderer/replication/Replication.js +++ b/src/renderer/replication/Replication.js @@ -140,7 +140,7 @@ const Replication = () => { Start the timeline sync process with the most recent stream token */ const mostRecentStreamToken = await sessionStore.get(KEYS.STREAM_TOKEN, null) - replicatedProject.start(mostRecentStreamToken, upstreamHandler({ sessionStore, setOffline, store, CREATOR_ID })) + replicatedProject.start(mostRecentStreamToken, upstreamHandler({ sessionStore, setOffline, store, CREATOR_ID, replicatedProject })) feedback(null) signals['replication/operational'](true) setInitialized(true) diff --git 
a/src/renderer/replication/handler/toolbar.js b/src/renderer/replication/handler/toolbar.js index 3f7d1855..631c31d4 100644 --- a/src/renderer/replication/handler/toolbar.js +++ b/src/renderer/replication/handler/toolbar.js @@ -14,14 +14,11 @@ export default ({ store, replicatedProject, CREATOR_ID }) => { ], { creatorId: CREATOR_ID }) await store.delete(id) // invitation ID /* - We load the entire existing content. This may be huge, especially - if you join long running rooms. Unless we have a solid solution - for managing snapshots: this is the way. + Content loading is deferred to the selfJoined event handler + (upstream.js). This ensures the server has fully processed + the join before we attempt to load content — avoids empty + responses on federated or slow servers. */ - const operations = await replicatedProject.content(layer.id) - console.log(`Initial sync has ${operations.length} operations`) - await store.import(operations, { creatorId: CREATOR_ID }) - // TODO: check the powerlevel and apply restrictions if required break } case 'share': { diff --git a/src/renderer/replication/handler/upstream.js b/src/renderer/replication/handler/upstream.js index 3b02d54f..d7f734c5 100644 --- a/src/renderer/replication/handler/upstream.js +++ b/src/renderer/replication/handler/upstream.js @@ -1,7 +1,7 @@ import * as ID from '../../ids' import { KEYS, rolesReducer } from '../shared' -export default ({ sessionStore, setOffline, store, CREATOR_ID }) => { +export default ({ sessionStore, setOffline, store, CREATOR_ID, replicatedProject }) => { /* Handling upstream events triggered by the timeline API */ @@ -41,6 +41,19 @@ export default ({ sessionStore, setOffline, store, CREATOR_ID }) => { const rolesOperations = roles.map(l => ({ type: 'put', key: ID.roleId(l.id), value: l.role })) await store.import(rolesOperations, { creatorId: CREATOR_ID }) }, + selfJoined: async ({ roomId, id }) => { + if (!id) return + console.log(`Self joined room ${roomId}, loading initial content 
for layer ${id}`) + try { + const operations = await replicatedProject.content(id) + console.log(`Initial sync (via selfJoined) has ${operations.length} operations`) + if (operations.length > 0) { + await store.import(operations, { creatorId: CREATOR_ID }) + } + } catch (err) { + console.error(`Failed to load content after self-join for ${id}:`, err) + } + }, error: async (error) => { console.error(error) setOffline(true) From 6fbe78f73978c06063dc7df2babe705d80b47ca4 Mon Sep 17 00:00:00 2001 From: Axel Krapotke Date: Mon, 16 Mar 2026 13:23:00 +0100 Subject: [PATCH 14/28] fix: restore content loading after join in toolbar handler MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit The selfJoined stream approach doesn't work due to filter timing. Restore the direct content() call after joinLayer() — the HTTP join is synchronous so the messages endpoint should have content. Historical keys are received via to_device before the join, so decryption should work. --- src/renderer/replication/handler/toolbar.js | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/src/renderer/replication/handler/toolbar.js b/src/renderer/replication/handler/toolbar.js index 631c31d4..9bc189ea 100644 --- a/src/renderer/replication/handler/toolbar.js +++ b/src/renderer/replication/handler/toolbar.js @@ -14,11 +14,13 @@ export default ({ store, replicatedProject, CREATOR_ID }) => { ], { creatorId: CREATOR_ID }) await store.delete(id) // invitation ID /* - Content loading is deferred to the selfJoined event handler - (upstream.js). This ensures the server has fully processed - the join before we attempt to load content — avoids empty - responses on federated or slow servers. + Load the entire existing content. The join HTTP call is synchronous — + once it returns 200, the messages endpoint should have the content. 
*/ + const operations = await replicatedProject.content(layer.id) + console.log(`Initial sync has ${operations.length} operations`) + await store.import(operations, { creatorId: CREATOR_ID }) + // TODO: check the powerlevel and apply restrictions if required break } case 'share': { From e057a2705ebf8cb3d1d7b554dcc898706db9b705 Mon Sep 17 00:00:00 2001 From: Axel Krapotke Date: Mon, 16 Mar 2026 13:28:14 +0100 Subject: [PATCH 15/28] cleanup: remove selfJoined handler and replicatedProject from upstream Content loading is back in toolbar.js. The selfJoined event approach didn't work and left dead code. --- src/renderer/replication/Replication.js | 2 +- src/renderer/replication/handler/upstream.js | 15 +-------------- 2 files changed, 2 insertions(+), 15 deletions(-) diff --git a/src/renderer/replication/Replication.js b/src/renderer/replication/Replication.js index b7a3f891..ad512a6f 100644 --- a/src/renderer/replication/Replication.js +++ b/src/renderer/replication/Replication.js @@ -140,7 +140,7 @@ const Replication = () => { Start the timeline sync process with the most recent stream token */ const mostRecentStreamToken = await sessionStore.get(KEYS.STREAM_TOKEN, null) - replicatedProject.start(mostRecentStreamToken, upstreamHandler({ sessionStore, setOffline, store, CREATOR_ID, replicatedProject })) + replicatedProject.start(mostRecentStreamToken, upstreamHandler({ sessionStore, setOffline, store, CREATOR_ID })) feedback(null) signals['replication/operational'](true) setInitialized(true) diff --git a/src/renderer/replication/handler/upstream.js b/src/renderer/replication/handler/upstream.js index d7f734c5..3b02d54f 100644 --- a/src/renderer/replication/handler/upstream.js +++ b/src/renderer/replication/handler/upstream.js @@ -1,7 +1,7 @@ import * as ID from '../../ids' import { KEYS, rolesReducer } from '../shared' -export default ({ sessionStore, setOffline, store, CREATOR_ID, replicatedProject }) => { +export default ({ sessionStore, setOffline, store, 
CREATOR_ID }) => { /* Handling upstream events triggered by the timeline API */ @@ -41,19 +41,6 @@ export default ({ sessionStore, setOffline, store, CREATOR_ID, replicatedProject const rolesOperations = roles.map(l => ({ type: 'put', key: ID.roleId(l.id), value: l.role })) await store.import(rolesOperations, { creatorId: CREATOR_ID }) }, - selfJoined: async ({ roomId, id }) => { - if (!id) return - console.log(`Self joined room ${roomId}, loading initial content for layer ${id}`) - try { - const operations = await replicatedProject.content(id) - console.log(`Initial sync (via selfJoined) has ${operations.length} operations`) - if (operations.length > 0) { - await store.import(operations, { creatorId: CREATOR_ID }) - } - } catch (err) { - console.error(`Failed to load content after self-join for ${id}:`, err) - } - }, error: async (error) => { console.error(error) setOffline(true) From 2884e10df77c43599a524ad3309f8cb9d76e5460 Mon Sep 17 00:00:00 2001 From: Axel Krapotke Date: Mon, 16 Mar 2026 13:40:14 +0100 Subject: [PATCH 16/28] fix: apply role-based restrictions when joining a layer The join handler stored the role but never called store.restrict() for READER roles. This allowed local edits that the homeserver would reject, causing a gap between local ODIN state and server. Now applies rolesReducer after join, same as the hydrate path. 
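
For reference, `rolesReducer` is imported from `../shared` and its implementation is not part of this series. A hypothetical stand-in, inferred from the call site `[layer].reduce(rolesReducer, { restrict: [], permit: [] })` — the bucketing rule is an assumption, with role names taken from the key-sharing scenarios doc:

```javascript
// Hypothetical rolesReducer: READER layers are bucketed for
// store.restrict(), all other roles for store.permit().
const rolesReducer = (acc, layer) => {
  if (layer.role === 'READER') acc.restrict.push(layer.id)
  else acc.permit.push(layer.id)
  return acc
}

const permissions = [
  { id: 'layer:recon', role: 'READER' },
  { id: 'layer:ops', role: 'CONTRIBUTOR' }
].reduce(rolesReducer, { restrict: [], permit: [] })

console.log(permissions) // { restrict: ['layer:recon'], permit: ['layer:ops'] }
```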
--- src/renderer/replication/handler/toolbar.js | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/src/renderer/replication/handler/toolbar.js b/src/renderer/replication/handler/toolbar.js index 9bc189ea..83d3581a 100644 --- a/src/renderer/replication/handler/toolbar.js +++ b/src/renderer/replication/handler/toolbar.js @@ -1,4 +1,5 @@ import * as ID from '../../ids' +import { rolesReducer } from '../shared' export default ({ store, replicatedProject, CREATOR_ID }) => { return async ({ action, id, parameter }) => { @@ -20,7 +21,11 @@ export default ({ store, replicatedProject, CREATOR_ID }) => { const operations = await replicatedProject.content(layer.id) console.log(`Initial sync has ${operations.length} operations`) await store.import(operations, { creatorId: CREATOR_ID }) - // TODO: check the powerlevel and apply restrictions if required + + // Apply layer restrictions based on the user's role + const permissions = [layer].reduce(rolesReducer, { restrict: [], permit: [] }) + if (permissions.restrict.length > 0) await store.restrict(permissions.restrict) + if (permissions.permit.length > 0) await store.permit(permissions.permit) break } case 'share': { From 73496cb14e1b9e39492e7c498cbddcab3b374a20 Mon Sep 17 00:00:00 2001 From: Axel Krapotke Date: Mon, 16 Mar 2026 13:48:30 +0100 Subject: [PATCH 17/28] fix: restrict imported features when layer role is READER Features imported after join were not restricted even though the layer was marked as restricted. Now applies store.restrict() to all imported operation keys when the layer is restricted. 
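
The restrict-on-import rule can be sketched end-to-end with a stubbed store (assumption: the real store is LevelDB-backed and async; this synchronous in-memory version models only import/restrict):

```javascript
// Minimal in-memory stand-in for the feature store.
const makeStore = () => {
  const data = new Map()
  const restricted = new Set()
  return {
    import: ops => ops.forEach(({ key, value }) => data.set(key, value)),
    restrict: keys => keys.forEach(k => restricted.add(k)),
    isRestricted: k => restricted.has(k)
  }
}

const store = makeStore()
const layerRestricted = true // role READER ⇒ layer marked restricted
const operations = [
  { type: 'put', key: 'feature:1', value: { geometry: 'point' } },
  { type: 'put', key: 'feature:2', value: { geometry: 'line' } }
]

// Import, then restrict every imported key if the layer is restricted —
// the behavior this fix adds to the join path.
store.import(operations)
if (layerRestricted) store.restrict(operations.map(o => o.key))

console.log(store.isRestricted('feature:1')) // true
```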
--- src/renderer/replication/handler/toolbar.js | 15 +++++++++++---- 1 file changed, 11 insertions(+), 4 deletions(-) diff --git a/src/renderer/replication/handler/toolbar.js b/src/renderer/replication/handler/toolbar.js index 83d3581a..e4efbca1 100644 --- a/src/renderer/replication/handler/toolbar.js +++ b/src/renderer/replication/handler/toolbar.js @@ -18,14 +18,21 @@ export default ({ store, replicatedProject, CREATOR_ID }) => { Load the entire existing content. The join HTTP call is synchronous — once it returns 200, the messages endpoint should have the content. */ + // Apply layer restrictions based on the user's role BEFORE importing content, + // so the layer is marked as restricted first. + const permissions = [layer].reduce(rolesReducer, { restrict: [], permit: [] }) + if (permissions.restrict.length > 0) await store.restrict(permissions.restrict) + if (permissions.permit.length > 0) await store.permit(permissions.permit) + const operations = await replicatedProject.content(layer.id) console.log(`Initial sync has ${operations.length} operations`) await store.import(operations, { creatorId: CREATOR_ID }) - // Apply layer restrictions based on the user's role - const permissions = [layer].reduce(rolesReducer, { restrict: [], permit: [] }) - if (permissions.restrict.length > 0) await store.restrict(permissions.restrict) - if (permissions.permit.length > 0) await store.permit(permissions.permit) + // Restrict individual features if the layer is restricted + if (permissions.restrict.length > 0 && operations.length > 0) { + const operationKeys = operations.map(o => o.key) + await store.restrict(operationKeys) + } break } case 'share': { From 038654ef17a9e270aa3b3f2f6bc1959a56449ea2 Mon Sep 17 00:00:00 2001 From: Axel Krapotke Date: Mon, 16 Mar 2026 13:50:53 +0100 Subject: [PATCH 18/28] refactor: unify operation import logic (toolbar + upstream) Extract importOperations() as shared function that handles both import and restriction check. 
Used by: - toolbar.js join handler (initial content load) - upstream.js received handler (stream events) Single code path, no duplication. --- src/renderer/replication/handler/toolbar.js | 27 +++++++++++++------- src/renderer/replication/handler/upstream.js | 8 ++---- 2 files changed, 20 insertions(+), 15 deletions(-) diff --git a/src/renderer/replication/handler/toolbar.js b/src/renderer/replication/handler/toolbar.js index e4efbca1..7098f8f6 100644 --- a/src/renderer/replication/handler/toolbar.js +++ b/src/renderer/replication/handler/toolbar.js @@ -1,6 +1,21 @@ import * as ID from '../../ids' import { rolesReducer } from '../shared' +/** + * Import operations into the store, respecting layer restrictions. + * Used by both the initial content load (join) and the stream handler (received). + */ +const importOperations = async (store, id, operations, CREATOR_ID) => { + const [restricted] = await store.collect(id, [ID.restrictedId]) + await store.import(operations, { creatorId: CREATOR_ID }) + if (restricted) { + const operationKeys = operations.map(o => o.key) + await store.restrict(operationKeys) + } +} + +export { importOperations } + export default ({ store, replicatedProject, CREATOR_ID }) => { return async ({ action, id, parameter }) => { switch (action) { @@ -18,21 +33,15 @@ export default ({ store, replicatedProject, CREATOR_ID }) => { Load the entire existing content. The join HTTP call is synchronous — once it returns 200, the messages endpoint should have the content. */ - // Apply layer restrictions based on the user's role BEFORE importing content, - // so the layer is marked as restricted first. 
+ // Apply layer restrictions based on the user's role const permissions = [layer].reduce(rolesReducer, { restrict: [], permit: [] }) if (permissions.restrict.length > 0) await store.restrict(permissions.restrict) if (permissions.permit.length > 0) await store.permit(permissions.permit) + // Load and import initial content (respects layer restrictions) const operations = await replicatedProject.content(layer.id) console.log(`Initial sync has ${operations.length} operations`) - await store.import(operations, { creatorId: CREATOR_ID }) - - // Restrict individual features if the layer is restricted - if (permissions.restrict.length > 0 && operations.length > 0) { - const operationKeys = operations.map(o => o.key) - await store.restrict(operationKeys) - } + await importOperations(store, layer.id, operations, CREATOR_ID) break } case 'share': { diff --git a/src/renderer/replication/handler/upstream.js b/src/renderer/replication/handler/upstream.js index 3b02d54f..ae389a1c 100644 --- a/src/renderer/replication/handler/upstream.js +++ b/src/renderer/replication/handler/upstream.js @@ -1,5 +1,6 @@ import * as ID from '../../ids' import { KEYS, rolesReducer } from '../shared' +import { importOperations } from './toolbar' export default ({ sessionStore, setOffline, store, CREATOR_ID }) => { /* @@ -16,12 +17,7 @@ export default ({ sessionStore, setOffline, store, CREATOR_ID }) => { await store.import([content], { creatorId: CREATOR_ID }) }, received: async ({ id, operations }) => { - const [restricted] = await store.collect(id, [ID.restrictedId]) - await store.import(operations, { creatorId: CREATOR_ID }) - if (restricted) { - const operationKeys = operations.map(o => o.key) - await store.restrict(operationKeys) - } + await importOperations(store, id, operations, CREATOR_ID) }, renamed: async (renamed) => { /* From 5f931b4e2ad4ecc3f88537551c4f73815396e54f Mon Sep 17 00:00:00 2001 From: Axel Krapotke Date: Mon, 16 Mar 2026 15:20:13 +0100 Subject: [PATCH 19/28] docs: E2EE 
device verification proposal (SAS emoji comparison) --- docs/e2ee-device-verification.md | 227 +++++++++++++++++++++++++++++++ 1 file changed, 227 insertions(+) create mode 100644 docs/e2ee-device-verification.md diff --git a/docs/e2ee-device-verification.md b/docs/e2ee-device-verification.md new file mode 100644 index 00000000..81746dee --- /dev/null +++ b/docs/e2ee-device-verification.md @@ -0,0 +1,227 @@ +# E2EE Device Verification – Proposal + +## Problem + +Currently ODIN uses TOFU (Trust on First Use) for device trust. This means any device claiming to be a project participant is trusted without verification. An attacker who compromises the homeserver or performs a MITM attack could inject a rogue device and intercept encrypted content. + +For military/government use cases, this is insufficient. Users need to verify that they're communicating with the genuine devices of their collaborators. + +## Matrix SAS Verification + +Matrix specifies [Short Authentication String (SAS)](https://spec.matrix.org/v1.12/client-server-api/#short-authentication-string-sas-verification) verification, where two users compare a set of 7 emojis displayed on their screens. If the emojis match, the devices are mutually verified. + +The `@matrix-org/matrix-sdk-crypto-wasm` SDK fully supports this: + +- `VerificationRequest` — initiates/receives a verification flow +- `Sas` — the SAS verification state machine +- `Sas.emoji()` — returns 7 `Emoji` objects (symbol + description) +- `Sas.confirm()` — confirms match, sends `m.key.verification.done` +- `Sas.cancel()` — cancels if emojis don't match + +## Proposed Flow for ODIN + +### When: Verification happens at **project join** time + +1. **Alice** shares an E2EE project with **Bob** (invite). +2. **Bob** accepts the invitation and joins. +3. ODIN detects Bob's new device (via `m.room.member` join + `device_lists.changed` in sync). +4. ODIN shows a **verification prompt** to Alice: _"Bob has joined. Verify Bob's device?"_ +5. 
Alice initiates verification. +6. Both Alice and Bob see **7 emojis** in a modal dialog. +7. They compare emojis out-of-band (voice call, in person, secure messenger). +8. Both confirm → devices are marked as verified. + +### Where in the UI + +- **Verification prompt** appears in the project's sharing panel or as a notification bar at the top of the map view. +- **Emoji comparison dialog** is a modal overlay showing the 7 emojis in a grid, with "They match" and "They don't match" buttons. +- **Verification status** is shown per-member in the sharing properties panel (✅ verified / ⚠️ unverified). + +### What changes if a device is unverified? + +Two possible strategies: + +**Option A: Warn but allow (recommended for V1)** +- Unverified devices get a ⚠️ warning in the sharing panel. +- All operations work normally. +- Users can verify at any time. +- Pragmatic for field use where verification might be deferred. + +**Option B: Block until verified** +- Unverified devices cannot decrypt content. +- Keys are only shared with verified devices. +- More secure, but may break workflows if verification is delayed. + +**Recommendation:** Start with Option A. The WASM SDK already has `TrustRequirement` settings that can switch behavior later. + +## Implementation: matrix-client-api + +### New CryptoManager Methods + +```javascript +/** + * Request verification of another user's device. + * @param {string} userId + * @returns {VerificationRequest} the request object to track the flow + */ +async requestVerification(userId) + +/** + * Accept an incoming verification request. + * @param {string} userId + * @param {string} flowId + * @returns {VerificationRequest} + */ +async acceptVerification(userId, flowId) + +/** + * Start SAS verification on an accepted request. + * @param {VerificationRequest} request + * @returns {Sas} the SAS state machine + */ +async startSas(request) + +/** + * Get the 7 emojis for comparison. 
+ * @param {Sas} sas + * @returns {Array<{symbol: string, description: string}>} + */ +getEmojis(sas) + +/** + * Confirm that emojis match. + * @param {Sas} sas + * @returns {OutgoingRequest[]} requests to send + */ +async confirmSas(sas) + +/** + * Cancel the verification. + * @param {Sas} sas + * @returns {OutgoingRequest|undefined} + */ +cancelSas(sas) + +/** + * Check if a user's device is verified. + * @param {string} userId + * @param {string} deviceId + * @returns {boolean} + */ +async isDeviceVerified(userId, deviceId) + +/** + * Get verification status for all devices of a user. + * @param {string} userId + * @returns {Array<{deviceId: string, verified: boolean}>} + */ +async getDeviceVerificationStatus(userId) +``` + +### Verification Event Handling + +The SAS flow uses `to_device` events: +- `m.key.verification.request` +- `m.key.verification.ready` +- `m.key.verification.start` +- `m.key.verification.accept` +- `m.key.verification.key` +- `m.key.verification.mac` +- `m.key.verification.done` +- `m.key.verification.cancel` + +These are already routed through `receiveSyncChanges()` and handled by the OlmMachine internally. We need to: + +1. **Detect incoming requests** — poll `getVerificationRequests(userId)` after sync. +2. **Surface them to ODIN** — emit events that ODIN can listen to (e.g., `verificationRequested`, `verificationReady`, `emojisAvailable`, `verificationDone`). +3. **Send outgoing requests** — `accept()`, `confirm()`, `cancel()` return `OutgoingRequest` objects that need to be sent via HTTP. 
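The three steps above can be sketched as a small piece of glue code. This is a minimal illustration, not the actual matrix-client-api implementation: it assumes the CryptoManager exposes `getVerificationRequests(userId)` as proposed here and that request objects carry a `flowId`; `makeVerificationPoller` itself is a hypothetical name.

```javascript
// Hedged sketch — illustrative glue between "poll after sync" and
// "surface to ODIN". After each sync cycle, ask the crypto layer for
// pending verification requests and emit each new flow exactly once.
const makeVerificationPoller = (cryptoManager, streamHandler) => {
  const seen = new Set() // flowIds already surfaced to ODIN

  return async userId => {
    const requests = await cryptoManager.getVerificationRequests(userId)
    for (const request of requests) {
      if (seen.has(request.flowId)) continue // already surfaced
      seen.add(request.flowId)
      streamHandler.verificationRequested({ userId, flowId: request.flowId, request })
    }
  }
}
```

Deduplicating on `flowId` matters because the poller runs after every sync cycle, while a given verification request stays pending until it is accepted or cancelled.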
+ +### Event Emission + +Add verification events to the existing stream handler pattern: + +```javascript +// In project.mjs or a new verification-handler.mjs +streamHandler.verificationRequested({ userId, flowId, request }) +streamHandler.emojisAvailable({ userId, flowId, emojis }) +streamHandler.verificationDone({ userId, deviceId }) +streamHandler.verificationCancelled({ userId, flowId, reason }) +``` + +## Implementation: ODIN (Electron) + +### UI Components + +1. **VerificationPrompt** — Notification bar or toast: _"New device detected for Bob. [Verify]"_ +2. **EmojiComparisonDialog** — Modal showing 7 emojis in a grid with confirm/cancel buttons. +3. **MemberVerificationBadge** — ✅/⚠️ icon next to member names in sharing properties. + +### Electron-specific + +- The verification flow involves multiple async steps (request → accept → emojis → confirm). +- Use a React state machine or reducer to track the verification phase. +- Emojis are Unicode — no custom graphics needed. + +## Protocol Sequence + +``` + Alice Homeserver Bob + │ │ │ + │ m.key.verification.request │ │ + ├───────────────────────────────►│ to_device │ + │ ├──────────────────────────────►│ + │ │ │ + │ │ m.key.verification.ready │ + │ to_device │◄──────────────────────────────┤ + │◄───────────────────────────────┤ │ + │ │ │ + │ m.key.verification.start │ │ + ├───────────────────────────────►│ to_device │ + │ ├──────────────────────────────►│ + │ │ │ + │ m.key.verification.accept │ │ + │ to_device │◄──────────────────────────────┤ + │◄───────────────────────────────┤ │ + │ │ │ + │ m.key.verification.key │ (both exchange DH keys) │ + │◄──────────────────────────────►│◄─────────────────────────────►│ + │ │ │ + │ ┌─────────────────────┐ │ ┌─────────────────────┐ │ + │ │ 🐶 🔑 🎵 🌍 🎩 ☂️ 🌻 │ │ │ 🐶 🔑 🎵 🌍 🎩 ☂️ 🌻 │ │ + │ │ "Do these match?" │ │ │ "Do these match?" │ │ + │ └────────┬────────────┘ │ └────────┬────────────┘ │ + │ │ [Yes!] │ │ [Yes!] 
│ + │ │ │ + │ m.key.verification.mac │ (both send MACs) │ + │◄──────────────────────────────►│◄─────────────────────────────►│ + │ │ │ + │ m.key.verification.done │ │ + │◄──────────────────────────────►│◄─────────────────────────────►│ + │ │ │ + │ ✅ Bob's device verified │ ✅ Alice verified │ +``` + +## Scope & Phases + +### Phase 1 (minimal viable) +- CryptoManager methods for SAS verification +- Verification event emission in stream handler +- Basic ODIN UI: prompt + emoji dialog + status badge +- Manual verification (user clicks "Verify" in sharing properties) + +### Phase 2 (polish) +- Auto-prompt on new device detection +- Verification status persists across restarts (already in OlmMachine store) +- Block key sharing to unverified devices (Option B) +- QR code verification as alternative to emoji + +### Phase 3 (advanced) +- Cross-signing (verify user, not individual devices) +- Verification via room events (instead of to_device) for audit trail + +## Open Questions + +1. **When to block?** Should we ever refuse to share keys with unverified devices, or always warn-only? +2. **Verification UI location** — Modal? Side panel? Notification? +3. **Re-verification** — What happens when a user gets a new device? Auto-detect and re-prompt? +4. **Offline verification** — If Bob is offline when Alice initiates, the request waits. Timeout? From 5921bb99fa06ba87a5b5b8fee0b56a3082aba415 Mon Sep 17 00:00:00 2001 From: Axel Krapotke Date: Tue, 17 Mar 2026 12:59:30 +0100 Subject: [PATCH 20/28] Integrate matrix-client-api v2.0.0 with persistent command queue - Update dependency from GitHub branch to npm v2.0.0 - Project: persistent command queue using sublevel of project DB - ProjectList: in-memory command queue (no persistent operations) 327 unit tests passing, webpack build clean, eslint clean. 
--- package-lock.json | 18 ++++++++++++++---- package.json | 2 +- src/renderer/components/Project-services.js | 1 + .../components/ProjectList-services.js | 6 +++++- 4 files changed, 21 insertions(+), 6 deletions(-) diff --git a/package-lock.json b/package-lock.json index 715dcef1..0f29fab4 100644 --- a/package-lock.json +++ b/package-lock.json @@ -12,7 +12,7 @@ "dependencies": { "@mdi/js": "^7.0.96", "@mdi/react": "^1.6.0", - "@syncpoint/matrix-client-api": "^1.11.1", + "@syncpoint/matrix-client-api": "^2.0.0", "@syncpoint/signal": "^1.3.0", "@syncpoint/signs": "^1.1.0", "@syncpoint/wkx": "^0.5.2", @@ -3093,6 +3093,15 @@ "node": ">= 10.0.0" } }, + "node_modules/@matrix-org/matrix-sdk-crypto-wasm": { + "version": "17.1.0", + "resolved": "https://registry.npmjs.org/@matrix-org/matrix-sdk-crypto-wasm/-/matrix-sdk-crypto-wasm-17.1.0.tgz", + "integrity": "sha512-yKPqBvKlHSqkt/UJh+Z+zLKQP8bd19OxokXYXh3VkKbW0+C44nPHsidSwd3SH+RxT+Ck2PDRwVcVXEnUft+/2g==", + "license": "Apache-2.0", + "engines": { + "node": ">= 18" + } + }, "node_modules/@mdi/js": { "version": "7.4.47", "resolved": "https://registry.npmjs.org/@mdi/js/-/js-7.4.47.tgz", @@ -3521,11 +3530,12 @@ } }, "node_modules/@syncpoint/matrix-client-api": { - "version": "1.13.0", - "resolved": "https://registry.npmjs.org/@syncpoint/matrix-client-api/-/matrix-client-api-1.13.0.tgz", - "integrity": "sha512-kb71iJEoe5kq7q/bbeUkaQRblnGh1cVspMlW1bNgnQgA89nly0KH0Oe6NK78Fmvq+gXGtSfnlQF3TbdytIplyw==", + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/@syncpoint/matrix-client-api/-/matrix-client-api-2.0.0.tgz", + "integrity": "sha512-W/M6wL569uB9+CWID6FXHsEN7IUxGTsy/cOBFJHmI0wgZQLruwO5uKcEEBje+Q9rhSSh3UMdlTCzIDcENI9FcA==", "license": "MIT", "dependencies": { + "@matrix-org/matrix-sdk-crypto-wasm": "^17.1.0", "js-base64": "^3.7.7", "ky": "^1.7.2" } diff --git a/package.json b/package.json index d8204a6f..a09e7895 100644 --- a/package.json +++ b/package.json @@ -63,7 +63,7 @@ "dependencies": { "@mdi/js": "^7.0.96", 
"@mdi/react": "^1.6.0", - "@syncpoint/matrix-client-api": "github:syncpoint/matrix-client-api#feature/e2ee", + "@syncpoint/matrix-client-api": "^2.0.0", "@syncpoint/signal": "^1.3.0", "@syncpoint/signs": "^1.1.0", "@syncpoint/wkx": "^0.5.2", diff --git a/src/renderer/components/Project-services.js b/src/renderer/components/Project-services.js index 3f7d76b4..48e24021 100644 --- a/src/renderer/components/Project-services.js +++ b/src/renderer/components/Project-services.js @@ -160,6 +160,7 @@ export default async projectUUID => { services.replicationProvider = MatrixClient({ ...credentials, device_id: projectUUID, + db: L.leveldb({ up: db, encoding: 'json', prefix: 'command-queue' }), ...(encryption && { encryption }) }) } else { diff --git a/src/renderer/components/ProjectList-services.js b/src/renderer/components/ProjectList-services.js index dcd9acdb..deccd0b6 100644 --- a/src/renderer/components/ProjectList-services.js +++ b/src/renderer/components/ProjectList-services.js @@ -1,3 +1,6 @@ +import levelup from 'levelup' +import memdown from 'memdown' +import sublevel from 'subleveldown' import ProjectStore from '../store/ProjectStore' import { Selection } from '../Selection' import { MatrixClient } from '@syncpoint/matrix-client-api' @@ -13,7 +16,8 @@ export default async () => { services.replicationProvider = credentials ? MatrixClient({ ...credentials, - device_id: 'PROJECT-LIST' + device_id: 'PROJECT-LIST', + db: sublevel(levelup(memdown()), 'command-queue', { valueEncoding: 'json' }) }) : { disabled: true From fe757694210edb7d126123df219ab6ff93092ce2 Mon Sep 17 00:00:00 2001 From: Axel Krapotke Date: Tue, 17 Mar 2026 16:05:54 +0100 Subject: [PATCH 21/28] Fix OSD: render all grid cells including B column MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit The B column cells were hardcoded empty — state.B1, state.B2, state.B3 were never rendered. 
This caused the replication offline feedback ('Looks like we are offline!') to be silently dropped since it targets cell B2. --- src/renderer/components/OSD.js | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/src/renderer/components/OSD.js b/src/renderer/components/OSD.js index 97154616..5f954f97 100644 --- a/src/renderer/components/OSD.js +++ b/src/renderer/components/OSD.js @@ -25,13 +25,13 @@ export const OSD = () => { return
         <div>{ state.A1 }</div>
-        <div/>
+        <div>{ state.B1 }</div>
         <div>{ state.C1 }</div>
         <div>{ state.A2 }</div>
-        <div/>
+        <div>{ state.B2 }</div>
         <div>{ state.C2 }</div>
-        <div/>
-        <div/>
+        <div>{ state.A3 }</div>
+        <div>{ state.B3 }</div>
         <div>{ state.C3 }</div>
} From 65869fa97fb93f592473c0a06460d0923e982b5f Mon Sep 17 00:00:00 2001 From: Axel Krapotke Date: Tue, 17 Mar 2026 17:12:25 +0100 Subject: [PATCH 22/28] Add temporary debug logging for join content timing investigation --- src/renderer/replication/handler/toolbar.js | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/src/renderer/replication/handler/toolbar.js b/src/renderer/replication/handler/toolbar.js index 7098f8f6..40b233bc 100644 --- a/src/renderer/replication/handler/toolbar.js +++ b/src/renderer/replication/handler/toolbar.js @@ -39,6 +39,18 @@ export default ({ store, replicatedProject, CREATOR_ID }) => { if (permissions.permit.length > 0) await store.permit(permissions.permit) // Load and import initial content (respects layer restrictions) + console.log(`DEBUG: layer.id = "${layer.id}"`) + console.log(`DEBUG: idMapping for layer.id = "${replicatedProject.idMapping.get(layer.id)}"`) + const upstreamId = replicatedProject.idMapping.get(layer.id) + if (upstreamId) { + // Raw fetch to compare with our API + const creds = replicatedProject.timelineAPI.credentials() + const rawUrl = `${creds.home_server_url}/_matrix/client/v3/rooms/${encodeURIComponent(upstreamId)}/messages?dir=f&limit=100` + const rawRes = await fetch(rawUrl, { headers: { Authorization: `Bearer ${creds.access_token}` } }) + const rawData = await rawRes.json() + console.log(`DEBUG: raw /messages returned ${rawData.chunk?.length || 0} events`) + rawData.chunk?.forEach(e => console.log(`DEBUG: → type=${e.type} sender=${e.sender}`)) + } const operations = await replicatedProject.content(layer.id) console.log(`Initial sync has ${operations.length} operations`) await importOperations(store, layer.id, operations, CREATOR_ID) From f1019bbd71855e6e0016ad06d67b834764e1b118 Mon Sep 17 00:00:00 2001 From: Axel Krapotke Date: Tue, 17 Mar 2026 17:27:47 +0100 Subject: [PATCH 23/28] Workaround: wait for key import before loading content after join After joining an E2EE layer, historical keys 
arrive via the sync stream's to_device events in parallel. Without waiting, content() tries to decrypt before the keys are imported, resulting in 0 operations on first join. This is a temporary workaround (1s delay). A proper fix should use a dedicated sync cycle after join to ensure keys are available. --- src/renderer/replication/handler/toolbar.js | 20 ++++++++------------ 1 file changed, 8 insertions(+), 12 deletions(-) diff --git a/src/renderer/replication/handler/toolbar.js b/src/renderer/replication/handler/toolbar.js index 40b233bc..72b70c2e 100644 --- a/src/renderer/replication/handler/toolbar.js +++ b/src/renderer/replication/handler/toolbar.js @@ -38,19 +38,15 @@ export default ({ store, replicatedProject, CREATOR_ID }) => { if (permissions.restrict.length > 0) await store.restrict(permissions.restrict) if (permissions.permit.length > 0) await store.permit(permissions.permit) - // Load and import initial content (respects layer restrictions) - console.log(`DEBUG: layer.id = "${layer.id}"`) - console.log(`DEBUG: idMapping for layer.id = "${replicatedProject.idMapping.get(layer.id)}"`) - const upstreamId = replicatedProject.idMapping.get(layer.id) - if (upstreamId) { - // Raw fetch to compare with our API - const creds = replicatedProject.timelineAPI.credentials() - const rawUrl = `${creds.home_server_url}/_matrix/client/v3/rooms/${encodeURIComponent(upstreamId)}/messages?dir=f&limit=100` - const rawRes = await fetch(rawUrl, { headers: { Authorization: `Bearer ${creds.access_token}` } }) - const rawData = await rawRes.json() - console.log(`DEBUG: raw /messages returned ${rawData.chunk?.length || 0} events`) - rawData.chunk?.forEach(e => console.log(`DEBUG: → type=${e.type} sender=${e.sender}`)) + // When E2EE is active, historical keys arrive via the sync stream's + // to_device events (receiveSyncChanges → importRoomKeys). The stream + // runs in parallel, so we need to give it time to process the keys + // before attempting to decrypt content. 
+ if (replicatedProject.cryptoManager) { + await new Promise(resolve => setTimeout(resolve, 1000)) } + + // Load and import initial content (respects layer restrictions) const operations = await replicatedProject.content(layer.id) console.log(`Initial sync has ${operations.length} operations`) await importOperations(store, layer.id, operations, CREATOR_ID) From 32f331bf8fe5c3af4ff68be6e213fa1a776c9a15 Mon Sep 17 00:00:00 2001 From: Axel Krapotke Date: Tue, 17 Mar 2026 17:34:32 +0100 Subject: [PATCH 24/28] Replace timeout workaround with decrypt retry in matrix-client-api Remove the 1s setTimeout workaround and use the fix/decrypt-retry branch of matrix-client-api instead. The library now retries decryption up to 5 times (500ms intervals) when keys are not yet available, which handles the race condition properly. --- package-lock.json | 5 ++--- package.json | 2 +- src/renderer/replication/handler/toolbar.js | 8 -------- 3 files changed, 3 insertions(+), 12 deletions(-) diff --git a/package-lock.json b/package-lock.json index 0f29fab4..7af0e8a9 100644 --- a/package-lock.json +++ b/package-lock.json @@ -12,7 +12,7 @@ "dependencies": { "@mdi/js": "^7.0.96", "@mdi/react": "^1.6.0", - "@syncpoint/matrix-client-api": "^2.0.0", + "@syncpoint/matrix-client-api": "github:syncpoint/matrix-client-api#fix/decrypt-retry", "@syncpoint/signal": "^1.3.0", "@syncpoint/signs": "^1.1.0", "@syncpoint/wkx": "^0.5.2", @@ -3531,8 +3531,7 @@ }, "node_modules/@syncpoint/matrix-client-api": { "version": "2.0.0", - "resolved": "https://registry.npmjs.org/@syncpoint/matrix-client-api/-/matrix-client-api-2.0.0.tgz", - "integrity": "sha512-W/M6wL569uB9+CWID6FXHsEN7IUxGTsy/cOBFJHmI0wgZQLruwO5uKcEEBje+Q9rhSSh3UMdlTCzIDcENI9FcA==", + "resolved": "git+ssh://git@github.com/syncpoint/matrix-client-api.git#7f16d27190ecf58ba3f5436e922584fdf8632c24", "license": "MIT", "dependencies": { "@matrix-org/matrix-sdk-crypto-wasm": "^17.1.0", diff --git a/package.json b/package.json index a09e7895..0cade9a0 
100644 --- a/package.json +++ b/package.json @@ -63,7 +63,7 @@ "dependencies": { "@mdi/js": "^7.0.96", "@mdi/react": "^1.6.0", - "@syncpoint/matrix-client-api": "^2.0.0", + "@syncpoint/matrix-client-api": "github:syncpoint/matrix-client-api#fix/decrypt-retry", "@syncpoint/signal": "^1.3.0", "@syncpoint/signs": "^1.1.0", "@syncpoint/wkx": "^0.5.2", diff --git a/src/renderer/replication/handler/toolbar.js b/src/renderer/replication/handler/toolbar.js index 72b70c2e..7098f8f6 100644 --- a/src/renderer/replication/handler/toolbar.js +++ b/src/renderer/replication/handler/toolbar.js @@ -38,14 +38,6 @@ export default ({ store, replicatedProject, CREATOR_ID }) => { if (permissions.restrict.length > 0) await store.restrict(permissions.restrict) if (permissions.permit.length > 0) await store.permit(permissions.permit) - // When E2EE is active, historical keys arrive via the sync stream's - // to_device events (receiveSyncChanges → importRoomKeys). The stream - // runs in parallel, so we need to give it time to process the keys - // before attempting to decrypt content. 
- if (replicatedProject.cryptoManager) { - await new Promise(resolve => setTimeout(resolve, 1000)) - } - // Load and import initial content (respects layer restrictions) const operations = await replicatedProject.content(layer.id) console.log(`Initial sync has ${operations.length} operations`) From ab094fa9ac8fe7490e0e7eef8d1babd8c0c4a759 Mon Sep 17 00:00:00 2001 From: Axel Krapotke Date: Wed, 18 Mar 2026 12:59:58 +0100 Subject: [PATCH 25/28] Use sync-gated content after join, remove immediate content() call - Remove content() call after joinLayer() in toolbar handler - Content now arrives via the received() stream handler after matrix-client-api detects the room in the next sync cycle - Permissions are still applied immediately after join (before content arrives) - Point to matrix-client-api feature/sync-gated-content branch --- package.json | 2 +- src/renderer/replication/handler/toolbar.js | 8 ++++---- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/package.json b/package.json index 0cade9a0..df7ac837 100644 --- a/package.json +++ b/package.json @@ -63,7 +63,7 @@ "dependencies": { "@mdi/js": "^7.0.96", "@mdi/react": "^1.6.0", - "@syncpoint/matrix-client-api": "github:syncpoint/matrix-client-api#fix/decrypt-retry", + "@syncpoint/matrix-client-api": "github:syncpoint/matrix-client-api#feature/sync-gated-content", "@syncpoint/signal": "^1.3.0", "@syncpoint/signs": "^1.1.0", "@syncpoint/wkx": "^0.5.2", diff --git a/src/renderer/replication/handler/toolbar.js b/src/renderer/replication/handler/toolbar.js index 7098f8f6..1f335bdd 100644 --- a/src/renderer/replication/handler/toolbar.js +++ b/src/renderer/replication/handler/toolbar.js @@ -38,10 +38,10 @@ export default ({ store, replicatedProject, CREATOR_ID }) => { if (permissions.restrict.length > 0) await store.restrict(permissions.restrict) if (permissions.permit.length > 0) await store.permit(permissions.permit) - // Load and import initial content (respects layer restrictions) - const operations = 
await replicatedProject.content(layer.id) - console.log(`Initial sync has ${operations.length} operations`) - await importOperations(store, layer.id, operations, CREATOR_ID) + // Content is NOT fetched here. It will arrive via the sync-gated + // mechanism in matrix-client-api: Project.start() detects the room + // in the next sync cycle and delivers operations through the + // received() stream handler. break } case 'share': { From b70206d6befa6d369f46f4d8305e3a7defee4910 Mon Sep 17 00:00:00 2001 From: Axel Krapotke Date: Wed, 18 Mar 2026 14:27:31 +0100 Subject: [PATCH 26/28] Use @syncpoint/matrix-client-api 2.1.0 from npm --- package-lock.json | 7 ++++--- package.json | 2 +- 2 files changed, 5 insertions(+), 4 deletions(-) diff --git a/package-lock.json b/package-lock.json index 7af0e8a9..5099ac3a 100644 --- a/package-lock.json +++ b/package-lock.json @@ -12,7 +12,7 @@ "dependencies": { "@mdi/js": "^7.0.96", "@mdi/react": "^1.6.0", - "@syncpoint/matrix-client-api": "github:syncpoint/matrix-client-api#fix/decrypt-retry", + "@syncpoint/matrix-client-api": "^2.1.0", "@syncpoint/signal": "^1.3.0", "@syncpoint/signs": "^1.1.0", "@syncpoint/wkx": "^0.5.2", @@ -3530,8 +3530,9 @@ } }, "node_modules/@syncpoint/matrix-client-api": { - "version": "2.0.0", - "resolved": "git+ssh://git@github.com/syncpoint/matrix-client-api.git#7f16d27190ecf58ba3f5436e922584fdf8632c24", + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/@syncpoint/matrix-client-api/-/matrix-client-api-2.1.0.tgz", + "integrity": "sha512-u6UGjqH/Ph6cezNo289LyV4yBJixzMy5QIEanZflJeQkCDedu7lVqsv5fYGo+R8+R0C3+VsMcgGpuZqiR490bA==", "license": "MIT", "dependencies": { "@matrix-org/matrix-sdk-crypto-wasm": "^17.1.0", diff --git a/package.json b/package.json index df7ac837..907f6514 100644 --- a/package.json +++ b/package.json @@ -63,7 +63,7 @@ "dependencies": { "@mdi/js": "^7.0.96", "@mdi/react": "^1.6.0", - "@syncpoint/matrix-client-api": "github:syncpoint/matrix-client-api#feature/sync-gated-content", + 
"@syncpoint/matrix-client-api": "^2.1.0", "@syncpoint/signal": "^1.3.0", "@syncpoint/signs": "^1.1.0", "@syncpoint/wkx": "^0.5.2", From 6a1c9539985d75918ff0d27d9b76eceec014eb77 Mon Sep 17 00:00:00 2001 From: Axel Krapotke Date: Wed, 18 Mar 2026 14:39:21 +0100 Subject: [PATCH 27/28] Use matrix-client-api main with joinedRoomIds fix --- package-lock.json | 5 ++--- package.json | 2 +- 2 files changed, 3 insertions(+), 4 deletions(-) diff --git a/package-lock.json b/package-lock.json index 5099ac3a..e2902210 100644 --- a/package-lock.json +++ b/package-lock.json @@ -12,7 +12,7 @@ "dependencies": { "@mdi/js": "^7.0.96", "@mdi/react": "^1.6.0", - "@syncpoint/matrix-client-api": "^2.1.0", + "@syncpoint/matrix-client-api": "github:syncpoint/matrix-client-api#main", "@syncpoint/signal": "^1.3.0", "@syncpoint/signs": "^1.1.0", "@syncpoint/wkx": "^0.5.2", @@ -3531,8 +3531,7 @@ }, "node_modules/@syncpoint/matrix-client-api": { "version": "2.1.0", - "resolved": "https://registry.npmjs.org/@syncpoint/matrix-client-api/-/matrix-client-api-2.1.0.tgz", - "integrity": "sha512-u6UGjqH/Ph6cezNo289LyV4yBJixzMy5QIEanZflJeQkCDedu7lVqsv5fYGo+R8+R0C3+VsMcgGpuZqiR490bA==", + "resolved": "git+ssh://git@github.com/syncpoint/matrix-client-api.git#46e85c1329b10257a3173e63a79ccd4b5793b75a", "license": "MIT", "dependencies": { "@matrix-org/matrix-sdk-crypto-wasm": "^17.1.0", diff --git a/package.json b/package.json index 907f6514..f8177017 100644 --- a/package.json +++ b/package.json @@ -63,7 +63,7 @@ "dependencies": { "@mdi/js": "^7.0.96", "@mdi/react": "^1.6.0", - "@syncpoint/matrix-client-api": "^2.1.0", + "@syncpoint/matrix-client-api": "github:syncpoint/matrix-client-api#main", "@syncpoint/signal": "^1.3.0", "@syncpoint/signs": "^1.1.0", "@syncpoint/wkx": "^0.5.2", From 15179fcefaebebadab2179822b892e592654398c Mon Sep 17 00:00:00 2001 From: Axel Krapotke Date: Mon, 23 Mar 2026 14:03:59 +0100 Subject: [PATCH 28/28] chore: update @syncpoint/matrix-client-api to 2.2.0 Includes 
sync-restart-on-join fixes: - Awaitable restartSync() to prevent race condition - Backward pagination with prev_batch for federation backfill - Content fetch retry across sync cycles for key delivery - Increased retry window (20x10s) for slow federation servers --- package-lock.json | 7 ++++--- package.json | 2 +- 2 files changed, 5 insertions(+), 4 deletions(-) diff --git a/package-lock.json b/package-lock.json index e2902210..f8cc18c1 100644 --- a/package-lock.json +++ b/package-lock.json @@ -12,7 +12,7 @@ "dependencies": { "@mdi/js": "^7.0.96", "@mdi/react": "^1.6.0", - "@syncpoint/matrix-client-api": "github:syncpoint/matrix-client-api#main", + "@syncpoint/matrix-client-api": "^2.2.0", "@syncpoint/signal": "^1.3.0", "@syncpoint/signs": "^1.1.0", "@syncpoint/wkx": "^0.5.2", @@ -3530,8 +3530,9 @@ } }, "node_modules/@syncpoint/matrix-client-api": { - "version": "2.1.0", - "resolved": "git+ssh://git@github.com/syncpoint/matrix-client-api.git#46e85c1329b10257a3173e63a79ccd4b5793b75a", + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/@syncpoint/matrix-client-api/-/matrix-client-api-2.2.0.tgz", + "integrity": "sha512-GYf81z8D37glOpmbSQy/457tK1HGCjtQPkY3tTsv9LTdxy3LFpB+pyOEPAI7vr6FwU1fp6KE0eM54gXG9EMNhQ==", "license": "MIT", "dependencies": { "@matrix-org/matrix-sdk-crypto-wasm": "^17.1.0", diff --git a/package.json b/package.json index f8177017..ce04c8a5 100644 --- a/package.json +++ b/package.json @@ -63,7 +63,7 @@ "dependencies": { "@mdi/js": "^7.0.96", "@mdi/react": "^1.6.0", - "@syncpoint/matrix-client-api": "github:syncpoint/matrix-client-api#main", + "@syncpoint/matrix-client-api": "^2.2.0", "@syncpoint/signal": "^1.3.0", "@syncpoint/signs": "^1.1.0", "@syncpoint/wkx": "^0.5.2",
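
The retry behavior the 2.2.0 release notes describe ("content fetch retry across sync cycles", "increased retry window (20x10s)") can be illustrated with a short sketch. This is not the library's actual code: `fetchContentWithRetry` and its parameters are hypothetical, assuming a `fetchContent` thunk that yields an empty array while room keys are still in flight.

```javascript
// Illustrative only — the real retry logic lives inside matrix-client-api 2.2.0.
// While historical keys are still being delivered via to_device events,
// decryption yields no operations, so we wait one interval and try again,
// up to `attempts` tries (default mirrors the 20x10s window).
const fetchContentWithRetry = async (fetchContent, {
  attempts = 20,
  intervalMs = 10000,
  sleep = ms => new Promise(resolve => setTimeout(resolve, ms))
} = {}) => {
  for (let attempt = 1; attempt <= attempts; attempt++) {
    const operations = await fetchContent()
    if (operations.length > 0) return operations // keys arrived, content decrypted
    if (attempt < attempts) await sleep(intervalMs) // give the next sync cycle time
  }
  return [] // retry window exhausted without usable keys
}
```

Compared to the earlier fixed 1s delay (patch 23) and the 5x500ms decrypt retry (patch 24), bounding the wait by sync cycles rather than a single timeout is what makes slow federation backfill work.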