CHANGELOG.md (5 additions, 0 deletions)
# UNRELEASED

### Improve frontend canister sync logic

Previously, committing frontend canister changes happened in multiple batches, defined by simple heuristics intended to keep each batch below the ingress message size limit.
Now the ingress message size limit is respected explicitly, and a limit on the total content size per batch is enforced as well, since all content in a batch now gets hashed in the canister.
```rust
/// Batches are created based on three conditions (any of which triggers a new batch):
/// 1. 500 operations reached - generally respected limit to avoid too much cert tree work
/// 2. 1.5 MB of header map data reached - headers are the largest part of the ingress message size
/// 3. 100 MB of total chunk size reached - the full asset content gets hashed in the commit message
async fn create_commit_batches<'a>(
    operations: Vec<BatchOperationKind>,
    chunk_uploader: &ChunkUploader<'a>,
) -> Vec<Vec<BatchOperationKind>> {
    const MAX_OPERATIONS_PER_BATCH: usize = 500; // empirically this works well enough
    const MAX_HEADER_MAP_SIZE: usize = 1_500_000; // 1.5 MB leaves plenty of room for other data that we do not calculate precisely
    const MAX_ASSET_CONTENT_SIZE: usize = 100_000_000; // 100 MB is ~20% of how much data we can hash in a single message: 40B instructions per update call, at a measured best case of 80 instructions per byte hashed -> ~500 MB limit

    let mut batches = Vec::new();
    let mut current_batch = Vec::new();
    let mut operation_count = 0;
    let mut header_map_size = 0;
    let mut content_size = 0;

    for operation in operations {
        let operation_header_size = calculate_header_size(&operation);
        let operation_chunk_size = calculate_content_size(&operation, chunk_uploader).await;

        // Check if adding this operation would exceed any limits
        let would_exceed_operation_limit = operation_count >= MAX_OPERATIONS_PER_BATCH;
```
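The diff above cuts off mid-loop, so here is a minimal self-contained sketch of the same multi-limit batching rule. The `Op` struct, the `create_batches` name, and the per-operation size fields are hypothetical stand-ins for the real `BatchOperationKind` and the `calculate_header_size`/`calculate_content_size` helpers; the three constants mirror the limits documented in the doc comment.

```rust
// Hypothetical stand-in for BatchOperationKind with its sizes precomputed.
#[derive(Debug)]
struct Op {
    header_size: usize,
    content_size: usize,
}

// Sketch of the batching rule: start a new batch whenever adding the next
// operation would exceed any of the three limits.
fn create_batches(operations: Vec<Op>) -> Vec<Vec<Op>> {
    const MAX_OPERATIONS_PER_BATCH: usize = 500;
    const MAX_HEADER_MAP_SIZE: usize = 1_500_000; // 1.5 MB
    const MAX_ASSET_CONTENT_SIZE: usize = 100_000_000; // 100 MB

    let mut batches = Vec::new();
    let mut current: Vec<Op> = Vec::new();
    let mut header_size = 0usize;
    let mut content_size = 0usize;

    for op in operations {
        let would_exceed = current.len() >= MAX_OPERATIONS_PER_BATCH
            || header_size + op.header_size > MAX_HEADER_MAP_SIZE
            || content_size + op.content_size > MAX_ASSET_CONTENT_SIZE;
        if would_exceed && !current.is_empty() {
            // Close the current batch and start a fresh one.
            batches.push(std::mem::take(&mut current));
            header_size = 0;
            content_size = 0;
        }
        header_size += op.header_size;
        content_size += op.content_size;
        current.push(op);
    }
    if !current.is_empty() {
        batches.push(current);
    }
    batches
}
```

For example, three operations with 1 MB of headers each split into three single-operation batches, because any two together would exceed the 1.5 MB header limit.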