# fiff
Read and write OME-TIFF files through a zarrita.js Zarr store following the OME-Zarr v0.5 data model.
## Features

- **Lazy HTTP range requests** -- Chunk data is fetched on demand via geotiff.js `readRasters()`; no full file download needed
- **OME-XML support** -- Parses OME-XML metadata for dimensions, channels, physical units, and all 6 `DimensionOrder` permutations
- **Pyramid detection** -- Automatically discovers multi-resolution levels via SubIFDs (modern), flat IFDs (legacy), or COG overviews
- **Edge-chunk zero-padding** -- Boundary chunks are automatically padded to full tile size for correct Zarr consumption
- **zarrita.js compatible** -- Implements the `AsyncReadable` interface; use directly with `zarr.open()` from zarrita.js
- **OME-Zarr v0.5 output** -- Generates Zarr v3 metadata with `ome.multiscales` and `ome.omero` attributes
- **OME-TIFF generation** -- Converts ngff-zarr `Multiscales` objects to valid OME-TIFF files with embedded OME-XML metadata
- **Full pyramid support** -- Multi-resolution levels are written as SubIFDs, matching the modern OME-TIFF pyramid convention
- **Deflate compression** -- Async zlib/deflate via the native `CompressionStream` (non-blocking), with a synchronous fflate fallback
- **Worker pool support** -- Optional `@fideus-labs/worker-pool` integration offloads compression and decompression to Web Workers, fully releasing the main thread
- **Tiled output** -- Large images are automatically written as 256x256 tiles (configurable), the OME-TIFF-recommended format for efficient random access
- **Parallel plane reading** -- Planes are read with bounded concurrency (configurable) for faster writes from async data sources
- **BigTIFF support** -- Automatic 64-bit offset format when files exceed 4 GB, with manual override via the `format` option
- **5D support** -- Handles all dimension orders (XYZCT, XYZTC, etc.) and arbitrary combinations of T, C, Z, Y, and X axes
- **Custom plane reader** -- Pluggable `getPlane` callback replaces the internal `zarr.get()`, enabling worker-pool decompression or zero-copy reads from uncompressed zarr arrays
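The edge-chunk padding behavior can be sketched as a small standalone function (an illustration of the idea, not fiff's internal code): a boundary tile that is shorter or narrower than the chunk shape is copied row by row into a zero-filled buffer of the full chunk size, so Zarr consumers always receive chunks of a uniform shape.

```typescript
// Pad a partial boundary tile (h x w) into a zero-filled full chunk (chunkH x chunkW).
// `padEdgeChunk` is a hypothetical helper for illustration only.
function padEdgeChunk(
  tile: Uint8Array, // raw tile bytes, row-major, h * w elements
  h: number,
  w: number,
  chunkH: number,
  chunkW: number,
): Uint8Array {
  const out = new Uint8Array(chunkH * chunkW); // zero-initialized
  for (let row = 0; row < h; row++) {
    // copy one source row into the wider destination row
    out.set(tile.subarray(row * w, row * w + w), row * chunkW);
  }
  return out;
}

// A 2x3 boundary tile padded into a 4x4 chunk:
const padded = padEdgeChunk(new Uint8Array([1, 2, 3, 4, 5, 6]), 2, 3, 4, 4);
// rows: [1,2,3,0], [4,5,6,0], [0,0,0,0], [0,0,0,0]
```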
## Installation

```sh
npm install @fideus-labs/fiff
```

For write support, also install the optional peer dependency:

```sh
npm install @fideus-labs/ngff-zarr
```

For worker pool support (offloading compression/decompression to Web Workers):

```sh
npm install @fideus-labs/worker-pool
```

## Reading OME-TIFF

Open a remote file and read chunks through zarrita:

```ts
import { TiffStore } from "@fideus-labs/fiff";
import * as zarr from "zarrita";

const store = await TiffStore.fromUrl("https://example.com/image.ome.tif");
const group = await zarr.open(store as unknown as zarr.Readable, { kind: "group" });

// Open the full-resolution array (level 0)
const arr = await zarr.open(group.resolve("0"), { kind: "array" });
const chunk = await zarr.get(arr);
console.log(chunk.shape); // e.g. [1, 3, 1, 512, 512]
console.log(chunk.data);  // Float32Array, Uint16Array, etc.
```

From an in-memory `ArrayBuffer`:

```ts
const response = await fetch("https://example.com/image.tif");
const buffer = await response.arrayBuffer();
const store = await TiffStore.fromArrayBuffer(buffer);
```

From a `Blob` or `File`:

```ts
const file = document.querySelector("input[type=file]").files[0];
const store = await TiffStore.fromBlob(file);
```

From an already-opened geotiff.js instance:

```ts
import { fromUrl } from "geotiff";

const tiff = await fromUrl("https://example.com/image.tif");
const store = await TiffStore.fromGeoTIFF(tiff);
```

Inspect the store:

```ts
const store = await TiffStore.fromUrl("https://example.com/image.ome.tif");
store.levels;         // number of resolution levels
store.dataType;       // "uint16", "float32", etc.
store.dimensionNames; // ["t", "c", "z", "y", "x"]
store.getShape(0);    // full-res shape, e.g. [1, 3, 1, 2048, 2048]
store.getShape(1);    // level 1 shape, e.g. [1, 3, 1, 1024, 1024]
store.ome;            // parsed OME-XML image metadata (if present)
store.pyramidInfo;    // pyramid structure details
```

`TiffStore` implements the zarrita `Readable` interface, so it can be passed directly to ngff-zarr's `fromNgffZarr` to obtain a `Multiscales` object:
```ts
import { TiffStore } from "@fideus-labs/fiff";
import { fromNgffZarr } from "@fideus-labs/ngff-zarr";

const store = await TiffStore.fromUrl("https://example.com/image.ome.tif");
const multiscales = await fromNgffZarr(store, { version: "0.5" });

const image = multiscales.images[0];
console.log(image.dims);       // e.g. ["t", "c", "z", "y", "x"]
console.log(image.data.shape); // e.g. [1, 3, 1, 512, 512]
console.log(image.data.dtype); // e.g. "uint16"
console.log(image.scale);      // e.g. { t: 1, c: 1, z: 1, y: 0.5, x: 0.5 }
console.log(multiscales.metadata.axes);     // axis definitions
console.log(multiscales.metadata.datasets); // dataset paths and transforms
```

## Writing OME-TIFF

`toOmeTiff()` takes an ngff-zarr `Multiscales` object and returns a complete OME-TIFF file as an `ArrayBuffer`.
```ts
import { toOmeTiff } from "@fideus-labs/fiff";
import {
  createNgffImage,
  createAxis,
  createDataset,
  createMetadata,
  createMultiscales,
} from "@fideus-labs/ngff-zarr";
import * as zarr from "zarrita";

// 1. Create an ngff-zarr image
const image = await createNgffImage(
  [],                 // no parent images
  [512, 512],         // shape: [y, x]
  "uint16",           // data type
  ["y", "x"],         // dimension names
  { y: 0.5, x: 0.5 }, // pixel spacing (micrometers)
  { y: 0.0, x: 0.0 }, // origin offsets
  "my-image",
);

// 2. Populate pixel data
const data = new Uint16Array(512 * 512);
for (let i = 0; i < data.length; i++) data[i] = i % 65536;
await zarr.set(image.data, null, {
  data,
  shape: [512, 512],
  stride: [512, 1],
});

// 3. Build the Multiscales object
const axes = [
  createAxis("y", "space", "micrometer"),
  createAxis("x", "space", "micrometer"),
];
const datasets = [createDataset("0", [0.5, 0.5], [0.0, 0.0])];
const metadata = createMetadata(axes, datasets, "my-image");
const multiscales = createMultiscales([image], metadata);

// 4. Write to OME-TIFF
const buffer = await toOmeTiff(multiscales);

// Save to disk (Node.js / Bun)
await Bun.write("output.ome.tif", buffer);
```

All writer options:

```ts
const buffer = await toOmeTiff(multiscales, {
  compression: "deflate",  // "deflate" (default) or "none"
  compressionLevel: 6,     // 1-9, default 6 (only for deflate)
  dimensionOrder: "XYZCT", // IFD layout order, default "XYZCT"
  imageName: "my-image",   // name in OME-XML metadata
  creator: "my-app",       // creator string in OME-XML
  tileSize: 256,           // tile size in px (0 = strip-based), default 256
  concurrency: 4,          // parallel plane reads, default 4
  format: "auto",          // "auto" | "classic" | "bigtiff", default "auto"
  getPlane: zarrGet,       // custom plane reader (e.g. ngff-zarr's zarrGet)
});
```

## Worker pool support

By passing a `@fideus-labs/worker-pool` instance, deflate compression (writes) and decompression (reads) run on Web Workers using the standard `CompressionStream` / `DecompressionStream` APIs, fully releasing the main thread.
Compression on write:

```ts
import { toOmeTiff } from "@fideus-labs/fiff";
import WorkerPool from "@fideus-labs/worker-pool";

const pool = new WorkerPool(navigator.hardwareConcurrency ?? 4);
const buffer = await toOmeTiff(multiscales, {
  pool, // tiles are compressed on workers
  compression: "deflate",
});
pool.terminateWorkers();
```

Decompression on read:

```ts
import { TiffStore } from "@fideus-labs/fiff";
import WorkerPool from "@fideus-labs/worker-pool";
import * as zarr from "zarrita";

const pool = new WorkerPool(4);

// Registers a worker-backed deflate decoder with geotiff.js.
// This is a global registration: it affects all geotiff instances.
const store = await TiffStore.fromUrl(
  "https://example.com/image.ome.tif",
  { pool },
);

// All subsequent chunk reads decompress on workers
const group = await zarr.open(store as unknown as zarr.Readable, { kind: "group" });
const arr = await zarr.open(group.resolve("0"), { kind: "array" });
const chunk = await zarr.get(arr);
pool.terminateWorkers();
```

With the low-level `buildTiff()`:

```ts
import { buildTiff, type WritableIfd } from "@fideus-labs/fiff";
import WorkerPool from "@fideus-labs/worker-pool";

const pool = new WorkerPool(4);
const ifds: WritableIfd[] = [/* ... */];
const buffer = await buildTiff(ifds, {
  compression: "deflate",
  pool,
});
pool.terminateWorkers();
```

## Custom plane reader

By default `toOmeTiff()` reads each (C, Z, T) plane with the built-in `zarr.get()`. You can replace this with a custom callback via the `getPlane` option. This is useful for:

- **Offloading decompression** -- ngff-zarr's `zarrGet` runs blosc decompression on a worker pool instead of the main thread.
- **Skipping decompression entirely** -- When the in-memory zarr arrays were created with `bytesOnlyCodecs()` (no compression), reads are a trivial memcpy with zero overhead.
```ts
import { toOmeTiff } from "@fideus-labs/fiff";
import { bytesOnlyCodecs, Methods, toMultiscales, zarrGet } from "@fideus-labs/ngff-zarr";

// Create multiscales with no compression (OME-TIFF will re-compress with deflate)
const multiscales = await toMultiscales(ngffImage, {
  method: Methods.ITKWASM_GAUSSIAN,
  codecs: bytesOnlyCodecs(),
});

// Write OME-TIFF using zarrGet to read planes (worker-pool-aware)
const buffer = await toOmeTiff(multiscales, {
  compression: "deflate",
  getPlane: zarrGet,
});
```

The `GetPlane` callback type mirrors `zarr.get()`:

```ts
type GetPlane = (
  data: zarr.Array<zarr.DataType, zarr.Readable>,
  selection: (number | null)[],
) => Promise<{ data: unknown }>;
```

Worker implementation notes:

- Workers use `CompressionStream("deflate")` / `DecompressionStream("deflate")` -- no fflate or other dependencies inside the worker
- The worker script is inlined as a blob URL at runtime (no separate file to serve)
- `ArrayBuffer`s are transferred (zero-copy) between the main thread and workers
- When the compression level is not the default (6), or no pool is provided, fiff falls back to the existing main-thread path (`CompressionStream` -> fflate)
- The pool's bounded concurrency replaces unbounded `Promise.all` over tiles
## Pyramids

When the `Multiscales` object contains multiple images (resolution levels), all sub-resolution levels are written as SubIFDs:

```ts
const fullRes = await createNgffImage([], [1024, 1024], "uint16", ["y", "x"], ...);
const halfRes = await createNgffImage([], [512, 512], "uint16", ["y", "x"], ...);
// ... populate both images with zarr.set() ...

const datasets = [
  createDataset("0", [0.5, 0.5], [0.0, 0.0]),
  createDataset("1", [1.0, 1.0], [0.0, 0.0]),
];
const metadata = createMetadata(axes, datasets, "pyramid");
const multiscales = createMultiscales([fullRes, halfRes], metadata);

const buffer = await toOmeTiff(multiscales);
// Result: OME-TIFF with full-res IFDs + half-res SubIFDs
```

Round-trip: write with `toOmeTiff()`, read back with `TiffStore`:

```ts
import { toOmeTiff, TiffStore } from "@fideus-labs/fiff";
import * as zarr from "zarrita";

const buffer = await toOmeTiff(multiscales);
const store = await TiffStore.fromArrayBuffer(buffer);
const group = await zarr.open(store as unknown as zarr.Readable, { kind: "group" });
const arr = await zarr.open(group.resolve("0"), { kind: "array" });
const result = await zarr.get(arr);
// result.data contains the original pixel values
```

## API reference

### `TiffStore` factory methods

| Method | Description |
|---|---|
| `TiffStore.fromUrl(url, opts?)` | Open from a remote URL (HTTP range requests) |
| `TiffStore.fromArrayBuffer(buf)` | Open from an in-memory `ArrayBuffer` |
| `TiffStore.fromBlob(blob)` | Open from a `Blob` or `File` |
| `TiffStore.fromGeoTIFF(tiff)` | Open from an already-opened GeoTIFF instance |

All factory methods accept an optional `TiffStoreOptions` object:

| Option | Type | Default | Description |
|---|---|---|---|
| `offsets` | `number[]` | `undefined` | Pre-computed IFD byte offsets for O(1) access |
| `headers` | `Record<string, string>` | `undefined` | Additional HTTP headers for remote TIFF requests |
| `pool` | `DeflatePool` | `undefined` | Worker pool for offloading decompression (global geotiff decoder override) |
| `workerUrl` | `string` | `undefined` | Custom worker script URL (only used when `pool` is provided) |
### `TiffStore` accessors

| Accessor | Type | Description |
|---|---|---|
| `store.levels` | `number` | Number of resolution levels |
| `store.dataType` | `ZarrDataType` | Zarr data type string |
| `store.ome` | `OmeImage[]` | Parsed OME-XML images (if present) |
| `store.pyramidInfo` | `PyramidInfo` | Pyramid structure details |
| `store.dimensionNames` | `string[]` | Dimension names (e.g. `["t", "c", "z", "y", "x"]`) |
| `store.getShape(l)` | `number[]` | Shape for resolution level `l` |
| `store.getChunkShape(l)` | `number[]` | Chunk shape for resolution level `l` |
### `toOmeTiff(multiscales, options?)`

| Parameter | Type | Description |
|---|---|---|
| `multiscales` | `Multiscales` | ngff-zarr `Multiscales` object with populated pixel data |
| `options` | `WriteOptions` | Optional writer configuration |

Returns: `Promise<ArrayBuffer>` -- a complete OME-TIFF file.
### `WriteOptions`

| Option | Type | Default | Description |
|---|---|---|---|
| `compression` | `"none" \| "deflate"` | `"deflate"` | Pixel data compression |
| `compressionLevel` | `number` | `6` | Deflate level 1-9 (higher = smaller) |
| `dimensionOrder` | `string` | `"XYZCT"` | IFD plane layout order |
| `imageName` | `string` | `"image"` | Image name in OME-XML |
| `creator` | `string` | `"fiff"` | Creator string in OME-XML |
| `tileSize` | `number` | `256` | Tile size in px (0 = strip-based; must be a multiple of 16) |
| `concurrency` | `number` | `4` | Max parallel plane reads |
| `format` | `"auto" \| "classic" \| "bigtiff"` | `"auto"` | TIFF format (`"auto"` selects BigTIFF above 4 GB) |
| `pool` | `DeflatePool` | `undefined` | Worker pool for offloading deflate compression to Web Workers |
| `workerUrl` | `string` | `undefined` | Custom worker script URL (only used when `pool` is provided) |
| `getPlane` | `GetPlane` | `undefined` | Custom plane reader replacing the internal `zarr.get()` call (described above) |
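The effect of `dimensionOrder` on plane layout can be illustrated with a standalone sketch of the standard OME-TIFF indexing rule (illustrative code, not fiff's internals): the letters after `XY` name the non-planar axes from fastest- to slowest-varying across the IFD sequence.

```typescript
// Compute the linear IFD index of plane (z, c, t) for an OME DimensionOrder.
// `planeIndex` is a hypothetical helper for illustration only.
function planeIndex(
  order: string, // e.g. "XYZCT"
  z: number, c: number, t: number,
  sizeZ: number, sizeC: number, sizeT: number,
): number {
  const pos: Record<string, number> = { Z: z, C: c, T: t };
  const size: Record<string, number> = { Z: sizeZ, C: sizeC, T: sizeT };
  let index = 0;
  let stride = 1;
  for (const axis of order.slice(2)) { // fastest-varying axis comes first
    index += pos[axis] * stride;
    stride *= size[axis];
  }
  return index;
}

// "XYZCT" with sizeZ=5, sizeC=3: Z varies fastest, so (z=2, c=1, t=0) -> 2 + 1*5 = 7
planeIndex("XYZCT", 2, 1, 0, 5, 3, 1); // 7
```

With `"XYCZT"` the same coordinates map differently because C becomes the fastest-varying axis.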
## Development

Requires:

- Bun >= 1.0

```sh
git clone https://github.com/fideus-labs/fiff.git
cd fiff
bun install
```

### Project layout

```
src/
  index.ts           # Public API exports
  tiff-store.ts      # TiffStore class (AsyncReadable implementation)
  metadata.ts        # Zarr v3 / OME-Zarr 0.5 metadata synthesis
  ome-xml.ts         # OME-XML parser (dimensions, channels, DimensionOrder)
  ifd-indexer.ts     # IFD-to-pyramid-level mapping (SubIFD/legacy/COG)
  chunk-reader.ts    # Pixel data reading via geotiff.js readRasters
  dtypes.ts          # TIFF -> Zarr data_type mapping
  utils.ts           # Key parsing, pixel window computation, encoding
  write.ts           # High-level toOmeTiff() writer
  tiff-writer.ts     # Low-level TIFF binary builder (IFDs, SubIFDs, deflate)
  ome-xml-writer.ts  # OME-XML generation from Multiscales metadata
  deflate-worker.ts  # Inline Web Worker source for compress/decompress
  worker-utils.ts    # Worker pool task factories and blob URL helper
  worker-decoder.ts  # Worker-backed geotiff deflate decoder
test/
  fixtures.ts        # Test TIFF generation helpers
  *.test.ts          # 201 tests across 11 files
```
| Command | Description |
|---|---|
| `bun run build` | Build to `dist/` (ESM + declarations) |
| `bun test` | Run all tests |
| `bun run typecheck` | Type-check the full project |
Contributions are welcome! Please see CONTRIBUTING.md for setup instructions, code style guidelines, and the pull request workflow.
MIT -- Copyright (c) Fideus Labs LLC