Support larger-than-RAM images #1224
jo-mueller
started this conversation in Ideas
Replies: 2 comments
- In my opinion it should certainly be possible, but it would need some work. Currently, it's possible to load large images into RAM and then process slices of them in VRAM. But if your data is in the TB range, a specialized solution would indeed be needed. Since zarr and dask are Python libraries, it should be "easy" for someone experienced with them (not me, though ;) ).
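  For reference, a minimal sketch of the in-memory workflow I mean, using SAHI's sliced prediction (the backend name, weights path, and slice parameters below are just placeholder assumptions):

  ```python
  # Current workflow: the whole image is read into RAM first,
  # then SAHI slices it and runs each slice on the GPU.
  from sahi import AutoDetectionModel
  from sahi.predict import get_sliced_prediction

  detection_model = AutoDetectionModel.from_pretrained(
      model_type="ultralytics",        # assumption: any SAHI-supported backend
      model_path="yolo_weights.pt",    # hypothetical weights file
      confidence_threshold=0.4,
  )

  result = get_sliced_prediction(
      "large_image.png",               # hypothetical image, loaded fully into RAM
      detection_model,
      slice_height=512,
      slice_width=512,
      overlap_height_ratio=0.2,
      overlap_width_ratio=0.2,
  )
  ```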
- What are your thoughts about this, @onuralpszr? 🤔
- Hi devs,
  first off: Great library. It's really easy and seamless to use within common CV frameworks. On to my question/idea/feature request: I often work with image data that is larger than memory. Specifically, I work in biological image processing, and the data generated by microscopes can easily range into the TB, which of course makes loading images into RAM unfeasible.

  To address this, zarr and dask are commonly used to read/write such datasets or to run computational jobs on them. I was wondering whether it would be possible to adapt the predict.get_prediction(...) function to operate on such datasets? I haven't looked closely at the inner workings of the sliced inference, but the basic difference would be that the inference would have to load the queried tile from disk on demand rather than indexing into an in-memory array. I think .zarr is a relatively common file format across many domains for larger-than-memory data, so supporting it wouldn't be overly niche.

  Would be interested in any opinions on this - or maybe it is already possible and I just haven't read the documentation carefully enough 🙈
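  Roughly, this is what I have in mind (untested sketch; it assumes an RGB zarr array on disk, a model loaded via AutoDetectionModel, and simple non-overlapping tiles, and the dataset/weights paths are made up):

  ```python
  # Sketch: run per-tile prediction on a zarr-backed image that never fits
  # in RAM, loading each tile from disk on demand.
  import zarr
  from sahi import AutoDetectionModel
  from sahi.predict import get_prediction

  detection_model = AutoDetectionModel.from_pretrained(
      model_type="ultralytics",        # assumption: any SAHI-supported backend
      model_path="yolo_weights.pt",    # hypothetical weights file
      confidence_threshold=0.4,
  )

  image = zarr.open("huge_image.zarr", mode="r")  # hypothetical array, shape (H, W, 3)
  tile = 1024
  results = []  # (y_offset, x_offset, predictions) per tile

  for y in range(0, image.shape[0], tile):
      for x in range(0, image.shape[1], tile):
          # Only this tile is read from disk and materialised as a numpy array.
          patch = image[y : y + tile, x : x + tile]
          pred = get_prediction(patch, detection_model)
          # Offsets are kept so detections can later be mapped back to global
          # coordinates (and merged across tile borders, which this skips).
          results.append((y, x, pred.object_prediction_list))
  ```

  The missing pieces compared to get_sliced_prediction are the overlap between tiles and the postprocessing that merges detections across tile borders, which is why built-in support would be nicer than rolling this by hand.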