Dear Authors,
I have a question about the structure of UNETR.
I am currently trying to segment 3D volume images whose sizes range from 512x512x400 to 512x512x700.
Since the patch embedding dimension is 768, can the model accept variable input sizes (e.g., volumes of the different sizes above)?
Or is it necessary to first crop the volume into fixed-size sub-volumes (e.g., 96x96x96), which are then divided into patches for the vision transformer?
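For context, here is a small sketch of my understanding. Assuming a 16x16x16 patch size (as in the paper), the ViT sequence length depends on the input shape, which is what prompts my question:

```python
# Sketch of the token count for the UNETR patch embedding,
# assuming a 16x16x16 patch size (as described in the paper).
def num_tokens(shape, patch=16):
    d, h, w = shape
    return (d // patch) * (h // patch) * (w // patch)

# A fixed 96x96x96 crop always yields the same sequence length:
print(num_tokens((96, 96, 96)))    # 6*6*6 = 216 tokens

# My whole volumes would yield different (and much longer) sequences:
print(num_tokens((512, 512, 400)))  # 32*32*25 = 25600 tokens
print(num_tokens((512, 512, 700)))  # 32*32*43 = 44032 tokens
```

So it seems each full volume would produce a different number of tokens, unless fixed-size crops are used.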