At present inpainting requires a drawn mask, and a basic HTML interface is launched if the user does not provide one. However, text-based masking would be preferable in such cases, especially for remote inference.
Bryce Drennan's imaginAIry CLI package incorporated ClipSeg for this purpose (`enhancers/clip_masking.py`) while also allowing mask files to be supplied (see the manual). In the case of stablepy, since YOLOv8 is already used for detailfix, it should be reasonably straightforward to incorporate the segmentation model into the package as a fallback.
The inference code returns boxes and masks; the latter are what stablepy would need.
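Whichever model supplies the masks, the per-instance soft masks still have to be merged and binarized before they can be passed to an inpainting pipeline. The sketch below is illustrative only: the function name, the `(N, H, W)` input convention, and the dilation step are assumptions, not part of stablepy's or ultralytics' API.

```python
import numpy as np

def masks_to_inpaint_mask(masks, threshold=0.5, dilate_px=4):
    """Merge per-instance soft masks of shape (N, H, W) into a single
    binary uint8 inpainting mask of shape (H, W) with values 0 or 255.

    A small dilation is applied so the inpainting model gets some
    context around the detected region (a common heuristic, not a
    stablepy requirement).
    """
    merged = np.max(np.asarray(masks, dtype=np.float32), axis=0)
    binary = (merged >= threshold).astype(np.uint8)
    if dilate_px > 0:
        # Crude box dilation via a sliding maximum over a
        # (2*dilate_px + 1)^2 window; avoids a cv2 dependency.
        pad = np.pad(binary, dilate_px)
        windows = np.lib.stride_tricks.sliding_window_view(
            pad, (2 * dilate_px + 1, 2 * dilate_px + 1))
        binary = windows.max(axis=(2, 3))
    return binary * 255
```

The thresholded-then-dilated output can be saved as a grayscale image or handed directly to a diffusers-style inpainting call in place of a hand-drawn mask.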