Nucleo centric feature extraction#131

Open
MikeLippincott wants to merge 40 commits into WayScience:main from MikeLippincott:nucleo_centric_feature_extraction

Conversation

@MikeLippincott
Member

This PR focuses on implementing the "Nucleocentric" approach to cell-mask-free featurization of the nucleus bounding box via deep learning features. This is performed at the object (nucleus) level.

MikeLippincott and others added 30 commits January 13, 2026 14:07
* ready for hPC

* processed all segs

* rerun organoid segs on HPC

* fixed HPC script

* fixed HPC script

* update run list

* update run list

* update run list

* update run list

* update run list

* segmentations re-completed

* Update 2.segment_images/scripts/0.nuclei_segmentation.py

Co-authored-by: Dave Bunten <ekgto445@gmail.com>

* addressing comments

---------

Co-authored-by: Dave Bunten <ekgto445@gmail.com>
@review-notebook-app

Check out this pull request on ReviewNB

See visual diffs & provide feedback on Jupyter Notebooks.



Contributor

Copilot AI left a comment


Pull request overview

This PR implements a "Nucleocentric" approach for cell mask-free featurization of nucleus bounding boxes using deep learning features. The changes include new utility modules, refactoring of existing code, dependency updates, and channel mapping corrections across the codebase.

Changes:

  • Added new image utility functions for object selection, bounding box manipulation, and cropping operations
  • Implemented CHAMMI-75 (MorphEm) based 2D image featurization pipeline with custom transformations
  • Refactored colocalization utilities by extracting common image manipulation functions
  • Enhanced SAMMed3D featurizer to support model reuse via optional extractor parameter
  • Corrected AGP/ER channel mappings from 488/555 to 555/488 across multiple scripts and notebooks
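The nucleocentric idea summarized above, featurizing a crop around each nucleus bounding box rather than a full cell mask, can be sketched roughly as follows. All names here are illustrative, not the PR's actual API:

```python
# Illustrative sketch: select one nucleus from a label image, take its
# bounding box, and crop the intensity image to that box (no cell mask
# required). `nucleus_bbox` is a hypothetical helper, not from the PR.
import numpy


def nucleus_bbox(label_image: numpy.ndarray, object_id: int):
    """Return (zmin, ymin, xmin, zmax, ymax, xmax) for one labeled object."""
    coords = numpy.argwhere(label_image == object_id)
    zmin, ymin, xmin = coords.min(axis=0)
    zmax, ymax, xmax = coords.max(axis=0) + 1  # make the upper bound exclusive
    return zmin, ymin, xmin, zmax, ymax, xmax


labels = numpy.zeros((4, 8, 8), dtype=int)
labels[1:3, 2:5, 3:6] = 7  # one "nucleus" with object ID 7
image = numpy.random.rand(4, 8, 8)

z1, y1, x1, z2, y2, x2 = nucleus_bbox(labels, 7)
crop = image[z1:z2, y1:y2, x1:x2]
print(crop.shape)  # (2, 3, 3)
```

The crop (not the mask) is then what gets handed to the deep-learning featurizer.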

Reviewed changes

Copilot reviewed 24 out of 27 changed files in this pull request and generated 12 comments.

Show a summary per file

  • uv_setup.sh: Removed empty lines and added unset VIRTUAL_ENV for cleaner environment handling
  • uv.lock: Added dependencies for transformers ecosystem (certifi, charset-normalizer, huggingface-hub, requests, etc.) and image processing (scikit-image, scipy, networkx)
  • utils/src/image_analysis_3D/image_utils/image_utils.py: New module with utility functions for 3D image manipulation, object selection, and bounding box operations
  • utils/src/image_analysis_3D/featurization_utils/chammi75_featurization.py: New module implementing CHAMMI-75/MorphEm featurization pipeline with custom transformations
  • utils/src/image_analysis_3D/featurization_utils/colocalization_utils.py: Refactored to import image utilities from new centralized module
  • utils/src/image_analysis_3D/featurization_utils/sammed3d_featurizer.py: Added optional extractor parameter to avoid model reloading in loops
  • pyproject.toml: Added pandas, scikit-image, and transformers dependencies
  • 3.cellprofiling/scripts/*.py: Corrected channel mappings for AGP (555) and ER (488)
  • 3.cellprofiling/notebooks/*.ipynb: Updated kernel display names and corrected channel mappings
  • 2.segment_images/scripts/get_run_combinations.py: Corrected channel mappings
  • .pre-commit-config.yaml: Updated ruff version from v0.15.1 to v0.15.2
  • .gitignore: Added 7.technical_analysis/results/* entry (though duplicated)


Comment on lines +2 to +4
This utils file has module that utilize CHAMMI-75's featurization model.
This used a self-supervised deep-learning model
that uses a Vision Transformer (ViT) architecture

Copilot AI Feb 22, 2026


The filename and function names reference "chami75" but the docstring and model reference "CHAMMI-75". For consistency, either rename the file to chammi75_featurization.py or update all references to use "chami75" (without the double 'M'). The actual model repository is "CaicedoLab/MorphEm" so verify the correct spelling.

Suggested change
This utils file has module that utilize CHAMMI-75's featurization model.
This used a self-supervised deep-learning model
that uses a Vision Transformer (ViT) architecture
This utils file provides utilities that use the chami75 featurization model
based on the CaicedoLab/MorphEm self-supervised deep-learning model
that uses a Vision Transformer (ViT) architecture.

Comment on lines +1 to +346
from typing import Tuple, Union

import numpy


def select_objects_from_label(
    label_image: numpy.ndarray, object_ids: list
) -> numpy.ndarray:
    """
    Selects objects from a label image based on the provided object IDs.

    Parameters
    ----------
    label_image : numpy.ndarray
        The segmented label image.
    object_ids : list
        The object IDs to select.

    Returns
    -------
    numpy.ndarray
        The label image with only the selected objects.
    """
    label_image = label_image.copy()
    label_image[label_image != object_ids] = 0
    return label_image


def expand_box(
    min_coor: int, max_coord: int, current_min: int, current_max: int, expand_by: int
) -> Union[Tuple[int, int], ValueError]:
    """
    Expand the bounding box of an object in a 3D image.

    Parameters
    ----------
    min_coor : int
        The minimum coordinate of the image for any dimension.
    max_coord : int
        The maximum coordinate of the image for any dimension.
    current_min : int
        The current minimum coordinate of the bounding box of an object for any dimension.
    current_max : int
        The current maximum coordinate of the bounding box of an object for any dimension.
    expand_by : int
        The amount to expand the bounding box by.

    Returns
    -------
    Union[Tuple[int, int], ValueError]
        The new minimum and maximum coordinates of the bounding box.
        Raises ValueError if the expansion is not possible.
    """

    if max_coord - min_coor - (current_max - current_min) < expand_by:
        return ValueError("Cannot expand box by the requested amount")
    while expand_by > 0:
        if current_min > min_coor:
            current_min -= 1
            expand_by -= 1
        elif current_max < max_coord:
            current_max += 1
            expand_by -= 1

    return current_min, current_max


def new_crop_border(
    bbox1: Tuple[
        Union[int, float],
        Union[int, float],
        Union[int, float],
        Union[int, float],
        Union[int, float],
        Union[int, float],
    ],
    bbox2: Tuple[
        Union[int, float],
        Union[int, float],
        Union[int, float],
        Union[int, float],
        Union[int, float],
        Union[int, float],
    ],
    image: numpy.ndarray,
) -> Tuple[
    Tuple[
        Union[int, float],
        Union[int, float],
        Union[int, float],
        Union[int, float],
        Union[int, float],
        Union[int, float],
    ],
    Tuple[
        Union[int, float],
        Union[int, float],
        Union[int, float],
        Union[int, float],
        Union[int, float],
        Union[int, float],
    ],
]:
    """
    Expand the bounding boxes of two objects in a 3D image to match their sizes.

    Parameters
    ----------
    bbox1 : Tuple[Union[int, float], Union[int, float], Union[int, float], Union[int, float], Union[int, float], Union[int, float]]
        The bounding box of the first object.
    bbox2 : Tuple[Union[int, float], Union[int, float], Union[int, float], Union[int, float], Union[int, float], Union[int, float]]
        The bounding box of the second object.
    image : numpy.ndarray
        The image to crop for each of the bounding boxes.

    Returns
    -------
    Tuple[Tuple[Union[int, float], Union[int, float], Union[int, float], Union[int, float], Union[int, float], Union[int, float]], Tuple[Union[int, float], Union[int, float], Union[int, float], Union[int, float], Union[int, float], Union[int, float]]]
        The new bounding boxes of the two objects.

    Raises
    ------
    ValueError
        If the expansion is not possible.
    """
    i1z1, i1y1, i1x1, i1z2, i1y2, i1x2 = bbox1
    i2z1, i2y1, i2x1, i2z2, i2y2, i2x2 = bbox2
    z_range1 = i1z2 - i1z1
    y_range1 = i1y2 - i1y1
    x_range1 = i1x2 - i1x1
    z_range2 = i2z2 - i2z1
    y_range2 = i2y2 - i2y1
    x_range2 = i2x2 - i2x1
    z_diff = numpy.abs(z_range1 - z_range2)
    y_diff = numpy.abs(y_range1 - y_range2)
    x_diff = numpy.abs(x_range1 - x_range2)
    min_z_coord = 0
    max_z_coord = image.shape[0]
    min_y_coord = 0
    max_y_coord = image.shape[1]
    min_x_coord = 0
    max_x_coord = image.shape[2]
    if z_range1 < z_range2:
        i1z1, i1z2 = expand_box(
            min_coor=min_z_coord,
            max_coord=max_z_coord,
            current_min=i1z1,
            current_max=i1z2,
            expand_by=z_diff,
        )
    elif z_range1 > z_range2:
        i2z1, i2z2 = expand_box(
            min_coor=min_z_coord,
            max_coord=max_z_coord,
            current_min=i2z1,
            current_max=i2z2,
            expand_by=z_diff,
        )
    if y_range1 < y_range2:
        i1y1, i1y2 = expand_box(
            min_coor=min_y_coord,
            max_coord=max_y_coord,
            current_min=i1y1,
            current_max=i1y2,
            expand_by=y_diff,
        )
    elif y_range1 > y_range2:
        i2y1, i2y2 = expand_box(
            min_coor=min_y_coord,
            max_coord=max_y_coord,
            current_min=i2y1,
            current_max=i2y2,
            expand_by=y_diff,
        )
    if x_range1 < x_range2:
        i1x1, i1x2 = expand_box(
            min_coor=min_x_coord,
            max_coord=max_x_coord,
            current_min=i1x1,
            current_max=i1x2,
            expand_by=x_diff,
        )
    elif x_range1 > x_range2:
        i2x1, i2x2 = expand_box(
            min_coor=min_x_coord,
            max_coord=max_x_coord,
            current_min=i2x1,
            current_max=i2x2,
            expand_by=x_diff,
        )
    return (i1z1, i1y1, i1x1, i1z2, i1y2, i1x2), (i2z1, i2y1, i2x1, i2z2, i2y2, i2x2)


# crop the image to the bbox of the mask
def crop_3D_image(
    image: numpy.ndarray,
    bbox: Tuple[
        Union[int, float],
        Union[int, float],
        Union[int, float],
        Union[int, float],
        Union[int, float],
        Union[int, float],
    ],
) -> numpy.ndarray:
    """
    Crop a 3D image to the bounding box of a mask.

    Parameters
    ----------
    image : numpy.ndarray
        The image to crop.
    bbox : Tuple[Union[int, float], Union[int, float], Union[int, float], Union[int, float], Union[int, float], Union[int, float]]
        The bounding box of the mask.

    Returns
    -------
    numpy.ndarray
        The cropped image.
    """
    z1, y1, x1, z2, y2, x2 = bbox
    return image[z1:z2, y1:y2, x1:x2]


def single_3D_image_expand_bbox(
    image: numpy.ndarray,
    bbox: tuple[int, int, int, int, int, int],
    expand_pixels: int,
    anisotropy_factor: int,
) -> tuple[int, int, int, int, int, int]:
    """
    Expand the bbox in a way that keeps the crop within the
    confines of the image volume

    Parameters
    ----------
    image : numpy.ndarray
        3D image array from which the bbox was derived
    bbox : tuple[int, int, int, int, int, int]
        3D bbox in the format (zmin, ymin, xmin, zmax, ymax, xmax)
    expand_pixels : int
        number of pixels to expand the bbox in each direction (z, y, x)
        the corrdinates become isotropic here so the expansion is the same across dimensions,
        but the anisotropy factor is used to adjust for the z dimension
    anisotropy_factor : int
        The ratio of "pixel" size in um between the z dimension and the x/y dimensions.
        This is used to adjust the expansion of the bbox in the z dimension to account
        for anisotropy in the image volume.
        For example, if the z spacing is 5um and the x/y spacing is 1um,
        then the anisotropy factor would be 5.

    Returns
    -------
    tuple[int, int, int, int, int, int]
        Updated bbox in the format (zmin, ymin, xmin, zmax, ymax, xmax)
        after expansion and adjustment for anisotropy
    """
    z1, y1, x1, z2, y2, x2 = bbox
    zmin, ymin, xmin = 0, 0, 0
    zmax, ymax, xmax = image.shape
    # adjust the anisotropy factor for the z dimension
    z1, z2 = z1 * anisotropy_factor, z2 * anisotropy_factor
    zmax = zmax * anisotropy_factor
    # expand the bbox by the specified number of pixels in each direction
    z1_expanded = z1 - expand_pixels
    y1_expanded = y1 - expand_pixels
    x1_expanded = x1 - expand_pixels
    z2_expanded = z2 + expand_pixels
    y2_expanded = y2 + expand_pixels
    x2_expanded = x2 + expand_pixels
    # convert the expanded bbox back to the original z dimension scale
    z1_expanded = numpy.floor(z1_expanded / anisotropy_factor)
    z2_expanded = numpy.ceil(z2_expanded / anisotropy_factor)
    # ensure the expanded bbox does not go outside the image boundaries
    z1_expanded, z2_expanded = (
        max(z1_expanded, numpy.floor(zmin / anisotropy_factor)).astype(int),
        min(z2_expanded, numpy.ceil(zmax / anisotropy_factor)).astype(int),
    )
    y1_expanded, y2_expanded = max(y1_expanded, ymin), min(y2_expanded, ymax)
    x1_expanded, x2_expanded = max(x1_expanded, xmin), min(x2_expanded, xmax)

    return (
        z1_expanded,
        y1_expanded,
        x1_expanded,
        z2_expanded,
        y2_expanded,
        x2_expanded,
    )


def check_for_xy_squareness(bbox: tuple[int, int, int, int, int, int]) -> float:
    """
    This function returns the ratio of the x length to the y length
    A value of 1 indicates a square bbox is present

    Parameters
    ----------
    bbox : The bbox to check
        (z_min, y_min, x_min, z_max, y_max, x_max)
        Where each value is an int representing the pixel coordinate of the bbox in that dimension

    Returns
    -------
    float
        The ratio of the y length to the x length of the bbox. A value of 1 indicates a square bbox.
    """
    z_min, y_min, x_min, z_max, y_max, x_max = bbox
    xy_squareness = (y_max - y_min) / (x_max - x_min)
    return xy_squareness


def square_off_xy_crop_bbox(
    bbox: tuple[int, int, int, int, int, int],
) -> tuple[int, int, int, int, int, int]:
    """
    This function adjusts the bbox to be square in the xy plane
    based on the current x,y dimensions of the bbox.

    Parameters
    ----------
    bbox : tuple[int,int,int,int,int,int]
        The bbox to adjust
        (z_min, y_min, x_min, z_max, y_max, x_max)
        Where each value is an int representing the pixel coordinate of the bbox in that dimension

    Returns
    -------
    tuple[int,int,int,int,int,int]
        The adjusted bbox that is square in the xy plane
        (z_min, new_y_min, new_x_min, z_max, new_y_max, new_x_max)
        Where each value is an int representing the pixel coordinate of the bbox in that dimension
    """
    zmin, ymin, xmin, zmax, ymax, xmax = bbox
    # first find the larger dimension between x and y
    x_size = xmax - xmin
    y_size = ymax - ymin
    if x_size > y_size:
        # need to expand y dimension
        new_ymin = int(ymin - (x_size - y_size) / 2)
        new_ymax = int(ymax + (x_size - y_size) / 2)
        return (zmin, new_ymin, xmin, zmax, new_ymax, xmax)
    elif y_size > x_size:
        # need to expand x dimension
        new_xmin = int(xmin - (y_size - x_size) / 2)
        new_xmax = int(xmax + (y_size - x_size) / 2)
        return (zmin, ymin, new_xmin, zmax, ymax, new_xmax)
    else:
        # already square
        return bbox

Copilot AI Feb 22, 2026


The newly added image_utils.py module contains several utility functions but there are no corresponding unit tests in the utils/tests directory. Given that the repository has comprehensive test coverage for other modules, tests should be added for the new image utility functions, especially for functions like select_objects_from_label, expand_box, single_3D_image_expand_bbox, and check_for_xy_squareness to prevent regressions.
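As a sketch of what such a unit test could look like (pytest-style; the `expand_box` body is inlined here so the snippet is self-contained, whereas a real test would import it from `image_utils`):

```python
# Hypothetical pytest-style test for expand_box; the function body is
# copied inline from the PR so this sketch runs standalone.
def expand_box(min_coor, max_coord, current_min, current_max, expand_by):
    if max_coord - min_coor - (current_max - current_min) < expand_by:
        return ValueError("Cannot expand box by the requested amount")
    while expand_by > 0:
        if current_min > min_coor:
            current_min -= 1
            expand_by -= 1
        elif current_max < max_coord:
            current_max += 1
            expand_by -= 1
    return current_min, current_max


def test_expand_box_grows_within_bounds():
    # box [10, 20] inside image extent [0, 100], expanded by 4 pixels
    new_min, new_max = expand_box(0, 100, 10, 20, 4)
    assert (new_max - new_min) - (20 - 10) == 4  # grew by exactly 4
    assert new_min >= 0 and new_max <= 100       # stayed inside the image


test_expand_box_grows_within_bounds()
```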

        The label image with only the selected objects.
    """
    label_image = label_image.copy()
    label_image[label_image != object_ids] = 0

Copilot AI Feb 22, 2026


The function select_objects_from_label incorrectly filters the label image. The line label_image[label_image != object_ids] = 0 will not work as intended when object_ids is a list. This comparison will always be True for array != list comparison. The function should use numpy.isin() instead: label_image[~numpy.isin(label_image, object_ids)] = 0

Suggested change
    label_image[label_image != object_ids] = 0
    label_image[~numpy.isin(label_image, object_ids)] = 0
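A minimal demonstration of the corrected filtering with `numpy.isin` (hypothetical data):

```python
# Keep only labels 1 and 3; zero out everything else.
import numpy

labels = numpy.array([0, 1, 2, 3, 2])
keep = [1, 3]

filtered = labels.copy()
filtered[~numpy.isin(filtered, keep)] = 0
print(filtered)  # [0 1 0 3 0]
```

`numpy.isin` builds an element-wise membership mask against the whole list, which is what the original `!=` comparison against a Python list cannot do.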

Comment on lines +15 to +16
def get_chami75_model(device):
    device = "cuda"

Copilot AI Feb 22, 2026


The device parameter is passed to the function but immediately overridden to "cuda" on line 16. Either use the parameter or remove it from the function signature. This hardcoded override prevents the function from running on CPU-only systems.

Suggested change
def get_chami75_model(device):
    device = "cuda"
def get_chami75_model(device=None):
    if device is None:
        device = "cuda" if torch.cuda.is_available() else "cpu"


def check_for_xy_squareness(bbox: tuple[int, int, int, int, int, int]) -> float:
"""
This function returns the ratio of the x length to the y length

Copilot AI Feb 22, 2026


The docstring states this function "returns the ratio of the x length to the y length" but the implementation returns (y_max - y_min) / (x_max - x_min) which is actually the y length divided by x length (inverse of what's stated). Either fix the docstring or the implementation to match the intended behavior.

Suggested change
    This function returns the ratio of the x length to the y length
    This function returns the ratio of the y length to the x length

Comment on lines +1 to +108
"""
This utils file has module that utilize CHAMMI-75's featurization model.
This used a self-supervised deep-learning model
that uses a Vision Transformer (ViT) architecture
"""

import numpy
import torch
import torch.nn as nn
from torchvision import transforms as v2
from transformers import AutoModel


# get the model
def get_chami75_model(device):
device = "cuda"
model = AutoModel.from_pretrained("CaicedoLab/MorphEm", trust_remote_code=True)
model.to(device).eval()

return model


# Noise Injector transformation
class SaturationNoiseInjector(nn.Module):
def __init__(self, low=200, high=255):
super().__init__()
self.low = low
self.high = high

def forward(self, x: torch.Tensor) -> torch.Tensor:
channel = x[0].clone()
noise = torch.empty_like(channel).uniform_(self.low, self.high)
mask = (channel == 255).float()
noise_masked = noise * mask
channel[channel == 255] = 0
channel = channel + noise_masked
x[0] = channel
return x


# Self Normalize transformation
class PerImageNormalize(nn.Module):
def __init__(self, eps=1e-7):
super().__init__()
self.eps = eps
self.instance_norm = nn.InstanceNorm2d(
num_features=1,
affine=False,
track_running_stats=False,
eps=self.eps,
)

def forward(self, x: torch.Tensor) -> torch.Tensor:
if x.dim() == 3:
x = x.unsqueeze(0)
x = self.instance_norm(x)
if x.shape[0] == 1:
x = x.squeeze(0)
return x


def featurize_2D_image_w_chami75(
image_tensor: torch.Tensor, model: torch.nn.Module, device: torch.device
):
# Define transforms
transform = v2.Compose(
[
SaturationNoiseInjector(),
PerImageNormalize(),
v2.Resize(size=(224, 224), antialias=True),
]
)
# Bag of Channels (BoC) - process each channel independently
with torch.no_grad():
batch_feat = []
image_tensor = image_tensor.to(device)

for c in range(image_tensor.shape[1]):
# Extract single channel: (N, C, H, W) -> (N, 1, H, W)
# where:
# N is batch size (1 in this case),
# C is number of channels,
# H and W are Y and X dimensions
single_channel = image_tensor[:, c, :, :].unsqueeze(1)

# Apply transforms
single_channel = transform(single_channel.squeeze(1)).unsqueeze(1)

# Extract features
output = model.forward_features(single_channel)
feat_temp = output["x_norm_clstoken"].cpu().detach().numpy()
batch_feat.append(feat_temp)
return batch_feat[0]


def call_chami75_featurization_pipeline(
cropped_image: numpy.ndarray, model: torch.nn.Module
):
device = "cuda" if torch.cuda.is_available() else "cpu"
images = torch.tensor(cropped_image, dtype=torch.float32).unsqueeze(
0
) # Add batch dimension
# images is now (B, Y, X), add channel dimension -> (B, 1, Y, X)
images = images.unsqueeze(1)
# Replicate channel 3 times to get (B, 3, Y, X)
images = images.repeat(1, 3, 1, 1)
batch_feat = featurize_2D_image_w_chami75(images, model, device)
return batch_feat

Copilot AI Feb 22, 2026


The newly added chammi75_featurization.py module lacks unit tests. Given that the repository has comprehensive test coverage for other modules, tests should be added for the new featurization functions, especially for the transformation classes (SaturationNoiseInjector, PerImageNormalize) and the main featurization pipeline functions.

"""

if max_coord - min_coor - (current_max - current_min) < expand_by:
return ValueError("Cannot expand box by the requested amount")

Copilot AI Feb 22, 2026


The function returns a ValueError instead of raising it. Line 56 should be raise ValueError("Cannot expand box by the requested amount") not return ValueError(...)
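The behavioral difference is easy to see in isolation: a returned exception is just an ordinary value that callers can silently ignore, while a raised one actually interrupts execution (toy functions for illustration):

```python
def bad_expand():
    # bug pattern: hands the exception back as a value
    return ValueError("Cannot expand box by the requested amount")


def good_expand():
    # correct pattern: signals the failure to the caller
    raise ValueError("Cannot expand box by the requested amount")


result = bad_expand()  # no exception is raised here
print(isinstance(result, ValueError))  # True

try:
    good_expand()
except ValueError as err:
    print(f"caught: {err}")  # caught: Cannot expand box by the requested amount
```

With the buggy pattern, a caller that unpacks the result as `(min, max)` would fail later with a confusing error far from the actual problem.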

            expand_by -= 1
        elif current_max < max_coord:
            current_max += 1
            expand_by -= 1

Copilot AI Feb 22, 2026


The while loop may not terminate if the box cannot be expanded even though the initial check passes. When current_min equals min_coor and current_max equals max_coord, neither condition in the if-elif will be true, causing an infinite loop. Add an else clause with a break or raise an exception.

Suggested change
            expand_by -= 1
            expand_by -= 1
        else:
            # Defensive check: if neither boundary can be moved further,
            # abort to avoid a non-terminating loop.
            return ValueError("Cannot expand box by the requested amount")

        3D bbox in the format (zmin, ymin, xmin, zmax, ymax, xmax)
    expand_pixels : int
        number of pixels to expand the bbox in each direction (z, y, x)
        the corrdinates become isotropic here so the expansion is the same across dimensions,

Copilot AI Feb 22, 2026


There's a typo in the comment: "corrdinates" should be "coordinates".

Suggested change
        the corrdinates become isotropic here so the expansion is the same across dimensions,
        the coordinates become isotropic here so the expansion is the same across dimensions,

            output = model.forward_features(single_channel)
            feat_temp = output["x_norm_clstoken"].cpu().detach().numpy()
            batch_feat.append(feat_temp)
    return batch_feat[0]

Copilot AI Feb 22, 2026


The function only returns features for the first channel (batch_feat[0]) even when processing multiple channels. This appears to be unintended - either return all channel features (remove the indexing) or clarify in the documentation why only the first channel is returned.

Suggested change
    return batch_feat[0]
    return batch_feat



# get the model
def get_chami75_model(device):
Member


Consider correcting the spelling of chammi here (two m's).

        return x


# Self Normalize transformation
Member


Consider adding a bit more formal documentation to these classes in the form of docstrings where they belong.

        return x


def featurize_2D_image_w_chami75(
Member


Consider adding docstrings where possible to functions without them to help describe what's happening.

    cropped_image: numpy.ndarray, model: torch.nn.Module
):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    images = torch.tensor(cropped_image, dtype=torch.float32).unsqueeze(
Member


Could you use torch.from_numpy here to conserve memory use (where otherwise we might duplicate)?
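For context on this suggestion: `torch.tensor(ndarray)` copies the data, while `torch.from_numpy(ndarray)` wraps the existing buffer. The same copy-vs-share distinction can be illustrated with numpy alone (`numpy.array` always copies; `numpy.asarray` reuses the input when the dtype already matches):

```python
import numpy

src = numpy.zeros(6, dtype=numpy.float32)

copied = numpy.array(src)    # new buffer, analogous to torch.tensor(src)
shared = numpy.asarray(src)  # same buffer, analogous to torch.from_numpy(src)

src[0] = 99.0
print(copied[0], shared[0])              # 0.0 99.0
print(numpy.shares_memory(src, shared))  # True
```

The trade-off is that the shared tensor mutates if the source array does, so the copy-free path is safest when the numpy array is not reused afterwards.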

    return label_image


def expand_box(
Member


Would something in this package help avoid any code here?
