dgp.annotations package

dgp.annotations.base_annotation module

class dgp.annotations.base_annotation.Annotation(ontology=None)

Bases: ABC

Base annotation type. All other annotations should inherit from this type and implement member functions.

ontology: Ontology, default: None

Ontology object for the annotation key

abstract property hexdigest

Reproducible hash of annotation.

abstract classmethod load(annotation_file, ontology)

Loads an annotation from file into a canonical format for consumption in the __getitem__ function of BaseDataset. The format/data structure for annotations will vary based on task.

annotation_file: str

Full path to annotation

ontology: Ontology

Ontology for given annotation

property ontology
abstract render()

Return a rendering of the annotation. Expected format is a PIL.Image or np.array

abstract save(save_dir)

Serialize the annotation object if possible, and save it to the specified directory. Annotations are saved as <save_dir>/<sha>.<ext>

save_dir: str

Path to the directory in which to save the annotation
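The `<save_dir>/<sha>.<ext>` convention means an annotation's filename is its reproducible hash. A minimal self-contained sketch of that pattern (function name and JSON serialization are illustrative assumptions, not the DGP implementation):

```python
import hashlib
import json
import os
import tempfile

def save_annotation(payload, save_dir, ext="json"):
    """Serialize `payload` and save it as <save_dir>/<sha>.<ext> (illustrative)."""
    data = json.dumps(payload, sort_keys=True).encode("utf-8")
    sha = hashlib.sha1(data).hexdigest()  # reproducible hash of the serialized annotation
    path = os.path.join(save_dir, "{}.{}".format(sha, ext))
    with open(path, "wb") as f:
        f.write(data)
    return path

# Identical payloads hash to the same name, so re-saving is idempotent.
with tempfile.TemporaryDirectory() as d:
    p1 = save_annotation({"boxes": [[0, 0, 10, 10]]}, d)
    p2 = save_annotation({"boxes": [[0, 0, 10, 10]]}, d)
    assert p1 == p2
```

Content-addressed filenames like this are what makes `hexdigest` useful: the hash doubles as a stable identifier for deduplication.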

dgp.annotations.bounding_box_2d_annotation module

class dgp.annotations.bounding_box_2d_annotation.BoundingBox2DAnnotationList(ontology, boxlist)

Bases: Annotation

Container for 2D bounding box annotations.

ontology: BoundingBoxOntology

Ontology for 2D bounding box tasks.

boxlist: list[BoundingBox2D]

List of BoundingBox2D objects. See dgp/utils/structures/bounding_box_2d for more details.

property attributes

Return a list of dictionaries of attribute name to value.

property class_ids

Return class ID for each box, with ontology applied: 0 is background, class IDs mapped to a contiguous set.
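A plain-Python sketch of what such a contiguous mapping can look like, with 0 reserved for background (the sorted ordering here is an illustrative choice; the real mapping comes from the ontology's lookup tables):

```python
def to_contiguous(class_ids):
    """Map raw ontology class IDs to a contiguous set.

    0 is reserved for background, so the first class gets ID 1.
    Illustrative only -- DGP's actual mapping is defined by its ontology.
    """
    return {cid: i + 1 for i, cid in enumerate(sorted(set(class_ids)))}

mapping = to_contiguous([21, 5, 5, 42])
assert mapping == {5: 1, 21: 2, 42: 3}
```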

property hexdigest

Reproducible hash of annotation.

property instance_ids
classmethod load(annotation_file, ontology)

Load annotation from annotation file and ontology.

annotation_file: str or bytes

Full path to annotation or bytestring

ontology: BoundingBoxOntology

Ontology for 2D bounding box tasks.

BoundingBox2DAnnotationList

Annotation object instantiated from file.

property ltrb

Return boxes as (N, 4) np.ndarray in format ([left, top, right, bottom])

property ltwh

Return boxes as (N, 4) np.ndarray in format ([left, top, width, height])
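The two box layouts differ only in the last two entries; a plain-Python sketch of the conversion (the DGP properties return (N, 4) np.ndarrays rather than lists):

```python
def ltrb_to_ltwh(box):
    """Convert [left, top, right, bottom] to [left, top, width, height]."""
    l, t, r, b = box
    return [l, t, r - l, b - t]

def ltwh_to_ltrb(box):
    """Convert [left, top, width, height] back to [left, top, right, bottom]."""
    l, t, w, h = box
    return [l, t, l + w, t + h]

assert ltrb_to_ltwh([10, 20, 50, 80]) == [10, 20, 40, 60]
assert ltwh_to_ltrb([10, 20, 40, 60]) == [10, 20, 50, 80]
```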

render()

TODO: Batch rendering function for bounding boxes.

save(save_dir)

Serialize the Annotation object and save it to the specified directory. Annotations are saved as <save_dir>/<sha>.<ext>

save_dir: str

Directory in which annotation is saved.

output_annotation_file: str

Full path to saved annotation

to_proto()

Return annotation as pb object.

BoundingBox2DAnnotations

Annotation as defined in proto/annotations.proto

dgp.annotations.bounding_box_3d_annotation module

class dgp.annotations.bounding_box_3d_annotation.BoundingBox3DAnnotationList(ontology, boxlist)

Bases: Annotation

Container for 3D bounding box annotations.

ontology: BoundingBoxOntology

Ontology for 3D bounding box tasks.

boxlist: list[BoundingBox3D]

List of BoundingBox3D objects. See utils/structures/bounding_box_3d for more details.

property attributes

Return a list of dictionaries of attribute name to value.

property class_ids

Return class ID for each box, with ontology applied: 0 is background, class IDs mapped to a contiguous set.

property hexdigest

Reproducible hash of annotation.

property instance_ids
classmethod load(annotation_file, ontology)

Load annotation from annotation file and ontology.

annotation_file: str or bytes

Full path to annotation or bytestring

ontology: BoundingBoxOntology

Ontology for 3D bounding box tasks.

BoundingBox3DAnnotationList

Annotation object instantiated from file.

property poses

Get poses for bounding boxes in annotation.

project(camera)

Project bounding boxes into a camera and get back 2D bounding boxes in the frustum.

camera: Camera

The Camera instance to project into.

NotImplementedError

Unconditionally.

render(image, camera, line_thickness=2, font_scale=0.5)

Render the 3D boxes in this annotation onto the image, in place.

image: np.uint8

Image (H, W, C) to render the bounding box onto. We assume the input image is in RGB format. Element type must be uint8.

camera: dgp.utils.camera.Camera

Camera used to render the bounding box.

line_thickness: int, optional

Thickness of bounding box lines. Default: 2.

font_scale: float, optional

Font scale used in text labels. Default: 0.5.

ValueError

Raised if image is not a 3-channel uint8 numpy array.

TypeError

Raised if camera is not an instance of Camera.

save(save_dir)

Serialize the Annotation object and save it to the specified directory. Annotations are saved as <save_dir>/<sha>.<ext>

save_dir: str

A pathname to a directory to save the annotation object into.

output_annotation_file: str

Full path to saved annotation

property sizes
to_proto()

Return annotation as pb object.

BoundingBox3DAnnotations

Annotation as defined in proto/annotations.proto

dgp.annotations.depth_annotation module

class dgp.annotations.depth_annotation.DenseDepthAnnotation(depth)

Bases: Annotation

Container for per-pixel depth annotation.

depth: np.ndarray

2D numpy float array that stores per-pixel depth.

property depth
property hexdigest

Reproducible hash of annotation.

classmethod load(annotation_file, ontology=None)

Loads the annotation from file into a canonical format for consumption in the __getitem__ function of BaseDataset.

annotation_file: str

Full path to NPZ file that stores 2D depth array.

ontology: None

Dummy ontology argument to meet the usage in BaseDataset.load_annotation().

render()

TODO: Rendering function for per-pixel depth.

save(save_dir)

Serialize the annotation object if possible, and save it to the specified directory. Annotations are saved as <save_dir>/<sha>.<ext>

save_dir: str

Path to the directory in which to save the annotation

pointcloud_path: str

Full path to the output NPZ file.

dgp.annotations.key_point_2d_annotation module

class dgp.annotations.key_point_2d_annotation.KeyPoint2DAnnotationList(ontology, pointlist)

Bases: Annotation

Container for 2D keypoint annotations.

ontology: KeyPointOntology

Ontology for 2D keypoint tasks.

pointlist: list[KeyPoint2D]

List of KeyPoint2D objects. See dgp/utils/structures/key_point_2d for more details.

property attributes

Return a list of dictionaries of attribute name to value.

property class_ids

Return class ID for each point, with ontology applied: 0 is background, class IDs mapped to a contiguous set.

property hexdigest

Reproducible hash of annotation.

property instance_ids
classmethod load(annotation_file, ontology)

Load annotation from annotation file and ontology.

annotation_file: str or bytes

Full path to annotation or bytestring

ontology: KeyPointOntology

Ontology for 2D keypoint tasks.

KeyPoint2DAnnotationList

Annotation object instantiated from file.

render()

TODO: Batch rendering function for keypoints.

save(save_dir)

Serialize Annotation object and saved to specified directory. Annotations are saved in format <save_dir>/<sha>.<ext>

save_dir: str

Directory in which annotation is saved.

output_annotation_file: str

Full path to saved annotation

to_proto()

Return annotation as pb object.

KeyPoint2DAnnotations

Annotation as defined in proto/annotations.proto

property xy

Return points as (N, 2) np.ndarray in format ([x, y])

dgp.annotations.ontology module

class dgp.annotations.ontology.AgentBehaviorOntology(ontology_pb2)

Bases: BoundingBoxOntology

Agent behavior ontologies derive directly from bounding box ontologies

class dgp.annotations.ontology.BoundingBoxOntology(ontology_pb2)

Bases: Ontology

Implements lookup tables specific to 2D bounding box tasks.

ontology_pb2: [OntologyV1Pb2,OntologyV2Pb2]

Deserialized ontology object.

property class_id_to_contiguous_id
property class_names
property contiguous_id_colormap
property contiguous_id_to_class_id
property contiguous_id_to_name
property name_to_contiguous_id
property num_classes
property thing_class_ids
class dgp.annotations.ontology.InstanceSegmentationOntology(ontology_pb2)

Bases: BoundingBoxOntology

Instance segmentation ontologies derive directly from bounding box ontologies

class dgp.annotations.ontology.KeyLineOntology(ontology_pb2)

Bases: BoundingBoxOntology

Keyline ontologies derive directly from bounding box ontologies

class dgp.annotations.ontology.KeyPointOntology(ontology_pb2)

Bases: BoundingBoxOntology

Keypoint ontologies derive directly from bounding box ontologies

class dgp.annotations.ontology.Ontology(ontology_pb2)

Bases: object

Ontology object. At bare minimum, we expect ontologies to provide:

ID: (int) identifier for class
Name: (str) string identifier for class
Color: (tuple) RGB color tuple

Based on the task, additional fields may be populated. Refer to dataset.proto and ontology.proto specifications for more details. Can be constructed from file or from deserialized proto object.

ontology_pb2: [OntologyV1Pb2,OntologyV2Pb2]

Deserialized ontology object.

VOID_CLASS = 'Void'
VOID_ID = 255
property class_ids
property class_names
property colormap
property hexdigest

Hash object

property id_to_name
property isthing
classmethod load(ontology_file)

Construct an ontology from an ontology JSON.

ontology_file: str

Path to ontology JSON

FileNotFoundError

Raised if ontology_file does not exist.

Exception

Raised if we could not open the ontology file for some reason.

property name_to_id
property num_classes
save(save_dir)

Write out ontology items to <sha>.json. SHA generated from Ontology proto object.

save_dir: str

Directory in which to save serialized ontology.

output_ontology_file: str

Path to serialized ontology file.

to_proto()

Serialize ontology. Only supports exporting in OntologyV2.

OntologyV2Pb2

Serialized ontology

class dgp.annotations.ontology.SemanticSegmentationOntology(ontology_pb2)

Bases: Ontology

Implements lookup tables for semantic segmentation

ontology_pb2: [OntologyV1Pb2,OntologyV2Pb2]

Deserialized ontology object.

property class_id_to_contiguous_id
property contiguous_id_colormap
property contiguous_id_to_class_id
property contiguous_id_to_name
property label_lookup
property name_to_contiguous_id
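A label lookup like the one exposed by `label_lookup` can be sketched as a flat table indexed by raw class ID; unmapped IDs falling back to `VOID_ID` (255, per `Ontology.VOID_ID` above) is an assumption of this sketch, not confirmed DGP behavior. DGP works on numpy arrays; pure-Python lists are used here to stay self-contained:

```python
VOID_ID = 255  # matches Ontology.VOID_ID in this module

def build_label_lookup(class_id_to_contiguous_id, table_size=256):
    """Build a flat lookup table; unmapped IDs fall back to VOID_ID (assumption)."""
    table = [VOID_ID] * table_size
    for class_id, contiguous_id in class_id_to_contiguous_id.items():
        table[class_id] = contiguous_id
    return table

def remap_label_image(label_image, table):
    """Apply the lookup to every pixel of a 2D label image (list of rows)."""
    return [[table[p] for p in row] for row in label_image]

table = build_label_lookup({7: 0, 21: 1})
out = remap_label_image([[7, 21], [3, 7]], table)
assert out == [[0, 1], [255, 0]]
```

A vectorized version would simply index a numpy table array with the raw label image.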

dgp.annotations.panoptic_segmentation_2d_annotation module

class dgp.annotations.panoptic_segmentation_2d_annotation.PanopticSegmentation2DAnnotation(ontology, panoptic_image, index_to_label, panoptic_image_dtype=<class 'numpy.uint16'>)

Bases: Annotation

Container for 2D panoptic segmentation annotations

ontology: dgp.annotations.BoundingBoxOntology

Bounding box ontology that will be used to load annotations

panoptic_image: np.uint16 array

Single-channel image with value at [i, j] corresponding to the instance ID of the object the pixel belongs to for thing pixels and the class ID for stuff pixels. Shape (H, W)

index_to_label: dict[str->Union[dict, List[dict]]]

Maps each class name to either:

  1. If the class is stuff: a single dict (there is only one, potentially empty, segment in an image for each stuff class) with fields:

    'index': int
    'attributes': dict

  2. If the class is thing: a list of such dicts, one for each instance of the thing class.

For example, an entry in annotation['index_to_label']['Car'], which is a list, can look like:

{
    'index': 21,
    'attributes': {
        'EmergencyVehicle': 'No',
        'IsTowing': 'No'
    }
}

Then if we load the image at image_uri, we would expect all pixels with value 21 to belong to this one instance of the 'Car' class.

panoptic_image_dtype: type, default: np.uint16

Numpy data type (e.g. np.uint16, np.uint32, etc.) of panoptic image.

For now only using this annotation object for instance segmentation, so a BoundingBoxOntology is sufficient

In the future, we probably want to wrap a panoptic annotation into a PanopticSegmentation2DAnnotationPB(panoptic_image=<image_path>, index_to_label=<json_path>) proto message and then we can .load from this proto (and serialize to it in .save).

For now we simply assume, by convention, that a JSON index_to_label file exists along with the panoptic_image file, and in this way stay a bit more flexible about what the PanopticSegmentation2DAnnotationPB object should look like (e.g. if index_to_label is defined as a proto message).

DEFAULT_PANOPTIC_IMAGE_DTYPE

alias of uint16

property class_ids
List[int]

Contiguous class ID for each instance in panoptic annotation

property class_names
List[str]

Class name for each instance in panoptic annotation

classmethod from_masklist(masklist, ontology, mask_shape=None, panoptic_image_dtype=<class 'numpy.uint16'>)

Instantiate PanopticSegmentation2DAnnotation from a list of InstanceMask2D.

CAVEAT: This constructs an instance segmentation annotation, not a panoptic annotation. In the following example:

    annotation_1 = PanopticSegmentation2DAnnotation.load(PANOPTIC_LABEL_IMAGE, ontology)
    annotation_2 = PanopticSegmentation2DAnnotation.from_masklist(annotation_1.masklist, ontology)

  • all pixels of "stuff" classes in annotation_1.panoptic_image are replaced with ontology.VOID_ID in annotation_2.panoptic_image, and

  • all "stuff" classes in annotation_1.index_to_label are removed in annotation_2.index_to_label.

masklist: list[InstanceMask2D]

Instance masks used to create an annotation object.

ontology: dgp.annotations.BoundingBoxOntology

Bounding box ontology used to load annotations.

mask_shape: list[int]

Height and width of the mask. Only used to create an empty panoptic image when masklist is empty.

panoptic_image_dtype: type, optional

Numpy data type (e.g. np.uint16, np.uint32, etc) of panoptic image. Default: np.uint16.

property hexdigest

Reproducible hash of annotation.

property instance_ids
List[int]

Instance IDs for each instance in panoptic annotation

property instances
np.ndarray:

(N, H, W) bool array for each instance in panoptic annotation. N is the number of instances; H, W are the height and width of the image.

classmethod load(annotation_file, ontology, panoptic_image_dtype=<class 'numpy.uint16'>)

Loads the annotation from file into a canonical format for consumption in the __getitem__ function of BaseDataset. The format/data structure for annotations will vary based on task.

annotation_file: str

Full path to panoptic image. index_to_label JSON is expected to live at the same path with ‘.json’ ending

ontology: Ontology

Ontology for given annotation

panoptic_image_dtype: type, optional

Numpy data type (e.g. np.uint16, np.uint32, etc) of panoptic image. Default: np.uint16.

property masklist
property panoptic_image_dtype
parse_panoptic_image()

Parses self.panoptic_image to produce instance_masks, class_names, and instance_ids

instance_masks: list[InstanceMask2D]

Instance mask for each instance in panoptic annotation.

ValueError

Raised if an instance ID, parsed from a label, is negative.
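The core of parsing a panoptic image is comparing pixel values against each 'index' in index_to_label. A pure-Python sketch of the idea (DGP operates on numpy arrays and produces InstanceMask2D objects; dict-keyed boolean masks are used here to stay self-contained):

```python
def parse_panoptic(panoptic_image, index_to_label):
    """Return {(class_name, index): boolean mask} for every labeled segment.

    `panoptic_image` is a 2D list of ints; `index_to_label` maps a class name
    to a dict (stuff) or a list of dicts (thing instances), each with an
    'index' key, mirroring the structure documented above.
    """
    masks = {}
    for name, label in index_to_label.items():
        entries = label if isinstance(label, list) else [label]
        for entry in entries:
            idx = entry["index"]
            masks[(name, idx)] = [[p == idx for p in row] for row in panoptic_image]
    return masks

image = [[21, 21], [4, 21]]
index_to_label = {
    "Car": [{"index": 21, "attributes": {"EmergencyVehicle": "No"}}],
    "Road": {"index": 4, "attributes": {}},
}
masks = parse_panoptic(image, index_to_label)
assert masks[("Car", 21)] == [[True, True], [False, True]]
assert masks[("Road", 4)] == [[False, False], [True, False]]
```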

render()

TODO: Return a rendering of the annotation

save(save_dir, datum=None)

Serialize Annotation object and save into a specified datum.

save_dir: str

If datum is given, then annotations will be saved to <save_dir>/<datum.id.name>/<hexdigest>.{png,json}. Otherwise, annotations will be saved to <save_dir>/<hexdigest>.{png,json}.

datum: dgp.proto.sample_pb2.Datum

Datum to which we will append annotation

panoptic_image_path: str

Full path to the output panoptic image file.

dgp.annotations.semantic_segmentation_2d_annotation module

class dgp.annotations.semantic_segmentation_2d_annotation.SemanticSegmentation2DAnnotation(ontology, segmentation_image)

Bases: Annotation

Container for semantic segmentation annotation.

ontology: SemanticSegmentationOntology

Ontology for semantic segmentation tasks.

segmentation_image: np.array

Numpy uint8 array encoding segmentation labels.

property hexdigest

Reproducible hash of annotation.

property label
classmethod load(annotation_file, ontology)

Load annotation from annotation file and ontology.

annotation_file: str or bytes

Full path to annotation or bytestring

ontology: SemanticSegmentationOntology

Ontology for semantic segmentation tasks.

SemanticSegmentation2DAnnotation

Annotation object instantiated from file.

render()

TODO: Rendering function for semantic segmentation images.

save(save_dir)

Serialize the Annotation object and save it to the specified directory. Annotations are saved as <save_dir>/<sha>.<ext>

save_dir: str

Directory in which annotation is saved.

output_annotation_file: str

Full path to saved annotation

dgp.annotations.transform_utils module

dgp.annotations.transform_utils.construct_remapped_ontology(ontology, lookup, annotation_key)

Given an Ontology object and a lookup from old class names to new class names, construct an ontology proto for the resulting new ontology

ontology: dgp.annotations.Ontology

Ontology we are trying to remap using lookup, e.g. ontology.id_to_name = {0: 'Car', 1: 'Truck', 2: 'Motorcycle'}

lookup: dict

Lookup from old class names to new class names e.g.:

{
    'Car': 'Car',
    'Truck': 'Car',
    'Motorcycle': 'Motorcycle'
}

NOTE: lookup needs to be exhaustive; any classes that the user wants to have in returned ontology need to be remapped explicitly

annotation_key: str

Annotation key of Ontology e.g. bounding_box_2d

remapped_ontology_pb2: dgp.proto.ontology_pb2.Ontology

Ontology defined by applying lookup on original ontology

NOTE: This is constructed by iterating over class names in lookup.keys() in alphabetical order, so if both ‘Car’ and ‘Motorcycle’ get remapped to ‘DynamicObject’, the color for ‘DynamicObject’ will be the original color for ‘Car’

Any class names not in lookup are dropped

This could be a class function of Ontology
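The alphabetical-iteration behavior noted above can be sketched in plain Python (the ontology is reduced to a name-to-color dict here; DGP returns a proto):

```python
def remap_ontology(name_to_color, lookup):
    """Build new-class -> color by iterating old class names alphabetically,
    so the first alphabetical source class donates its color, and classes
    absent from `lookup` are dropped."""
    remapped = {}
    for old_name in sorted(lookup):
        new_name = lookup[old_name]
        remapped.setdefault(new_name, name_to_color[old_name])
    return remapped

colors = {"Car": (0, 0, 255), "Motorcycle": (255, 0, 0), "Truck": (0, 255, 0)}
lookup = {"Car": "DynamicObject", "Motorcycle": "DynamicObject"}
out = remap_ontology(colors, lookup)
# 'Car' precedes 'Motorcycle' alphabetically, so DynamicObject keeps Car's
# color; 'Truck' is not in the lookup and is dropped.
assert out == {"DynamicObject": (0, 0, 255)}
```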

dgp.annotations.transform_utils.remap_bounding_box_annotations(bounding_box_annotations, lookup_table, original_ontology, remapped_ontology)
bounding_box_annotations: BoundingBox2DAnnotationList or BoundingBox3DAnnotationList

Annotations to remap

lookup_table: dict

Lookup from old class names to new class names e.g.:

{
    'Car': 'Car',
    'Truck': 'Car',
    'Motorcycle': 'Motorcycle'
}

original_ontology: BoundingBoxOntology

Ontology we are remapping annotations from

remapped_ontology: BoundingBoxOntology

Ontology we are mapping annotations to

remapped_bounding_box_annotations: BoundingBox2DAnnotationList or BoundingBox3DAnnotationList

Remapped annotations of the same type as bounding_box_annotations
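Per-box, the remapping amounts to translating each box's class name through the lookup table and re-encoding it with the new ontology's IDs. A simplified sketch (boxes reduced to dicts; dropping boxes whose class is not in the lookup is an assumption of this sketch, consistent with the lookup being exhaustive):

```python
def remap_boxes(boxes, lookup, new_name_to_id):
    """Remap each box's class name through `lookup` and assign the new
    ontology's class ID; boxes with unmapped classes are dropped."""
    remapped = []
    for box in boxes:
        new_name = lookup.get(box["class_name"])
        if new_name is None:
            continue  # class not in the (exhaustive) lookup
        remapped.append({**box, "class_name": new_name,
                         "class_id": new_name_to_id[new_name]})
    return remapped

boxes = [{"class_name": "Truck", "ltrb": [0, 0, 5, 5]},
         {"class_name": "Bus", "ltrb": [1, 1, 2, 2]}]
lookup = {"Car": "Car", "Truck": "Car"}
out = remap_boxes(boxes, lookup, {"Car": 0})
assert out == [{"class_name": "Car", "ltrb": [0, 0, 5, 5], "class_id": 0}]
```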

dgp.annotations.transform_utils.remap_instance_segmentation_2d_annotation(instance_segmentation_annotation, lookup_table, original_ontology, remapped_ontology)
instance_segmentation_annotation: PanopticSegmentation2DAnnotation

Annotation to remap

lookup_table: dict

Lookup from old class names to new class names e.g.:

{
    'Car': 'Car',
    'Truck': 'Car',
    'Motorcycle': 'Motorcycle'
}

original_ontology: InstanceSegmentationOntology

Ontology we are remapping annotation from

remapped_ontology: InstanceSegmentationOntology

Ontology we are mapping annotation to

PanopticSegmentation2DAnnotation:

Remapped annotation

dgp.annotations.transform_utils.remap_semantic_segmentation_2d_annotation(semantic_segmentation_annotation, lookup_table, original_ontology, remapped_ontology)
semantic_segmentation_annotation: SemanticSegmentation2DAnnotation

Annotation to remap

lookup_table: dict

Lookup from old class names to new class names e.g.:

{
    'Car': 'Car',
    'Truck': 'Car',
    'Motorcycle': 'Motorcycle'
}

original_ontology: SemanticSegmentationOntology

Ontology we are remapping annotation from

remapped_ontology: SemanticSegmentationOntology

Ontology we are mapping annotation to

remapped_semantic_segmentation_2d_annotation: SemanticSegmentation2DAnnotation

Remapped annotation

dgp.annotations.transforms module

class dgp.annotations.transforms.AddLidarCuboidPoints(subsample: int = 1)

Bases: BaseTransform

Populate the num_points field for bounding_box_3d

transform_datum(datum: Dict[str, Any]) → Dict[str, Any]

Populate the num_points field for bounding_box_3d.

datum: Dict[str, Any]

A dgp lidar or point cloud datum. Must contain keys bounding_box_3d and point_cloud.

datum: Dict[str,Any]

The datum with num_points added to the cuboids

class dgp.annotations.transforms.BaseTransform

Bases: object

Base transform class that other transforms should inherit from. Simply ensures that the input to __call__ is an OrderedDict (in general usage this dict will include keys such as 'rgb', 'bounding_box_2d', etc., i.e. raw data and annotations).

cf. OntologyMapper for an example

transform(data)
data: OrderedDict or list[list[OrderedDict]]

Dataset item as returned by _SynchronizedDataset or _FrameDataset.

OrderedDict or list[list[OrderedDict]]:

Same type as the input, with transformations applied to the dataset item.

transform_datum(datum)
transform_sample(sample)
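A subclass typically overrides transform_datum and/or transform_sample and lets the base class dispatch on the input type. A minimal self-contained sketch of the pattern (the dispatch rule and the DropKey example are illustrative, not the DGP implementation):

```python
from collections import OrderedDict

class BaseTransform:
    """Dispatch __call__ to transform_datum for a single OrderedDict,
    or to transform_sample for a list of datums (illustrative)."""
    def __call__(self, data):
        if isinstance(data, OrderedDict):
            return self.transform_datum(data)
        return self.transform_sample(data)

    def transform_datum(self, datum):
        return datum  # identity by default

    def transform_sample(self, sample):
        return [self.transform_datum(d) for d in sample]

class DropKey(BaseTransform):
    """Example transform: remove one annotation key from each datum."""
    def __init__(self, key):
        self.key = key

    def transform_datum(self, datum):
        datum = OrderedDict(datum)  # copy so the input datum is untouched
        datum.pop(self.key, None)
        return datum

datum = OrderedDict(rgb="<image>", bounding_box_2d="<boxes>")
out = DropKey("bounding_box_2d")(datum)
assert list(out) == ["rgb"]
```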
class dgp.annotations.transforms.Compose(transforms)

Bases: object

Composes several transforms together.

transforms

List of transforms to compose; each must provide a __call__ method that takes in an OrderedDict.

Example:
>>> transforms.Compose([
>>>     transforms.CenterCrop(10),
>>>     transforms.ToTensor(),
>>> ])
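Functionally, Compose just applies each transform in order, feeding each output to the next; a minimal sketch:

```python
class Compose:
    """Apply a list of callables left-to-right to the same item."""
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, data):
        for t in self.transforms:
            data = t(data)
        return data

# Toy example with plain callables instead of dataset transforms.
double_then_inc = Compose([lambda x: x * 2, lambda x: x + 1])
assert double_then_inc(10) == 21
```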
class dgp.annotations.transforms.OntologyMapper(original_ontology_table, lookup_table, remapped_ontology_table=None)

Bases: BaseTransform

Maps ontologies based on a lookup_table. The remapped ontology is taken from remapped_ontology_table if provided; otherwise it is constructed automatically from the order of lookup_table.

original_ontology_table: dict[str->dgp.annotations.Ontology]

The original ontology table, with one Ontology object per annotation type, e.g.:

{
    "bounding_box_2d": BoundingBoxOntology[<ontology_sha>],
    "autolabel_model_1/bounding_box_2d": BoundingBoxOntology[<ontology_sha>],
    "semantic_segmentation_2d": SemanticSegmentationOntology[<ontology_sha>],
    "bounding_box_3d": BoundingBoxOntology[<ontology_sha>]
}

lookup_table: dict[str->dict]

Lookup table per annotation type for each of the classes the user wants to remap. Lookups are from old class name to new class name, e.g.:

{
    'bounding_box_2d': {
        'Car': 'Car',
        'Truck': 'Car',
        'Motorcycle': 'Motorcycle'
    }
}

remapped_ontology_table: dict[str->dgp.annotations.Ontology]

Ontology object per annotation type. If specified, the ontology will be remapped to the given remapped_ontology_table, e.g.:

{
    "bounding_box_2d": BoundingBoxOntology[<ontology_sha>],
    "autolabel_model_1/bounding_box_2d": BoundingBoxOntology[<ontology_sha>],
    "semantic_segmentation_2d": SemanticSegmentationOntology[<ontology_sha>],
    "bounding_box_3d": BoundingBoxOntology[<ontology_sha>]
}

SUPPORTED_ANNOTATION_TYPES = ('bounding_box_2d', 'semantic_segmentation_2d', 'bounding_box_3d', 'instance_segmentation_2d')
transform_datum(datum)
datum: OrderedDict

Dictionary containing raw data and annotations, with keys such as: 'rgb', 'intrinsics', 'bounding_box_2d'. All annotation keys in self.lookup_table (and self.remapped_ontology_table) are expected to be present.

datum: OrderedDict

Same dictionary but with annotations in self.lookup_table remapped to desired ontologies

ValueError

Raised if the datum to remap does not contain all expected annotations.

dgp.annotations.visibility_filter_transform module

class dgp.annotations.visibility_filter_transform.BoundingBox3DCoalescer(src_datum_names, dst_datum_name, drop_src_datums=True)

Bases: BaseTransform

Coalesce 3D bounding box annotations from multiple datums and use the result as the annotation of a target datum. The bounding boxes are brought into the target datum's frame.

src_datum_names: list[str]

List of datum names used to create a list of coalesced bounding boxes.

dst_datum_name: str

Datum whose bounding_box_3d is replaced by the coalesced bounding boxes.

drop_src_datums: bool, default: True

If True, then remove the source datums in the transformed sample.

transform_sample(sample)

Main entry point for coalescing 3D bounding boxes.

sample: list[OrderedDict]

Multimodal sample as returned by __getitem__() of _SynchronizedDataset.

new_sample: list[OrderedDict]

Multimodal sample with updated 3D bounding box annotations.

ValueError

Raised if there are multiple instances of the same kind of datum in a sample.

class dgp.annotations.visibility_filter_transform.InstanceMaskVisibilityFilter(camera_datum_names, min_mask_size=300, use_amodal_bbox2d_annotations=False)

Bases: BaseTransform

Given multi-modal camera data, select instances whose instance masks appear large enough in at least one camera.

For example, even when an object is mostly truncated in one camera, it will be included in the annotations if it looks large enough in a neighboring camera in the multimodal sample. In the transformed dataset item, all detection annotations (i.e. bounding_box_3d, bounding_box_2d, instance_segmentation_2d) contain a single set of instances.

camera_datum_names: list[str]

Names of camera datums to be used in visibility computation. An instance is considered "visible" if it has a large enough mask in at least one of these cameras.

min_mask_size: int, default: 300

Minimum number of foreground pixels in instance mask for an instance to be added to annotations.

use_amodal_bbox2d_annotations: bool, default: False

If True, then use the "amodal" bounding box (i.e. the box includes occluded/truncated parts) for the 2D bounding box annotation. If False, then use the "modal" bounding box (i.e. the tight bounding box of the instance mask).

transform_datum(datum)

Main entry point for filtering a single-modal datum using instance masks.

datum: OrderedDict

Single-modal datum as returned by __getitem__() of _FrameDataset.

new_datum: OrderedDict

Single-modal datum with all detection annotations filtered.

transform_sample(sample)

Main entry point for filtering a multimodal sample using instance masks.

sample: list[OrderedDict]

Multimodal sample as returned by __getitem__() of _SynchronizedDataset.

new_sample: list[OrderedDict]

Multimodal sample with all detection annotations filtered.

ValueError

Raised if a 2D or 3D bounding box instance lacks any required instance IDs.
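The visibility rule described above boils down to: keep an instance if its mask has at least min_mask_size foreground pixels in at least one camera. A simplified sketch with masks as 2D boolean lists (the data layout is an assumption for illustration; DGP uses InstanceMask2D objects and numpy arrays):

```python
def visible_instances(masks_per_camera, min_mask_size=300):
    """Return instance IDs whose mask is large enough in at least one camera.

    `masks_per_camera`: {camera_name: {instance_id: 2D bool mask}}.
    """
    keep = set()
    for masks in masks_per_camera.values():
        for instance_id, mask in masks.items():
            foreground = sum(sum(row) for row in mask)  # count True pixels
            if foreground >= min_mask_size:
                keep.add(instance_id)
    return keep

masks = {
    "camera_front": {1: [[True] * 20] * 20,   # 400 px: visible here
                     2: [[True] * 2] * 2},    # 4 px: too small here
    "camera_left":  {2: [[True] * 25] * 20},  # 500 px: visible here
}
# Instance 2 is kept because it is large enough in camera_left.
assert visible_instances(masks) == {1, 2}
```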

Module contents