models.yolo.YoloObjectDetector
- class maite.interop.models.yolo.YoloObjectDetector(model, metadata, yolo_inference_args=None)
MAITE-wrapped object detection YOLO model.
A wrapped YOLO model that adheres to MAITE protocols. This wrapped model is intended to be used as-is for basic object detection use cases.
Notes
Only Ultralytics YOLOv5 and YOLOv8 models are currently supported.
Examples
Import relevant Python libraries.
>>> import numpy as np
>>> from typing_extensions import Sequence
>>> from ultralytics import YOLO
>>> from maite.interop.models.yolo import YoloObjectDetector
>>> from maite.protocols import ModelMetadata
>>> from maite.protocols import object_detection as od
Load an Ultralytics-hosted YOLOv5 model, ‘yolov5nu’,
>>> yolov5_model = YOLO("yolov5nu")
>>> metadata = ModelMetadata(id="YOLOv5nu", index2label=yolov5_model.names)
>>> wrapped_yolov5_model = YoloObjectDetector(yolov5_model, metadata)
or, load an Ultralytics YOLOv5 model from a local filepath.
>>> yolov5_model = YOLO("./yolov5nu.pt")
>>> metadata = ModelMetadata(id="YOLOv5nu", index2label=yolov5_model.names)
>>> wrapped_yolov5_model = YoloObjectDetector(yolov5_model, metadata)
Load an Ultralytics-hosted YOLOv8 model, ‘yolov8n’,
>>> yolov8_model = YOLO("yolov8n")
>>> metadata = ModelMetadata(id="YOLOv8n", index2label=yolov8_model.names)
>>> wrapped_yolov8_model = YoloObjectDetector(yolov8_model, metadata)
or, load an Ultralytics YOLOv8 model from a local filepath.
>>> yolov8_model = YOLO("./yolov8n.pt")
>>> metadata = ModelMetadata(id="YOLOv8n", index2label=yolov8_model.names)
>>> wrapped_yolov8_model = YoloObjectDetector(yolov8_model, metadata)
Perform object detection inference with the model.
>>> N_DATAPOINTS = 5  # datapoints in dataset
>>> C = 3  # number of color channels
>>> H = 5  # img height
>>> W = 6  # img width
>>> batch_data: Sequence[od.InputType] = list(np.random.rand(N_DATAPOINTS, C, H, W))
>>> model_results: Sequence[od.TargetType] = wrapped_yolov8_model(batch_data)
>>> print(model_results)
[ObjectDetectionTargets(boxes=array([], shape=(0, 4), dtype=float32), labels=array([], dtype=uint8), scores=array([], dtype=float32)),
 ObjectDetectionTargets(boxes=array([], shape=(0, 4), dtype=float32), labels=array([], dtype=uint8), scores=array([], dtype=float32)),
 ObjectDetectionTargets(boxes=array([], shape=(0, 4), dtype=float32), labels=array([], dtype=uint8), scores=array([], dtype=float32)),
 ObjectDetectionTargets(boxes=array([], shape=(0, 4), dtype=float32), labels=array([], dtype=uint8), scores=array([], dtype=float32)),
 ObjectDetectionTargets(boxes=array([], shape=(0, 4), dtype=float32), labels=array([], dtype=uint8), scores=array([], dtype=float32))]
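Each returned target exposes boxes, labels, and scores fields, as shown in the printed output above. A minimal sketch of inspecting a single result (the box coordinate convention depends on the underlying model):
>>> detection = model_results[0]
>>> detection.boxes.shape  # one row of four box coordinates per detected object
(0, 4)
>>> len(detection.labels) == len(detection.scores)  # one label and one score per box
True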
- Parameters:
- model : YOLO | AutoShape
  A loaded Ultralytics YOLO model. The model can be loaded via the ultralytics or yolov5 Python library. Models must be either YOLOv5 or YOLOv8 and must be designed for the object detection task.
- metadata : ModelMetadata
  A typed dictionary containing at least an ‘id’ field of type str.
- yolo_inference_args : Optional[dict[str, Any]], (default=None)
  Additional keyword arguments for configuring the model’s prediction process (see the example below). These arguments (such as verbose, conf, and device for YOLOv8) will be passed at inference time to the underlying native model. For ultralytics-loaded models, refer to the Ultralytics Docs for allowed keyword arguments. For yolov5-loaded legacy models, refer to the YOLOv5 model source code (https://github.com/ultralytics/yolov5/blob/30e4c4f09297b67afedf8b2bcd851833ddc9dead/models/common.py#L243-L252) for allowed keyword arguments, as stated in the Ultralytics YOLOv5 Docs.
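For example, a hedged sketch of supplying inference-time arguments when constructing the wrapper, reusing the yolov8_model, metadata, and batch_data objects from the Examples above (the conf and verbose values are illustrative; the accepted keys depend on the underlying Ultralytics version):
>>> inference_args = {"conf": 0.25, "verbose": False}  # illustrative YOLOv8 prediction arguments
>>> wrapped_model = YoloObjectDetector(yolov8_model, metadata, yolo_inference_args=inference_args)
>>> model_results = wrapped_model(batch_data)  # arguments are forwarded to the underlying native model at inference time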