evaluate_from_predictions

maite.tasks.evaluate_from_predictions(*, metric, pred_batches, target_batches, metadata_batches)

Evaluate pre-calculated predictions against ground-truth target data using a specified metric.

Parameters:
metric : SomeMetric

Compatible MAITE Metric.

pred_batches : Sequence[Sequence[SomeTargetType]]

Sequence of batches of predictions generated by running inference on some model.

target_batches : Sequence[Sequence[SomeTargetType]]

Sequence of batches of ground-truth targets that correspond to the provided pred_batches argument.

metadata_batches : Sequence[Sequence[SomeDatumMetadataType]]

Sequence of batches of datum-level metadata corresponding to the provided predictions and targets.

Returns:
metric_calculation : MetricComputeReturnType

The resulting metric value.

Raises:
ValueError

If the pred_batches or target_batches arguments have zero length (i.e., contain no batches), if their lengths differ, or if corresponding batches have differing lengths.
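
Example:

A minimal sketch, not canonical MAITE usage: the toy Accuracy class below assumes the update/compute/reset pattern of MAITE's Metric protocol, and the label-index batches and {"id": ...} metadata dicts are hypothetical stand-ins for your task's target and metadata types.

>>> from maite.tasks import evaluate_from_predictions
>>> class Accuracy:
...     """Toy metric that follows the update/compute/reset pattern."""
...     def __init__(self):
...         self._correct = 0
...         self._total = 0
...     def update(self, preds, targets):
...         # Accumulate statistics one batch at a time.
...         self._correct += sum(int(p == t) for p, t in zip(preds, targets))
...         self._total += len(targets)
...     def compute(self):
...         return {"accuracy": self._correct / self._total}
...     def reset(self):
...         self._correct = 0
...         self._total = 0
>>> pred_batches = [[0, 1], [1, 1]]    # two batches of model predictions
>>> target_batches = [[0, 1], [0, 1]]  # matching ground-truth batches
>>> metadata_batches = [[{"id": 0}, {"id": 1}], [{"id": 2}, {"id": 3}]]
>>> result = evaluate_from_predictions(
...     metric=Accuracy(),
...     pred_batches=pred_batches,
...     target_batches=target_batches,
...     metadata_batches=metadata_batches,
... )

Assuming the function returns the metric's compute() output directly, result here would be {'accuracy': 0.75}, since three of the four predictions match their targets.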