evaluate_from_predictions

maite.tasks.evaluate_from_predictions(*, metric, predictions, targets)

Evaluate pre-computed predictions against ground-truth targets using a specified metric.

Parameters:
metric : SomeMetric

Compatible MAITE Metric.

predictions : Sequence[Sequence[SomeTargetType]]

Sequence of batches of predictions generated by running inference with some model.

targets : Sequence[Sequence[SomeTargetType]]

Sequence of batches of ground-truth targets that correspond to the provided predictions argument.

Returns:
metric_calculation : MetricComputeReturnType

The resulting metric value.

Raises:
InvalidArgument

If the predictions or targets arguments have zero length (i.e. no batches), if their lengths differ, or if corresponding elements (batches) have differing lengths.
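
Example:

A minimal illustrative sketch, not taken from the MAITE documentation. SimpleAccuracy is a hypothetical stand-in for any object satisfying the MAITE Metric protocol, assumed here to expose update, compute, and reset, with update called once per batch; each prediction and target is assumed to be a 1-D array of per-class scores, and the printed value assumes compute returns a plain dict.

>>> import numpy as np
>>> from maite.tasks import evaluate_from_predictions
>>> class SimpleAccuracy:
...     """Toy metric: top-1 accuracy over per-class score arrays (hypothetical)."""
...     def __init__(self):
...         self._correct, self._total = 0, 0
...     def reset(self):
...         self._correct, self._total = 0, 0
...     def update(self, preds, targets):
...         # Called once per batch; compare the argmax of each prediction/target pair.
...         for p, t in zip(preds, targets):
...             self._correct += int(np.argmax(p) == np.argmax(t))
...             self._total += 1
...     def compute(self):
...         return {"accuracy": self._correct / self._total}
>>> preds = [
...     [np.array([0.9, 0.1]), np.array([0.2, 0.8])],  # batch 1
...     [np.array([0.6, 0.4]), np.array([0.3, 0.7])],  # batch 2
... ]
>>> truth = [
...     [np.array([1.0, 0.0]), np.array([0.0, 1.0])],  # batch 1
...     [np.array([1.0, 0.0]), np.array([1.0, 0.0])],  # batch 2
... ]
>>> evaluate_from_predictions(metric=SimpleAccuracy(), predictions=preds, targets=truth)
{'accuracy': 0.75}

Note that predictions and targets are sequences of batches, not flat sequences of examples; both must contain the same number of batches, and corresponding batches must have matching lengths, or InvalidArgument is raised.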