evaluate_from_predictions
- maite.tasks.evaluate_from_predictions(*, metric, pred_batches, target_batches, metadata_batches)
Evaluate pre-calculated predictions against target (truth) data for some specified metric.
- Parameters:
- metric : SomeMetric
  Compatible MAITE Metric.
- pred_batches : Sequence[Sequence[SomeTargetType]]
  Sequence of batches of predictions generated by running inference on some model.
- target_batches : Sequence[Sequence[SomeTargetType]]
  Sequence of batches of ground-truth targets that correspond to the provided predictions argument.
- metadata_batches : Sequence[Sequence[SomeDatumMetadataType]]
  Sequence of batches of datum metadata corresponding to the provided predictions.
- Returns:
- metric_calculation : MetricComputeReturnType
  The resulting metric value.
- Raises:
- ValueError
  If the predictions or targets arguments have zero length (i.e., no batches), differing lengths, or corresponding elements (batches) with differing lengths.
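
The calling pattern and the documented validation rules can be sketched without MAITE itself. The sketch below uses a toy accuracy metric with an update/compute interface and a stand-in function (`evaluate_from_predictions_sketch`, `AccuracyMetric`, and the dict return value are illustrative assumptions, not the real MAITE protocol or implementation); the `ValueError` conditions mirror those listed above.

```python
class AccuracyMetric:
    """Toy classification metric with an update/compute interface (illustrative only)."""

    def __init__(self):
        self.correct = 0
        self.total = 0

    def update(self, pred_batch, target_batch):
        # Accumulate per-datum comparisons across batches.
        for pred, target in zip(pred_batch, target_batch):
            self.correct += int(pred == target)
            self.total += 1

    def compute(self):
        return {"accuracy": self.correct / self.total}


def evaluate_from_predictions_sketch(*, metric, pred_batches, target_batches, metadata_batches):
    # Mirror the documented ValueError conditions: no batches,
    # differing numbers of batches, or mismatched batch sizes.
    if len(pred_batches) == 0 or len(target_batches) == 0:
        raise ValueError("pred_batches and target_batches must be non-empty")
    if len(pred_batches) != len(target_batches):
        raise ValueError("pred_batches and target_batches must have the same length")
    for pred_batch, target_batch in zip(pred_batches, target_batches):
        if len(pred_batch) != len(target_batch):
            raise ValueError("corresponding batches must have the same length")
        metric.update(pred_batch, target_batch)
    return metric.compute()


# Two batches of two predictions each; one prediction is wrong.
result = evaluate_from_predictions_sketch(
    metric=AccuracyMetric(),
    pred_batches=[[0, 1], [1, 1]],
    target_batches=[[0, 1], [0, 1]],
    metadata_batches=[[{"id": 0}, {"id": 1}], [{"id": 2}, {"id": 3}]],
)
print(result)  # {'accuracy': 0.75}
```

Note that the metadata batches are accepted but unused by this toy metric; metrics that weight or filter datums by metadata would consume them.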