Inference Explorer

The Inference Explorer tab offers a detailed visual and tabular exploration of all predictions made during a validation run. It is primarily used to assess individual segmentation outcomes and compare them against available ground truth annotations for each study.


Model and Dataset Cards

At the top of the page, two summary cards provide metadata on the model and dataset involved in the selected validation:

  • Model Card: Displays details such as model name, version, modality, anatomy, class mapping (see the example after this list), framework (e.g., PyTorch), and deployment status. The date of the last validation run is also shown.
  • Dataset Card: Lists the dataset name, source institution, image modality, format (e.g., DICOM), annotation ID, dataset size, and number of studies included in the validation run.
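
The class mapping shown on the Model Card associates each output label index with an anatomical structure name. As an illustrative sketch (the specific structures and indices below are assumptions, not taken from any particular model), it might look like this:

```python
# Hypothetical class mapping: output label index -> anatomical structure.
# The actual structures and indices depend on the model being validated.
CLASS_MAPPING = {
    0: "background",
    1: "liver",
    2: "spleen",
    3: "left_kidney",
    4: "right_kidney",
}
```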

Prediction Table

Below the summary cards, a table lists all studies processed in the validation. Columns include:

  • Transaction ID
  • Status (e.g., Completed)
  • Study ID
  • Modality
  • SOP Classes
  • Instance / Series count
  • Tags and Source Tags
Each row represents a unique prediction result tied to a specific study.
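
For orientation, one row of the table can be thought of as a record along the following lines. The field names here are purely illustrative and do not reflect the platform's API:

```python
# Hypothetical shape of one prediction-table row, mirroring the columns above.
prediction_row = {
    "transaction_id": "txn-0042",              # Transaction ID
    "status": "Completed",                     # Status
    "study_id": "1.2.840.113619.2.55.3",       # Study ID (example DICOM Study Instance UID)
    "modality": "MR",                          # Modality
    "sop_classes": ["MR Image Storage"],       # SOP Classes
    "instance_count": 120,                     # Instance count
    "series_count": 4,                         # Series count
    "tags": ["validation"],                    # Tags
    "source_tags": ["site-A"],                 # Source Tags
}
```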


Study Viewer

Clicking a row expands a dual-panel viewer:

  • Left Panel: Displays the predicted segmentation result as a color overlay (see the sketch after this list). Each anatomical structure is listed with toggleable visibility.
  • Right Panel: Shows the original medical image (e.g., MRI or CT scan) without overlay.
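
Conceptually, the left panel's color overlay corresponds to blending a label mask over the image slice, with hidden structures masked out. A minimal matplotlib sketch of that idea, using placeholder arrays in place of a real slice and prediction (an illustration only, not the viewer's implementation):

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder data: a 2-D image slice and an integer label mask of the same shape.
image_slice = np.random.rand(256, 256)                 # stands in for an MRI/CT slice
label_mask = np.random.randint(0, 5, size=(256, 256))  # stands in for a predicted segmentation

plt.imshow(image_slice, cmap="gray")
# Mask out the background class (label 0) so only anatomical structures are colored;
# toggling a structure off in the viewer is analogous to masking out its label here.
plt.imshow(np.ma.masked_where(label_mask == 0, label_mask),
           cmap="tab10", alpha=0.4, interpolation="nearest")
plt.axis("off")
plt.show()
```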

Additional options include:

  • Open in the Playground: Launches the study in the Playground module for advanced interaction.
  • Select Annotation: Allows you to load ground truth annotations, enabling side-by-side comparison with model predictions.

This interface is especially useful for manual error inspection, class-wise quality checks, and visual validation.
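
When a ground truth annotation is loaded via Select Annotation, the class-wise quality checks seen in the viewer can also be quantified with a per-structure Dice coefficient. A minimal sketch, assuming both segmentations are available as integer label volumes of the same shape and reusing the hypothetical class mapping from above:

```python
import numpy as np

def dice_per_class(pred: np.ndarray, truth: np.ndarray, labels: dict) -> dict:
    """Dice coefficient per labeled structure (1.0 = perfect overlap)."""
    scores = {}
    for label, name in labels.items():
        if label == 0:
            continue  # skip background
        p, t = pred == label, truth == label
        denom = p.sum() + t.sum()
        scores[name] = 2.0 * np.logical_and(p, t).sum() / denom if denom else float("nan")
    return scores

# Placeholder volumes standing in for the model prediction and the ground truth.
labels = {0: "background", 1: "liver", 2: "spleen"}
pred = np.random.randint(0, 3, size=(64, 64, 64))
truth = np.random.randint(0, 3, size=(64, 64, 64))
print(dice_per_class(pred, truth, labels))  # prints a dict of per-structure scores
```

Structures with very low scores are natural candidates for the manual, study-by-study inspection this tab provides.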


Key Use Cases

  • Review segmentation results per study
  • Compare model output with ground truth
  • Navigate predictions within a validation batch
  • Identify and isolate underperforming predictions visually

✅ Tip: Use this page to spot edge cases or labeling inconsistencies by visually comparing automatic segmentations against the ground truth.