Validation Summary
The Validation Summary tab provides a centralized dashboard to track the performance of AI models across datasets. It displays key statistics, a searchable and filterable table of validation runs, and a guided workflow to start new validations.
1. Key Metrics Overview
At the top of the page, high-level statistics offer quick insights into validation activity:
- In Progress: Number of currently running validations
- Completed Runs: Total number of completed validation runs
- Model Counts: Number of distinct models used across validation runs
- Dataset Counts: Number of distinct datasets used across all runs
- Most Used Model: The model used in the most validation runs
- Most Used Dataset: The dataset selected most frequently for validation
These metrics help assess overall coverage, reuse, and workload distribution.
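For illustration, the sketch below shows how these statistics could be derived from a list of run records. The field names (`model`, `dataset`, `status`) are assumptions made for the example, not the platform's actual schema.

```python
from collections import Counter

# Hypothetical run records; field names are illustrative only.
runs = [
    {"model": "unet-v2",    "dataset": "liver-ct",  "status": "Completed"},
    {"model": "unet-v2",    "dataset": "brain-mri", "status": "In Progress"},
    {"model": "resnet-cls", "dataset": "liver-ct",  "status": "Completed"},
]

in_progress    = sum(r["status"] == "In Progress" for r in runs)
completed_runs = sum(r["status"] == "Completed" for r in runs)
model_count    = len({r["model"] for r in runs})    # distinct models
dataset_count  = len({r["dataset"] for r in runs})  # distinct datasets
most_used_model   = Counter(r["model"] for r in runs).most_common(1)[0][0]
most_used_dataset = Counter(r["dataset"] for r in runs).most_common(1)[0][0]

print(in_progress, completed_runs, model_count, dataset_count,
      most_used_model, most_used_dataset)
```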
2. Validation Runs Table
The main table lists all validation runs with the following details:
- Validation Name: User-defined label for the run
- Model Name and Model Type: Which model was used, and for what kind of task (e.g., segmentation)
- Progress: Real-time status such as Completed or In Progress
- Sample: Per-run sample counts, displayed as Total / Done / Failed
- Dataset: The dataset used, including metadata like sample source
- Annotation: The annotation set used to compute validation metrics
- User: The user who initiated the run
- Begin / End Time: Timestamps for execution
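Conceptually, each table row corresponds to a record shaped roughly like the sketch below. This is an illustrative data shape with assumed field names, not the product's schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ValidationRun:
    """Illustrative shape of one table row; all field names are assumed."""
    validation_name: str
    model_name: str
    model_type: str               # e.g., "segmentation"
    progress: str                 # "Completed" or "In Progress"
    samples_total: int
    samples_done: int
    samples_failed: int
    dataset: str
    annotation: str               # annotation set used to compute metrics
    user: str
    begin_time: datetime
    end_time: Optional[datetime]  # None while the run is still in progress
```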
3. Filters and Run Management
Clicking the Filters button opens a side panel where you can refine the list by:
- Model or Model Type
- Run Status (Completed, In Progress)
- Dataset or Annotation
- User or Model Tags
- Whether metrics were generated
- Favorited status
- Active vs. Archived runs
Filters help you narrow large sets of validation runs and pinpoint specific ones.
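The sketch below mimics how these criteria combine: every filter that is set must match, and unset filters are ignored. The record fields are assumptions carried over from the earlier example, not the platform's API.

```python
def filter_runs(runs, status=None, model_type=None, user=None,
                favorited=None, archived=False):
    """Yield runs matching every criterion that was supplied (hypothetical fields)."""
    for run in runs:
        if status is not None and run.get("status") != status:
            continue
        if model_type is not None and run.get("model_type") != model_type:
            continue
        if user is not None and run.get("user") != user:
            continue
        if favorited is not None and run.get("favorited", False) != favorited:
            continue
        if run.get("archived", False) != archived:  # active vs. archived
            continue
        yield run

runs = [
    {"status": "Completed", "model_type": "segmentation",
     "user": "alice", "favorited": True, "archived": False},
    {"status": "In Progress", "model_type": "classification",
     "user": "bob", "favorited": False, "archived": False},
]
completed = list(filter_runs(runs, status="Completed"))  # -> alice's run only
```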
4. Starting a New Validation
Click the Start Validation button to begin a new validation workflow. The guided setup includes:
4.1 Model Selection
- Choose from deployed models compatible with your use case.
- Review model details such as framework, task type, and version.
4.2 Dataset Selection
- Select an available dataset or upload your own.
- The system checks that the selected dataset is compatible with the model.
4.3 Annotation Selection
- Pick the ground truth annotation set used to evaluate model predictions.
4.4 Configure Metrics (Optional)
- Customize which metrics to generate, such as Dice, IoU, or F1-score (see the reference sketch below).
- Skip this step to use the default metric settings.
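For reference, the standard definitions of these overlap metrics on binary masks are sketched below; note that for binary segmentation, F1-score coincides with Dice. This is a generic reference implementation, not necessarily how the platform computes them.

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice = 2|P∩G| / (|P| + |G|); equals F1-score for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """IoU = |P∩G| / |P∪G|."""
    inter = np.logical_and(pred, gt).sum()
    return inter / np.logical_or(pred, gt).sum()

pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)  # model prediction
gt   = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)  # ground truth annotation
print(f"Dice={dice(pred, gt):.3f}, IoU={iou(pred, gt):.3f}")  # Dice=0.667, IoU=0.500
```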
4.5 Run Settings
- Add a run name and optional tags.
- Choose to validate the full dataset or a sample subset (see the sampling sketch below).
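One simple way to pick a subset is random sampling, along the lines of this minimal sketch; the platform's actual sampling strategy is not specified here, and the ID list is a stand-in.

```python
import random

sample_ids = list(range(1000))             # stand-in for dataset sample IDs
subset = random.sample(sample_ids, k=100)  # validate 100 of the 1,000 samples
```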
4.6 Launch
- Click Start to trigger the run.
- The new validation will appear in the table and update in real time.
5. Monitoring Progress
As validations proceed, you can:
- Track completion rates (Done vs. Total)
- View failures as soon as they occur
- Click into any run to review detailed metrics
- Stop active runs with the Stop Validations button if needed
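If you prefer to monitor progress programmatically, a simple polling loop like the one below can track the Done / Failed / Total counts. `get_run_status` is a hypothetical stand-in, not a documented endpoint of the platform; substitute your actual client call.

```python
import time

def get_run_status(run_id: str) -> dict:
    """Hypothetical stand-in for a status query; replace with a real client call."""
    return {"done": 100, "failed": 2, "total": 102, "state": "Completed"}

def wait_for_run(run_id: str, poll_seconds: float = 30.0) -> dict:
    """Poll until the run leaves the In Progress state."""
    while True:
        status = get_run_status(run_id)
        print(f"{status['done']}/{status['total']} done, "
              f"{status['failed']} failed")
        if status["state"] != "In Progress":
            return status
        time.sleep(poll_seconds)

final = wait_for_run("run-123", poll_seconds=5.0)
```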
Note: The Summary tab is ideal for validation tracking at scale. Use it to explore model behavior across multiple datasets, benchmark progress, and manage validation workflows with transparency.