Model Robustness Score
Model Robustness Score measures an AI model's ability to maintain performance when exposed to new, unseen data or adversarial conditions. It is important for assessing how well the model generalizes and performs under varied or unexpected inputs.
Formula
Robustness Score = (performance under stressed or adversarial conditions / baseline performance) × 100. The stressed performance is obtained through robustness testing: evaluating the model on held-out, shifted, or adversarially perturbed datasets and comparing against its baseline results.
Example
If an AI model maintains 90% of its accuracy on a new dataset designed to test its limits, it may be given a robustness score of 90/100.
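The example above can be sketched in code. This is a minimal illustration, assuming the score is simply the fraction of baseline accuracy retained on the stress dataset, scaled to 0-100; the function name and capping behavior are choices for this sketch, not a standard API.

```python
def robustness_score(baseline_accuracy: float, stress_accuracy: float) -> float:
    """Fraction of baseline accuracy retained under stress, scaled to 0-100.

    A score of 100 means no degradation; the result is capped at 100 so
    that a model which happens to improve on the stress set is not
    rewarded beyond "fully robust".
    """
    if baseline_accuracy <= 0:
        raise ValueError("baseline_accuracy must be positive")
    return min(100.0, 100.0 * stress_accuracy / baseline_accuracy)

# A model with 85% baseline accuracy that drops to 76.5% on the stress
# set retains 90% of its accuracy, matching the 90/100 example above.
print(round(robustness_score(0.85, 0.765), 1))
```

In practice, "performance" need not be accuracy; the same ratio applies to F1, AUC, or any metric where higher is better.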