
Model Robustness Score


Model Robustness Score measures an AI model's ability to maintain its performance when exposed to new, unseen data or adversarial conditions. It is useful for assessing how well a model generalizes and how reliably it performs under varied or unexpected inputs.


Robustness Testing Methodology


A common approach is to compare a model's accuracy on a stress-test dataset against its baseline accuracy. For example, if an AI model retains 90% of its baseline accuracy on a new dataset designed to test its limits, it may be assigned a robustness score of 90/100.
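The calculation above can be sketched as a simple ratio. This is a minimal illustration, not a standard implementation; the function name and the 0-100 scale are assumptions based on the example in the text.

```python
def robustness_score(baseline_accuracy: float, stress_accuracy: float) -> float:
    """Fraction of baseline accuracy retained on a stress-test dataset, scaled to 0-100."""
    if baseline_accuracy <= 0:
        raise ValueError("baseline accuracy must be positive")
    # Cap at 100 in case the model happens to do better on the stress set.
    return min(100.0, 100.0 * stress_accuracy / baseline_accuracy)

# The example from the text: the model retains 90% of its baseline accuracy,
# e.g. baseline accuracy 0.80 and stress-test accuracy 0.72.
print(robustness_score(0.80, 0.72))
```

In practice the stress-test dataset might contain distribution-shifted samples, noisy inputs, or adversarially perturbed examples, and the score is often averaged across several such conditions.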