A Geometric Approach to Predicting Bounds of Downstream Model Performance
Brian J. Goode: Virginia Tech; Debanjan Datta: Virginia Tech
This paper presents the motivation and methodology for incorporating model application criteria into baseline analysis. We focus on the interplay between two common measures, mean square error (MSE) and accuracy, as it relates to perceived model performance. MSE is a common aggregate measure of the performance of predictive regression models, and its advantages are numerous: it is agnostic to the choice of model, provided that the set of possible outcome values is defined on an appropriate metric space. In practice, decisions on how to subsequently use a trained model are based on predictive performance relative to a baseline in which input features are not used, colloquially a “random model”. However, a model's relative performance gains in MSE over the baseline do not guarantee commensurate gains when the model is deployed in downstream applications, systems, or processes. This paper demonstrates one derivation of a distribution that qualifies MSE performance for multi-class decision-making systems requiring a given level of accuracy. The model error is qualified through comparison to relevant, application-specific baselines suited to evaluating individual outcome performance criteria.
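The gap between MSE gains and downstream accuracy gains can be seen in a minimal sketch (illustrative only, not the paper's derivation): two regression models with identical MSE can yield very different classification accuracy once their predictions are thresholded into classes. All values below are hypothetical.

```python
import numpy as np

y_true = np.zeros(10)          # all samples belong to class 0
threshold = 0.5                # decision rule: predict class 1 if output > 0.5

# Model A: small, uniform errors on every sample.
preds_a = np.full(10, 0.4)

# Model B: perfect on 8 samples, large error on 2, tuned to match A's MSE.
preds_b = np.zeros(10)
preds_b[:2] = np.sqrt(0.8)     # 2 * 0.8 / 10 = 0.16, the same MSE as model A

def mse(p):
    return np.mean((p - y_true) ** 2)

def acc(p):
    return np.mean((p > threshold) == y_true.astype(bool))

print(mse(preds_a), acc(preds_a))   # MSE ~0.16, accuracy 1.0
print(mse(preds_b), acc(preds_b))   # MSE ~0.16, accuracy 0.8
```

Both models report the same aggregate MSE, yet model B misclassifies 20% of the samples after thresholding, which is the kind of divergence between regression error and decision-level performance the paper aims to bound.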