Learning Credible Models
Jiaxuan Wang (University of Michigan); Jeeheh Oh (University of Michigan); Haozhu Wang (University of Michigan); Jenna Wiens (University of Michigan)
In many settings, it is important that a model be capable of providing reasons for its predictions (i.e., the model must be interpretable). However, the model’s reasoning may not conform with well-established knowledge. In such cases, while interpretable, the model lacks credibility. In this work, we formally define credibility in the linear setting and focus on techniques for learning models that are both accurate and credible. In particular, we propose a regularization penalty, expert yielded estimates (EYE), that incorporates expert knowledge about well-known relationships among covariates and the outcome of interest. We give both theoretical and empirical results comparing our proposed method to several other regularization techniques. Across a range of settings, experiments on both synthetic and real data show that models learned using the EYE penalty are significantly more credible than those learned using other penalties. Applied to two large-scale patient risk stratification tasks, our proposed technique results in models whose top features overlap significantly with known clinical risk factors, while still achieving good predictive performance.
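To make the idea of an expert-informed penalty concrete, the following is a minimal illustrative sketch (not the paper's exact EYE formulation, whose formula is not given in this abstract). It assumes a binary expert mask over covariates and applies an L1 term to features experts have not flagged (encouraging sparsity there) and a gentler L2 term to expert-known risk factors; the function name, `beta` trade-off parameter, and mask encoding are all assumptions made for illustration.

```python
import numpy as np

def expert_informed_penalty(theta, known_mask, beta=1.0):
    """Illustrative expert-informed regularizer (a sketch, not the
    paper's exact EYE penalty): L1 on features experts did NOT flag,
    L2 on expert-known risk factors, traded off by `beta`."""
    theta = np.asarray(theta, dtype=float)
    known = np.asarray(known_mask, dtype=bool)
    l1_unknown = np.abs(theta[~known]).sum()        # push unflagged weights toward zero
    l2_known = np.sqrt((theta[known] ** 2).sum())   # shrink known risk factors gently
    return l1_unknown + beta * l2_known

# Weights over four covariates; experts flag the first two as known risk factors.
theta = np.array([0.5, -0.5, 0.3, 0.0])
mask = np.array([True, True, False, False])
penalty = expert_informed_penalty(theta, mask, beta=1.0)  # 0.3 + sqrt(0.5)
```

Adding such a term to a standard loss (e.g., logistic loss) would bias the learned model toward placing weight on expert-identified covariates, which is the kind of credibility-accuracy trade-off the abstract describes.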