Modeling Uncertainty in Bayesian Neural Networks with Dropout: The effect of weight prior and network architecture selection
Ellis Brown*, Melanie Manko*, and Ethan Matlin*
In American Indian Science and Engineering Society National Conference, Oct 2019
🎖️ Third Place, Graduate Student Research Competition
While neural networks are quite successful at making predictions, these predictions are usually point estimates lacking any notion of uncertainty. When fed data very different from its training data, a neural network should ideally recognize that its predictions may well be wrong and encode that information as uncertainty bands around its point-estimate predictions. Bayesian Neural Networks trained with Dropout are a natural way of modeling this uncertainty, with theoretical foundations connecting them to Variational Inference approximations of Gaussian Process posteriors. In this paper, we investigate the effects of weight prior selection and network architecture on uncertainty estimates derived from Dropout Bayesian Neural Networks.
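The core idea, following the Monte Carlo Dropout approach, is to keep dropout active at test time and treat repeated stochastic forward passes as samples from an approximate posterior: their mean is the prediction and their spread gives the uncertainty band. A minimal NumPy sketch (the one-hidden-layer network and its random weights are illustrative stand-ins for a trained model, not the paper's actual architectures):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny network: 1 input -> 50 hidden (ReLU) -> 1 output.
# Random weights stand in for a trained model's parameters.
W1 = rng.normal(0.0, 1.0, (1, 50))
b1 = np.zeros(50)
W2 = rng.normal(0.0, 1.0, (50, 1))
b2 = np.zeros(1)

def forward_with_dropout(x, p=0.5):
    """One stochastic forward pass with dropout kept ON at test time."""
    h = np.maximum(0.0, x @ W1 + b1)   # hidden layer with ReLU
    mask = rng.random(h.shape) > p     # Bernoulli dropout mask
    h = h * mask / (1.0 - p)           # inverted-dropout scaling
    return h @ W2 + b2

def mc_dropout_predict(x, n_samples=200):
    """Monte Carlo estimate: predictive mean and standard deviation."""
    samples = np.stack([forward_with_dropout(x) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)

x = np.array([[0.5]])
mean, std = mc_dropout_predict(x)  # std widens the point estimate into a band
```

For an input far from the training distribution, the sample standard deviation tends to grow, which is exactly the "I might be wrong" signal described above.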