(Jeffrey & Wandelt: https://arxiv.org/abs/2305.11241)
Evidence Networks can enable Bayesian model comparison when state-of-the-art methods (e.g. nested sampling) fail, and even when likelihoods or priors are intractable or unknown. Bayesian model comparison, i.e. the computation of Bayes factors or evidence ratios, can be cast as an optimization problem. Though the Bayesian interpretation of optimal classification is well known, here we change perspective and present classes of loss functions that result in fast, amortized neural estimators that directly estimate convenient functions of the Bayes factor. We also introduce the leaky parity-odd power (l-POP) transform, leading to the novel "l-POP-Exponential" loss function.
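A minimal sketch of the underlying idea, using the standard binary cross-entropy loss rather than the paper's l-POP-Exponential loss (the network architecture, simulator inputs, and helper names below are illustrative assumptions, not the paper's implementation). A classifier is trained on simulations drawn from two models; with balanced classes and equal model priors, the Bayes-optimal output satisfies f*(x) = p(M1 | x), so the network's logit directly estimates the log Bayes factor log K(x) = log p(x | M1) - log p(x | M0):

```python
import torch
import torch.nn as nn

class EvidenceNetwork(nn.Module):
    """Hypothetical classifier network; outputs a raw logit per sample."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # raw logit; sigmoid is applied inside the loss
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

def train_step(net, opt, x0, x1):
    """One gradient step on simulations x0 ~ p(x | M0) and x1 ~ p(x | M1),
    assuming equal batch sizes (i.e. balanced classes)."""
    x = torch.cat([x0, x1])
    y = torch.cat([torch.zeros(len(x0)), torch.ones(len(x1))])
    loss = nn.functional.binary_cross_entropy_with_logits(net(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def log_bayes_factor(net, x):
    """At the loss optimum (and equal model priors), the logit estimates
    log K(x) = log p(x | M1) - log p(x | M0)."""
    with torch.no_grad():
        return net(x)
```

Because the estimator is amortized, a single trained network returns log K(x) for any new observation x in one forward pass, requiring only samples from the two models' simulators, never explicit likelihoods, priors, or posterior sampling.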