Bayesian Theory of Surprise to Quantify Degrees of Uncertainty
Speaker: Nelly Bencomo – Durham, United Kingdom
Topic(s): Artificial Intelligence, Machine Learning, Computer Vision, Natural Language Processing
Abstract
In the area of software engineering (SE) for self-adaptive systems (SASs), there is growing research awareness of the synergy between SE and artificial intelligence (AI); this line of research is just starting. In this talk, we present a novel and formal Bayesian definition of surprise as the basis for quantitative analysis to measure degrees of uncertainty and deviations of self-adaptive systems from normal behaviour. Surprise measures how observed data affects the models or assumptions of the world at runtime. The key idea is that a “surprising” event can be defined as one that causes a large divergence between the belief distributions before and after the event occurs. In such a case, the system may decide either to adapt accordingly or to flag that an abnormal situation is happening. We will discuss possible applications of the Bayesian theory of surprise to self-adaptive systems using Bayesian inference and Partially Observable Markov Decision Processes (POMDPs). We will also cover different surprise-based approaches to quantifying uncertainty (Bayesian Surprise, Shannon Surprise, and Bayes Factor Surprise) and related work on Digital Twins.
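As a concrete illustration (not taken from the lecture materials), the sketch below quantifies surprise for a simple Beta-Bernoulli runtime monitor tracking the probability that a monitored requirement is satisfied: the Bayesian surprise of an observation is the KL divergence between the posterior and prior beliefs, and the Shannon surprise is the negative log-probability of the observation under the prior predictive. The model, function names, and thresholding idea are illustrative assumptions, not the speaker's implementation.

```python
# Minimal sketch: Bayesian vs. Shannon surprise for a Beta-Bernoulli monitor.
# Illustrative assumption only; not the approach presented in the lecture.
import math
from scipy.special import betaln, digamma

def kl_beta(a_post, b_post, a_prior, b_prior):
    """KL divergence KL(Beta(a_post, b_post) || Beta(a_prior, b_prior))."""
    return (betaln(a_prior, b_prior) - betaln(a_post, b_post)
            + (a_post - a_prior) * digamma(a_post)
            + (b_post - b_prior) * digamma(b_post)
            + (a_prior - a_post + b_prior - b_post) * digamma(a_post + b_post))

def bayesian_surprise(a, b, success):
    """Bayesian surprise of one observation: divergence between the belief
    after the Bayesian update (posterior) and before it (prior)."""
    a_post, b_post = (a + 1, b) if success else (a, b + 1)
    return kl_beta(a_post, b_post, a, b), (a_post, b_post)

def shannon_surprise(a, b, success):
    """Shannon surprise: negative log-probability of the observation
    under the prior predictive distribution."""
    p_success = a / (a + b)
    return -math.log(p_success if success else 1.0 - p_success)

# Example: after many successful observations, a single failure is "surprising".
a, b = 1.0, 1.0                       # uninformative prior belief
for _ in range(50):                   # 50 observed successes
    _, (a, b) = bayesian_surprise(a, b, success=True)

kl, _ = bayesian_surprise(a, b, success=False)   # then one failure
print(f"Bayesian surprise of failure: {kl:.3f} nats")
print(f"Shannon surprise of failure: {shannon_surprise(a, b, False):.3f} nats")
# A runtime monitor could compare these values against a threshold to decide
# whether to adapt or to flag an abnormal situation.
```

After 50 successes, the single failure yields large surprise values; under this assumed setup, a self-adaptive system would compare them against a threshold to decide whether to adapt or to flag an anomaly.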
About this Lecture
Number of Slides: 40 - 45
Duration: 40 - 45 minutes
Languages Available: English, Spanish
Last Updated:
Request this Lecture
To request this particular lecture, please complete this online form.
Request a Tour
To request a tour with this speaker, please complete this online form.
All requests will be sent to ACM headquarters for review.