Seventh International Conference on Scalable Uncertainty Management (SUM 2013)
Washington DC Area, USA, September 16-18, 2013

Invited speakers

Besides regular paper presentations, the program of SUM 2013 will feature invited talks by Christos Faloutsos (Carnegie Mellon University) and Steve Eubank (Virginia Tech), and an invited tutorial by Rama Chellappa (University of Maryland).


Influence propagation in large graphs - theorems, algorithms, and case studies

Christos Faloutsos (Carnegie Mellon University)

Given the specifics of a virus (or product, or hashtag), how quickly will it propagate on a contact network? Will it create an epidemic, or will it quickly die out? The way a virus/product/meme propagates on a graph matters because it can help us design immunization policies (if we want to stop it) or marketing policies (if we want it to succeed). We present some surprising results on the so-called 'epidemic threshold', discuss the effects of time-varying contact networks, and present fast algorithms that achieve near-optimal immunization.
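
The abstract does not spell out the underlying model, but a widely used formulation of this threshold, due to Faloutsos and collaborators among others, ties die-out to the leading eigenvalue of the contact network's adjacency matrix: under an SIS-style model with infection rate beta and recovery rate delta, the infection tends to die out when (beta/delta) * lambda_1 < 1. The Python sketch below illustrates that check on a toy graph; the adjacency matrix and rate values are made up for illustration.

    import numpy as np

    # Toy contact network as an undirected, unweighted adjacency matrix.
    # Any symmetric 0/1 matrix works here; this 5-node example is illustrative.
    A = np.array([
        [0, 1, 1, 0, 0],
        [1, 0, 1, 1, 0],
        [1, 1, 0, 1, 1],
        [0, 1, 1, 0, 1],
        [0, 0, 1, 1, 0],
    ], dtype=float)

    beta = 0.10   # per-contact infection rate (assumed value)
    delta = 0.40  # recovery rate (assumed value)

    # Leading eigenvalue (spectral radius) of the contact graph.
    lambda_1 = max(np.linalg.eigvalsh(A))

    # SIS-style threshold condition: the infection tends to die out when
    # (beta / delta) * lambda_1 < 1, and can become epidemic otherwise.
    strength = (beta / delta) * lambda_1
    print(f"lambda_1 = {lambda_1:.3f}, effective strength = {strength:.3f}")
    if strength < 1:
        print("below threshold: expected die-out")
    else:
        print("above threshold: possible epidemic")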


Using network reliability polynomials to characterize contact networks for infectious disease epidemiology

Steve Eubank (Virginia Tech)

It is well known that the structure of host-host contact networks can play an important role in determining the spread of infectious disease. Over the past decade especially, there have been many attempts to infer human contact networks at scales ranging from urban regions to continents, and to simulate epidemics on the resulting networks. Moreover, since both pharmaceutical and non-pharmaceutical interventions can be represented as changes in network structure, simulated epidemics can be used to evaluate hypothetical combinations of interventions. Unfortunately, it is difficult to understand how sensitive the simulated epidemics are to details of the network structure. Results, such as those relating degree distribution to outbreak dynamics, typically make unwarranted assumptions about independence or symmetries in the network that introduce hard-to-control errors. Understanding this sensitivity to network structure is crucial for answering several related questions:

  • How closely must the inferred networks match the modeled system for inferences about interventions to be useful?
  • Can we take a shortcut to evaluating interventions that eliminates the need for simulations by characterizing networks directly?
  • Given a network, what is the optimal intervention under constrained resources? If we cannot optimize, can we at least develop useful rules of thumb?

This talk will review concepts of network reliability that are more than 50 years old and describe how they can be extended and applied in the context of epidemiology. I will introduce a class of reliability polynomials and demonstrate several useful representations for them; briefly discuss the computational complexity of evaluating the polynomials exactly; and illustrate the use of scalable, distributed simulation for efficient approximation. I will show how to identify the contacts that are the most important targets for intervention and, more generally, how to characterize and compare networks in terms that are immediately relevant to epidemiology. Some representations of the reliability polynomial are well suited to analytical reasoning about graph structure. I will illustrate this with a brief discussion of the phenomenon of "crossing reliability": in the context of outbreak interventions, the possibility that reliability polynomials cross implies that the relative ranking of interventions depends on the host-host transmissibility. I will discuss what kinds of structural changes induce crossing reliability, and the magnitude of the resulting difference in reliability.
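
As a rough illustration of approximating a reliability polynomial by simulation, the Python sketch below estimates all-terminal reliability R(G, p), the probability that a graph stays connected when each edge is independently retained with probability p, by Monte Carlo sampling. This is only one member of the broader class of polynomials the talk refers to (epidemiologically motivated variants use different acceptance rules), and the graph, retention probabilities, and sample count are illustrative.

    import random

    def connected(nodes, edges):
        """Check connectivity of the subgraph induced by the kept edges."""
        adj = {v: set() for v in nodes}
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)
        seen, stack = set(), [next(iter(nodes))]
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                stack.extend(adj[v] - seen)
        return len(seen) == len(nodes)

    def reliability_estimate(nodes, edges, p, samples=20000):
        """Monte Carlo estimate of all-terminal reliability R(G, p):
        each edge is retained independently with probability p."""
        hits = 0
        for _ in range(samples):
            kept = [e for e in edges if random.random() < p]
            hits += connected(nodes, kept)
        return hits / samples

    # Toy example: a 4-cycle with one chord.
    nodes = [0, 1, 2, 3]
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
    for p in (0.3, 0.6, 0.9):
        print(p, reliability_estimate(nodes, edges, p))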


The Evolution of Probabilistic Models and Uncertainty Analysis in Computer Vision Research

Rama Chellappa (University of Maryland)

During the past three decades, probabilistic methods and uncertainty analysis have been slowly but steadily integrated into computer vision research. In the early years, when the emphasis was on geometry and probabilistic inference over geometric representations was challenging, the role of probabilistic inference was minimal. Since the introduction of Markov random fields, robust methods, and error bounds, many computer vision problems have lent themselves to more rigorous analysis. In this talk, I will illustrate these ideas by highlighting the role played by MRFs in image analysis, error bounds for the structure-from-motion problem, and some recent work on probabilistic inference on manifolds for activity recognition.
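
As one concrete instance of the MRF role mentioned above, the sketch below runs iterated conditional modes (ICM) to denoise a binary image under an Ising-style smoothness prior; this is a textbook example rather than anything specific to the talk, and the weights, image, and iteration count are illustrative.

    import numpy as np

    def icm_denoise(noisy, beta=2.0, eta=1.5, sweeps=5):
        """Iterated conditional modes for a binary (+1/-1) image under an
        Ising-style MRF prior: smoothness weight beta, data fidelity weight eta."""
        x = noisy.copy()
        H, W = x.shape
        for _ in range(sweeps):
            for i in range(H):
                for j in range(W):
                    # Sum of the 4-connected neighbours' labels.
                    nbr = 0.0
                    if i > 0:
                        nbr += x[i - 1, j]
                    if i < H - 1:
                        nbr += x[i + 1, j]
                    if j > 0:
                        nbr += x[i, j - 1]
                    if j < W - 1:
                        nbr += x[i, j + 1]
                    # Pick the label maximizing the local conditional score.
                    score = beta * nbr + eta * noisy[i, j]
                    x[i, j] = 1 if score >= 0 else -1
        return x

    # Toy usage: a 16x16 square corrupted by random label flips.
    rng = np.random.default_rng(0)
    clean = -np.ones((16, 16))
    clean[4:12, 4:12] = 1
    noisy = np.where(rng.random(clean.shape) < 0.1, -clean, clean)
    print("fraction of pixels recovered:", np.mean(icm_denoise(noisy) == clean))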