Uncertainty quantification

I give a brief summary of my UQ research below. A detailed list of my publications in this area can be found on the publications page.


Analysis of Bayesian inverse problems

Over the last few decades the Bayesian methodology has attracted considerable attention in the inverse problems community. The goal of this approach is to infer an unknown parameter from a set of noisy, indirect measurements. A particularly challenging setting arises when the parameter belongs to an infinite-dimensional Banach space, as is the case when the forward map involves the solution of a partial differential equation (PDE). A key question in infinite-dimensional Bayesian inverse problems is that of well-posedness: is the solution to the inverse problem well-defined, and does it depend continuously on the data?
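To fix notation, here is a sketch of the standard function space formulation (my paraphrase of the usual setup in this literature): the posterior μ^y is defined through its density with respect to the prior μ_0, and well-posedness is typically expressed as Lipschitz continuity of the posterior in a metric such as the Hellinger distance:

    \frac{d\mu^y}{d\mu_0}(u) = \frac{1}{Z(y)} \exp\bigl( -\Phi(u; y) \bigr),
    \qquad
    Z(y) = \int \exp\bigl( -\Phi(u; y) \bigr) \, d\mu_0(u),

    d_{\mathrm{Hell}}\bigl( \mu^{y}, \mu^{y'} \bigr) \le C \, \| y - y' \|
    \quad \text{for data } y, y' \text{ in bounded sets.}

Here Φ(u; y) is the negative log-likelihood (data misfit); the constant C and the exact choice of metric vary across the papers below.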

I have a long-term interest in the issue of well-posedness when the prior measure is non-Gaussian and heavy-tailed. So far I have studied the cases of non-Gaussian priors with exponential tails and infinitely divisible priors, but many open questions remain, most notably the continuous dependence of the posterior on perturbations of the prior.

  • Bamdad Hosseini. “Well-posed Bayesian inverse problems with infinitely divisible and heavy-tailed prior measures”. SIAM/ASA Journal on Uncertainty Quantification 5 (1 2017), pp. 1024–1060. url: https://doi.org/10.1137/16M1096372.
  • Bamdad Hosseini and Nilima Nigam. “Well-posed Bayesian inverse problems: priors with exponential tails”. SIAM/ASA Journal on Uncertainty Quantification 5 (1 2017), pp. 436–465. url: https://doi.org/10.1137/16M1076824.

Function space MCMC

The main challenge in practical applications of the Bayesian methodology for parameter estimation is the extraction of information from the posterior probability measure. The workhorse of the Bayesian framework in this context is the Markov chain Monte Carlo (MCMC) method.

I developed two Metropolis-Hastings algorithms for sampling posterior measures with Laplace or Gamma-type priors. The idea behind these algorithms is to design a prior-reversible proposal that results in a posterior-reversible MCMC kernel. The algorithms scale well with dimension since the proposal kernels remain well-defined in the infinite-dimensional limit; a sketch of this idea appears below.
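The following sketch (my own illustration with hypothetical helper names, not the exact algorithms from the paper) shows why prior reversibility is convenient: when the proposal kernel preserves the prior, the prior terms cancel in the Metropolis-Hastings ratio and the acceptance probability involves only the negative log-likelihood Φ, which remains meaningful in the infinite-dimensional limit.

    import numpy as np

    def mh_prior_reversible(phi, propose, u0, n_steps, rng=None):
        # phi(u): negative log-likelihood (data misfit) of the state u.
        # propose(u, rng): draws v from a proposal kernel that is
        # reversible with respect to the prior measure.
        rng = np.random.default_rng() if rng is None else rng
        u, phi_u = u0, phi(u0)
        samples = []
        for _ in range(n_steps):
            v = propose(u, rng)
            phi_v = phi(v)
            # Prior reversibility cancels the prior in the MH ratio, so
            # the move is accepted with probability min(1, exp(phi_u - phi_v)).
            if np.log(rng.uniform()) < phi_u - phi_v:
                u, phi_u = v, phi_v
            samples.append(u)
        return np.asarray(samples)

For a Gaussian prior N(0, C), the well-known preconditioned Crank-Nicolson move v = (1 - β²)^{1/2} u + β ξ with ξ ~ N(0, C) is prior-reversible; the algorithms in the paper construct analogous proposals adapted to Laplace and Gamma-type priors.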

Figure: Sampling a posterior with a Laplace prior using the RCAR algorithm.

More recently I have also become interested in the convergence analysis of MCMC algorithms in infinite dimensions. This is a challenging topic in applied probability that remains largely open. I have an active project on this topic that uses optimal transport and SPDE theory to prove uniform spectral gaps for certain Metropolis-Hastings algorithms.
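To fix ideas (my paraphrase; the precise metric and constants in the papers differ), a uniform spectral gap can be phrased as a Wasserstein contraction of the Markov transition kernel P, with a rate independent of the discretization dimension:

    W_d(\nu_1 P, \nu_2 P) \le (1 - \epsilon) \, W_d(\nu_1, \nu_2)
    \quad \text{for all probability measures } \nu_1, \nu_2,
    \qquad \epsilon \in (0, 1),

which, upon iterating, yields geometric convergence W_d(ν Pⁿ, μ) ≤ (1 - ε)ⁿ W_d(ν, μ) to the posterior μ.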

  • Bamdad Hosseini. “Two Metropolis-Hastings algorithms for posterior measures with non-Gaussian priors in infinite dimensions”. SIAM/ASA Journal on Uncertainty Quantification 7 (4 2019), pp. 1185–1223. url: https://doi.org/10.1137/18M1183017.
  • Bamdad Hosseini and James E. Johndrow. “Spectral gaps and error estimates for infinite-dimensional Metropolis-Hastings with non-Gaussian priors” (2019). url: https://arxiv.org/abs/1810.00297.

Modelling with non-Gaussian priors

Estimation of sparse parameters is a central problem in areas such as compressive sensing, inverse problems, and statistics, with wide applications in image compression, medical and astronomical imaging, and machine learning. I am interested in the case where the compressible parameter of interest belongs to an infinite-dimensional Banach or Hilbert space. My goal is to develop a framework for estimating compressible parameters together with the uncertainties associated with the estimated values.

I have introduced various non-Gaussian priors for modelling compressible parameters, studied the theoretical aspects of Bayesian inverse problems with these priors, and shown that efficient algorithms can be designed to sample the resulting posteriors. A key remaining question is to rigorously characterize whether, and in what sense, a given prior class is well suited to modelling sparsity.
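As a concrete illustration (a standard example of this type, not necessarily the exact prior used in the papers), consider a product prior with a non-convex ℓ_q-type potential:

    \pi(u) \propto \exp\Bigl( -\lambda \sum_{k} |u_k|^{q} \Bigr),
    \qquad 0 < q < 1, \; \lambda > 0.

For q = 1 this reduces to the (log-concave) Laplace prior, while for 0 < q < 1 the potential is non-convex and the prior concentrates its mass near the coordinate axes, which is the mechanism behind the “compressible” posterior in the figure below.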

Figure: A non-convex prior resulting in a “compressible” posterior concentrated around certain coordinate axes.
  • Bamdad Hosseini. “Two Metropolis-Hastings algorithms for posterior measures with non-Gaussian priors in infinite dimensions”. SIAM/ASA Journal on Uncertainty Quantification 7 (4 2019), pp. 1185–1223. url: https://doi.org/10.1137/18M1183017.
  • Bamdad Hosseini. “Well-posed Bayesian inverse problems with infinitely divisible and heavy-tailed prior measures”. SIAM/ASA Journal on Uncertainty Quantification 5 (1 2017), pp. 1024–1060. url: https://doi.org/10.1137/16M1096372.
  • Bamdad Hosseini and Nilima Nigam. “Well-posed Bayesian inverse problems: priors with exponential tails”. SIAM/ASA Journal on Uncertainty Quantification 5 (1 2017), pp. 436–465. url: https://doi.org/10.1137/16M1076824.