Analysis of Bayesian inverse problems
During the last few decades, the Bayesian methodology has attracted considerable attention in the inverse problems community. The goal of this approach is to infer an unknown parameter from a set of noisy, indirect measurements. A particularly challenging setting for Bayesian inverse problems arises when the parameter belongs to an infinite-dimensional Banach space. This is the case in inverse problems where the forward map involves the solution of a partial differential equation (PDE). A key question in infinite-dimensional Bayesian inverse problems is that of well-posedness: is the solution to the inverse problem well-defined, and does it depend continuously on the data?
I am interested in the issue of well-posedness when the prior measure is non-Gaussian and heavy-tailed. I have studied the cases of non-Gaussian priors with exponential tails and infinitely divisible priors in BHNN’17 and BH’17.
Function space MCMC
The main challenge in practical applications of the Bayesian methodology for parameter estimation is extracting information from the posterior probability measure. The workhorse of the Bayesian framework in this context is the Markov chain Monte Carlo (MCMC) method.
In BH’19 I developed two Metropolis–Hastings algorithms for sampling posterior measures with Laplace- or Gamma-type priors. The idea behind these algorithms is to design a prior-reversible proposal that results in a posterior-reversible MCMC kernel. The algorithms scale well with dimension because the proposal kernels remain well-defined in the infinite-dimensional limit.
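The prior-reversibility principle can be illustrated with the preconditioned Crank–Nicolson (pCN) proposal, the classic prior-reversible proposal for a Gaussian prior: because the proposal preserves the prior, the acceptance ratio involves only the likelihood and stays non-degenerate as the discretization is refined. The sketch below uses a Gaussian prior and a toy linear inverse problem for simplicity; the function names and the toy setup are illustrative assumptions, not the Laplace- or Gamma-type constructions of BH’19.

```python
import numpy as np

def pcn_mh(phi, d, beta=0.2, n_steps=5000, rng=None):
    """Preconditioned Crank-Nicolson Metropolis-Hastings sampler.

    The proposal v = sqrt(1 - beta^2) * u + beta * xi, with xi ~ N(0, I),
    is reversible with respect to the N(0, I) prior, so the acceptance
    ratio involves only the negative log-likelihood `phi` and remains
    well-defined as the discretization dimension d grows.
    """
    rng = np.random.default_rng(rng)
    u = rng.standard_normal(d)
    phi_u = phi(u)
    samples, accepts = [], 0
    for _ in range(n_steps):
        v = np.sqrt(1.0 - beta**2) * u + beta * rng.standard_normal(d)
        phi_v = phi(v)
        # prior terms cancel: accept with probability min(1, exp(phi_u - phi_v))
        if np.log(rng.uniform()) < phi_u - phi_v:
            u, phi_u = v, phi_v
            accepts += 1
        samples.append(u.copy())
    return np.array(samples), accepts / n_steps

# Toy linear inverse problem: y = A u + noise, Gaussian likelihood.
d = 50
rng = np.random.default_rng(0)
A = rng.standard_normal((10, d)) / np.sqrt(d)
y = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(10)
phi = lambda u: np.sum((y - A @ u) ** 2) / (2 * 0.1**2)

samples, acc_rate = pcn_mh(phi, d, beta=0.1, n_steps=2000, rng=1)
```

Note how the prior density never appears in the acceptance step; constructing proposals with the same cancellation property for non-Gaussian priors is the crux of the dimension-robust design.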
Modelling with non-Gaussian priors
Estimation of sparse parameters is a central problem in areas such as compressive sensing, inverse problems and statistics, with wide applications in image compression, medical and astronomical imaging, and machine learning. I am interested in the case where the compressible parameter of interest belongs to an infinite-dimensional Banach or Hilbert space. My goal is to develop a framework for estimating compressible parameters together with the uncertainties associated with the estimated values.
In the articles BH’17 and BH’19, I introduced various non-Gaussian priors to model compressible parameters, studied the theoretical aspects of Bayesian inverse problems with these priors, and showed that efficient algorithms can be designed to sample from the resulting posteriors. The remaining question is to rigorously classify which prior classes are suitable for modelling sparsity, and in what sense.
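One common way to build non-Gaussian priors on a function space is through a random series expansion, u = Σ_k g_k ξ_k ψ_k, where the ξ_k are i.i.d. heavy-tailed (e.g. Laplace) random variables and the deterministic scales g_k decay. The sketch below draws the coefficient sequence of such a prior; the decay rate, distribution choice, and function name are illustrative assumptions, not the specific constructions of BH’17 or BH’19.

```python
import numpy as np

def series_prior_coeffs(n_coeffs, decay=1.5, dist="laplace", rng=None):
    """Draw the coefficients g_k * xi_k of a random-series prior.

    With Laplace-distributed xi_k, draws place more mass on small
    coefficients than a Gaussian of the same scale, one informal
    notion of "compressibility". Decay rate and distribution are
    illustrative choices.
    """
    rng = np.random.default_rng(rng)
    k = np.arange(1, n_coeffs + 1)
    g = k ** (-decay)  # deterministic decay of the coefficient scales
    if dist == "laplace":
        xi = rng.laplace(size=n_coeffs)
    else:
        xi = rng.standard_normal(n_coeffs)
    return g * xi

# Draw coefficient sequences under a Laplace-type and a Gaussian prior.
lap = series_prior_coeffs(10_000, dist="laplace", rng=0)
gau = series_prior_coeffs(10_000, dist="gaussian", rng=0)
```

Comparing the magnitudes of `lap` and `gau` gives an empirical handle on how the choice of coefficient distribution shapes compressibility, which is one way to probe the classification question above.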