
Asymptotically exact data augmentation (AXDA)

Data augmentation, via the introduction of auxiliary variables, has become a ubiquitous technique to improve the mixing and convergence properties, simplify the implementation, or reduce the computational cost of inference methods such as Markov chain Monte Carlo. Nonetheless, introducing appropriate auxiliary variables while preserving the initial target probability distribution cannot be done systematically; it depends strongly on the problem at hand. To address this issue, we propose a unified framework, namely asymptotically exact data augmentation (AXDA), which encompasses several well-established as well as more recent approximate augmented models. This more general perspective yields additional qualitative and quantitative insights into these schemes. In particular, we state general properties of AXDA along with non-asymptotic theoretical results on the approximation that is made. Close connections to existing Bayesian methods (e.g. mixture modeling, robust Bayesian models and approximate Bayesian computation) are also drawn. All the results are illustrated with examples and applied to standard statistical learning problems.
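
To fix ideas, the construction can be sketched as follows; the notation is illustrative (a quadratic, i.e. Gaussian, coupling with tolerance ρ > 0 is assumed) rather than quoted from the paper. Given a target posterior π(θ | y) ∝ p(y | θ) p(θ), AXDA introduces an auxiliary variable z and targets the augmented density

\pi_\rho(\theta, z \mid y) \;\propto\; p(y \mid z)\, \exp\!\left(-\frac{\|z - \theta\|^2}{2\rho^2}\right) p(\theta),

whose marginal in θ recovers the exact posterior as ρ → 0, hence the name "asymptotically exact".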

The AXDA framework, its connections to existing models, and the related algorithms (optimization, expectation-maximization, Monte Carlo sampling and variational Bayes) used to perform inference from AXDA models are detailed in the arXiv preprint:

In particular, Monte Carlo sampling from AXDA leads to the split-and-augmented Gibbs sampler (SPA) detailed in the paper published in IEEE Trans. Signal Processing:

The corresponding Matlab codes for SPA are available on Maxime Vono's GitHub.
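
To make the two-step structure of SPA concrete, here is a minimal Python sketch on a toy Gaussian model for which both conditional distributions are Gaussian in closed form. The model, variable names and parameter values are assumptions chosen for illustration only; this is not the released Matlab implementation.

import numpy as np

rng = np.random.default_rng(0)

# Toy model (illustration only): y | theta ~ N(theta, sigma2*I),
# prior theta ~ N(0, tau2*I). AXDA decouples likelihood and prior
# through z with a Gaussian coupling N(z; theta, rho2*I); SPA then
# alternates between the two resulting conditional distributions.
sigma2, tau2, rho2, d = 1.0, 4.0, 1e-2, 5
theta_true = rng.normal(0.0, np.sqrt(tau2), d)
y = theta_true + rng.normal(0.0, np.sqrt(sigma2), d)

theta, samples = np.zeros(d), []
for _ in range(5000):
    # Sample z | theta, y: a product of two Gaussians is Gaussian.
    v_z = 1.0 / (1.0 / sigma2 + 1.0 / rho2)
    z = v_z * (y / sigma2 + theta / rho2) + np.sqrt(v_z) * rng.standard_normal(d)
    # Sample theta | z: Gaussian prior times Gaussian coupling.
    v_t = 1.0 / (1.0 / rho2 + 1.0 / tau2)
    theta = v_t * (z / rho2) + np.sqrt(v_t) * rng.standard_normal(d)
    samples.append(theta)

est = np.mean(samples[1000:], axis=0)  # discard burn-in
print(est)  # close to the exact posterior mean tau2/(sigma2 + tau2) * y

A smaller tolerance rho2 tightens the coupling between z and theta, and hence the quality of the approximation, typically at the price of slower mixing.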

Sparse logistic regression

As an illustration, an AXDA-based model and the corresponding split-and-augmented Gibbs sampler have been used to conduct sparse Bayesian logistic regression efficiently. The results are reported in the paper presented at the IEEE Workshop on Machine Learning for Signal Processing (MLSP 2018):

More results are also available here. The corresponding Matlab codes are available on Maxime Vono's GitHub.
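
As a rough sketch of how the splitting applies in this setting (an illustrative formulation with a Gaussian coupling and a Laplace prior; the paper's exact model may differ), with binary responses y ∈ {0,1}^n and design matrix X, one may target

\pi_\rho(\theta, z \mid y) \;\propto\; \prod_{i=1}^{n} \frac{e^{y_i z_i}}{1 + e^{z_i}} \; \exp\!\left(-\frac{\|z - X\theta\|^2}{2\rho^2}\right) \exp\!\left(-\lambda \|\theta\|_1\right),

so that the logistic likelihood only involves z, while θ only appears through Gaussian and ℓ1 terms, yielding simpler conditional distributions for the Gibbs steps.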

Image restoration under Poisson noise and log-concave prior

Another instance of the proposed AXDA framework and split-and-augmented Gibbs sampler has been implemented to conduct Bayesian image restoration under Poisson noise with a log-concave prior (e.g., TV regularization or sparse frame-based synthesis regularization). The results are reported in the paper presented at the IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP 2019):
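
Schematically, and again with illustrative notation (H a blurring operator, TV the total-variation regularizer, β > 0 a regularization parameter; the paper's exact splitting may differ), the augmented model takes the form

\pi_\rho(x, z_1, z_2 \mid y) \;\propto\; \prod_i \frac{[z_1]_i^{y_i}\, e^{-[z_1]_i}}{y_i!} \, \exp\!\left(-\frac{\|z_1 - Hx\|^2}{2\rho^2}\right) \exp\!\left(-\beta\,\mathrm{TV}(z_2)\right) \exp\!\left(-\frac{\|z_2 - x\|^2}{2\rho^2}\right),

which isolates the Poisson likelihood term and the non-smooth prior in separate, simpler conditional distributions.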
