Deep Amortized Inference for Probabilistic Programs using Adversarial Compilation

We propose an amortized inference strategy for probabilistic programs, one that learns from past inferences to speed up future ones.

Our proposed inference strategy is to train neural guide programs via a minimax game, with the probabilistic program acting as a correlation device. From a game-theoretic vantage point, the role of a correlation device is to enforce better outcomes by sharing information between players. The shared information, in our case, is the execution trace, which is used to compute the payoffs in the minimax game.
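The game above can be sketched in a toy setting. The following is a minimal, illustrative sketch, not the paper's method: it assumes a hypothetical one-latent Gaussian program, a linear-Gaussian guide with parameters `theta`, a logistic discriminator with weights `w` over hand-picked trace features, and finite-difference updates in place of neural networks and backpropagation. The discriminator ascends the payoff (telling model traces from guide traces) while the guide descends it; both payoffs are computed from execution traces, the shared information of the correlation device.

```python
import numpy as np

rng = np.random.default_rng(0)

def model_trace(n):
    # Toy probabilistic program: latent z ~ N(0, 1), observation x = z + N(0, 0.5).
    # Its execution trace (z, x) is the information shared between the two players.
    z = rng.normal(0.0, 1.0, n)
    x = z + rng.normal(0.0, 0.5, n)
    return np.stack([z, x], axis=1)

def guide_trace(theta, x, eps):
    # Hypothetical guide: proposes z given x as z = theta[0]*x + exp(theta[1])*eps.
    z = theta[0] * x + np.exp(theta[1]) * eps
    return np.stack([z, x], axis=1)

def disc(w, trace):
    # Logistic discriminator over simple trace features (illustrative choice).
    f = np.stack([trace[:, 0], trace[:, 1], trace[:, 0] * trace[:, 1]], axis=1)
    return 1.0 / (1.0 + np.exp(-f @ w))

def payoff(w, theta, real, eps):
    # Minimax payoff: the discriminator maximizes it, the guide minimizes it.
    fake = guide_trace(theta, real[:, 1], eps)
    tiny = 1e-9  # numerical guard inside the logs
    return (np.mean(np.log(disc(w, real) + tiny))
            + np.mean(np.log(1.0 - disc(w, fake) + tiny)))

def fd_grad(f, p, h=1e-4):
    # Finite-difference gradient, standing in for backprop to stay dependency-free.
    g = np.zeros_like(p)
    for i in range(len(p)):
        d = np.zeros_like(p)
        d[i] = h
        g[i] = (f(p + d) - f(p - d)) / (2.0 * h)
    return g

w = np.zeros(3)
theta = np.array([0.5, 0.0])
for step in range(100):
    real = model_trace(256)
    eps = rng.normal(0.0, 1.0, 256)  # common random numbers per step
    w = w + 0.1 * fd_grad(lambda w_: payoff(w_, theta, real, eps), w)          # ascent
    theta = theta - 0.1 * fd_grad(lambda t_: payoff(w, t_, real, eps), theta)  # descent
```

After training, the guide parameters `theta` amortize inference: given a fresh observation `x`, the guide proposes latents directly instead of running inference from scratch.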

Author: Mahdi Azarafrooz

The extended abstract is available at: pps18-adversarial-compilation

Poster: Adversarial Compilation
