FlowMC#

FlowMC combines a local MCMC kernel with a global normalizing-flow proposal trained on samples collected during a warm-up phase. It is well-suited to posteriors with complex geometry (strong correlations, multi-modality) where the random-walk kernel struggles to mix efficiently. The method is discussed at length in Ref. [1], and the software paper can be found in Ref. [2].

How it works. Sampling proceeds in two sequential phases.

Training phase (n_loop_training loops): Each loop runs n_local_steps local MCMC transitions followed by n_global_steps normalizing-flow proposals, then trains the flow for n_epochs epochs on all accepted samples so far. The flow is a masked coupling rational-quadratic spline (RQS) with num_layers coupling layers, each conditioned by a network of width hidden_size.

Production phase (n_loop_production loops): The flow is frozen; sampling alternates between local and global moves, and every output_thinning-th sample is stored. Only production samples are returned.
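To make the two phases concrete, here is a minimal single-chain sketch in NumPy. It only illustrates the alternation and the train-then-freeze structure described above; it is not the flowMC implementation. A Gaussian independence proposal fitted to the collected samples stands in for the normalizing flow, a Gaussian random walk plays the role of the local kernel, and thinning, parallel chains, and the flow's training epochs are omitted. All names are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
dim = 2

def log_target(x):
    # Toy target: standard normal, stand-in for the posterior.
    return -0.5 * np.sum(x**2)

def local_step(x, step_size=0.5):
    # Local kernel: Gaussian random-walk Metropolis.
    prop = x + step_size * rng.normal(size=x.shape)
    return prop if np.log(rng.uniform()) < log_target(prop) - log_target(x) else x

class GaussianIndependenceProposal:
    # Stand-in for the normalizing flow: an independence proposal
    # refit to all collected samples at the end of each training loop.
    def __init__(self, dim):
        self.mean, self.cov = np.zeros(dim), 4.0 * np.eye(dim)  # "untrained flow"

    def fit(self, samples):
        self.mean = samples.mean(axis=0)
        self.cov = np.cov(samples, rowvar=False) + 1e-6 * np.eye(samples.shape[1])

    def sample(self):
        return rng.multivariate_normal(self.mean, self.cov)

    def log_prob(self, x):
        d = x - self.mean
        _, logdet = np.linalg.slogdet(2.0 * np.pi * self.cov)
        return -0.5 * (d @ np.linalg.solve(self.cov, d) + logdet)

def global_step(x, proposal):
    # Global kernel: independence Metropolis-Hastings move from the proposal.
    prop = proposal.sample()
    log_ratio = (log_target(prop) - log_target(x)
                 + proposal.log_prob(x) - proposal.log_prob(prop))
    return prop if np.log(rng.uniform()) < log_ratio else x

def run(n_loop_training=5, n_loop_production=5, n_local_steps=50, n_global_steps=50):
    x = rng.normal(size=dim)
    history, production = [x], []
    proposal = GaussianIndependenceProposal(dim)

    # Training phase: local moves, global moves, then refit the proposal
    # on everything collected so far (the analogue of training the flow).
    for _ in range(n_loop_training):
        for _ in range(n_local_steps):
            x = local_step(x)
            history.append(x)
        for _ in range(n_global_steps):
            x = global_step(x, proposal)
            history.append(x)
        proposal.fit(np.asarray(history))

    # Production phase: the proposal is frozen; only these samples are kept.
    for _ in range(n_loop_production):
        for _ in range(n_local_steps):
            x = local_step(x)
            production.append(x)
        for _ in range(n_global_steps):
            x = global_step(x, proposal)
            production.append(x)
    return np.asarray(production)

samples = run()
print(samples.shape)  # (n_loop_production * (n_local_steps + n_global_steps), dim)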

The local kernel is a Gaussian Random Walk by default; MALA is also supported and uses gradient information for potentially better mixing.
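As a point of reference, a single MALA transition can be sketched as follows (generic NumPy, not flowMC's kernel; the step size and target are illustrative). The proposal drifts along the gradient of the log-posterior and is then corrected by a Metropolis-Hastings accept/reject step.

import numpy as np

rng = np.random.default_rng(1)

def mala_step(x, log_prob, grad_log_prob, step_size=0.1):
    # One Metropolis-adjusted Langevin (MALA) transition: the proposal
    # drifts along the gradient of the log-posterior before the usual
    # Metropolis-Hastings accept/reject correction.
    noise = rng.normal(size=x.shape)
    prop = x + 0.5 * step_size**2 * grad_log_prob(x) + step_size * noise

    def log_q(a, b):
        # Log density (up to a constant) of proposing a when currently at b.
        drift = b + 0.5 * step_size**2 * grad_log_prob(b)
        return -np.sum((a - drift) ** 2) / (2.0 * step_size**2)

    log_alpha = (log_prob(prop) - log_prob(x)
                 + log_q(x, prop) - log_q(prop, x))
    return prop if np.log(rng.uniform()) < log_alpha else x

# Illustrative target: a strongly correlated 2D Gaussian.
cov = np.array([[1.0, 0.9], [0.9, 1.0]])
prec = np.linalg.inv(cov)
log_prob = lambda x: -0.5 * x @ prec @ x
grad_log_prob = lambda x: -prec @ x

x = np.zeros(2)
for _ in range(1000):
    x = mala_step(x, log_prob, grad_log_prob)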

Note

We use an older version of flowMC, namely v0.4.5. Main development of flowMC has since migrated to a new clone of the repository: GW-JAX-Team/flowMC.

Configuration#

sampler:
  type: flowmc
  n_chains: 1000             # number of parallel MCMC chains
  n_loop_training: 10        # number of training loops, combining MCMC sampling and flow training
  n_loop_production: 10      # production loops after training, with frozen flow
  n_local_steps: 100         # local MCMC steps per loop
  n_global_steps: 100        # normalizing-flow proposals per loop
  n_epochs: 30               # training epochs per training loop
  learning_rate: 0.001       # Adam learning rate for flow training
  train_thinning: 5          # downsample to store every N-th training sample
  output_thinning: 5         # downsample to store every N-th production sample

Total production samples = n_chains × (n_local_steps / output_thinning + n_global_steps / output_thinning) × n_loop_production.
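For the example configuration above, that is 1000 × (100/5 + 100/5) × 10 = 400,000 production samples.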

train_thinning and output_thinning must not exceed n_local_steps or n_global_steps respectively; an error is raised at construction if this is violated.
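As an illustration of this constraint, a quick pre-flight check on a configuration file could look like the following (a hypothetical sketch using PyYAML, not the package's actual validation code):

import yaml  # PyYAML; assumed available for this illustration

config_text = """
sampler:
  type: flowmc
  n_local_steps: 100
  n_global_steps: 100
  train_thinning: 5
  output_thinning: 5
"""

cfg = yaml.safe_load(config_text)["sampler"]

# Mirrors the constraint stated above; the package performs an
# equivalent check when the sampler is constructed.
assert cfg["train_thinning"] <= cfg["n_local_steps"], "train_thinning exceeds n_local_steps"
assert cfg["output_thinning"] <= cfg["n_global_steps"], "output_thinning exceeds n_global_steps"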

API reference#

References

[1] Marylou Gabrié, Grant M. Rotskoff, and Eric Vanden-Eijnden. Adaptive Monte Carlo augmented with normalizing flows. Proc. Nat. Acad. Sci., 119(10):e2109420119, 2022. arXiv:2105.12603, doi:10.1073/pnas.2109420119.

[2] Kaze W. k. Wong, Marylou Gabrié, and Daniel Foreman-Mackey. flowMC: Normalizing flow enhanced sampling package for probabilistic inference in JAX. J. Open Source Softw., 8(83):5021, 2023. arXiv:2211.06397, doi:10.21105/joss.05021.