Neural Markov Jump Processes

Markov jump processes are continuous-time stochastic processes with a wide range of applications in both the natural and social sciences. Despite their widespread use, inference in these models is highly non-trivial and typically proceeds via either Monte Carlo or expectation-maximization methods. In this work we introduce an alternative, variational inference algorithm for Markov jump processes which relies on neural ordinary differential equations and is trainable via backpropagation. Our methodology learns neural, continuous-time representations of the observed data, which are used to approximate the initial distribution and the time-dependent transition probability rates of the posterior Markov jump process. The time-independent rates of the prior process are, in contrast, trained in a manner akin to generative adversarial networks. We test our approach on synthetic data sampled from ground-truth Markov jump processes, on experimental single-molecule ion channel data, and on molecular dynamics simulations. Source code to reproduce all our experiments is available online.
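To make the setup concrete, the following is a minimal, hypothetical PyTorch sketch of the central object the abstract describes: a Markov jump process over a finite state space whose time-dependent transition rate matrix Q(t) is parameterized by a small neural network, with the master equation dp/dt = Q(t)^T p integrated numerically. It illustrates the general construction only, not the authors' implementation; the names (RateNet, propagate) are invented, and the explicit Euler loop stands in for the neural ODE machinery the abstract refers to.

```python
# A minimal, hypothetical sketch (not the authors' code): a Markov jump
# process over a finite state space whose time-dependent transition rates
# Q(t) come from a small neural network, with the master equation
#     dp/dt = Q(t)^T p
# integrated by explicit Euler steps as a stand-in for a neural ODE solver.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RateNet(nn.Module):
    """Maps a time t to a valid transition-rate matrix Q(t).

    Off-diagonal entries are non-negative and each row sums to zero,
    as required of a Markov jump process generator.
    """
    def __init__(self, n_states: int, hidden: int = 32):
        super().__init__()
        self.n_states = n_states
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(),
            nn.Linear(hidden, n_states * n_states),
        )

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        raw = self.net(t.view(1, 1)).view(self.n_states, self.n_states)
        off_diag = F.softplus(raw) * (1.0 - torch.eye(self.n_states))
        return off_diag - torch.diag(off_diag.sum(dim=1))  # rows sum to zero

def propagate(rates: RateNet, p0: torch.Tensor, t_max: float, n_steps: int = 200):
    """Evolve the state distribution p(t) under the master equation."""
    p, dt = p0.clone(), t_max / n_steps
    for k in range(n_steps):
        t = torch.tensor([k * dt])
        p = p + dt * (rates(t).T @ p)  # Euler step: p <- p + dt * Q(t)^T p
    return p

# Usage: evolve a point-mass initial distribution over three states.
net = RateNet(n_states=3)
p_t = propagate(net, torch.tensor([1.0, 0.0, 0.0]), t_max=1.0)
print(p_t, p_t.sum())  # remains a distribution up to Euler discretization error
```

The zero-row-sum construction is what keeps the evolved vector a probability distribution: each Euler step adds a vector whose entries sum to zero, so total probability is conserved up to discretization error.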

  • Published in: Proceedings of the 40th International Conference on Machine Learning
  • Type: Inproceedings
  • Authors: Seifner, Patrick; Sanchez, Ramses J.
  • Year: 2023

Citation information

Seifner, Patrick; Sanchez, Ramses J.: Neural Markov Jump Processes. In: Proceedings of the 40th International Conference on Machine Learning, 2023. https://arxiv.org/abs/2305.19744
