Combining Variational Autoencoders and Transformer Language Models for Improved Password Generation

Password generation techniques have recently been explored by leveraging deep-learning natural language processing (NLP) algorithms. Previous work has significantly raised the state of the art for password-guessing algorithms, approaching the problem with either variational autoencoders using CNN-based encoder and decoder architectures or transformer-based architectures (namely GPT-2) for text generation. In this work we combine both paradigms, introducing a novel architecture that leverages the expressive power of transformers together with the natural sampling approach to text generation of variational autoencoders. We show that our architecture achieves state-of-the-art password-matching performance across multiple benchmark datasets.
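
As a rough illustration of the idea described in the abstract, the sketch below wraps a transformer encoder and decoder in a standard variational autoencoder: the encoder maps a character-tokenized password to the parameters of an approximate posterior, a latent code is drawn via the reparameterization trick, and a causally masked transformer decoder reconstructs the password conditioned on that code. This is not the authors' implementation; all class names, hyperparameters, and the pooling and sampling choices are illustrative assumptions.

    # Minimal sketch of a transformer-based VAE for password generation
    # (assumed PyTorch; all names and hyperparameters are illustrative).
    import torch
    import torch.nn as nn

    class TransformerVAE(nn.Module):
        def __init__(self, vocab_size=100, d_model=128, latent_dim=64,
                     nhead=4, num_layers=2, max_len=16):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))
            enc = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
            self.encoder = nn.TransformerEncoder(enc, num_layers)
            # Map the pooled encoding to the parameters of q(z | x).
            self.to_mu = nn.Linear(d_model, latent_dim)
            self.to_logvar = nn.Linear(d_model, latent_dim)
            # Decoder attends to the latent code as a one-token memory.
            self.from_z = nn.Linear(latent_dim, d_model)
            dec = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
            self.decoder = nn.TransformerDecoder(dec, num_layers)
            self.out = nn.Linear(d_model, vocab_size)
            self.max_len = max_len

        def encode(self, x):
            h = self.encoder(self.embed(x) + self.pos[:, : x.size(1)])
            h = h.mean(dim=1)  # pool over character positions
            return self.to_mu(h), self.to_logvar(h)

        def reparameterize(self, mu, logvar):
            return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

        def decode(self, z, x):
            # Teacher-forced decoding with a causal mask over the targets.
            mem = self.from_z(z).unsqueeze(1)
            tgt = self.embed(x) + self.pos[:, : x.size(1)]
            L = x.size(1)
            mask = torch.triu(
                torch.full((L, L), float("-inf"), device=x.device), diagonal=1)
            return self.out(self.decoder(tgt, mem, tgt_mask=mask))

        def forward(self, x):
            mu, logvar = self.encode(x)
            z = self.reparameterize(mu, logvar)
            logits = self.decode(z, x)
            # ELBO: next-character reconstruction loss plus KL to N(0, I),
            # assuming each sequence starts with a BOS token.
            rec = nn.functional.cross_entropy(
                logits[:, :-1].reshape(-1, logits.size(-1)),
                x[:, 1:].reshape(-1))
            kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
            return rec + kl

        @torch.no_grad()
        def sample(self, n, bos_id=1):
            # Draw latent codes from the prior and decode autoregressively.
            z = torch.randn(n, self.to_mu.out_features)
            x = torch.full((n, 1), bos_id, dtype=torch.long)
            for _ in range(self.max_len - 1):
                nxt = torch.multinomial(
                    self.decode(z, x)[:, -1].softmax(-1), 1)
                x = torch.cat([x, nxt], dim=1)
            return x

In this setup, training minimizes the ELBO over a corpus of leaked passwords; after training, sample() draws latent codes from the standard-normal prior and decodes them into candidate passwords, which is the natural VAE sampling approach the abstract refers to.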

  • Published in:
ARES '22: Proceedings of the 17th International Conference on Availability, Reliability and Security (ARES)
  • Type:
    Inproceedings
  • Authors:
    D. Biesner, K. Cvejoski, R. Sifa
  • Year:
    2022

Citation information

D. Biesner, K. Cvejoski, R. Sifa: Combining Variational Autoencoders and Transformer Language Models for Improved Password Generation, ARES '22: Proceedings of the 17th International Conference on Availability, Reliability and Security (ARES), 2022, https://doi.org/10.1145/3538969.3539000