Password generation has recently been explored using deep-learning natural language processing (NLP) algorithms. Previous work has significantly raised the state of the art for password-guessing algorithms by approaching the problem with either variational autoencoders built on CNN-based encoder and decoder architectures, or with transformer-based architectures (namely GPT-2) for text generation. In this work we combine both paradigms, introducing a novel architecture that couples the expressive power of transformers with the natural sampling approach to text generation of variational autoencoders. We show that our architecture achieves state-of-the-art password-matching performance across multiple benchmark datasets.
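The page does not include code, but the following minimal PyTorch sketch illustrates the paradigm the abstract describes: a variational autoencoder whose encoder and decoder are transformers, with passwords generated by sampling a Gaussian latent code. All layer sizes, the mean-pooled latent bottleneck, the single-slot memory conditioning, and the token ids are illustrative assumptions, not the architecture published in the paper.

import torch
import torch.nn as nn

class TransformerVAE(nn.Module):
    """Hypothetical sketch of a transformer-based VAE over password strings.

    Layer sizes, the mean-pooled latent bottleneck, and the single-slot
    memory conditioning are illustrative assumptions, not the paper's
    published architecture.
    """

    def __init__(self, vocab_size=100, d_model=128, latent_dim=64,
                 nhead=4, num_layers=2, max_len=16):
        super().__init__()
        self.latent_dim, self.max_len = latent_dim, max_len
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True),
            num_layers)
        self.to_mu = nn.Linear(d_model, latent_dim)
        self.to_logvar = nn.Linear(d_model, latent_dim)
        self.from_z = nn.Linear(latent_dim, d_model)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True),
            num_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def encode(self, tokens):
        # Encode the password and pool over positions into one vector.
        h = self.encoder(self.embed(tokens) + self.pos[:, :tokens.size(1)])
        pooled = h.mean(dim=1)
        return self.to_mu(pooled), self.to_logvar(pooled)

    def decode(self, z, tokens):
        # Condition the causal decoder on the latent code via one memory slot.
        memory = self.from_z(z).unsqueeze(1)
        tgt = self.embed(tokens) + self.pos[:, :tokens.size(1)]
        t = tokens.size(1)
        mask = torch.triu(torch.full((t, t), float("-inf")), diagonal=1)
        return self.out(self.decoder(tgt, memory, tgt_mask=mask))

    def forward(self, tokens):
        mu, logvar = self.encode(tokens)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.decode(z, tokens), mu, logvar

def vae_loss(logits, targets, mu, logvar, beta=1.0):
    # Standard ELBO: token-level cross-entropy plus KL to the unit Gaussian.
    recon = nn.functional.cross_entropy(logits.transpose(1, 2), targets)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl

@torch.no_grad()
def sample_passwords(model, n=4, bos=0):
    # Draw latent codes from the standard-normal prior and decode greedily;
    # the BOS token id 0 is an assumption about the tokenizer.
    z = torch.randn(n, model.latent_dim)
    tokens = torch.full((n, 1), bos, dtype=torch.long)
    for _ in range(model.max_len - 1):
        next_tok = model.decode(z, tokens)[:, -1].argmax(-1, keepdim=True)
        tokens = torch.cat([tokens, next_tok], dim=1)
    return tokens  # (n, max_len) token ids; random until the model is trained

In this reading, training would maximize the ELBO (vae_loss) on a corpus of leaked passwords, and generation reduces to drawing z from the prior and decoding, which is the VAE-style sampling the abstract refers to.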
Combining Variational Autoencoders and Transformer Language Models for Improved Password Generation
Type: Inproceedings
Author: D. Biesner, K. Cvejoski, R. Sifa
Journal: ARES '22: Proceedings of the 17th International Conference on Availability, Reliability and Security
Year: 2022
Citation information
D. Biesner, K. Cvejoski, R. Sifa:
Combining Variational Autoencoders and Transformer Language Models for Improved Password Generation.
ARES '22: Proceedings of the 17th International Conference on Availability, Reliability and Security,
Article 37, pp. 1-6, August 2022,
https://doi.org/10.1145/3538969.3539000