Object-centric Video Prediction via Decoupling of Object Dynamics and Interactions
We propose a novel framework for the task of object-centric video prediction, i.e., extracting the compositional structure of a video sequence and modeling object dynamics and interactions from visual observations in order to predict future object states, from which subsequent video frames can then be generated. With the goal of learning meaningful spatio-temporal object representations and accurately forecasting object states, we propose two novel object-centric video predictor (OCVP) transformer modules, which decouple the processing of temporal dynamics and object interactions, thus achieving improved prediction performance. In our experiments, we show how our object-centric prediction framework utilizing our OCVP predictors outperforms object-agnostic video prediction models on two different datasets, while maintaining consistent and accurate object representations.
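The core idea of the OCVP modules described above is to separate attention over time (each object attends to its own history) from attention over objects (objects at the same time step attend to each other). The following is a minimal NumPy sketch of this decoupling in its sequential form; the function names, shapes, and single-head attention are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # x: (seq_len, dim); single-head scaled dot-product self-attention
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ x

def ocvp_block(slots):
    # slots: (T, N, D) -- T time steps, N object slots, D slot dim
    # 1) Temporal attention: each object slot attends over its own
    #    history, independently of the other objects.
    temporal = np.stack(
        [self_attention(slots[:, n]) for n in range(slots.shape[1])], axis=1
    )
    # 2) Relational attention: at each time step, slots attend to one
    #    another to model object interactions.
    relational = np.stack(
        [self_attention(temporal[t]) for t in range(temporal.shape[0])], axis=0
    )
    return relational  # same shape as the input: (T, N, D)

out = ocvp_block(np.random.randn(5, 3, 8))
print(out.shape)  # (5, 3, 8)
```

Compared with a monolithic transformer that flattens all T*N slots into one sequence, this factorization restricts each attention step to a single axis (time or objects), which is what keeps dynamics and interactions decoupled.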
- Published in: IEEE International Conference on Image Processing
- Type: Inproceedings
- Authors: Villar-Corrales, Angel; Wahdan, Ismail; Behnke, Sven
- Year: 2023
Citation information
Villar-Corrales, Angel; Wahdan, Ismail; Behnke, Sven: Object-centric Video Prediction via Decoupling of Object Dynamics and Interactions. IEEE International Conference on Image Processing, September 2023. https://ais.uni-bonn.de/papers/ICIP_2023_Villar.pdf
@Inproceedings{VillarCorrales.etal.2023a,
author={Villar-Corrales, Angel and Wahdan, Ismail and Behnke, Sven},
title={Object-centric Video Prediction via Decoupling of Object Dynamics and Interactions},
booktitle={IEEE International Conference on Image Processing},
month={September},
url={https://ais.uni-bonn.de/papers/ICIP_2023_Villar.pdf},
year={2023},
abstract={We propose a novel framework for the task of object-centric video prediction, i.e., extracting the compositional structure of a video sequence and modeling object dynamics and interactions from visual observations in order to predict future object states, from which subsequent video frames can then be generated. With the goal of learning meaningful spatio-temporal object representations and accurately forecasting object states, we propose two novel object-centric video predictor (OCVP) transformer modules, which decouple the processing of temporal dynamics and object interactions, thus achieving improved prediction performance. In our experiments, we show how our object-centric prediction framework utilizing our OCVP predictors outperforms object-agnostic video prediction models on two different datasets, while maintaining consistent and accurate object representations.}}