arxiv:2406.19320

Efficient World Models with Context-Aware Tokenization

Published on Jun 27, 2024
· Submitted by vmicheli on Jul 1, 2024

Abstract

Scaling up deep Reinforcement Learning (RL) methods presents a significant challenge. Following developments in generative modelling, model-based RL positions itself as a strong contender. Recent advances in sequence modelling have led to effective transformer-based world models, albeit at the price of heavy computations due to the long sequences of tokens required to accurately simulate environments. In this work, we propose Δ-IRIS, a new agent with a world model architecture composed of a discrete autoencoder that encodes stochastic deltas between time steps and an autoregressive transformer that predicts future deltas by summarizing the current state of the world with continuous tokens. On the Crafter benchmark, Δ-IRIS sets a new state of the art at multiple frame budgets, while being an order of magnitude faster to train than previous attention-based approaches. We release our code and models at https://github.com/vmicheli/delta-iris.
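
The two-part structure described in the abstract can be made concrete with a toy sketch. The snippet below is not the authors' implementation (that lives at https://github.com/vmicheli/delta-iris); every module name, layer choice, and dimension is an assumption made for illustration. It only shows the overall shape: a vector-quantized autoencoder that turns the change between consecutive frames into a few discrete "delta" tokens, and an autoregressive Transformer that predicts those tokens from continuous tokens summarizing the current frame and action.

```python
# Toy sketch only: names, shapes, and layer choices here are assumptions and do
# not match the official code at https://github.com/vmicheli/delta-iris.
import torch
import torch.nn as nn


class DeltaAutoencoder(nn.Module):
    """Encodes the stochastic change ("delta") between two consecutive frames
    into a small number of discrete tokens, conditioned on frame and action."""

    def __init__(self, frame_dim=256, action_dim=16, codebook_size=512,
                 num_tokens=4, dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(2 * frame_dim + action_dim, dim), nn.ReLU(),
            nn.Linear(dim, num_tokens * dim),
        )
        self.codebook = nn.Embedding(codebook_size, dim)  # vector-quantization codes
        self.decoder = nn.Sequential(
            nn.Linear(frame_dim + action_dim + num_tokens * dim, dim), nn.ReLU(),
            nn.Linear(dim, frame_dim),
        )
        self.num_tokens, self.dim = num_tokens, dim

    def quantize(self, z):
        # Nearest-code lookup (straight-through gradients omitted for brevity).
        flat = z.reshape(-1, self.dim)
        idx = torch.cdist(flat, self.codebook.weight).argmin(dim=-1)
        idx = idx.view(*z.shape[:-1])
        return idx, self.codebook(idx)

    def forward(self, prev_frame, action, next_frame):
        z = self.encoder(torch.cat([prev_frame, next_frame, action], dim=-1))
        idx, z_q = self.quantize(z.view(-1, self.num_tokens, self.dim))
        # Reconstruct the next frame from the previous frame, the action,
        # and the quantized delta tokens.
        recon = self.decoder(torch.cat([prev_frame, action, z_q.flatten(1)], dim=-1))
        return recon, idx


class DeltaTransformer(nn.Module):
    """Autoregressive model over a sequence that mixes continuous tokens
    (projections of the current frame and action) with discrete delta tokens."""

    def __init__(self, frame_dim=256, action_dim=16, codebook_size=512,
                 dim=128, layers=2):
        super().__init__()
        self.frame_proj = nn.Linear(frame_dim, dim)    # continuous "summary" token
        self.action_proj = nn.Linear(action_dim, dim)  # continuous action token
        self.delta_embed = nn.Embedding(codebook_size, dim)
        block = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(block, num_layers=layers)
        self.head = nn.Linear(dim, codebook_size)      # logits over next delta tokens

    def forward(self, frame, action, delta_tokens):
        seq = torch.cat([
            self.frame_proj(frame).unsqueeze(1),
            self.action_proj(action).unsqueeze(1),
            self.delta_embed(delta_tokens),
        ], dim=1)
        causal = nn.Transformer.generate_square_subsequent_mask(seq.size(1))
        return self.head(self.backbone(seq, mask=causal))
```

The design choice the sketch tries to convey is that only the delta between time steps is tokenized, so each simulated step costs a handful of tokens rather than a full-frame token grid, which is where the abstract's claimed speedup over previous attention-based world models comes from.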

Community

Paper author and submitter

Δ-IRIS is an agent that learns behaviors by imagining millions of trajectories in its world model. The world model combines a discrete autoencoder that encodes stochastic deltas between time steps with an autoregressive Transformer that predicts future deltas by summarizing the current state of the world with continuous tokens.
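
To show what "learning by imagining trajectories" refers to, here is a minimal, self-contained sketch of a policy being updated on rollouts generated entirely inside a stand-in world model. The `world_model_step` function, the policy network, and the REINFORCE-style update are placeholders invented for illustration; the actual Δ-IRIS training loop, objectives, and interfaces differ and are available in the released code.

```python
# Hedged sketch of learning in imagination: the stand-in dynamics model, policy,
# and policy-gradient update below are illustrative placeholders, not Δ-IRIS code.
import torch
import torch.nn as nn

state_dim, num_actions, horizon = 32, 4, 15

policy = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, num_actions))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

# Stand-in for a learned world model: maps (state, action) to (next state, reward).
dynamics = nn.Linear(state_dim + num_actions, state_dim + 1)

def world_model_step(state, action):
    one_hot = nn.functional.one_hot(action, num_actions).float()
    out = dynamics(torch.cat([state, one_hot], dim=-1))
    return out[:, :-1], out[:, -1]          # imagined next state, imagined reward

state = torch.randn(8, state_dim)           # batch of starting states (from real data)
log_probs, rewards = [], []
for _ in range(horizon):                    # imagine a trajectory, no env interaction
    dist = torch.distributions.Categorical(logits=policy(state))
    action = dist.sample()
    log_probs.append(dist.log_prob(action))
    state, reward = world_model_step(state, action)
    rewards.append(reward)

# Simple policy-gradient update on imagined returns, just to show the structure.
returns = torch.stack(rewards).sum(dim=0).detach()
loss = -(torch.stack(log_probs).sum(dim=0) * returns).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The point of the sketch is only the overall structure: once a world model is available, behavior learning proceeds on imagined rollouts without further environment interaction.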


Models citing this paper 1

Datasets citing this paper 0


Spaces citing this paper 0


Collections including this paper 3