Invited talk given at the 2024 NeurIPS workshop “Time Series in the Age of Large Models”. The talk was also given as a department seminar to the Department of Data Science and AI, Faculty of IT, Monash University. The video on YouTube is a slightly modified, re-recorded version.
Abstract
In this talk, we will discuss some fundamental limitations we perceive in how foundational models currently operate in the context of time series forecasting. We will argue that training on ever more data is not always beneficial, and we will illustrate how common yet often inadequate evaluation practices can obscure these limitations. Ultimately, we will argue that the way to address these challenges is through multimodality.
Papers
Full-text PDFs are available via the PDF icon at the following links:
-
Christoph Bergmeir (2024) LLMs and Foundational Models: Not (Yet) as Good as Hoped. In: Foresight: The International Journal of Applied Forecasting, (73). A preliminary version of this paper was published on LinkedIn.