Commit

update data
actions-user committed Nov 11, 2024
1 parent e219057 commit 42f6c20
Showing 64 changed files with 961 additions and 1,039 deletions.
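
The pages shown below all change in the same two ways: the "last updated" footer timestamp is bumped to the new run date, and the numeric columns in each recommendation table (which appear to be citation-related counts, given that the row links point at semanticscholar.org) are refreshed; a few pages also swap whole recommendation rows in and out. The sketch below is only a guess at what such a scheduled refresh might look like. It assumes a Python job and the public Semantic Scholar Graph API; neither is confirmed by this diff, and the field names are illustrative.

```python
# Hypothetical sketch of the nightly refresh behind an "update data" commit.
# Assumptions: the job is a Python script, the two numeric table columns map to
# citationCount / influentialCitationCount, and paper IDs come from the file names.
from datetime import datetime, timezone

import requests

GRAPH_API = "https://api.semanticscholar.org/graph/v1/paper/{paper_id}"


def fetch_counts(paper_id: str) -> dict:
    """Fetch refreshed fields for one paper from the Semantic Scholar Graph API."""
    resp = requests.get(
        GRAPH_API.format(paper_id=paper_id),
        params={"fields": "title,citationCount,influentialCitationCount"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


def footer_line() -> str:
    """Render the footer line in the same format as the changed pages."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
    return f'<i class="footer">This page was last updated on {stamp} UTC</i>'


if __name__ == "__main__":
    # Paper ID taken from one of the changed files in this commit.
    print(fetch_counts("0acd117521ef5aafb09fed02ab415523b330b058"))
    print(footer_line())
```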
@@ -11,7 +11,7 @@ hide:

<body>
<p>
- <i class="footer">This page was last updated on 2024-11-04 06:06:05 UTC</i>
+ <i class="footer">This page was last updated on 2024-11-11 06:05:45 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
10 changes: 5 additions & 5 deletions docs/recommendations/0acd117521ef5aafb09fed02ab415523b330b058.md
@@ -11,7 +11,7 @@ hide:

<body>
<p>
- <i class="footer">This page was last updated on 2024-11-04 06:06:15 UTC</i>
+ <i class="footer">This page was last updated on 2024-11-11 06:05:51 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -74,8 +74,8 @@ hide:
</td>
<td>2015-09-11</td>
<td>Proceedings of the National Academy of Sciences</td>
- <td>3383</td>
- <td>66</td>
+ <td>3408</td>
+ <td>67</td>
</tr>

<tr id="None">
@@ -98,7 +98,7 @@ hide:
</td>
<td>2020-05-05</td>
<td>Nature Communications</td>
- <td>282</td>
+ <td>289</td>
<td>13</td>
</tr>

@@ -135,7 +135,7 @@ hide:
<td>2017-12-01</td>
<td>2017 IEEE 7th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)</td>
<td>12</td>
- <td>66</td>
+ <td>67</td>
</tr>

</tbody>
@@ -11,7 +11,7 @@ hide:

<body>
<p>
- <i class="footer">This page was last updated on 2024-11-04 06:05:36 UTC</i>
+ <i class="footer">This page was last updated on 2024-11-11 06:05:10 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -50,7 +50,7 @@ hide:
</td>
<td>2023-10-24</td>
<td>ArXiv</td>
- <td>8</td>
+ <td>9</td>
<td>50</td>
</tr>

@@ -109,7 +109,7 @@ hide:
Hongyuan Yu, Ting Li, Weichen Yu, Jianguo Li, Yan Huang, Liang Wang, A. Liu
</td>
<td>2022-07-01</td>
- <td>DBLP, ArXiv</td>
+ <td>ArXiv, DBLP</td>
<td>44</td>
<td>35</td>
</tr>
18 changes: 9 additions & 9 deletions docs/recommendations/123acfbccca0460171b6b06a4012dbb991cde55b.md
@@ -11,7 +11,7 @@ hide:

<body>
<p>
- <i class="footer">This page was last updated on 2024-11-04 06:05:37 UTC</i>
+ <i class="footer">This page was last updated on 2024-11-11 06:05:12 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -44,13 +44,13 @@ hide:

<tr id="We introduce Chronos, a simple yet effective framework for pretrained probabilistic time series models. Chronos tokenizes time series values using scaling and quantization into a fixed vocabulary and trains existing transformer-based language model architectures on these tokenized time series via the cross-entropy loss. We pretrained Chronos models based on the T5 family (ranging from 20M to 710M parameters) on a large collection of publicly available datasets, complemented by a synthetic dataset that we generated via Gaussian processes to improve generalization. In a comprehensive benchmark consisting of 42 datasets, and comprising both classical local models and deep learning methods, we show that Chronos models: (a) significantly outperform other methods on datasets that were part of the training corpus; and (b) have comparable and occasionally superior zero-shot performance on new datasets, relative to methods that were trained specifically on them. Our results demonstrate that Chronos models can leverage time series data from diverse domains to improve zero-shot accuracy on unseen forecasting tasks, positioning pretrained models as a viable tool to greatly simplify forecasting pipelines.">
<td id="tag"><i class="material-icons">visibility_off</i></td>
- <td><a href="https://www.semanticscholar.org/paper/4fb78450650894091b0a55b5504a8b0a6f3dec37" target='_blank'>Chronos: Learning the Language of Time Series</a></td>
+ <td><a href="https://www.semanticscholar.org/paper/02fa77e4f355198cb4270f6d4a07517bf09c46dd" target='_blank'>Chronos: Learning the Language of Time Series</a></td>
<td>
Abdul Fatir Ansari, Lorenzo Stella, Caner Turkmen, Xiyuan Zhang, Pedro Mercado, Huibin Shen, Oleksandr Shchur, Syama Sundar Rangapuram, Sebastian Pineda Arango, Shubham Kapoor, Jasper Zschiegner, Danielle C. Maddix, Michael W. Mahoney, Kari Torkkola, Andrew Gordon Wilson, Michael Bohlke-Schneider, Yuyang Wang
</td>
<td>2024-03-12</td>
<td>ArXiv</td>
- <td>63</td>
+ <td>69</td>
<td>18</td>
</tr>

@@ -86,7 +86,7 @@ hide:
</td>
<td>2024-06-22</td>
<td>ArXiv</td>
- <td>11</td>
+ <td>13</td>
<td>3</td>
</tr>

@@ -97,8 +97,8 @@ hide:
Yong Liu, Haoran Zhang, Chenyu Li, Xiangdong Huang, Jianmin Wang, Mingsheng Long
</td>
<td>2024-02-04</td>
- <td>DBLP, ArXiv</td>
- <td>18</td>
+ <td>ArXiv, DBLP</td>
+ <td>21</td>
<td>67</td>
</tr>

@@ -110,7 +110,7 @@ hide:
</td>
<td>2023-10-05</td>
<td>ArXiv</td>
- <td>57</td>
+ <td>60</td>
<td>5</td>
</tr>

@@ -134,7 +134,7 @@ hide:
</td>
<td>2024-02-16</td>
<td>ArXiv</td>
- <td>4</td>
+ <td>5</td>
<td>8</td>
</tr>

@@ -146,7 +146,7 @@ hide:
</td>
<td>2023-10-03</td>
<td>ArXiv</td>
- <td>184</td>
+ <td>195</td>
<td>9</td>
</tr>

12 changes: 6 additions & 6 deletions docs/recommendations/16f01c1b3ddd0b2abd5ddfe4fdb3f74767607277.md
@@ -11,7 +11,7 @@ hide:

<body>
<p>
- <i class="footer">This page was last updated on 2024-11-04 06:05:41 UTC</i>
+ <i class="footer">This page was last updated on 2024-11-11 06:05:18 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -62,7 +62,7 @@ hide:
</td>
<td>2024-02-16</td>
<td>ArXiv</td>
- <td>4</td>
+ <td>5</td>
<td>8</td>
</tr>

@@ -74,7 +74,7 @@ hide:
</td>
<td>2024-02-25</td>
<td>ArXiv</td>
- <td>8</td>
+ <td>11</td>
<td>8</td>
</tr>

@@ -86,7 +86,7 @@ hide:
</td>
<td>2022-09-20</td>
<td>IEEE Transactions on Knowledge and Data Engineering</td>
- <td>77</td>
+ <td>80</td>
<td>17</td>
</tr>

@@ -134,7 +134,7 @@ hide:
</td>
<td>2023-08-16</td>
<td>ArXiv</td>
- <td>21</td>
+ <td>22</td>
<td>3</td>
</tr>

@@ -146,7 +146,7 @@ hide:
</td>
<td>2024-09-17</td>
<td>ArXiv</td>
- <td>1</td>
+ <td>2</td>
<td>1</td>
</tr>

@@ -11,7 +11,7 @@ hide:

<body>
<p>
- <i class="footer">This page was last updated on 2024-11-04 06:05:38 UTC</i>
+ <i class="footer">This page was last updated on 2024-11-11 06:05:13 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -61,8 +61,8 @@ hide:
Ali Behrouz, Farnoosh Hashemi
</td>
<td>2024-02-13</td>
- <td>DBLP, ArXiv</td>
- <td>37</td>
+ <td>ArXiv, DBLP</td>
+ <td>39</td>
<td>9</td>
</tr>

56 changes: 26 additions & 30 deletions docs/recommendations/279cd637b7e38bba1dd8915b5ce68cbcacecbe68.md
@@ -11,7 +11,7 @@ hide:

<body>
<p>
- <i class="footer">This page was last updated on 2024-11-04 06:05:46 UTC</i>
+ <i class="footer">This page was last updated on 2024-11-11 06:05:28 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -49,7 +49,7 @@ hide:
Andreas Doerr, Christian Daniel, Martin Schiegg, D. Nguyen-Tuong, S. Schaal, Marc Toussaint, Sebastian Trimpe
</td>
<td>2018-01-31</td>
- <td>DBLP, MAG, ArXiv</td>
+ <td>ArXiv, MAG, DBLP</td>
<td>113</td>
<td>93</td>
</tr>
@@ -66,6 +66,18 @@ hide:
<td>40</td>
</tr>

<tr id="World modelling is essential for understanding and predicting the dynamics of complex systems by learning both spatial and temporal dependencies. However, current frameworks, such as Transformers and selective state-space models like Mambas, exhibit limitations in efficiently encoding spatial and temporal structures, particularly in scenarios requiring long-term high-dimensional sequence modelling. To address these issues, we propose a novel recurrent framework, the \textbf{FACT}ored \textbf{S}tate-space (\textbf{FACTS}) model, for spatial-temporal world modelling. The FACTS framework constructs a graph-structured memory with a routing mechanism that learns permutable memory representations, ensuring invariance to input permutations while adapting through selective state-space propagation. Furthermore, FACTS supports parallel computation of high-dimensional sequences. We empirically evaluate FACTS across diverse tasks, including multivariate time series forecasting and object-centric world modelling, demonstrating that it consistently outperforms or matches specialised state-of-the-art models, despite its general-purpose world modelling design.">
<td id="tag"><i class="material-icons">visibility_off</i></td>
<td><a href="https://www.semanticscholar.org/paper/ec840755867d3f5cf175cb57de963f042297f4ef" target='_blank'>FACTS: A Factored State-Space Framework For World Modelling</a></td>
<td>
Li Nanbo, Firas Laakom, Yucheng Xu, Wenyi Wang, Jurgen Schmidhuber
</td>
<td>2024-10-28</td>
<td>ArXiv</td>
<td>0</td>
<td>7</td>
</tr>

<tr id="Time series with long-term structure arise in a variety of contexts and capturing this temporal structure is a critical challenge in time series analysis for both inference and forecasting settings. Traditionally, state space models have been successful in providing uncertainty estimates of trajectories in the latent space. More recently, deep learning, attention-based approaches have achieved state of the art performance for sequence modeling, though often require large amounts of data and parameters to do so. We propose Stanza, a nonlinear, non-stationary state space model as an intermediate approach to fill the gap between traditional models and modern deep learning approaches for complex time series. Stanza strikes a balance between competitive forecasting accuracy and probabilistic, interpretable inference for highly structured time series. In particular, Stanza achieves forecasting accuracy competitive with deep LSTMs on real-world datasets, especially for multi-step ahead forecasting.">
<td id="tag"><i class="material-icons">visibility_off</i></td>
<td><a href="https://www.semanticscholar.org/paper/9c619e58d073772ff298228d47afcab625c7f37a" target='_blank'>Stanza: A Nonlinear State Space Model for Probabilistic Inference in Non-Stationary Time Series</a></td>
@@ -90,6 +102,18 @@ hide:
<td>50</td>
</tr>

<tr id="Forecasting the behaviour of complex dynamical systems such as interconnected sensor networks characterized by high-dimensional multivariate time series(MTS) is of paramount importance for making informed decisions and planning for the future in a broad spectrum of applications. Graph forecasting networks(GFNs) are well-suited for forecasting MTS data that exhibit spatio-temporal dependencies. However, most prior works of GFN-based methods on MTS forecasting rely on domain-expertise to model the nonlinear dynamics of the system, but neglect the potential to leverage the inherent relational-structural dependencies among time series variables underlying MTS data. On the other hand, contemporary works attempt to infer the relational structure of the complex dependencies between the variables and simultaneously learn the nonlinear dynamics of the interconnected system but neglect the possibility of incorporating domain-specific prior knowledge to improve forecast accuracy. To this end, we propose a hybrid architecture that combines explicit prior knowledge with implicit knowledge of the relational structure within the MTS data. It jointly learns intra-series temporal dependencies and inter-series spatial dependencies by encoding time-conditioned structural spatio-temporal inductive biases to provide more accurate and reliable forecasts. It also models the time-varying uncertainty of the multi-horizon forecasts to support decision-making by providing estimates of prediction uncertainty. The proposed architecture has shown promising results on multiple benchmark datasets and outperforms state-of-the-art forecasting methods by a significant margin. We report and discuss the ablation studies to validate our forecasting architecture.">
<td id="tag"><i class="material-icons">visibility_off</i></td>
<td><a href="https://www.semanticscholar.org/paper/05cd3afc8208f1c0dd61b09a90f35dd42497e175" target='_blank'>Multi-Knowledge Fusion Network for Time Series Representation Learning</a></td>
<td>
Sakhinana Sagar Srinivas, Shivam Gupta, Krishna Sai Sudhir Aripirala, Venkataramana Runkana
</td>
<td>2024-08-22</td>
<td>ArXiv</td>
<td>0</td>
<td>3</td>
</tr>

<tr id="Real-world dynamical systems often consist of multiple stochastic subsystems that interact with each other. Modeling and forecasting the behavior of such dynamics are generally not easy, due to the inherent hardness in understanding the complicated interactions and evolutions of their constituents. This paper introduces the relational state-space model (R-SSM), a sequential hierarchical latent variable model that makes use of graph neural networks (GNNs) to simulate the joint state transitions of multiple correlated objects. By letting GNNs cooperate with SSM, R-SSM provides a flexible way to incorporate relational information into the modeling of multi-object dynamics. We further suggest augmenting the model with normalizing flows instantiated for vertex-indexed random variables and propose two auxiliary contrastive objectives to facilitate the learning. The utility of R-SSM is empirically evaluated on synthetic and real time series datasets.">
<td id="tag"><i class="material-icons">visibility_off</i></td>
<td><a href="https://www.semanticscholar.org/paper/7a1e5377b08489c2969f73c56efc557e34f578e1" target='_blank'>Relational State-Space Model for Stochastic Multi-Object Systems</a></td>
@@ -114,34 +138,6 @@ hide:
<td>12</td>
</tr>

<tr id="Time series modeling is a well-established problem, which often requires that methods (1) expressively represent complicated dependencies, (2) forecast long horizons, and (3) efficiently train over long sequences. State-space models (SSMs) are classical models for time series, and prior works combine SSMs with deep learning layers for efficient sequence modeling. However, we find fundamental limitations with these prior approaches, proving their SSM representations cannot express autoregressive time series processes. We thus introduce SpaceTime, a new state-space time series architecture that improves all three criteria. For expressivity, we propose a new SSM parameterization based on the companion matrix -- a canonical representation for discrete-time processes -- which enables SpaceTime's SSM layers to learn desirable autoregressive processes. For long horizon forecasting, we introduce a"closed-loop"variation of the companion SSM, which enables SpaceTime to predict many future time-steps by generating its own layer-wise inputs. For efficient training and inference, we introduce an algorithm that reduces the memory and compute of a forward pass with the companion matrix. With sequence length $\ell$ and state-space size $d$, we go from $\tilde{O}(d \ell)$ na\"ively to $\tilde{O}(d + \ell)$. In experiments, our contributions lead to state-of-the-art results on extensive and diverse benchmarks, with best or second-best AUROC on 6 / 7 ECG and speech time series classification, and best MSE on 14 / 16 Informer forecasting tasks. Furthermore, we find SpaceTime (1) fits AR($p$) processes that prior deep SSMs fail on, (2) forecasts notably more accurately on longer horizons than prior state-of-the-art, and (3) speeds up training on real-world ETTh1 data by 73% and 80% relative wall-clock time over Transformers and LSTMs.">
<td id="tag"><i class="material-icons">visibility_off</i></td>
<td><a href="https://www.semanticscholar.org/paper/a7d68b1702af08ce4dbbf2cd0b083e744ae5c6be" target='_blank'>Effectively Modeling Time Series with Simple Discrete State Spaces</a></td>
<td>
Michael Zhang, Khaled Kamal Saab, Michael Poli, Tri Dao, Karan Goel, Christopher Ré
</td>
<td>2023-03-16</td>
<td>ArXiv</td>
<td>31</td>
<td>45</td>
</tr>

<tr id="

Gaussian state space models have been used for decades as generative models of sequential data. They admit an intuitive probabilistic interpretation, have a simple functional form, and enjoy widespread adoption. We introduce a unified algorithm to efficiently learn a broad class of linear and non-linear state space models, including variants where the emission and transition distributions are modeled by deep neural networks. Our learning algorithm simultaneously learns a compiled inference network and the generative model, leveraging a structured variational approximation parameterized by recurrent neural networks to mimic the posterior distribution. We apply the learning algorithm to both synthetic and real-world datasets, demonstrating its scalability and versatility. We find that using the structured approximation to the posterior results in models with significantly higher held-out likelihood.

">
<td id="tag"><i class="material-icons">visibility_off</i></td>
<td><a href="https://www.semanticscholar.org/paper/2af17f153e3fd71e15db9216b972aef222f46617" target='_blank'>Structured Inference Networks for Nonlinear State Space Models</a></td>
<td>
R. G. Krishnan, Uri Shalit, D. Sontag
</td>
<td>2016-09-30</td>
<td>DBLP, MAG, ArXiv</td>
<td>435</td>
<td>48</td>
</tr>

</tbody>
<tfoot>
<tr>