Issue Type
Bug
Source
source
Keras Version
Keras 2.14
Custom Code
Yes
OS Platform and Distribution
No response
Python version
No response
GPU model and memory
No response
Current Behavior?
I noticed that positional encoding is not used in the timeseries_classification_transformer.py example. Since self-attention is permutation-equivariant, the model would seem to have no way to distinguish the order of time steps without it, and sequence order matters in time series data. Why was it omitted, and does the omission hurt the model's effectiveness for time series classification? I'd appreciate any insight into this design choice. Thank you.
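For reference, here is a minimal sketch of the kind of positional encoding I had expected to see, assuming the fixed sinusoidal scheme from "Attention Is All You Need". The Dense projection, the dimensions, and the names below are my own illustration, not code from the published example:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers


def sinusoidal_positional_encoding(seq_len, d_model):
    # Fixed sin/cos table from "Attention Is All You Need".
    positions = np.arange(seq_len)[:, np.newaxis]        # (seq_len, 1)
    dims = np.arange(d_model)[np.newaxis, :]             # (1, d_model)
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / np.float32(d_model))
    angles = positions * angle_rates                     # (seq_len, d_model)
    table = np.zeros((seq_len, d_model), dtype=np.float32)
    table[:, 0::2] = np.sin(angles[:, 0::2])             # even dims: sine
    table[:, 1::2] = np.cos(angles[:, 1::2])             # odd dims: cosine
    return tf.constant(table[np.newaxis, ...])           # (1, seq_len, d_model)


# Hypothetical usage: project the univariate series up to d_model channels
# first, because adding a width-1 sinusoid to the raw (seq_len, 1) input
# would carry almost no positional information.
seq_len, d_model = 500, 64
inputs = keras.Input(shape=(seq_len, 1))
x = layers.Dense(d_model)(inputs)
x = x + sinusoidal_positional_encoding(seq_len, d_model)
# ...the example's transformer_encoder blocks and classification head
# would follow here unchanged.
```

If the omission is intentional (for instance, because this particular task turns out to be insensitive to step order), it would help to note that in the example.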
Standalone code to reproduce the issue or tutorial link
https://keras.io/examples/timeseries/timeseries_classification_transformer/
Relevant log output
No response
I'm still waiting for a response to my question about the lack of positional encoding in the timeseries_classification_transformer.py example. Can someone clarify why it was omitted and what impact it has on model performance?