Need for Explicit Assumptions #394
Replies: 2 comments
-
One thing that I wanted to propose is to set
This is something that everyone says, but it is not clear to me. Personally, I consider the multi/single-head setup part of the model's architecture, while the presence of task labels at train/test time is a property of the scenario (and necessary for a multi-head setup to be possible). Detecting whether the task label is used only at train or test time is something we can't do right now. However, we could define a strategy that optionally masks the task label at training/test time (or do it in the
Yes, this is confusing because we need two labels for each experience. In general, you can have multiple experiences with the same task label (MIT). At the same time, even when you don't have task labels, you still need a label to identify experiences for logging purposes. This label must not be used by strategies/models at all, but it is necessary for logging, and that is where the confusion comes from. Honestly, this is difficult to enforce at the moment, and I don't have a solution right now except imposing severe limitations on plugins.
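The optional masking mentioned above could be sketched as a plain dataset wrapper. Everything below is a hypothetical illustration, not an existing Avalanche API: `MaskTaskLabels` and the triple layout `(x, y, task_label)` are assumptions for the sake of the example.

```python
# Hypothetical sketch (NOT the Avalanche API): a wrapper that hides the
# task label from whatever consumes the dataset, so a strategy or model
# cannot exploit it, while the raw dataset stays available for logging.

class MaskTaskLabels:
    """Wraps a dataset of (x, y, task_label) triples and replaces the
    task label with a constant placeholder (e.g. 0)."""

    def __init__(self, dataset, placeholder=0):
        self.dataset = dataset
        self.placeholder = placeholder

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, idx):
        x, y, _true_task = self.dataset[idx]
        # The true task label is dropped here; the strategy only ever
        # sees the placeholder value.
        return x, y, self.placeholder


# Toy usage: the logger could still read true labels from `raw`,
# while the strategy trains on `masked`.
raw = [("img0", 3, 1), ("img1", 7, 2)]
masked = MaskTaskLabels(raw)
```

Enforcing this at the dataset boundary would at least make "task labels are not used" a property you can switch on, rather than a convention plugins are trusted to follow.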
-
Moving this to Discussions; the "feature-request" label will be dismissed soon!
-
Right now it is very confusing what assumptions are being made about the data stream.
I think it would be very convenient if, when defining your setup, you could explicitly define your set of assumptions (e.g. Assumption objects, like plugins) for both training and evaluation. We could then define default sets of assumptions for the task/class/domain/data-incremental scenarios.
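To make the proposal concrete, here is a minimal sketch of what such Assumption objects could look like. All names here (`Assumptions`, `task_labels_at_train`, the scenario defaults) are hypothetical, invented for illustration; nothing below is an existing Avalanche API.

```python
# Hypothetical sketch of "Assumption objects" with per-scenario defaults.
from dataclasses import dataclass


@dataclass(frozen=True)
class Assumptions:
    """Explicit declaration of what a strategy may rely on."""
    task_labels_at_train: bool
    task_labels_at_eval: bool


# Default assumption sets for the common scenarios:
TASK_INCREMENTAL = Assumptions(task_labels_at_train=True,
                               task_labels_at_eval=True)
CLASS_INCREMENTAL = Assumptions(task_labels_at_train=False,
                                task_labels_at_eval=False)
DOMAIN_INCREMENTAL = Assumptions(task_labels_at_train=False,
                                 task_labels_at_eval=False)
```

A benchmark or evaluator could then accept an `Assumptions` instance and adapt its behavior (e.g. whether to expose task labels), instead of inferring everything from a single boolean flag.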
When defining a data stream such as SplitMNIST, setting the 'return_task_id' attribute to True or False completely changes the output of the evaluator plugin. The scenario should always return the task id, but the assumptions should determine whether it is used for training/evaluation or not. This would also make it much clearer whether we are working in a multi-head or single-head setup, as currently 'return_task_id=True' automatically implies the task-incremental setup.
When 'return_task_id=False' I also found that, on the experience object, 'task_label' remains 0, while 'current_experience' gives the actual experience count. This makes implementations very confusing, because whether to use 'task_label' or 'current_experience' depends on the assumptions.