Congrats on the code. I have a doubt regarding two of the hyperparameters: as far as I know, the layers of your model would be equivalent to each of the rows that we can see in Figure 2 of the paper, but I don't really see why we need the blocks for the implementation. Could you clarify this for me?
Thanks. Kind regards.
Hi @fmorenopino!
I think I know the answer to your question.
The number of layers means how many dilated convolutions are stacked after each other with increasing receptive fields, while the number of blocks means how many of those layer sequences are repeated in the model.
With a concrete example: if you have 3 layers in 2 blocks, then the dilations are 1, 2, 4, 1, 2, 4.
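As a minimal sketch of that convention (assuming the usual WaveNet scheme where the dilation doubles with each layer and resets at the start of every block; `build_dilations` is a hypothetical helper, not part of this repo):

```python
def build_dilations(layers: int, blocks: int) -> list[int]:
    """Return the dilation factor of each stacked convolution.

    Within a block the dilation doubles per layer (1, 2, 4, ...),
    and the sequence restarts for every block.
    """
    return [2 ** layer for _ in range(blocks) for layer in range(layers)]

print(build_dilations(3, 2))  # -> [1, 2, 4, 1, 2, 4]
```

So the receptive field grows with both hyperparameters, but only `layers` controls how far a single dilation sequence reaches before it resets.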