Academic advising #4
Comments
Hello there,
Thanks for your interest in our work. Please find below the pseudocode for the MLP design:
import torch.nn as nn

# Class name "MLP" is a placeholder; the original pseudocode omits it.
class MLP(nn.Module):
    def __init__(self, input_size, n_output, nb_units, **kwargs):
        super().__init__()
        # Two fully connected layers with a ReLU in between
        self.fc = nn.Sequential(
            nn.Linear(input_size, nb_units),
            nn.ReLU(inplace=True),
            nn.Linear(nb_units, n_output),
        )

    def forward(self, x):
        output = self.fc(x)
        return output
Regards,
Bing
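A minimal usage sketch, not from the original reply: the batch size, dim, and layer widths below are placeholders, and only the sequence length 200 comes from the question. It illustrates that nn.Linear applied to a 3D tensor maps only the last dimension, so an MLP like the one above could take the [batch, 200, dim] decoder output directly without a view():

import torch

batch, seq_len, dim = 32, 200, 128                   # placeholder sizes; 200 from the question
decoder_out = torch.randn(batch, seq_len, dim)       # stand-in for the decoder output

mlp = MLP(input_size=dim, n_output=2, nb_units=64)   # MLP module defined above; widths are placeholders
out = mlp(decoder_out)                               # linear layers act on the last dimension only
print(out.shape)                                     # torch.Size([32, 200, 2])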
Hello, thank you very much for replying. What still puzzles me is the input to this MLP layer: the decoder outputs a [batch, 200, dim] tensor, so do you reshape it with view() to [batch, 200*dim] before feeding it into the MLP, or do you feed the [batch, 200, dim] tensor into the MLP directly?
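For contrast, a hedged sketch of the flattening variant the question describes; whether the repository actually does this is not confirmed by the reply above, and the sizes and layer widths are placeholders:

import torch
import torch.nn as nn

batch, seq_len, dim = 32, 200, 128                    # placeholder sizes; 200 from the question
decoder_out = torch.randn(batch, seq_len, dim)        # stand-in for the decoder output

flat = decoder_out.reshape(batch, seq_len * dim)      # view() also works if the tensor is contiguous
mlp_flat = nn.Sequential(
    nn.Linear(seq_len * dim, 64),
    nn.ReLU(inplace=True),
    nn.Linear(64, 2),
)
print(mlp_flat(flat).shape)                           # torch.Size([32, 2]): one output per window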
Hello, I have a question about the decoder in the CTIN paper and hope you can help clear up my confusion.
As I understand it, the output of the temporal embedding should be a [200, dim] tensor,
and the decoder operates on this tensor to produce a 2D vector. When you mention using masked self-attention in the paper, do you apply the mask directly to the [200, 200] weight matrix?
Since this window produces the velocity at time t, if it only depends on the previous time step, shouldn't the mask cover the entire window? Is masking within the current window of limited value?
Could you please help me resolve these questions?
Yan
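Not the authors' code, but for reference, a minimal sketch of how a causal mask is commonly applied to the [200, 200] attention score matrix before the softmax; whether CTIN masks exactly this way is not stated in this thread, and dim below is a placeholder:

import torch

seq_len, dim = 200, 64                                # 200 from the question; dim is a placeholder
q = torch.randn(1, seq_len, dim)                      # dummy queries
k = torch.randn(1, seq_len, dim)                      # dummy keys

scores = q @ k.transpose(-2, -1) / dim ** 0.5         # [1, 200, 200] attention score matrix
mask = torch.triu(torch.ones(seq_len, seq_len), diagonal=1).bool()
scores = scores.masked_fill(mask, float("-inf"))      # hide future time steps from each position
weights = torch.softmax(scores, dim=-1)               # masked [200, 200] attention weights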
How is the final MLP layer designed? The decoder generates a tensor of shape [batch, 200, dim]. Do you use the view function to reshape it to [batch, 200*dim] before the linear layer, or do you apply the linear transformation only along the last dimension of size dim? I ask because this is what produces the velocity at the i-th moment.