(1) Grad-CAM (Gradient-weighted Class Activation Mapping) is a method for explaining the decisions of convolutional neural networks. It provides insight by visualizing which regions of a given input the model attends to. The key idea is to take the gradients of the output class score with respect to the activations of a chosen convolutional layer, average them spatially to obtain per-channel weights, and then form a weighted combination of that layer's activations, yielding a "coarse" heatmap. This heatmap can be upsampled and overlaid on the original image to show the regions the model relied on most when classifying (see Figure 1).

(2) I applied this visualization method to a fine-tuned Uni-Mol model (predicting the HOMO energy level of molecules) and obtained Figure 2.

(3) What puzzles me is that Grad-CAM is typically used with CNN-based image-classification models. Can it also be used for interpretability analysis of regression models? I would appreciate advice from the teachers and the community.
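For what it's worth, the Grad-CAM computation itself does not depend on there being a class score: for a regression model you can backpropagate the scalar prediction (e.g. the predicted HOMO energy) instead of a class logit. Below is a minimal NumPy sketch of just the aggregation step, assuming the activations `A` of a chosen layer and the gradients `dY/dA` of the scalar output `Y` with respect to them have already been obtained from your framework's autograd (the arrays here are hypothetical random values, only to show the shapes and the formula):

```python
import numpy as np

# Hypothetical precomputed tensors: in practice these come from a forward
# pass (activations) and a backward pass of the scalar prediction (gradients).
rng = np.random.default_rng(0)
activations = rng.standard_normal((8, 16, 16))  # C=8 channels, 16x16 feature map
gradients = rng.standard_normal((8, 16, 16))    # dY/dA, same shape as activations

# Per-channel weights: global-average-pool the gradients over spatial dims.
weights = gradients.mean(axis=(1, 2))           # shape (8,)

# Weighted sum over channels, then ReLU: keeps regions that push Y upward.
cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)

# Normalize to [0, 1] so the map can be upsampled and overlaid on the input.
cam = cam / (cam.max() + 1e-8)
print(cam.shape)  # (16, 16)
```

One caveat for regression: the sign of the gradient is meaningful (positive regions increase the predicted value, negative regions decrease it), so the final ReLU, which is natural for classification, may discard information you care about; some analyses keep the signed map instead.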
This is the colorbar.