
dgl error? #115

Open
fglaser opened this issue May 22, 2024 · 2 comments
fglaser commented May 22, 2024

Hi,

I installed RoseTTAFold-All-Atom exactly as instructed, and then installed the missing packages:

pip install hydra-core --upgrade
pip install icecream
pip install assertpy
pip install openbabel
pip3 install openbabel
pip install openbabel-wheel
pip install dgl==0.9.1
pip install deepdiff

My test run starts correctly, but it dies after hhblits and hhfilter finish. I installed everything as suggested, but some packages were still missing, so I installed them manually, like hydra and dgl.

Maybe the dgl package is not the correct version?

How exactly should I install it?

Thanks a lot,
Fabian

python -m rf2aa.run_inference --config-name protein.yaml
/home/fabian/localcolabfold/colabfold-conda/lib/python3.10/site-packages/hydra/_internal/defaults_list.py:251: UserWarning: In 'protein.yaml': Defaults list is missing _self_. See https://hydra.cc/docs/1.2/upgrades/1.0_to_1.1/default_composition_order for more information
warnings.warn(msg, UserWarning)
Using the cif atom ordering for TRP.
/home/fabian/RoseTTAFold-All-Atom/rf2aa/chemical.py:20: UserWarning: Using torch.cross without specifying the dim arg is deprecated.
Please either pass the dim explicitly or simply use torch.linalg.cross.
The default value of dim will change to agree with that of linalg.cross in a future release. (Triggered internally at ../aten/src/ATen/native/Cross.cpp:62.)
Z = torch.cross(Xn,Yn)
./make_msa.sh examples/protein/7u7w_A.fasta 7u7w_protein/A 4 64 pdb100_2021Mar03/pdb100_2021Mar03
signalp6 6.0g has not been installed yet.

Due to license restrictions, this recipe cannot distribute signalp6 directly.
Please download signalp-6.0g.fast.tar.gz from:
https://services.healthtech.dtu.dk/services/SignalP-6.0g/9-Downloads.php#

and run the following command to complete the installation:
$ signalp6-register signalp-6.0g.fast.tar.gz

This will copy signalp6 into your conda environment.
Running HHblits against UniRef30 with E-value cutoff 1e-10

  • 21:34:15.672 INFO: Input file = 7u7w_protein/A/hhblits/t000_.1e-10.a3m

  • 21:34:15.672 INFO: Output file = 7u7w_protein/A/hhblits/t000_.1e-10.id90cov75.a3m

  • 21:34:15.900 WARNING: Maximum number 100000 of sequences exceeded in file 7u7w_protein/A/hhblits/t000_.1e-10.a3m

  • 21:34:42.416 INFO: Input file = 7u7w_protein/A/hhblits/t000_.1e-10.a3m

  • 21:34:42.416 INFO: Output file = 7u7w_protein/A/hhblits/t000_.1e-10.id90cov50.a3m

  • 21:34:42.609 WARNING: Maximum number 100000 of sequences exceeded in file 7u7w_protein/A/hhblits/t000_.1e-10.a3m

Running PSIPRED
Running hhsearch
cat: 7u7w_protein/A/t000_.ss2: No such file or directory
Error executing job with overrides: []
Traceback (most recent call last):
File "/home/fabian/RoseTTAFold-All-Atom/rf2aa/run_inference.py", line 206, in main
runner.infer()
File "/home/fabian/RoseTTAFold-All-Atom/rf2aa/run_inference.py", line 155, in infer
outputs = self.run_model_forward(input_feats)
File "/home/fabian/RoseTTAFold-All-Atom/rf2aa/run_inference.py", line 121, in run_model_forward
outputs = recycle_step_legacy(self.model,
File "/home/fabian/RoseTTAFold-All-Atom/rf2aa/training/recycling.py", line 30, in recycle_step_legacy
output_i = ddp_model(**input_i)
File "/home/fabian/localcolabfold/colabfold-conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/fabian/localcolabfold/colabfold-conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/fabian/RoseTTAFold-All-Atom/rf2aa/model/RoseTTAFoldModel.py", line 368, in forward
msa, pair, xyz, alpha_s, xyz_allatom, state, symmsub, quat = self.simulator(
File "/home/fabian/localcolabfold/colabfold-conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/fabian/localcolabfold/colabfold-conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/fabian/RoseTTAFold-All-Atom/rf2aa/model/Track_module.py", line 1109, in forward
msa_full, pair, xyz, state, alpha, symmsub, quat = self.extra_block[i_m](msa_full, pair,
File "/home/fabian/localcolabfold/colabfold-conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/fabian/localcolabfold/colabfold-conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/fabian/RoseTTAFold-All-Atom/rf2aa/model/Track_module.py", line 963, in forward
xyz, state, alpha, quat = self.str2str(
File "/home/fabian/localcolabfold/colabfold-conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/fabian/localcolabfold/colabfold-conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/fabian/localcolabfold/colabfold-conda/lib/python3.10/site-packages/torch/amp/autocast_mode.py", line 16, in decorate_autocast
return func(*args, **kwargs)
File "/home/fabian/RoseTTAFold-All-Atom/rf2aa/model/Track_module.py", line 545, in forward
shift = self.se3(G, node.reshape(B*L, -1, 1), l1_feats, edge_feats)
File "/home/fabian/localcolabfold/colabfold-conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/fabian/localcolabfold/colabfold-conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/fabian/RoseTTAFold-All-Atom/rf2aa/model/layers/SE3_network.py", line 99, in forward
return self.se3(G, node_features, edge_features)
File "/home/fabian/localcolabfold/colabfold-conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/fabian/localcolabfold/colabfold-conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/fabian/RoseTTAFold-All-Atom/rf2aa/SE3Transformer/se3_transformer/model/transformer.py", line 185, in forward
node_feats = self.graph_modules(node_feats, edge_feats, graph=graph, basis=basis)
File "/home/fabian/localcolabfold/colabfold-conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/fabian/localcolabfold/colabfold-conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/fabian/RoseTTAFold-All-Atom/rf2aa/SE3Transformer/se3_transformer/model/transformer.py", line 47, in forward
input = module(input, *args, **kwargs)
File "/home/fabian/localcolabfold/colabfold-conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/fabian/localcolabfold/colabfold-conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/fabian/RoseTTAFold-All-Atom/rf2aa/SE3Transformer/se3_transformer/model/layers/attention.py", line 162, in forward
fused_key_value = self.to_key_value(node_features, edge_features, graph, basis)
File "/home/fabian/localcolabfold/colabfold-conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/fabian/localcolabfold/colabfold-conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/fabian/RoseTTAFold-All-Atom/rf2aa/SE3Transformer/se3_transformer/model/layers/convolution.py", line 319, in forward
src, dst = graph.edges()
File "/home/fabian/localcolabfold/colabfold-conda/lib/python3.10/site-packages/dgl/view.py", line 166, in call
return self._graph.all_edges(*args, **kwargs)
File "/home/fabian/localcolabfold/colabfold-conda/lib/python3.10/site-packages/dgl/heterograph.py", line 3417, in all_edges
src, dst, eid = self._graph.edges(self.get_etype_id(etype), order)
File "/home/fabian/localcolabfold/colabfold-conda/lib/python3.10/site-packages/dgl/heterograph_index.py", line 609, in edges
edge_array = _CAPI_DGLHeteroEdges(self, int(etype), order)
File "dgl/_ffi/_cython/./function.pxi", line 293, in dgl._ffi._cy3.core.FunctionBase.call
File "dgl/_ffi/_cython/./function.pxi", line 225, in dgl._ffi._cy3.core.FuncCall
File "dgl/_ffi/_cython/./function.pxi", line 215, in dgl._ffi._cy3.core.FuncCall3
dgl._ffi.base.DGLError: [21:39:54] /opt/dgl/src/array/array.cc:34: Operator Range does not support cuda device.
Stack trace:
[bt] (0) /home/fabian/localcolabfold/colabfold-conda/lib/python3.10/site-packages/dgl/libdgl.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x4f) [0x7f58be7686ef]
[bt] (1) /home/fabian/localcolabfold/colabfold-conda/lib/python3.10/site-packages/dgl/libdgl.so(dgl::aten::Range(long, long, unsigned char, DLContext)+0xc1) [0x7f58be740b61]
[bt] (2) /home/fabian/localcolabfold/colabfold-conda/lib/python3.10/site-packages/dgl/libdgl.so(dgl::UnitGraph::COO::Edges(unsigned long, std::string const&) const+0x9b) [0x7f58bebaa22b]
[bt] (3) /home/fabian/localcolabfold/colabfold-conda/lib/python3.10/site-packages/dgl/libdgl.so(dgl::UnitGraph::Edges(unsigned long, std::string const&) const+0xa1) [0x7f58beba4f31]
[bt] (4) /home/fabian/localcolabfold/colabfold-conda/lib/python3.10/site-packages/dgl/libdgl.so(dgl::HeteroGraph::Edges(unsigned long, std::string const&) const+0x2a) [0x7f58beaa44ba]
[bt] (5) /home/fabian/localcolabfold/colabfold-conda/lib/python3.10/site-packages/dgl/libdgl.so(+0x4e6e2c) [0x7f58beaade2c]
[bt] (6) /home/fabian/localcolabfold/colabfold-conda/lib/python3.10/site-packages/dgl/libdgl.so(DGLFuncCall+0x48) [0x7f58bea30118]
[bt] (7) /home/fabian/localcolabfold/colabfold-conda/lib/python3.10/site-packages/dgl/_ffi/_cy3/core.cpython-310-x86_64-linux-gnu.so(+0x1525c) [0x7f58be1bf25c]
[bt] (8) /home/fabian/localcolabfold/colabfold-conda/lib/python3.10/site-packages/dgl/_ffi/_cy3/core.cpython-310-x86_64-linux-gnu.so(+0x1578b) [0x7f58be1bf78b]

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.


fglaser commented May 23, 2024

Can somebody please send a protocol for how exactly to install hydra and dgl?
I don't understand why it is trying to run hydra from the colabfold environment...
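
(For what it's worth, one quick way to see which environment the packages actually resolve from is a check like the following — a minimal sketch, using only the modules mentioned above:)

import sys
import hydra
import dgl

# Print which interpreter is running and where each package was imported
# from. If these paths point into localcolabfold/colabfold-conda, that
# environment (not a dedicated RoseTTAFold-All-Atom one) is being used.
print(sys.executable)
print(hydra.__file__)
print(dgl.__file__)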

Thanks!
Fabian


kimdn commented Jul 26, 2024

I met "dgl._ffi.base.DGLError: [21:39:54] /opt/dgl/src/array/array.cc:34: Operator Range does not support cuda device." error as well.

My solution was:
pip install dgl==1.1.3+cu118 -f https://data.dgl.ai/wheels/cu118/repo.html

This is consistent with the error, which concerns DGL's lack of CUDA support: the dgl==1.1.3+cu118 specifier installs a build of DGL compiled against CUDA 11.8, unlike a plain pip install dgl==1.1.3, which installs the CPU-only build.
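
(A quick way to verify that the CUDA-enabled build took effect — a minimal sketch, assuming a CUDA-capable GPU and the cu118 wheel above:)

import dgl
import torch

# Expect a version like '1.1.3+cu118' and True for CUDA availability.
print(dgl.__version__)
print(torch.cuda.is_available())

# Build a tiny graph and move it to the GPU; calling graph.edges() on a
# CUDA graph is exactly what raised "Operator Range does not support
# cuda device" with the CPU-only build.
g = dgl.graph(([0, 1], [1, 2])).to('cuda')
print(g.edges())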
