TypeError: __call__() got an unexpected keyword argument 'hazards' #10

aletolia opened this issue Aug 22, 2023 · 0 comments
Hello, I have used CLAM before, so I am very interested in this project. However, this issue has been troubling me for several days. I attempted to replicate the pipeline on PORPOISE using TCGA-STAD data: I used CLAM for feature extraction and followed all the other steps described in the README.md, using the tcga_stad_all_clean.csv.zip file from the dataset_csv_mutsig folder and the corresponding tcga_stad split. Despite this, I keep running into the following error.

Below is the error log:

```
CUDA_VISIBLE_DEVICES=0 python main.py --which_splits 5foldcv --split_dir tcga_stad --mode coattn --reg_type pathomic --model_type mcat --apply_sig --fusion bilinear
/home/aletolia/anaconda3/envs/porpoise/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/aletolia/anaconda3/envs/porpoise/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/aletolia/anaconda3/envs/porpoise/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:528: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/aletolia/anaconda3/envs/porpoise/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:529: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/aletolia/anaconda3/envs/porpoise/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:530: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/aletolia/anaconda3/envs/porpoise/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:535: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
Experiment Name: tcga_stad_MCAT_nll_surv_a0.0_pathomicreg1e-05_5foldcv_gc32_bilinear

Load Dataset
(0, 0) : 0
(0, 1) : 1
(1, 0) : 2
(1, 1) : 3
(2, 0) : 4
(2, 1) : 5
(3, 0) : 6
(3, 1) : 7
label column: survival_months
label dictionary: {(0, 0): 0, (0, 1): 1, (1, 0): 2, (1, 1): 3, (2, 0): 4, (2, 1): 5, (3, 0): 6, (3, 1): 7}
number of classes: 8
slide-level counts:
7 117
5 51
4 35
6 35
2 35
3 13
0 35
1 28
Name: label, dtype: int64
Patient-LVL; Number of samples registered in class 0: 35
Slide-LVL; Number of samples registered in class 0: 35
Patient-LVL; Number of samples registered in class 1: 28
Slide-LVL; Number of samples registered in class 1: 28
Patient-LVL; Number of samples registered in class 2: 35
Slide-LVL; Number of samples registered in class 2: 35
Patient-LVL; Number of samples registered in class 3: 13
Slide-LVL; Number of samples registered in class 3: 13
Patient-LVL; Number of samples registered in class 4: 35
Slide-LVL; Number of samples registered in class 4: 35
Patient-LVL; Number of samples registered in class 5: 51
Slide-LVL; Number of samples registered in class 5: 51
Patient-LVL; Number of samples registered in class 6: 35
Slide-LVL; Number of samples registered in class 6: 35
Patient-LVL; Number of samples registered in class 7: 117
Slide-LVL; Number of samples registered in class 7: 117
label column: survival_months
label dictionary: {(0, 0): 0, (0, 1): 1, (1, 0): 2, (1, 1): 3, (2, 0): 4, (2, 1): 5, (3, 0): 6, (3, 1): 7}
number of classes: 8
slide-level counts:
7 117
5 51
4 35
6 35
2 35
3 13
0 35
1 28
Name: label, dtype: int64
Patient-LVL; Number of samples registered in class 0: 35
Slide-LVL; Number of samples registered in class 0: 35
Patient-LVL; Number of samples registered in class 1: 28
Slide-LVL; Number of samples registered in class 1: 28
Patient-LVL; Number of samples registered in class 2: 35
Slide-LVL; Number of samples registered in class 2: 35
Patient-LVL; Number of samples registered in class 3: 13
Slide-LVL; Number of samples registered in class 3: 13
Patient-LVL; Number of samples registered in class 4: 35
Slide-LVL; Number of samples registered in class 4: 35
Patient-LVL; Number of samples registered in class 5: 51
Slide-LVL; Number of samples registered in class 5: 51
Patient-LVL; Number of samples registered in class 6: 35
Slide-LVL; Number of samples registered in class 6: 35
Patient-LVL; Number of samples registered in class 7: 117
Slide-LVL; Number of samples registered in class 7: 117
split_dir ./splits/5foldcv/tcga_stad
################# Settings ###################
num_splits: 5
k_start: -1
k_end: -1
task: tcga_stad_survival
max_epochs: 20
results_dir: ./results_new
lr: 0.0002
experiment: tcga_stad_MCAT_nll_surv_a0.0_pathomicreg1e-05_5foldcv_gc32_bilinear
reg: 1e-05
label_frac: 1.0
bag_loss: nll_surv
seed: 1
model_type: mcat
model_size_wsi: small
model_size_omic: small
use_drop_out: True
weighted_sample: True
gc: 32
opt: adam
split_dir: ./splits/5foldcv/tcga_stad
Shape (279, 2533)
Shape (70, 2533)
****** Normalizing Data ******
training: 279, validation: 70
Genomic Dimensions [93, 341, 537, 436, 219, 437]

Training Fold 0!

Init train/val/test splits...
Done!
Training on 279 samples
Validating on 70 samples

Init loss function... Done!

Init Model... Done!
MCAT_Surv(
(wsi_net): Sequential(
(0): Linear(in_features=1024, out_features=256, bias=True)
(1): ReLU()
(2): Dropout(p=0.25, inplace=False)
)
(sig_networks): ModuleList(
(0): Sequential(
(0): Sequential(
(0): Linear(in_features=93, out_features=256, bias=True)
(1): ELU(alpha=1.0)
(2): AlphaDropout(p=0.25, inplace=False)
)
(1): Sequential(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): ELU(alpha=1.0)
(2): AlphaDropout(p=0.25, inplace=False)
)
)
(1): Sequential(
(0): Sequential(
(0): Linear(in_features=341, out_features=256, bias=True)
(1): ELU(alpha=1.0)
(2): AlphaDropout(p=0.25, inplace=False)
)
(1): Sequential(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): ELU(alpha=1.0)
(2): AlphaDropout(p=0.25, inplace=False)
)
)
(2): Sequential(
(0): Sequential(
(0): Linear(in_features=537, out_features=256, bias=True)
(1): ELU(alpha=1.0)
(2): AlphaDropout(p=0.25, inplace=False)
)
(1): Sequential(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): ELU(alpha=1.0)
(2): AlphaDropout(p=0.25, inplace=False)
)
)
(3): Sequential(
(0): Sequential(
(0): Linear(in_features=436, out_features=256, bias=True)
(1): ELU(alpha=1.0)
(2): AlphaDropout(p=0.25, inplace=False)
)
(1): Sequential(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): ELU(alpha=1.0)
(2): AlphaDropout(p=0.25, inplace=False)
)
)
(4): Sequential(
(0): Sequential(
(0): Linear(in_features=219, out_features=256, bias=True)
(1): ELU(alpha=1.0)
(2): AlphaDropout(p=0.25, inplace=False)
)
(1): Sequential(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): ELU(alpha=1.0)
(2): AlphaDropout(p=0.25, inplace=False)
)
)
(5): Sequential(
(0): Sequential(
(0): Linear(in_features=437, out_features=256, bias=True)
(1): ELU(alpha=1.0)
(2): AlphaDropout(p=0.25, inplace=False)
)
(1): Sequential(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): ELU(alpha=1.0)
(2): AlphaDropout(p=0.25, inplace=False)
)
)
)
(coattn): MultiheadAttention(
(out_proj): _LinearWithBias(in_features=256, out_features=256, bias=True)
)
(path_transformer): TransformerEncoder(
(layers): ModuleList(
(0): TransformerEncoderLayer(
(self_attn): MultiheadAttention(
(out_proj): _LinearWithBias(in_features=256, out_features=256, bias=True)
)
(linear1): Linear(in_features=256, out_features=512, bias=True)
(dropout): Dropout(p=0.25, inplace=False)
(linear2): Linear(in_features=512, out_features=256, bias=True)
(norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(norm2): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(dropout1): Dropout(p=0.25, inplace=False)
(dropout2): Dropout(p=0.25, inplace=False)
)
(1): TransformerEncoderLayer(
(self_attn): MultiheadAttention(
(out_proj): _LinearWithBias(in_features=256, out_features=256, bias=True)
)
(linear1): Linear(in_features=256, out_features=512, bias=True)
(dropout): Dropout(p=0.25, inplace=False)
(linear2): Linear(in_features=512, out_features=256, bias=True)
(norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(norm2): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(dropout1): Dropout(p=0.25, inplace=False)
(dropout2): Dropout(p=0.25, inplace=False)
)
)
)
(path_attention_head): Attn_Net_Gated(
(attention_a): Sequential(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): Tanh()
(2): Dropout(p=0.25, inplace=False)
)
(attention_b): Sequential(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): Sigmoid()
(2): Dropout(p=0.25, inplace=False)
)
(attention_c): Linear(in_features=256, out_features=1, bias=True)
)
(path_rho): Sequential(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): ReLU()
(2): Dropout(p=0.25, inplace=False)
)
(omic_transformer): TransformerEncoder(
(layers): ModuleList(
(0): TransformerEncoderLayer(
(self_attn): MultiheadAttention(
(out_proj): _LinearWithBias(in_features=256, out_features=256, bias=True)
)
(linear1): Linear(in_features=256, out_features=512, bias=True)
(dropout): Dropout(p=0.25, inplace=False)
(linear2): Linear(in_features=512, out_features=256, bias=True)
(norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(norm2): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(dropout1): Dropout(p=0.25, inplace=False)
(dropout2): Dropout(p=0.25, inplace=False)
)
(1): TransformerEncoderLayer(
(self_attn): MultiheadAttention(
(out_proj): _LinearWithBias(in_features=256, out_features=256, bias=True)
)
(linear1): Linear(in_features=256, out_features=512, bias=True)
(dropout): Dropout(p=0.25, inplace=False)
(linear2): Linear(in_features=512, out_features=256, bias=True)
(norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(norm2): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(dropout1): Dropout(p=0.25, inplace=False)
(dropout2): Dropout(p=0.25, inplace=False)
)
)
)
(omic_attention_head): Attn_Net_Gated(
(attention_a): Sequential(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): Tanh()
(2): Dropout(p=0.25, inplace=False)
)
(attention_b): Sequential(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): Sigmoid()
(2): Dropout(p=0.25, inplace=False)
)
(attention_c): Linear(in_features=256, out_features=1, bias=True)
)
(omic_rho): Sequential(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): ReLU()
(2): Dropout(p=0.25, inplace=False)
)
(mm): BilinearFusion(
(linear_h1): Sequential(
(0): Linear(in_features=256, out_features=32, bias=True)
(1): ReLU()
)
(linear_z1): Sequential(
(0): Linear(in_features=512, out_features=32, bias=True)
)
(linear_o1): Sequential(
(0): Linear(in_features=32, out_features=32, bias=True)
(1): ReLU()
(2): Dropout(p=0.25, inplace=False)
)
(linear_h2): Sequential(
(0): Linear(in_features=256, out_features=32, bias=True)
(1): ReLU()
)
(linear_z2): Sequential(
(0): Linear(in_features=512, out_features=32, bias=True)
)
(linear_o2): Sequential(
(0): Linear(in_features=32, out_features=32, bias=True)
(1): ReLU()
(2): Dropout(p=0.25, inplace=False)
)
(post_fusion_dropout): Dropout(p=0.25, inplace=False)
(encoder1): Sequential(
(0): Linear(in_features=1089, out_features=256, bias=True)
(1): ReLU()
(2): Dropout(p=0.25, inplace=False)
)
(encoder2): Sequential(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): ReLU()
(2): Dropout(p=0.25, inplace=False)
)
)
(classifier): Linear(in_features=256, out_features=4, bias=True)
)
Total number of parameters: 4350918
Total number of trainable parameters: 4350918

Init optimizer ... Done!

Init Loaders... Done!

Setup EarlyStopping...
Setup Validation C-Index Monitor... Done!

/home/aletolia/documents/GithubRepos/PORPOISE/utils/utils.py:77: FutureWarning: The input object of type 'Tensor' is an array-like implementing one of the corresponding protocols (__array__, __array_interface__ or __array_struct__); but not a sequence (or 0-D). In the future, this object will be coerced as if it was first converted using np.array(obj). To retain the old behaviour, you have to either modify the type 'Tensor', or assign to an empty array created with np.empty(correct_shape, dtype=object).
event_time = np.array([item[8] for item in batch])
/home/aletolia/documents/GithubRepos/PORPOISE/utils/utils.py:77: FutureWarning: The input object of type 'Tensor' is an array-like implementing one of the corresponding protocols (__array__, __array_interface__ or __array_struct__); but not a sequence (or 0-D). In the future, this object will be coerced as if it was first converted using np.array(obj). To retain the old behaviour, you have to either modify the type 'Tensor', or assign to an empty array created with np.empty(correct_shape, dtype=object).
event_time = np.array([item[8] for item in batch])
/home/aletolia/documents/GithubRepos/PORPOISE/utils/utils.py:77: FutureWarning: The input object of type 'Tensor' is an array-like implementing one of the corresponding protocols (__array__, __array_interface__ or __array_struct__); but not a sequence (or 0-D). In the future, this object will be coerced as if it was first converted using np.array(obj). To retain the old behaviour, you have to either modify the type 'Tensor', or assign to an empty array created with np.empty(correct_shape, dtype=object).
event_time = np.array([item[8] for item in batch])
/home/aletolia/documents/GithubRepos/PORPOISE/utils/utils.py:77: FutureWarning: The input object of type 'Tensor' is an array-like implementing one of the corresponding protocols (__array__, __array_interface__ or __array_struct__); but not a sequence (or 0-D). In the future, this object will be coerced as if it was first converted using np.array(obj). To retain the old behaviour, you have to either modify the type 'Tensor', or assign to an empty array created with np.empty(correct_shape, dtype=object).
event_time = np.array([item[8] for item in batch])
Traceback (most recent call last):
  File "main.py", line 259, in <module>
    results = main(args)
  File "main.py", line 76, in main
    val_latest, cindex_latest = train(datasets, i, args)
  File "/home/aletolia/documents/GithubRepos/PORPOISE/utils/core_utils.py", line 204, in train
    train_loop_survival_coattn(epoch, model, train_loader, optimizer, args.n_classes, writer, loss_fn, reg_fn, args.lambda_reg, args.gc)
  File "/home/aletolia/documents/GithubRepos/PORPOISE/utils/coattn_train_utils.py", line 36, in train_loop_survival_coattn
    loss = loss_fn(hazards=hazards, S=S, Y=label, c=c)
TypeError: __call__() got an unexpected keyword argument 'hazards'
```
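From the traceback, the failure looks like a mismatch between the keyword names used at the call site in utils/coattn_train_utils.py and the parameter names of the loss function's `__call__`. Below is a minimal, self-contained sketch of how this kind of TypeError arises and two possible workarounds. The loss classes here are hypothetical stand-ins, not PORPOISE's actual implementation; any fix would need to match the signature the repo's loss class really defines.

```python
import torch

# Hypothetical loss whose __call__ names its parameters differently
# from the call site. Calling it with hazards=... reproduces the error.
class SurvLossOldSignature:
    def __call__(self, h, y, t, c):      # note: 'h', not 'hazards'
        return torch.mean(h) * 0         # dummy loss, for illustration only

loss_fn = SurvLossOldSignature()
hazards = torch.rand(1, 4)
S = torch.cumprod(1 - hazards, dim=1)    # survival function from hazards
label = torch.tensor([2])
c = torch.tensor([0])

# This line reproduces the reported error:
#   TypeError: __call__() got an unexpected keyword argument 'hazards'
# loss_fn(hazards=hazards, S=S, Y=label, c=c)

# Workaround 1: pass the arguments positionally at the call site
# (the order must match whatever signature is actually installed):
loss = loss_fn(hazards, label, S, c)

# Workaround 2: align the loss class's parameter names with the call site:
class SurvLossMatchingSignature:
    def __call__(self, hazards, S, Y, c, alpha=None):
        return torch.mean(hazards) * 0   # dummy loss, for illustration only

loss = SurvLossMatchingSignature()(hazards=hazards, S=S, Y=label, c=c)
```

Again, this is only meant to illustrate the failure mode; I am not sure which signature `main.py` is expected to construct for `--bag_loss nll_surv` in this repo.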

Output of `pip freeze`:

```
absl-py==1.4.0
ase==3.22.1
astor==0.8.1
autograd==1.6.2
autograd-gamma==0.5.0
backcall @ file:///home/conda/feedstock_root/build_artifacts/backcall_1592338393461/work
backports.functools-lru-cache @ file:///home/conda/feedstock_root/build_artifacts/backports.functools_lru_cache_1687772187254/work
Bottleneck @ file:///opt/conda/conda-bld/bottleneck_1657175564434/work
brotlipy==0.7.0
cached-property==1.5.2
captum==0.2.0
certifi==2023.7.22
cffi @ file:///croot/cffi_1670423208954/work
charset-normalizer @ file:///tmp/build/80754af9/charset-normalizer_1630003229654/work
cloudpickle @ file:///tmp/build/80754af9/cloudpickle_1632508026186/work
cryptography @ file:///croot/cryptography_1677533068310/work
cycler @ file:///tmp/build/80754af9/cycler_1637851556182/work
dataclasses==0.6
debugpy==1.6.7.post1
decorator @ file:///home/conda/feedstock_root/build_artifacts/decorator_1641555617451/work
ecos==2.0.12
entrypoints @ file:///home/conda/feedstock_root/build_artifacts/entrypoints_1643888246732/work
formulaic==0.6.4
future==0.18.3
gast==0.5.4
googledrivedownloader==0.4
graphlib-backport==1.0.3
grpcio==1.57.0
h5py==2.10.0
idna @ file:///croot/idna_1666125576474/work
importlib-metadata==4.13.0
interface-meta==1.3.0
ipykernel==6.16.2
ipython @ file:///home/conda/feedstock_root/build_artifacts/ipython_1651240553635/work
ipython-genutils==0.2.0
isodate==0.6.1
jedi @ file:///home/conda/feedstock_root/build_artifacts/jedi_1690896916983/work
Jinja2==3.1.2
joblib==1.3.2
jupyter_client==7.4.9
jupyter_core==4.12.0
Keras-Applications==1.0.8
Keras-Preprocessing==1.1.2
kiwisolver @ file:///opt/conda/conda-bld/kiwisolver_1638569886207/work
lifelines==0.27.7
llvmlite==0.39.1
Markdown==3.4.4
MarkupSafe==2.1.3
matplotlib==3.1.1
matplotlib-inline @ file:///home/conda/feedstock_root/build_artifacts/matplotlib-inline_1660814786464/work
mkl-fft==1.3.1
mkl-random @ file:///tmp/build/80754af9/mkl_random_1626179032232/work
mkl-service==2.4.0
mock==5.1.0
nest-asyncio==1.5.7
networkx==2.6.3
numba @ file:///croot/numba_1670258325998/work
numexpr @ file:///croot/numexpr_1668713893690/work
numpy==1.21.6
opencv-python==4.1.1.26
openslide-python @ file:///home/conda/feedstock_root/build_artifacts/openslide-python_1623554159772/work
osqp==0.6.3
packaging @ file:///croot/packaging_1671697413597/work
pandas==1.3.5
parso @ file:///home/conda/feedstock_root/build_artifacts/parso_1638334955874/work
pexpect @ file:///home/conda/feedstock_root/build_artifacts/pexpect_1667297516076/work
pickleshare @ file:///home/conda/feedstock_root/build_artifacts/pickleshare_1602536217715/work
Pillow==9.4.0
prompt-toolkit @ file:///home/conda/feedstock_root/build_artifacts/prompt-toolkit_1688565951714/work
protobuf==3.19.6
psutil==5.9.5
ptyprocess @ file:///home/conda/feedstock_root/build_artifacts/ptyprocess_1609419310487/work/dist/ptyprocess-0.7.0-py2.py3-none-any.whl
pycparser @ file:///tmp/build/80754af9/pycparser_1636541352034/work
Pygments @ file:///home/conda/feedstock_root/build_artifacts/pygments_1691408637400/work
pyOpenSSL @ file:///croot/pyopenssl_1677607685877/work
pyparsing @ file:///opt/conda/conda-bld/pyparsing_1661452539315/work
PySocks @ file:///tmp/build/80754af9/pysocks_1594394576006/work
python-dateutil @ file:///tmp/build/80754af9/python-dateutil_1626374649649/work
python-louvain==0.16
pytz @ file:///croot/pytz_1671697431263/work
pyzmq==25.1.1
qdldl==0.1.7.post0
rdflib==6.3.2
requests @ file:///opt/conda/conda-bld/requests_1657734628632/work
scikit-learn @ file:///tmp/build/80754af9/scikit-learn_1642601761909/work
scikit-survival==0.17.2
scipy @ file:///opt/conda/conda-bld/scipy_1661390393401/work
shap @ file:///croot/shap_1668715257344/work
six @ file:///tmp/build/80754af9/six_1644875935023/work
slicer @ file:///tmp/build/80754af9/slicer_1633422823758/work
tensorboard==1.13.1
tensorboardX==1.9
tensorflow==1.13.1
tensorflow-estimator==1.13.0
termcolor==2.3.0
threadpoolctl @ file:///Users/ktietz/demo/mc3/conda-bld/threadpoolctl_1629802263681/work
torch==1.7.0
torch-geometric==1.6.3
torch-scatter @ file:///home/aletolia/documents/torch_scatter-2.0.5-cp37-cp37m-linux_x86_64.whl
torch-sparse @ file:///home/aletolia/documents/torch_sparse-0.6.8-cp37-cp37m-linux_x86_64.whl
torchaudio==0.7.0a0+ac17b64
torchvision==0.8.0
tornado @ file:///opt/conda/conda-bld/tornado_1662061693373/work
tqdm==4.66.1
traitlets @ file:///home/conda/feedstock_root/build_artifacts/traitlets_1675110562325/work
typing_extensions @ file:///tmp/abs_ben9emwtky/croots/recipe/typing_extensions_1659638822008/work
urllib3 @ file:///croot/urllib3_1673575502006/work
wcwidth @ file:///home/conda/feedstock_root/build_artifacts/wcwidth_1673864653149/work
Werkzeug==2.2.3
wrapt==1.15.0
zipp==3.15.0
```
