Issues: intel/torch-xpu-ops
Issues list
#1200 Please fix mixed device types in input Tensors of torch.lerp on release 2.6 (opened Dec 23, 2024 by daisyden)
#1195 gets nan with complex dtype (opened Dec 23, 2024 by Stonepia)
Labels: client; module: dependency bug (problem is not caused by us, but by the library we use)
#1193 UT cases which failed on rolling driver and passed on lts driver (opened Dec 23, 2024 by PenghuiCheng)
Labels: ut_triaged
#1169 torch.nextafter has an incorrect result for bf16 on XPU (opened Dec 16, 2024 by guangyey)
Labels: bug (something isn't working)
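For context on the nextafter report, the expected semantics can be checked against the CPU/stdlib baseline. This is a minimal pure-Python sketch of IEEE-style nextafter on float64, not the torch XPU kernel in question; the bf16 step size noted in the comment is an inference from the bf16 format (7 mantissa bits), not taken from the issue.

```python
import math

# Reference semantics of nextafter on float64, the baseline the
# bf16 XPU result is compared against:
up = math.nextafter(1.0, math.inf)     # smallest float64 strictly greater than 1.0
down = math.nextafter(1.0, -math.inf)  # largest float64 strictly smaller than 1.0

assert up > 1.0 and down < 1.0
assert up == 1.0 + 2**-52  # the ulp of 1.0 in float64

# In bf16 (8-bit exponent, 7-bit mantissa) the ulp at 1.0 is 2**-7,
# so a correct bf16 nextafter(1.0, inf) would land on 1.0 + 0.0078125,
# not on the float32 or float64 neighbor.
```

A correct backend implementation must compute the neighbor in the tensor's own dtype; reinterpreting through a wider dtype and rounding back is a common source of this class of bug.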
#1163 torch._standard_gamma() has an accuracy gap compared to scipy and torch.cpu (opened Dec 12, 2024 by daisyden)
#1160 What is the expected result of float64 div when divisor and dividend are the same? (opened Dec 11, 2024 by daisyden)
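For the same-operand division question, IEEE-754 gives a case-by-case answer: x / x is exactly 1.0 for any finite nonzero x, and NaN for the indeterminate forms 0/0 and inf/inf. A minimal sketch of those cases in plain Python (not the XPU kernel under discussion):

```python
import math

# Finite nonzero operands: x / x is exactly 1.0 under IEEE-754,
# with no rounding error regardless of the magnitude of x.
assert 3.5 / 3.5 == 1.0
assert 1e-300 / 1e-300 == 1.0

# inf / inf is an indeterminate form and yields NaN.
assert math.isnan(float("inf") / float("inf"))

# 0.0 / 0.0 is NaN under IEEE-754 as well; Python scalars raise
# ZeroDivisionError instead, but array/tensor backends are expected
# to return NaN for it.
```

So for float64, a backend returning anything other than exactly 1.0 for finite nonzero x / x would deviate from the IEEE-754 result.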
[LNL Windows][Test by CD Nightly Wheels] Hugging Face models DebertaForQuestionAnswering and DebertaV2ForMaskedLM fail with RuntimeError: value cannot be converted to type at::BFloat16 without overflow
Labels: client; E2E; module: dependency bug; ut_triaged
#1157 xpu: implement aten::_thnn_fused_lstm_cell for XPU backend #141539 (opened Dec 11, 2024 by yinghu5)
#1152 softshrink is expected to return nan when the input is nan on ARC (opened Dec 9, 2024 by daisyden)
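The softshrink expectation follows from NaN propagation: every NaN comparison is false, so a naive piecewise implementation silently maps NaN to 0 instead of propagating it. A scalar pure-Python reference sketch (the function name and explicit NaN branch are illustrative, not the torch kernel):

```python
import math

def softshrink(x: float, lambd: float = 0.5) -> float:
    """Scalar soft shrinkage: x - lambd if x > lambd,
    x + lambd if x < -lambd, otherwise 0."""
    if math.isnan(x):
        return x  # NaN must propagate; without this branch both
                  # comparisons below are false and NaN becomes 0.0
    if x > lambd:
        return x - lambd
    if x < -lambd:
        return x + lambd
    return 0.0

assert softshrink(1.0) == 0.5
assert softshrink(-1.0) == -0.5
assert softshrink(0.2) == 0.0
assert math.isnan(softshrink(float("nan")))
```

The "returns 0 for NaN" failure mode of the unguarded version is a plausible explanation for the ARC behavior the issue describes, though the actual kernel cause may differ.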
#1129 Investigate whether pad mm is useful on XPU (opened Nov 29, 2024 by jianyizh)
Labels: enhancement (new feature or request)