Issues: ROCm/flash-attention (forked from Dao-AILab/flash-attention)

Issues list
#79  [Issue]: is scaled_dot_product_attention part of flash attention? (opened Sep 2, 2024 by unclemusclez; see the SDPA dispatch sketch after this list)
#53  [Feature]: Support for newer flash-attention versions (e.g. ≥2.1.0) (opened May 22, 2024 by JiahuaZhao)
#46  aac.amd: MI210 - roberta-large with sequence length 8192 and batch_size 1 fails (opened Feb 28, 2024 by michaelfeil)
#45  [Feature]: Is there a Flash-Decoding algorithm implemented based on Composable kernel? [label: function] (opened Feb 28, 2024 by zhangxiao-stack)
#41  [Issue]: RuntimeError: Expected dout_seq_stride == out_seq_stride to be true, but got false. (opened Feb 3, 2024 by donglixp)
#40  [Issue]: Expected dout_seq_stride == out_seq_stride to be true, but got false (opened Jan 30, 2024 by ehartford)
#34  Support for other modules (rotary, xentropy, layer_norm) [label: function] (opened Jan 15, 2024 by bbartoldson)
#33  replace kernel implementation using CK tile-programming performant kernels (4 tasks) (opened Jan 10, 2024 by carlushuang)
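Regarding issue #79: PyTorch's torch.nn.functional.scaled_dot_product_attention is a separate built-in operator that can dispatch to a flash-attention style kernel, whereas this repository provides the standalone flash_attn extension. The sketch below is a minimal illustration (not taken from the issue thread), assuming PyTorch ≥ 2.3 with a GPU build; the tensor shapes are illustrative.

```python
# Minimal sketch (assumption: PyTorch >= 2.3 with a CUDA/ROCm build).
# PyTorch's built-in SDPA can be restricted to its flash-attention backend;
# this is independent of the flash_attn package built from this repo.
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

# Illustrative shapes: (batch, num_heads, seq_len, head_dim).
q = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)

# Restrict dispatch to the flash-attention backend; this raises an error if the
# backend is unavailable for these dtypes/shapes on the current device.
with sdpa_kernel(SDPBackend.FLASH_ATTENTION):
    out = F.scaled_dot_product_attention(q, k, v, is_causal=True)

print(out.shape)  # torch.Size([2, 8, 128, 64])
```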