I am using Phi-3-mini for high-throughput inference with DeepSpeed-MII and ran into an issue where the model repeats tokens after first generating some correct output.
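For context, this is roughly how I run the pipeline (a minimal sketch; the model tag, prompt, and generation budget below are illustrative rather than my exact values):

```python
import mii

# Sketch of my serving setup; the model tag and prompt are placeholders.
pipe = mii.pipeline("microsoft/Phi-3-mini-4k-instruct")

# The repetition shows up with long prompts (~1800+ tokens)
# combined with a large generation budget.
responses = pipe(["<long prompt, ~1800 tokens>"], max_new_tokens=1000)
print(responses[0].generated_text)
```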
I'm facing this issue mostly with longer prompts (more than 1800 prompt tokens plus 1000 generated tokens). In my experiments, two types of repetition occur:

1. The model generates the expected response to completion, then repeats the same answer over and over until all 1000 tokens are used.
2. The model generates random tokens from the very beginning.
I tried increasing and decreasing the number of allocated KV-cache blocks, along with max_tracked_sequences, max_ragged_batch_size, and max_ragged_sequence_count, but none of these helped (see the sketch below). For my data this issue occurs in close to 50% of cases, since my prompts are mostly long.
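This is the shape of what I tried (a sketch only; the values shown are examples, and the nesting follows the DeepSpeed inference-v2 state-manager config, which may differ across MII versions):

```python
import mii

# Sketch of the config overrides I experimented with; the values
# shown here are the defaults, which I varied up and down.
pipe = mii.pipeline(
    "microsoft/Phi-3-mini-4k-instruct",
    inference_engine_config={
        "state_manager": {
            "max_tracked_sequences": 2048,
            "max_ragged_batch_size": 768,
            "max_ragged_sequence_count": 512,
            # I also varied the KV-cache block allocation (memory_config
            # under the state manager); no combination helped.
        }
    },
)
```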
Expected Behaviour:
The model should not repeat tokens and should stop generation cleanly, even for long prompts.
I tried the same prompts with vLLM and Hugging Face transformers, and both work as expected, so the issue appears to be specific to DeepSpeed.
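For comparison, the vLLM side was a plain generate call (again a sketch; same prompts and the same generation budget):

```python
from vllm import LLM, SamplingParams

# Same prompts through vLLM for comparison; no repetition observed.
llm = LLM(model="microsoft/Phi-3-mini-4k-instruct")
params = SamplingParams(temperature=0.0, max_tokens=1000)  # greedy, for a like-for-like check
outputs = llm.generate(["<same long prompt>"], params)
print(outputs[0].outputs[0].text)
```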
ds_report output:
DeepSpeed C++/CUDA extension op report
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
JIT compiled ops requires ninja
ninja .................. [OKAY]
op name ................ installed .. compatible
async_io ............... [NO] ....... [OKAY]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_lion ............... [NO] ....... [OKAY]
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
evoformer_attn ......... [NO] ....... [NO]
fp_quantizer ........... [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
fused_lion ............. [NO] ....... [OKAY]
/home/ec2-user/anaconda3/envs/pytorch_p310/compiler_compat/ld: /usr/local/cuda-12.1/lib64/libcufile.so: undefined reference to `dlopen'
/home/ec2-user/anaconda3/envs/pytorch_p310/compiler_compat/ld: /usr/local/cuda-12.1/lib64/libcufile.so: undefined reference to `dlclose'
/home/ec2-user/anaconda3/envs/pytorch_p310/compiler_compat/ld: /usr/local/cuda-12.1/lib64/libcufile.so: undefined reference to `dlerror'
/home/ec2-user/anaconda3/envs/pytorch_p310/compiler_compat/ld: /usr/local/cuda-12.1/lib64/libcufile.so: undefined reference to `dlsym'
collect2: error: ld returned 1 exit status
gds .................... [NO] ....... [NO]
inference_core_ops ..... [NO] ....... [OKAY]
cutlass_ops ............ [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
ragged_device_ops ...... [NO] ....... [OKAY]
ragged_ops ............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.4
[WARNING] using untested triton version (3.0.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
DeepSpeed general environment info:
torch install path ............... ['/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch']
torch version .................... 2.4.1+cu121
deepspeed install path ........... ['/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/deepspeed']
deepspeed info ................... 0.15.1, unknown, unknown
torch cuda version ............... 12.1
torch hip version ................ None
nvcc version ..................... 12.1
deepspeed wheel compiled w. ...... torch 2.4, cuda 12.1
shared memory (/dev/shm) size .... 7.72 GB