Embedding only works on the first GPU #2974

Open

kovern opened this issue Oct 6, 2024 · 1 comment

Comments

@kovern

kovern commented Oct 6, 2024

Using the latest transformers and sentence-transformers on a multi-GPU system.
When I run the following, the results are correct:

import torch
from sentence_transformers import SentenceTransformer

device = torch.device('cuda:0')
model = SentenceTransformer('danieleff/hubert-base-cc-sentence-transformer').to(device)
testdata = ['example', 'text', 'to', 'test', 'if', 'embedding', 'works']
embeddings = model.encode(sentences=testdata, device=device, convert_to_tensor=True)
print(embeddings)

tensor([[-0.0453, 0.0019, 0.1803, ..., 0.1711, 0.9855, 0.2834],
[-0.0889, -0.1105, -0.4963, ..., 0.0567, 0.8881, 0.5029],
[ 0.4389, -0.1606, 0.2297, ..., -0.1548, 0.1970, 0.5715],
...,
[ 0.4820, -0.7396, 0.2189, ..., 0.0417, 0.9316, 0.5099],
[ 0.3530, 1.0408, -0.4530, ..., -0.3674, 0.2982, 0.0062],
[-0.0248, -0.1467, -0.0671, ..., -0.3485, 0.7563, 0.5532]],
device='cuda:0')

But if I run this code, which differs only in the target device:

device = torch.device('cuda:3')
model = SentenceTransformer('danieleff/hubert-base-cc-sentence-transformer').to(device)
testdata = ['example', 'text', 'to', 'test', 'if', 'embedding', 'works']
embeddings = model.encode(sentences=testdata, device=device, convert_to_tensor=True)
print(embeddings)

The results are:
tensor([[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]], device='cuda:3')

Is it possible to use SentenceTransformer on a GPU other than cuda:0?

@ir2718
Contributor

ir2718 commented Oct 12, 2024

Hi,

I'm not able to reproduce this:

import torch
from sentence_transformers import SentenceTransformer

device = torch.device('cuda:0')
model=SentenceTransformer('bert-base-cased').to(device)
testdata=['example','text','to','test','if','embedding','works']
embeddings=model.encode(sentences=testdata, device=device, convert_to_tensor=True)
print(embeddings)

tensor([[ 3.5326e-01,  1.2674e-01, -1.1257e-01,  ...,  1.8065e-01,
          4.4132e-01,  4.3305e-01],
        [ 5.0503e-01,  1.3007e-01, -4.7713e-02,  ..., -2.3654e-01,
          6.4203e-01,  1.6103e-01],
        [ 3.9437e-01, -2.4579e-04, -1.4649e-01,  ..., -9.1028e-02,
          3.4373e-01, -2.4087e-02],
        ...,
        [ 5.7721e-01, -3.1776e-01, -4.0712e-01,  ...,  2.8815e-01,
          4.5405e-01,  2.9256e-01],
        [ 7.4724e-01,  2.2237e-01, -2.1364e-01,  ...,  1.1014e-01,
          4.1424e-01,  1.3044e-01],
        [ 6.3131e-01, -6.9488e-02, -2.2630e-01,  ..., -3.6103e-01,
          4.1688e-01,  2.8164e-01]], device='cuda:0')

Running this on my other GPU gives this result:

device=torch.device('cuda:1')
model=SentenceTransformer('bert-base-cased').to(device)
testdata=['example','text','to','test','if','embedding','works']
embeddings=model.encode(sentences=testdata, device=device, convert_to_tensor=True)
print(embeddings)

tensor([[ 3.5326e-01,  1.2674e-01, -1.1257e-01,  ...,  1.8065e-01,
          4.4132e-01,  4.3305e-01],
        [ 5.0503e-01,  1.3007e-01, -4.7713e-02,  ..., -2.3654e-01,
          6.4203e-01,  1.6103e-01],
        [ 3.9437e-01, -2.4579e-04, -1.4649e-01,  ..., -9.1028e-02,
          3.4373e-01, -2.4087e-02],
        ...,
        [ 5.7721e-01, -3.1776e-01, -4.0712e-01,  ...,  2.8815e-01,
          4.5405e-01,  2.9256e-01],
        [ 7.4724e-01,  2.2237e-01, -2.1364e-01,  ...,  1.1014e-01,
          4.1424e-01,  1.3044e-01],
        [ 6.3131e-01, -6.9488e-02, -2.2630e-01,  ..., -3.6103e-01,
          4.1688e-01,  2.8164e-01]], device='cuda:1')

Also, you can initialize the SentenceTransformer by passing a device argument, like so:

model=SentenceTransformer('bert-base-cased', device=device)
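
As a minimal sketch of that approach (assuming a machine with at least two visible GPUs; the cuda:1 index and the model name are just placeholders): when the device is passed at construction time, encode runs on that GPU without needing a separate device argument.

import torch
from sentence_transformers import SentenceTransformer

# Assumes at least two GPUs are visible; adjust the index to your setup.
device = torch.device('cuda:1')

# The device given here is used for all subsequent encode() calls,
# so it does not need to be repeated per call.
model = SentenceTransformer('bert-base-cased', device=device)

testdata = ['example', 'text', 'to', 'test', 'if', 'embedding', 'works']
embeddings = model.encode(sentences=testdata, convert_to_tensor=True)
print(embeddings.device)  # expected: cuda:1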
