
Added Pinecone Memory Adapter #291

Open · wants to merge 3 commits into main
Conversation

cheesecake100201

Added Pinecone Memory Adapter implementation


@raghotham raghotham left a comment


Thanks for putting this out. Can you include tests? Also, we would like providers to be available through distributions. Are you able to run llama-stack-apps using pinecone as memory?


cheesecake100201 commented Oct 23, 2024

> Thanks for putting this out. Can you include tests? Also, we would like providers to be available through distributions. Are you able to run llama-stack-apps using pinecone as memory?

I will include tests, and I still have to work out the distributions part. Can you point me to the distribution PR for Weaviate, so I can understand what changes to make and where?
Also, for the tests: memory already has a test file. Should I make a separate one for Pinecone, or does that file need any changes?
@raghotham


ashwinb commented Oct 23, 2024

> Also for the tests, memory already has a test file in it, should I make one separately for pinecone, or does that file need any changes?

@cheesecake100201 See the instructions in the file. You should not need a separate test file; just update your own copy of the provider YAML to put in the details (as necessary) for Pinecone.
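For illustration only, a memory-provider entry in such a test YAML might look roughly like the sketch below. The field names and structure here (`provider_id`, `provider_type`, `config`, and the Pinecone keys) are assumptions about the config format, not taken from this PR or the actual llama-stack schema:

```yaml
# Hypothetical sketch of a Pinecone memory-provider entry.
# Field names are assumptions, not the actual llama-stack schema.
memory:
  - provider_id: pinecone
    provider_type: remote::pinecone
    config:
      api_key: ${PINECONE_API_KEY}
      environment: us-east-1
```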


cheesecake100201 commented Oct 23, 2024

I need to make some more changes in this PR. For example, when creating an index, Pinecone expects you to specify the index's dimension, which must match the dimension of the embeddings and therefore depends on the embedding model being used. Likewise, in the query method inside PineconeIndex, the length of the query embedding must equal the index's dimension. I need to figure out how to make this dimension dynamic. Any ideas or help would be appreciated, since I have been racking my brain over this for a while.
@ashwinb @raghotham
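One simple way to make the dimension dynamic, sketched here as an illustration: map known embedding model names to their dimensions and fail loudly for unknown models, rather than silently assuming 384. The model names and table below are illustrative assumptions, not part of this PR:

```python
# Sketch: derive the Pinecone index dimension from the embedding model
# name instead of hardcoding 384. The entries below are examples only.
EMBEDDING_DIMENSIONS = {
    "all-MiniLM-L6-v2": 384,          # the MiniLM model the PR assumes
    "text-embedding-ada-002": 1536,   # example of a larger model
}

def index_dimension(model_name: str) -> int:
    """Look up the embedding dimension for a known model, raising for
    unknown models rather than silently defaulting to 384."""
    try:
        return EMBEDDING_DIMENSIONS[model_name]
    except KeyError:
        raise ValueError(f"Unknown embedding model: {model_name}")

print(index_dimension("all-MiniLM-L6-v2"))  # 384
```

The lookup value would then be passed as the `dimension` argument when creating the Pinecone index, and reused to validate query embeddings in PineconeIndex.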

@cheesecake100201

[Screenshot 2024-10-24 at 2:01:28 PM: test results]

Tests have passed. On the point about hardcoding the index dimension to 384: a similar approach is already used with PGVector, where the embedding dimension is hardcoded to 384, the embedding size of the MiniLM embedding model. Going forward, we may need a way to configure other dimensions, which would open pgvector and Pinecone up to embedding models other than MiniLM, i.e. models whose dimension is not 384. @ashwinb @raghotham
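Another possible approach, sketched here under the assumption that the configured embedding model is callable at provider-registration time: embed a dummy string once and take the length of the result, so neither pgvector nor Pinecone needs a hardcoded 384. `embed` below is a stand-in for the stack's real embedding call, not actual PR code:

```python
# Sketch: probe the embedding dimension at runtime by embedding a dummy
# string, instead of hardcoding 384 at index-creation time.

def embed(text: str) -> list[float]:
    # Stand-in: real code would call the configured embedding model here.
    # MiniLM-style models would return a 384-dimensional vector.
    return [0.0] * 384

def probe_dimension(embed_fn) -> int:
    """Return the dimension of the vectors produced by embed_fn."""
    return len(embed_fn("dimension probe"))

dim = probe_dimension(embed)
print(dim)  # 384
```

The probed dimension could then be used both when creating the index and when validating query embeddings, at the cost of one extra embedding call at startup.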

Labels: CLA Signed (managed by the Meta Open Source bot)
4 participants