In order to know which slides are related to each other and to group slides by content, it might be useful to explore text embeddings. To minimize costs, I propose trying mxbai-embed-large and ollama-python.
We could reuse the infrastructure developed in #8 to turn PDFs into images, images into text, and text into embeddings. The embeddings could then be explored interactively using a notebook similar to this one.
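As a rough sketch of what the embedding step could look like (assuming ollama is running locally with the mxbai-embed-large model already pulled, and that the slide texts have been extracted beforehand; the slide texts below are made-up placeholders):

```python
import ollama
import numpy as np

# Hypothetical slide texts, one string per slide (in practice these would
# come from the PDF -> image -> text pipeline from #8).
slide_texts = [
    "Introduction to image segmentation",
    "Otsu thresholding and connected components",
    "Acknowledgements and funding",
]

# One embedding vector per slide via ollama-python and mxbai-embed-large.
embeddings = np.asarray([
    ollama.embeddings(model="mxbai-embed-large", prompt=text)["embedding"]
    for text in slide_texts
])

# Pairwise cosine similarity as a first, simple way to see which slides
# are related to each other; clustering or UMAP could build on this.
normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
similarity = normed @ normed.T
print(similarity.round(2))
```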
Thanks! I am working on it. I just wanted to add that in the example notebook the code for stackview.sliceplot doesn't seem to render correctly, so you don't actually see the plot. But it worked when I tried it myself, so it should be fine.
Don't worry about the interactive plot. It doesn't render on the GitHub website :-)
Alright! I also tried it in the notebook I downloaded, but I still can't select the data points the way it's supposed to work. I can only draw a circle, but the data points aren't selected afterwards. Have you ever encountered that problem?