VectorDB hosted solution takes a lot of time to push vectors #51
Hey @TheSeriousProgrammer, thanks for raising the issue. Let's see if this solution could work for you. Only for HNSW:
Do you think this could work for your use case?
Btw, you do not need to add
Could you give this PR #52 a try? It would not work in the cloud because it is not released, but locally and with
Sure, will give it a shot.
Hello @TheSeriousProgrammer, before jumping into this solution, I would like to explore an alternative that would make things simpler:
So, you are telling me that you are passing 64k documents in each call, so you must be doing something like this:

```python
from more_itertools import chunked

for batch in chunked(docs, 64000):
    client.index(batch)
```

This would indeed pass 64000 documents to the client, but the client internally batches them into requests of size 1000 (which means the vectordb will try to build the index 64 times), and the client will not return until all the calls are successful. What you can do is pass a larger `request_size`. So you can try:

```python
from more_itertools import chunked

for batch in chunked(docs, 64000):
    client.index(batch, request_size=64000)  # edit this number to the largest value that does not fail
```

Could you give this approach a try to see if it satisfies your needs? It would allow us to avoid adding more methods and complicating the API/interface. If you find it successful, I will add it to the README as documentation. Thanks a lot for the help and patience.
I tried the `request_size` workaround, but the requests still take a lot of time, now around 35 hours estimated; I don't know why. Will try the PR.
I will try to look into it.
May I ask how you generate the embeddings? Or are they already computed?
It's precomputed.
What are your jina and docarray versions?
Hey @TheSeriousProgrammer, are you sure you are using the latest version? I believe this very poor performance was due to a bug solved in that version where the `max_elements` parameter was not properly passed, which is problematic because most of the time ends up being spent resizing the index. Can you please check the version? Thanks.
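For context on why a missing `max_elements` is so costly, here is a minimal sketch with hnswlib (assuming the HNSW backend here behaves like a plain hnswlib index, which is what docarray's HNSW index uses; the sizes are illustrative): pre-allocating the full capacity means a single allocation, whereas a small default capacity forces repeated, expensive resizes as documents arrive.

```python
import numpy as np
import hnswlib

dim, total = 768, 100_000
data = np.random.rand(total, dim).astype(np.float32)

index = hnswlib.Index(space="cosine", dim=dim)

# Capacity declared up front: the index is allocated once and all
# inserts go straight into it.
index.init_index(max_elements=total, ef_construction=200, M=16)
index.add_items(data)

# If max_elements were left at a small value (the effect of the bug
# described above), every additional batch would first require
# index.resize_index(new_capacity), which is expensive when repeated
# dozens of times.
```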
I believe that with the latest
Recently I was experimenting with other hosted DB solutions, and one of the providers suggested uploading the vectors from a VM in the same infrastructure provider (AWS, GCP, Azure) and the same region as where the DB is hosted. I initially thought this should not impact indexing performance much, since my experiments pushed batches of barely 100 vectors of 768 dimensions each. But I was wrong: I saw an indexing speedup of up to 9x by following the suggested approach. I know this might be a no-brainer for highly experienced cloud devs, but that may not be the case for growing AI devs like me. Adding this to the README might help fellow AI devs a lot.

Possible explanation: 8 (bits per character) * 16 (a guess at the number of characters needed to represent a float, including the additional characters required by the JSON format) * 768 (dimensions) * 100 (vectors per batch) ≈ 1.2 MB per request.
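A quick back-of-the-envelope version of that estimate (the 16 characters per serialized float is the same guess as above, not a measured value):

```python
# Rough size of one JSON request carrying a batch of vectors.
BITS_PER_CHAR = 8
CHARS_PER_FLOAT = 16   # assumption: digits, sign, decimal point, separators
DIMS = 768
BATCH_SIZE = 100

payload_bits = BITS_PER_CHAR * CHARS_PER_FLOAT * DIMS * BATCH_SIZE
payload_mb = payload_bits / 8 / 1024 / 1024
print(f"~{payload_mb:.2f} MB per batch")  # ~1.17 MB
```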
I tried to make use of VectorDB's hosted provision from Jina AI, using the commands mentioned in the docs,
and tried to push my vectors using the client interface.
I have a collection of 2.5M 768-dimensional vectors to store in the DB, so I decided to make batched calls to the db.index method with 64k vectors in each call. The code did not respond to that, so I tried changing the batch size to 2; the code was then able to index at a speed of 5 s/it, with an estimated total time of 27 hours. (I assume this is happening because the tree construction runs during each index call.)
It would be nice if we could speed up the process by asking the user to push all the documents first and then perform the tree construction upon a separate, dedicated API call, which could replace the tree construction that currently happens on every index call. During the build process we could easily block the CRUD operations with an `is_building_tree` flag and throw an error named `TreeCurrentlyBuildingError()` when CRUD operations are attempted.
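A minimal sketch of what such a two-phase interface could look like (everything here is hypothetical: `push`, `build_index`, and the internals are illustration only, not part of the actual vectordb API):

```python
# Hypothetical sketch only; none of these names exist in vectordb today.
class TreeCurrentlyBuildingError(RuntimeError):
    """Raised when a CRUD operation arrives while the index is being built."""


class TwoPhaseIndex:
    def __init__(self):
        self._staged = []              # documents pushed but not yet indexed
        self.is_building_tree = False  # the flag proposed above

    def push(self, docs):
        """Phase 1: only buffer the documents, no tree construction."""
        self._check_not_building()
        self._staged.extend(docs)

    def build_index(self):
        """Phase 2: build the HNSW tree once, over everything staged."""
        self.is_building_tree = True
        try:
            self._build_tree(self._staged)  # placeholder for the real construction
        finally:
            self.is_building_tree = False

    def search(self, query):
        self._check_not_building()
        # ... query the built index ...

    def _check_not_building(self):
        if self.is_building_tree:
            raise TreeCurrentlyBuildingError("index build in progress")

    def _build_tree(self, docs):
        # ... real HNSW construction would go here ...
        pass
```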