omark_contextualize.py ERRORS: Max retries exceeded / Too many open files #25
Hi @nam-hoang, the error looks a lot like a temporary problem connecting to the OMA browser API. Could you provide us with some more detail on how and when you ran that script? Thanks, Adrian
Thanks, Adrian (@alpae), for your reply. I ran the above commands separately on a Linux (Ubuntu) server. Basically, the command was just like this. Then, to troubleshoot, I also tested the Jupyter notebook file
I later found out that it could only finish successfully for a little over 1,000 HOG sequences before hitting that error, while I have 2,657 unique HOGs in total. My workaround was to split the uniq_HOGs set into 3 subsets, run each subset to write out a FASTA file, and finally concatenate the 3 FASTA files into one for the miniprot mapping step. Each subset had to be run in a fresh Python session, or it would throw the same error as above. Please let me know if you need any further information.
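The splitting workaround described above can be sketched roughly as follows. Note that the HOG IDs, counts, and function names here are made up for illustration; the real inputs would come from the user's uniq_HOGs set:

```python
# Hypothetical sketch of the splitting workaround: divide the unique HOG
# IDs into a few subsets so that each run stays below the point where the
# API client runs out of sockets. IDs and counts are illustrative only.

def split_into_chunks(items, n_chunks):
    """Split a list into n_chunks contiguous, roughly equal-sized chunks."""
    k, m = divmod(len(items), n_chunks)
    chunks = []
    start = 0
    for i in range(n_chunks):
        # The first m chunks get one extra element so all items are covered.
        end = start + k + (1 if i < m else 0)
        chunks.append(items[start:end])
        start = end
    return chunks

# 2,657 placeholder HOG IDs, mimicking the total mentioned above.
uniq_hogs = [f"HOG:{i:07d}" for i in range(2657)]
subsets = split_into_chunks(uniq_hogs, 3)
print([len(s) for s in subsets])  # three subsets covering all 2,657 IDs
```

Each subset would then be processed in a fresh Python session, and the resulting FASTA files concatenated into one before the miniprot mapping step.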
Hi @nam-hoang, indeed, it seems that the API client creates too many fresh sockets without properly cleaning them up. Fixing this requires a bit more time, but as a workaround you can simply increase the limit on the number of open files. You can do this with
We will try to fix this properly in the future in the
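As a concrete illustration of the suggested workaround, the open-file limit can also be raised from within Python itself using the standard resource module. This is a sketch of the general idea, not necessarily the exact command the maintainers had in mind (in a shell, one would typically use ulimit -n instead):

```python
import resource

# Current soft and hard limits on open file descriptors for this process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit: {soft}, hard limit: {hard}")

# Raise the soft limit up to the hard limit, for this process only.
# An unprivileged process may raise its soft limit up to the hard limit;
# this has the same effect as `ulimit -n <hard>` in the invoking shell.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
```

The resource module is Unix-only; on Linux the change applies to the current process and its children, which is enough for a single omark_contextualize.py run.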
Thank you very much! Happy Holidays~ @alpae
Dear OMArk team,
I am testing omark_contextualize.py using the provided example data and also my real data, and ran into an error that seems related to the API connection: Max retries exceeded / Too many open files. The omark_contextualize.py fragment and omark_contextualize.py missing runs with the example data (fewer sequences) completed, but those with my data (more sequences) stopped midway. The same error also occurred when I tried omark_contextualize.py assembly, for both the example data and my data.
I wonder if you would be able to advise me in this case? What could be the cause of this, and is there anything I could change to make it work? I would like to use this tool to improve my genome annotation.
Thank you very much and I am looking forward to hearing from you.
Best regards,
Nam