I have downloaded all the GenPept-format files from NCBI's refseq_protein database and concatenated them into one large file, following the user manual; the final database is 188 GB.
When I then run make_accession_to_taxonomy_map.pl, it always fails with an out-of-memory error. So I wonder whether I have followed the right steps, and if so, whether I can split the database into several sub-databases and run the command on each one individually to avoid this issue.
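If splitting does turn out to be a workable approach, the concatenated file can be divided safely on record boundaries, since each GenPept record ends with a line containing only `//`. A minimal sketch of such a splitter (the chunk size, file names, and function name are illustrative, not anything from the tool's manual):

```python
# Sketch: split a concatenated GenPept file into smaller chunks on
# record boundaries, so each chunk can be processed separately.
# Assumes records end with a line containing only "//", the standard
# GenBank/GenPept record terminator. Chunk size and names are examples.

def split_genpept(path, records_per_chunk=100000, prefix="refseq_chunk"):
    chunk_idx = 0
    record_count = 0
    out = open(f"{prefix}_{chunk_idx}.gpff", "w")
    with open(path) as fh:
        for line in fh:
            out.write(line)
            if line.rstrip() == "//":   # end of one GenPept record
                record_count += 1
                if record_count >= records_per_chunk:
                    out.close()
                    chunk_idx += 1
                    record_count = 0
                    out = open(f"{prefix}_{chunk_idx}.gpff", "w")
    out.close()
    return chunk_idx + 1                # number of chunk files written
```

Because this streams the input line by line, it never holds more than one line in memory, so it should handle a 188 GB file without issue. (Whether the mapping script itself accepts per-chunk runs is a separate question for the maintainers.)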
Many thanks,
James