how to best save ligand parametrisation #36
-
I'm working on a large ligand and the parametrisation, which I run with …
Replies: 1 comment
-
Hi @dprovasi, great questions

There are a few options for serialization:
- OpenMM can serialize `openmm.System` objects to and from disk with its `XmlSerializer`: http://docs.openmm.org/development/api-python/generated/openmm.openmm.XmlSerializer.html#openmm.openmm.XmlSerializer.serialize (see the sketch after this list).
- `Interchange` objects can be serialized out to JSON with the `.json()` method and then loaded back with `Interchange.parse_raw` (also sketched below). Under the hood this uses Pydantic, with a little bit of custom behavior to handle unit-bearing quantities. (This uses dict representations, so there are probably ways to use other file formats, but we haven't really explored this since it's not been requested.)
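To make that concrete, here is a minimal sketch of both round trips. The `ligand.sdf` file and the `openff-2.1.0.offxml` force field are placeholders, and it assumes a reasonably recent openff-toolkit/openff-interchange stack; adapt the names to your setup:

```python
from openff.interchange import Interchange
from openff.toolkit import ForceField, Molecule
from openmm import XmlSerializer

# Parameterize the ligand once (file and force field names are placeholders)
ligand = Molecule.from_file("ligand.sdf")
sage = ForceField("openff-2.1.0.offxml")
interchange = Interchange.from_smirnoff(force_field=sage, topology=ligand.to_topology())

# Option 1: dump the openmm.System to OpenMM's XML format and read it back
system = interchange.to_openmm()
with open("ligand_system.xml", "w") as xml_file:
    xml_file.write(XmlSerializer.serialize(system))
with open("ligand_system.xml") as xml_file:
    system_roundtrip = XmlSerializer.deserialize(xml_file.read())

# Option 2: dump the Interchange itself to JSON and read it back
with open("ligand_interchange.json", "w") as json_file:
    json_file.write(interchange.json())
with open("ligand_interchange.json") as json_file:
    interchange_roundtrip = Interchange.parse_raw(json_file.read())
```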
The slowest part of parameterization is probably partial charge assignment (for large molecules, everything else is dwarfed by AM1-BCC's roughly cubic scaling with molecule size). Interchange caches the result of charge assignment; roughly speaking, the full calculation is done the first time a molecule is seen by a Python thread and the result is stored, then the second, third, etc. time charges are requested the result is looked up from the cache and returned immediately. This sounds like what you're seeing from the same Jupyter kernel; re-running a cell with either of those method calls in it should be quicker the second time. Charge caching did lead to a critical-if-esoteric bug that has since been patched - we strongly recommend using version 0.3.18 or newer to avoid this, especially in applications that use multiple threads/processes.
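As a rough way to see that caching behavior from the same kernel or process (again with placeholder file and force field names), timing two identical calls back to back should show the second one returning almost immediately:

```python
import time

from openff.interchange import Interchange
from openff.toolkit import ForceField, Molecule

ligand = Molecule.from_file("ligand.sdf")    # placeholder input
sage = ForceField("openff-2.1.0.offxml")     # placeholder force field

# First call: AM1-BCC charges are actually computed (slow for a large ligand)
start = time.perf_counter()
Interchange.from_smirnoff(force_field=sage, topology=ligand.to_topology())
print(f"first call:  {time.perf_counter() - start:.1f} s")

# Second call in the same process: charges are looked up from the cache
start = time.perf_counter()
Interchange.from_smirnoff(force_field=sage, topology=ligand.to_topology())
print(f"second call: {time.perf_counter() - start:.1f} s")
```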