Currently there is an upper limit on the size of signable data files, related to the server's available memory.
The problem is that in byte[] org.digidoc4j.DataFile.calculateDigestInternal(DigestAlgorithm digestAlgorithm), the digest calculation uses getBytes, so internally the whole data file is loaded into memory, which is inefficient.
A more efficient solution would be to use byte[] eu.europa.esig.dss.DSSUtils.digest(final DigestAlgorithm digestAlgo, final InputStream inputStream).
In this case, the digest is calculated over a stream; internally, DSS uses a 4096-byte buffer.
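To illustrate the difference, here is a minimal sketch of the stream-based approach using only the JDK. It is not digidoc4j's or DSS's actual implementation; the class name, file path, and algorithm string are placeholders, but it shows the same constant-memory technique (fixed 4096-byte buffer) that the DSS stream variant is described as using:

```java
import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class StreamingDigest {

    // Reads the input in fixed-size chunks, so memory use stays constant
    // regardless of the file size, instead of loading the whole file
    // into a byte[] first.
    static byte[] digest(InputStream in, String algorithm)
            throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance(algorithm);
        byte[] buffer = new byte[4096]; // same buffer size the issue mentions
        int read;
        while ((read = in.read(buffer)) != -1) {
            md.update(buffer, 0, read);
        }
        return md.digest();
    }

    public static void main(String[] args) throws Exception {
        // "large-file.bin" is an illustrative path, not from the issue.
        try (InputStream in = new BufferedInputStream(
                Files.newInputStream(Paths.get("large-file.bin")))) {
            System.out.printf("SHA-256 digest length: %d bytes%n",
                    digest(in, "SHA-256").length);
        }
    }
}
```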
We have released Digidoc4j 5.0.0, which improves stream usage on the Digidoc4j side. However, DSS 5.9 still uses byte arrays internally, which means the maximum data file size is limited to 2 GB.
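For context, the 2 GB ceiling follows from Java's array model rather than from available heap; a minimal illustration (the class name is hypothetical):

```java
public class ByteArrayLimit {
    public static void main(String[] args) {
        // A Java array is indexed by int, so its length is capped at
        // Integer.MAX_VALUE = 2^31 - 1 elements; for a byte[] that is
        // just under 2 GiB, regardless of how much heap is available.
        long maxArrayBytes = Integer.MAX_VALUE;
        System.out.printf("byte[] upper bound: %d bytes (%.2f GiB)%n",
                maxArrayBytes, maxArrayBytes / (1024.0 * 1024 * 1024));
        // Any code path that materialises a whole data file as byte[]
        // therefore cannot handle files of 2 GiB or larger.
    }
}
```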
Attached is a zip containing a patch with a possible fix (generated using git diff on the develop branch).
digidoc4j_datafile_streams.zip
Best regards,
Mart Simisker