When a seek is performed on the archive, the reader has to:

- locate and decrypt the block containing the requested position
- decompress from the beginning of the corresponding compressed block up to that position
If another seek occurs, the uncompressed data and the decrypted data are lost, and the work must be done again.
This is not a problem for linear extraction, but some access patterns suffer from this behavior, for instance:
- an archive interleaving file parts: [File 1 content 1][File 2 content 1][File 1 content 2][File 2 content 2], etc.
- extracting those files (File 1, File 2, ...) and reading them part by part
In the worst case of n tiny parts for n files, every block could be decrypted and decompressed n times.
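To make the cost concrete, here is a deliberately simplified, std-only Rust sketch (all names are hypothetical and the real decryption and compression layers are replaced by counters, so this is a cost model rather than the library's actual code) of extracting two interleaved files with no cache between the layers:

```rust
const BLOCK_SIZE: u64 = 4096;

/// Stand-in for the real layered reader: it only counts the per-block work.
struct UncachedReader {
    decrypted_blocks: u64,
    decompressed_blocks: u64,
}

impl UncachedReader {
    fn new() -> Self {
        Self { decrypted_blocks: 0, decompressed_blocks: 0 }
    }

    /// Simplification: every read is treated as a fresh seek, so it is charged
    /// the full decrypt + decompress cost for each block it touches.
    fn read_at(&mut self, offset: u64, len: u64) {
        let first_block = offset / BLOCK_SIZE;
        let last_block = (offset + len - 1) / BLOCK_SIZE;
        for _ in first_block..=last_block {
            self.decrypted_blocks += 1;    // work redone in the encryption layer
            self.decompressed_blocks += 1; // work redone in the compression layer
        }
    }
}

fn main() {
    // Archive layout: [File 1 part 1][File 2 part 1][File 1 part 2][File 2 part 2]...
    let parts_per_file: u64 = 1000;
    let part_len: u64 = 512;

    // Extract File 1 completely, then File 2: each pass seeks over the other
    // file's parts, and the second pass re-reads every block it shares.
    let mut reader = UncachedReader::new();
    for file in 0..2u64 {
        for part in 0..parts_per_file {
            let offset = (part * 2 + file) * part_len;
            reader.read_at(offset, part_len);
        }
    }
    println!(
        "blocks decrypted: {}, blocks decompressed: {}",
        reader.decrypted_blocks, reader.decompressed_blocks
    );
}
```

With these (made-up) numbers, the 2000 reads are charged 2000 block decryptions and decompressions even though they only cover 250 distinct 4 KiB blocks, i.e. each block is processed about 8 times.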
To avoid this, a cache could be used between the layers; a possible implementation is sketched below.
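The sketch is only illustrative (all names hypothetical, std-only, and the decrypt + decompress step reduced to a placeholder closure; a real implementation would still have to pick an eviction policy and a memory bound): decompressed blocks are kept in a small bounded map keyed by block index, so that seeking back into a recently used block skips both the decryption and the decompression layers.

```rust
use std::collections::{HashMap, VecDeque};

const BLOCK_SIZE: u64 = 4096;

/// Bounded cache of decompressed blocks, sitting between the decryption and
/// decompression layers and the consumer of file data.
struct BlockCache {
    capacity: usize,
    blocks: HashMap<u64, Vec<u8>>, // block index -> decompressed bytes
    order: VecDeque<u64>,          // FIFO eviction; a real cache may prefer LRU
}

impl BlockCache {
    fn new(capacity: usize) -> Self {
        Self { capacity, blocks: HashMap::new(), order: VecDeque::new() }
    }

    /// Return the decompressed block, calling `load` (decrypt + decompress)
    /// only on a cache miss.
    fn get_or_load<F>(&mut self, index: u64, load: F) -> &[u8]
    where
        F: FnOnce(u64) -> Vec<u8>,
    {
        if !self.blocks.contains_key(&index) {
            if self.blocks.len() >= self.capacity {
                if let Some(evicted) = self.order.pop_front() {
                    self.blocks.remove(&evicted);
                }
            }
            self.blocks.insert(index, load(index));
            self.order.push_back(index);
        }
        &self.blocks[&index]
    }
}

fn main() {
    let mut cache = BlockCache::new(8);
    let mut expensive_loads = 0u32;

    // Seek back and forth between two interleaved files: repeated visits to
    // the same block are served from the cache instead of redoing the work.
    for offset in [0u64, 512, 1024, 1536, 0, 512] {
        let block_index = offset / BLOCK_SIZE;
        let _block = cache.get_or_load(block_index, |_| {
            expensive_loads += 1;          // stands in for decrypt + decompress
            vec![0u8; BLOCK_SIZE as usize] // placeholder plaintext
        });
    }
    println!("expensive loads: {}", expensive_loads); // 1: all offsets fall in block 0
}
```

Even a cache of a handful of blocks is enough for the alternating pattern above, while keeping the memory overhead bounded and predictable.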
Implementing #156 would be a way to check whether the performance actually improves.