Replies: 1 comment
One customer (with maaaany hierarchies and super large yaml hashes) already mentioned that a performance optimization would be helpful.
We have been talking about performance a couple of times. Until now, hdm seems to do fine even on (large-ish) production data, but we know that parsing all those files on every request is at the very least quite wasteful.
While browsing puppet's source code I stumbled upon a clever yet simple mechanism they use: they cache file contents together with the `mtime` of the file. If the file is to be read again, the `mtime` is checked and, if it has not changed, the cached file contents are returned.
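A minimal sketch of that idea (illustrative only; the class and method names are made up and are not Puppet's or hdm's actual API):

```ruby
# Illustrative mtime-based file cache; names are made up for this sketch.
class FileCache
  Entry = Struct.new(:mtime, :content)

  def initialize
    @entries = {}
  end

  # Return the file's contents, re-reading from disk only when the
  # mtime has changed since the last read.
  def read(path)
    mtime = File.mtime(path)
    entry = @entries[path]
    return entry.content if entry && entry.mtime == mtime

    content = File.read(path)
    @entries[path] = Entry.new(mtime, content)
    content
  end
end
```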
We could go a step further and cache the parsed file contents. I had thought of something similar before, but was wary of the consequences: we run in a multi-threaded, possibly multi-process environment (puma), and caching large files, possibly multiplied by the number of threads/processes, can take up a lot of memory. The usual solution in a web application is to use external storage like `memcached` or `redis` for caching. But this would make hdm deployments more complex, which I do not believe we want.

Discovering puppet's solution made me think about this some more. And I now think this might be viable, if we make it optional.
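To make the trade-off more concrete, here is a rough sketch of what an optional, in-process cache of parsed YAML could look like. Everything in it is an assumption for illustration: the class and its API are made up, the plain `Mutex` is just one way to make it thread-safe under puma, and each puma worker process would still hold its own copy of the cache.

```ruby
require "yaml"

# Illustrative in-process cache of parsed YAML, keyed by path and mtime.
# Guarded by a Mutex so puma threads can share it; each puma worker
# process still keeps its own copy. Names are made up for this sketch.
class ParsedYamlCache
  Entry = Struct.new(:mtime, :data)

  def initialize
    @entries = {}
    @mutex = Mutex.new
  end

  # Return the parsed data, re-parsing only when the file changed on disk.
  def fetch(path)
    mtime = File.mtime(path)
    @mutex.synchronize do
      entry = @entries[path]
      return entry.data if entry && entry.mtime == mtime

      data = YAML.safe_load(File.read(path), aliases: true)
      @entries[path] = Entry.new(mtime, data)
      data
    end
  end
end
```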
If we put this behind a setting in `hdm.yml`, people in (memory-)constrained environments can simply decide not to use it at all. And should we ever want to use `memcached` to store the cached files, it is simply a matter of adding another configuration option.
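Purely for illustration, such a setting might look like the following; none of these keys exist in hdm today and the names are made up:

```yaml
# Hypothetical hdm.yml excerpt – these keys do not exist today and only
# illustrate the idea of an optional, pluggable cache.
file_cache:
  enabled: false          # off by default for memory-constrained setups
  backend: memory         # a "memcached" backend could be added later
  # memcached_servers: "localhost:11211"
```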
(FWIW, as far as I am aware, there is absolutely no urgency for this. It is just something that occurred to me and I did not want it to get lost.)