-
Likely not, because it goes against Kupo's original design goal of being a lightweight and easy-to-use tool. At the moment, it ships as a single binary with no system dependencies whatsoever, and I intend to keep it that way.
Ouch. I will not repeat that to SQLite, I promise. More seriously, I think the key phrase in your sentence is "I believe". Do you have any data points? From what I've seen across different users, Kupo is actually (an order of magnitude) faster than popular Postgres-based alternatives, for a fraction of the resources. Kupo is quite heavily optimized, supports concurrent reads and writes, and has decent performance even for large queries.
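As a rough illustration only (a sketch, assuming a local instance on the default `http://localhost:1442` and the `GET /matches/{pattern}` endpoint; adjust host, port, and patterns to your own setup), here is one way to fire a handful of concurrent reads at a single Kupo instance and watch the latencies:

```python
# Sketch: issue several concurrent reads against one Kupo instance.
# Assumptions: Kupo listens on its default http://localhost:1442 and
# exposes GET /matches/{pattern}; the "*" pattern is just a placeholder.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

KUPO_URL = "http://localhost:1442"  # assumed default host/port
PATTERNS = ["*"] * 8                # hypothetical patterns; use your own

def fetch(pattern: str) -> tuple[str, int, float]:
    """Query /matches for one pattern; return row count and latency."""
    start = time.perf_counter()
    response = requests.get(f"{KUPO_URL}/matches/{pattern}", timeout=60)
    response.raise_for_status()
    return pattern, len(response.json()), time.perf_counter() - start

with ThreadPoolExecutor(max_workers=8) as pool:
    for pattern, rows, elapsed in pool.map(fetch, PATTERNS):
        print(f"{pattern}: {rows} matches in {elapsed:.2f}s")
```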
How is that more expensive than running an auto-scaling Postgres cluster? (Again, a genuine question: if you have actual numbers, I would definitely be interested.) In my experience, Kupo can run on quite modest hardware and is therefore relatively cheap to scale. Having said that, you're right that this is currently the "proper way" of scaling it. Note that I plan to include built-in sharding in the future, but that is more a way to cope with chain growth than to improve current performance.
-
Hello,
The only options I've found for storing Kupo's database are RAM (--in-memory) and disk (--workdir). Does (or will) Kupo support Postgres and/or MySQL for storing the data?
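For reference, this is roughly how we launch it today (a sketch only: the --workdir and --in-memory flags are the ones mentioned above, while the node flags and paths are placeholders to adapt to your own deployment):

```python
# Sketch of the two storage modes, launched from Python for illustration.
# --workdir / --in-memory come from the question above; the other flags
# (--node-socket, --node-config, --match, --since) and paths are assumptions.
import subprocess

on_disk = [
    "kupo",
    "--node-socket", "/path/to/node.socket",  # hypothetical path
    "--node-config", "/path/to/config.json",  # hypothetical path
    "--match", "*",
    "--since", "origin",
    "--workdir", "/var/lib/kupo",              # SQLite files live here
]

# Same command, but RAM-backed: drop --workdir and its argument.
in_memory = on_disk[:-2] + ["--in-memory"]

subprocess.run(on_disk, check=True)
```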
The reason for the question is that I believe queries could be a lot quicker against a proper database server than against the SQLite file, since Postgres can be set up to auto-scale, configured with redundancy, and so on.
So far, the solution we've come up with is to run many Kupo instances in parallel behind a load balancer, which lets us distribute the query load, but this is expensive in terms of resources and relatively hard to scale.
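Conceptually, our setup looks like the sketch below (the replica URLs and the /matches/{pattern} endpoint are placeholders for our actual load balancer and deployment):

```python
# Sketch: several interchangeable Kupo replicas behind a naive round-robin
# dispatcher, standing in for the load balancer. Replica hosts are hypothetical.
from itertools import cycle

import requests

REPLICAS = cycle([
    "http://kupo-1:1442",  # hypothetical replica hosts
    "http://kupo-2:1442",
    "http://kupo-3:1442",
])

def query_matches(pattern: str) -> list[dict]:
    """Send the query to the next replica in round-robin order."""
    replica = next(REPLICAS)
    response = requests.get(f"{replica}/matches/{pattern}", timeout=60)
    response.raise_for_status()
    return response.json()

# Example: successive calls land on different replicas.
for pattern in ["*"] * 3:  # placeholder patterns
    print(len(query_matches(pattern)), "matches via round-robin")
```

Each replica keeps its own database and has to index the same patterns, so this only spreads the read load rather than the storage.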
All insights are welcome.
Thank you all for what you do!