-
Interesting. Here is how I think about it: I run SQLite at the origin (3 data centers in multi-master mode) and keep the copies in Cloudflare as read-only replicas. Mutations go to the origin, and the WAL pushes them out to all the read-only SQLite instances in CF. You can take it further and ship the WAL off the CF SQLite to keep the browser service worker's SQLite up to date as a read-only copy of what's in CF. Workers also run in both CF Workers and Web Workers, using the local SQLite in each. Migrations are an event that is pushed out, just like data is pushed out from the origin.
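As a minimal sketch of the push step, assuming statement-level shipping rather than raw WAL frames (the mechanics of frame shipping are glossed over here): the origin batches committed mutations and fans them out to each read-only replica. The `Mutation` shape, `REPLICA_URLS`, and the `/apply` endpoint are all hypothetical.

```ts
// Hypothetical types and endpoints; a sketch of the fan-out,
// not Cloudflare's or SQLite's API.
type Mutation = { seq: number; sql: string; params: unknown[] };

const REPLICA_URLS = [
  "https://replica-1.example.workers.dev/apply",
  "https://replica-2.example.workers.dev/apply",
];

// Push a batch of committed mutations to every read-only replica.
// Each replica applies statements in `seq` order so all copies
// converge on the same state as the origin.
async function pushMutations(batch: Mutation[]): Promise<void> {
  await Promise.all(
    REPLICA_URLS.map(async (url) => {
      const res = await fetch(url, {
        method: "POST",
        headers: { "content-type": "application/json" },
        body: JSON.stringify(batch),
      });
      if (!res.ok) {
        throw new Error(`replica ${url} rejected batch: ${res.status}`);
      }
    }),
  );
}
```

The same fan-out repeats one level down: a CF replica plays origin for the browser service worker copies.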
-
Because Durable Objects are limited to 1,000 RPS, a built-in feature to scale reads via data replication could be a strong offering for scaling usage.
Things to consider: where the logical components live, and which instance(s) get to decide on data handoff as they ingest incoming data. How do we declare a main source? Is it a single primary handling writes, with each read distributed to the replicas, perhaps by way of WebSockets? A lot of questions; some have clear answers, others need to be mulled over in more depth.
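To make the primary-write question concrete, here is one possible shape for the primary as a Durable Object, assuming replicas subscribe over WebSockets to receive the change stream. The `/write` and `/subscribe` routes and the message framing are invented for illustration; `WebSocketPair` and the 101 upgrade response are standard Workers APIs (types from `@cloudflare/workers-types`).

```ts
// Sketch of a "primary" Durable Object: writes arrive over HTTP, replicas
// subscribe over WebSockets and receive every accepted mutation in order.
export class PrimaryDO {
  private replicas = new Set<WebSocket>();
  private seq = 0;

  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);

    if (url.pathname === "/subscribe") {
      // A replica attaches here; keep its socket for the change stream.
      const { 0: client, 1: server } = new WebSocketPair();
      server.accept();
      this.replicas.add(server);
      server.addEventListener("close", () => this.replicas.delete(server));
      return new Response(null, { status: 101, webSocket: client });
    }

    if (url.pathname === "/write" && request.method === "POST") {
      // Accept the mutation, stamp it with a sequence number, and fan it
      // out. (Applying it to local storage is elided in this sketch.)
      const mutation = await request.text();
      const framed = JSON.stringify({ seq: ++this.seq, mutation });
      for (const ws of this.replicas) ws.send(framed);
      return new Response(framed, { status: 200 });
    }

    return new Response("not found", { status: 404 });
  }
}
```

A replica would open a WebSocket to `/subscribe`, apply each framed mutation in `seq` order, and serve reads locally, which is what would let reads scale past the single object's 1,000 RPS.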
Placing this discussion here as a placeholder to come back to as I chew through the thoughts.