Node.js example with RabbitMQ #2274
Replies: 2 comments 3 replies
-
KEDA is implementation agnostic and doesn't manage the data workflow or delivery to your application. The app needs to consume the events; KEDA just does the scaling. What kind of queues are you going to consume, and from which services? Kafka? RabbitMQ? If so, just implement a consumer app as you usually would, then configure KEDA to scale that app with the proper trigger: https://keda.sh/docs/2.4/scalers/
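For illustration, a minimal ScaledObject for a RabbitMQ-fed Deployment might look roughly like this (the Deployment name `consumer-b`, queue name `work.b`, and env var `AMQP_URL` are placeholders; check the scaler docs linked above for the authoritative parameter list):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: consumer-b-scaler
spec:
  scaleTargetRef:
    name: consumer-b          # hypothetical Deployment running the consumer
  maxReplicaCount: 10         # cap on how far KEDA may scale out
  triggers:
    - type: rabbitmq
      metadata:
        queueName: work.b     # hypothetical queue name
        mode: QueueLength     # scale on queue backlog (the docs' queueLength setting)
        value: "2"            # target number of messages per replica
        hostFromEnv: AMQP_URL # AMQP connection string read from the pod's environment
```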
-
Hello Zbynek,

Thank you for your reply. Perhaps KEDA is easier to use than I thought, but let me try to give you a clearer picture of the application. Deployed on Kubernetes, it consists of several (8-15) pods, each running a single Docker container. The pods/containers know nothing about each other; they 'share' only RabbitMQ, by means of which messages move from one container to another. Depending on the outflow/ingest relationship between an upstream container A and a downstream container B, queues can build up at B. More generally, queues can build up at any container in the application. When this occurs, I would like the HPA to respond appropriately, in this example by creating more instances of B to balance the outflow/ingest.

Consequently, I have a separate component that periodically monitors the several queues to get a sense of this outflow/ingest relationship at each queue. Until now I assumed, wrongly it seems, that my monitor program would need to convey some info to KEDA. But now it looks as if the scaler trigger's queueLength specification might take the place of the monitor program. Am I correct in this? After looking at the scalers page, I still have these questions:
I'm sure you can tell that I'm interested in how dynamic the trigger is. This interest stems from the certainty of real-world dynamism characterized by 'bursty' behavior: suppose that in a normal steady state, container A generates two messages/second but container B can consume/process only one message/second. Under these conditions, in which a message queues at B every second, a queueLength parameter of 1 (or 2) seems right, i.e., KEDA should create another instance of B. But what if container A, which is receiving its input from the real world, suddenly gets a big burst of messages and starts publishing them at eleven messages/second? If I understand how KEDA works (pretty sure I don't), the seemingly fixed queueLength of 2 will fail to cause the creation of more instances of container B, and the result will be ten messages queuing every second. How, if at all, does KEDA handle this?

Thanks very much for your guidance.
-Paul
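For what it's worth, KEDA hands the queue-length metric to the Kubernetes HPA, which scales proportionally rather than one replica at a time; with a per-replica target, the documented HPA arithmetic reduces to `desiredReplicas = ceil(totalQueueLength / target)`. A sketch of that arithmetic (not KEDA's actual source, just the HPA formula):

```javascript
// Proportional scaling as the HPA computes it for an average-value target:
// desiredReplicas = ceil(totalQueueLength / targetPerReplica).
// A burst doesn't add one replica per sync; it jumps to the proportional count.
function desiredReplicas(totalQueueLength, targetPerReplica) {
  return Math.ceil(totalQueueLength / targetPerReplica);
}

// With queueLength (target) = 2, as in the example above:
console.log(desiredReplicas(2, 2));  // steady state: 1 replica suffices
console.log(desiredReplicas(10, 2)); // burst backlog of 10 -> 5 replicas
console.log(desiredReplicas(0, 2));  // empty queue -> 0 (KEDA can scale to zero)
```

So a fixed queueLength of 2 still handles the burst: a backlog of 10 asks for 5 replicas in one step, bounded by whatever max replica count you configure.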
-
Hello,
Having heard good things about KEDA, I am on the verge of trying to get going with it.
My K8S use case includes a Node.js component that periodically obtains the lengths of the application's several queues. Depending on these lengths, I'd like to tell the HPA to scale up (or down) instances of a specific pod.
Has anyone an example of how to do this using a Node.js program?
Thanks very much.
Cordially,
Paul
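For anyone landing here later: per the replies above, no special Node.js-to-KEDA integration is needed; a plain consumer is enough, and KEDA watches the queue itself. A minimal sketch using the third-party amqplib package (`npm install amqplib`; the queue name `work.b` and env var `AMQP_URL` are placeholders):

```javascript
// Plain RabbitMQ consumer; KEDA scales the Deployment running it
// based on the queue's backlog, so the app itself stays KEDA-unaware.
async function startConsumer(amqpUrl, queueName) {
  // require() inside the function so the module loads without a broker present
  const amqp = require('amqplib');
  const conn = await amqp.connect(amqpUrl);
  const ch = await conn.createChannel();
  await ch.assertQueue(queueName, { durable: true });
  ch.prefetch(1); // take one message at a time; backlog stays visible to KEDA
  await ch.consume(queueName, (msg) => {
    if (msg !== null) {
      handleMessage(msg.content);
      ch.ack(msg); // acknowledge only after successful processing
    }
  });
}

// Placeholder for the application's real work on each message.
function handleMessage(body) {
  console.log('received:', body.toString());
}
```

Usage would be something like `startConsumer(process.env.AMQP_URL, 'work.b').catch(console.error);` in the container's entrypoint.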