CPU usage spikes and remains at 100% #2080
Hi @rogerho1224, if you could create a self-contained repro that you can share with me, that would really simplify debugging. Alternatively, you can start your server with the V8 profiler attached and we could have a look at the hot part of the profiler log together.
Thanks for the quick reply @sidorares. Let's try the second option before the first; it's difficult to duplicate the columns and rows in our db in a way that obscures customer data. Could you go into a bit more detail on how you'd like me to start my server and record logs? During our debugging process we already tried profiling CPU / heap in the method outlined here, but the logs didn't match the CPU usage reported by Activity Monitor / kubernetes top.
Try starting the server with `node --prof` and processing the resulting isolate log with `node --prof-process`. See more at https://nodejs.org/en/docs/guides/simple-profiling
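For readers landing here later, the simple-profiling guide linked above boils down to the steps below. This is a sketch; `hot.js` is a hypothetical CPU-bound stand-in for your real server entry point (e.g. the express-generator `bin/www`), which you would substitute in.

```shell
# Work in a scratch directory so the isolate-*.log glob is unambiguous
cd "$(mktemp -d)"

# Hypothetical CPU-bound script standing in for the real server
cat > hot.js <<'EOF'
let sum = 0;
for (let i = 0; i < 1e7; i++) sum += Math.sqrt(i);
console.log("done", sum > 0);
EOF

# --prof makes V8 write a sampling log (isolate-0x...-v8.log)
node --prof hot.js

# --prof-process renders that log as a human-readable ticks summary;
# the [Summary] section shows where CPU time went
node --prof-process isolate-*.log > processed.txt
head processed.txt
```

With a long-running server you would instead send traffic, stop the process, and then run `--prof-process` on the log it left behind.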
We tried that as well during our debugging. The output wasn't useful to me, but let me know what you think. My steps:
One thing I noticed while testing is that if we select the entire contents of our db (200,000 rows / 249 columns), I can get it to spike to 100% with just one query.
Here is our schema with column names and indices removed |
Thanks, I'll have a look at the logs when I have a bit of free time!
Is it just a temporary spike while data is received / deserialised, or does it stay like that for some time after? 200,000 rows / 249 columns is a lot of data, so some CPU spike is expected. Also, how big are the columns typically?
In the first pasted logfile:
This looks way too short, and mostly shows traces for loading modules (require / evaluate).
I can try to create a branch to test a possible fix. Would you be able to try it with your setup @rogerho1224?
CPU usage stays spiked for some time. My server this time is very basic (express-generator), so I'm not sure if it'd be spinning up a child process. I'd be delighted to test your branch. Let me know when it's up.
The VARCHAR(255) columns are probably 10-20 characters on average. The text columns can be a bit longer.
@rogerho1224 I'm very confident the root cause is the same as in #2090 (likely a bug in the V8 optimiser). The solution on the mysql2 side will be to avoid JIT generation of the parser for results with a large number of rows. Closing this issue as a duplicate, please track the progress in #2090
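For context on what "JIT generation of the parser" means here: mysql2 speeds up row parsing by generating a specialised parser function per result-set shape, versus a generic loop that works for any column list. The sketch below illustrates the two approaches; it is not mysql2's actual code, and the function names are made up.

```javascript
// Generic parser: one loop that handles any column list.
function parseRowGeneric(columns, values) {
  const row = {};
  for (let i = 0; i < columns.length; i++) row[columns[i]] = values[i];
  return row;
}

// "Compiled" parser: build a specialised function for this result-set
// shape with new Function, trading one-time codegen cost for faster
// per-row work. With hundreds of columns the generated body gets large,
// which is the kind of code a JIT optimiser can handle poorly.
function compileRowParser(columns) {
  const body = columns
    .map((c, i) => `  ${JSON.stringify(c)}: values[${i}],`)
    .join("\n");
  return new Function("values", `return {\n${body}\n};`);
}

const columns = ["id", "name", "email"];
const values = [1, "Ada", "ada@example.com"];
const generic = parseRowGeneric(columns, values);
const compiled = compileRowParser(columns)(values);
console.log(JSON.stringify(generic) === JSON.stringify(compiled)); // true
```

Falling back to the generic path for very wide or very large result sets avoids feeding the optimiser a huge generated function, at the cost of slightly slower per-row parsing.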
Flagging that our team experienced this issue recently - #1432
We reproduced this with mysql2 versions 2.3.1, 2.3.3, and 3.3.5.
Node Version: 18.16.0
Local Machine: MacBooks running macOS Ventura 13.0.1.
Server Machine: Ubuntu (I can get the version if necessary)
Details:
One of our endpoints joins multiple tables and selects over 400 columns in aggregate from these tables. We're able to reproduce the issue of CPU usage going to 100% and staying there for some time fairly consistently by issuing 10+ network requests to our endpoint in quick succession. Some of the columns are TEXT columns, if that's pertinent.
We've downgraded to 2.3.0 in the meantime, but would love to be able to get back on the latest versions. In particular, being able to use escape / escapeId through a pool without obtaining a connection is important to us.
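For readers unfamiliar with the feature referenced above: `pool.escape` quotes a value and `pool.escapeId` quotes an identifier, without checking a connection out of the pool. The hand-rolled stand-ins below only sketch the semantics; mysql2's real implementation handles many more cases (Buffers, dates, charsets, etc.), so don't use this in production.

```javascript
// Simplified stand-in for pool.escape: quote a value for SQL.
function escapeValue(v) {
  if (v === null) return "NULL";
  if (typeof v === "number") return String(v);
  // Quote a string, backslash-escaping quotes and backslashes
  return "'" + String(v).replace(/[\\']/g, (c) => "\\" + c) + "'";
}

// Simplified stand-in for pool.escapeId: quote an identifier.
function escapeId(id) {
  // Backtick-quote, doubling any embedded backticks
  return "`" + String(id).replace(/`/g, "``") + "`";
}

const sql =
  `SELECT ${escapeId("user name")} FROM t WHERE note = ${escapeValue("it's")}`;
console.log(sql); // SELECT `user name` FROM t WHERE note = 'it\'s'
```

Being able to call these directly on the pool matters when building SQL strings up front, before any connection is available.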
Appreciate your help on this one!