
File limit is too low for larger playlists #30

Open
Zardozz opened this issue Sep 9, 2021 · 8 comments

@Zardozz

Zardozz commented Sep 9, 2021

The squeezeboxserver user has an open file limit of 1024, which causes LMS to generate a lot of errors like the one below when you have a larger playlist:

Slim::Player::Protocols::File::open (101) Error: could not open ..... Too many open files

How to reproduce:-

  1. Start container
  2. run docker exec -it container-id bash
  3. run su - squeezeboxserver -c 'ulimit -n'

Result is 1024

Using the command line docker run --ulimit nofile=90000:90000 .... does not help, as this only changes the limit for the root user (which already has a high enough limit).

Please make the open file limit for the squeezeboxserver user configurable via an environment variable (with the default as the current limit of 1024).
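
A minimal sketch of how the image could apply such a variable at startup (the variable name SQUEEZEBOX_NOFILE and the init hook are assumptions on my part, not something the current image provides):

# run as root during container startup, before LMS is launched
NOFILE="${SQUEEZEBOX_NOFILE:-1024}"   # default keeps the current 1024 limit
# drop any existing nofile entry for the user, then write the configured one
sed -i '/^squeezeboxserver .*nofile/d' /etc/security/limits.conf
echo "squeezeboxserver - nofile ${NOFILE}" >> /etc/security/limits.conf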

@mherger
Contributor

mherger commented Sep 10, 2021

Hmm... there must be something else going wrong (in the container, or the music storage, I don't know): my Pi has the same ulimit of 1024, but I was able to add 1600 tracks to my queue without any problem. LMS shouldn't keep all those file handles open. What is your data store for the music files? And what kind of playlist did you try to play?

@Zardozz
Author

Zardozz commented Sep 10, 2021

So the playlist that has the problem is at the default maximum of 2500 tracks, all large files (over 3000 hours in total).
And it does not always occur (usually when a player reconnects while playing the playlist).

The Docker container is hosted on a Synology NAS and the volumes are mounted from its local btrfs file system.
The version I'm using is Logitech Media Server Version: 8.2.0 - 1627922070

Digging on the internet for a solution gave me:

https://squeezecenter.slimdevices.narkive.com/XrcPhXXI/lms-7-8-0-nightly-too-many-open-files-on-large-playlists-crashes-lms

and later

LMS-Community/slimserver#556

Both of those suggest the solution is to increase the open file limit.
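
For reference, a quick way to check what limit the running server process actually got (assuming the main process can be found with pgrep -f slimserver.pl; -o picks the oldest match if the scanner children also show up):

grep 'Max open files' /proc/$(pgrep -o -f slimserver.pl)/limits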

@michaelherger
Member

So the playlist that has the problem is at the maximum default possible of 2500 of large files (over 3000 hours).

More than one hour per file? Ok, that's different from my test then. Could you check your NAS, whether it's busy on IO when you try to do this? You could also try to up the limit on the NAS, rather than in the container.

@Zardozz
Author

Zardozz commented Sep 10, 2021

The Docker container process limits are below:-

sudo cat /proc/18958/limits
Limit                     Soft Limit   Hard Limit   Units
Max cpu time              unlimited    unlimited    seconds
Max file size             unlimited    unlimited    bytes
Max data size             unlimited    unlimited    bytes
Max stack size            8388608      unlimited    bytes
Max core file size        unlimited    unlimited    bytes
Max resident set          unlimited    unlimited    bytes
Max processes             31015        31015        processes
Max open files            524288       1048576      files
Max locked memory         65536        65536        bytes
Max address space         unlimited    unlimited    bytes
Max file locks            unlimited    unlimited    locks
Max pending signals       31015        31015        signals
Max msgqueue size         819200       819200       bytes
Max nice priority         0            0
Max realtime priority     0            0
Max realtime timeout      unlimited    unlimited    us

So the soft limit is 524288

If you exec a terminal in the container, the root user also has a limit of 524288

The CPU and memory usage of the NAS is very low (less than 1% CPU and 11% of RAM (of 8 GB), across all processes).
The disk light hardly flashes when it happens, but I don't have any I/O data yet; I've enabled some utilisation logging, so I should be able to get better data.
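
To see whether LMS is actually holding handles open (rather than the limit simply being too low), the descriptor count of the process can be sampled while the playlist plays. A rough sketch, using the PID from the output above:

while true; do
    echo "$(date +%T) open fds: $(ls /proc/18958/fd | wc -l)"
    sleep 5
done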

@Zardozz
Author

Zardozz commented Sep 10, 2021

Some more data

[Attached graphs: volume transfer rate, volume IOPS, volume % utilisation rate, memory usage, CPU usage]

I also tried manually hacking the container's limits.conf via an interactive bash shell to up the nofile limit to 5000 files for the squeezeboxserver user.

This solves the "Slim::Player::Protocols::File::open (101) Error: could not open ..... Too many open files" issue

But I don't think upping the nofile limit really solves the underlying issue with LMS, as it is still a bit unresponsive while the event happens. It does seem a bit quicker, though, since with the increased nofile limit it is no longer spamming the log with error messages.

The old native install of LMS using 7.x never had this problem with this same playlist.

While in LMS-Community/slimserver#556 the reporter said the nightly build of 8.2 did not fix the issue for them, I might give the 8.3 container a try.

@Zardozz
Author

Zardozz commented Sep 14, 2021

Tried 8.3 container and I still get the problem with number of open files.

But as a hack to work around LMS needing more open files for large playlists than the container has configured for the squeezeboxserver user, you can create a custom-init.sh script for 8.2 and above containing:-

sed -i '$c squeezeboxserver hard nofile 5000' /etc/security/limits.conf

This replaces the last line of limits.conf (normally blank) with an updated limit.
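
A variant of the same hack (my assumption, not tested against the image) that appends explicit soft and hard limits instead of overwriting the last line:

echo 'squeezeboxserver soft nofile 5000' >> /etc/security/limits.conf
echo 'squeezeboxserver hard nofile 5000' >> /etc/security/limits.conf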

@uudruid74

uudruid74 commented Jan 4, 2022

I can instantly hit the limit using UPnP with certain FLAC files. Certain FLACs end up loading more Perl libraries and I bang the 1024 file limit instantly. It may be because I have a lot of devices and both UPnP and Chromecast bridges, but it all works until I play the right FLAC, which is ANYTHING from Tidal!

I copied slimserver to /tmp so I could change the limit (used 4096), and the track played right off with no issues.

This is a MUCH needed change!

UPDATE: I spoke too soon. Within about 2 tracks it exhausted the 4096 file limit. lsof shows them all open. Certainly a huge bug here in the UPnP plugin that ulimit can't fix.
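
If it gets that far again, grouping the open descriptors by path may show what the UPnP plugin is leaking. A sketch, assuming lsof is available in the container and <lms-pid> is filled in by hand:

lsof -p <lms-pid> | awk '{print $NF}' | sort | uniq -c | sort -rn | head -20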


This issue is stale because it has been open for 720 days with no activity.

@github-actions github-actions bot added the stale label Oct 18, 2024