File limit is too low for larger playlists #30
Comments
Hmm... there must be something else going wrong (in the container, or the music storage, I don't know): my Pi has the same ulimit of 1024, but I was able to add 1600 tracks to my queue without any problem. LMS shouldn't keep all those file handles open. What is your data store for the music files? And what kind of playlist did you try to play?
So the playlist that has the problem is at the default maximum of 2500 tracks, all large files (over 3000 hours in total). The Docker container is hosted on a Synology NAS and the volumes are mounted from its local btrfs file system. Digging on the internet for a solution turned up a couple of discussions, both of which suggest the solution is to increase the open file limit.
More than one hour per file? OK, that's different from my test then. Could you check your NAS, whether it's busy on IO when you try to do this? You could also try to up the limit on the NAS, rather than in the container.
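If it helps, a quick sketch of that IO check (iostat comes from the sysstat package; whether it is available on Synology DSM is an assumption, and any IO monitor will do):

```sh
# Watch per-device IO utilisation every 2 seconds while reproducing
# the problem; a saturated disk shows up as high %util.
iostat -x 2
```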
The Docker container process limits are below:
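(A sketch of one way to read these from a running container, assuming the container's main process is PID 1 inside it:)

```sh
# Print the resource limits of the container's main process;
# look for the "Max open files" row.
docker exec container-id cat /proc/1/limits
```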
So the soft limit is 524288. If you exec a terminal in the container, the root user also has a limit of 524288. The CPU and memory usage of the NAS is very low (less than 1% CPU and 11% of 8 GB RAM, for all processes).
Some more data: the volume transfer rate (screenshot not preserved). I also tried manually hacking the container's limits.conf via an interactive bash shell to up the nofile limit to 5000 files for the squeezeboxserver user. This solves the "Slim::Player::Protocols::File::open (101) Error: could not open ..... Too many open files" issue. But I don't think upping the nofile limit really solves the underlying issue with LMS, as it is still a bit unresponsive while the event happens; it does seem a bit quicker, though, since with the increased nofile limit it is no longer spamming the log with error messages. The old native install of LMS using 7.x never had this problem with this same playlist, while in LMS-Community/slimserver#556 the reporter said the nightly build of 8.2 did not fix the issue for them. I might give the 8.3 container a try.
Tried the 8.3 container and I still get the problem with the number of open files. But as a hack to work around LMS needing more open files for large playlists than the container configures for the squeezeboxserver user, you can create an override that replaces the last line of limits.conf (normally blank) with an updated limit.
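A minimal sketch of that kind of override, assuming the stock /etc/security/limits.conf path and the 5000-file value mentioned above:

```sh
# Append soft/hard nofile limits for the squeezeboxserver user.
# The value 5000 and the file path are assumptions; adjust to taste.
docker exec container-id bash -c \
  'printf "squeezeboxserver soft nofile 5000\nsqueezeboxserver hard nofile 5000\n" >> /etc/security/limits.conf'
```

Note that a change made this way only lasts until the container is recreated; baking it into a derived image or a startup hook is more durable.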
I can instantly hit the limit using UPnP with certain FLAC files. Certain FLACs end up loading more Perl libraries and I bang into the 1024-file limit instantly. It may be because I have a lot of devices and both the UPnP and Chromecast bridges, but it all works until I play the right FLAC, which is ANYTHING from Tidal! I copied slimserver to /tmp so I could change the limit (used 4096), and the track played right off with no issues. This is a MUCH needed change! UPDATE: I spoke too soon. Within about 2 tracks it exhausted the 4096 file limit. lsof shows them all open. Certainly a huge bug here in the UPnP plugin that ulimit can't fix.
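For anyone wanting to watch the descriptors pile up the same way, a sketch of that kind of lsof check (the pgrep pattern assumes the server's main script is named slimserver.pl):

```sh
# Count the file descriptors currently held by the LMS server process
# (take the first matching PID if several are found).
lsof -p "$(pgrep -f slimserver.pl | head -n1)" | wc -l
```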
This issue is stale because it has been open for 720 days with no activity.
The squeezeboxserver user has an open file limit of 1024, which causes LMS to generate a lot of errors like the one below when you have a larger playlist:

```
Slim::Player::Protocols::File::open (101) Error: could not open ..... Too many open files
```
How to reproduce:

```sh
docker exec -it container-id bash
su - squeezeboxserver -c 'ulimit -n'
```

The result is 1024.
Using the command line

```sh
docker run --ulimit nofile=90000:90000 ....
```

does not help, as this changes the limit for the root user (which already has a high enough limit).

Please change it so that the open file limit for the squeezeboxserver user is configurable via an environment variable (with the default as the current limit of 1024).
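A minimal sketch of what that could look like, assuming a hypothetical NOFILE_LIMIT environment variable applied by the image's entrypoint before it drops privileges (the variable name, entrypoint shape, and server path are all assumptions, not existing options of this image):

```sh
#!/bin/sh
# Hypothetical entrypoint fragment: apply a configurable nofile limit,
# defaulting to today's 1024, before starting LMS as squeezeboxserver.
NOFILE_LIMIT="${NOFILE_LIMIT:-1024}"
ulimit -n "$NOFILE_LIMIT" || echo "warning: could not set nofile to $NOFILE_LIMIT" >&2

# The raised limit is inherited across the user switch below.
# (If pam_limits re-applies limits.conf on su, that file needs updating too.)
exec su - squeezeboxserver -c '/usr/sbin/squeezeboxserver'
```

Usage would then be something like `docker run -e NOFILE_LIMIT=5000 ...`.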