Check open file limit for Traccar process on your system.
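For example, something like this should show the limit that actually applies to the running process (assuming the Traccar Java process can be found with pgrep; adjust the pattern to match your install):
pgrep -f traccar
cat /proc/<pid from the previous command>/limits | grep "Max open files"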
@Adrian Ojeda - Have you found a solution? I am facing the same issue.
Hey @Anisha Vishnoi, unfortunately I couldn't fix it yet. The only solution I've found so far is to restart the server once a week :(
I tried increasing the ulimit value on the system, but it didn't work.
This is what I did:
open the file /etc/security/limits.conf
append this:
root soft nofile 50000
root hard nofile 50000
and to verify
ulimit -Hn
ulimit -Sn
Maybe it can help you. Please let me know if you find something else!
Just as extra information, I'm using an EC2 instance in AWS.
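One more thing that might help narrow it down (just a way to watch the problem build up, not a fix): you can count the open file descriptors of the Traccar process from time to time and compare that number against the "Max open files" limit, using the process ID from htop:
ls /proc/<traccar process id>/fd | wc -l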
Hello, I have the same issue (Too many open files).
I tried increasing the ulimit value on the system, but it didn't work.
This is what I did:
open the file /etc/security/limits.conf
append this:
root soft nofile 50000
root hard nofile 50000
and to verify I also ran:
ulimit -Hn
ulimit -Sn
And I always get 4096 for -Hn and 1024 for -Sn.
I've already tried several approaches found in forums, and none of them worked.
My system is CentOS Linux 7.6.1810.
Can anyone shed some light on this? Thank you.
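One guess, in case it applies to your setup: if Traccar is running as a systemd service on CentOS 7, I believe the limits.conf values won't reach it at all, since (as far as I know) systemd services don't go through pam_limits. In that case the limit would have to be set on the service itself, something along these lines (the drop-in path and service name are assumptions, check what your install actually uses):
sudo mkdir -p /etc/systemd/system/traccar.service.d
sudo tee /etc/systemd/system/traccar.service.d/limits.conf <<'EOF'
[Service]
LimitNOFILE=50000
EOF
sudo systemctl daemon-reload
sudo systemctl restart traccar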
Well, I followed these steps:
1 - Set up the configuration files following this -> https://gist.github.com/luckydev/b2a6ebe793aeacf50ff15331fb3b519d
2 - Run htop on the console to see the running processes and get the traccar-server.jar process ID
3 - Then run: cat /proc/<put the process ID here>/limits
4 - Check the "Max open files" line; if the limit is different from the one you specified in step 1, try restarting Traccar and go back to step 2
One more thing: keep in mind you are changing the max open files configuration for the root user, so when you start/restart the Traccar server, make sure you always do it as root with sudo, e.g. sudo /opt/traccar/bin/startDaemon.sh
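Another option, if you start it by hand like that, is to raise the soft limit in the same root shell right before launching the daemon, so the Java process inherits it (just a sketch; it assumes the hard limit for root is already at least 50000 as per the limits.conf change above):
sudo sh -c 'ulimit -n 50000 && /opt/traccar/bin/startDaemon.sh'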
Hi, thanks for your answer. I tried to open the link in step 1 but it doesn't work.
Can you please check if it's OK?
Regards
Weird... let me copy the content here:
maximum capability of system
user@ubuntu:~$ cat /proc/sys/fs/file-max
708444
available limit
user@ubuntu:~$ ulimit -n
1024
To increase the available limit to say 200000
user@ubuntu:~$ sudo vim /etc/sysctl.conf
add the following line to it
fs.file-max = 200000
run this to refresh with new config
user@ubuntu:~$ sudo sysctl -p
edit the following file
user@ubuntu:~$ sudo vim /etc/security/limits.conf
add the following lines to it
* soft nofile 200000
* hard nofile 200000
edit the following file
user@ubuntu:~$ sudo vim /etc/pam.d/common-session
add this line to it
session required pam_limits.so
logout and login and try the following command
user@ubuntu:~$ ulimit -n
200000
now you can increase the number of connections per Nginx worker
in Nginx main config /etc/nginx/nginx.conf
worker_connections 200000;
worker_rlimit_nofile 200000;
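Going back to the fs.file-max part above: /proc/sys/fs/file-nr can also be useful, since it shows the system-wide handle usage as three numbers (allocated, unused, and the fs.file-max value), which tells you whether you're hitting the global limit or just the per-process one:
user@ubuntu:~$ cat /proc/sys/fs/file-nr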
I'm using an Amazon EC2 instance as well, with "Amazon Linux AMI release 2018.03" installed. I'm having the problem in just one of the two instances I have running. I don't really know what the problem is.
Hey there!
I'm having a weird issue on my Traccar server. I have a client with almost 100 devices (tk103, gt06 and vt300). The platform works well, but every 2 weeks the server hangs and I get this error message in wrapper.log.
If I restart the server, I don't have issues for another 2 weeks.
I have optimized my OS following this link https://www.traccar.org/optimization/ but without luck; the same issue happens again.
I'm using an AWS EC2 instance (Amazon Linux AMI release 2017.03), m5.large (2 vCPU, 8 GB RAM).
Thank you so much in advance!