You can expect these questions:
It would be a good idea to tell us how much RAM you have in your server.
What specs does your server have (CPU / storage)?
What have you done so far to optimize?
Thanks for your response,
Concerning your questions:
We're deploying the whole stack (DB, Traccar, UI) on the same server using Docker Compose. We've noticed that the stack doesn't use much CPU (less than 5%) until it processes a heavy task like clearing history data.
entrypoint:
- java
- -Xms3g
- -Xmx3g
- -Djava.net.preferIPv4Stack=true
• 11.5 GB of RAM allocated to MySQL (InnoDB):
innodb_buffer_pool_size = 11.5G
innodb_log_file_size = 512M
innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 0
We also followed the instructions in the documentation.
We run the script to clear history every day at 06:00, which makes the server really slow for about 1 to 2 hours.
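Roughly, the cleanup is along these lines (a simplified sketch, assuming Traccar's default tc_positions table and its fixtime column; the 90-day retention and batch size are illustrative values only):

-- Simplified history cleanup (illustrative): deleting in small batches
-- keeps each transaction short, so the 06:00 run holds locks on
-- tc_positions for less time.
DELETE FROM tc_positions
WHERE fixtime < DATE_SUB(NOW(), INTERVAL 90 DAY)
LIMIT 10000;
-- Repeat from a loop until no rows are affected; one huge DELETE over
-- months of data is what tends to stall the whole server.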
We have about 6xx to 7xx active devices, and we expect to reach 1xxx around the end of the year; every device reports its position every 10 seconds.
I'll comment on your original questions:
When should we scale Traccar (i.e., allocate more RAM)?
Memory is mostly needed for database caching, so depending on your usage, you need more RAM when the cache no longer has enough room for the data you access frequently.
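For example, with MySQL/InnoDB (as in your setup) you can sanity-check this with the standard status counters below; if Innodb_buffer_pool_reads (requests that missed the cache and went to disk) grows quickly compared to Innodb_buffer_pool_read_requests, the frequently used data no longer fits in the buffer pool:

-- Standard InnoDB status counters:
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests'; -- logical read requests
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';         -- reads that had to go to disk
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_free';    -- unused buffer pool pages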
Is Redis more beneficial than the InnoDB cache for Traccar?
We don't use Redis for caching.
What is the benefit of scaling Traccar horizontally?
Traccar can scale vertically for a pretty large number of devices, but there's always a limit. When you get to maybe 10k-20k devices, depending on reporting frequency of course, a single server won't be able to handle all the data. So you would have to scale.
Another reason to "scale" horizontally is redundancy. A common setup is to have two instances, so if something happens to one instance you can switch to the second one.
Thanks a lot guys, that was very informative.
Hi there,
First of all, I'm really thankful to the team for all the huge work behind Traccar. Having used Traccar for a considerable time, I have some questions:
We ask all these questions because we're facing a strange issue. When users make heavier requests (get history/replay, get reports, ...), the RAM usage grows little by little until it reaches 100% (most of it due to MySQL caching), and our server stops getting new positions from Teltonika devices until we restart the server and the database. The weird thing is that we have about 10 TK devices that still work like a charm during the same period.
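For what it's worth, one way to see which parts of MySQL are actually holding the memory when it approaches 100% (assuming MySQL 5.7+ with the sys schema and performance_schema memory instrumentation enabled; just a diagnostic sketch):

-- Top MySQL memory consumers by currently allocated bytes
-- (the sys view is already sorted by current allocation):
SELECT event_name, current_alloc
FROM sys.memory_global_by_current_bytes
LIMIT 10;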