Our queries already have indexes.
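For reference, you can verify which indexes actually exist by querying PostgreSQL's catalog. A minimal sketch, assuming the default `tc_positions` table name:

```sql
-- List all indexes defined on the positions table (PostgreSQL).
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'tc_positions';
```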
I see, alright. So how can I optimise them for searching in the positions table?
I don't have much of a solution for you, except to mention that our own traccar server has approx 32 million rows in the positions table, and I don't notice any major slowdown for normal use, live tracking and notifications.
If I do a complex query in the web interface that returns a very large number of position results, then it can take some time to render the result.
Perhaps your DB server or web server requires more resources (CPU, RAM)? We are running low-to-middle range Amazon AWS services, with the DB host separated from the web server.
Hey Angus! Thanks for the comment! I actually just finished isolating the DB server from the Traccar server in two different virtual machines using Google Cloud Compute, and it seems to have solved my issue. I also increased the memory available to Traccar to 1 GB through the traccar.service file. Running
EXPLAIN ANALYZE SELECT * FROM public.tc_positions
yielded an execution time of 20-30 seconds before. With these changes it now takes 0.4-0.7 seconds, which is quite an improvement. I guess the problem was that I had Traccar and the DB on a single VM, so thanks for the comment. By the way, I'm running the DB on 2 cores and 2 GB of RAM and it's running smoothly now :)
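For anyone benchmarking the same way: a full-table `SELECT *` mostly measures sequential-scan and data-transfer speed rather than real query performance. A more representative test filters by device and time, which is what the web interface does. A sketch, assuming the standard Traccar schema columns `deviceid` and `fixtime` (the device id value here is just a placeholder):

```sql
-- Hypothetical benchmark: fetch one device's positions from the last day.
EXPLAIN ANALYZE
SELECT *
FROM public.tc_positions
WHERE deviceid = 12                          -- placeholder device id
  AND fixtime >= now() - interval '1 day'
ORDER BY fixtime DESC;
```

If the plan shows a sequential scan on a large table for this kind of query, that is usually the sign an index would help.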
What are your hardware specs on AWS, Angus?
Nelio
Could you explain how you got it working across two machines? I have already tried everything to run my MySQL database on a separate VPS, and even then I did not succeed.
What do you mean? Do you mind explaining in detail what your issue is and what you tried to do to solve it?
Nelio - I am using AWS Aurora for my DB host, with db.t2.medium level resources. There are a number of other DBs on the same resource, so it's not dedicated to traccar.
For the web host, it uses an AWS EC2 t2.small resource in the same region, which is running Ubuntu 16.04 and is dedicated to traccar.
We have several users constantly watching the website, and nearly 40 devices sending very regular updates.
Hi Anton,
I've been using traccar and it's been great so far. However, because I have about 20 devices logging at 10-second intervals, I end up with over 200k rows of data in a short period of time, and I find that querying the positions table takes very long.
In one of the blog posts where you helped someone, you suggested that they create queries with indices. Were you suggesting editing the default queries found in the
default.xml
file? If so, how would I achieve this with a PostgreSQL database?
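If it helps, one common approach (this is a general PostgreSQL sketch, not something from Traccar's default.xml, and it assumes the standard `tc_positions` columns `deviceid` and `fixtime`) is to add a composite index matching the usual lookup pattern of "positions for device X in a time range":

```sql
-- Composite index for per-device, time-ranged position queries.
-- CONCURRENTLY avoids locking the table while the index is built,
-- but it cannot run inside a transaction block.
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_tc_positions_deviceid_fixtime
ON public.tc_positions (deviceid, fixtime);
```

Note that you don't need to edit default.xml for this: indexes are created directly on the database, and the query planner will use them automatically for the existing queries.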