I'm running Traccar on 1 vCPU with 1 GB of RAM and have no issues at all. Sure, it takes a little time to generate a report (though less than a minute), but it has a 100% success rate.
What's your DB size?
I have a 4 GB database out of 10 GB. It is too slow to generate reports; I am looking to increase performance.
I am using Microsoft Azure MySQL for the database. It is a remote MySQL server.
Have you looked at your database to see if it's properly indexed and tuned?
Wouldn't the Traccar installation create the appropriate indexes since it's creating the DB as well?
I have a 35 GB database, 4 CPUs and 12 GB of RAM. Once it reached 5 GB it practically stopped producing reports. I swapped the mechanical disk for an SSD and the problem was solved. SSDs are essential; 90% of report problems come down to disk read speed.
I am using remote MySQL on Microsoft Azure. It has 1 vCPU core and 4 GB used out of 10 GB of storage. My VPS has 2 vCPUs, 4 GB of RAM and 30 GB of space. Do I need to change the disk of the VPS or of the database (MySQL)?
"Wouldn't the Traccar installation create the appropriate indexes since it's creating the DB as well?"
I suggest going into the database and looking for yourself. For a small installation, it probably won't make a huge difference. But 4 GB of data isn't all that small.
All the intersection tables I looked at (for example, tc_device_attributes) do not have indexes on the foreign key columns.
A table like tc_events which could grow quite large over time also doesn't have an FK index on deviceid, positionid or geofenceid.
The biggest table, tc_positions, does have an index on deviceid / devicetime. I have not looked in the Java code, but I would expect to see some sort of index on the latitude / longitude columns as well.
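As a sketch of what checking and adding those FK indexes might look like (MySQL syntax; the table and column names are the ones mentioned above, and the index names are my own invention — verify against your actual schema before running anything):

```sql
-- See what indexes a table currently has
SHOW INDEX FROM tc_events;

-- Add indexes on the foreign key columns if they are missing
-- (index names are hypothetical; pick names that fit your conventions)
CREATE INDEX idx_events_deviceid   ON tc_events (deviceid);
CREATE INDEX idx_events_positionid ON tc_events (positionid);
CREATE INDEX idx_events_geofenceid ON tc_events (geofenceid);
```

On a large table the CREATE INDEX itself can take a while and adds some write overhead, so it's worth testing on a copy first.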
Then there are the typical database tuning parameters, which are more DB-specific (memory allocation, caches and so on), to be looked at.
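For MySQL/InnoDB, the usual first thing to look at is the buffer pool. A rough illustration (the 2 GB figure is just an example for a box with ~4 GB of RAM, not a recommendation; on Azure Database for MySQL you would change this as a server parameter in the portal rather than with SET GLOBAL):

```sql
-- Check how much memory InnoDB is allowed to use for caching data and indexes
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

-- On a self-managed MySQL 5.7+ server the buffer pool can be resized online.
-- Example value only -- size it to your own machine:
SET GLOBAL innodb_buffer_pool_size = 2 * 1024 * 1024 * 1024;  -- ~2 GB
```

If the working set of tc_positions fits in the buffer pool, report queries stop hammering the disk, which ties in with the SSD observation above.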
As an example, I have my own reporting database on PostgreSQL which collects data from an older tracking system (not Traccar), where I store the locations in indexed geography datatype columns and where I have implemented indexes on the relevant columns.
I run monthly analytic reports (stop report, trip report, travel time and distance within and outside of authorised hours, and so on), written in Python.
Those run through in a fraction of the time that the Traccar online reports take, exporting to Excel spreadsheet format.
Of course that's not really a fair comparison, as I use stored functions (procedures) to do the grunt work and then return the result set to Python, but still...
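For what it's worth, the pattern is roughly this (PostgreSQL/PostGIS syntax; the table, column and function names here are hypothetical, invented for illustration, not from my actual schema):

```sql
-- A GiST index on the geography column makes spatial lookups cheap
CREATE INDEX idx_locations_geog ON locations USING GIST (geog);

-- A stored function does the aggregation server-side and returns
-- only the final number to the Python client
CREATE FUNCTION monthly_trip_km(p_device int, p_month date)
RETURNS numeric LANGUAGE sql STABLE AS $$
  SELECT sum(ST_Distance(geog, prev_geog)) / 1000
  FROM (
    SELECT geog,
           lag(geog) OVER (ORDER BY fixtime) AS prev_geog
    FROM locations
    WHERE deviceid = p_device
      AND fixtime >= p_month
      AND fixtime <  p_month + interval '1 month'
  ) t;
$$;
```

The point is only that the heavy row-by-row work stays inside the database instead of being shipped over the wire.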
We are still in trial on Traccar, so I have not done much on the indexes yet, but that is on the to do list.
As always, YMMV: I am still a Traccar newbie looking in, I am sure others have some better ideas.
Arvind, a quick question: I have a similar problem which I am troubleshooting at the moment.
You said you have a 4 GB database out of 10 GB. What server plan are you running on, and who is the provider?
My database is 30 GB. The system is on another disk; I use the SSD only for the database.
I am using Microsoft Azure VM and database.
Hi Anton,
I'm on Traccar version 4.5, with 2 vCPUs and 4 GB of RAM on my VPS. When I try to generate reports it takes too much time, and the success rate is about 20%.
Most of the time the report gets stuck.
I have the same problem with the Android API and the PHP API.
How can I improve report generation performance?
Are there any settings that can help?