I'm not sure I understand the issue.
You can have the data written to the database first. This may require bypassing some kind of control mechanism, but eventually all the data is uploaded to the database and the devices' memory is cleared.
Thanks guys for your responses. Anton, I'll attach a screenshot to demonstrate.
As I write this message it's 06-08-2024 23:21, and this device, after I started the server, is sending old data that was cached on the GPS earlier in the day. It has about a full day of cached data. Is there a way to process more positions at a time, instead of roughly 10 per second as shown in the screenshot?
That's a question to your device vendor.
If I understand correctly, Traccar can handle as many positions as we send simultaneously (server resources permitting)? For example, if I send 1000 positions at once, will Traccar process them asynchronously?
Yes, assuming the protocol supports it.
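To illustrate what "the protocol supports it" means in practice: many trackers pack several cached position records into a single upload, and the server splits that payload into individual positions. A minimal sketch, using a hypothetical comma-separated text format (this is not a real Traccar protocol, just an illustration of batch decoding):

```python
# Sketch: decoding a batch of cached positions delivered in one upload.
# The record format (timestamp,lat,lon per line) is hypothetical and only
# illustrates how one connection can carry many positions.

def decode_batch(payload: str) -> list[dict]:
    """Split one upload into individual position records."""
    positions = []
    for line in payload.strip().splitlines():
        ts, lat, lon = line.split(",")
        positions.append({"time": ts, "lat": float(lat), "lon": float(lon)})
    return positions

# One connection delivers several cached positions at once:
payload = (
    "2024-08-06T10:00:00,48.85,2.35\n"
    "2024-08-06T10:00:10,48.86,2.36"
)
batch = decode_batch(payload)
```

Whether the server can ingest positions faster than ~10/s then depends on how many records the device is willing to pack per message and how quickly it sends the next batch after each acknowledgement.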
Thanks a lot
Hi there,
We're facing an issue with our Traccar server and would appreciate your input on a potential solution/prevention.
We have approximately 1300 devices that can cache data internally when disconnected from the server. After a recent 5+ hour server outage, these devices accumulated cached data. Upon server restoration, they began sending this cached data in batches: each device sends a portion of its cached data, disconnects, gathers more data before it can connect again, and repeats the process, creating an endless loop.
Do you think a single Traccar instance could be tuned to process more simultaneous requests/connections?
Would spreading the load across multiple Traccar instances (horizontal scaling) prevent this kind of issue?
Thanks for your expertise.
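For context, the horizontal-scaling setup we have in mind would put a TCP load balancer in front of several Traccar instances. A minimal HAProxy sketch of that idea (port 5023, hostnames, and addresses are assumptions, not our actual setup); `balance source` keeps each device pinned to the same backend so protocols with per-connection state keep working:

```
frontend gps_in
    bind *:5023
    mode tcp
    default_backend traccar_nodes

backend traccar_nodes
    mode tcp
    balance source
    server traccar1 10.0.0.11:5023 check
    server traccar2 10.0.0.12:5023 check
```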