The max memory setting should definitely limit the memory, so I'm not sure why it doesn't work for you.
As for the issue in general, I think it's expected that a lot of memory will be used if you are running a big report.
This is the wrapper.config file:
wrapper.java.command=java
wrapper.java.app.jar=tracker-server.jar
wrapper.app.parameter.1=./conf/traccar.xml
wrapper.java.additional.1=-Dfile.encoding=UTF-8
wrapper.java.maxmemory=1024
wrapper.logfile=logs/wrapper.log.YYYYMMDD
wrapper.logfile.rollmode=DATE
wrapper.ntservice.name=traccar
wrapper.ntservice.displayname=Traccar
wrapper.ntservice.description=Traccar
wrapper.daemon.run_level_dir=${if (new File('/etc/rc0.d').exists()) return '/etc/rcX.d' else return '/etc/init.d/rcX.d'}
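One way to double-check that wrapper.java.maxmemory actually translates into a heap cap is to print the effective limit from inside a JVM started with the same options; a minimal sketch (a standalone class, not part of Traccar):

// HeapLimitCheck.java - hypothetical standalone class, not part of Traccar.
// Launch it with the same options the wrapper generates (e.g. -Xmx1024m)
// to confirm that wrapper.java.maxmemory=1024 really caps the heap.
public class HeapLimitCheck {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.printf("Effective max heap: %d MiB%n", maxBytes / (1024 * 1024));
    }
}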
It is interesting, because I have an active device that has been working fine for several months. I tried the filter on a particular date range from 4 months ago (a whole week) and the report is retrieved with no problems, but when I extend that date range from that valid week until today, the WebSocket error is displayed and the memory starts to drain; on average it drains 100MB every 25 seconds.
I agree with you that a lot of memory is used when running a big report, but in the failure case it seems like there is a missing "connection release" or something when the WebSocket error is raised.
Not trying to teach you anything, just trying to understand what could be causing the slow drain.
By the way, when this happens, the only way to get the service back to normal is to stop the daemon completely and start it again; otherwise the remaining memory stays "unreachable", because I am not even able to navigate to the login page.
There is no cancellation of requests. If you requested a report, Traccar will finish it.
I understand, the cancellation of requests is out of scope, thanks.
But what about that memory leak? Could it be on the Traccar side, or the RDS side?
Not quite sure how I could fix it from my side, actually.
How do you measure memory? Usually Java doesn't release memory to the OS after it has allocated something.
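For example, the heap the JVM has reserved ("committed") is what an OS-level dashboard sees, and it normally stays reserved even after garbage collection frees the objects inside it. A rough sketch (plain Java, nothing Traccar-specific) that shows the difference between used and committed heap:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

// Rough sketch: "used" is live data, "committed" is what the JVM has reserved
// from the OS; only the committed figure is visible to an external dashboard.
public class HeapVsOsView {
    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.printf("used: %d MiB, committed: %d MiB, max: %d MiB%n",
                heap.getUsed() / (1024 * 1024),
                heap.getCommitted() / (1024 * 1024),
                heap.getMax() / (1024 * 1024));
    }
}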
The way I measure it is on the AWS dashboard; it displays the total memory and the available memory. I just run the filters, watch the dashboard, and see the slow drain there. Does that answer your question?
Then I think it's expected. See my previous comment.
Got it, sir. Not quite sure why more people aren't struggling with this; anyway, I'll try to look for a workaround.
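One workaround I'm considering: look up the device's last stored position directly in the database, so the report window can stay small instead of guessing 90 days. A rough sketch, assuming the standard Traccar MySQL schema (tc_positions with deviceid and fixtime columns); the connection details and device id are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Rough sketch: find the last time a device wrote a position, so the report
// date range can be narrowed. Requires the MySQL JDBC driver on the classpath.
public class LastFixTime {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://my-rds-endpoint:3306/traccar"; // placeholder
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT MAX(fixtime) FROM tc_positions WHERE deviceid = ?")) {
            stmt.setLong(1, 12345L); // placeholder device id (see tc_devices)
            try (ResultSet rs = stmt.executeQuery()) {
                if (rs.next()) {
                    System.out.println("Last position time: " + rs.getTimestamp(1));
                }
            }
        }
    }
}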
I'll keep you posted, thank you.
Hello, I have an AWS RDS MySQL database that my Traccar service connects to. The problem is that for one of my devices (a Coban 303F), which was reported for sending no data, I try to get a report; since I do not know when the device last wrote to the database, I filter from roughly 90 days ago until today. That is where the problem starts: an "Error on WebSocket" message is shown, then I am unable to see the site any longer, and when I check the memory on AWS, it has been drained in less than 10 seconds!
I ran this test on an instance with 2GB of memory; at the very beginning the available memory is around 1500MB, and after running the aforementioned filter three times in a row it drops to about 150MB.
I already tried changing the wrapper.config file and added the property
wrapper.java.maxmemory=1024
but no luck. Any idea, suggestion, or comment?