Server: Upgrade 5.7 to 5.8: loop with INFO: Waiting for changelog lock.... messages

rolsch (a year ago)

Hi.

After the 5.8 update I see this endless loop.
I use a plain H2 DB with 4 devices.

When I revert the 5.8 upgrade (i.e. restore the backed-up DB, config, and the 5.7 installer),
the Traccar server comes online with no error messages.

So I have no idea how I can update to 5.8.

2023-06-29 00:00:25  INFO: Waiting for changelog lock....
2023-06-29 00:00:35 ERROR: Main method error - Could not acquire change log lock.  Currently locked by hidden (hidden) since 6/28/23, 6:01 PM ->
2023-06-29 00:00:46  INFO: Operating system name: Linux version: 5.10.0-23-amd64 architecture: amd64
2023-06-29 00:00:46  INFO: Java runtime name: OpenJDK 64-Bit Server VM vendor: Eclipse Adoptium version: 17.0.6+10
2023-06-29 00:00:46  INFO: Memory limit heap: 10240mb non-heap: 0mb
2023-06-29 00:00:46  INFO: Character encoding: UTF-8 charset: UTF-8
2023-06-29 00:00:46  INFO: Version: 5.8
2023-06-29 00:00:46  INFO: Starting server...
2023-06-29 00:00:46  INFO: HikariPool-1 - Starting...
2023-06-29 00:00:47  INFO: HikariPool-1 - Added connection conn0: url=jdbc:h2:./data/database user=SA
2023-06-29 00:00:47  INFO: HikariPool-1 - Start completed.
2023-06-29 00:00:47  INFO: Set default schema name to PUBLIC
2023-06-29 00:00:47  INFO: Clearing database change log checksums
2023-06-29 00:00:48  INFO: Waiting for changelog lock....
2023-06-29 00:00:58  INFO: Waiting for changelog lock....
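
The lock message above comes from Liquibase, which Traccar uses for its schema migrations: the lock is a single row in the DATABASECHANGELOGLOCK table, and if an upgrade attempt is interrupted that row can stay marked as locked, which matches the "Currently locked by ... since 6/28/23" error. What follows is only a hedged sketch of how that stale row could be inspected and released, not a fix confirmed in this thread. It assumes the Traccar service is stopped, the ./data/database* files are backed up, the program is run from the Traccar installation directory so the relative JDBC URL from the log resolves, and an H2 driver jar matching the database file version is on the classpath; the class name ReleaseChangelogLock is made up for the example.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Hypothetical helper, not part of Traccar: inspect and release a stale
// Liquibase changelog lock in the H2 database. Run only while the Traccar
// service is stopped and after backing up the ./data/database* files.
public class ReleaseChangelogLock {
    public static void main(String[] args) throws Exception {
        // URL and user taken from the HikariPool line in the log above;
        // the default password for the H2 user SA is empty.
        String url = "jdbc:h2:./data/database";
        try (Connection conn = DriverManager.getConnection(url, "SA", "");
             Statement st = conn.createStatement()) {

            // Show who is currently holding the Liquibase lock.
            try (ResultSet rs = st.executeQuery(
                    "SELECT ID, LOCKED, LOCKGRANTED, LOCKEDBY FROM DATABASECHANGELOGLOCK")) {
                while (rs.next()) {
                    System.out.printf("id=%d locked=%s granted=%s by=%s%n",
                            rs.getInt("ID"), rs.getBoolean("LOCKED"),
                            rs.getTimestamp("LOCKGRANTED"), rs.getString("LOCKEDBY"));
                }
            }

            // If the holder is a process that no longer exists (e.g. the killed
            // upgrade attempt), release the lock so the next start can re-run
            // the pending changesets.
            int updated = st.executeUpdate(
                    "UPDATE DATABASECHANGELOGLOCK SET LOCKED = FALSE, "
                    + "LOCKGRANTED = NULL, LOCKEDBY = NULL");
            System.out.println("Released lock rows: " + updated);
        }
    }
}

If the Liquibase CLI happens to be available, its release-locks command does the same thing; either way, starting the 5.8 service again afterwards lets it retry the pending changesets.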
Anton Tananaev (a year ago)
Andreas (a year ago)

Just to add this, because the other threads helped but did not solve the issue: I had to increase the memory of the VM running Traccar to 16 GB, otherwise the Java process ran OOM while adding geofenceids to tc_positions.

Anton Tananaev (a year ago)

I'm pretty sure Traccar doesn't use any memory for it. Probably the database engine does.

Andreas (a year ago)

@Anton

You're fully right, it's not Traccar itself (it happens when the H2 database engine, in my case, is called from the command line and the changes are executed manually). But it took quite some time to find the reason, because it is not in the log: the Java process is simply killed (OOM).
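
Since the OOM kill leaves nothing in the application log, a small pre-flight check before retrying the upgrade can make the failure mode visible: how much heap the JVM actually gets, how many rows tc_positions holds (the table the geofence changeset rewrites, per the post above), and which changeset Liquibase last recorded as applied. This is only an illustrative sketch, again assuming the JDBC URL and SA user from the log, the Traccar service stopped, and an H2 driver jar matching the file version on the classpath; UpgradePreflight is a made-up class name.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Hypothetical pre-flight check, not part of Traccar: report the heap this
// JVM was actually granted and the size of tc_positions, plus the last
// changeset Liquibase recorded before the interrupted upgrade.
public class UpgradePreflight {
    public static void main(String[] args) throws Exception {
        // The migration dies silently when the JVM is OOM-killed, so print
        // the effective maximum heap first (maxMemory() returns bytes).
        long maxHeapMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap available to this JVM: " + maxHeapMb + " MB");

        String url = "jdbc:h2:./data/database";  // same URL as in the log above
        try (Connection conn = DriverManager.getConnection(url, "SA", "");
             Statement st = conn.createStatement()) {

            // Rough measure of how much work the tc_positions changeset implies.
            try (ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM tc_positions")) {
                if (rs.next()) {
                    System.out.println("tc_positions rows: " + rs.getLong(1));
                }
            }

            // Last changeset Liquibase recorded as applied, to see where the
            // interrupted upgrade stopped.
            try (ResultSet rs = st.executeQuery(
                    "SELECT ID, DATEEXECUTED FROM DATABASECHANGELOG "
                    + "ORDER BY ORDEREXECUTED DESC LIMIT 1")) {
                if (rs.next()) {
                    System.out.println("Last applied changeset: " + rs.getString("ID")
                            + " at " + rs.getTimestamp("DATEEXECUTED"));
                }
            }
        }
    }
}

Knowing the row count up front makes it easier to judge whether the heap given to whatever process runs the migration (the Traccar service or a manual H2 session) is plausible before it gets killed again.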