Sounds reasonable, but it will require quite a few changes in the code. Let us know how it goes.
Any recommendations before starting to make changes?
Personally, I would tackle this at the database level, and have the DB insert happen as quickly as possible.
Various databases (for example, PostgreSQL and its various clones/extensions) support multiple sharding strategies, including the traditional DB approach of spreading tables across multiple tablespaces and physical volumes.
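As a rough illustration of what I mean by spreading data across volumes (this is only a sketch, assuming PostgreSQL 10+ declarative partitioning and psycopg2; the positions table, the fast_ssd_* tablespaces, and the connection string are made-up placeholders, not your schema):

```python
# Sketch only: range-partition a tracking table and place each partition in a
# tablespace that lives on a different physical volume. Assumes the
# tablespaces fast_ssd_1 and fast_ssd_2 were already created by a DBA.
import psycopg2

conn = psycopg2.connect("dbname=tracking user=tracker")  # hypothetical DSN
with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE positions (
            vehicle_id  bigint      NOT NULL,
            recorded_at timestamptz NOT NULL,
            lat         double precision,
            lon         double precision
        ) PARTITION BY RANGE (recorded_at);
    """)
    # Each half-year of data goes to its own tablespace / volume.
    cur.execute("""
        CREATE TABLE positions_2024h1 PARTITION OF positions
            FOR VALUES FROM ('2024-01-01') TO ('2024-07-01')
            TABLESPACE fast_ssd_1;
    """)
    cur.execute("""
        CREATE TABLE positions_2024h2 PARTITION OF positions
            FOR VALUES FROM ('2024-07-01') TO ('2025-01-01')
            TABLESPACE fast_ssd_2;
    """)
conn.close()
```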
Before making changes to the software, I would look for real metrics showing the DB actually getting hung up during inserts: the number of inserts per second over specific time periods, plus the DB type, tuning details, hardware specs, and system software and configuration details.
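On PostgreSQL, for example, one crude way to get a real inserts-per-second number is to sample pg_stat_user_tables over an interval. A minimal sketch, again assuming psycopg2, with the table name and connection string as placeholders:

```python
# Sample cumulative insert counts twice and report an approximate rate.
import time
import psycopg2

def inserted_rows(cur, table):
    # n_tup_ins counts rows inserted since the last statistics reset.
    cur.execute(
        "SELECT n_tup_ins FROM pg_stat_user_tables WHERE relname = %s",
        (table,),
    )
    row = cur.fetchone()
    return row[0] if row else 0

conn = psycopg2.connect("dbname=tracking user=tracker")  # hypothetical DSN
with conn.cursor() as cur:
    before = inserted_rows(cur, "positions")
    time.sleep(60)                      # sample over one minute
    after = inserted_rows(cur, "positions")
    print(f"~{(after - before) / 60:.1f} inserts/second")
conn.close()
```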
We have a legacy tracking system, and our DB performance issues are more on the query side than the insert side, especially for large multi-vehicle reports analyzing data over time. We have implemented various strategies to address those issues, all at the DB level.
I've been thinking about how to improve database performance and response times, and I came up with the idea of managing insertion queues and processing whatever is in them every 30 seconds, instead of inserting rows one by one. Batching should cut the per-insert overhead of separate round trips and commits. I'm open to your thoughts.
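Roughly what I have in mind, as a sketch only (Python with psycopg2 against PostgreSQL purely for illustration; the table, columns, connection string, and even the 30-second interval are placeholders):

```python
# Sketch of the queued-insert idea: producers enqueue rows in memory, and a
# background worker flushes the queue as one multi-row INSERT every 30 seconds.
import queue
import threading
import time

import psycopg2
from psycopg2.extras import execute_values

insert_queue = queue.Queue()
FLUSH_INTERVAL = 30  # seconds between batch flushes (placeholder value)

def flush_worker(dsn):
    conn = psycopg2.connect(dsn)
    while True:
        time.sleep(FLUSH_INTERVAL)
        # Drain everything that was queued since the last flush.
        batch = []
        try:
            while True:
                batch.append(insert_queue.get_nowait())
        except queue.Empty:
            pass
        if not batch:
            continue
        with conn, conn.cursor() as cur:
            # execute_values expands the batch into one multi-row INSERT,
            # so the round trip and commit are paid once per batch.
            execute_values(
                cur,
                "INSERT INTO positions (vehicle_id, recorded_at, lat, lon) "
                "VALUES %s",
                batch,
            )

# Start the background flusher; producers only touch the in-memory queue.
threading.Thread(
    target=flush_worker,
    args=("dbname=tracking user=tracker",),  # hypothetical connection string
    daemon=True,
).start()

# A producer just enqueues a tuple instead of hitting the DB directly.
insert_queue.put((42, "2024-01-01T12:00:00+00", 40.7128, -74.0060))
```

The obvious trade-offs are that data sits up to 30 seconds in memory before it becomes queryable, and anything still in the queue is lost if the process dies.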