There are plenty of search robots that simply ignore such directives.
But why hide it at all? Better to focus on good security for your server: besides search engines, there are malicious tools roaming the internet with port scans that will find your open server ports anyway.
We aim to be excluded from Google search results for privacy reasons, as requested by a client.
You can use a non-obvious subdomain, instruct robots not to crawl your site, and serve a blank page when the site is accessed by IP address, to deflect web scanners (see the sketch below).
And remember, your site sits behind a login screen; that should be sufficient.
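For the blank-page-on-IP-access part, here is a minimal sketch. It assumes Traccar's web interface is proxied behind nginx on its default port 8082 and that gps.example.com is the non-obvious subdomain; both the proxy setup and the hostname are assumptions for illustration, not details from this thread:

    # Catch-all server: requests that do not match the expected hostname
    # (for example, direct access by IP address) get no response at all.
    server {
        listen 80 default_server;
        server_name _;
        return 444;   # nginx-specific code: close the connection without replying
    }

    # The real site, reachable only via the non-obvious subdomain.
    server {
        listen 80;
        server_name gps.example.com;

        location / {
            proxy_pass http://127.0.0.1:8082;   # Traccar web interface (assumed default port)
            proxy_set_header Host $host;
        }
    }

Returning 444 (or an empty page) to hostname-less requests does not hide anything from a determined attacker, but it does keep the instance out of casual IP-range scans and out of crawlers that reach the server by address rather than by name.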
I'm working to hide my Traccar instance from search engines and have taken the following steps:

1. Added the following meta tag to the head section of the index.html file: <meta name="robots" content="noindex">
2. Created a robots.txt file in the root web directory with the following content:
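For reference, the conventional disallow-all form for this purpose is (sketch of the usual content, not necessarily the exact file):

    # Block all compliant crawlers from every path
    User-agent: *
    Disallow: /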
Are these steps sufficient? Moreover, will they interfere with Traccar's operation, or are they safe to implement?