That may not do it. It's entirely possible that the Tor2web service (whatever that onion.to site calls itself) is something of a honeypot, so that LE at least has some idea of how much volume is going through SR and the like. My point, though, is that nothing says the onion.to proxy has to obey robots.txt -- they could simply ignore it, crawl the entire darkweb themselves, and then do nothing to stop Google from indexing their cache. That sort of thing. Obeying robots.txt is a polite convention, not a divine decree.
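To illustrate the "polite convention" point: the whole mechanism is a client-side check the crawler performs on itself. A minimal sketch using Python's standard-library robotparser (the hostname here is just a made-up example, not the real proxy):

```python
from urllib.robotparser import RobotFileParser

# Simulate a site's robots.txt that disallows everything for all crawlers.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /",
])

# A polite crawler asks permission before fetching each URL...
allowed = rp.can_fetch("MyBot", "http://example.onion.to/somepage")
print(allowed)  # False -- robots.txt says no

# ...but nothing enforces this. An impolite crawler just skips the
# check entirely and fetches the page anyway; robots.txt has no
# technical teeth, it only works if the crawler chooses to consult it.
```

The Disallow rule only matters because the crawler volunteered to call `can_fetch()` first; a proxy that wants Google to index its cache simply never makes that call (or serves a permissive robots.txt of its own).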