They refer to the accuracy with which an attacker can determine that you are visiting a website they have fingerprinted. The size of the encrypted data is a common way this information leaks.

Imagine the SR forum and all of its threads. Say an attacker has fingerprinted the entire forum: they know the size of every page, and they know which pages link to each other. They can observe your traffic, but it is encrypted, so they cannot see the plaintext. They can, however, see that you accessed a page of a specific size, then another page of a specific size. If you follow a thread through from page one to page twenty, the attacker sees the size of each page you load, and then sees that the sequence of sizes matches the fingerprint they took of a thread on SR. From that they can infer that you are probably browsing that thread.

The attacker may also be able to determine the sizes of individual objects on each page you load, and see that those sizes correspond to the objects you would fetch while paging through the thread. I believe that with pipelining this becomes more difficult for the attacker, since their ability to identify the sizes of individual objects is taken away. Hm, a quick Google confirms my thoughts: https://blog.torproject.org/blog/experimental-defense-website-traffic-fingerprinting

Unfortunately, it is up to the person running the server to enable pipelining as well, and it is not always possible to do so. Sorry I am not able to provide more useful information; it has been a while since I researched traffic fingerprinting and possible ways to counter it.
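To make the size-sequence idea concrete, here is a minimal sketch of how an attacker could compare a sequence of observed encrypted response sizes against pre-recorded page-size fingerprints. This is not any real tool; the fingerprint data, names, and the 2% tolerance are made-up assumptions for illustration only.

```python
# Hypothetical sketch of a size-sequence fingerprint match.
# The fingerprint table and the 2% tolerance are illustrative
# assumptions, not measurements of any real site.

# Pre-recorded fingerprints: page sizes (bytes) for each thread,
# in the order a reader would load them.
FINGERPRINTS = {
    "thread-1234": [48200, 51050, 49900, 50320],
    "thread-5678": [12100, 11890, 12400, 12050],
}

def matches(observed, fingerprint, tolerance=0.02):
    """True if every observed size is within `tolerance` of the
    corresponding fingerprinted size (same sequence length)."""
    if len(observed) != len(fingerprint):
        return False
    return all(abs(o - f) <= f * tolerance
               for o, f in zip(observed, fingerprint))

def identify(observed):
    """Return the names of all fingerprints the observed size
    sequence is consistent with."""
    return [name for name, fp in FINGERPRINTS.items()
            if matches(observed, fp)]
```

For example, `identify([48500, 51000, 50000, 50500])` would implicate "thread-1234" even though every individual size differs slightly from the fingerprint, because each falls within the tolerance. Padding, or pipelining that merges several objects into one stream, works against exactly this kind of matching by blurring the per-page and per-object sizes the attacker relies on.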