Right now there are three anonymity networks that are given any real respect by the academic community. Of course Tor is the favorite of the academic world, but I2P and Freenet have both also had some professional analysis done on them. If we count remailer networks then we can also include Mixmaster and Mixminion.

Tor was primarily designed for accessing the clearnet anonymously. It separates routing nodes and clients, meaning that being a client and being a relay are not mutually inclusive. Tor primarily derives its anonymity by routing layer-encrypted client communications through three different nodes before they reach the destination server. For the most part, attackers cannot deanonymize a Tor user unless they can watch the user's traffic both enter and exit the network. By having a very large network of volunteer routing nodes, Tor significantly reduces the probability that any attacker will happen to control both the entry and exit nodes selected by the client. Tor also pads all packets to the same size, 512 bytes.

I2P was designed essentially exclusively for accessing Eepsites anonymously ('Eepsite' is the I2P jargon for a hidden service). In almost all cases, I2P clients are also I2P relays. I2P derives its anonymity in much the same way as Tor: outgoing communications are layer encrypted and routed through a series of user-selected relays prior to reaching their destination. I2P has some key differences from Tor though. For one, Tor uses bidirectional tunnels, meaning that forward and return traffic share the same circuit, whereas I2P uses unidirectional tunnels: forward traffic and return traffic are routed through different tunnels. There is an argument that this helps protect somewhat against internal website fingerprinting, but I doubt it does much. I2P developers also argue that it helps protect against internal timing attacks, but I believe it more likely increases the vulnerability to such attacks. One thing that possibly does protect I2P users from internal timing attacks is the fact that virtually all nodes are relays, and tunnels by default can be variable length. This means that an internal attacker watching traffic originate at me and end at the destination may not be able to tell that the traffic really originated at me; it is possible that I merely forwarded it on for someone else. Tor almost always uses three nodes and hardly any clients are also routing nodes, so it does not get this potential protection of I2P. Another distinguishing feature of I2P is that pretty much all of its users can be enumerated in little time, again because virtually all nodes are routing nodes. This makes I2P particularly weak to intersection attacks, where an attacker monitors who is currently online and looks for correlations between this set of users and the set of hidden services / pseudonyms currently making use of the network. Tor is not particularly weak to this sort of attack, because the entire list of Tor users is not so readily available. I2P also pads all packets to the same size, I believe 1 KB.

Freenet is the most different of the three anonymity networks. Whereas Tor focuses on allowing users to access the clearnet anonymously, and I2P focuses on allowing users to access Eepsites anonymously, Freenet focuses on allowing users to publish and access files while maintaining plausible deniability. Freenet also focuses very strongly on being resistant to censorship.
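To make the layered-encryption idea behind Tor and I2P concrete, here is a minimal Python sketch. It is not the real Tor or I2P wire format: real circuits negotiate a key with each relay via key exchange, use fixed-size cells, and so on. The relay names and the use of the third-party cryptography package are my own simplifications, purely for illustration.

    # Minimal sketch of layered ("onion") encryption as used conceptually by Tor and I2P.
    # NOT the real wire format: real cells are fixed size, and keys are negotiated
    # with each relay rather than generated in one place like this.
    from cryptography.fernet import Fernet

    # Pretend the client has already agreed on a symmetric key with each of three relays.
    relay_keys = {name: Fernet.generate_key() for name in ("guard", "middle", "exit")}

    def wrap(payload: bytes) -> bytes:
        """Client side: encrypt for the exit first, then middle, then guard."""
        for name in ("exit", "middle", "guard"):
            payload = Fernet(relay_keys[name]).encrypt(payload)
        return payload

    def unwrap(onion: bytes) -> bytes:
        """Each relay peels exactly one layer; only the exit sees the plaintext."""
        for name in ("guard", "middle", "exit"):
            onion = Fernet(relay_keys[name]).decrypt(onion)
        return onion

    assert unwrap(wrap(b"GET / HTTP/1.1")) == b"GET / HTTP/1.1"

The point of the layering is simply that no single relay sees both who sent the traffic and where it is ultimately going.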
I2P and Tor allow clients to anonymize their access to websites, and websites to hide their location from clients. Freenet, on the other hand, allows publishers to insert content and clients to retrieve it anonymously. Tor hidden services are essentially always hosted on single servers; I2P allows multihomed Eepsites, but frequently they are hosted on single servers as well. Freenet content, by contrast, is always redundantly hosted across the Freenet network of volunteer nodes. Freenet is not really for hosting a website like SR, with php and so on; it is more like an anonymous file sharing network. Of course, custom client-side software can be written to let people use Freenet for various things; for example, there are Freenet email packages and Freenet forum packages. As with I2P, virtually all Freenet clients are also Freenet relays. What is unique to Freenet is that essentially all Freenet clients also store data for the entire network: in addition to sharing their bandwidth with the network, Freenet clients also share their hard drive space. Freenet gains its anonymity entirely from the fact that all clients are also relays, and data can travel through paths of vastly different lengths prior to arriving at its destination. This means that if a client requests a file through their neighboring nodes, the neighboring nodes cannot easily determine if the client submitted the request themselves or if they are just forwarding it on for somebody else. Likewise, the fact that essentially all clients donate drive space to the network, and hold arbitrary files, means that if a content publisher adds a file to the network through their neighboring nodes, the neighboring nodes have a difficult time determining if the publisher originally published the content or if they are just forwarding it on for someone else. Freenet uses two layers of encryption: one layer for the content and one layer for the links. An encrypted file on Freenet looks the same at all positions on the network, but as file chunks are transferred throughout the network they are sent through dynamically encrypted links. Freenet also swarms files over many nodes, and pads all file fragments to the same size.

Mixmaster and Mixminion are remailer mix networks. Remailer networks are used for sending anonymous email messages. They usually layer encrypt forward messages and pad all of them to the same size (they must be padded to the same size at every hop as well). They derive their anonymity by mixing messages, i.e. time delaying messages until the mix gathers many messages, then reordering the messages prior to sending them out. This technique offers an extremely high degree of anonymity compared to low latency networks like Tor and I2P. Even if an attacker watches the links between all mix nodes, the sender maintains their anonymity for some volume of messages sent. In fact, a single good mix on a message's path is usually all that is required to maintain anonymity. The academic literature regarding mix networks is vast and I cannot possibly hope to summarize it all here, but there are many different ways of constructing mixes as well as of trying to attack them. For example, one sort of mix is the threshold mix. In a threshold mixing scheme, the mix node gathers messages until a threshold is met (say, 100 messages), prior to reordering the messages and forwarding them on. One attack against a threshold mix is called a flushing attack, described after the sketch below.
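Here is a minimal toy sketch of the threshold mix just described, to make the gather / reorder / flush cycle concrete. The class name and the tiny threshold are mine, chosen only for illustration:

    import random

    class ThresholdMix:
        """Toy threshold mix: buffer messages until the threshold is reached,
        then shuffle the whole batch and flush it, hiding arrival order."""
        def __init__(self, threshold: int = 100):
            self.threshold = threshold
            self.pool = []

        def receive(self, message) -> list:
            self.pool.append(message)
            if len(self.pool) < self.threshold:
                return []                      # keep gathering a crowd
            batch, self.pool = self.pool, []   # threshold met: flush everything
            random.shuffle(batch)              # reorder so input order != output order
            return batch

    mix = ThresholdMix(threshold=5)
    for i in range(5):
        out = mix.receive(f"msg-{i}")
    print(out)   # all five messages come out together, in a random order

A real mix would of course also strip and replace headers, re-encrypt, and keep every message padded to a uniform size before flushing, as noted above.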
In a flushing attack, the attacker first waits for the target's message to enter the mix, at which point they flood the mix with their own messages up to the threshold. This forces the mix to send the targeted message before a large enough crowd has gathered to hide it in, because the attacker's own messages can be filtered out by the attacker. An even more dangerous version of the flushing attack is the n-1 attack. Here the attacker is able to delay the target's message, perhaps because they are the ISP of the target or they are the first node utilized by the target. The attacker flushes the subsequent mix entirely, then releases the target's message, then flushes the subsequent mix again. This leaves the target with a crowd size of 1, because the attacker manipulated the mix into only mixing the target's message with filterable attacker messages. The sketch below walks through the n-1 variant against the toy mix from above.
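A small simulation of the n-1 flushing attack, reusing the ThresholdMix class from the previous sketch (again, the names and the tiny threshold are purely illustrative):

    # Toy n-1 flushing attack against the ThresholdMix sketched earlier.
    # (Assumes the ThresholdMix class from the sketch above is already defined.)
    # The attacker (1) flushes the mix so its pool is empty, (2) lets the delayed
    # target message in, (3) fills the rest of the batch with its own traffic.
    # Every non-attacker message in the resulting batch must then be the target's.
    mix = ThresholdMix(threshold=5)

    for i in range(5):                      # step 1: flush with attacker messages
        mix.receive(("attacker", i))

    mix.receive(("target", "hello"))        # step 2: release the delayed target message

    batch = []
    while not batch:                        # step 3: flood up to the threshold again
        batch = mix.receive(("attacker", "filler"))

    suspects = [m for m in batch if m[0] != "attacker"]
    print(suspects)                         # [('target', 'hello')] -> crowd size of 1

The reordering done by the mix accomplishes nothing here, because the only message the attacker cannot account for is the target's.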
Anyway, now that I have explained the very basics of the techniques currently being utilized, I want to brainstorm a bit about what the ideal anonymity network would entail. First of all we would need to think about what our goals are:

1. Clients and routers mutually inclusive (as in I2P and Freenet) or not necessarily (as in Tor)? Networks where all clients are routers have advantages and disadvantages. The most obvious advantage is that they scale fantastically well and can deal with large volumes of bandwidth. The most obvious disadvantage is that not all users are able to route traffic for others. In the case of a network like Tor or I2P, it is very beneficial to have a large network, because the more nodes there are the smaller the percentage of nodes the attacker can control, and therefore the less likely it is that the attacker can monitor a user's traffic at the network edges. In the case of mix networks it is actually a disadvantage to have a very large network: the smaller the network is, the more concentrated the messages traveling through it are, and therefore the larger the crowd sizes that messages can be mixed with. Another risk of all clients being routers is the possible added susceptibility to intersection attacks; we would want to think of a way to make it so the entire client list is not trivially enumerated, or else we would open up this risk.

2. For access to the clearnet or for hidden services? Allowing access to the clearnet has advantages and disadvantages. Two of the primary disadvantages are that, for one, the exit node utilized is always going to be capable of spying on the user's traffic (although it could be user encrypted), and for two, the exit nodes are always going to be abused. The advantages of allowing exiting to the clearnet are that a lot more people will use the network, a lot more traffic will be routed through the network, and a much more diverse group of users will use the network.

3. For services on which content is put, or for content upon which services are built? In the case of I2P and Tor, anonymity is provided to servers and content can be put on the servers. In the case of Freenet, deniability is provided for access to raw data, and any services that utilize Freenet need to be custom designed to work with the raw data retrieved through or published to Freenet. Both of these strategies have advantages and disadvantages. In the case of systems such as Tor and I2P the advantage is that people are already accustomed to running services on top of servers, and there is already an enormous library of php scripts and such that can be run on anonymized servers. It takes a lot more work to provide a service on top of Freenet. On the other hand, in the case of Freenet the security is arguably increased, as people can move away from massively bloated all-purpose browsers toward task-specific bare-bones programs. Additionally, in the case of I2P and Tor, content is vulnerable to hackers who can penetrate the server it is hosted on; in the case of Freenet, the security of content depends mostly on the security of Freenet itself, which is likely to be more secure than most newbies can ever hope to make their own servers. Freenet is also essentially immune to content being censored or DDoSed, because it is spread across potentially thousands of different servers around the world.

4. High or low latency? Maybe variable latency?? Of course there are advantages and disadvantages to high and low latency networks. Low latency networks are snappy, like Tor and I2P. You can use them for browsing the internet and, although they feel a bit sluggish, they offer more or less the same experience as surfing the internet without an anonymity network at all. The down side to low latency networks is that they are severely crippled in the anonymity department. In the case of Tor you are only safe so long as your attacker cannot watch a single packet originate at you and arrive at your destination. I2P might be a bit more resistant to this because all clients are also routers and tunnels can be variable length, but it is still weak to a variety of attacks. On the other hand, mix networks are slow. The current mix networks can take hours and hours to deliver a single message. They are also unreliable because of this: as nodes go down before messages reach them, the messages are dropped and never arrive. I2P and Tor are so fast that reliability is much higher; your traffic goes from the first node you select all the way to your destination site in a matter of seconds. Still, mix networks don't necessarily need to be slow; the only reason they are slow is so that each mix can gather an adequate number of messages prior to reordering and firing. On a very heavily used mix network, very small time delays would be required to gather an adequate crowd size (see the back-of-the-envelope numbers below). On the plus side, mix networks offer significant anonymity: even if an attacker can observe all links between mix nodes and the internal state of all but one mix node, some level of anonymity is still maintained.
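As promised above, a back-of-the-envelope look at how batch delay scales with traffic on a threshold mix; the threshold and the traffic rates are invented purely for illustration:

    # Rough illustration of why mix delay shrinks as traffic grows: with a threshold
    # mix, the expected wait to fill a batch is roughly threshold / arrival_rate.
    # The traffic rates below are made-up numbers, not measurements of any real network.
    threshold = 100                     # messages gathered before each flush

    for msgs_per_second in (0.01, 1, 50, 1000):
        expected_wait = threshold / msgs_per_second
        print(f"{msgs_per_second:>7} msg/s -> ~{expected_wait:,.1f} s to fill a batch")

    # At 0.01 msg/s (a quiet remailer network) one batch takes ~10,000 s, i.e. hours per hop.
    # At 1,000 msg/s (a heavily used network) the same crowd of 100 forms in ~0.1 s.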