It is more than just a desire to reduce bandwidth, although that does come into play as well. The primary reason users need to be able to tell which of their friends have seen a payload is so they know who to respond to when they reply to the message in the payload. Picture it with e-mail and a mail archive: Alice sends a tagged, encrypted message to a mail archive. This is the payload. Now she wants Bob and Carol to see the message and be able to respond to it. So she sends Bob an e-mail with the tag of the message and a key to decrypt it, and she tells Bob that Carol can also see the message. Alice sends the same e-mail to Carol, but lets her know that Bob can see the message. The e-mails Alice sends to Bob and Carol can be seen as the metadata packets (a rough sketch of such a packet is at the end of this post). Now, if Bob knows who Carol is, then when he responds to the message he knows to tell Carol about it. If Alice never told Bob that Carol knew about the message, Bob could only respond and tell Alice about it. But in that case there is no group communication taking place; rather, Alice holds a conversation with Bob and independently holds a conversation with Carol about the same topic. So group communication requires that Bob knows Carol is part of the group communicating. There are cryptographic tricks we can do to make this more secure: for example, we don't want Bob to learn anything about Carol if he doesn't already know who she is, and we don't even want Bob to know how many people Alice pointed to the message unless he knows all of the people she pointed to it. So the most important reason for clients to know who has seen a message is so they know who to respond to when they reply to it. Saving bandwidth by not resending a ton of metadata packets is just an added advantage.

I am not sure I understand this question. If there are duplicated metadata objects (each one a bit different, even if they point to the same message), that will increase the number of searches that need to be performed as well. If Alice points Bob to a message he already knows about, Bob still needs to search for that metadata object, because he doesn't know what it points to until he downloads it, and downloading it requires him to search for it. He is capable of searching for it because he has a shared secret search string with Alice, but until he actually downloads it he has no idea what he is downloading.

Here is an abstract from one of the papers that looks like a suitable candidate (though I still need to read the one-way indexing paper, which might actually be better in that it could provide censorship resistance as well): www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA456185 Note that we would need to use the extension in section 5.2 of the paper, which eliminates the globally public dictionary, since the keywords searched for are random, secret, and long.

Sure.

Sort of, but the CPU time required for a client to decrypt a search is actually quite large. Decryption of search results can take many minutes and a lot of processing power on the client's end. So not searching for things you already have is probably a significant reduction in required client CPU time, as well as certainly a decrease in storage in the PSS database (in this case).

Yes, storage concerns me the most as well.
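To make the Alice/Bob/Carol example concrete, here is a rough sketch (in Python) of what one of those metadata packets and the shared-secret search string could look like. To be clear, the field names, the JSON encoding, and the HMAC-based tag derivation are just assumptions I am making for illustration, not anything specified yet; in the real system the packet would also be encrypted to the recipient before it ever touches a server.

```python
import os, json, hmac, hashlib

def derive_search_tag(shared_secret: bytes, context: bytes) -> str:
    """The secret search string Alice and Bob share for this metadata object.
    (Assumption: an HMAC of a per-message context under their pairwise secret.)"""
    return hmac.new(shared_secret, context, hashlib.sha256).hexdigest()

def build_metadata_packet(payload_tag: bytes, payload_key: bytes, group: list) -> bytes:
    """The 'e-mail' Alice sends one recipient: where the payload is, how to
    decrypt it, and who else was pointed at it (so the recipient knows who
    to notify when they respond). Encryption to the recipient is omitted."""
    return json.dumps({
        "payload_tag": payload_tag.hex(),
        "payload_key": payload_key.hex(),
        "group_members": group,
    }).encode()

# Alice -> Bob: Bob's copy names Carol; Carol's copy would name Bob instead.
payload_tag, payload_key = os.urandom(16), os.urandom(32)
alice_bob_secret = os.urandom(32)   # pre-shared pairwise secret (assumed)
packet_for_bob = build_metadata_packet(payload_tag, payload_key, ["Alice", "Carol"])
search_tag_for_bob = derive_search_tag(alice_bob_secret, payload_tag)
```

The only point of the sketch is that the packet carries three things: where the payload lives, how to decrypt it, and who else was pointed at it, which is exactly what lets Bob know to notify Carol when he makes a response.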
Pretty much: we need to have several PSS servers sharing the load to gain the benefit of decentralization (i.e., taking down a single server has no effect on the overall system). But it seems very wasteful to have, say, two servers with 4TB of storage capacity each that need to be exact mirrors of each other. Adding more servers doesn't really increase storage capacity in this case, it only increases redundancy. At the same time, having different messages go to different servers will very likely hurt the anonymity of the system and make it weak to traffic analysis. Just because PIR/PSS/OWI/whatever makes it so clients can download messages without the server knowing which message they downloaded does not make it perfectly immune to traffic analysis. It sets a strong foundation upon which we need to build up our traffic analysis resistance.

I think we should certainly require proof of work to send messages, to at least discourage spam and flooding of the PIR-like servers. It is trivial to implement hash-based proof of work; I could make such a system in half an hour, tops (a rough sketch is at the end of this post). I don't much care for the idea of CAPTCHAs: they are easy to circumvent and make things much less pleasant for legitimate users.

For clients, searching is very CPU intensive. It will likely take several minutes (depending on the CPU, of course) for a client to decrypt its obtained search results. I don't think searching is actually very CPU intensive for the servers, but let me read the entire document through again before I make that claim. The PIR for Pynchon Gate is extremely CPU intensive for the servers but not very much so for the clients. Most PIR systems I have read about are very CPU intensive for the servers, and there tends to be a direct trade-off between CPU and bandwidth. For example, the easiest PIR is that of BitMessage, where everybody gets everything: no significant processing needs to be done by the server, but it needs to send enormous amounts of data to every client. Pynchon Gate PIR is the opposite, in that the servers need to do enormous amounts of computation but only need to send small amounts of data to every client (the toy sketch at the end of this post illustrates that end of the trade-off). But in the PSS papers and other PIR-like systems I have read about, it seems that most of the load is put on the clients, which is actually fantastic.

Do you think you are a better programmer / know more about security than the BitMessage people? Do you know not to directly encrypt payloads with RSA? Are you pretty good at math? Do you know C programming? The thing is, yes, it is best if only top experts program stuff like this. But to help people we only need to be better than the "competition", and right now the competition for systems like this is not that good. The only systems similar to this are Syndie, BitMessage, and Frost (and I suppose Dissent as well, but I don't think anyone is using that yet). Syndie doesn't include a network, only an interface that can be used with any network:storage pair. BitMessage is horrible and obviously not made by people who know what they are doing. Frost I know little about, but it essentially relies on the anonymity of Freenet, which we can improve upon. Dissent I don't know enough about to comment on, but it is quite likely the best of the bunch, considering it comes from academics instead of hobbyists. We also have solutions like Tor with a PHP forum, or I2P and the same, but that is so far below the aspirations of this project that it can hardly even be compared.
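Since I said hash-based proof of work is trivial, here is roughly the kind of thing I mean, sketched in Python. The difficulty value and the message format are placeholders I picked for illustration, not a spec:

```python
import hashlib, os

DIFFICULTY_BITS = 20  # server-chosen knob: higher means more work per message

def prove(message: bytes, bits: int = DIFFICULTY_BITS) -> int:
    """Sender: find a nonce so that SHA-256(message || nonce) has `bits` leading zero bits."""
    target = 1 << (256 - bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(message: bytes, nonce: int, bits: int = DIFFICULTY_BITS) -> bool:
    """Server: one hash to check before accepting an upload."""
    digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - bits))

msg = os.urandom(64)      # stand-in for an uploaded payload or metadata packet
nonce = prove(msg)
assert verify(msg, nonce)
```

The sender burns CPU finding the nonce while the server only does one hash to check it, so flooding gets expensive for a spammer and stays cheap for the servers.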
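And to illustrate the "servers do lots of computation but send little data" end of the trade-off, here is a toy two-server XOR PIR, which as I understand it is the general family of multi-server PIR that Pynchon Gate builds on. This is not the actual Pynchon Gate protocol, and the block size and database layout are made up; note it also shows why the exact-mirroring I complained about above comes up at all, since both servers must hold identical copies:

```python
import os, secrets
from functools import reduce

BLOCK_SIZE, N_BLOCKS = 1024, 256
database = [os.urandom(BLOCK_SIZE) for _ in range(N_BLOCKS)]  # both servers hold this

def xor_blocks(blocks):
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                  blocks, bytes(BLOCK_SIZE))

def server_answer(db, query_bits):
    """Server work: XOR together every block whose query bit is set (about half the DB)."""
    return xor_blocks(block for block, bit in zip(db, query_bits) if bit)

def client_query(wanted_index):
    """Client: random bits to server 1, the same bits with the wanted index flipped
    to server 2; neither server alone learns which block was requested."""
    q1 = [secrets.randbits(1) for _ in range(N_BLOCKS)]
    q2 = list(q1)
    q2[wanted_index] ^= 1
    return q1, q2

q1, q2 = client_query(42)
answer1, answer2 = server_answer(database, q1), server_answer(database, q2)
recovered = bytes(x ^ y for x, y in zip(answer1, answer2))
assert recovered == database[42]
```

Each server scans roughly half its database per query but returns only one block's worth of data, which is exactly the CPU-heavy, bandwidth-light end of the spectrum, with BitMessage's everybody-gets-everything at the other end.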
We need to make this because the solution of Tor + a PHP forum, or I2P and the same, is simply not anywhere near good enough. Even if Tor is programmed perfectly and the PHP forum has no flaws, by its very design it is just not going to be resistant enough to strong attackers. Pretty much what I am trying to say is that even if you are not the top security expert in the world, you can still contribute to this and still look over the code that gets written. People's lives will not depend on you alone; they will depend on everybody who contributes to the project. The more people who contribute to writing code and to auditing code, the better it is going to be. At first I tried doing this all by myself, and two years later I realized I had bitten off more than I could chew and other people became involved. Things improved! Not all of the people involved are experts; I consider myself only a hobbyist as well, but I still implemented a cryptographic packet format, and the expert people I asked to audit my code said it looked correct. That was the first cryptographic system I implemented, and it was hard work, but by sticking to the whitepaper and researching the shit out of things, I was able to produce a system that true experts in the field confirmed was implemented correctly.