Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - kmfkewm

46
Security / Re: Tor update warning: Tor might not protect you from NSA
« on: September 21, 2013, 10:06 am »
Do we need to scrap Tor and put the efforts of the privacy/anonymity community behind a completely different system?

Yes.


Quote
where do we go from here

Mixnet + PIR

47
Security / Tor update warning: Tor might not protect you from NSA
« on: September 21, 2013, 08:46 am »
added to front page of Tor site

Quote
Update: it may be that Tor can't protect you against NSA's large-scale Internet surveillance, and it may be that no deployed anonymous communication tool can. We're also working on educational materials to explain the issues. Stay tuned!

48
Security / Re: Dissent: accountable anonymous group communication
« on: September 21, 2013, 08:25 am »
One attack works against any sort of network with any sort of pseudonymity. Pretty much the only way to protect against it is for all posters to be anonymous in the true sense, meaning without a name. It simply consists of the attacker (who, again, can see the links between all nodes on the network) taking note of all of the nodes that are on the network at the time the attacker receives a message from a user. Because messages are time delayed, the attacker cannot simply say that the sender must be one of the clients currently on the network, but they can guess with high probability that the message was sent by a client they observed sending a message in the past week or month, and realistically probably in the last couple of hours or days.

If the attacker is Alice, and she assumes that Bob's messages are delayed by no more than two days, then after she gets a message from Bob she can cross out every client she did not see send a message in the previous two days. Ideally all clients send the same number of messages every day, but realistically clients will not be online every single day of the year. Maybe Carol went on vacation for two weeks and didn't connect her client at all, but Bob kept sending messages during this period. Alice can bound the maximum delay of Bob's messages simply by seeing how long it takes to get a response to one of her own messages: if she gets a response from Bob in one day, she knows he did not delay that message for more than one day, and so he must be one of the nodes that sent a message in the previous day.
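A minimal sketch of this pruning process (hypothetical clients and days, not real data): each time Alice receives a message, she intersects her suspect set with the set of clients seen sending within the assumed maximum delay window.

```python
def intersection_attack(send_days, receipt_days, max_delay=2):
    # send_days: client -> set of days on which that client was seen sending
    # receipt_days: days on which Alice received a message from the pseudonym
    # The real sender must have been seen sending within max_delay days
    # before each receipt, so everyone else gets crossed out.
    suspects = set(send_days)
    for d in receipt_days:
        active = {c for c, days in send_days.items()
                  if any(d - max_delay <= s <= d for s in days)}
        suspects &= active
    return suspects
```

Here Carol, who was on vacation while messages from the pseudonym kept arriving, is crossed out after the first receipt she missed, while clients whose uptime matches the pseudonym's activity stay in the suspect set.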

This sort of attack even works against otherwise information-theoretically secure systems such as DC-nets. A network that maintains ideal conditions can protect against it, but in the real world networks don't maintain ideal conditions. Also, a very powerful attacker can force a network out of ideal conditions, by cutting internet access to entire countries if need be. Oh, Bob still wrote me messages while I had cut internet to Iran? Bob is probably not in Iran after all!

The only way to really fully protect against this attack is to not have pseudonyms or other identifying characteristics, including writeprint. The attack only works if the attacker can see at least two different snapshots of the network, associated with at least two actions of a target; if a second action cannot be tied to the target, the attack cannot work. Other techniques could involve covert channel networks that hide from Alice the very fact that Bob's IP is in communication with an anonymity network, even if Alice can monitor the traffic of the entire internet. Note that once again the weakness is a result of variance: the set of online clients changed between action one and action two, and this led to an attack on anonymity. Invariance between actions, i.e. the network having exactly the same clients online when both messages were sent, would prevent this attack.

Thankfully, in practice this attack can be made to take a long time to carry out, provided the network has a substantial number of users and bootstraps itself into its initial anonymous state; but it depends on the uptime of the clients using the network. Ideally clients would send dummy traffic to the network, without posting on it, for a period of several days to a month before actually using the network to send messages. This lets them blend in with other new clients who join in the same time frame, and with people already using the network who decide to make new identities. Maybe there will be a hundred or so people in your anonymity set against this attack, and it could take quite a while for enough of them to go offline at the wrong times for you to be singled out. But slowly, over time, your initial anonymity set will likely be chipped away, until in the end only you are left in it. Even anonymity systems proven to be as anonymous as any system can possibly be are weak to this attack if pseudonyms are used.

49
Security / Re: Dissent: accountable anonymous group communication
« on: September 21, 2013, 07:55 am »
Seems like a lot of talk without any substance to it. I think everyone should look into this more to understand all the features of this program, how it is different from Tor, and how those differences make it superior to Tor. But "we guarantee anonymity" is the same thing many failed sites have claimed. I think we should be as skeptical as possible, although the idea is very nice.

Also the "bandwidth share" sentence was quite disconcerting.

Feel free to do some research

http://freehaven.net/anonbib/cache/alpha-mixing:pet2006.pdf <--- done

http://www.abditum.com/pynchon/sassaman-wpes2005.pdf <---- not doing, talks about the idea of using PIR + protecting from intersection attacks

http://www.esat.kuleuven.ac.be/~cdiaz/papers/cdiaz_inetsec.pdf <---- discusses mixes in general

http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA456185 <---- need to do

http://research.microsoft.com/en-us/um/people/gdane/papers/sphinx-eprint.pdf <---- done

http://spar.isi.jhu.edu/~mgreen/ZerocoinOakland.pdf <--- Talks about Zerocoin, the first distributed blind mix. Might integrate, others have coded an alpha version

http://www.cl.cam.ac.uk/~rja14/Papers/bear-lion.pdf <--- block cipher for Sphinx, done

that should get you started


Why it is superior to Tor:

A. Tor is vulnerable to end to end correlation attacks

Client <-> Client ISP <-> Entry Node ISP <-> Entry Node <-> Entry Node ISP <-> Middle Node ISP <-> Middle Node <-> Middle Node ISP <-> Exit Node ISP <-> Exit Node <-> Exit Node ISP <-> Destination ISP <-> Destination

If the attacker has control over any member of {Client ISP, Entry Node ISP, Entry Node} AND any member of {Exit Node ISP, Exit Node, Destination ISP, Destination}, then Tor offers the client no anonymity. Tor tries to protect against this primarily by having a very large network of nodes, which makes it hard to own both the client's entry node and exit node. As more nodes are added to the Tor network, a fully internal attack becomes increasingly difficult, though it remains entirely possible. The big thing is that it is easy to monitor the destination site at its ISP, or even to seize the destination server and monitor from that point. Even hidden services have relatively crappy anonymity.

So the Tor security model is little more than: if you have a good entry node you might not be fucked but still could be, and if you have a bad entry node you probably will be fucked but might not be. Attackers like the NSA monitor traffic at the ISP level, quite intensely apparently, and they could very well be capable of watching your traffic at the Client ISP and the Destination ISP, which lets them link you to your destination. Attackers with a lot of nodes on the network still stand a significant chance of owning both your entry node and your exit node, especially if they run high bandwidth nodes. It is estimated that there are already groups of node owners who could deanonymize huge parts of the Tor network in little time; right now the hope is that they are not malicious.

Adding more nodes to the Tor network, from diverse groups of node operators, can continue to protect against internal attackers nearly indefinitely: as more nodes are added by good people, it becomes less likely for bad people to have nodes on your circuit, unless bad people also keep adding nodes. And even if bad people keep adding nodes, it becomes less likely that any given bad actor will have nodes on your circuit.
Anyway, despite doing a somewhat decent job of protecting against internal attackers, there is a hard limit to the protection Tor can provide against external attackers. There are only so many networks making up the internet, and most of them exchange traffic through a much, much smaller number of exchange points. Monitoring a few hundred key points on the internet is enough to monitor most of its traffic, and no matter how many Tor nodes are added to the network it doesn't matter: once all of the Client/Entry/Middle/Exit/Destination ISPs are being monitored by an attacker, they can break Tor in 100% of cases. They don't even need to monitor all of these ISPs, or all of the IXs they exchange traffic through, in order to deanonymize a huge subset of Tor users: for one, entry guards rotate over time, so eventually you will use a bad entry, or a good entry with a bad ISP; and for two, if the attacker externally monitors your entry and your destination, they can still deanonymize YOU even if they cannot deanonymize the entire network. In addition, keep in mind that a tremendous amount of internet traffic crosses the US, and most international traffic passes through multiple countries. The diagram I drew above isn't even high enough resolution; in reality there are many points between these nodes, and a compromise ANYWHERE between client and entry, plus a compromise ANYWHERE between exit and destination, is enough to deanonymize a user.

High latency mix networks are not nearly as vulnerable to this sort of attack. The goal of anonymity is to remove any variation between one user and their communications and another user and theirs. Any variation can be turned into an anonymity attack. Tor takes a few measures to add uniformity to communications. First of all, due to the encryption of onion routing, all streams are, at the content level, 'the same' in that they all are indistinguishable from random noise at each hop. If you send the message "Example" down the following path:

Client -> Node 1 -> Node 2 -> Node 3 -> Destination

an attacker who owns Node 1 and Node 3 has no trouble at all linking the message from Node 1 to Node 3, even though there is Node 2 in the middle. This is because the content of the message is exactly the same, and it sticks out from other messages. If onion encryption is used, the message "Example" might look like "I9aiPS1" at the first node, "9!@jU9A" at the second node, and "AHZ12(a" at the third node. Now the attacker at node 1 and 3 cannot so easily link the message, because it looks completely different at node 3 versus node 1. It blends in with all the other messages node 3 is handling, in that all of them look like completely random noise, and there is an extraordinarily low probability that the same pattern of noise has been seen anywhere else ever in the existence of the entire universe.
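A toy illustration of the layering (a SHA-256 counter-mode XOR keystream stands in for real per-hop encryption here; this is NOT secure crypto, just a demonstration that the same message looks like unrelated noise at each hop):

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # toy keystream: SHA-256 in counter mode -- illustration only,
    # not a real cipher
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def onion_wrap(msg: bytes, hop_keys) -> bytes:
    # the client applies one layer of encryption per hop
    for k in reversed(hop_keys):
        msg = xor(msg, keystream(k, len(msg)))
    return msg

def peel(msg: bytes, key: bytes) -> bytes:
    # each hop strips exactly one layer, so the bytes it forwards
    # look nothing like the bytes it received
    return xor(msg, keystream(key, len(msg)))
```

Wrapping "Example" with three hop keys and peeling a layer at each node yields three mutually unrelated-looking ciphertexts, with the plaintext only emerging after the last peel.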

Tor also pads all information into packets that are exactly the same size, 512 bytes. If not for this, we would have problems like this:

Client -> Node 1 -> Node 2 -> Node 3 -> destination

sending the stream

[1byte][6bytes][3bytes][10bytes]

4 packets with different size characteristics. In reality we could expect many more packets with vast size differences, if not for the padding of Tor. The attacker at Node 1 and Node 3 can easily link the stream, because they see that there is a correlation in the packet size of the stream they are handling, and chances are it is pretty damn unique at that. With Tor it is like this

Client -> Node 1 -> Node 2 -> Node 3 -> destination

sending the stream

[512bytes][512bytes][512bytes][512bytes]

Each packet has the same actual payload data as before, but now it is padded such that all packets are the same size. Now the attacker at node 1 and 3 cannot use this characteristic to link the streams, because all the packets are the same size, and all streams all the nodes handle have packets of the same size so variation has been removed and replaced with uniformity.
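A sketch of the padding idea (hypothetical framing with a 2-byte length prefix; the real Tor cell format differs):

```python
CELL_SIZE = 512

def to_cells(payload: bytes):
    # split the payload into chunks, prefix each with its true length,
    # and zero-pad every chunk to exactly CELL_SIZE bytes
    body = CELL_SIZE - 2
    cells = []
    for i in range(0, len(payload), body):
        chunk = payload[i:i + body]
        cells.append(len(chunk).to_bytes(2, "big")
                     + chunk + b"\x00" * (body - len(chunk)))
    return cells

def from_cells(cells):
    # the length prefix lets the receiver discard the padding
    return b"".join(c[2:2 + int.from_bytes(c[:2], "big")] for c in cells)
```

On the wire every cell is the same size regardless of how much real data it carries, so an observer cannot use packet sizes to tell streams apart.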

So it is good that Tor does these two things; they are indeed improvements over many VPN technologies, and they have in fact made Tor harder to attack at a technical level (just look at fingerprinting of VPN traffic, which approaches 100% accuracy, versus fingerprinting of Tor traffic, which took a long time to get past 60% accuracy). But Tor still has many problems left over, and most of them are inherent to its design. There is still the possibility of variation in interpacket timing.

imagine each . equals a small unit of time, like 1ms

Client -> Node 1 -> Node 2 -> Node 3 -> destination

[packet].......[packet]..[packet].......[packet].........[packet]...[packet]

When the client sends a stream of packets down Tor, they are sent as they are constructed, not at fixed time intervals. And even if they were sent at fixed intervals, any node could delay individual packets however it likes before forwarding them; a malicious node can do exactly that to speed up this type of correlation attack. So we are right back where we started: the attacker at nodes 1 and 3 can link the stream by analyzing interpacket arrival times and looking at the packet spacing. Tor does not protect against this attack at all, short of natural network jitter, which is not enough.
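A rough sketch of how the attacker at nodes 1 and 3 could score a match: compute the correlation between the interpacket gap sequences observed at the two points (the gap values below are made up for illustration):

```python
def pearson(xs, ys):
    # Pearson correlation of two equal-length interpacket-gap sequences;
    # near 1.0 means the timing patterns almost certainly match
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

The gaps seen at node 3 are the gaps seen at node 1 plus a little network jitter, so the correlation stays near 1.0 for the matching stream and drops for unrelated ones.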

Mix networks protect from this attack because the nodes completely wipe interpacket timing characteristics between nodes.

Client -> Mix 1 -> Mix 2 -> Mix 3 -> destination

If Mix 1 sends a message to Mix 2 with the following packet properties (though on a mix network the entire message can be one very large padded packet):

[packet].......[packet]..[packet].......[packet].........[packet]...[packet]

it doesn't matter, because mix 2 waits until it has the entire message before sending it forward; it doesn't forward packets as it gets them. This removes all of the interpacket timing information that mix 1 inserted into the message.

[packet].......[packet]..[packet].......[packet].........[packet]...[packet]

Becomes

[packet][packet][packet][packet][packet][packet], prior to mix 2 sending it to mix 3. The timing fingerprint is sanitized.
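A sketch of the difference between the two forwarding disciplines (the timings are invented for illustration):

```python
def forward_like_onion_router(arrivals):
    # low-latency relay: forward each packet as it arrives, plus a tiny
    # processing delay -- the interpacket gaps survive to the next hop
    return [t + 0.01 for t in arrivals]

def forward_like_mix(arrivals, gap=0.001):
    # mix: buffer until the whole message has arrived, then emit all
    # packets back-to-back -- the original gaps are destroyed
    start = max(arrivals)
    return [start + i * gap for i in range(len(arrivals))]

def gaps(times):
    # interpacket gap sequence (rounded to dodge float noise)
    return [round(b - a, 6) for a, b in zip(times, times[1:])]
```

The relay's output preserves the input's gap fingerprint exactly, while the mix's output has one uniform gap no matter what pattern came in.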

Another issue with Tor is that packet counting is not protected from at all.

Client -> Node 1 -> Node 2 -> Node 3 -> destination

[p1][p2][p3]

The client sends a total of 3 packets to the destination. The attacker at nodes 1 and 3 can use this to link the stream back to the client: if node 1 processes a 3 packet stream, and node 3 shortly after processes a 3 packet stream, there is a high probability the streams are related, while node 3 knows that a 5 packet stream it processes cannot be the 3 packet stream node 1 just handled. Tor only minimally protects against this by padding individual packets to the same size; it does not pad entire streams to the same number of packets. Mix networks protect against this because every message is a single very large packet, so all 'streams' are one packet.
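The packet-counting filter is almost trivial to sketch (stream ids and counts here are hypothetical):

```python
def counting_attack(entry_streams, exit_count):
    # entry_streams: stream id -> number of fixed-size packets seen at node 1
    # With no stream-level padding, only streams with a matching packet
    # count can be the stream node 3 just handled.
    return {sid for sid, n in entry_streams.items() if n == exit_count}
```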

These are just two of many examples of how higher latency mix networks are superior to Tor. I was going to make a list so I started with A, but after typing this all out I realize that A by itself should be enough to show you why mix networks are superior to Tor. I could go on to B, C, D, and E, but for right now I am tired of typing.

To summarize point A:

Tor adds packet size invariance (by padding packets) and payload content invariance (by onion encryption), but leaves stream size variance and interpacket arrival time variance. Any message variance leads to correlation attacks! Good mix networks remove stream size variance by making every message a single fixed-size packet, and remove timing variance the same way, as well as by introducing enough delay at each hop for mixes to strip interpacket timing patterns and impose interpacket timing invariance between mixes (even assuming multiple packets are used). A good mix network removes *ALL* message variance at *EACH* mix. So it doesn't matter if:

Client -> Bad Node -> Good Node -> Bad Node -> Destination

happens, because the good node in the middle makes the message totally uniform and removes any possible fingerprint that could be identified between the first and last bad node. On a good mix network, every single message either has exactly the same characteristics, or essentially the same characteristics in that all messages are totally randomized at each hop. Anything that is not invariant between hops is randomized at each hop. This alone massively increases the anonymity of a good mix network over Tor, but many other things come into play as well. I was not talking out my ass when I said the attacks on mix networks are almost totally different from the attacks on Tor. This is because a good mix network fixes all the attacks on Tor that can be fixed, and also should take measures to protect from more advanced attacks that are in the realm of mix networks. The mix network threat model is totally beyond the Tor threat model, in that it fixes the problems of Tor that can be fixed, and moves on to the problems of mix networks.

And as for the problems of mix networks, let's start by saying that the threat model for mix networks assumes the attacker can view *ALL* links between mix nodes. Keep in mind that the threat model for Tor says an attacker who can see all links between all Tor nodes can 100% compromise Tor. So the mix network's assumed attacker is the same attacker that can deanonymize Tor traffic in real time.

One of the problems of older mix networks (all of the currently implemented remailer networks, in and of themselves anyway) is the long term statistical disclosure intersection attack. Assume the attacker can see all links between all mix nodes. Alice communicates with Bob over a mix network with 1,000 members. Because the attacker can see every link, she can count how many messages every client sends and receives. In the simplest example the attacker is Alice herself: Alice sends 1,000 messages to Bob, then watches to see which nodes receive 1,000 messages within the window of time she knows it will take for all of her messages to be delivered. Any node that doesn't receive at least 1,000 messages can be ruled out as Bob. Over time, and not much of it, Alice can identify Bob this way.
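A sketch of the receive-count filter Alice applies (client names and counts are made up):

```python
def statistical_disclosure(received, flood_size):
    # received: client -> messages received during the delivery window
    # Any client that received fewer messages than Alice flooded toward
    # Bob cannot be Bob.
    return {c for c, n in received.items() if n >= flood_size}
```

In practice the windows overlap with background traffic, so the attack is run repeatedly and the surviving sets intersected, but the core filter is just this.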

Note that, as with nearly all anonymity attacks (and many security attacks), the culprit here is variance; in this case, variance in the number of messages received. Not all nodes receive the same number of messages, and Alice can use this to her advantage in order to find Bob. This attack also allows third parties to link Alice to Bob even when Alice and Bob are both honest, although that takes a bit more math to explain. Anyway, the first solution presented for this problem was to use PIR.

Now Alice sends 1,000 messages to Bob's PIR server. She has no way to directly send 1,000 messages to Bob. Instead of the network pushing 1,000 messages to Bob (as happens in the old designs), Bob now pulls messages from a rendezvous node (via PIR to protect his anonymity). The key difference is that Bob will only download 50 messages every day, regardless of how many messages he has waiting for him. And every other member on the network will only download 50 messages per day as well. They can communicate with their PIR server over the mix network if they want to tell it to delete certain messages, or to prioritize certain messages over others, but no matter what they will only download 50 messages per day. Now it doesn't matter if Alice spams Bob with 1,000 messages, because in the best case Bob can just delete them and never end up downloading them at all, and in the worst case Bob will download 50 of them a day for 20 days, and end up obtaining 1,000 messages over a 20 day period, the same as every other user of the network.
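A sketch of the fixed-rate pull (the 50-per-day figure comes from the example above; the dummy-fetch padding is an assumption of this sketch):

```python
DAILY_CAP = 50

def daily_pull(pending):
    # Bob fetches exactly DAILY_CAP messages per day no matter how many
    # are waiting for him; short days are padded with dummy fetches so an
    # observer sees an identical download pattern for every client
    batch = pending[:DAILY_CAP]
    del pending[:DAILY_CAP]
    return batch + ["<dummy fetch>"] * (DAILY_CAP - len(batch))
```

Alice's 1,000-message flood now just sits in the mailbox and drains at the same 50-per-day rate as everyone else's traffic, so the flood produces no observable spike at Bob.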

This protects against long term statistical disclosure attacks by third parties (linking Alice to Bob) and by a malicious Alice trying to locate Bob. However, it doesn't prevent a malicious Bob from trying to locate a non-malicious Alice. Say Bob notices he has obtained 25 messages from Alice over a period of ten days. Remember that Bob can also see the entire network's links, so Bob can run the intersection attack himself: Alice must be one of the nodes that sent at least 25 messages over this period. And again, given enough time, Bob can locate Alice with this attack. The only way to protect against it is for each client to send an invariant (yes, once again, variance is the culprit) number of messages per period of time. This can be accomplished with random dummy messages sent when legitimate messages are not. Essentially, Alice has 50 messages she can send in a 24 hour period, and every so often either a legitimate message from her outgoing queue is sent, or, if the queue is empty, a dummy message is sent in its place. Now, in every 24 hour period, Alice and every other node on the network sends exactly 50 messages; the variance is erased, and Bob cannot carry out his attack.
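A sketch of the invariant sending rate (again using the hypothetical 50-per-day quota):

```python
QUOTA = 50

def next_outgoing(queue):
    # every send slot emits exactly one message: a real one if queued,
    # otherwise an indistinguishable dummy
    if queue:
        return queue.pop(0)
    return "<dummy>"

def daily_traffic(queue):
    # observed from outside, every client emits exactly QUOTA messages
    # per day, whether it has anything to say or not
    return [next_outgoing(queue) for _ in range(QUOTA)]
```

Since dummies are padded and encrypted like real messages, an external observer (including a malicious Bob) sees every client sending the same 50 messages per day, and the sender-side variance disappears.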

Both of these techniques are modern, and neither has been implemented in a real network yet (although Usenet + mix network, i.e. everybody-gets-everything PIR, has been used to protect against a malicious Alice or a malicious third party carrying out this attack; however, everybody-gets-everything PIR does not scale). Currently all mix networks are weak to this type of attack in themselves, though layering everybody-gets-everything PIR on top via Usenet or shared mail archives protects against the first variant. Pynchon Gate was the first whitepaper to describe a scalable way to prevent this attack from locating Bob, or a third party from linking Alice to Bob. I cannot name the specific paper that first suggested using dummy traffic to prevent Bob from locating Alice, but probably one of the early papers on dummy messages mentions something about it.


50
Security / Re: completely removing tor from computer
« on: September 21, 2013, 05:38 am »
The only safe and simple choice is to completely format the hard-drive.

You may remove the downloaded files easily, but you could forget download history that shows you downloaded Tor, or unrar/unzip history that shows you extracted Tor, or browser history that shows you accessed the Tor download page, or documents history that shows you executed Tor, or software firewall permissions for Tor, or many other types of evidence you are unaware of.

Tails or another live OS is something you should look into.

format != wipe

51
Philosophy, Economics and Justice / Re: Let's define Freedom
« on: September 21, 2013, 04:16 am »
Also I would like to point out that none of us live in a free society, although some are more free than others in various regards.

The fact that drugs need to be approved by the FDA means that we are not free. If somebody wants to release a drug that has not been approved, it is their right to do so, because they should be free, and being free means that they have the ability to not be forced to not do things that they want to do. They want to release a drug that has not been approved by the FDA; being prevented from doing this is a violation of their freedom. They are not violating the freedom of anybody by releasing such a drug, because they do not make anybody take the drug.

The fact that we are not free to use drugs means that our freedom has been violated. If somebody wants to use a drug, it is their right to do so, because they should be free, and being free means that they have the ability to not be forced to not do things that they want to do. They want to use a drug, being prevented from using the drug is a violation of their freedom. They are not violating the freedom of anybody by using a drug.

The fact that we are taxed means we are not free, for the same reasons as above. The fact that certain information is forbidden means that we are not free, for the same reasons as above. The fact that different countries have variance in their age of consent laws is a clear indication that not all countries are free, because either some countries have legalized child abuse (meaning the children are not free in those countries), or some countries have decided to enslave people who do not abuse children (meaning the people they enslave are not free).

In fact, we are very far from free. We are so far from free that we cannot be considered as anything other than slaves. And it is about time that we have a slave revolt, and kill those who have enslaved us, and take control of the world ourselves, and make sure that we globally protect the freedom of all people. This is why I highly suggest totalibertarianism, because under totalibertarianism freedom is totally protected with an iron fist, and anybody who goes against freedom is considered a dissident and crushed like a bug. If you support freedom, please feel free to call yourself a totalibertarian and to help overthrow the slave masters of the world.

52
Philosophy, Economics and Justice / Re: Let's define Freedom
« on: September 21, 2013, 04:07 am »
Freedom is the following things:

A. The ability to not be forced by others to do things that you do not want to do, and to not be forced by others to not do things that you do want to do
B. The ability to exercise total control over that which is yours, to the exclusion of all others

Your freedom ends where another's begins. So it is not immoral for people to stop you from raping somebody, because someone else who is free has the ability to not be forced to do things they do not want to do, and if you try to violate their freedom then it is okay for people to violate yours, in that it is okay for people to force you to not do things that you do want to do if you doing those things would prevent another person from not being forced to do things they don't want to do.

The ability to have guns clearly falls under freedom, because having a gun may be something you want to do, and it doesn't violate the freedom of anybody else, provided you do not bring a gun onto the property of somebody who does not want you to (doing this would violate their ability to exercise total control over that which is theirs).

Freedom is not hard to define.

53
Philosophy, Economics and Justice / Re: little rant
« on: September 21, 2013, 12:49 am »
Quote
But why did Otto Warburg get a nobelprize for finding the cause of cancer 84 years ago, if the cure is not found already?

Are you so dumbed down you think the cure is not found when we have the cause?

i dont know but you seems kinda lost in the matrix.

If you blast off on a spaceship into the sun, you will die by burning to death. How do you propose to stop that from happening? The cause of death is known.

54
Security / Re: Opening files
« on: September 21, 2013, 12:44 am »
64 bit is more secure

55
Security / Re: VPN -> TOR -> VPN.....Am i creating a correlation attack?
« on: September 21, 2013, 12:27 am »
yes

56
Security / Re: Dissent: accountable anonymous group communication
« on: September 21, 2013, 12:26 am »
Quote
Group discussion is one-to-any communication at its core.   This means that payload encryption is decorative at best (if you're really going to give everyone access to decrypt it, why are you encrypting it?)  Verifying integrity, authenticity, and probably non-repudability through crypto are all great features for group discussion, but the payload itself doesn't support effective encryption.

Group discussion isn't always one-to-any, but public discussion is. There are plenty of private forums with screened memberbases; they might want to encrypt communications so that outsiders cannot see them while members can. Some forums have 80 members, others have 600. In the past there have even been groups that encrypted all of their messages with GPG using a shared group key; this would do essentially the same thing, but automatically and better. Also, in some instances three or four people might want to talk together; they constitute a group, but they don't want outsiders to see their communications. Group OTR would be an example of something that solves this problem, and I think it is a problem worth solving and something that people would need.

The only case where the encryption becomes decorative is for public messages that anybody can see, but even there it serves a purpose. Freenet message level encryption can be thought of as largely decorative as well: keys to decrypt content on Freenet are widely available in many cases, and in many cases nodes can identify the encrypted content passing through them and decrypt it with the publicly available keys. But the message level encryption still serves the purpose of plausible deniability: even though a node could go get the keys and decrypt a message, that doesn't mean it actually did, just as Freenet nodes could decrypt a lot of the content they host without ever having fetched the publicly available decryption keys. So in this case public message encryption would protect the PIR servers by providing them some plausible deniability. Say the government knows CP has been uploaded to the network, seizes a PIR node, and says they found CP on it: "Look, it was right here in plaintext! You must have known about it!" But if it is encrypted and the same thing happens, the person operating the PIR node can say, "Sure, the CP is there, but I am not involved with CP, so I never looked for the keys to decrypt it." Encrypting publicly viewable messages might not buy a lot, but given how trivial it is to do, and the fact that it seems to buy a little, I think it is worth it.

Quote
Without some method of trivially usable read/write storage shared by all clients, many easy things become hard.  Namely, we can't share WoT easily.

Trivially usable read/write storage isn't anonymous or secure.

Quote
Because of PIR's strong anonymity protections against tying client actions to content, there's no effective mechanism to edit existing content.  Which means that whenever kmfkewm's WoT changes, he has to automatically post a full version with the update.  For me to inherit WoT properly, I have to periodically perform a search for the latest version of the WoT for each of the trusted members of my own WoT.   The more CPU-intensive each search is, the bigger of a deal this is (probably to the client more than the server, from the sounds of it).

I don't see a way around having to redownload my entire WoT in any scheme, short of me keeping track of who has my WoT and which version of it, and only sending them newly added people.

Quote
Implementing WoT requires a read/write location for the storage and public-visibility of individual Identities' Web of Trust databases.   If I trust kfmkewm, and extend some level of trust to whomever he trusts, I need a way to view his WoT database.   And tomorrow, when he adds astor to it, or just increases his level of trust for astor, I need to be able to see the change to his WoT to properly calculate my own inherited value of trust for astor.

Without some method of trivially usable read/write storage shared by all clients (that we totally take for granted in traditional architectures), some easy things become hard.  And I think WoT is one of those things.

Tons and tons of easy things become hard when you make something that is cryptographically secure. The solution is not to compromise security for ease of use and implementation. Nothing with trivial read/write storage is secure enough. The closest thing you get is hidden services with PHP scripts on them, similar to this forum, but in obtaining that trivial read/write you lose all of the benefits of mixing and all of the benefits of PIR. It would be great if a system as trivial as Tor existed with strong security and anonymity properties, but whether that is even possible is an open research question in academia, and most people think it isn't. There is no known system that is as easy to use as Tor hidden services and comes anywhere near the level of anonymity that can be provided by mixing and PIR. Tor is a BB gun; mixing and PIR are assault rifles. To put things into context, the attacks against Tor are almost entirely different from the attacks against mix networks, because mix networks have solved almost all of the attacks against Tor and then some. The only attacks that work against Tor and also against mix networks are those inherent to all anonymity networks and entirely impossible to defend against, such as long-term pseudonym/IP intersection attacks carried out by a GPA, which also work against DC-nets.

Quote
The problem with the second option is that it prevents me from downloading messages from unknown identities (so I can't choose to view messages from people I don't know).   It forces a quandary: "Either download everything then throw away stuff from people you distrust, or download only messages from people you know".  You either end up with a killfile, or end up living in a fishbowl.

People you know can point you to messages from people you don't know.

Quote
PIR is perfect for one-to-one messaging, and you can extend that to one-to-many with clever mechanisms, but I think for one-to-any messaging, it may not be as good of a fit.

PIR is one of the only highly anonymous ways to receive data long term.
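
For anyone unfamiliar with why PIR is so strong, the core trick fits in a few lines. This is a toy two-server XOR PIR (not the keyword-based scheme from the paper I cite, and the database here is made up): the client recovers record i while neither server, on its own, learns which record was asked for, assuming the servers hold identical copies and do not collude:

```python
# Toy 2-server XOR PIR: each query vector is uniformly random on its own,
# so a single server learns nothing about which record the client wants.
import secrets

DB = [b"msgA", b"msgB", b"msgC", b"msgD"]  # identical copy on both servers

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def server_answer(db, query):
    """XOR together every record whose bit is set in the query vector."""
    acc = bytes(len(db[0]))
    for rec, bit in zip(db, query):
        if bit:
            acc = xor_bytes(acc, rec)
    return acc

def client_fetch(i, n):
    q1 = [secrets.randbelow(2) for _ in range(n)]  # random subset -> server 1
    q2 = list(q1)
    q2[i] ^= 1                                     # same subset, bit i flipped -> server 2
    a1 = server_answer(DB, q1)
    a2 = server_answer(DB, q2)
    return xor_bytes(a1, a2)  # everything cancels except record i

assert client_fetch(2, len(DB)) == b"msgC"
```

Real computational-PIR schemes manage this with a single server at much higher CPU cost, which is where the "each search is CPU-intensive" tradeoff discussed above comes from.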

BTW there are updateable PIR schemes:

Private Keyword-Based Push and Pull with Applications to Anonymous Communication

Quote
We propose a new keyword-based Private Information Retrieval (PIR) model that allows private modification of the database from which information is requested. In our model, the database is distributed over n servers, any one of which can act as a transparent interface for clients. We present protocols that support operations for accessing data, focusing on privately appending labelled records to the database (push) and privately retrieving the next unseen record appended under a given label (pull). The communication complexity between the client and servers is independent of the number of records in the database (or more generally, the number of previous push and pull operations) and of the number of servers. Our scheme also supports access control oblivious to the database servers by implicitly including a public key in each push, so that only the party holding the private key can retrieve the record via pull. To our knowledge, this is the first system that achieves the following properties: private database modification, private retrieval of multiple records with the same keyword, and oblivious access control. We also provide a number of extensions to our protocols and, as a demonstrative application, an unlinkable anonymous communication service using them.

Another option would be to let the forum operate like a normal public forum for public messages: plaintext messages uploaded through the mixnet to the PIR servers, indexed by things such as the subforum they are in. I could post a message to the security forum just by uploading it (through the mix net) to the PIR servers, indexed with a tag like ForumA::Security-Subforum-day. Someone could then obtain all messages in the subforum by searching for everything tagged ForumA::Security-Subforum-day, where day is the current day (or the last day since they got messages). This makes things easier for public forums, but it has a few problems. The biggest problem I see is that it makes spamming much easier, because nothing stops anybody from using the tag, and people won't know who a message is from until they download it. In the case of public messaging the encryption is somewhat decorative anyway, but the PIR servers do lose some level of deniability.

It also makes it harder to differentiate messages with the same tag. We don't always want to get every message tagged ForumA::Security-Subforum-day; we might only want NEW messages with that tag, since we already downloaded half of them last cycle. Keep in mind that there is a limit to the number of messages a client can download per period of time, and that this limit is required to protect from a class of intersection attacks. If a client cannot download all ForumA::Security-Subforum-day messages in one go, how does it get the remaining messages the next time it tries? It seems it will end up re-fetching the old messages it already has, and will miss all the messages for that time period that it cannot fit into one of its buffers.

Solutions like tagging messages ForumA::Security-Subforum-day-a, ForumA::Security-Subforum-day-b, etc. would work, but they introduce some anonymity attacks when there are collisions, and since this is a high-latency system there will be collisions (e.g., Alice sends the first message of the day as ForumA::Security-Subforum-day-a and it mixes for an hour before reaching the PIR server; meanwhile Bob notices there are no new messages for the day, so he posts under the same tag ForumA::Security-Subforum-day-a). One solution would be for the PIR servers to assign the suffix themselves, but since there are many servers receiving messages at different times, what happens when server 1 labels its first message of the day message A while another server labels a different message A?
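
One possible way around the collision problem, just a sketch and not from any of the papers above: derive the per-message suffix from a hash of the content rather than a sequence letter or a server-assigned label, so concurrent posters can never collide and a reader can skip suffixes it already downloaded. The tag format here is hypothetical:

```python
# Collision-free tags: two clients posting on the same day, with no
# coordination and no server-side counter, always get distinct tags
# (barring a hash collision).
import hashlib

def message_tag(forum, day, content):
    suffix = hashlib.sha256(content).hexdigest()[:8]
    return f"{forum}-{day}-{suffix}"

a = message_tag("ForumA::Security-Subforum", "2013-09-21", b"Alice's post")
b = message_tag("ForumA::Security-Subforum", "2013-09-21", b"Bob's post")
assert a != b  # same day, same subforum, no collision
```

The tradeoff is that the suffix is unpredictable, so a reader would still need some way to enumerate the day's suffixes, which is roughly what the push/pull scheme below provides.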

Perhaps something like the push-pull PIR above would make the most sense. Pretty much no matter what it needs to be PIR based though, because nothing else is anonymous enough.

57
Quote
They are not supernotes if you or a bank could tell they are fake.  Supernotes are higher quality than real bills.

See, I took these two sentences to be related. I didn't realize you said

1. They are not supernotes if you or a bank could tell they are fake.
2. Supernotes have a higher-quality image than real bills.

I thought you said

They are not supernotes if you or a bank could tell they are fake, [because] supernotes are higher quality than real bills (after all, if they have sharper images than real bills, then a bank can tell they are fake by blowing them up, right?)

58
A high-quality counterfeit would generally be taken to mean a counterfeit that is passable as the real thing, with a higher-quality counterfeit being more passable than a lower-quality one, and requiring more skilled experts and tools to differentiate it from real notes. Thus it seems nonsensical to say a counterfeit can be higher quality than a real note, since anything that differentiates it from a real note makes it identifiable, and thus not a perfect-quality counterfeit. I didn't realize you were talking about the quality of the images printed onto the notes; I thought you meant the quality of the counterfeit notes.

Quote
The US government says that the image quality on supernotes is of higher quality and with more detail than real US notes.  Please check out the link that I posted and do your own research.

I do agree that it was not the best idea to make the counterfeit notes have a higher quality image.  I would assume that it was not done on purpose.  I would assume that they just accidentally etched the plates with higher detail than the US government did.

You didn't say the image quality is higher; you said that supernotes are higher quality than real notes. I took this to mean that a real note is more likely to be detected as counterfeit than a supernote is. I didn't know you were talking about image quality, because you never said the image is higher quality, you said supernotes are higher quality than real notes.

59
A high-quality counterfeit would generally be taken to mean a counterfeit that is passable as the real thing, with a higher-quality counterfeit being more passable than a lower-quality one, and requiring more skilled experts and tools to differentiate it from real notes. Thus it seems nonsensical to say a counterfeit can be higher quality than a real note, since anything that differentiates it from a real note makes it identifiable, and thus not a perfect-quality counterfeit. I didn't realize you were talking about the quality of the images printed onto the notes; I thought you meant the quality of the counterfeit notes.

60
There is no such thing as a counterfeit that is higher quality than a real note. The goal of counterfeit money is to pass as real money; if it is "better" than real money, it can be told apart. Man-made diamonds are less flawed than real diamonds, and this lets them be identified by skilled jewelers. Supernotes can be identified as well, but only by top experts after very careful examination. It is very unlikely this guy has supernotes if his bank identified them.

Quote
You are wrong.  Supernotes have higher detail.  The CIA blows them up very large at the details of the notes with magnifying glasses to tell them apart from legitimate notes.

quality != detail
sharper != higher quality
