Silk Road forums

Discussion => Security => Topic started by: bitfool on May 28, 2013, 01:48 am

Title: Hidden services security doesn't look too good.
Post by: bitfool on May 28, 2013, 01:48 am
www.ieee-security.org/TC/SP2013/papers/4977a080.pdf
Title: Re: Hidden services security doesn't look too good.
Post by: Jack N Hoff on May 28, 2013, 02:01 am
Please copy it and paste it here.
Title: Re: Hidden services security doesn't look too good.
Post by: JackieChan on May 28, 2013, 02:11 am
Tell me more about security as you link a PDF to be read through Tor  :P
Title: Re: Hidden services security doesn't look too good.
Post by: bitfool on May 28, 2013, 02:12 am
You are pretty retarded, huh.
Title: Re: Hidden services security doesn't look too good.
Post by: bitfool on May 28, 2013, 02:14 am
IX. CONCLUSION

We have analyzed the security properties of Tor hidden services and shown that attacks to deanonymize hidden services at a large scale are practically possible with only a moderate amount of resources. We have demonstrated that collecting the descriptors of all Tor hidden services is possible in approximately 2 days by spending less than USD 100 in Amazon EC2 resources. Running one or more guard nodes then allows an attacker to correlate hidden services to IP addresses using a primitive traffic analysis attack. Furthermore, we have shown that attackers can impact the availability and sample the popularity of arbitrary hidden services not under their control by selectively becoming their hidden service directories.

To address these vulnerabilities we have proposed countermeasures. These prevent hidden service directories from learning the content of any of the descriptors unless they also know their corresponding onion address, and significantly increase the resources required to selectively become a hidden service directory for a targeted hidden service. However, note that the above suggestions are nothing more than stop-gap measures. We believe that the problems we have shown are grave enough to warrant a careful redesign of Tor’s hidden services.
Title: Re: Hidden services security doesn't look too good.
Post by: Jack N Hoff on May 28, 2013, 02:49 am
IX. CONCLUSION

We have analyzed the security properties of Tor hidden services and shown that attacks to deanonymize hidden services at a large scale are practically possible with only a moderate amount of resources. We have demonstrated that collecting the descriptors of all Tor hidden services is possible in approximately 2 days by spending less than USD 100 in Amazon EC2 resources. Running one or more guard nodes then allows an attacker to correlate hidden services to IP addresses using a primitive traffic analysis attack. Furthermore, we have shown that attackers can impact the availability and sample the popularity of arbitrary hidden services not under their control by selectively becoming their hidden service directories.

To address these vulnerabilities we have proposed countermeasures. These prevent hidden service directories from learning the content of any of the descriptors unless they also know their corresponding onion address, and significantly increase the resources required to selectively become a hidden service directory for a targeted hidden service. However, note that the above suggestions are nothing more than stop-gap measures. We believe that the problems we have shown are grave enough to warrant a careful redesign of Tor’s hidden services.

So either this is bullshit, law enforcement doesn't have $100, or law enforcement is downright ignorant.
Title: Re: Hidden services security doesn't look too good.
Post by: bitfool on May 28, 2013, 02:55 am
I read most of the paper, and I get the impression that in order to carry out the attack you need to understand pretty well how the whole protocol works, so your third option sounds likely.

Also, now I see that part of this information was already posted a few days ago. Eh... here:

http://dkn255hz262ypmii.onion/index.php?topic=161391.0

Title: Re: Hidden services security doesn't look too good.
Post by: bitfool on May 28, 2013, 03:01 am
Furthermore, Silk Road has been DoSed, and that may well be the technique that was used.
Title: Re: Hidden services security doesn't look too good.
Post by: Miah on May 28, 2013, 03:51 am
You guys do realize that an actual cop doesn't have to be knowledgeable in all this, right? They have our lovely tax dollars to hire a super-brain IT guy. Actually, in this day and age I'd expect some LE to do just IT, like the cops that work more on the admin side. Police stations have computers, right? Well, who's in charge of setting up the networks for that and maintaining them? I'm sure that person is more knowledgeable than your average foot cop. There are databases in place and their own private network, which has to be secure, because if it's not, what's to stop me from breaking in remotely and start fucking with info on the system? The point is, there's someone that set that system in place at the police station, whether it be a contractor or someone on the staff whose sole responsibility it would be.

With that being said, I'm still highly skeptical about what that article proposes. We can sit here all day and theorycraft, but what counts is real-world results. Like Jack N Hoff said: why haven't they done it yet? Now some conspiracy theorists will say, 'Well, they're collecting as much info as possible on everyone, and then the big bust comes.' Again, with that theory I'm highly skeptical. There are bigger issues in the world and things the US government has to deal with.

Like terrorism, or the so-called war on terror, a depression, China's rising economic power, a debt in the trillions of dollars, and a lack of innovation and manufacturing. Looking at that brief list I outlined, if you were the US president and you had a budget, where would you put money? Would you really say, OK guys, we need to get that fucking SR offline? I don't think it even makes it onto that list. If SR was ever compromised, it wouldn't be in the way that article proposed... well, it would and it wouldn't. In my eyes the only way this whole thing would unravel is if DPR (or the conglomerates that represent DPR) ever slips up and makes one silly mistake. Like cashing out too many bitcoins at once or leaving a little trail. You'd be surprised at what one piece of info can lead you to, but again, I'm highly skeptical of that too, because they're obviously pros, just from the fact they've been going this long.

So that leads us back to the main question: if they can do it, why don't they?

Title: Re: Hidden services security doesn't look too good.
Post by: bitfool on May 28, 2013, 04:03 am
>So that leads us back to the main question: if they can do it, why don't they?

The real question is: if it can be done, do you think it won't happen?

>With that being said, I'm still highly skeptical about what that article proposes.

But do you actually understand what's being proposed (and what has actually been done, at least according to the authors)?



Title: Re: Hidden services security doesn't look too good.
Post by: StExo on May 28, 2013, 04:12 am
The problem is that this is another theoretical attack; if it were possible, they sure as hell wouldn't publish it without taking advantage of it. Many reports have shown Tor and hidden services are vulnerable to traffic analysis and the like; the problem is proving them in an environment which isn't controlled the way a simulation is, and what's more, LEA then have to prove it in a court of law, which is a whole new ballgame.
Title: Re: Hidden services security doesn't look too good.
Post by: bitfool on May 28, 2013, 04:34 am
>The problem is this is another theoretical attack - if it was possible they sure as hell wouldn't publish it without taking advantage of it.

Why? The authors belong to academia. They are not necessarily black hat hackers.

And let me mention again that Silk Road was DoSed not long ago.

Anyway, in case it needs to be spelled out, let me spell it out:

That something hasn't happened yet is not proof that it can't happen in the future. If anything, that remark and line of reasoning are pretty dumb.
Title: Re: Hidden services security doesn't look too good.
Post by: Rastaman Vibration on May 28, 2013, 04:54 am
The real question is how we can defend against our IP being compromised. Does anyone know if a VPN or VM will protect you?
Title: Re: Hidden services security doesn't look too good.
Post by: Miah on May 28, 2013, 05:05 am
Quote
That something hasn't happened yet is not proof that it can't happen in the future. If anything, that remark and line of reasoning are pretty dumb.

Nobody mentioned that or brought it up, so I'm not sure where you get that from.

Quote
And let me mention again that Silk Road was DoSed not long ago.

I think you are getting two things confused. A DDoS attack is about bringing the site down: sending it so many requests that it crashes. It has nothing to do with deanonymizing people that use Tor. Two totally separate things. Any idiot can launch a DDoS attack, as proven by Anonymous, whose attacks are pretty much all DDoS.

Quote
Anyway, in case it needs to be spelled out, let me spell it out:

Just sounds like fear mongering to me. I'm pretty sure everyone who buys from SR is aware of the risks, but comparatively speaking I'd go out on a limb and say it's still safer than buying on the streets. Wouldn't you agree?

Title: Re: Hidden services security doesn't look too good.
Post by: bitfool on May 28, 2013, 06:11 am
>Nobody mentioned that or brought it up, so I'm not sure where you get that from.

From you perhaps? "If they can do it, why don't they?"

>I think you are getting two things confused.

I think you haven't read the paper, nor the other thread I linked.

>Sending it so many requests that it crashes.

Wrong. In this case a denial of service is carried out by taking control of the HS directory servers associated with a given site. Say, Silk Road.

> Any idiot can launch a Ddos attack

No, not against hidden services.

>It has nothing to do with deanonymizing people that use Tor.

That is another section of the paper. And they talk about finding the location of hidden services. Not sure if they can use the same attack against clients.

Title: Re: Hidden services security doesn't look too good.
Post by: Miah on May 28, 2013, 06:20 am
You're taking what I'm saying out of context.

Quote
>Nobody mentioned that or brought it up, so I'm not sure where you get that from.

From you perhaps? "If they can do it, why don't they?"

I was responding to your statement that just because it hasn't been done in the past doesn't mean it won't be done in the future. I don't think anyone disagrees with that.

Quote
Wrong. In this case a denial of service is carried out by taking control of the HS directory servers associated with a given site. Say, Silk Road.

You're wrong.

Quote
That is another section of the paper. And they talk about finding the location of hidden services. Not sure if they can use the same attack against clients.

Exactly, a paper.
Title: Re: Hidden services security doesn't look too good.
Post by: bitfool on May 28, 2013, 06:39 am
>> In this case a denial of service is carried out by taking control of the HS directory servers associated with a given site. Say, Silk Road.

>You're wrong.

Now I also know that you don't have a clue about the subject at hand :-)

>Exactly, a paper.

Yes. A paper that is way over your head, and which probably describes the mechanism used for the DoS attacks against Silk Road that happened last month.
Title: Re: Hidden services security doesn't look too good.
Post by: Miah on May 28, 2013, 06:48 am
Ok, well, I'll just wait till someone more knowledgeable than me comments on this then.
Title: Re: Hidden services security doesn't look too good.
Post by: StExo on May 28, 2013, 11:09 am
@Bitfool - You are indeed a fool. The DDoS wasn't the typical kind that overloads the server; it flooded the nodes which point to the Silk Road servers, which is why we could not access it, as opposed to the server being slowed down. This is nothing new at all and bears no relevance to the attack laid out, and Miah was right. I'm no technical expert, but I've been around here long enough to distinguish between an expert and somebody who is completely misunderstanding the discussion at hand.
Title: Re: Hidden services security doesn't look too good.
Post by: astor on May 28, 2013, 02:15 pm
With that being said, I'm still highly skeptical about what that article proposes. We can sit here all day and theorycraft, but what counts is real-world results. Like Jack N Hoff said: why haven't they done it yet?

It's important to understand what this attack is, and what it is not. It is effective at deanonymizing random hidden services among a large collection of hidden services (like the 40K that exist on the Tor network). It is not effective at deanonymizing a specific hidden service.

The attack works because the attacker can trawl for many HS descriptors in a short amount of time, and at relatively little cost. That's what most of the paper is about. But it relies on the fact that some hidden services randomly chose the attacker's node for their entry guard. That's why in the paper they deanonymized two hidden services controlled by them, which they configured to use their entry guard on purpose, and some bots in a botnet, because there are probably tens of thousands of these bots running as hidden services, so the chances were high that some of them chose the researchers' entry guard.

They didn't deanonymize SR or other specific, high value hidden services, nor could LE trivially do that with this attack, because the chances of those few hidden services randomly choosing LE's entry guard are extremely small. In fact, they could be running their own anonymously purchased, private entry guards, thus making the attack impossible. ;)
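
A rough back-of-the-envelope of that "random chance" (my numbers, not the paper's; the paper's own expectation of "over 450" catches corresponds to a smaller estimate of the hidden service population):

Code: [Select]
# ~40,000 hidden services, each picking 3 entry guards; a guard node
# picked with probability p per slot "catches" roughly 40000 * 3 * p of them.
hidden_services = 40000
p_guard = 0.006              # the paper's figure for one mid-sized guard node
print(hidden_services * 3 * p_guard)   # ~720 services expected to pick this guard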

Title: Re: Hidden services security doesn't look too good.
Post by: astor on May 28, 2013, 02:26 pm
To direct this attack at a specific hidden service, they write:

Quote
In early 2012 we operated a guard node that we rented from a large European hosting company (Server4You, product EcoServer Large X5) for EUR 45 (approx. USD 60) per month. Averaging over a month and taking the bandwidth weights into account, we calculated that the probability for this node to be chosen as a guard node was approximately 0.6% on average for each try a Tor client made that month. As each hidden service chooses three guard nodes initially, we expect over 450 hidden services to have chosen this node as a guard node. Running these numbers for a targeted (non-opportunistic) version of the attack described in Section VI-A shows us that renting 23 servers of this same type would give us a chance of 13.8% for any of these servers to be chosen. This means that within 8 months, the probability to deanonymize a long-running hidden service by one of these servers becoming its guard node is more than 90%, for a cost of EUR 8280 (approximately USD 11,000).

Granted, $11,000 and 8 months isn't impossible for some LEA to spend to identify a very high value hidden service, but it's also not a trivial attack, and it still depends on the hidden service having a normal entry guard configuration and rotation period.
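
For anyone who wants to check that arithmetic, here's a rough Python sketch. The 0.6%-per-pick and 23-server figures are the paper's; the ~45-day average guard lifetime (Tor's 30-60 day rotation at the time) is my assumption:

Code: [Select]
p_one_server = 0.006            # chance one rented node is picked per guard try (paper)
servers = 23
p_any = servers * p_one_server  # ~0.138, the paper's 13.8%

picks_per_month = 3 / 1.5       # 3 guard slots, ~1.5 month average guard lifetime
months = 8
tries = round(picks_per_month * months)   # ~16 guard selections in 8 months
p_success = 1 - (1 - p_any) ** tries
print("%.1f%% per pick, %.0f%% after %d months" % (100 * p_any, 100 * p_success, months))
# prints: 13.8% per pick, 91% after 8 months -- the paper's "more than 90%"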


Title: Re: Hidden services security doesn't look too good.
Post by: SOUTHPAW on May 28, 2013, 11:50 pm
IX. CONCLUSION

We have analyzed the security properties of Tor hidden services and shown that attacks to deanonymize hidden services at a large scale are practically possible with only a moderate amount of resources. We have demonstrated that collecting the descriptors of all Tor hidden services is possible in approximately 2 days by spending less than USD 100 in Amazon EC2 resources. Running one or more guard nodes then allows an attacker to correlate hidden services to IP addresses using a primitive traffic analysis attack. Furthermore, we have shown that attackers can impact the availability and sample the popularity of arbitrary hidden services not under their control by selectively becoming their hidden service directories.

To address these vulnerabilities we have proposed countermeasures. These prevent hidden service directories from learning the content of any of the descriptors unless they also know their corresponding onion address, and significantly increase the resources required to selectively become a hidden service directory for a targeted hidden service. However, note that the above suggestions are nothing more than stop-gap measures. We believe that the problems we have shown are grave enough to warrant a careful redesign of Tor’s hidden services.

So either this is bullshit, law enforcement doesn't have $100, or law enforcement is downright ignorant.

I'll go with your first option.
Title: Re: Hidden services security doesn't look too good.
Post by: jameslink2 on May 29, 2013, 12:27 am
To direct this attack at a specific hidden service, they write:

Quote
In early 2012 we operated a guard node that we rented from a large European hosting company (Server4You, product EcoServer Large X5) for EUR 45 (approx. USD 60) per month. Averaging over a month and taking the bandwidth weights into account, we calculated that the probability for this node to be chosen as a guard node was approximately 0.6% on average for each try a Tor client made that month. As each hidden service chooses three guard nodes initially, we expect over 450 hidden services to have chosen this node as a guard node. Running these numbers for a targeted (non-opportunistic) version of the attack described in Section VI-A shows us that renting 23 servers of this same type would give us a chance of 13.8% for any of these servers to be chosen. This means that within 8 months, the probability to deanonymize a long-running hidden service by one of these servers becoming its guard node is more than 90%, for a cost of EUR 8280 (approximately USD 11,000).

Granted, $11,000 and 8 months isn't impossible for some LEA to spend to identify a very high value hidden service, but it's also not a trivial attack, and it still depends on the hidden service having a normal entry guard configuration and rotation period.

That makes a couple of assumptions.

 One is that they can observe the traffic from the guard node. It is possible to configure your own nodes for guard nodes which blocks this attack.

The second and more common configuration of a hidden service is to proxy the traffic through Tor. Although it gets more complicated to explain, it makes it much harder to de-anonymize a hidden service. Think of it this way: the hidden service has to connect to other stuff. By proxying it through a Tor client, it hides the Tor requests for a hidden service from the guard nodes.

You can also VPN, tunnel, or proxy the traffic for the hidden service, further protecting your hidden service. There are also VPSs that allow you to sign up via Tor and pay via Bitcoin. Meaning even if they find the server, there is little chance of them finding the owner.

It really is interesting the more you dig into how this stuff works, and every attack I have seen is only able to accomplish de-anonymization by using the most basic of Tor configurations.




Title: Re: Hidden services security doesn't look too good.
Post by: bitfool on May 29, 2013, 12:46 am
@jameslink2 & astor

What about the DoS attack based on taking over the hidden services directory? How difficult is that one to implement, and to protect against?
Title: Re: Hidden services security doesn't look too good.
Post by: jameslink2 on May 29, 2013, 01:10 am
@jameslink2 & astor

What about the DoS attack based on taking over the hidden services directory? How difficult is that one to implement, and to protect against?

My understanding of how this works is that the "hidden services directory" is not a single server; it is a distributed hash table spread amongst the nodes. I guess you could DDoS/DoS enough nodes, but that would be difficult across the Tor network as I understand it.

See the following for an explanation (External Wiki link)

http://en.wikipedia.org/wiki/Distributed_hash_table
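
For what it's worth, the "directory" part is a hash ring: a descriptor is stored on the few HSDir relays whose fingerprints immediately follow the descriptor's ID. A toy sketch of that placement rule (simplified; real v2 Tor uses two replicas of three HSDirs each, and fake fingerprints here). It also shows why the paper's attack works: an attacker who grinds relay identity keys until his fingerprint sorts just in front of a predictable descriptor ID becomes responsible for it:

Code: [Select]
import hashlib
from bisect import bisect_right

def responsible_hsdirs(descriptor_id, fingerprints, n=3):
    # The n relays whose fingerprints immediately follow the descriptor
    # ID on the closed fingerprint ring are responsible for storing it.
    ring = sorted(fingerprints)
    i = bisect_right(ring, descriptor_id)
    return [ring[(i + k) % len(ring)] for k in range(n)]

relays = [hashlib.sha1(b"relay%d" % i).digest() for i in range(100)]  # fake fingerprints
desc_id = hashlib.sha1(b"some descriptor id").digest()
print(responsible_hsdirs(desc_id, relays))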
Title: Re: Hidden services security doesn't look too good.
Post by: astor on May 29, 2013, 01:22 am
That makes a couple of assumptions.

One is that they can observe the traffic from the guard node.

Well, the attack is successful when they own the guard node, but becoming the guard node through random selection by the hidden service is what takes so long and costs so much money.

It is possible to configure your own nodes for guard nodes which blocks this attack.

Yep, that's basically what I was saying. Alternatively, if the hidden service operator didn't want to deal with anonymously purchasing extra servers for the entry guards, they could change the length of time that they keep entry guards:

Code: [Select]
/* Choose expiry time smudged over the past month. The goal here
* is to a) spread out when Tor clients rotate their guards, so they
* don't all select them on the same day, and b) avoid leaving a
* precise timestamp in the state file about when we first picked
* this guard. For details, see the Jan 2010 or-dev thread. */
entry->chosen_on_date = time(NULL) - crypto_rand_int(3600*24*30);

Change 30 to 180 and you've got entry guards for up to 6 months at a time, minus churn. Then it takes 4 years and $44,000 to achieve a 90% success rate with this attack. Adjust as needed.
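
Rough math behind that scaling (my assumption: a guard lives between one and two smudge windows, so its mean lifetime is 1.5x the constant):

Code: [Select]
import math
p_any = 0.138                  # 23 rented guards, chance per guard pick (paper's figure)
picks = math.log(0.1) / math.log(1 - p_any)   # picks needed for 90% success, ~15.5

for smudge_days in (30, 180):
    lifetime_months = 1.5 * smudge_days / 30.0
    months = picks / (3 / lifetime_months)    # 3 guard slots re-picked as they expire
    print(smudge_days, round(months, 1))
# prints: 30 7.8 / 180 46.5 -- months to a 90% chance, i.e. roughly "4 years"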

The second and more common configuration of a hidden service is to proxy the traffic through Tor. Although it gets more complicated to explain, it makes it much harder to de-anonymize a hidden service. Think of it this way: the hidden service has to connect to other stuff. By proxying it through a Tor client, it hides the Tor requests for a hidden service from the guard nodes.

Are you talking about Tor over Tor, ie you run two Tor instances and proxy one instance (which serves the hidden service) through the SocksPort of the other? Because all hidden services work over Tor. In any case, I don't think Tor over Tor (or even layered entry guards) helps here, because the probability of the attacker becoming one of the first-layer entry guards is the same.

What about the DoS attack based on taking over the hidden services directory? How difficult is that one to implement, and to protect against?

That is more worrying. It looks like it's easy for an attacker to pull off, and there's not much a hidden service can do to defend against it. It's not like messing with your entry guards or intro points, because in order for your visitors to figure out your configuration in the first place (like which intro points they can use to talk to you), they need to find your descriptor. That requires mutual assumptions made by both parties: for example, that I as your visitor can find your hidden service descriptor at a relay whose fingerprint is closest to the hash of your public key and the date.

They can make the descriptor ID unpredictable, for example by concatenating a random string to the hash of the public key and the date, and hashing that again, but that kind of solution needs to be implemented by the whole network, and new browser bundles must be distributed to users. They are working on it though:

https://trac.torproject.org/projects/tor/ticket/8244
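
For the curious, a loose sketch of how the v2 descriptor ID is computed today, and how mixing in a secret makes it unpredictable to the HSDirs (simplified from rend-spec; field encodings are approximate and the key is fake):

Code: [Select]
import hashlib, struct, time

def descriptor_id(service_pubkey, replica, now, secret=b""):
    # Roughly the v2 scheme: permanent-id is the first 10 bytes of
    # SHA1(public key); the ID rotates daily and differs per replica.
    permanent_id = hashlib.sha1(service_pubkey).digest()[:10]
    period = int((now + permanent_id[0] * 86400 // 256) // 86400)
    secret_id = hashlib.sha1(struct.pack(">I", period) + secret + bytes([replica])).digest()
    return hashlib.sha1(permanent_id + secret_id).digest()

key = b"fake public key bytes"
# Today, anyone who knows the onion address can precompute this:
print(descriptor_id(key, 0, time.time()).hex())
# With a secret mixed in (the "random string" idea), HSDirs can no
# longer predict or recognize whose descriptor they are storing:
print(descriptor_id(key, 0, time.time(), secret=b"shared client secret").hex())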
Title: Re: Hidden services security doesn't look too good.
Post by: kmfkewm on May 29, 2013, 02:49 am
Honestly, Tor security in general doesn't look too good.
Title: Re: Hidden services security doesn't look too good.
Post by: Jack N Hoff on May 29, 2013, 02:56 am
Honestly, Tor security in general doesn't look too good.

The US government is paying 1.3 million dollars a year to the developers of Tor, and the US government relies on it so much that I'm sure the developers will keep ahead of the curve.  I have faith.
Title: Re: Hidden services security doesn't look too good.
Post by: kmfkewm on May 29, 2013, 03:07 am
Of course, the authors of this paper are being kind of hyperbolic, because the entry guard system itself protects from this attack being fully carried out to deanonymize all clients. However, there are attacks for deanonymizing hidden services with little resources, especially if the attacker is a LE level attacker and can use court orders against targeted guard nodes (or if they don't even need to use court orders, because of figuratively prehistoric communication privacy laws, which were mostly crafted ages ago to protect our physical mail and most recently telephone communications, and which are almost completely obsoleted when faced with modern intercept techniques).

Tor started out with the goal of being a system that distributes trust in such a way that any one of the nodes you use can be compromised, and it doesn't compromise your anonymity. Using a single proxy is a single point of failure; Tor attempted to remove the single point of failure via its encryption techniques and using three nodes. Tor was celebrated for accomplishing this, but really it only superficially accomplished it.

In reality, the entry node is far, far, far more important than any of the other nodes. For clients, having a bad entry guard is almost as bad as using a bad single hop proxy; in some scenarios it is essentially the same exact thing. In the case of hidden services, having a bad entry guard is in pretty much all scenarios just as bad as using a malicious single hop reverse proxy. The middle node and exit node are far less important, and for hidden services they are essentially worthless nodes.

Quote
The US government is paying 1.3 million dollars a year to the developers of TOR and the US government relies on it so much so much so I'm sure the developers will keep ahead of the curve.  I have faith.

Tor is low latency anonymity. They are at the head of the curve for low latency anonymity. But being the smartest retard is hardly an accomplishment.
Title: Re: Hidden services security doesn't look too good.
Post by: Capslockian on May 29, 2013, 03:07 am
The real question is how we can defend against our IP being compromised. Does anyone know if a VPN or VM will protect you?

Your Tor shit should be on an encrypted machine or VM anyway. You need that at the very least. As far as a VPN goes, it really depends. I assume an overseas one would protect you from US LE to a certain extent. Your ISP revealing that there is Tor activity on your home network, the presence of any number of confiscated packages, as well as the high likelihood that you have paraphernalia in your home is enough to arrest you for sure. You don't even need to sign for shit. Pick up the package from the mailbox, go inside, sit down, open it, and boom, LE can knock down your door and hit you with felonies. We're all playing with fire whether we want to believe it or not.
Title: Re: Hidden services security doesn't look too good.
Post by: Miah on May 29, 2013, 03:31 am
Quote
Tor is low latency anonymity. They are at the head of the curve for low latency anonymity. But being the smartest retard is hardly an accomplishment.

HAHA, I'd +1 that if I could... u mind if I use that for my sig?
Title: Re: Hidden services security doesn't look too good.
Post by: kmfkewm on May 29, 2013, 03:33 am
The problem is that this is another theoretical attack; if it were possible, they sure as hell wouldn't publish it without taking advantage of it. Many reports have shown Tor and hidden services are vulnerable to traffic analysis and the like; the problem is proving them in an environment which isn't controlled the way a simulation is, and what's more, LEA then have to prove it in a court of law, which is a whole new ballgame.

I hear people claim that attacks against Tor are only theoretical, but I never quite understood this idea. Many of the theoretical attacks against Tor have been carried out against the live Tor network with success. For example, certainly timing attacks have been proven to work against Tor. This new attack is simply a timing attack in which the attacker positions themselves at the HSDIR and hopes to own one of the clients' or hidden services' entry guards. From the quote I have read here on the first page of posts, it seems like the researchers are taking the wrong angle when approaching this method of attack. If the hidden service has a bad entry guard, it can be deanonymized by the owner of the entry guard so long as the entry guard owner knows the .onion address. It seems the researchers are enumerating hidden service .onion addresses and then carrying out a trivial timing attack to see if one of their entry guards was selected by any of the hidden services. This is interesting, but many of the interesting hidden services are already public knowledge, in which case the attack is a simple timing attack that has already been in the literature for many years. I think that, more importantly, this attack allows the attacker to position themselves such that they only need to own the entry guard of a client connecting to a hidden service in order to deanonymize the client. The client connects to an HSDIR that is attacker controlled, so the attacker has half of a timing attack; if the client's utilized entry guard is also attacker controlled, then the attacker can link the client to the hidden service. That is a bit more interesting, though it is nothing really groundbreaking. It is also clearly not simply a theoretical attack, and indeed it could be easily carried out against the live Tor network; the only issue is owning the entry guard utilized by the connecting client, which is the hard part.

I imagine that for the most part the Tor developers will say 'meh' about this paper. None of this is really new, except for perhaps the ability for an attacker to become the HSDIR of arbitrary hidden services. Entry guards protect from this attack to the extent that they can, and we are left again with what is essentially trusting a single hop proxy.
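
A toy illustration of what that "half of a timing attack" correlation looks like (made-up timestamps, not the paper's method; a real attack has to handle noise and false positives):

Code: [Select]
def correlation(hsdir_times, guard_times, window=2.0):
    # Fraction of events seen at our HSDir that line up (within `window`
    # seconds) with connections seen at our entry guard.
    hits = sum(1 for t in hsdir_times if any(abs(t - g) <= window for g in guard_times))
    return hits / max(len(hsdir_times), 1)

# Descriptor fetches observed at the malicious HSDir vs. circuits at our guard:
print(correlation([100.0, 240.5, 377.2], [99.6, 241.1, 376.9]))  # 1.0 -> same client
print(correlation([100.0, 240.5, 377.2], [50.0, 800.0]))         # 0.0 -> unrelated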
Title: Re: Hidden services security doesn't look too good.
Post by: Miah on May 29, 2013, 03:43 am
Quote
I hear people claim that attacks against Tor are only theoretical, but I never quite understood this idea. Many of the theoretical attacks against Tor have been carried out against the live Tor network with success. For example, certainly timing attacks have been proven to work against Tor. This new attack is simply a timing attack in which the attacker positions themselves at the HSDIR and hopes to own one of the clients' or hidden services' entry guards. From the quote I have read here on the first page of posts, it seems like the researchers are taking the wrong angle when approaching this method of attack. If the hidden service has a bad entry guard, it can be deanonymized by the owner of the entry guard so long as the entry guard owner knows the .onion address. It seems the researchers are enumerating hidden service .onion addresses and then carrying out a trivial timing attack to see if one of their entry guards was selected by any of the hidden services. This is interesting, but many of the interesting hidden services are already public knowledge, in which case the attack is a simple timing attack that has already been in the literature for many years. I think that, more importantly, this attack allows the attacker to position themselves such that they only need to own the entry guard of a client connecting to a hidden service in order to deanonymize the client. The client connects to an HSDIR that is attacker controlled, so the attacker has half of a timing attack; if the client's utilized entry guard is also attacker controlled, then the attacker can link the client to the hidden service. That is a bit more interesting, though it is nothing really groundbreaking. It is also clearly not simply a theoretical attack, and indeed it could be easily carried out against the live Tor network; the only issue is owning the entry guard utilized by the connecting client, which is the hard part.

I imagine that for the most part the Tor developers will say 'meh' about this paper. None of this is really new, except for perhaps the ability for an attacker to become the HSDIR of arbitrary hidden services. Entry guards protect from this attack to the extent that they can, and we are left again with what is essentially trusting a single hop proxy.

Ok, fair enough. If you were to put on your Tor developer hat, what would you do to strengthen the integrity of the Tor network?
Title: Re: Hidden services security doesn't look too good.
Post by: kmfkewm on May 29, 2013, 03:52 am
That makes a couple of assumptions.

One is that they can observe the traffic from the guard node.

Well, the attack is successful when they own the guard node, but becoming the guard node through random selection by the hidden service is what takes so long and costs so much money.

It is possible to configure your own nodes for guard nodes which blocks this attack.

Yep, that's basically what I was saying. Alternatively, if the hidden service operator didn't want to deal with anonymously purchasing extra servers for the entry guards, they could change the length of time that they keep entry guards:

Code: [Select]
/* Choose expiry time smudged over the past month. The goal here
* is to a) spread out when Tor clients rotate their guards, so they
* don't all select them on the same day, and b) avoid leaving a
* precise timestamp in the state file about when we first picked
* this guard. For details, see the Jan 2010 or-dev thread. */
entry->chosen_on_date = time(NULL) - crypto_rand_int(3600*24*30);

Change 30 to 180 and you've got entry guards for up to 6 months at a time, minus churn. Then it takes 4 years and $44,000 to achieve a 90% success rate with this attack. Adjust as needed.

The second and more common configuration of a hidden service is to proxy the traffic through Tor. Although it gets more complicated to explain, it makes it much harder to de-anonymize a hidden service. Think of it this way: the hidden service has to connect to other stuff. By proxying it through a Tor client, it hides the Tor requests for a hidden service from the guard nodes.

Are you talking about Tor over Tor, ie you run two Tor instances and proxy one instance (which serves the hidden service) through the SocksPort of the other? Because all hidden services work over Tor. In any case, I don't think Tor over Tor (or even layered entry guards) helps here, because the probability of the attacker becoming one of the first-layer entry guards is the same.

What about the DoS attack based on taking over the hidden services directory? How difficult is that one to implement, and to protect against?

That is more worrying. It looks like it's easy for an attacker to pull off, and there's not much a hidden service can do to defend against it. It's not like messing with your entry guards or intro points, because in order for your visitors to figure out your configuration in the first place (like which intro points they can use to talk to you), they need to find your descriptor. That requires mutual assumptions made by both parties: for example, that I as your visitor can find your hidden service descriptor at a relay whose fingerprint is closest to the hash of your public key and the date.

They can make the descriptor ID unpredictable, for example by concatenating a random string to the hash of the public key and the date, and hashing that again, but that kind of solution needs to be implemented by the whole network, and new browser bundles must be distributed to users. They are working on it though:

https://trac.torproject.org/projects/tor/ticket/8244

Astor, first I would like to say that everything you have said in this thread is very accurate; thanks for helping people to understand this attack. The second thing I want to say is that although using persistent non-rotating entry guards can perfectly protect from this attack, it doesn't save hidden services from LE. They can still trace to entry guards, and then once again Tor is reduced to trusting a single hop proxy (well, actually three single hop proxies). So although the Tor configuration you suggest protects from an internal attacker (i.e., the researchers in this paper), it doesn't protect from an external attacker who can monitor a targeted entry guard. If any of the entry guards are in the USA, tough luck, because the feds don't even need a warrant for a pen register / trap and trace.

Using layered guards can help to protect from this, though the trace always begins at the position of the attacker-controlled node closest to the hidden service. Layer enough guards and get lucky, and you might have a moderately difficult to trace hidden service. Vanilla Tor is dangerously weak though. And the truth is that even some of the core Tor developers have essentially admitted this fact. They have taken to saying that you are even more screwed if you don't use Tor, which is an accurate although not very reassuring way to put things.
Title: Re: Hidden services security doesn't look too good.
Post by: astor on May 29, 2013, 04:16 am
I know, they could trace to the entry guard in 1 hour and 20 minutes. :(

Seems like constructing your own network of layered private bridges in hostile countries like China, Cuba, and Venezuela might be the best immediate protection. Sucks when a political solution is better than a technological solution.

That's just going back to the clearnet days of thinking you were safe because you got web hosting in Panama.
Title: Re: Hidden services security doesn't look too good.
Post by: kmfkewm on May 29, 2013, 04:18 am
Quote
I hear people claim that attacks against Tor are only theoretical, but I never quite understood this idea. Many of the theoretical attacks against Tor have been carried out against the live Tor network with success. For example, certainly timing attacks have been proven to work against Tor. This new attack is simply a timing attack in which the attacker positions themselves at the HSDIR and hopes to own one of the clients' or hidden services' entry guards. From the quote I have read here on the first page of posts, it seems like the researchers are taking the wrong angle when approaching this method of attack. If the hidden service has a bad entry guard, it can be deanonymized by the owner of the entry guard so long as the entry guard owner knows the .onion address. It seems the researchers are enumerating hidden service .onion addresses and then carrying out a trivial timing attack to see if one of their entry guards was selected by any of the hidden services. This is interesting, but many of the interesting hidden services are already public knowledge, in which case the attack is a simple timing attack that has already been in the literature for many years. I think that, more importantly, this attack allows the attacker to position themselves such that they only need to own the entry guard of a client connecting to a hidden service in order to deanonymize the client. The client connects to an HSDIR that is attacker controlled, so the attacker has half of a timing attack; if the client's utilized entry guard is also attacker controlled, then the attacker can link the client to the hidden service. That is a bit more interesting, though it is nothing really groundbreaking. It is also clearly not simply a theoretical attack, and indeed it could be easily carried out against the live Tor network; the only issue is owning the entry guard utilized by the connecting client, which is the hard part.

I imagine that for the most part the Tor developers will say 'meh' about this paper. None of this is really new, except for perhaps the ability for an attacker to become the HSDIR of arbitrary hidden services. Entry guards protect from this attack to the extent that they can, and we are left again with what is essentially trusting a single hop proxy.

Ok, fair enough. If you were to put on your Tor developer hat, what would you do to strengthen the integrity of the Tor network?


Honestly, having a network that resembles Tor rules out some of the things that can be done to enormously increase anonymity. But even something that looks a lot like Tor can be much more anonymous than Tor. The first step is to reduce the number of entry guards selected by the client, beyond a doubt to two, and it is possible that even using only one is the best option. The second step is to increase the number of nodes on a circuit and introduce layered guard nodes. The third step is to greatly reduce the frequency with which guard nodes are rotated, especially first layer guard nodes. The fourth step is to use PIR for HSDIR requests and to remove the concept of using a set of introduction nodes that persistently introduce for a specific hidden service.

Perhaps something like SURBs, the single use reply blocks of type III remailers, can be used instead. In such a case the hidden service would layer encrypt a packet that routes toward it, publish the packet to the HSDIR, and the client would query the HSDIR with a PIR protocol to retrieve one of the signed SURB packets. Then the client would create a circuit and send the SURB to the first node specified, which would remove a layer of encryption revealing the second node specified, and so on, all the way up to the hidden service. Something like this. Then the attacker would need to own the hidden service's first layer entry guard to do an end point timing attack against connections to the hidden service. Additionally, they could only brute force up to the position of the layered guard node that they own that is closest to the hidden service. Additionally, there would no longer be a centralized set of introduction nodes to DoS.

An additional measure that could be taken is using some system to encrypt hidden service addresses. My first thought was that hidden services could be queried for by the hash of their .onion address rather than their .onion address (of course with PIR in either case, but the hash would be used to obscure the list of all .onion addresses from the HSDIR nodes), with the retrieved information encrypted symmetrically with the actual .onion address of the hidden service. However, there have been issues identified with this hash based system; rransom, one of the Tor developers, proposed a different (much more advanced) solution that uses elliptic curve cryptography and blinding to get the same security properties without any of the pitfalls.
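
A structure-only toy of the SURB idea, to make the layering concrete (XOR with a hash keystream stands in for real cryptography; hop names and keys are invented):

Code: [Select]
import hashlib, json

def xor(data, key):
    # Stand-in for real layered encryption: XOR with a hash keystream.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def make_surb(route, keys):
    # Innermost layer tells the final hop (the service) it is the endpoint.
    blob = xor(json.dumps({"next": None, "rest": None}).encode(), keys[-1])
    # Wrap outward: the layer for hop i names hop i+1 and carries the rest.
    for i in range(len(route) - 2, -1, -1):
        blob = xor(json.dumps({"next": route[i + 1], "rest": blob.hex()}).encode(), keys[i])
    return blob

def peel(blob, key):
    return json.loads(xor(blob, key))

surb = make_surb(["guard", "middle", "service"], [b"k1", b"k2", b"k3"])
hop1 = peel(surb, b"k1")                          # guard learns only "middle"
hop2 = peel(bytes.fromhex(hop1["rest"]), b"k2")   # middle learns only "service"
print(hop1["next"], hop2["next"])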

I think that this is essentially the best that Tor can hope for without fundamentally changing itself into something else. Even this proposed set of changes includes significant reworkings of the hidden service protocol.
Title: Re: Hidden services security doesn't look too good.
Post by: Miah on May 29, 2013, 04:24 am
Very interesting... do you think something like that could work if I wanted to develop a website that could conduct financial transactions? I started another thread on how a site or application could be developed to anonymize transactions, like for example how Liberty Reserve got shut down. Ideally a marketplace like Silk Road should have a currency exchanger or money transmitter that is not traceable. Now that's the flaw in the whole plan. The money has to go or come from somewhere, right? It's not like I can apply the concept of Freenet and just deposit money into other people's accounts and take it out later. Until that gets figured out, it will be us playing catch-up and not the Feds.

My specialty is more on the programming/web side; I'm just learning about networks now, but I find it quite fascinating. Can you imagine how infuriating it is for a DEA agent knowing that there's a marketplace that sells drugs and uses their own post service to mail it out? That, my friend, is poetic justice at its finest.

My concern is not that the DEA will figure out how to topple Bitcoin and SR; it's just that with network security you have layers upon layers to protect yourself, and it seems the major flaw in this whole setup is the cashing out and buying of bitcoins. I know vendors have creative ways to cash out, but don't they have enough responsibilities? The good vendors on here make nice cash, but when you think about the risks they take on a daily basis, it's not worth the money in the end. A lot of them, if caught, would be serving 5-10 years easy, so if I'm your average vendor making $3-5k and hustling like mad, it just would be nice to have some kind of anonymous financial transaction service. Fuck it. Here's a crazy idea. A few people around made Bitcoin ATMs, so the technology is there, but those guys were quoted saying they're scared shitless with all the new bank rules and the 'money transmitter' clauses (utter bs), and that their machines will never go live. And I thought this was a democracy?
Title: Re: Hidden services security doesn't look too good.
Post by: astor on May 29, 2013, 04:45 am
It's still worth noting that no hidden service has been deanonymized through a direct attack on the Tor network.

Maybe that's because LE is always 3 years behind the curve...


Very interesting... do you think something like that could work if I wanted to develop a website that could conduct financial transactions? I started another thread on how a site or application could be developed to anonymize transactions, like for example how Liberty Reserve got shut down. Ideally a marketplace like Silk Road should have a currency exchanger or money transmitter that is not traceable. Now that's the flaw in the whole plan. The money has to go or come from somewhere, right? It's not like I can apply the concept of Freenet and just deposit money into other people's accounts and take it out later. Until that gets figured out, it will be us playing catch-up and not the Feds.

If you're talking about trades within bitcoin, there's Zerocoin.

http://dkn255hz262ypmii.onion/index.php?topic=152682.45

If you're talking about exchanging bitcoins for government-controlled currency, cash is the only (potentially) anonymous, untraceable government-backed currency, so cash for bitcoins seems like the only thing that would work.

Then you're back to face to face meetings.


My specialty is more on the programming/web side; I'm just learning about networks now, but I find it quite fascinating. Can you imagine how infuriating it is for a DEA agent knowing that there's a marketplace that sells drugs and uses their own post service to mail it out? That, my friend, is poetic justice at its finest.

haha, I needed something to cheer me up man. :)
Title: Re: Hidden services security doesn't look too good.
Post by: kmfkewm on May 29, 2013, 05:44 am
It's still worth noting that no hidden service has been deanonymized through a direct attack on the Tor network.

Test hidden services have been deanonymized by researchers, but so far nobody knows of a targeted illegal hidden service being pwnt by LE (via a direct attack on Tor anyway).
Title: Re: Hidden services security doesn't look too good.
Post by: Miah on May 29, 2013, 05:59 am
Quote
haha, I needed something to cheer me up man. :)

Glad I can be of help =)

Quote
Test hidden services have been deanonymized by researchers, but so far nobody knows of a targeted illegal hidden service being pwnt by LE (via a direct attack on Tor anyway).

I think that's the scary part of all this. Ideally there should be some sort of mechanism that would recognize a guard node being taken over. I'm sure there's some algorithm that can figure that out. I'm still not 100% sure on how Tor works with the guard nodes and all the mechanisms in place. Whatever I've learned has been pretty much from reading you guys' posts and some papers.

Correct me if I'm wrong, but the guard nodes are random and no one knows where they will be exactly, right? So LE does a timing attack to start eliminating possible nodes to hone in on the target? I think I understand why it's so difficult to implement certain things in Tor. In another situation I would just say let people assign their own guard nodes for their site, but that just opens the doors for attack and makes Tor completely useless. It's quite the conundrum actually.
Title: Re: Hidden services security doesn't look too good.
Post by: astor on May 29, 2013, 06:47 am
I think that's the scary part of all this. Ideally there should be some sort of mechanism that would recognize a guard node being taken over. I'm sure there's some algorithm that can figure that out.

Well, kmf is actually talking about a different attack from the one published in the paper that started this thread.

He's talking about a well known attack published in 2006 which you can read here: http://freehaven.net/anonbib/date.html#hs-attack06

That one doesn't require an entry guard to be taken over. It just requires LE to identify an entry guard by opening up many connections to the hidden service, and it's a lot scarier because it only takes 1-2 hours to find the entry guard, although probably days to weeks longer to monitor it and find the hidden service. However, that's way shorter than the 4-8 months it takes to carry out the attack in the recent paper. The best defense against the 2006 attack is layered entry guards, which are discussed in the original paper and still not implemented.

I'm still not 100% sure on how Tor works with the guard nodes and all the mechanisms in place. Whatever I've learned has been pretty much from reading you guys' posts and some papers.

Correct me if I'm wrong, but the guard nodes are random and no one knows where they will be exactly, right? So LE does a timing attack to start eliminating possible nodes to hone in on the target? I think I understand why it's so difficult to implement certain things in Tor.

Relays are given the entry guard flag by the directory authorities. Entry guards are chosen based on uptime and bandwidth. So right now there are about 3500 relays and 1200 entry guards. Here's a graph of how they change over time:

https://metrics.torproject.org/network.html?graph=relayflags&start=2012-02-29&end=2013-05-29&flag=Running&flag=Guard#relayflags

Your Tor client picks 3 entry guards and sticks with them for a month at a time. It does this "randomly" but based on bandwidth, otherwise small guards would be overloaded and the Tor network would be even slower. Your Tor client builds new circuits every 10 minutes, so before entry guards were created, your client would pick new entry nodes every 10 minutes. Going from 10 minutes to 1 month with the same entry nodes, you can see that was a big change in Tor client behavior.

In the 2006 attack, LE opens many connections to a hidden service, until one of them happens to pass through a node they control, which is one hop away from the entry guard. That way they can identify the entry guard.

In another situation I would just say let people assign their own guard nodes for their site, but that just opens the doors for attack and makes Tor completely useless. It's quite the conundrum actually.

Allowing people to choose the nodes in their circuits would make their circuits distinguishable, because of individual biases in how they selected nodes, and that would reduce their anonymity.

Your anonymity is maximized when you look like everyone else, and the more people that look just like you, the bigger your anonymity set. That's why you should stick to the defaults in your browser bundle; we all should, so we'll all look the same.

That being said, you *can* choose your own entry and exit nodes, it's just not recommended.
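
For illustration, here's roughly what bandwidth-weighted guard selection looks like (a toy sketch with made-up numbers; the real selection uses consensus bandwidth weights and relay flags):

Code: [Select]
import random

def pick_guards(relays, n=3, rng=random.SystemRandom()):
    # Each draw is weighted by advertised bandwidth; a client keeps n
    # distinct guards and reuses them instead of picking per circuit.
    pool = dict(relays)
    picked = []
    for _ in range(min(n, len(pool))):
        names, weights = zip(*pool.items())
        name = rng.choices(names, weights=weights, k=1)[0]
        picked.append(name)
        del pool[name]
    return picked

relays = {"bigGuard": 10000, "midGuard": 5000, "tinyGuard": 500}  # fake KB/s
print(pick_guards(relays))  # bigGuard is ~20x more likely per draw than tinyGuard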

Title: Re: Hidden services security doesn't look too good.
Post by: Jack N Hoff on May 29, 2013, 06:58 am
Allowing people to choose the nodes in their circuits would result in their circuit creation being influenced by individual biases, which would make patterns in circuit creation distinguishable, fingerprintable, thus reducing their anonymity.

Your anonymity is maximized when you look like everyone else, and the more people that look just like you, the bigger your anonymity set. That's why you should stick to the defaults in your browser bundle; we all should, so we'll all look the same.

In like 2004 to maybe 2007 when Tor was incredibly slow, I would modify the config file and have Tor go through specific IPs that I had selected from the list.  Ones with more bandwidth and that I just felt in my gut were safer lol.  It helped with the speed a lot.  I connected to private drug marketplace boards like that for years with no problems.  Before I had modified it, I would time out and have to change my identity all the time.  My memory is awful but I would assume that I wasn't going through an internet connection that didn't belong to me.  I really had no idea that I was compromising my security back then...  I'm sure you also remember when Tor was ridiculously slow, astor.
Title: Re: Hidden services security doesn't look too good.
Post by: kmfkewm on May 29, 2013, 07:31 am
I think that's the scary part of all this. Ideally there should be some sort of mechanism that would recognize a guard node being taken over. I'm sure there's some algorithm that can figure that out.

Well, kmf is actually talking about a different attack from the one published in the paper that started this thread.

He's talking about a well known attack published in 2006 which you can read here: http://freehaven.net/anonbib/date.html#hs-attack06

I think there have been other successful direct attacks on Tor. Traffic classifiers have 'predicted'/'identified' encrypted websites loaded through Tor with over 60% accuracy, and that was before hidden Markov models were used. I think there was a fairly recent research paper that took hidden Markov models into account, called something like 'missing the forest for the trees'. I don't recall the results, but I am sure that the accuracy jumped up significantly over 60%. Essentially, the classifier that got over 60% accuracy only took a single loaded page into consideration to fingerprint a webpage, whereas with hidden Markov models classifiers take an entire sequence of loaded pages into account to fingerprint a website. There was also an attack that could fairly accurately geolocate servers by measuring clock skew, though that's not really a direct attack on Tor. There are probably some others that I am not recalling as well. However, as far as purely direct attacks on Tor go, pretty much in all cases they require the target to use at least one attacker-controlled or monitored entry guard.
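
For a flavor of how those classifiers work, here's a toy nearest-neighbor fingerprinting sketch (made-up traces; real attacks use far richer features than a packet-size histogram):

Code: [Select]
from collections import Counter

def features(packet_sizes):
    # Bucket observed packet sizes into a coarse histogram.
    return Counter(size // 100 for size in packet_sizes)

def distance(a, b):
    return sum(abs(a[k] - b[k]) for k in set(a) | set(b))

# Made-up training traces for two "sites":
training = {
    "siteA": features([600, 600, 1500, 1500, 1500]),
    "siteB": features([300, 300, 300, 900]),
}

def classify(trace):
    f = features(trace)
    return min(training, key=lambda site: distance(training[site], f))

print(classify([600, 1500, 1500, 1500]))   # -> siteA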

Quote
That one doesn't require an entry guard to be taken over. It just requires LE to identify an entry guard by opening up many connections to the hidden service, and it's a lot scarier because it only takes 1-2 hours to find the entry guard, although probably days to weeks longer to monitor it and find the hidden service. However, that's way shorter than the 4-8 months it takes to carry out the attack in the recent paper. The best defense against the 2006 attack is layered entry guards, which are discussed in the original paper and still not implemented.

Yeah, way more worried about the attack from 2006 than this "new" one. This new attack is like 50% the 2006 attack anyway: "own the hidden service's entry guard to deanonymize it". But instead of brute forcing circuits against a specific hidden service, they just hope they can enumerate enough hidden service .onions to own an entry guard used by some of them. They really are taking a kind of alarmist tone with their paper, from what I can see, considering that it is nothing really new. The only new part is the technique of forcing yourself into the position of a particular hidden service's HSDIR (I guess, I still have not read the full paper). From what I can tell they are taking a completely different approach than I would; once they can detect all clients attempting to connect to the hidden service, I would try to tie the clients to that specific hidden service with an end point timing attack between the HSDIR node and the client's entry guard. I have no idea how many hidden services they enumerated, but the % of hidden services they deanonymized with this attack should extrapolate to the % of clients they can deanonymize connecting to any particular hidden service. That is the scary part, and it seems they completely overlooked that attack angle.
Title: Re: Hidden services security doesn't look too good.
Post by: astor on May 29, 2013, 05:14 pm
In like 2004 to maybe 2007 when Tor was incredibly slow, I would modify the config file and have Tor go through specific IPs that I had selected from the list.  Ones with more bandwidth and that I just felt in my gut were safer lol.  It helped with the speed a lot.  I connected to private drug marketplace boards like that for years with no problems.  Before I had modified it, I would time out and have to change my identity all the time.  My memory is awful but I would assume that I wasn't going through an internet connection that didn't belong to me.  I really had no idea that I was compromising my security back then

Yeah, if your entry guard operator was malicious, he could notice 'this guy only goes through very high bandwidth nodes', or 'only nodes that share these properties', 'let me spin up a few exit nodes that meet those requirements and pwn him'.

One of the more common biases that I've seen is people don't want to use nodes in their own country, but again, if your entry guard operator is looking for suspicious people, that certainly makes you look suspicious. Someone looking for "red tape" protection is probably worried about LE.

It would be ok if every Tor client behaved that way, but when only a small subset of users are doing it, they stick out of the crowd.

It's interesting how studying anonymity theory improves your thinking skills -- at least it did mine -- because it forces you to think logically about a problem whose solutions are often unintuitive. Lots of people intuitively do things that they think make them safer, but actually harm their anonymity.


  I'm sure you also remember when TOR was ridiculously slow astor.

Oh yeah. People who complain about how slow Tor is have no idea how painfully slow it used to be.
Title: Re: Hidden services security doesn't look too good.
Post by: astor on May 29, 2013, 05:32 pm
I think there have been other successful direct attacks on Tor. Traffic classifiers have 'predicted'/'identified' encrypted websites loaded through Tor with over 60% accuracy, and that was before hidden Markov models were used. I think there was a fairly recent research paper that took hidden Markov models into account, called something like 'missing the forest for the trees'. I don't recall the results, but I am sure that the accuracy jumped up significantly over 60%. Essentially, the classifier that got over 60% accuracy only took a single loaded page into consideration to fingerprint a webpage, whereas with hidden Markov models classifiers take an entire sequence of loaded pages into account to fingerprint a website. There was also an attack that could fairly accurately geolocate servers by measuring clock skew, though that's not really a direct attack on Tor. There are probably some others that I am not recalling as well. However, as far as purely direct attacks on Tor go, pretty much in all cases they require the target to use at least one attacker-controlled or monitored entry guard.

mikeperry isn't convinced by these fingerprinting attacks. To quote him from the Torbutton design doc:

Quote
Website traffic fingerprinting is an attempt by the adversary to recognize the encrypted traffic patterns of specific websites. In the case of Tor, this attack would take place between the user and the Guard node, or at the Guard node itself.

The most comprehensive study of the statistical properties of this attack against Tor was done by Panchenko et al. Unfortunately, the publication bias in academia has encouraged the production of a number of follow-on attack papers claiming "improved" success rates, in some cases even claiming to completely invalidate any attempt at defense. These "improvements" are actually enabled primarily by taking a number of shortcuts (such as classifying only very small numbers of web pages, neglecting to publish ROC curves or at least false positive rates, and/or omitting the effects of dataset size on their results). Despite these subsequent "improvements", we are skeptical of the efficacy of this attack in a real world scenario, especially in the face of any defenses.

In general, with machine learning, as you increase the number and/or complexity of categories to classify while maintaining a limit on reliable feature information you can extract, you eventually run out of descriptive feature information, and either true positive accuracy goes down or the false positive rate goes up. This error is called the bias in your hypothesis space. In fact, even for unbiased hypothesis spaces, the number of training examples required to achieve a reasonable error bound is a function of the complexity of the categories you need to classify.

In the case of this attack, the key factors that increase the classification complexity (and thus hinder a real world adversary who attempts this attack) are large numbers of dynamically generated pages, partially cached content, and also the non-web activity of the entire Tor network. This yields an effective number of "web pages" many orders of magnitude larger than even Panchenko's "Open World" scenario, which suffered continuous near-constant decline in the true positive rate as the "Open World" size grew (see figure 4). This large level of classification complexity is further confounded by a noisy and low resolution featureset - one which is also relatively easy for the defender to manipulate at low cost.

To make matters worse for a real-world adversary, the ocean of Tor Internet activity (at least, when compared to a lab setting) makes it a certainty that an adversary attempting to examine large amounts of Tor traffic will ultimately be overwhelmed by false positives (even after making heavy tradeoffs on the ROC curve to minimize false positives to below 0.01%). This problem is known in the IDS literature as the Base Rate Fallacy, and it is the primary reason that anomaly and activity classification-based IDS and antivirus systems have failed to materialize in the marketplace (despite early success in academic literature).

Still, we do not believe that these issues are enough to dismiss the attack outright. But we do believe these factors make it both worthwhile and effective to deploy light-weight defenses that reduce the accuracy of this attack by further contributing noise to hinder successful feature extraction.


https://www.torproject.org/projects/torbrowser/design/#website-traffic-fingerprinting
Title: Re: Hidden services security doesn't look too good.
Post by: Notrealperson6708 on May 29, 2013, 05:57 pm
They have our lovely tax dollars to hire a super-brain IT guy. Actually, in this day and age I'd expect some LE to do just IT, like the cops that work more on the admin side. Police stations have computers, right? Well, who's in charge of setting up the networks for that and maintaining them?

I work for a small county IT department, and we are in charge of IT infrastructure for both the municipal police and the county police.  Not sure if this holds true for all county/city police, but in our county we would have nowhere near the manpower or knowledge base to perform something like this.  We complain about our government being behind the times when we need a service from them that they either can't provide or can't provide in a timely fashion, but in this scenario it is in our best interest for the government to be slow.