Silk Road forums
Discussion => Security => Topic started by: CabinBoyNathanial on September 02, 2013, 08:31 am
-
I saw this article today and just thought it was interesting enough to pass on ::
It's easier to identify Tor users than they believe, according to research published by a group of researchers from Georgetown University and the US Naval Research Laboratory (USNRL).
Their paper, Users Get Routed: Traffic Correlation on Tor by Realistic Adversaries, is to be presented in November at the Conference on Computer and Communications Security (CCS) in Berlin. While it has been published on the personal page of lead author Aaron Johnson of the NRL, it remained under the radar until someone posted a copy to Cryptome.
The paper states simply that “Tor users are far more susceptible to compromise than indicated by prior work”. That prior work provided the framework for what Johnson's group has accomplished: using traffic correlation in the live Tor network to compromise users' anonymity.
“To quantify the anonymity offered by Tor, we examine path compromise rates and how quickly extended use of the anonymity network results in compromised paths”, they write. In some cases, they found that for the patient attacker, some users can be identified with 95 percent certainty.
The compromise isn't something available to the trivial attacker. The models that Johnson's group developed assume that an adversary either has access to Internet exchange points or controls a number of Autonomous Systems (for example, an ISP). However, it's probably reasonable to assume that the instruments of the state could deploy sufficient resources to replicate Johnson's work.
At the core of Johnson's work is a Tor path simulator that he's published on GitHub. The TorPS simulator helps provide accurate AS path inference for Tor traffic.
“An adversary that provides no more bandwidth than some volunteers do today can deanonymize any given user within three months of regular Tor use with over 50 percent probability and within six months with over 80 percent probability. We observe that use of BitTorrent is particularly unsafe, and we show that long-lived ports bear a large security cost for their performance needs. We also observe that the Congestion-Aware Tor proposal exacerbates these vulnerabilities,” the paper states.
If the adversary controls an AS or has access to Internet exchange point (IXP) traffic, things are even worse. While the results of their tests depended on factors such as AS or IXP location, “some users experience over 95 percent chance of compromise within three months against a single AS or IXP.”
The researchers also note that different user behaviours change the risk of compromise. Sorry, BitTorrent fans: your traffic is extremely vulnerable over time.
------Article copypasta'd from [CLEARNET] http://www.theregister.co.uk/2013/09/01/tor_correlation_follows_the_breadcrumbs_back_to_the_users/ [CLEARNET]-----------------------
-
It's called end-to-end correlation. There is most likely nothing the Tor developers can do about it. If you don't use Tor as a clearnet proxy, this doesn't concern you.
can deanonymize any given user within three months of regular Tor use with over 50 percent probability and within six months with over 80 percent probability.
Nice. Didn't expect it to take that long. It depends on the number of relays/exit nodes the attacker has under surveillance, I suppose. Stream isolation like in Whonix and Tails should make it a little harder.
We observe that use of BitTorrent is particularly unsafe
lol
-
Yep. In a nutshell, when someone controls the guard node you're using and the exit node that's sending your clearnet traffic to the Internet, they can put two and two together and figure out that your source address is sending the traffic.
So for whatever period you're using guard+exit nodes that an adversary controls (or can passively view), you're deanonymized.
Shouldn't be directly applicable to hidden services because of how you connect to them. The adversary has to be able to locate the hidden service to correlate that traffic.
What's interesting is that at the point where an adversary with a near-perfect network view (i.e. NSA) deanonymizes most traffic to exit nodes, the majority of things they can't deanonymize are probably destined for hidden services. Subtract exit-destined traffic, analyze the rest. I have a hard time imagining how the location of the hidden service with the highest number of unique visitors (SR) isn't known to NSA in that scenario. It should pop right off the charts in bold letters in that scenario. Of course, knowing is one thing..choosing to burn the capability by acting on that is different.
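To put rough numbers on the guard+exit odds above, here's a naive back-of-the-envelope model (this is NOT the paper's methodology: it ignores guard pinning and treats every circuit as an independent draw, which real Tor guard behavior changes considerably):

```python
# Naive model: the adversary controls fraction g of guard bandwidth and
# fraction e of exit bandwidth; circuits are treated as independent
# draws, and guard pinning/rotation is ignored (real Tor keeps guards
# for weeks, which changes these numbers considerably).

def per_circuit_compromise(g, e):
    """Probability one circuit has both a bad guard and a bad exit."""
    return g * e

def compromise_after(n_circuits, g, e):
    """Probability at least one of n independent circuits is compromised."""
    return 1 - (1 - per_circuit_compromise(g, e)) ** n_circuits

# e.g. 5% of guard bandwidth, 5% of exit bandwidth,
# ~10 circuits a day for 90 days of regular use:
print(round(compromise_after(900, 0.05, 0.05), 2))  # prints 0.89
```

Even with small bandwidth fractions, the odds climb fast with time, which is the intuition behind the paper's "patient attacker" numbers; the real analysis weights by actual relay bandwidth and models guard rotation.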
-
Yep. In a nutshell, when someone controls the guard node you're using and the exit node that's sending your clearnet traffic to the Internet, they can put two and two together and figure out that your source address is sending the traffic.
So for whatever period you're using guard+exit nodes that an adversary controls (or can passively view), you're deanonymized.
Shouldn't be directly applicable to hidden services because of how you connect to them. The adversary has to be able to locate the hidden service to correlate that traffic.
What's interesting is that at the point where an adversary with a near-perfect network view (i.e. NSA) deanonymizes most traffic to exit nodes, the majority of things they can't deanonymize are probably destined for hidden services. Subtract exit-destined traffic, analyze the rest. I have a hard time imagining how the location of the hidden service with the highest number of unique visitors (SR) isn't known to NSA in that scenario. It should pop right off the charts in bold letters in that scenario. Of course, knowing is one thing..choosing to burn the capability by acting on that is different.
Probably they can't right now, since no single NatSec agency has access to enough guard/exit nodes to run a big enough correlation - think how many are in different countries around the world.
Or maybe they're playing the same gimmick Churchill did in WWII, so the Germans wouldn't know that Enigma had been compromised...
-
This is why I change VPN providers every week, use a hardened Whonix with physical stream isolation, and keep changing location. I never use my home connection for any Tor business, and the connections I do use would be incredibly hard to trace, assuming of course they control any nodes I use, since I maintain a massive node blacklist to prevent snoopers. Furthermore, I have recently been toying with the idea of Tor over Tor, much like how the SilkRoad marketplace itself runs. If I'm correct, the DDoS attack didn't hit the SilkRoad servers but rather the RV points in the network, so doing Tor over Tor means you don't reveal the introduction points or RV points so easily. The danger, though, is your exit node being the same as your entry node, but you can modify your torrc file to counter this.
-
:)
-
I just read the paper, which you can get here: http://www.ohmygodel.com/publications/usersrouted-ccs13.pdf
It builds on the work of several other papers that explored the issue of multiple relays in the same autonomous system, or in different autonomous systems run by the same organization. For those who don't know, the internet is composed of thousands of subnetworks that are controlled by different organizations (corporations, universities, governments, etc). For example, your ISP runs an autonomous system. The web site you want to visit is at a hosting provider in a data center that runs an autonomous system (or perhaps several). Large organizations like Amazon, with its cloud hosting service (AWS), run autonomous systems in many locations around the world.
OVH is a dedicated server provider that has data centers in multiple locations, and many Tor relays run on OVH servers. So a few hosting providers like OVH and Hetzner are potentially powerful adversaries. If your entry guard is an OVH server in France and your exit node is an OVH server in Montreal, then OVH can watch both ends of your connection and see who you are and what site you are visiting, thus rendering Tor useless.
The problem is made worse by internet exchange points, which are places where autonomous systems exchange traffic. Someone who controls an IXP can watch the traffic of two or more autonomous systems simultaneously as it traverses those networks. Western intelligence agencies like the NSA and GCHQ are almost certainly tapping many of these IXPs.
This is why we need to diversify the Tor network, and why I suggested running relays in South America and Asia in my relay guide. If you look at a map of Tor relays by geolocation, you'll see that way too many are in North America and Europe. Way too many of the high bandwidth relays are in a handful of autonomous systems, which is especially bad since circuit path building is weighted by relay bandwidth. At any one time, 20% of Tor circuits will begin and end in autonomous systems that are controlled by the same organization!
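As a toy illustration of why bandwidth weighting concentrates that risk (all numbers below are made up, not real network figures): the chance that both the entry and exit position land in ASes run by the same operator is the sum, over operators, of their guard-bandwidth share times their exit-bandwidth share.

```python
# Hypothetical operators with made-up guard and exit bandwidth totals.
guard_bw = {"op_a": 400, "op_b": 100, "op_c": 100}
exit_bw  = {"op_a": 300, "op_b": 150, "op_c": 50}

g_total = sum(guard_bw.values())
e_total = sum(exit_bw.values())

# P(guard and exit both belong to the same operator), under
# bandwidth-weighted selection at each position:
p_same_operator = sum(
    (guard_bw[op] / g_total) * (exit_bw[op] / e_total)
    for op in guard_bw
)
print(round(p_same_operator, 3))  # → 0.467
```

One big operator at both ends dominates the sum, which is exactly the "handful of autonomous systems" problem described above.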
According to the simulations in the paper, 80% of Tor users can be deanonymized within 6 months through normal Tor use, and without the adversary doing anything special, just watching the networks they already have control over. Interestingly, some of the suggestions they make at the end of the paper to improve security are the same thing we've been saying here for months. Entry guards are the weakest point, so you can increase your security by reducing the number of entry guards (from 3 to 2 or 1) and increasing the entry guard rotation period.
The number of entry guards can be changed in torrc with:
NumEntryGuards NUM
You can manually specify which entry guards you want to use, for example, selecting entry guards that are in autonomous systems where no exit nodes exist. Do that with:
EntryNodes node,node,node
Where "node" can be a relay nickname, identity key fingerprint, or country code (e.g., {us}, {de}).
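Putting those options together, a torrc fragment might look like the following (the country code and addresses are placeholders, not recommendations):

```
## Example torrc fragment combining the settings above.

# Use a single entry guard instead of the default three
NumEntryGuards 1

# Restrict the first hop to relays in a particular country
EntryNodes {de}

## Or, instead of EntryNodes, use a manually chosen bridge as a
## persistent entry point (placeholder address shown):
# UseBridges 1
# Bridge 198.51.100.5:443
```

Note that EntryNodes and bridges are alternatives, not complements; pick one approach for your first hop.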
You can look up information about relays and ASes on the Tor Compass web site: https://compass.torproject.org
There is no torrc option to increase the entry guard rotation period. You have to modify the source code and compile a custom version of Tor, which I've explained elsewhere on the forum, but it's not a viable option for most people.
A better option may be to use bridges. Since you set them manually, they act like persistent entry guards. There is no rotation period: you keep them for as long as you want, as long as they are up and set in your torrc. Also, since they are theoretically private, the adversary may not know that they are Tor entry points, especially if they are using the obfsproxy protocol to defend against DPI.
Keep in mind that these stats are based on Tor users visiting clearnet sites. It is more difficult to deanonymize hidden service users because the adversary must control specific relays, such as the hidden service's entry guard or service directory, rather than just any exit node. However, it is still possible to attack hidden service users.
There are proposals to make Tor "AS-aware", meaning that it would come bundled with information about ASes and who controls them, and it would avoid building circuits through ASes that are controlled by the same organizations. None of these proposals have been implemented yet (right now Tor only avoids building circuits through relays that are in the same /16 subnet, I believe). So it's up to us to defend ourselves against this threat. Probably the safest thing you could do is grab the list of exit nodes, figure out which AS numbers they are in and who controls them, then find bridges that are in ASes not controlled by those organizations.
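That last suggestion could be sketched like this. The ASN lookup here is a stub with fake data; a real version would resolve each IP to an AS number through a whois/BGP data service, and all addresses below are placeholders:

```python
# Hypothetical IP -> ASN table standing in for a real whois/BGP lookup.
HYPOTHETICAL_ASN_TABLE = {
    "198.51.100.7": 64500,
    "203.0.113.42": 64501,
    "198.51.100.9": 64500,
}

def asn_of(ip):
    """Stub: look up the AS number announcing this IP (fake data here)."""
    return HYPOTHETICAL_ASN_TABLE.get(ip)

def exit_asns(exit_ips):
    """Set of AS numbers that host at least one exit node."""
    return {asn_of(ip) for ip in exit_ips if asn_of(ip) is not None}

def safe_bridges(bridge_ips, exit_ips):
    """Bridges whose AS hosts no exit node -- the filter suggested above."""
    bad = exit_asns(exit_ips)
    return [ip for ip in bridge_ips if asn_of(ip) not in bad]

exits = ["198.51.100.7", "203.0.113.42"]
bridges = ["198.51.100.9", "192.0.2.1"]
print(safe_bridges(bridges, exits))  # → ['192.0.2.1']
```

The first bridge is rejected because it shares an AS with an exit; the second passes because its AS hosts no known exit.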
-
Shouldn't be directly applicable to hidden services because of how you connect to them.
So it doesn't apply directly. It applies indirectly. Meaning, it may take longer, but it's just as doable.
Also, there must be some particular characteristics, traffic-wise, related to the servers running hidden services, since a lot of clients are connecting(pointing) to them.
It may be that SR is safer than sites that serve relatively big amounts of data, but who knows.
-
So it doesn't apply directly. It applies indirectly. Meaning, it may take longer, but it's just as doable.
Also, there must be some particular characteristics, traffic-wise, related to the servers running hidden services, since a lot of clients are connecting(pointing) to them.
It may be that SR is safer than sites that serve relatively big amounts of data, but who knows.
Hidden services have to be deanonymized before an attacker can correlate traffic to them. With exit nodes, the attacker controls that end. They'd have to either monitor the hidden service's Introduction Points, or have a good enough view to see all six hops of the client<->hidden service conversation.
It's not the amount of data per session that would worry me.. it's the fact that there are so few hidden services with large amounts of unique users connecting to them. So any marginal traffic analysis attack can get closer to SR than to some obscure hidden POP3 server.
Here's one that I've been wondering about today:
Guard+Exit Correlation: You->TorClient->EvilMonitoredGuardNode->Middle->EvilMonitoredExitNode = deanonymize the user. This is the usual model.
If your attacker has a great view of clearnet Internet traffic, I don't think they need your exit node. Pretend that NSA can grab 75% of US traffic metadata at will (especially traffic transiting the borders). They have an especially good view of clearnet HTTP/HTTPS server traffic since so many servers are hosted in the US.
Now what about this:
You->TorClient->EvilMonitoredGuardNode->Middle->ExitTheyCantSee->ClearnetHops->(NSA Tap)->ClearnetHops->www.someclearsite.com = deanonymize the user?
I'm thinking yes. And I'm also thinking that there are a finite number of ExitNodes, and that should be a relatively small filter to rip all metadata out for. In that scenario, they don't need to monitor the ExitNode's traffic, just watch all traffic to/from it that they can see. It's a subset of the total traffic leaving ExitNode.
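That "small filter" really is just set membership against the public exit list. A toy sketch (all addresses are placeholders):

```python
# Publicly known exit node IPs -- a small set to filter against.
exit_ips = {"198.51.100.7", "203.0.113.42"}

# (src, dst, bytes) flow records, as a passive tap might summarize them.
flows = [
    ("198.51.100.7", "93.184.216.34", 5000),   # exit -> clearnet site
    ("10.0.0.5", "93.184.216.34", 1200),       # unrelated traffic
    ("93.184.216.34", "203.0.113.42", 800),    # clearnet site -> exit
]

# Keep only flows touching a known exit: cheap, purely passive filtering.
exit_flows = [f for f in flows if f[0] in exit_ips or f[1] in exit_ips]
print(len(exit_flows))  # → 2
```

No access to the exit node itself is needed; any tap that sees the flow records can isolate the exit-destined subset this way.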
But in any of these exit-based correlation attacks, the net result is that, for the periods where traffic correlation is possible, a determined adversary can see where you're going on the Internet. Not to be confused with when you're not using Tor, when you're fucked 100% of the time.
-
You->TorClient->EvilMonitoredGuardNode->Middle->ExitTheyCantSee->ClearnetHops->(NSA Tap)->ClearnetHops->www.someclearsite.com = deanonymize the user?
That's why you're theoretically better off using entry guards and exit nodes in the US if you are in the US, and assuming they do more surveillance at the borders, although they are likely tapping IXPs inside the US too. Unfortunately, the internet is not a spider web with multiple links between destinations. Most traffic congregates at choke points like IXPs and backbone fiber, making it relatively easy to snoop on a lot of it, just as everyone traveling long distance is likely to drive on an interstate.
-
This attack still applies to hidden services. Connections to hidden services are not magically protected from timing correlation attacks.
The writing is on the wall for VPNs, Tor, I2P and proxies of all sorts. They are all dying technologies at best and dead at worst.
-
This attack still applies to hidden services. Connections to hidden services are not magically protected from timing correlation attacks.
No, but I imagine you'd have to deanonymize them first before you had something to correlate to.
The writing is on the wall for VPNs, Tor, I2P and proxies of all sorts. They are all dying technologies at best and dead at worst.
In terms of them providing any level of effective anonymity against a determined nation-state adversary? Yeah, they're dead or dying. And it's not fixable through a new setting or obfuscation method.
Tomorrow's anonymous network is going to have to look a lot more like Freenet. Your request slowly moves its way from node to node until the content slowly works its way back to you. Hopefully mixed in with everyone else's requests and content. Traffic analysis is more difficult because it's both high-latency, and because nodes are actually caching content.
What do you give up in that scenario? Immediate gratification, I guess. You will wait longer for content delivery. But most things (marketplaces, message-based communities, email gateways) are perfectly doable. You give up things requiring true dynamic content on the fly, but the upside is that you get a very multicast-like benefit to content distribution.
-
You can also make traffic analysis of your Tor-through-VPN connection harder by downloading torrents through the VPN while using Tor. (Don't torrent through Tor, however, or you may make traffic analysis easier.)
-
This attack still applies to hidden services. Connections to hidden services are not magically protected from timing correlation attacks.
No, but I imagine you'd have to deanonymize them first before you had something to correlate to.
The writing is on the wall for VPNs, Tor, I2P and proxies of all sorts. They are all dying technologies at best and dead at worst.
In terms of them providing any level of effective anonymity against a determined nation-state adversary? Yeah, they're dead or dying. And it's not fixable through a new setting or obfuscation method.
Tomorrow's anonymous network is going to have to look a lot more like Freenet. Your request slowly moves its way from node to node until the content slowly works its way back to you. Hopefully mixed in with everyone else's requests and content. Traffic analysis is more difficult because it's both high-latency, and because nodes are actually caching content.
What do you give up in that scenario? Immediate gratification, I guess. You will wait longer for content delivery. But most things (marketplaces, message-based communities, email gateways) are perfectly doable. You give up things requiring true dynamic content on the fly, but the upside is that you get a very multicast-like benefit to content distribution.
The biggest upside is that you actually get anonymity. In some cases it can be partially computationally based anonymity as well, instead of probabilistic route-selection-based anonymity. So more like encryption than Tor: you are anonymous until the attacker solves this hard math problem, instead of you are anonymous until the attacker watches these two locations. The reason Tor, VPNs, proxies and I2P are dead/dying is that it turns out it actually isn't that hard for an attacker to watch two arbitrary locations, and some attackers like the NSA solve the problem by watching all locations.
-
But there are two sorts of attack to keep in mind. Take PIR for example. PIR allows a client to download an item from a server without the server or any third party being able to determine the item obtained. At face value this means receive anonymity is automatically perfect with PIR. But these systems are still weak to traffic analysis.
Let's say the network consists of 100 people. One day Alice sends Bob 500 messages. Alice can watch the entire network externally. She notes that only one node obtained 500 messages in a given cycle, while all other nodes obtained 1 or fewer messages. Now Alice cannot break the PIR to determine who Bob is, and even the server doesn't know who downloaded the messages sent to Bob. But because only one node downloaded 500 messages, Alice can have a pretty damn good idea of who Bob is.
So even though the PIR protects from some attacks (hell, I don't even know what to call this class of attacks, cryptographic attacks?), it doesn't inherently protect from traffic analysis. But using PIR-like systems as a base allows us to focus on the remaining traffic analysis issues. Some of them are probably impossible to solve with pseudonymity. DC-nets offer information-theoretically perfect anonymity, but even they can be broken by long-term intersection attacks if the users are pseudonymous and the ideal conditions are not maintained indefinitely.
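The Alice/Bob observation above can be sketched in a few lines (a toy simulation of the volume side-channel, not of any real PIR protocol):

```python
# 100-person network; everyone fetches 1 message per cycle as cover.
download_counts = {f"node{i}": 1 for i in range(100)}

# Bob (node42 here, chosen arbitrarily) fetches Alice's 500 messages.
# PIR hides *which* items he fetched, but not *how much* he downloaded.
download_counts["node42"] = 500

# An external observer who only counts per-node download volume:
suspects = [n for n, c in download_counts.items() if c > 1]
print(suspects)  # → ['node42']
```

The cryptography is never broken; the anomalous volume alone singles Bob out, which is why PIR-style systems still need padding or batching to resist traffic analysis.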
-
I read this paper as well and I wonder how valid it actually is.
First of all, it's based on models of user activity for web browsing, IRC, BitTorrent and some collaborative text editor hardly anyone uses. Is this really an accurate representation of user behavior? Also, the paper doesn't mention how representative the Tor traffic conditions they replay are. They could've merely replayed an outlier in terms of congestion and network behavior.
Lastly, after reading the paper it struck me what a better research method would be:
-- Over several months time, pull down the list of Tor relays
-- Map each of those relay IP addresses to the AS they come from
-- Graph the peering, geographic and other relationships among the relays' ASes
-- Find out which IXPs each AS has a presence in and take that into account as well
-- Using many many connections to Tor start creating circuits, noting each time which relays the circuits involve
-- See how many circuits would be likely to be compromised in light of the information about the ASes found above
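The steps above might be skeletonized like this. Every function here is a hypothetical stub; a real version would pull consensus archives, BGP tables and IXP membership data:

```python
# Skeleton of the proposed measurement pipeline (all stubs, toy data).

def fetch_relays():
    """Step 1-2 stub: relay list with each relay mapped to its AS."""
    return [{"ip": "198.51.100.7", "asn": 64500}]

def build_circuits(relays, n):
    """Step 5 stub: sample n circuits the way a client would."""
    # Degenerate with one relay; a real version samples guard/middle/exit.
    return [(relays[0], relays[0], relays[0]) for _ in range(n)]

def circuit_compromised(circuit, adversary_asns):
    """Step 6: compromised if guard AND exit sit in adversary-visible ASes."""
    guard, _middle, exit_ = circuit
    return guard["asn"] in adversary_asns and exit_["asn"] in adversary_asns

relays = fetch_relays()
circuits = build_circuits(relays, 10)
rate = sum(circuit_compromised(c, {64500}) for c in circuits) / len(circuits)
print(rate)  # 1.0 with this degenerate one-relay toy data
```

The interesting work is entirely inside the stubs (AS mapping, IXP presence, realistic path selection); the top-level loop is just this.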
Why don't they do something like this? Maybe this has been done?
What I liked about the paper--but what isn't unique to it--is that it took into account IXPs and ASes, something crucial in trying to get a decent gauge as to how likely a full compromise and control of the Tor network would be by an intelligence agency.
-
The paper is relevant but hardly groundbreaking, as some of the tech experts on SR like astor and kmf have been saying the same thing for years about traffic correlation attacks. However, times and technology are changing, and Tor will have to adapt or it won't be relevant anymore. What those changes are is tough to say at this point. We know what the issues are and the scope of the attack surface, so new protocols will have to be implemented.
-
subbed