Silk Road forums
Discussion => Security => Topic started by: astor on August 14, 2013, 03:06 am
-
In the wake of the Freedom Hosting exploit, I think we should reevaluate our threat model and update our security to better protect ourselves against the real threats that we face. So I wrote this guide in order to spark a conversation. It is by no means comprehensive. I only focus on technical security. Perhaps others can address shipping and financial security. I welcome feedback and would like these ideas to be critiqued and expanded.
As I was thinking about writing this guide, I decided to take a step back and ask a basic question: what are our goals? I've come up with two basic goals that we want to achieve with our technical security.
1. Avoid being identified.
2. Minimize the damage when we are identified.
You can think of these as our _guiding security principles_. If you have a technical security question, you may be able to arrive at an answer by asking yourself these questions:
1. Does using this technology increase or decrease the chances that I will be identified?
2. Does using this technology increase or decrease the damage (e.g., the evidence that can be used against me) when I am identified?
Obviously, you will need to understand the underlying technology to answer these questions.
The rest of this guide explains the broad technological features that decrease the chances we are identified and that minimize the damage when we are identified. Towards the end I list specific technologies and evaluate them based on these features.
First, let me list the broad features that I have come up with, then I will explain them.
1. Simplicity
2. Trustworthiness
3. Minimal execution of untrusted code
4. Isolation
5. Encryption
To some extent, we've been focusing on the wrong things. I've predominantly been concerned with network layer attacks, or "attacks on the Tor network", but it seems clear to me now that application layer attacks are far more likely to identify us. The applications that we run over Tor are a much bigger attack surface than Tor itself. We can minimize our chances of being identified by securing the applications that we run over Tor. This observation informs the first four features that we desire.
===Simplicity===
Short of not using computers at all, we can minimize threats against us by simplifying the technological tools that we use. A smaller code base is less likely to have bugs, including deanonymizing vulnerabilities. A simpler application is less likely to behave in unexpected and unwanted ways.
As an example, when the Tor Project evaluated the traces left behind by the browser bundle, they found 4 traces on Debian Squeeze, which uses the Gnome 2 desktop environment, and 25 traces on Windows 7. It's clear that Windows 7 is more complex and behaves in more unexpected ways than Gnome 2. Through its complexity alone, Windows 7 increases your attack surface, exposing you to more potential threats. (Although there are other ways that Windows 7 makes you more vulnerable, too.) The traces left behind on Gnome 2 are easier to prevent than the traces left behind on Windows 7, so at least with regard to this specific threat, Gnome 2 is desirable over Windows 7.
So, when evaluating a new technological tool for simplicity, ask yourself these questions:
Is it more or less complex than the tool I'm currently using?
Does it perform more or fewer (unnecessary) functions than the tool I'm currently using?
===Trustworthiness===
We should favor technologies that are built by professionals or people with many years of experience rather than newbs. A glaring example of this is CryptoCat, which was developed by a well-intentioned hobbyist programmer, and has suffered severe criticism because of the many vulnerabilities that have been discovered.
We should favor technologies that are open source, have a large user base, and a long history of use, because they will be more thoroughly reviewed.
When evaluating a new technological tool for trustworthiness, ask yourself these questions:
Who wrote or built this tool?
How much experience do they have?
Is it open source, and how big is the community of users, reviewers, and contributors?
===Minimal Execution of Untrusted Code===
The first two features assume the code is trusted but has potential unwanted problems. This feature assumes that as part of our routine activities, we may have to run arbitrary untrusted code. This is code that we can't evaluate in advance. The main place this happens is in the browser, through plug-ins and scripts.
You should completely avoid running untrusted code, if possible. Ask yourself these questions:
Are the features that it provides absolutely necessary?
Are there alternatives that provide these features without requiring plug-ins or scripts?
===Isolation===
Isolation is the separation of technological components with barriers. It minimizes the damage incurred by exploits, so if one component is exploited, other components are still protected. It may be your last line of defense against application layer exploits.
The two types of isolation are physical (or hardware based) and virtual (or software based). Physical isolation is more secure than virtual isolation, because software based barriers can themselves be exploited by malicious code. We should prefer physical isolation over virtual isolation over no isolation.
When evaluating virtual isolation tools, ask yourself the same questions about simplicity and trustworthiness. Does this virtualization technology perform unnecessary functions (like providing a shared clipboard)? How long has it been in development, and how thoroughly has it been reviewed? How many exploits have been found?
===Encryption===
Encryption is one of two defenses we have to minimize the damage when we are identified. The more encryption you use, the better off you are. In an ideal world, all of your storage media would be encrypted, along with every email and PM that you send. The reason for this is because, when some emails are encrypted but others are not, an attacker can easily identify the interesting emails. He can learn who the interesting parties are that you communicate with because those will be the ones you send encrypted emails to (this is called metadata leakage). Interesting messages are lost in the noise when everything is encrypted.
The same goes for storage media encryption. If you store an encrypted file on an unencrypted hard drive, an adversary can trivially determine that all the good stuff is in that small file. But when you use full disk encryption, you have more plausible deniability as to whether the drive contains data that would be interesting to that adversary, because there are more reasons to encrypt an entire hard drive than a single file. Also, an adversary who bypasses your encryption would have to comb through more data to find the stuff that is interesting to him.
Unfortunately, encrypting everything incurs a convenience cost that the vast majority of people can't bear, so at a minimum, sensitive information should be encrypted.
On a related note, the other defense against damage is secure data erasure, but that takes time that you may not have. Encryption is preemptive secure data erasure. It's easier to destroy encrypted data, because you only have to destroy the encryption key to prevent an adversary from accessing the data.
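As a toy illustration of why key destruction beats bulk erasure: overwriting and deleting a small key file takes a fraction of a second, while wiping a whole drive can take hours. This is a minimal Python sketch (the filename is made up, and note the caveat in the comment — on journaling filesystems and SSDs with wear leveling, overwrites may never reach the original physical blocks):

```python
import os

def shred_file(path, passes=3):
    """Overwrite a file with random bytes, then unlink it.

    Illustrative only: on journaling filesystems and SSDs with
    wear leveling, these overwrites may not hit the original blocks.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)

# Destroying a 32-byte key renders everything encrypted under it unreadable,
# which is far faster than securely erasing the data itself.
with open("session.key", "wb") as f:
    f.write(os.urandom(32))
shred_file("session.key")
print(os.path.exists("session.key"))  # False
```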
Finally, I'd like to add a related non-technical feature.
===Safe Behavior===
In some cases, the technology we use is only as safe as our behavior. Encryption is useless if your password is "password". Tor is useless if you tell someone your name. It may surprise you how little an adversary needs to know about you in order to uniquely identify you. Here are some basic rules to follow:
Don't tell anyone your name. (obv)
Don't describe your appearance, or the appearance of any major possessions (car, house, etc.).
Don't describe your family and friends.
Don't tell anyone your location beyond a broad geographical area.
Don't tell people where you will be traveling in advance (this includes festivals!).
Don't reveal specific times and places where you lived or visited in the past.
Don't discuss specific arrests, detentions, discharges, etc.
Don't talk about your school, job, military service, or any organizations with official memberships.
Don't talk about hospital visits.
In general, don't talk about anything that links you to an official record of your identity.
===A List of Somewhat Secure Setups for Silk Road Users===
I should begin by pointing out that the features outlined above are not equally important. Physical isolation is probably the most useful and can protect you even when you run complex and untrusted code. In each of the setups below, I assume a fully updated browser / TBB with scripts and plug-ins disabled. Also, the term "membership concealment" means that someone watching your internet connection doesn't know you are using Tor. This is especially important for vendors. You can use bridges, but I've included extrajurisdictional VPNs as an added layer of security.
With that in mind, here is a descending list of secure setups for SR users.
Starting off, I present to you the most secure setup!
#1
A router with a VPN + an anonymizing middle box running Tor + a computer running Qubes OS.
Advantages: physical isolation of Tor from applications, virtual isolation of applications from each other, encryption as needed, membership concealment against local observers with VPN
Disadvantages: Qubes OS has a small user base and is not well tested, as far as I know.
#2
Anon middle box (or router with Tor) + Qubes OS
Advantages: physical isolation of Tor from applications, virtual isolation of applications from each other, encryption as needed
Disadvantages: Qubes OS has a small user base and is not well tested, no membership concealment
#3
VPN router + anon middle box + Linux OS
Advantages: physical isolation of Tor from applications, full disk encryption, well tested code base if it's a major distro like Ubuntu or Debian
Disadvantages: no virtual isolation of applications from each other
#4
Anon middle box (or router with Tor) + Linux OS
Advantages: physical isolation of Tor from applications, full disk encryption, well tested code base
Disadvantages: no virtual isolation of applications from each other, no membership concealment
#5
Qubes OS by itself.
Advantages: virtual isolation of Tor from applications, virtual isolation of applications from each other, encryption as needed, membership concealment (possible? VPN may be run in VM)
Disadvantages: no physical isolation, not well tested
#6
Whonix on Linux host.
Advantages: virtual isolation of Tor from applications, full disk encryption (possible), membership concealment (possible, VPN can be run on host)
Disadvantages: no physical isolation, no virtual isolation of applications from each other, not well tested
#7
Tails
Advantages: encryption and leaves no trace behind, system level exploits are erased after reboot, relatively well tested
Disadvantages: no physical isolation, no virtual isolation, no membership concealment, no persistent entry guards! (but can manually set bridges)
#8
Whonix on Windows host.
Advantages: virtual isolation, encryption (possible), membership concealment (possible)
Disadvantages: no physical isolation, no virtual isolation of applications from each other, not well tested, VMs are exposed to Windows malware!
#9
Linux OS
Advantages: full disk encryption (possible), membership concealment (possible)
Disadvantages: no physical isolation, no virtual isolation
#10
Windows OS
Advantages: full disk encryption (possible), membership concealment (possible)
Disadvantages: no physical isolation, no virtual isolation, the biggest target of malware and exploits!
Assuming there is general agreement about the order of this list, our goal is to configure our personal setups to be as high up on the list as possible.
Thanks for your attention, and again I welcome comments and criticism.
-
Bravo astor. Bravo.
/slowclap
-
Thank you for this astor. Even for people that know this already it's a good reminder. One set-up you didn't mention is using Whonix on windows with physical isolation. I know windows is not ideal from a security set up but what do you think of using physical isolation for windows using Whonix? Say using a clean laptop as the gateway and for that purpose only then using your main os as the host? I've gone through all the documentation at Whonix and it says it's pretty secure. Well as secure as you can get on windows I suppose.
The other thing that you touched upon is Qubes. Ideally it looks like a great security methodology, but as you said, it being new and untested, it's hard to make a real solid evaluation of it. Many exploits are produced when a combination of factors come into play. Combining different software or hardware can produce weaknesses and vulnerabilities in your OS.
What advice would you have for a vendor that wants a secure set up at the least? Disregarding Qubes as well..thanks!
-
Thank you for this astor. Even for people that know this already it's a good reminder. One set-up you didn't mention is using Whonix on windows with physical isolation. I know windows is not ideal from a security set up but what do you think of using physical isolation for windows using Whonix? Say using a clean laptop as the gateway and for that purpose only then using your main os as the host? I've gone through all the documentation at Whonix and it says it's pretty secure. Well as secure as you can get on windows I suppose.
I don't understand. The Gateway is on an anon middle box (the laptop), and Windows is the workstation? So it's not really Whonix, it's just Windows + an anon middle box.
Or do you mean, the Gateway is on an anon middle box, and run the Whonix Workstation (Linux) in a VM on Windows?
The other thing that you touched upon is Qubes. Ideally it looks like a great security methodology, but as you said, it being new and untested, it's hard to make a real solid evaluation of it. Many exploits are produced when a combination of factors come into play. Combining different software or hardware can produce weaknesses and vulnerabilities in your OS.
You're right, and Qubes violates the principle of software simplicity. Excellent point. This is why we need to talk about it. :)
What advice would you have for a vendor that wants a secure set up at the least? Disregarding Qubes as well..thanks!
Disregarding Qubes, I would tell them to run option #3, router with a VPN in another country + anon middle box running Tor + a popular Linux OS. If they can't afford the hardware, I would tell them to run Whonix on a Linux host.
-
I think another thing also that people forget is not to download torrents!
You can secure the shit out of your system, but if you have a pirated copy of Adobe Photoshop CS6 on your system, or you just downloaded the full seasons of Breaking Bad, well, good job, you just opened a new port on your computer, and who knows what code is in the software you downloaded. I read a post on whether viruses and malware can be embedded in video files, and the consensus was they can, but it is unlikely. A more common trick is to disguise the exploit with a .avi or .mp4 extension, so for example evilcode.exe could end up as:
evilcode.avi.exe
I also think sometimes people forget the basic things about computer security which is the foundation really but then again I assume that most SR users are at an above average skill level when it comes to computing.
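Since extensions are trivial to fake, one basic sanity check is to look at a file's leading "magic" bytes instead of its name. A minimal Python sketch of the idea (the signature table is deliberately tiny and the filenames are made up; a real tool would check many more formats):

```python
def sniff_type(path):
    """Guess a file's real type from its first bytes, ignoring the extension."""
    with open(path, "rb") as f:
        head = f.read(12)
    if head.startswith(b"MZ"):                        # DOS/Windows executable
        return "windows-executable"
    if head.startswith(b"RIFF") and head[8:12] == b"AVI ":
        return "avi-video"
    if head[4:8] == b"ftyp":                          # MP4 container
        return "mp4-video"
    return "unknown"

# An executable renamed to look like a video is still detected:
with open("episode.avi", "wb") as f:
    f.write(b"MZ\x90\x00" + b"\x00" * 60)             # PE files begin with "MZ"
print(sniff_type("episode.avi"))  # windows-executable
```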
-
Wow, a great, fairly definitive overview! I think everyone should just point to that post as the answer for "Should I use Tails or Whonix or Windows 95?" questions.
On the Qubes OS front, in the "plus" column for Qubes is that it's fundamentally relying on the Xen hypervisor to enforce isolation, and that's one of the more mature, well-understood VM technologies available. And the core developer of Qubes OS has as good of a pedigree for VM-related security as anyone on earth.
One observation from some brief use of Qubes.. new users should make sure they understand how it works at a basic level. Your basic OS/windowing system (the dom0, in Xen terms) that boots up isn't actually on the network. And you shouldn't ever put it on the network, unless you're applying updates and you're sure you know what you're doing. Which is really the strong point of Qubes.. it's like Whonix isolation on steroids. And if I had to pick between trusting VirtualBox or trusting Xen (particularly as configured by Joanna Rutkowska and company), I'd pick Xen and Qubes.
-
I don't understand. The Gateway is on an anon middle box (the laptop), and Windows is the workstation? So it's not really Whonix, it's just Windows + an anon middle box.
Ok, I might be a bit confused. If I'm using windows as the workstation, directing everything through the gateway (laptop), then is that basically useless, or am I achieving anything? I know I gotta break out of windows, it's just hard when I've been using it for work for so many years.
Since we're at it, let me ask you this. I tested out Whonix for a bit. What I did was encrypt my system with Truecrypt and created a hidden OS. I installed VirtualBox and ran Whonix there in the hidden OS, so as not to leave any traces of Whonix on my computer. Does that setup do anything for me?
-
What a wealth of information. Astor outdoes himself (as usual) :)
I think another thing also that people forget is not to download torrents!
You can secure the shit out of your system, but if you have a pirated copy of Adobe Photoshop CS6 on your system, or you just downloaded the full seasons of Breaking Bad, well, good job, you just opened a new port on your computer, and who knows what code is in the software you downloaded. I read a post on whether viruses and malware can be embedded in video files, and the consensus was they can, but it is unlikely. A more common trick is to disguise the exploit with a .avi or .mp4 extension, so for example evilcode.exe could end up as:
evilcode.avi.exe
I also think sometimes people forget the basic things about computer security which is the foundation really but then again I assume that most SR users are at an above average skill level when it comes to computing.
Yes, this is all true. Have you heard of Kali Linux? It's a hacker's toolbox crammed with programs that can do anything from crack WPA2 Wi-Fi to extract various databases to scan for open ports to perform a metasploit attack.
All you need to do is install an effective firewall, such as Comodo Firewall. This will prevent any remote administration tool or keylogger from functioning (unless you allow the outgoing connection).
-
Wow, a great, fairly definitive overview! I think everyone should just point to that post as the answer for "Should I use Tails or Whonix or Windows 95?" questions.
On the Qubes OS front, in the "plus" column for Qubes is that it's fundamentally relying on the Xen hypervisor to enforce isolation, and that's one of the more mature, well-understood VM technologies available. And the core developer of Qubes OS has as good of a pedigree for VM-related security as anyone on earth.
One observation from some brief use of Qubes.. new users should make sure they understand how it works at a basic level. Your basic OS/windowing system (the dom0, in Xen terms) that boots up isn't actually on the network. And you shouldn't ever put it on the network, unless you're applying updates and you're sure you know what you're doing. Which is really the strong point of Qubes.. it's like Whonix isolation on steroids. And if I had to pick between trusting VirtualBox or trusting Xen (particularly as configured by Joanna Rutkowska and company), I'd pick Xen and Qubes.
Thanks, this is great info. Yeah, when I was reading the Qubes web site and their blog, I got the sense that the devs knew what they were doing, which is a plus in Qubes' favor, but I wasn't sure how secure their configuration is, and the testing community seems kind of small.
Ok, I might be a bit confused. If I'm using windows as the workstation, directing everything through the gateway (laptop), then is that basically useless, or am I achieving anything? I know I gotta break out of windows, it's just hard when I've been using it for work for so many years.
Ok, I guess you could call that Whonix with physical isolation and a Windows Workstation. It's the equivalent of Windows with an anon middle box. That wasn't one of the options I listed above, primarily because I consider Windows insecure, since it's the biggest target of malware by an order of magnitude over OS X and by two or three orders of magnitude over Linux, and because the vast majority of Windows installs are linked to people's real identities (the licenses are linked to the purchases). So you can still leak your identity even though the connection goes over Tor whenever there is a system update. I think even the default Whonix Workstation + Gateway on a Windows host is safer than that.
Since we're at it, let me ask you this. I tested out Whonix for a bit. What I did was encrypt my system with Truecrypt and created a hidden OS. I installed VirtualBox and ran Whonix there in the hidden OS, so as not to leave any traces of Whonix on my computer. Does that setup do anything for me?
That's good, but the VirtualBox configuration files point to the files in the encrypted container, so you are leaking their existence. You should run the portable version of VirtualBox and store it in the encrypted volume too.
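For anyone who wants to check what their own setup leaks, here is a hedged Python sketch that lists the machine paths recorded in a VirtualBox machine registry file. It assumes the `<MachineEntry src="..."/>` layout that recent VirtualBox releases write to the per-user VirtualBox.xml (on Linux typically ~/.config/VirtualBox/VirtualBox.xml); the sample snippet and paths below are invented, so verify against your actual config file:

```python
import xml.etree.ElementTree as ET

def registered_machine_paths(config_xml):
    """Return the .vbox file paths listed in a VirtualBox.xml machine registry.

    Assumes the <MachineEntry src="..."/> layout; check the config file
    your VirtualBox version actually writes before relying on this.
    """
    root = ET.fromstring(config_xml)
    # iter() walks every element, and endswith() ignores the XML namespace prefix
    return [el.get("src") for el in root.iter() if el.tag.endswith("MachineEntry")]

# Example with a made-up config snippet: any path outside your encrypted
# volume here is a trace an adversary can find.
sample = ('<VirtualBox xmlns="http://www.virtualbox.org/"><Global><MachineRegistry>'
          '<MachineEntry uuid="{0}" src="/media/crypt/VMs/Whonix.vbox"/>'
          '</MachineRegistry></Global></VirtualBox>')
print(registered_machine_paths(sample))  # ['/media/crypt/VMs/Whonix.vbox']
```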
-
Ya, I don't trust any commercial firewalls or anti-viruses. A well made rootkit will totally evade them, and it's not something you would ever know about. It's funny, because 10 years ago, people made malware more for show, and they tried to make it known that they infected your pc. Nowadays it's the complete opposite. With a rootkit, once it's on your pc, it basically changes the way your computer operates by inserting jumps and interrupts at crucial stages of system calls and having system code replaced with external code. This seems to affect windows users more, just because of the complexity of the system. I hate to use the word complex when it comes to windows, because it's not accurate. It's a very disorganized and messy system with countless system calls and hundreds of registry files, which is a problem, as it's possible for external sources to add their own registry entries or change your existing ones. I realized this one day when I had to dig deep into my PC. Reminded me of the title of a movie - What Lies Beneath. lol..
-
Yes, this is all true. Have you heard of Kali Linux? It's a hacker's toolbox crammed with programs that can do anything from crack WPA2 Wi-Fi to extract various databases to scan for open ports to perform a metasploit attack.
Just an FYI. Kali is meant for pentesting. It's fundamentally an attack platform. And it's a nice one.
But it's not a desktop OS. You always run as root, and so do all tasks you're running. You give up the vast majority of the controls and protections that you get from a UNIX environment in Kali. It's literally a single-user root environment. It's a fair trade, but just make sure you know that you're making that trade.
-
Can we trust truecrypt hidden partitions in which our "deeds" and Tor bundles are kept, assuming we erase all logs of access to those directories?
-
Can we trust truecrypt hidden partitions in which our "deeds" and Tor bundles are kept, assuming we erase all logs of access to those directories?
That's the hard part. Windows is a complex OS. Shit could be logged and cached all over the place. Look how many traces the browser bundle leaves behind, and it's a portable app. I wouldn't rest my security on my ability to erase my activities on Windows. Encrypt the entire hard drive if you want to hide your activities.
-
Can we trust truecrypt hidden partitions in which our "deeds" and Tor bundles are kept, assuming we erase all logs of access to those directories?
That's the hard part. Windows is a complex OS. Shit could be logged and cached all over the place. Look how many traces the browser bundle leaves behind, and it's a portable app. I wouldn't rest my security on my ability to erase my activities on Windows. Encrypt the entire hard drive if you want to hide your activities.
I meant on nix, where I can see hidden files and have open source programs. I don't think I would trust Windows with much, aside from the multimedia that linux seems not up to snuff with as of yet.
-
Just encrypt the whole hard drive. It's much easier than trying to erase log files, and much safer in the long run. :)
You can do it with a few clicks at install time on Ubuntu, Debian, the latest version of Linux Mint, along with CentOS, Scientific Linux, and probably Fedora.
-
Subbing.
-
Is it possible in qubes to make a proxy vm locked to just a vpn connection?
-
Is it possible in qubes to make a proxy vm locked to just a vpn connection?
It already provides a Tor VM. I imagine it's possible to spin up another VM that just runs OpenVPN, so you route all traffic through application-specific domain VMs -> Tor VM -> VPN VM -> internet.
-
astor, you should consider teaching a course on this stuff, framed a different way, like cyber safety or something, or how to prevent being tracked by marketers. it's all equal.
a lot of these tips are stuff that i see so many people today never once would think of. consider the person who falls for the 'YOUR COMPUTER HAS BEEN INFECTED WITH MALWARE. CLICK HERE TO REMOVE IT.' pop-ups. it's not that these people are complete idiots -- though they may be -- it's just that they have no framework for understanding what happens on a computer.
apart from being great information on how to remain anonymous, this is essentially information that the next generation should be equipped with to protect themselves from any interests that aim to invade their privacy without their permission or knowledge.
-
Also, SandboxIE is an excellent resource for application isolation in Windows. Use it! Here is the advanced topic discussion which covers what it does and can't do, and also explains what Windows does, where it stores data (even from apps that are sandboxed) and how to deal with it.
http://www.sandboxie.com/index.php?PrivacyConcerns
(there are a number of links in the article not shown below so it would be best to read it from the source, only pasting here for potential points of discussion).
-----
Privacy Concerns
This is an advanced topic, which explains that even after running a program under Sandboxie, your computer may still record which programs were executed or what they did. It is important to emphasize that this is not a security breach as it will never allow sandboxed programs to infect or otherwise abuse your computer. However, this may be interesting reading for those concerned with the privacy aspects of using Sandboxie.
Overview
The guiding principle of Sandboxie is to isolate and contain any actions taken by programs that Sandboxie supervises, for the purpose of keeping your computer and operating system in a clean and healthy state.
Most of the side effects of running a program under Sandboxie are in fact caused by the very program that is running under Sandboxie, and are gone when the sandbox is deleted. For example, a Web browser running under Sandboxie will record your browsing history in the sandbox, and this history will be completely erased when you delete the sandbox.
Thus it is easy to make a small leap of logic from the guiding principle above, and assume that a principle of Sandboxie is to protect your privacy and clean any and all traces caused directly or indirectly by any program running under its supervision. However, this assumption would not be correct.
Sandboxie puts a great deal of effort into containing the actions taken by the programs it supervises; however, Sandboxie makes no effort at all to prevent your own Windows operating system from keeping records of what you do on your computer.
One who makes the incorrect assumption of extreme concern for privacy on the part of Sandboxie might be surprised to find several kinds of traces and logs in Windows that record which programs have been running, even inside the sandbox.
This page will explain the various known mechanisms that record information about the programs you run, either inside or outside the supervision of Sandboxie.
Prefetch and SuperFetch
Prefetch, introduced in Windows XP, and SuperFetch, introduced in Windows Vista, make up the prefetcher component in Windows.
This component is designed to improve application start up time by keeping copies of program files in a location that can be quickly accessed. The copies are kept in a folder called Prefetch that resides within the main Windows folder; typically that is C:\Windows\Prefetch.
Windows may store copies of program files in this Prefetch folder even when the programs were executed under Sandboxie.
Prefetch behavior can be reduced to caching only programs used during the boot sequence, or to not caching anything at all. Follow these links for more information:
http://www.theeldergeek.com/prefetch_parameters_-_altering.htm
http://www.howtogeek.com/howto/windows-vista/change-superfetch-to-only-cache-system-boot-files-in-vista/
http://www.howtogeek.com/howto/windows-vista/how-to-disable-superfetch-on-windows-vista/
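For reference, the setting those links describe is the EnablePrefetcher registry value (0 = disabled, 1 = application launch files only, 2 = boot files only, 3 = both). A .reg fragment along these lines should apply the boot-only setting, but verify the exact key and accepted values on your Windows version before importing it:

```
Windows Registry Editor Version 5.00

; 2 = cache boot files only; 0 = disable the prefetcher entirely
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters]
"EnablePrefetcher"=dword:00000002
```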
MUI Cache
Windows Explorer records in the registry the names of programs that are launched directly through it. This includes launching programs through the Start menu, the desktop, the quick launch area, or any folder views. This is true even if the right-click "Run Sandboxed" action is used to launch the program under Sandboxie.
The recorded information is kept in this registry key:
HKEY_CURRENT_USER\Software\Microsoft\Windows\ShellNoRoam\MUICache
If you launch a program through a Sandboxie facility (such as the Sandboxie Start menu), or through a program which is already running under Sandboxie, then this information is kept in the registry inside the sandbox.
There are various third-party registry clearing tools that can erase this information.
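If you'd rather not trust a third-party cleaner, a .reg fragment like the following should delete the key directly (the leading hyphen in .reg syntax removes the named key). Note that on Vista and later the cache usually lives under HKCU\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\MuiCache rather than the ShellNoRoam path named above, so check your registry first:

```
Windows Registry Editor Version 5.00

; The leading hyphen deletes the whole MUICache key and its recorded program names
[-HKEY_CURRENT_USER\Software\Microsoft\Windows\ShellNoRoam\MUICache]
```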
Windows 7 Taskbar
On Windows 7 and later, Windows Explorer stores information associated with icons on the taskbar. This information includes the icon for the program and the command used to launch it. The information is stored in files in the following folder, within the user profile folder.
%Appdata%\Microsoft\Internet Explorer\Quick Launch\User Pinned\ImplicitAppShortcuts
The Sandbox Settings > Applications > Miscellaneous settings page includes the setting "Permit programs to update jump lists in the Windows 7 taskbar". If this setting is enabled, additional files are created in the following folders, within the user profile folder.
%Appdata%\Microsoft\Windows\Recent\CustomDestinations
%Appdata%\Microsoft\Windows\Recent\AutomaticDestinations
Windows Page File
During its normal course of operation, Windows sometimes needs to put away the contents of memory used by one program in order to make room for another program. The memory contents are stored in the Windows page file.
Programs that run under Sandboxie are still running in the same Windows operating system as any other program in the computer, so portions of sandboxed and normal programs may end up sitting side by side in the same page file.
It is possible to configure Windows to clear the contents of the page file at shutdown. More information here and here.
It is possible to configure Windows Vista to encrypt the contents of the page file:
* Run secpol.msc to open the Local Security Policy editor
* Expand the group labeled Public Key Policies
* Right-click the item labeled Encrypting File System and select Properties
* Select Allow to enable Encrypting File System
* Check the box to Enable pagefile encryption.
* Click OK and reboot to put the new setting into effect.
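Both page file protections can also be applied from an elevated command prompt; the fsutil command works on Vista and later, and the registry value enables clearing at shutdown (at the cost of slower shutdowns):

```shell
:: Encrypt the page file (Vista and later)
fsutil behavior set encryptpagingfile 1

:: Clear the page file at every shutdown
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v ClearPageFileAtShutdown /t REG_DWORD /d 1 /f
```

Reboot for either change to take effect.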
Windows Hibernate File
Similar to the Windows Page File, the hibernate file stores a copy of the memory and state of the system before the computer is turned off as part of the hibernate process. Thus the hibernate file may contain bits of memory that were used by a sandboxed program.
System Restore
Restore points are snapshots of the state of the operating system at particular points in time. The System Restore component in Windows XP and later versions of Windows records and restores these snapshots.
Snapshots are recorded in the (typically inaccessible) folder called System Volume Information and may include many types of files found throughout the system, including within the folders of the sandbox.
Thus it is possible that System Restore will create backup copies in its folders for files or programs that exist only in the sandbox.
The System Restore component ignores files and folders in temporary folders, so moving the sandbox to C:\TEMP\SANDBOX instead of the default C:\SANDBOX should cause System Restore to ignore the sandbox when creating a snapshot.
System, Audit and Other Event Logs
Windows sometimes records bits of information about running programs in its various event logs. Typically, very little if any information is logged about a program. However, if security auditing has been enabled for some aspects of the system, Windows will have no trouble logging the details of any actions taken by a program running under Sandboxie.
Windows has an Event Viewer program which can be used to view and delete the event logs. More information here.
Windows System Tray Icons
When a program which is running under Sandboxie asks to place an icon in the system tray area, Sandboxie lets the program place the icon in the real system tray, which is typically located at the bottom right corner of the display.
This has the advantage that interaction with the tray icon of the sandboxed program is as easy as interacting with any other tray icon. However, it also means that Windows will record this icon and its description in the history of all tray icons it has ever displayed.
It is possible to manually clear this history in Windows XP and Windows Vista. There may also be third-party registry clearing tools that can erase this information.
Disk Defragmentation
Disk defragmenter software can be used to organize the contents of the hard disk at the level of data blocks, so that files may be accessed faster by the operating system.
Although this is not a privacy concern, the issue of sandboxed programs being able to defragment the disk has been raised and should be addressed.
Sandboxie isolation occurs at the higher file level rather than the lower level of data blocks. Moving data blocks around on the disk has no impact on the isolation of the sandbox, and cannot be used by a malicious program to somehow "move" its data out of the sandbox.
IP Privacy
Sandboxie isolation and protection occurs entirely within the local computer and is not visible to any other remote computer. Thus accessing the Internet using a sandboxed program looks the same as accessing the Internet using a program that is not running under Sandboxie. In both cases the remote computer identifies the accessing computer by its IP address.
-
When using a VPN with Tor some level of time/size correlation may still be possible, when browsing clearnet websites. That's because you are sending TCP packets of a certain size within a certain timeframe, and they arrive at the clearnet destination within that timeframe and a similar size. So if someone is sniffing the route between your computer and the VPN, and the route between the exit node and the clearnet destination at the same time, they can assume that there is some probability that you belong to a small group of people who possibly connected to the clearnet website within that timeframe.
This can be prevented by using secure remote desktop connections. Such connections basically send data all the time, and the size of the TCP packets between your computer and the remote desktop differ significantly from the TCP packets which arrive at the clearweb destination.
A possible setup would be:
VPN -> safe box with remote desktop -> Tor -> clearnet
VPN into a box in a country which is not a PRISM partner and use the remote desktop (e.g. VNC) of that box, which preferably has Linux or *BSD installed and was rented anonymously. On that box run Tor and firewall the box to only let Tor traffic out. Then the main problem would be to secure your data on that remote machine. I'm not sure how safe VPN encryption is, so you may want to tunnel the VNC connection through SSH for increased security.
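Tunneling the VNC connection over SSH as suggested might look like this; the hostname and ports are placeholders, and it assumes the remote box runs a VNC server on its local display (port 5900):

```shell
# Forward local port 5901 to the VNC server on the remote box;
# -N means no remote shell, just the encrypted tunnel
ssh -N -L 5901:localhost:5900 user@remote-box.example

# In another terminal, point the viewer at the local end of the tunnel
vncviewer localhost:5901
```

This way the VNC traffic never crosses the wire protected only by the VPN or VNC's own (weak) encryption.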
If you only use Tor for hidden services then this is not an issue though, so you can use less paranoid setups.
-
1. Simplicity
2. Trustworthiness
3. Minimal execution of untrusted code
4. Isolation
5. Encryption
I would just like to say that there are generally three broad sorts of security mechanisms when it comes to protecting from hackers. These are isolation, correctness and randomization. I wish I still had the picture from Polyfront showing the various things to protect from, but generally:
Forensic analysts -> They primarily attempt to analyze your computer system, primarily the hard drive, in order to find damning evidence or intelligence for future investigations. Forensics is a broad term and can mean various things when it comes to computers, but this is the traditional role of computer forensics. This sort of forensics is also called dead forensics because it deals with already seized computer equipment. Live forensics, better known as hacking, is the category the FBI attack against users accessing FH sites falls into.
Traffic Analysts / Signals Intelligence -> They primarily gather and analyze communication carrying signals in an attempt to determine who is talking with who, or to trace the origin of a signal. These are the people who would launch a direct attack on Tor, for example carrying out the attack that traces hidden services to their entry guards. They are not generally very concerned with the content of a signal but rather with its meta-characteristics.
Network Analysts -> They are primarily interested in mapping out groups of people and the relationships between them. They could use traffic analysis to do this, or various other techniques.
Communications Intelligence -> Is primarily concerned with finding out what people say to each other. Whereas signals intelligence is interested primarily in the meta-characteristics of communication carrying signals, communications intelligence is primarily interested in the content of communication carrying signals. A communications intelligence attack may be running a server like Tor mail and gathering drug shipment addresses from everybody who doesn't encrypt them. In some cases meta-characteristics of communication signals can be used to determine the communications, in these cases communications intelligence would be interested in the meta-characteristics.
Hackers / Live Forensics -> Is primarily concerned with gaining unauthorized access to remote computers. This is very dangerous because it is a hard-to-protect-against vector through which all other sorts of intelligence can be gathered (bypassing Tor removes the need for traffic analysis and leads to easy communications gathering, network analysis, remote forensics, etc).
Open Source Intelligence -> I believe an example of Open Source intelligence would be running a Tor exit node in an attempt to identify interesting servers on the clear net.
________
Traditional forensic analysts (dead forensics) are confounded almost entirely by FDE with strong passphrases. In some cases the feds may attempt to circumvent FDE by carrying out cold boot attacks, using keyloggers or hidden cameras, etc. The first level of security comes from using FDE in the first place. The second level of security comes from protecting from the various ways in which FDE keys can be obtained covertly. To protect from cold boot attacks you may use a system like TRESOR, which stores encryption keys in CPU registers rather than in RAM. You can use a motherboard with chassis intrusion detection support and set it to wipe encryption keys immediately if the case is breached. You can configure a system similar to Tails, where you have a USB stick that, once removed, immediately shuts the computer down into a memory wipe (Tails does this, but you can configure similar things for any OS). You could tether this USB stick to a wrist strap and wear it while you work on your computer, so even if the feds rush in and tackle you, the USB stick will be pulled out of the PC. You can have hot key combinations on your keyboard, or even a single key, that immediately shuts down into a memory wipe in case of emergency.
You also need to follow good operational security procedures. Don't leave your system booted up when you are not near it. Use multiple layers of encryption. FDE is the catch-all, but you should also have any stored information individually encrypted with some symmetric algorithm via GPG. If you have stored content, keep it encrypted with GPG in a Truecrypt container on a drive that is FDE encrypted, and compartmentalize your stuff; there is no need for your entire FDE drive to have its entire content available in plaintext when it is booted. Various OSes allow the home folder to be encrypted separately and mounted with the root password during login, and will automatically dismount it and take you to a login screen after some period of time. Using various layers of encryption like this makes it less likely that all of them will be compromised.
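Symmetrically encrypting an individual file with GPG, as described above, is a one-liner; the cipher choice here is my example, and GPG's default is also fine:

```shell
# Prompts for a passphrase and writes notes.txt.gpg;
# shred the plaintext afterwards rather than just deleting it
gpg --symmetric --cipher-algo AES256 notes.txt
shred -u notes.txt

# Decrypt later with:
gpg --output notes.txt --decrypt notes.txt.gpg
```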
The hardest thing to protect yourself from is a covertly placed keylogger or pinhole camera. These can be used to gather all of your encryption passphrases without you even noticing. There are only a few ways to protect from this. The first method is to use a laptop that you literally never let out of your sight, and that you even sleep next to. The second method is to use a laptop that you keep in a strong safe when you are not using it. The third method is to use battery powered hidden cameras that monitor all entrance points to your PC, and to check for previous surreptitious entry every time before you type your password in.
Even if you follow all of these steps you are not totally protected. TEMPEST attacks and remote keylogging attacks (such as a laser microphone on a nearby window gathering the sound of you typing, for analysis that can recover the keystrokes you have made) are still possible. In some cases what you type can even leak into the power grid for semi-remote gathering, if you have your system plugged into a power outlet while you type on it. Taking care of every possible attack like this is next to impossible without something close to a SCIF (Sensitive Compartmented Information Facility), which is not realistic for us. However, it is rare that the police will go to such lengths, and every additional layer of security you add makes it less likely they will be able to obtain a complete plaintext copy of your drive.
Traffic analysts and signals intelligence are very difficult to protect from, especially if the NSA is your adversary. Using Tor offers some level of protection; it is probably breakable by the NSA in many cases, but there is not much better right now. To get the most out of Tor you need to make sure you are using it correctly. In my opinion this entails not using Tails, because it causes too much entry guard rotation and makes it so Tor does not offer you as much protection as it can. Hopefully using regular Tor is enough for now; if it is not, there isn't much you can do other than look at Freenet perhaps. I2P is not really something I would even consider, and it is horrible for our threat model. Hopefully a new generation of anonymity technology is around the corner.
Communications intelligence can be protected from by always making sure to use GPG, OTR, or similar. There is still a risk of MITM attacks, so it is a good idea to check public keys over multiple independently operated channels (not key servers though), and to create and utilize OTR shared secrets for authentication. OTR without authentication is actually very weak to MITM attacks.
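Checking a key over multiple channels means comparing its full fingerprint, not a short key ID (short IDs can be forged). The key ID below is a placeholder:

```shell
# Show the full 40-character fingerprint of an imported key;
# compare it character by character across independent channels
gpg --fingerprint 0x12345678
```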
Live Forensics is what I would be most worried about because it is the hardest to protect from and stands to gain the most. The techniques for protecting from this generally fall into three broad categories: isolation, correctness and randomization (though I think there are more methods than this). Isolation would entail running Firefox in a virtual machine that isn't aware of an external IP address and which also doesn't have the ability to access the Tor process. There are other isolation tools as well, primarily mandatory access controls; these are hard to configure but can provide a great deal of security.
Correctness means that the programs you are running are implemented properly and without bugs. Almost all programs have security bugs in them, they just might not have any currently known at a specific point in time. Keeping everything fully patched and updated is a requirement for security; the more you lag behind a patch the more likely you are to get pwnt. Additionally, different operating systems and programs have different levels of correctness due to the skill level of the people who implemented them as well as the sort of analysis they have been subjected to. Generally you want to use the most correct OS possible with the most correct applications included. This means you would opt for Debian stable over Ubuntu: Debian stable has a slow release cycle, and prior to a release the OS and its included applications have been analyzed significantly, whereas Ubuntu puts more focus on features than on stability. At the extreme end of the spectrum you have operating systems like OpenBSD which have been subjected to continuous security audits for many years and are thought to be largely correct. I personally would probably opt for Qubes though, due to the sophisticated way it has implemented isolation.
Randomization refers to features such as ASLR, which can make vulnerabilities that are present much harder to exploit.
So once you find the right balance of isolation, correctness and randomization in the OS and software you use, you still are not done. You need to configure the system in a secure way still. This could entail firewall rules, individual hardening of applications (particularly the browser, which at the very least should have javascript disabled), and general hardening of the OS. There are other security programs that can be added as well, such as intrusion detection systems, etc.
To some extent, we've been focusing on the wrong things. I've predominantly been concerned with network layer attacks, or "attacks on the Tor network", but it seems clear to me now that application layer attacks are far more likely to identify us. The applications that we run over Tor are a much bigger attack surface than Tor itself. We can minimize our chances of being identified by securing the applications that we run over Tor. This observation informs the first four features that we desire.
I think both are serious threats, I would be more worried about application layer attacks as well but I would not ignore the possibility of direct attacks on Tor by any means.
===Trustworthiness===
We should favor technologies that are built by professionals or people with many years of experience rather than newbs. A glaring example of this is CryptoCat, which was developed by a well-intentioned hobbyist programmer, and has suffered severe criticism because of the many vulnerabilities that have been discovered.
BitMessage is another good example of this.
Isolation is the separation of technological components with barriers. It minimizes the damage incurred by exploits, so if one component is exploited, other components are still protected. It may be your last line of defense against application layer exploits.
The two types of isolation are physical (or hardware based) and virtual (or software based). Physical isolation is more secure than virtual isolation, because software based barriers can themselves be exploited by malicious code. We should prefer physical isolation over virtual isolation over no isolation.
Indeed, and it all comes back to complexity. Routing your traffic through an old computer that you turned into a Tor router that runs on OpenBSD is much more secure than running an OS in virtualbox that routes through Tor on the host. If your primary computer is rooted in the first case, the attacker will very likely need to exploit Tor to deanonymize you on the application layer. If the guest OS is rooted in the second case, the attacker could exploit virtualbox to break out of the isolation OR they could exploit Tor to break out of the isolation. Using virtualbox for isolation adds an entire large chunk of code that you need to trust not to be exploitable, versus the hardware solution where you are primarily only trusting the Tor code to not be exploitable. On the other hand, if you use no isolation at all, then you are not getting any additional protection, and as soon as your network facing application is pwnt you are deanonymized (as we saw in the freedom hosting attack).
It is also worth noting that firewall rules could have prevented the freedom hosting attack from working, as could have mandatory access controls. A combination of mandatory access controls + virtual or hardware isolation + firewall rules would have added three different layers of security via isolation that an attacker would have needed to overcome before they could get their payload to phone home.
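On Linux, the firewall layer mentioned above can be as simple as refusing all outbound traffic except from the Tor daemon's own user, so an exploited browser has no direct route out. A sketch with iptables, assuming Tor runs as the `debian-tor` user (the Debian default; adjust for your distro):

```shell
# Allow loopback, allow the Tor user out, drop everything else;
# an exploit payload running as your browser's user cannot phone home
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -m owner --uid-owner debian-tor -j ACCEPT
iptables -P OUTPUT DROP
```

Run as root, and test that Tor still bootstraps before trusting the rules.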
When evaluating virtual isolation tools, ask yourself the same questions about simplicity and trustworthiness. Does this virtualization technology perform unnecessary functions (like providing a shared clipboard)? How long has it been in development, and how thoroughly has it been reviewed? How many exploits have been found?
Also ask yourself: does this virtualization based isolation tool support ASLR? Does it support the NX bit? Xen is probably the most secure virtualization system in that it will be hardest for the attacker to break out of; this is why Qubes uses Xen. On the other hand, Xen doesn't support ASLR. This means that if you run Firefox in a Xen VM, it is probably more likely that an attacker can exploit its vulnerabilities than if it were in a VirtualBox VM. On the other hand, it is more likely that the attacker will be able to break out of the VirtualBox isolation than out of the Xen isolation. I am not sure where the correct balance is, but the answer is probably to use hardware isolation, because it is the strongest isolation possible and it also supports ASLR and everything else. Or maybe the solution is to use hardware isolation + virtual isolation, but then we are back to square one: should we use virtual isolation that is harder to penetrate, or virtual isolation that allows us to use other important security mechanisms as well?
I should begin by pointing out that the features outlined above are not equally important. Physical isolation is probably the most useful and can protect you even when you run complex and untrusted code.
Physical isolation with Tor on an OpenBSD box = 2 orders of magnitude more secure than running vanilla TBB. Physical isolation with GPG keys on an air gapped machine = 2 more orders of magnitude more secure. Physical isolation of the network facing applications from Tor, and air gapped GPG keys is probably close to the best you can hope for when it comes to protection from hackers.
A router with a VPN + an anonymizing middle box running Tor + a computer running Qubes OS.
I agree, but don't forget to air gap your GPG keys and plaintext messages :).
Advantages: physical isolation of Tor from applications, full disk encryption, well tested code base if it's a major distro like Ubuntu or Debian
Disadvantages: no virtual isolation of applications from each other
You could always use Xen or something else yourself. Most people only really want to isolate a few applications, maybe Pidgin and Tor Browser and GPG. You don't really need Qubes for this, it just tries to make it easier and prettier. And Xen is very well tested and widely used.
Whonix on Linux host.
This is a good bet as well, and the biggest advantage is ease of use versus Qubes I would say.
Disadvantages: no physical isolation, no virtual isolation of applications from each other, not well tested
A big plus for qubes is virtual airgapped GPG, but this can be configured manually with Xen or VB as well.
Tails
Advantages: encryption and leaves no trace behind, system level exploits are erased after reboot, relatively well tested
Disadvantages: no physical isolation, no virtual isolation, no membership concealment, no persistent entry guards! (but can manually set bridges)
No persistent entry guards is a massive disadvantage; if you don't set persistent bridges, don't use Tails. If they add persistent entry guards I would consider it a fine solution, and although not on the level of Whonix or Qubes it would be a solid third place. They shoot themselves in the foot by not having persistent entry guards though, so make sure you use bridges if you use Tails. It is worth noting that had the FH attackers targeted Linux, their payload would have failed to phone home because of Tails' firewall rules (but it didn't target Linux in the first place).
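Pinning bridges so they behave like persistent entry guards is a small torrc fragment; the addresses below are placeholders (in Tails you would enter them in the bridge settings at boot rather than a persistent torrc):

```
# torrc fragment: use the same bridges every session so your
# entry point into the Tor network stays stable
UseBridges 1
Bridge 192.0.2.10:443
Bridge 192.0.2.20:8443
```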
Whonix on Windows host.
Advantages: virtual isolation, encryption (possible), membership concealment (possible)
Disadvantages: no physical isolation, no virtual isolation of applications from each other, not well tested, VMs are exposed to Windows malware!
Definitely on the insecure side of the spectrum, although it would have protected from the FH attack.
Linux OS
Advantages: full disk encryption (possible), membership concealment (possible)
Disadvantages: no physical isolation, no virtual isolation
Definitely on the insecure side of the spectrum as well, it only protected from FH attack because of security via obscurity which is never what you want to rely on. Isolation is important. Tails is a bit of an exception because even if Linux had been targeted Tails would have prevented the exploit from phoning home. Technically you could configure similar firewall rules on any Linux OS, but you didn't specify that in the description, and virtualization based isolation is much better anyway.
Windows OS
Advantages: full disk encryption (possible), membership concealment (possible)
Disadvantages: no physical isolation, no virtual isolation, the biggest target of malware and exploits!
This is about as insecure as you can get.
-
The other thing that you touched upon is Qubes. Ideally it looks like a great security methodology, but as you said, it being new and untested makes it hard to make a really solid evaluation of it. Many exploits are produced when a combination of factors comes into play. Combining different software or hardware can produce weaknesses and vulnerabilities in your OS.
You're right, and Qubes violates the principle of software simplicity. Excellent point. This is why we need to talk about it. :)
Qubes is based on Xen and has a fairly minimal trusted code base.
-
I think another thing also that people forget is not to download torrents!
You can secure the shit out of your system, but if you have a pirated copy of Adobe Photoshop CS6 on your system or just downloaded the full seasons of Breaking Bad, well, good job, you just opened a new port to your computer, and who knows what code is in the software you downloaded. I read a post on whether or not viruses and malware can be embedded in video files and the consensus was they can, but it is unlikely. A common trick is to disguise executable code with a video extension, so for example it could be named
evilcode.avi.exe
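A renamed file still has its real content, so inspecting the file rather than trusting its name catches this trick. On Linux the `file` utility reads the magic bytes instead of the extension:

```shell
# The extension says video, the content says otherwise:
# 'file' inspects the first bytes, not the name
printf 'just plain text, not a video' > /tmp/movie.avi
file /tmp/movie.avi    # reports text, not RIFF/AVI video data
```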
I also think sometimes people forget the basic things about computer security which is the foundation really but then again I assume that most SR users are at an above average skill level when it comes to computing.
I think pretty close to anything can have attack code embedded into it.
-
When using a VPN with Tor some level of time/size correlation may still be possible, when browsing clearnet websites. That's because you are sending TCP packets of a certain size within a certain timeframe, and they arrive at the clearnet destination within that timeframe and a similar size. So if someone is sniffing the route between your computer and the VPN, and the route between the exit node and the clearnet destination at the same time, they can assume that there is some probability that you belong to a small group of people who possibly connected to the clearnet website within that timeframe.
This can be prevented by using secure remote desktop connections. Such connections basically send data all the time, and the size of the TCP packets between your computer and the remote desktop differ significantly from the TCP packets which arrive at the clearweb destination.
A possible setup would be:
VPN -> safe box with remote desktop -> Tor -> clearnet
VPN into a box in a country which is not a PRISM partner and use the remote desktop (e.g. VNC) of that box, which preferably has Linux or *BSD installed and was rented anonymously. On that box run Tor and firewall the box to only let Tor traffic out. Then the main problem would be to secure your data on that remote machine. I'm not sure how safe VPN encryption is, so you may want to tunnel the VNC connection through SSH for increased security.
If you only use Tor for hidden services then this is not an issue though, so you can use less paranoid setups.
If you tunnel Tor through a VPN the packets will all be padded to the same size anyway. And timing analysis can still fuck it in any case.
-
Is there a use case for having something like:
client -> vpn or other obfuscation -> connect remotely to vps or server bought anonymously running your desired os -> tor
-
These rankings seem to be biased towards systems that maximise security for individuals who will predominantly be committing their offences from a single location and/or using the same network repeatedly. More bluntly, people sitting at home ordering their drugs to be delivered to their door ;) While the set-ups you've described are brilliant, they're also involved and unwieldy, inelegant.
I prefer Tails as not only is it a secure OS, but it's a means of encouraging secure behaviour. Used as recommended, the lack of persistent entry guards isn't really an issue. Used as recommended, I believe Tor bridges may be less safe, or at best redundant, as you would want to randomise them as much as possible too. Spoof your MAC address, briefly access random networks to conduct your business, RAM wiped, away you go. Easy as... :)
-
subbing
-
Changing your MAC address is only useful if you connect to a wireless network (or data network) you are not supposed to, or connect to Tor via a free wireless network at a cafe or whatever.
You guys realise the MAC address is only known within the local LAN, right? So the router and other devices on the local LAN can see what your MAC is.
Your ISP doesn't know what your device's MAC is... nor do websites you access.
The IP address is the only thing transmitted end to end (obviously NAT will change the IP but you get what I mean)... not the MAC. :)
Some of you have really good ideas but I think you're overthinking it a bit too much... and I'm sure someone will come back saying "you can never be too careful", but if you actually have a fairly good understanding of networks and security you'd realise a lot of the stuff suggested is pointless.
Hitch, my comments aren't directed at you!! More so everyone else with their overkill ideas like running VM in VM in VM in VM in VM in VM to stop DNS leaking... which you can easily stop anyway hehe.
Hmm, I didn't mean to come off sounding harsh. It's just that some people can inadvertently create a system which isn't quite as secure as they think by implementing 10000000 diff things, which is a shame as they are trying to do the right thing.
-
astor, you should consider teaching a course on this stuff, framed a different way, like cyber safety or something, or how to prevent being tracked by marketers. it's all equal.
Are you saying I should cancel my Silk Road Security 101 course at the local junior college? ;)
apart from being great information on how to remain anonymous, this is essentially information that the next generation should be equipped with to protect themselves from any interests that aim to invade their privacy without their permission or knowledge.
Thanks. I wanted to put my knowledge on this subject in one place as well as spark a conversation, and writing helps me organize my thoughts.
-
When using a VPN with Tor some level of time/size correlation may still be possible, when browsing clearnet websites. That's because you are sending TCP packets of a certain size within a certain timeframe, and they arrive at the clearnet destination within that timeframe and a similar size. So if someone is sniffing the route between your computer and the VPN, and the route between the exit node and the clearnet destination at the same time, they can assume that there is some probability that you belong to a small group of people who possibly connected to the clearnet website within that timeframe.
I don't consider VPNs to be secure against someone who is specifically targeting you. The reason I added a VPN to the options, as stated in the guide, is to protect vendors. I didn't mention the specific attack that it protects against, but I've described it before:
I believe that vendors should hide their Tor use. It isn't a crime, but it could be used to identify them.
LE orders a package and gets the vendor's city. I calculated the average density of Tor users in the United States, based on my estimate that there are 250,000 monthly Tor users in the US (the global numbers vary too much by country to be useful). That's about 80 in a city of 100,000, and 800 in a city of 1 million. Actually, the number of daily connecting users is 80,000, and some of them are different people on subsequent days, so the number of people who connect every day like a typical vendor is probably more like 60,000. That's 20 people in a city of 100K, and 200 people in a city of 1M.
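The arithmetic above is easy to reproduce; the US population figure (~313 million) is my assumption for illustration:

```python
# Back-of-the-envelope Tor user density per city, following the post.
US_POPULATION = 313_000_000  # assumed figure, roughly the 2013 US population

def users_per_city(tor_users_nationwide, city_population):
    return tor_users_nationwide / US_POPULATION * city_population

print(round(users_per_city(250_000, 100_000)))    # ~80 monthly users in a city of 100K
print(round(users_per_city(250_000, 1_000_000)))  # ~800 in a city of 1M
print(round(users_per_city(60_000, 100_000)))     # ~20 daily users in a city of 100K
```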
LE works with the local ISP to identify these users by watching for connections to entry guards, a list of about 1200 IP addresses. From there they correlate the people connected to entry guards with the vendor's online activity. They could send messages to the vendor and look at the response times, and if the vendor posts on this forum, look at the post times. Anyone not connected to the Tor network at the time of a vendor activity is not the vendor (or so they assume). They could exclude most of those Tor users in a short period of time, probably a couple of weeks. They wouldn't be able to exclude everyone, because some people are always connected, but if they have a list of 5 to 10 people, and the vendor is pushing a lot of weight, it could be worth investigating all of them through traditional means to find the vendor.
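The elimination process described here is a classic intersection attack. A minimal sketch, with made-up user IDs and observation windows:

```python
# Intersection attack sketch (hypothetical data). Each set holds the local
# Tor users who were connected during one observed vendor action; anyone
# missing from any set is excluded as a candidate for being the vendor.

def surviving_suspects(observation_windows):
    remaining = set(observation_windows[0])
    for online_users in observation_windows[1:]:
        remaining &= online_users  # drop anyone offline during this action
    return remaining

windows = [
    {1, 2, 3, 5, 8, 13, 17},  # connected when the vendor posted on the forum
    {2, 3, 5, 8, 13, 19},     # connected when the vendor answered a message
    {3, 5, 8, 13, 17, 19},    # connected during another forum post
]
print(sorted(surviving_suspects(windows)))  # the pool shrinks with every window
```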
The purpose of the VPN is to avoid connecting to entry guard IPs, not to defeat traffic analysis. You can achieve the same result with bridges, but LE may have enumerated most of the bridges, whereas they probably don't have the IP addresses of all the OpenVPN servers in the world, so if you pick an obscure VPN provider in an obscure country, you're safer than using bridges.
-
Forensic analysts
Traffic Analysts / Signals Intelligence
Network Analysts
Communications Intelligence
Hackers / Live Forensics
Open Source Intelligence
I'm not going to requote the entire thing, but this is great. Exactly the kind of input I was hoping to get.
It is also worth noting that firewall rules could have prevented the freedom hosting attack from working, as could have mandatory access controls. A combination of mandatory access controls + virtual or hardware isolation + firewall rules would have added three different layers of security via isolation that an attacker would have needed to overcome before they could get their payload to phone home.
Yes, mandatory access controls and firewall rules are other forms of isolation that I didn't mention. I was too narrowly focused on VMs.
Also ask yourself: "Does this virtualization based isolation tool support ASLR? Does it support the NX bit?" Xen is probably the most secure virtualization system, in that it will be hardest for an attacker to break out of. This is why Qubes uses Xen. On the other hand, Xen doesn't support ASLR. This means that if you run Firefox in a Xen VM, it is probably more likely that an attacker can exploit its vulnerabilities than if it were running in a VirtualBox VM. On the other hand, it is more likely that the attacker will be able to break out of the VirtualBox isolation than out of the Xen isolation. I am not sure where the correct balance is, but the answer is probably to use hardware isolation, because it is the strongest isolation possible and it also supports ASLR and everything else. Or maybe the solution is hardware isolation + virtual isolation, but then we are back to square one: should we use virtual isolation that is harder to break out of, or virtual isolation that allows us to use other important security mechanisms as well?
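As an aside, on a Linux guest you can check whether the kernel's ASLR is actually enabled by reading the standard sysctl. A quick sketch (the /proc path is Linux-only):

```python
# Read the Linux ASLR setting: 0 = disabled, 1 = stack/mmap randomized,
# 2 = full randomization including the heap. Raises on non-Linux systems.

def aslr_level(path="/proc/sys/kernel/randomize_va_space"):
    with open(path) as f:
        return int(f.read().strip())

labels = {0: "ASLR disabled", 1: "partial ASLR", 2: "full ASLR (the usual default)"}
print(labels[aslr_level()])
```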
Theory meets practice at some point. Since posting this guide, people have admitted to me that running Qubes or setting up an anon middle box (even following the instructions to manually set up the Whonix Gateway on a separate device) is beyond their capabilities. A big difference between Xen and VirtualBox is that there is a preconfigured solution for VirtualBox, and that's better than no virtual isolation at all.
A big plus for Qubes is virtual air-gapped GPG, but this can be configured manually with Xen or VB as well.
What do you mean by virtual air-gapped?
The lack of persistent entry guards is a massive disadvantage; if you don't set persistent bridges, don't use Tails. If they add persistent entry guards, I would consider it a fine solution, and although not on the level of Whonix or Qubes, it would be a solid third place. They shoot themselves in the foot by not having persistent entry guards though, so make sure you use bridges if you use Tails. It is worth noting that had the FH attackers targeted Linux, their payload would have failed to phone home because of Tails' firewall rules (but it didn't target Linux in the first place).
This really needs to be TODO item #1, like out in the next version of Tails.
-
Is there a use case for having something like:
client -> vpn or other obfuscation -> connect remotely to vps or server bought anonymously running your desired os -> tor
If you didn't purchase the VPS anonymously, then it's equivalent to connecting to Tor directly. Otherwise, it could be a useful layer of obfuscation, kind of like a 4th hop private bridge. Not sure how badly it would degrade your connection, though.
-
A possible setup would be:
VPN -> safe box with remote desktop -> Tor -> clearnet
VPN into a box in a country which is not a PRISM partner and use the remote desktop (e.g. VNC) of that box, which preferably has Linux or *BSD installed and was rented anonymously. On that box run Tor and firewall the box to only let Tor traffic out. Then the main problem would be to secure your data on that remote machine. I'm not sure how safe VPN encryption is, so you may want to tunnel the VNC connection through SSH for increased security.
If you only use Tor for hidden services then this is not an issue though, so you can use less paranoid setups.
If you tunnel Tor through a VPN the packets will all be padded to the same size anyway. And timing analysis can still fuck it in any case.
A remote desktop is used in this example, so the time/size correlation is not an issue. The VPN is only used for encryption of the VNC (remote desktop) connection between your computer and the safe box. Though using a SSH tunnel would basically be enough. Not sure whether VPN has better encryption than SSH, but I guess both are equal. And SSH is easier to setup. However more steps may be necessary to make this type of setup safe to use.
If anyone wonders what's more secure, Tails or Whonix, have a look at this comparison:
http://zo7fksnun4b4v4jv.onion/wiki/Comparison_with_Others
-
These rankings seem to be biased towards systems that maximise security for individuals who will predominantly be committing their offences from a single location and/or using the same network repeatedly. More bluntly, people sitting at home ordering their drugs to be delivered to their door ;) While the set-ups you've described are brilliant, they're also involved and unwieldy, inelegant.
You're absolutely right. The first 5 setups are beyond the capabilities of the vast majority of people, but I've listed them because they really are the most secure. So now you have a fun challenge. Can you convert an old laptop into a Whonix Gateway, or install PORTAL on your router? If you never try anything hard, how will you ever grow?
In any case, I think Whonix on a Linux host or Tails with persistent bridges are safe enough for most people, and within their capabilities to setup. Either of these options is much safer than running TBB on Windows, which is what most people do right now. I want to lift the collective security of the community, and I've given them a variety of options.
I prefer Tails as not only is it a secure OS, but it's a means of encouraging secure behaviour. Used as recommended, the lack of persistent entry guards isn't really an issue. Used as recommended, I believe, tor bridges may be less safe, at best redundant, as you would want to randomise them as much as possible, also. Spoof your mac address, briefly access random networks to conduct your business, ram wiped, away you go. Easy as... :)
If by "used as recommended" you mean used as a mobile operating system where you log on to different, random wifi spots, then you're correct, your bridges should be different each time so you aren't linked to other logons (of course, you should randomize your MAC address in that case too, which unfortunately Tails doesn't give you an option to do during boot).
However, the vast majority of Tails users in this community don't use it as a mobile OS. They repeatedly connect from home. In that case, you want persistent entry guards, because choosing different ones all the time increases the chances that you pick a malicious node.
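The intuition behind persistent guards can be quantified: if a fraction c of guard bandwidth is malicious, the chance of ever landing on a bad guard grows with every fresh selection. A toy calculation (the 1% figure and three guards per selection are assumptions for illustration):

```python
# Probability of picking at least one malicious entry guard (toy numbers).
# Assumes guards are chosen independently and a fraction c is malicious.

def p_bad_guard(c, selections, guards_per_selection=3):
    picks = selections * guards_per_selection
    return 1 - (1 - c) ** picks

print(round(p_bad_guard(0.01, 1), 3))    # persistent guards, one selection: ~0.03
print(round(p_bad_guard(0.01, 100), 3))  # fresh guards for 100 sessions: ~0.95
```

This is exactly why Tor clients keep the same guards for an extended period: rotating constantly converts a small one-time risk into a near-certainty over time.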
-
Also, the lack of persistent entry guards would make it easier for an adversary to use fingerprinting to establish that you're using Tails, as you stand out from the rest of the Tor users, who keep the same entry guards for a month.
I agree, though. Setting up a Whonix Gateway on a separate machine should be achievable by most, as it's not terribly complicated.
-
In regards to PORTAL, what if your ISP only allows you to use their routers to connect to the internet? Any way around that besides switching ISPs?
-
If anyone wonders what's more secure, Tails or Whonix, have a look at this comparison:
http://zo7fksnun4b4v4jv.onion/wiki/Comparison_with_Others
That's a nice overview. Looks like Whonix with physical isolation and Qubes OS with physical isolation are about even. You might still prefer Qubes OS so you can create disposable VMs to open untrusted files, or store your passwords in text files in a separate encrypted VM from your browser. I guess Whonix with virtual isolation is less secure than Qubes OS because VirtualBox is less secure than Xen. Are kernel exploits to gain root privileges more of a threat than VirtualBox exploits to escape the VM?
-
In regards to PORTAL, what if your ISP only allows you to use their routers to connect to the internet? Any way around that besides switching ISPs?
That sucks, but you can setup an anon middle box in that case.
An old laptop works well as an anon middle box without needing to buy extra hardware (other than a crossover cable, perhaps), because it already has two NICs. Just remember to route the connection from your Workstation into the ethernet port of the laptop, and use the wireless card to connect to your router, not the other way around.
Also, you should use a desktop computer for the Workstation, or physically remove the wireless card from a laptop and connect to the anon middle box via ethernet, so an attacker who gains control of your Workstation can't connect to your neighbor's unprotected wifi and get their IP address. If we're talking about LE, and they know that IP belongs to your neighbors, they can knock on a few doors and find you pretty quickly.
-
astor :)
You never cease to amaze me :)
Hugs to ya from Me 8)
Love, Peace & Respect from me to You :)
Chem
O0
-
If anyone wonders what's more secure, Tails or Whonix, have a look at this comparison:
http://zo7fksnun4b4v4jv.onion/wiki/Comparison_with_Others
That's a nice overview. Looks like Whonix with physical isolation and Qubes OS with physical isolation are about even. You might still prefer Qubes OS so you can create disposable VMs to open untrusted files, or store your passwords in text files in a separate encrypted VM from your browser. I guess Whonix with virtual isolation is less secure than Qubes OS because VirtualBox is less secure than Xen. Are kernel exploits to gain root privileges more of a threat than VirtualBox exploits to escape the VM?
That's a very good and valid question in regards to which is more of a threat: kernel exploits or VirtualBox exploits. I would be more concerned with kernel exploits, because that level of attacker would be more sophisticated in his approach and would have the knowledge to engineer custom exploits tailored to the information he or she already has about your OS.
Someone trying to exploit VirtualBox would, I think, be less sophisticated, as they're relying on holes and weaknesses in the software. With that being said, kernel exploits are no small feat, and even the most infamous so-called hackers would have no clue how to write one. So I think your attack surface would be smaller using Qubes OS, because kernel exploits are not something your average wannabe hacker could pull off. If it comes to the point where LE or a global adversary is writing custom kernel exploits in order to find you, then you've got trouble.
-
Thank you for explaining that setup with the laptop. That's what I had in mind: getting a used laptop, DBAN-wiping it, and installing either OpenBSD or Linux Mint (probably Linux Mint, a little easier for a first timer) to use as the router/gateway with Whonix. The desktop computer would be the host, running Windows. On a scale of 1-10, how secure would you rate that, assuming the absence of malware?
-
Amazing post Astor, first and foremost - thanks a bunch.
I know this thread is dedicated to technological security, but I need a place to rant.
Lately I have been getting tiny orders in packages that don't fit into my mailbox.
While I'm not 100% this is a security risk in and of itself.
Today one arrived with a Click and Ship label.
Needless to say, I was a bit angry.
Fortunately I've had the same mailman for over 10 years, and he doesn't mind bringing them right to me.
I don't see the point of way over sized packages, when they contain less than a gram.
And certainly disagree with using Click and Ship.
This might be personal preference, but just thought I'd get it off my chest.
Oh, and +1 Astor.
Vanquish
-
Ya, there was a big debate on that in the shipping forum, about how vendors using Click-N-Ship are actually violating the part of their vendor contract that forbids storing and keeping their customers' information. By using Click-N-Ship, the vendor is inputting the customer's information into the USPS website, which then gets stored in their database. The problem is that if one package gets seized, they trace it back to the USPS account it was sent from, and voila, they have a list of addresses that could also be suspect for receiving drugs. Personally I think it's a violation of the vendor's contract to do that, but I may be wrong. However, I would not order or re-order from a vendor that does it that way.
-
Ya, there was a big debate on that in the shipping forum, about how vendors using Click-N-Ship are actually violating the part of their vendor contract that forbids storing and keeping their customers' information. By using Click-N-Ship, the vendor is inputting the customer's information into the USPS website, which then gets stored in their database. The problem is that if one package gets seized, they trace it back to the USPS account it was sent from, and voila, they have a list of addresses that could also be suspect for receiving drugs. Personally I think it's a violation of the vendor's contract to do that, but I may be wrong. However, I would not order or re-order from a vendor that does it that way.
You're correct and make some valid points, and I agree. +1
-
Changing your MAC address is only useful if you connect to a wireless network (or data network) you are not supposed to, or connect to Tor via a free wireless network at a cafe or whatever.
You guys realise the MAC address is only visible on the local LAN, right? So the router and other devices on the local LAN can see what your MAC is.
Your ISP doesn't know what your device's MAC is... nor do websites you access.
The IP address is the only thing transmitted end to end (obviously NAT will change the IP, but you get what I mean)... not the MAC. :)
Some of you have really good ideas, but I think you're overthinking it a bit too much... and I'm sure someone will come back saying "you can never be too careful", but if you actually have a fairly good understanding of networks and security, you'd realise a lot of the stuff suggested is pointless.
Hitch, my comments aren't directed at you!! More so everyone else with their overkill ideas, like running VM in VM in VM in VM in VM in VM to stop DNS leaking... which you can easily stop anyways hehe.
hmm, I didn't mean to come off sounding harsh. It's just that some people can inadvertently create a system which isn't quite as secure as they think by implementing 10000000 diff things, which is a shame, as they are trying to do the right thing.
These rankings seem to be biased towards systems that maximise security for individuals who will predominantly be committing their offences from a single location and/or using the same network repeatedly. More bluntly, people sitting at home ordering their drugs to be delivered to their door ;) While the set-ups you've described are brilliant, they're also involved and unwieldy, inelegant.
I prefer Tails as not only is it a secure OS, but it's a means of encouraging secure behaviour. Used as recommended, the lack of persistent entry guards isn't really an issue. Used as recommended, I believe, tor bridges may be less safe, at best redundant, as you would want to randomise them as much as possible, also. Spoof your mac address, briefly access random networks to conduct your business, ram wiped, away you go. Easy as... :)
Isolation isn't primarily to stop DNS leaks, it just does that as well. It is primarily to prevent hackers from getting your real IP address. Also, spoofing MAC address can be useful in this situation as well, look at the FH attack their malware gathered MAC addresses to be able to identify the individual machine that went to FH. It isn't only a concern about MAC address leaking to the wireless router but also a concern about remote attackers hacking in and grabbing it.
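For what it's worth, the addresses that spoofing tools like macchanger assign are random "locally administered" MACs: bit 1 of the first octet set, bit 0 (multicast) cleared. A sketch of generating one, with kmfkewm's caveat in mind that an attacker with sufficient privileges may still read the burned-in hardware address:

```python
import random

# Generate a random locally administered, unicast MAC address, the kind a
# spoofing tool assigns. Note this only changes what the local LAN sees;
# malware with root access may still recover the real hardware address.

def random_mac():
    first = (random.randrange(256) | 0x02) & 0xFE  # set local bit, clear multicast bit
    rest = (random.randrange(256) for _ in range(5))
    return ":".join(f"{b:02x}" for b in [first, *rest])

print(random_mac())  # e.g. 06:1f:a3:9c:04:7e, then assign it to the interface
```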
-
What do you mean by virtual air-gapped?
I just meant isolated I shouldn't have said air-gapped since that actually means isolated such that there is a complete lack of an attackable path to it from the internet. Virtualization used for isolation of GPG private key simulates air gaps but it isn't as secure. Qubes allows for storing GPG private keys and plaintexts in an isolated domain but it still has a path to it if the hackers can break out of the virtualization. On the other hand, running GPG on a machine without any attack path from the internet to it is a real air gap.
The best is probably to have two machines for encryption. Machine 1 has your private key on it and is used for decrypting messages; it can have ciphertexts from the internet brought over to it via one-time-use CDs (so it can be infected), but it never has any outgoing path to the internet (so it cannot phone home). Machine 2 is used for encrypting messages; it can have ciphertexts sent from it via one-time-use CDs (so it can phone home), but it cannot have anything brought to it by media that has accessed the internet (so it cannot be infected).
Your private key will be completely protected. The only way an attacker could compromise your plaintexts is if they hack you via a public GPG key somehow (which you must load to be able to encrypt messages to people), and then configure your system to somehow send out the information they are interested in, perhaps by screwing with your PRNG so they can always determine the session key of your outgoing ciphertexts.
-
The best is probably to have two machines for encryption. Machine 1 has your private key on it and is used for decrypting messages; it can have ciphertexts from the internet brought over to it via one-time-use CDs (so it can be infected), but it never has any outgoing path to the internet (so it cannot phone home). Machine 2 is used for encrypting messages; it can have ciphertexts sent from it via one-time-use CDs (so it can phone home), but it cannot have anything brought to it by media that has accessed the internet (so it cannot be infected).
Like what the Armory does with cold storage of bitcoin wallets. That is certainly secure, but it's hard enough to get people to use PGP. :)
-
Good thread.
However, I don't necessarily agree that Whonix on a Linux host is *fundamentally* better than the same on a Windows host, to the point of making the second insecure in EVERY case.
It's true that Windows has a malware problem, but if you use (for example) a notebook only for SR & co., having a Windows host or a Linux one doesn't change that much, because you will always be working inside the VM guest anyway (and you can use a Linux guest there, for example). If you don't use the host for anything insecure (only visiting SECURE clearnet sites, doing ordinary software tasks without the net, etc.), there will be no more problems per se than with a Linux host.
So, imo, it depends on what you use the host for. It's obvious that if you have a PC you use for everything, with a Windows host, then running a Whonix VM on it would be much worse than doing the same on a Linux host. But if the host is not the primary use of the PC, or the host use is not insecure to begin with, it will not make as much difference.
Sure, you can say that at that point it's simply better to go with a Linux host, because you will use the PC only for the Whonix guest. But not everyone is a techie, and many tech newbies don't get along with Linux for anything beyond simple surfing or very basic operations, so having a host they can rely on (the one they always use) can make them feel more at home.
Having a Windows host on a dedicated notebook + a Whonix guest is a great step forward from the usual Tor on a Windows host on a desktop PC (used for everything) that many (the majority, alas) use. Expecting these people to do the same with Linux is asking for too big a jump, but doing it with Windows is something they can do easily and feel at home with, increasing their security by 1000% (in comparison to the usual). Easy to set up, easy to adopt, and they don't have to use an OS they don't feel at home with, so they can still use the notebook for home matters.
-
thanks astor - you always provide good info.
in terms of physical isolation, my own solution has been to use a 2nd laptop, and it is used for nothing but SR.
do not load google on it. access SR only thru the cable connection. do not link with wi-fi.
as for simplicity, have your computer tech remove all the bloatware possible.
download nothing except updates for firefox and Tor. always create a new folder for Tor.
keep those passwords secure, fresh, and convoluted.
our business on SR is an important expression of our values, and lifestyle.
love and respect yourself, protect your community as would guard a small child.
-
thanks astor - you always provide good info.
in terms of physical isolation, my own solution has been to use a 2nd laptop, and it is used for nothing but SR.
do not load google on it. access SR only thru the cable connection. do not link with wi-fi.
as for simplicity, have your computer tech remove all the bloatware possible.
download nothing except updates for firefox and Tor. always create a new folder for Tor.
keep those passwords secure, fresh, and convoluted.
our business on SR is an important expression of our values, and lifestyle.
love and respect yourself, protect your community as would guard a small child.
That is not what is meant by the technical term isolation.
-
Windows Page File
During its normal course of operation, Windows sometimes needs to put away the contents of memory used by one program in order to make room for another program. The memory contents are stored in the Windows page file.
Programs that run under Sandboxie are still running in the same Windows operating system as any other program in the computer, so portions of sandboxed and normal programs may end up sitting side by side in the same page file.
It is possible to configure Windows to clear the contents of the page file at shutdown. More information here and here.
It is possible to configure Windows Vista to encrypt the contents of the page file:
* Run secpol.msc to open the Local Security Policy editor
* Expand the group labeled Public Key Policies
* Right-click the item labeled Encrypting File System and select Properties
* Select Allow to enable Encrypting File System
* Check the box to Enable pagefile encryption.
* Click OK and reboot to put the new setting into effect.
To do the same on a Windows 7 or 8 host, do this:
Open a command prompt with administrative rights (in Windows 8, press Win + X and you'll find it there; in Windows 7, go to All Programs > Accessories, locate Command Prompt in the listing, right-click it and select Run as Administrator), then enter this:
fsutil behavior set EncryptPagingFile 1
And press enter. Done. Reboot Windows and the pagefile will be encrypted.
-
Good thread.
However, I don't necessarily agree that Whonix on a Linux host is *fundamentally* better than the same on a Windows host, to the point of making the second insecure in EVERY case.
Sure, a Windows host with no malware is about as secure as a Linux host, but I think you are downplaying the difference. Most people will be running Whonix on a Windows host that they use for other purposes. If they can spare a computer for SR activities only, why would they run Whonix on Windows anyway? They have all the more freedom to install Linux and run Whonix on it, or turn that computer into a Whonix Workstation with physical isolation (using Tor on the router or a middle box as the Gateway). No part of SR requires Windows.
So it is extremely likely that anyone running Whonix on Windows will be using Windows for other reasons, like normal activities tied to their real identities. It's also a fact that Windows is a bigger target of malware and exploits by two or three orders of magnitude over Linux. In the 5+ years that I've been using Linux, I've never heard of Linux-specific malware in the wild. There have been a few cross-platform Java exploits, which were easy enough to protect against (don't install Java).
Under what I predict to be the normal use case, I consider that setup to be insecure.
I should also point out that my cut off is a bit arbitrary. Tails or Whonix on a Linux host can also be exploited, but I think the difference between #7 and setups below it, in terms of the probability of that happening, is much bigger than the difference between #5 through #7, so I drew the line there (the difference between any setup that uses physical isolation and any setup that doesn't is probably also very big). Since posting that guide, people have told me that they feel insecure because they are using Tails, because it's so far down the list. I think Tails with bridges is fine for the average SR user if they are incapable of setting up something higher on the list. There is no magic cut off line that makes you "secure enough", although the probability of getting pwned decreases as you go up the list.
-
It may require having a dedicated computer just for this function. Thus:
1) One may have a cheap laptop that is reformatted and used exclusively for connecting to SR or HitmansRus or HireAPrivateHooker or whatever.
2) Other (less secure) computers are used for work, school, playing videogames, downloading GoT or whatever.
3) If necessary, simply remove the cheap laptop from one's primary location, or optionally demagnetize the hard drive and drill some holes in it. If the authorities come: "that's just an old broken computer; my main one is right there."
This idea is pretty old school in that you have one computer for a certain function. That function is separated from other computerish activities. Neither the dedicated nor the main computer interact with one another.
-
Isolation isn't primarily to stop DNS leaks, it just does that as well. It is primarily to prevent hackers from getting your real IP address. Also, spoofing MAC address can be useful in this situation as well, look at the FH attack their malware gathered MAC addresses to be able to identify the individual machine that went to FH. It isn't only a concern about MAC address leaking to the wireless router but also a concern about remote attackers hacking in and grabbing it.
kmfkewm's point here is really important, and anyone worried about this kind of thing should take the time to understand why if they don't already.
If I had to make a list of easiest-to-hardest deanonymization attacks against Tor users, it would probably look something like this:
1. Self-incrimination - a user sharing personal details, contaminating across online personas, etc. Lots of folks share way too much. Go watch grugq's OPSEC video on Youtube for some great examples.
2. Sending Clearnet instead of Tor - DNS leaks to specific domains, clearnet links that are unique enough to tie back to a Tor user, anything that gets you to send traffic somewhere someone can control enough to see your source IP.
3. Simple exploits - FH hosting exploit is good example. Get MAC address, report to server clearnet or over Tor. (and FYI, just because you software-spoofed your MAC address, don't think that root access to your OS is going to be blind to the hardware address. There's a reason macchanger can show your original *and* the one it changed it to).
4. Advanced exploits - disabling host firewall and sending traffic, scraping hardware for serial numbers/unique identifiers/etc, grepping for names/addresses/etc on all partitions (good reason to use FDE on every OS if you multi-boot), breaking into isolated Tor router from workstation, or breaking out of Virtualbox/VMWare/whichever hypervisor to host machine.
5. Sybil attacks/etc against Tor - Large number of controlled Tor relays/nodes coming online, possibly leveraging weaknesses in Tor. This would likely be hit-or-miss on an individual user basis. But with cloud computing, it's not that expensive to do in the grand scheme of things. Probably affects hidden services more than exit nodes, because bringing a bunch of exit nodes online at once would be an epic pain in the ass.
6. Broad-scale traffic monitoring (and timing analysis) of Tor nodes - NSA has enough views into full traffic flows in the US that if it was a priority, they should be able to get a decent view of more than we realize. I'm guessing that the correlation is easier for traffic going out exit nodes than hidden services. But that's a guess. The good news? You're probably not the droids they're looking for. Today.
You can't do shit about 5 or 6. VPN might help #5 a little, but it won't help #6 as much as you think.
But, while some of us will head miles down Paranoia Lane, it's also important to realize that consistently executing good (even if simple) security practice is often worth more than an elaborate setup you don't really understand the nuances of. For all of the elaborate mechanisms in this thread and others, I have to wonder how many users here would be better off just booting Tails from USB, making sure Javascript is off, and having one hell of a password on their persistent volume.
-
5. Sybil attacks/etc against Tor - Large number of controlled Tor relays/nodes coming online, possibly leveraging weaknesses in Tor. This would likely be hit-or-miss on an individual user basis. But with cloud computing, it's not that expensive to do in the grand scheme of things. Probably affects hidden services more than exit nodes, because bringing a bunch of exit nodes online at once would be an epic pain in the ass.
6. Broad-scale traffic monitoring (and timing analysis) of Tor nodes - NSA has enough views into full traffic flows in the US that if it was a priority, they should be able to get a decent view of more than we realize. I'm guessing that the correlation is easier for traffic going out exit nodes than hidden services. But that's a guess. The good news? You're probably not the droids they're looking for. Today.
We all seem to agree that network layer attacks are harder than application layer attacks, which is why I focused on the application layer in my guide. I still have my doubts about the effectiveness of the hidden service deanonymization attacks. We'll see what intel comes out of the Marques case. If you think those attacks are effective and they didn't identify the FH server through an attack on the hidden service, you have to explain why. Even longterm entry guards and Tor over Tor only slow down the attack. kmf calculated it increases the time of the 2006 attack from 1-2 hours to about 40 days, but they were investigating FH for a year, so why didn't they do it?
You can't do shit about 5 or 6.
Sure you can. For #5, get people to run more relays (see the guide I just posted :) ). For #6, diversify the network outside of the cooperating intelligence agencies zone, which is my main suggestion in the relay guide.
But, while some of us will head miles down Paranoia Lane, it's also important to realize that consistently executing good (even if simple) security practice is often worth more than an elaborate setup you don't really understand the nuances of. For all of the elaborate mechanisms in this thread and others, I have to wonder how many users here would be better off just booting Tails from USB, making sure Javascript is off, and having one hell of a password on their persistent volume.
I agree and said so myself. Tails is probably secure enough for most SR users as long as they manually set bridges.
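For reference, "manually setting bridges" amounts to a couple of lines in the torrc (the addresses below are placeholders, not real bridges; you'd request real ones from bridges.torproject.org or use the bridge settings Tails exposes):

```
UseBridges 1
Bridge 198.51.100.14:443
Bridge 203.0.113.7:9001
```

With those set, Tor connects only through the listed bridges instead of picking entry guards from the public consensus.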
-
why would they run Whonix on Windows anyway? They have all the more freedom to install Linux and run Whonix on it, or turn that computer into a Whonix Workstation with physical isolation (using Tor on the router or a middle box as the Gateway). No part of SR requires Windows.
Because, as you expressly said in this tutorial and I have seen myself, the large majority (probably about 80%) of SR users (and probably even more when the Forbes article comes out and a flood of new people goes to the silkroadlink.com site) use a Windows host + the Tor bundle and that's it. Now if a person like that wants to increase security and has a notebook, the last thing he wants is to use it only for SR with Linux (as you said yourself here); on the contrary, he would quite willingly use that notebook only for secure home applications on Windows + a Whonix guest to go to SR with. In this way the notebook will not be "wasted" on SR alone, but SR would still be its primary use.
Windows is the OS they are used to and feel right at home with; the notebook is one they can spare, but not naturally and totally for SR alone. They would still like to do the usual home applications with it and/or surf the web a bit (while being careful not to do insecure operations on it), and setting up a Whonix guest is really easy to do, even easier than setting up Tails, with all the benefits of having a notebook you can use for other secure things (good luck getting the usual guy to do the same with a Linux host).
What I'm saying is that I understand perfectly what you intend with your tutorial, but you are asking for too big a jump given the target audience. If you ask the usual user who has just Windows + Tor to set up a notebook with a host OS they don't know how to use, you are asking them to literally use that notebook only for SR, and that's a "waste" many would not accept for what they think the matter is worth (wrongly, but that's the burden of the thing). If instead the jump is to use the notebook with Windows only in a certain way (so they can still run many applications there and not "waste" the notebook) and use a Whonix guest to go to SR, they will increase their usual security by 1000% and the jump will not be so big as to make them think "to hell with this, I'm not going to do something like that for this paranoia".
-
What I'm saying is that I understand perfectly what you intend with your tutorial, but you are asking for too big a jump given the target audience.
I'm not really asking them to do anything. :)
I listed the available setups, explained their security advantages and disadvantages, and ranked their overall security relative to each other.
You are free to do whatever you want to do. If you have one computer and you can't get rid of Windows, then booting into Tails or running Whonix on it is certainly safer than running just the browser bundle.
-
I'm not really asking them to do anything. :)
I know. I'm just debating this point so that maybe some users can understand what I mean.
I listed the available setups, explained their security advantages and disadvantages, and ranked their overall security relative to each other.
I'm arguing about the fact that you put the Whonix Gateway on a Windows host in the insecure category just because of Windows, but it is not necessarily so; it depends on the circumstances. Given how many newcomers use Windows, it is important, imo, that they can understand how to better use that OS instead of just being told "it's insecure to use it", because if the change is too great it will just turn them away. Understanding how they can use what they know in the best way possible is a smaller jump than going immediately to uncharted ground at the beginning; it is a smoother learning curve, and as such it is something that lets them think "oh yes, I can do this without many problems to increase my security" instead of doing nothing because the jump required is too high.
You are free to do whatever you want to do. If you have one computer and you can't get rid of Windows, then booting into Tails or running Whonix on it is certainly safer than running just the browser bundle.
That's what I'm saying, but putting the exact same setup into "insecure" just because of the OS can seem too high a bar for some newcomers and can put them off. For this reason I think it's better that they learn how to use what they already use, but in a better way, at the beginning, and then improve further as time passes. It is one thing to ask someone who has never jumped to clear 5 meters on the first try; it is another to ask him to jump 2 meters and go on from there.
-
We all seem to agree that network layer attacks are harder than application layer attacks, which is why I focused on the application layer in my guide. I still have my doubts about the effectiveness of the hidden service deanonymization attacks. We'll see what intel comes out of the Marques case. If you think those attacks are effective and they didn't identify the FH server through an attack on the hidden service, you have to explain why. Even longterm entry guards and Tor over Tor only slow down the attack. kmf calculated it increases the time of the 2006 attack from 1-2 hours to about 40 days, but they were investigating FH for a year, so why didn't they do it?
I agree. It would be extremely difficult, but the overall long-term risk is still present. Plus, to the end user, the risk from a hidden service being exploited mostly turns into an application-level client security problem. If (insert whoever you're afraid of) gets control of (your favorite hidden service), most of your risk is them backdooring it, and that leads either to a scenario where a user has self-incriminated (there are cleartext messages on the exploited service, etc. that identify the user) or to the hidden service attacking clients via exploits.
Sure you can. For #5, get people to run more relays (see the guide I just posted :) ). For #6, diversify the network outside of the cooperating intelligence agencies zone, which is my main suggestion in the relay guide.
Yes, adding more relays helps against #5. They ratchet the cost of Sybil attacks up. But to be clear, a hundred new relays doesn't change the risk all that much. A hundred thousand new relays does. As an individual user, there's not much you can do about it today. And the risk stays relatively static, but the cost to exploit keeps going up. I think that last paper where they leveraged the bandwidth calculation on stacked Tor nodes was operating in the $500-1000 range for hosting, if I remember right.
Diversity helps #6, but I can't imagine the magic combination of routes you'd need to actually defeat it consistently. The NSA should have visibility into any US links they want, and should be able to horse-trade or coerce for views of other links. If you could somehow balance the links between multiple spheres of influence (US, Russia, China, ?) you could make their jobs much harder. But again, as an everyday user of Tor and possibly hidden services, it's just a base level of risk that's present. You probably can't personally do enough to change your risk. But it's a very small, very mild risk in the grand scheme of things.
Compared to the risk of mailing drugs around the world using the postal system, or trading CP, or leaking US military secrets, #5 & #6 are negligible levels of risk.
-
I'm arguing about the fact that you put the Whonix Gateway on a Windows host in the insecure category just because of Windows, but it is not necessarily so; it depends on the circumstances.
#8 means the Workstation and Gateway on the same Windows host, otherwise you have physical isolation, which is better.
In any case, I removed the line saying it was insecure.
-
We all seem to agree that network layer attacks are harder than application layer attacks, which is why I focused on the application layer in my guide. I still have my doubts about the effectiveness of the hidden service deanonymization attacks. We'll see what intel comes out of the Marques case. If you think those attacks are effective and they didn't identify the FH server through an attack on the hidden service, you have to explain why. Even longterm entry guards and Tor over Tor only slow down the attack. kmf calculated it increases the time of the 2006 attack from 1-2 hours to about 40 days, but they were investigating FH for a year, so why didn't they do it?
There is no doubt about the hidden service deanonymization attacks; they have been carried out on the live network and they work. Hidden services have crap anonymity: they are traced to their entry guards in no time at all, and then it is a single court order (at most) from that point on to get the real IP address. And it's even worse than that, because in reality the hidden service has THREE entry guards, each of which can be quickly located and each of which can be used to obtain the hidden service's real IP address. For all we know it took the FBI 5 years to even figure out that this attack is possible, and I wouldn't be at all surprised if they traced his hidden service with this attack, then put him under passive surveillance and used a timing correlation attack to ID him as the FH admin after he made the Tor Bank post. Then, two weeks of paperwork later, they raided him. That is one of my top theories. Then they could have deanonymized anyone who accessed FH whose Tor entry guard they owned. They did not need to do only application layer attacks; that is simply all we know about. They were positioned for one half of a timing attack against anyone accessing the FH server; anyone who used a bad entry guard to connect to FH during that time would be deanonymized just as much as anyone who was pwnt by the javascript. There is a good chance they used traffic analysis as well as application layer attacks, and pwnt those who used their entry guards as well as those who had a vulnerable browser and OS targeted by the payload.
Application layer attacks are a big worry but direct attacks on Tor are also a big worry. At least with application layer attacks we can use things like isolation to protect from them, direct attacks on Tor are even more worrying because there isn't a whole lot we can do short of hacking the Tor source code, and even if we make Tor as ideal as possible it is still limited by its fundamental design.
Sure you can. For #5, get people to run more relays (see the guide I just posted :) ). For #6, diversify the network outside of the cooperating intelligence agencies zone, which is my main suggestion in the relay guide.
Doesn't matter where the relays are; if you are in the US, your traffic always enters through networks the NSA monitors.
But, while some of us will head miles down Paranoia Lane, it's also important to realize that consistently executing good (even if simple) security practice is often worth more than an elaborate setup you don't really understand the nuances of. For all of the elaborate mechanisms in this thread and others, I have to wonder how many users here would be better off just booting Tails from USB, making sure Javascript is off, and having one hell of a password on their persistent volume.
None of them would be better off with Tails in regard to anything other than possibly forensic analysis, if they don't boot Tails with a persistent volume. Qubes and Whonix are superior when it comes to protection from essentially all other forms of attack. And I really do understand the nuances of computer security; I have studied it for many years and continue to do so.
-
why would they run Whonix on Windows anyway? They have all the more freedom to install Linux and run Whonix on it, or turn that computer into a Whonix Workstation with physical isolation (using Tor on the router or a middle box as the Gateway). No part of SR requires Windows.
Because, as you expressly said in this tutorial and I have seen myself, the large majority (probably about 80%) of SR users (and probably even more when the Forbes article comes out and a flood of new people goes to the silkroadlink.com site) use a Windows host + the Tor bundle and that's it. Now if a person like that wants to increase security and has a notebook, the last thing he wants is to use it only for SR with Linux (as you said yourself here); on the contrary, he would quite willingly use that notebook only for secure home applications on Windows + a Whonix guest to go to SR with. In this way the notebook will not be "wasted" on SR alone, but SR would still be its primary use.
Windows is the OS they are used to and feel right at home with; the notebook is one they can spare, but not naturally and totally for SR alone. They would still like to do the usual home applications with it and/or surf the web a bit (while being careful not to do insecure operations on it), and setting up a Whonix guest is really easy to do, even easier than setting up Tails, with all the benefits of having a notebook you can use for other secure things (good luck getting the usual guy to do the same with a Linux host).
What I'm saying is that I understand perfectly what you intend with your tutorial, but you are asking for too big a jump given the target audience. If you ask the usual user who has just Windows + Tor to set up a notebook with a host OS they don't know how to use, you are asking them to literally use that notebook only for SR, and that's a "waste" many would not accept for what they think the matter is worth (wrongly, but that's the burden of the thing). If instead the jump is to use the notebook with Windows only in a certain way (so they can still run many applications there and not "waste" the notebook) and use a Whonix guest to go to SR, they will increase their usual security by 1000% and the jump will not be so big as to make them think "to hell with this, I'm not going to do something like that for this paranoia".
Sure, if people feel a compulsion to use Windows they can do it, use Whonix, and be much better off than using the TBB by itself. We are not asking people to do anything; we are telling them what their best options are.
-
I'm arguing about the fact that you put the Whonix Gateway on a Windows host in the insecure category just because of Windows, but it is not necessarily so; it depends on the circumstances. Given how many newcomers use Windows, it is important, imo, that they can understand how to better use that OS instead of just being told "it's insecure to use it", because if the change is too great it will just turn them away. Understanding how they can use what they know in the best way possible is a smaller jump than going immediately to uncharted ground at the beginning; it is a smoother learning curve, and as such it is something that lets them think "oh yes, I can do this without many problems to increase my security" instead of doing nothing because the jump required is too high.
This is the logic the Tor developers went with when they decided to leave javascript enabled. Oh, new users won't know to turn javascript on if they need it, and so much of the internet needs javascript, and there are other ways to be attacked anyway. So they left javascript on to cater to the noobs, and the noobs got fucked by it, since the people who know to harden their browsers turned it off manually. There is a line between easy to use and secure, and when people head too far toward easy to use they get pwnt. We should not cater our tutorials to people who do not want to be secure. If they want to be less secure than we know how to be, they can still be more secure than the average user. Using Whonix from Windows is much more secure than using the TBB alone. Using Tails could be seen as an improvement as well, and certainly would have been for users accessing FH when it was pwnt. Users can pick their own trade-offs, but we should always suggest the most secure solutions, just like Tor Project should have had javascript disabled by default. People warned them months prior to the FH attack that having javascript on by default was going to lead to compromise of users, and they always waved their hands talking about how people want to watch videos of cats on youtube.
-
Sure you can. For #5, get people to run more relays (see the guide I just posted :) ). For #6, diversify the network outside of the cooperating intelligence agencies zone, which is my main suggestion in the relay guide.
Yes, adding more relays helps against #5. They ratchet the cost of Sybil attacks up. But to be clear, a hundred new relays doesn't change the risk all that much. A hundred thousand new relays does.
Since relay selection is weighted by bandwidth, adding a few hundred high bandwidth relays to the network to run a successful Sybil attack is also hard, especially if you don't want people to notice. When the number of relays jumped from 3500 to 3800 in a day last month, I know people who shut down their hidden services. They noticed. (That was probably a false positive, just a burst of interest in running relays.)
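To see why bandwidth weighting matters here, a toy sketch (this is not Tor's actual path-selection code, just the weighting idea; all numbers are made up):

```python
import random

random.seed(1)

# Toy network: 3500 honest relays, each with bandwidth 100.
# Relay selection probability is proportional to advertised bandwidth.
honest = [("honest%d" % i, 100) for i in range(3500)]

def pick_relay(relays):
    """Pick one relay, weighted by bandwidth."""
    total = sum(bw for _, bw in relays)
    r = random.uniform(0, total)
    for name, bw in relays:
        r -= bw
        if r <= 0:
            return name
    return relays[-1][0]

def sybil_fraction(n_sybils, sybil_bw, trials=2000):
    """Estimate the fraction of circuits that land on attacker relays."""
    relays = honest + [("sybil%d" % i, sybil_bw) for i in range(n_sybils)]
    picks = sum(pick_relay(relays).startswith("sybil") for _ in range(trials))
    return picks / trials

# 300 low-bandwidth sybils barely move the needle; 300 sybils with
# 10x the honest relays' bandwidth capture a large share of circuits.
print(sybil_fraction(300, 10))    # small fraction, around 0.01
print(sybil_fraction(300, 1000))  # large fraction, around 0.45
```

The point of the sketch: the attacker's catch rate tracks their share of total bandwidth, not their share of the relay count, which is why a few hundred high-bandwidth relays appearing at once is the thing people watch for.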
As an individual user, there's not much you can do about it today. And the risk stays relatively static, but the cost to exploit keeps going up. I think that last paper where they leveraged the bandwidth calculation on stacked Tor nodes was operating in the $500-1000 range for hosting, if I remember right.
The paper from a few months ago presented an attack on hidden services, which cost $11,000 and took 8 months to achieve a 90% detection rate.
It did not present an attack on Tor users, although the implications were that you could run a similar attack on the users of a hidden service if you become the hidden service's HSDir. Instead of rotating in as one of the hidden service's entry guards, you rotate in as a user's entry guard. With a large enough user base (like SR), you are guaranteed to pwn a small random sample of the user base pretty quickly, but then what? All you know is that those people visited the web site. Journalists, curious people and even other LE agencies do that all the time. You'd be expending large amounts of resources on traditional investigations of a lot of dead leads and small time buyers who don't matter.
Also, users can mitigate the attack by increasing their entry guard rotation period. A few permanent bridges completely stop the attack.
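The rotation trade-off is easy to quantify. Under the simplifying assumption that each guard choice is independent and the attacker controls a fraction f of guard bandwidth, the chance of having picked at least one bad guard after n rotations is 1 - (1 - f)^n:

```python
def p_bad_guard(f, rotations):
    """Probability of choosing at least one attacker-controlled guard
    after the given number of guard rotations, assuming independent
    choices and attacker fraction f of guard bandwidth."""
    return 1 - (1 - f) ** rotations

# With 1% malicious guard bandwidth: rotating monthly for three years
# vs. keeping one permanent guard (or a fixed bridge).
print(p_bad_guard(0.01, 36))  # ~0.30
print(p_bad_guard(0.01, 1))   # 0.01
```

That is the argument in one line: every rotation is another draw from the urn, so longer rotation periods (or permanent bridges, which never draw again) shrink the attacker's cumulative odds.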
Diversity helps #6, but I can't imagine the magic combination of routes you'd need to actually defeat it consistently. The NSA should have visibility into any US links they want, and should be able to horse-trade or coerce for views of other links. If you could somehow balance the links between multiple spheres of influence (US, Russia, China, ?) you could make their jobs much harder. But again, as an everyday user of Tor and possibly hidden services, it's just a base level of risk that's present. You probably can't personally do enough to change your risk. But it's a very small, very mild risk in the grand scheme of things.
Compared to the risk of mailing drugs around the world using the postal system, or trading CP, or leaking US military secrets, #5 & #6 are negligible levels of risk.
Exactly. Every busted buyer that we know about was busted because of drugs in the mail. Every busted vendor that we know about was busted because of IRL dealing or drugs in the mail. We should keep our focus on the big threats.
We also just witnessed an application layer exploit that probably deanonymized thousands of FH users, so I consider that a big threat now.
-
This is the logic the Tor developers went with when they decided to leave javascript enabled. Oh, new users won't know to turn javascript on if they need it, and so much of the internet needs javascript, and there are other ways to be attacked anyway. So they left javascript on to cater to the noobs, and the noobs got fucked by it, since the people who know to harden their browsers turned it off manually. There is a line between easy to use and secure, and when people head too far toward easy to use they get pwnt. We should not cater our tutorials to people who do not want to be secure. If they want to be less secure than we know how to be, they can still be more secure than the average user
Excellent point.
-
The application layer attack that we witnessed is much worse than any network layer attack that we know about. All of the network layer attacks against hidden service users are statistical attacks that identify a random sample of users (although one could argue it's not completely random if technically savvy people mitigate it while less savvy people don't). If LE hacked the SR server and distributed a similar exploit, they could correlate IP addresses with specific users, because they would serve cookies to people who are logged into their accounts. So they wouldn't have to waste time investigating OzFreelancer or somebody who has never made a purchase. They could directly correlate IP addresses to the top vendors. That's why it's much more dangerous, and top vendors absolutely must protect themselves with more secure setups than TBB on Windows.
-
Doesn't matter where the relays are; if you are in the US, your traffic always enters through networks the NSA monitors.
If the hidden service you're visiting resides outside of the NSA-Euro surveillance zone, then the more of the Tor network that resides outside of it, the more likely the hidden service is to pick entry guards that are outside of it (assuming the operator takes no steps to select entry points outside of it). In that case, they can only watch one end of the connection, and fingerprinting a triple-encrypted circuit (or more if you use a VPN or SSH tunneling) is all they have. I expect to see the other network layer attacks successfully deployed in the wild before I see that one.
-
The application layer attack that we witnessed is much worse than any network layer attack that we know about. All of the network layer attacks against hidden service users are statistical attacks that identify a random sample of users (although one could argue it's not completely random if technically savvy people mitigate it while less savvy people don't). If LE hacked the SR server and distributed a similar exploit, they could correlate IP addresses with specific users, because they would serve cookies to people who are logged into their accounts. So they wouldn't have to waste time investigating OzFreelancer or somebody who has never made a purchase. They could directly correlate IP addresses to the top vendors. That's why it's much more dangerous, and top vendors absolutely must protect themselves with more secure setups than TBB on Windows.
If LE hacked the SR server they could do traffic analysis and link anyone who uses their entry guards to individual accounts, without cookies. I am not sure that the application layer attack is really that much worse than network layer attacks. Certainly application layer attacks are more within our control to defend against, and network layer attacks are statistical probabilities over time. But both are serious threats. The feds got some subset of people who visited FH by hacking them from FH, but the feds could also get some subset of people who visited FH by owning their entry guards and doing end point traffic correlation. For all we know they did both in this case; it is just easier to identify application layer attacks than it is to identify traffic analysis. And isolation etc. could have protected people from the application layer attack, but Tor itself is totally incapable of protecting people from correlation attacks if they have a bad entry node and go to a compromised server; at best it can decrease the probability that the victim has a bad entry guard by getting more good users to run relays. But what if the NSA does a passive attack and feeds the intelligence to the DEA? Then it no longer matters whether your entry guards are good, if you are in the USA, or your entry guards are, or your traffic passes through the USA on the way to your entry guards.
In the past I put more faith in Tor than I currently do, and was more worried about application layer attacks. And I did think the feds would do application layer attacks prior to traffic analysis attacks, and was apparently correct about it. Now I am worried about traffic analysis and application layer attacks, and I bet the feds start using both.
Application Attacks: Easier to add defenses that mitigate, theoretically possible but unrealistic to fully protect from
Traffic Analysis: Harder to add defenses that mitigate, theoretically possible but unrealistic to fully protect from
Application Attacks: More likely to be noticed
Traffic Analysis: Much less likely to be noticed
Application Attacks: Capable of taking full control of remote system and stealing private keys, plaintexts, etc
Traffic Analysis: Only capable of obtaining suspect IP address to a high degree of certainty
Application Attacks: Constantly evolving threat with no end in sight, new zero days all the time thousands and thousands waiting to be discovered, attacks are fully protected from shortly after they are discovered
Traffic Analysis: Largely understood, slowly evolving with few new attacks, old attacks are rarely able to be fully protected from
Application Attacks: Security advances are making application attacks more and more difficult
Traffic Analysis: Passive surveillance is making traffic analysis harder and harder to protect from
Application Attacks: Are more likely to deanonymize all *vulnerable* users immediately
Traffic Analysis: Is more likely to slowly deanonymize *ALL* users over time
Application Attacks: Are more likely to target a subset of users rather than all users, but likely to compromise all targeted users
Traffic Analysis: Is more likely to target all users but only compromise a subset of targeted users
Application Attacks: Are trivial and cheap to do against users who do not stay on their toes and keep fully patched
Traffic Analysis: Is not usually easier to do against users who are not fully patched, but it can be (ie: the introduction of guards)
Application Attacks: Are expensive to do against users who stay fully patched and very expensive to do against users who stay fully patched and use layers of isolation and other defense mechanisms, cost increases substantially as subset of users to target increases.
Traffic Analysis: Can be made more expensive to do in some cases but there is a hard and low ceiling tied to the anonymity technology being used, is usually roughly as effective against all users regardless of their configuration (with some variance but not nearly as much as compared to application attacks), cost correlates directly with time, the more the attacker spends the less time they need to wait to deanonymize their targets, the less they spend the longer they need to wait
Application Attacks: Quickly identify all vulnerable users but become less effective over time as users patch and awareness spreads
Traffic Analysis: Identifies targets with various speed depending on amount spent on it, the more time that passes the more targets are identified
Application Attacks: Have a one time cost to obtain but become less valuable as time passes
Traffic Analysis: Has continuous cost to maintain but becomes more valuable as time passes
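For intuition on the traffic analysis side of that list: an end-to-end correlation attack can be as simple as binning packet counts per second at both ends of a suspected flow and correlating the two series. A minimal sketch with synthetic traffic (the numbers and noise model are made up for illustration):

```python
import random

random.seed(7)

# Synthetic per-second packet counts observed at a user's entry guard.
entry = [random.randint(0, 20) for _ in range(300)]
# What the far end sees: the same pattern with a little noise.
exit_same = [max(0, c + random.randint(-2, 2)) for c in entry]
# An unrelated flow, for comparison.
exit_other = [random.randint(0, 20) for _ in range(300)]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# The matching flow correlates strongly; the unrelated one does not.
print(pearson(entry, exit_same))   # close to 1
print(pearson(entry, exit_other))  # close to 0
```

This is why watching both ends beats any amount of encryption in the middle: the attacker never reads payloads, only timing and volume, which is exactly the "metadata, not wiretapping" framing discussed below.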
-
Fuck that, I'm not taking chances with Windows. I think a laptop running a Linux OS used as the router is the very least a vendor should have. I don't know shit about Linux but I'll learn it within a week or two and have better peace of mind.
In regards to the exploit, I think a couple of things should be noted. After going through the code, a couple of things seemed odd. First of all, that code was obfuscated like crazy. Instead of using decimal values for return calls they used either binary or hex, which makes it a real pain in the ass to follow the calls and determine the values for code that's over 1500 lines in length total.
I believe the NSA did have a hand in writing that code, though. If you take a look at their website and view the source you'll see heavy use of javascript, but the way it's written is an old-school style. It's kinda hard to explain, but the exploit script and the code I've seen on the NSA's normal webpages have, to me, a similar coding style. Both are inefficient and verbose. Also, many people believed that it came from NSA servers; the other theory is that this was done on purpose as a form of calling card, like 'hey, we're here now, we can take this shit down'. That's kinda off topic, but just my thoughts I wanted to share. On a scale of 1-10 that exploit was like a 4, maybe a 5, because it depended on the end user having 3 conditions met for the exploit to even work, thus any of those users that were compromised would have been better off using Tails like others here have mentioned already.
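On the hex/binary obfuscation point: rewriting the numeric literals to decimal is a mechanical first pass when reading code like that. A rough sketch (a real deobfuscator would parse the JavaScript properly rather than lean on regexes):

```python
import re

def decimalize(js_source):
    """Replace hex (0x...) and binary (0b...) literals with decimal,
    making obfuscated numeric constants easier to follow."""
    def repl(m):
        # int(s, 0) honors the 0x/0b prefix in the matched literal.
        return str(int(m.group(0), 0))
    return re.sub(r'\b0[xX][0-9a-fA-F]+\b|\b0[bB][01]+\b', repl, js_source)

print(decimalize("var ret = 0x41414141; var flag = 0b1010;"))
# var ret = 1094795585; var flag = 10;
```

Running each literal through a pass like this turns 1500 lines of hex-soup into something you can at least trace call values through.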
-
kmfkewm - I'm having a hard time disagreeing with your concept of the long-term threat of traffic analysis. The attractive part from a government perspective is that it's probably not "wiretapping"; it's the equivalent of a pen register. "We're just looking at metadata, we never actually look at the user's payload. Just where it's going, when it's going there, how it fragments, and what the sequencing looks like." Hit the ISP/hosting company serving the bandwidth up to the node with an NSL for NetFlow data. Shove it into a large database. Mine, rinse, and repeat.
-
Thought about this some more. Assuming you're talking about identifying users accessing hidden services, the key is the attackers' ability to successfully deanonymize the hidden service. They have to be able to monitor the traffic going to the hidden service to correlate it with monitored user guard node traffic.
And if they can deanonymize the hidden service (AND intercept traffic directly to it for correlation) they'd just be choosing not to bring it down in order to perform traffic analysis. Or they'd perform that analysis while they were waiting to bring it down. I think I'm finally understanding your theory about the FH attack.
It's a fairly difficult scenario, though. They'd need to be able to monitor the traffic to the hidden service, but not be able to bring it down.
The payoff from a LE perspective would depend on the target. For something like SR, I'm not sure they get any value from long-term traffic analysis if they have the option to just bring it down and call a press conference (or backdoor it)...because looking at websites serving up a variety of possibly illegal goods isn't (easily) an overt criminal act. For FH, I'm betting the broad-ranging CP definitions and laws might make just accessing some of the hosted sites illegal, regardless of what was/wasn't transferred/downloaded.
Long-term, the solution is better hidden service anonymization (which is difficult). Because for broad traffic analysis, you can only correlate guard node traffic with the destination activities to get any useful information. Leaving either correlating with exit nodes or hidden services.
-
Here's another thought.
Why not use tor to connect to a remote shell sponsored by PRQ (clearnet: http://en.wikipedia.org/wiki/PRQ )
Then connect to your drug dealing site. You can subscribe to PRQ anonymously and pay with bitcoin...
I don't think the DEA or the FBI could get access to their logs and the Swedish courts have more important things to worry about.
Application Attacks: More likely to be noticed
Traffic Analysis: Much less likely to be noticed
All of your traffic would be coming from a PRQ anonymously registered IP address...
-
Subbed.
-
I just started playing around with Qubes OS and am quite impressed by it. There's an option for full disk encryption during installation, which I had not realized. Some of the complaints I read about it said it didn't have full disk encryption, so those complaints are out of date. You can even install it on a thumb drive and run off that, so you can test your hardware before installing it on the hard drive.
I will probably create a separate thread with a review of Qubes.
-
Thought about this some more. Assuming you're talking about identifying users accessing hidden services, the key is the attackers' ability to successfully deanonymize the hidden service. They have to be able to monitor the traffic going to the hidden service to correlate it with monitored user guard node traffic.
And if they can deanonymize the hidden service (AND intercept traffic directly to it for correlation) they'd just be choosing not to bring it down in order to perform traffic analysis. Or they'd perform that analysis while they were waiting to bring it down. I think I'm finally understanding your theory about the FH attack.
It's a fairly difficult scenario, though. They'd need to be able to monitor the traffic to the hidden service, but not be able to bring it down.
Most of the attacks on the Tor network that I've heard about involve surveillance at the edges. You have to run one of your target's entry nodes, and then you can pursue several different attacks.
There are more complex attacks, like brute forcing a relay identity key so it is close to the descriptor ID, so you can become a service directory for the hidden service. That's what Donncha did and it allowed him to count the number of descriptor fetches for Silk Road and other hidden services. That's how we know that Silk Road is about 100 times more popular than Atlantis, because it got 100 times as many descriptor fetches in the 24 hours that Donncha counted them. ;)
If you run the service directory, you still need to become an entry node for your targets. Tor clients keep entry nodes for a month and semi-randomly select new ones. That's why most of these attacks are statistical in nature. They depend on randomly being selected by the target. They are expensive and time consuming if you have a specific target in mind, like a hidden service, but if your target is "all Silk Road users", it's easy to pwn a small random sample of them, because out of tens of thousands of people, some of them will choose your entry guard very quickly.
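A back-of-the-envelope sketch of that "small random sample" point (the function and numbers below are mine, and real guard selection is bandwidth-weighted, so treat this as illustrative only):

```python
def expected_pwned_users(n_users, guard_bw_fraction, guards_per_client=3):
    # Probability a single client misses ALL of the attacker's guards,
    # assuming each guard pick independently lands on the attacker
    # with probability guard_bw_fraction.
    p_miss = (1 - guard_bw_fraction) ** guards_per_client
    # Expected number of clients that picked at least one bad guard.
    return n_users * (1 - p_miss)

# With 50,000 users and an attacker running 1% of guard capacity,
# roughly 1,500 users end up behind a hostile guard.
print(expected_pwned_users(50000, 0.01))
```

Against a single hidden service, 1% of guard capacity buys you almost nothing; against tens of thousands of users, it buys you a steady stream of victims.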
The payoff from a LE perspective would depend on the target. For something like SR, I'm not sure they get any value from long-term traffic analysis if they have the option to just bring it down and call a press conference
I don't think LE would be satisfied with simply bringing the site down. For one, DPR almost certainly has backups and could redeploy the site elsewhere within hours. They would want first to identify DPR and other admins, and second to identify top vendors. That seemed to be their MO in the FH attack -- to identify as many people visiting CP sites as possible, but more importantly to identify the admins of those sites and perhaps the accounts that posted a lot of content (ie, major CP distributors).
Long-term, the solution is better hidden service anonymization (which is difficult).
Yes, definitely. The Tor developers have said that hidden services are experimental. They are a proof of concept. Nobody is getting paid right now to improve the hidden service protocol and make it robust against attacks. The Tor developers work on things that people pay them to work on. They have sponsors who give them specific deliverables. Mostly they are getting paid to work on things that help people in censored countries. That's why they push for more bridges and they've created the obfsproxy protocol. We need to pool money or find someone with deep pockets to anonymously sponsor hidden service development. :)
-
Here's another thought.
Why not use tor to connect to a remote shell sponsored by PRQ (clearnet: http://en.wikipedia.org/wiki/PRQ )
Then connect to your drug dealing site. You can subscribe to PRQ anonymously and pay with bitcoin...
I don't think the DEA or the FBI could get access to their logs and the Swedish courts have more important things to worry about.
One of the Tor developers draws the distinction between "privacy by design" and "privacy by policy". Tor gives you privacy by design: it's difficult for someone to know who you are and what you are doing because of the design of the network. VPN providers offer privacy by policy. They "promise" not to log what you are doing, but you have no way to verify their claim; they could change their minds, or they could be compelled by their authorities to start logging.
What you seem to be promoting is privacy by red tape. :)
Julian Assange is not confident in the Swedish government's ability to resist the US government's demands, so I don't know if I would base my safety on that.
-
If you run the service directory, you still need to become an entry node for your targets. Tor clients keep entry nodes for a month and semi-randomly select new ones. That's why most of these attacks are statistical in nature. They depend on randomly being selected by the target. They are expensive and time consuming if you have a specific target in mind, like a hidden service, but if your target is "all Silk Road users", it's easy to pwn a small random sample of them, because out of tens of thousands of people, some of them will choose your entry guard very quickly.
I agree... but you have to have the hidden service side (if the attack is for "all SR users", they have to have SR) before owning the entry guard tells you more than "Sparky from Omaha is using Tor." Otherwise you still don't know who's using SR.
Yes, definitely. The Tor developers have said that hidden services are experimental. They are a proof of concept. Nobody is getting paid right now to improve the hidden service protocol and make it robust against attacks. The Tor developers work on things that people pay them to work on. They have sponsors who give them specific deliverables. Mostly they are getting paid to work on things that help people in censored countries. That's why they push for more bridges and they've created the obfsproxy protocol. We need to pool money or find someone with deep pockets to anonymously sponsor hidden service development. :)
Weird, I was reading that page the other day and had the same thought. If there are gazillions of dollars being made by hidden service providers (SR, etc), it's odd that nobody's funding one of their key ingredients. Either they don't view it as a significant risk or they can't fund it in a clandestine enough manner.
-
Weird, I was reading that page the other day and had the same thought. If there are gazillions of dollars being made by hidden service providers (SR, etc), it's odd that nobody's funding one of their key ingredients. Either they don't view it as a significant risk or they can't fund it in a clandestine enough manner.
I suspect it's the latter. The FBI would be very interested in someone who gave the Tor Project $100,000 to fund improvement of the hidden service protocol. They have anonymous sponsors, but only in the sense that they are not publicly known. The IRS certainly knows who they are, and the FBI could find out if they wanted to.
You can build your own defenses, of course. At the network layer, you could use persistent entry guards by way of private bridges. Even better would be layered entry guards, or layers of proxies before you get to your entry guards. You could even calculate the descriptor IDs in advance and run relays on anonymous VPSes with identity keys whose hashes closely match the descriptor IDs, so you would always be publishing your hidden service descriptor to service directories that are under your control.
At the application layer, you can harden your software and isolate it in VMs. I'm learning a lot about Xen at the moment, and I plan on experimenting with VM isolated LEMP stacks. :)
-
LOL, thanks. If you have any questions, don't be afraid to ask.
-
I suspect it's the latter. The FBI would be very interested in someone who gave the Tor Project $100,000 to fund improvement of the hidden service protocol. They have anonymous sponsors, but only in the sense that they are not publicly known. The IRS certainly knows who they are, and the FBI could find out if they wanted to.
That's my guess, too, but you wouldn't need to directly fund the Tor Project to improve hidden services. You could fund a developer privately... it's not like there aren't anonymous contributors to the Tor Project.
You can build your own defenses, of course. At the network layer, you could use persistent entry guards by way of private bridges. Even better would be layered entry guards, or layers of proxies before you get to your entry guards. You could even calculate the descriptor IDs in advance and run relays on anonymous VPSes with identity keys whose hashes closely match the descriptor IDs, so you would always be publishing your hidden service descriptor to service directories that are under your control.
I was always working under the assumption that all HSDir nodes had access to all hidden service descriptors (telling them who the Introduction Points for the HS were). I know there was a new feature added in the past year or so that uses a key so that only clients who know the key can find the Introduction Points for the service.. but that's a non-starter for "public" hidden sites, since all users would have to know the key.
If a hidden service can pick its own Introduction Points, you could certainly add a layer of security there. Obviously not by running your own IPs (because the IPs for a hidden service have to be discoverable), but by picking IPs that are more trustworthy to you for one reason or another.
At the application layer, you can harden your software and isolate it in VMs. I'm learning a lot about Xen at the moment, and I plan on experimenting with VM isolated LEMP stacks. :)
Hidden service hosting is one area where there's just not a lot of good information available. Sure, you can find howtos to get it running, but that's not the hard part. Nobody running high-load hidden sites is interested in sharing how they do it, for obvious reasons.
Personally, I wish we had an example of a popular, heavily-used site based around a hidden service that wasn't engaged in something that would get you thrown in jail somewhere. This forum is the closest thing to a community full of bright people that I've found on an onion site, and it's obviously closely aligned enough with SR that it gets maligned alongside SR. But if you want intelligent conversation and an address ending in ".onion", it's all I've found.
-
One more scenario
1. Hacked Wi-Fi access points (2-3 hacked APs)
2. Linux OS installed with appropriate security features enabled.
3. Virtual Machine with Linux installed.
4. Google some free VPN or VPN trials or buy 2 distinct VPNs in anonymous fashion.
5. On your host machine change mac address of your Wi-Fi, connect to hacked access point.
6. Connect to first VPN and launch Linux VM.
7. In Linux VM connect to the second VPN
Thus you'll be connected through two VPNs and hacked Wi-Fi.
This scenario is good when a site has Tor access blocked; if it doesn't, then just use Tor.
-
One more scenario
...
Thus you'll be connected through two VPNs and hacked Wi-Fi.
This scenario is good when a site has Tor access blocked; if it doesn't, then just use Tor.
Before I used two chained VPN providers, I'd think seriously about using Tor to connect to a single VPN provider that was purchased anonymously. Then they're blind to your true source.
Wifi from another location (i.e. not you) doesn't hurt, but once someone traces it back to that AP, it's not that hard of a mystery to solve in most cases. Depends on your environment.
If you ever find yourself at the point where "trusting a third-party not to keep logs" is one of your key security measures, it's time to rethink things.
-
We need to pool money or find someone with deep pockets to anonymously sponsor hidden service development. :)
How much money would it take to start making good progress on hidden services?
-
Can we trust truecrypt hidden partitions in which our "deeds" and Tor bundles are kept, assuming we erase all logs of access to those directories?
That's the hard part. Windows is a complex OS. Shit could be logged and cached all over the place. Look how many traces the browser bundle leaves behind, and it's a portable app. I wouldn't rest my security on my ability to erase my activities on Windows. Encrypt the entire hard drive if you want to hide your activities.
Hey there, props on creating this thread and providing so much useful info to the community! I am a vendor and as my business grows, so does my need for extra security, so I obviously have been reading a lot about how to keep my computer completely safe in case of a worst case scenario. I obviously have all the basics down: TOR browser bundle, my PGP encryption software/keys, and anything that could possibly link me to silkroad are saved on a USB stick that is TrueCrypt password protected, and the stick is so small I could literally swallow it if my door was ever kicked in. But from the sounds of it, it does not matter that I am not saving anything related to the road on my computer; it seems a lot of info is left behind?
I know my ISP can see I am using TOR obviously if I simply run it from my home connection, but I never do. I am fortunate enough to have access to many people's wifi that is not password protected, as anyone that lives in a city usually does, so I never run it using my home connection. Any outside-of-SR communication with partners in my business is done in very safe PGP encrypted messages using safe email services, and there is only one person I need to talk to on the telephone regarding business. We each have burner phones, the old ass ghetto style phones with no GPS or any bullshit like that, and they are literally only EVER used to call each other's burner phones. In each phone's whole existence (never more than 30 days) it only dials the one number, and still nothing incriminating is ever mentioned.
But I feel all I have down is the basics. I am going to be honest, I did not even set up the TrueCrypt USB... I tried doing it myself and could not even figure out how TrueCrypt worked :( which I am very embarrassed about lol. But the amount of knowledge I have learned about PGP and other types of encryption, TOR, bitcoins, laundering the bitcoins into clean cash, and everything else my job comes with, I learned very quickly, so I am very confident I can get my security down TIGHT, I just need a little help :) and am even willing to compensate anyone that can easily walk me through the few questions I have:
Firstly, you say a hard drive can be COMPLETELY encrypted. Does this literally mean if my computer is seized by law enforcement, NO info at all will be able to be seen by them? Can this be done with TrueCrypt? From what I hear it can, and not even the FBI can crack it....I would REALLY appreciate a step by step guide on totally encrypting my hard drive so I would not even have to worry if my computer ended up in the wrong hands....
I also need a recommendation on a good VPN to use as I know some (especially some of the free ones) cannot be trusted. I want to be 100% sure my ISP cannot even see I am running TOR. I understand they can still tell I am running a VPN, but tons of people run VPNs, and I just want to be 100% sure in a worst case scenario that even if my computer is seized, nothing will be able to be recovered linking me to the road.
I am seriously willing to compensate someone that can walk me through how to totally encrypt my hard drive; that is what I need help with the most. As I said, I don't even use my home connection to connect to TOR, so a VPN does not seem THAT important IMO, but maybe I am wrong.
Thank you SO much to anyone that actually takes the time to read my massive rant and I am very sorry it is so long! As I said, I will happily compensate anyone that can help :)
-
I was always working under the assumption that all HSDir nodes had access to all hidden service descriptors (telling them who the Introduction Points for the HS were). I know there was a new feature added in the past year or so that uses a key so that only clients who know the key can find the Introduction Points for the service.. but that's a non-starter for "public" hidden sites, since all users would have to know the key.
The second thing you're talking about is HiddenServiceAuthorizeClient in stealth mode, which requires a cookie/key/password to access the hidden service.
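For reference, stealth authorization is configured on the hidden service side with a couple of torrc lines roughly like this (the directory path and client names are placeholders):

```
HiddenServiceDir /var/lib/tor/hidden_service/
HiddenServicePort 80 127.0.0.1:80
HiddenServiceAuthorizeClient stealth alice,bob
```

Tor then generates a separate onion address and auth cookie per client, and each client adds a matching HidServAuth line to their own torrc. That's why it can't scale to a public site: every user needs their own credential.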
Descriptors are published using a distributed hash table type system. Donncha explains it well:
Tor hidden service desc_id's are calculated deterministically, and if there is no 'descriptor cookie' set in the hidden service Tor config, anyone can determine the desc_id's for any hidden service at any point in time. This is a requirement for the current hidden service protocol, as clients must calculate the current descriptor id to request hidden service descriptors from the HSDirs. The descriptor IDs are calculated as follows:
descriptor-id = H(permanent-id | H(time-period | descriptor-cookie | replica))
The replica is an integer, currently either 0 or 1 which will generate two separate descriptor ID’s, distributing the descriptor to two sets of 3 consecutive nodes in the DHT. The permanent-id is derived from the service public key. The hash function is SHA1.
time-period = (current-time + permanent-id-byte * 86400 / 256) / 86400
The time-period changes every 24 hours. The first byte of the permanent_id is added to make sure the hidden services do not all try to update their descriptors at the same time.
identity-digest = H(server-identity-key)
The identity-digest is the SHA1 hash of the public key generated from the secret_id_key file in Tor's keys directory. Normally it should never change for a node, as it is used to determine the router's long-term fingerprint, but the key is completely user controlled.
A HSDir is responsible if it is one of the three HSDirs after the calculated desc id in a descending list of all nodes in the Tor consensus with the HSDir flag, sorted by their identity digest. The HS descriptor is published to two replicas (two sets of 3 HSDirs at different points of the router list) based on the two descriptor ids generated as a result of the '0' or '1' replica value in the descriptor id hash calculation.
Source: http://donncha.is/2013/05/trawling-tor-hidden-services/
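The quoted formulas are easy to reproduce. Here's a rough Python sketch of the calculation (the function name is mine; the descriptor-cookie is left empty, as it is for public hidden services):

```python
import base64
import hashlib
import struct
import time

def calc_descriptor_ids(onion_address, now=None, descriptor_cookie=b""):
    # The 16-character onion address is the base32 encoding of the
    # 80-bit permanent-id (truncated SHA1 of the service public key).
    permanent_id = base64.b32decode(onion_address.upper())
    if now is None:
        now = int(time.time())
    # time-period = (current-time + permanent-id-byte * 86400 / 256) / 86400
    time_period = (now + (permanent_id[0] * 86400) // 256) // 86400
    ids = []
    for replica in (0, 1):
        # secret-id-part = H(time-period | descriptor-cookie | replica)
        secret = hashlib.sha1(
            struct.pack(">I", time_period) + descriptor_cookie + bytes([replica])
        ).digest()
        # descriptor-id = H(permanent-id | secret-id-part)
        ids.append(hashlib.sha1(permanent_id + secret).hexdigest())
    return ids

# Anyone can compute today's two descriptor IDs for a public service,
# e.g. calc_descriptor_ids("silkroadvb5piz3r"), which is exactly what
# makes the brute-forced-HSDir-position attack possible.
```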
Hidden service hosting is one area where there's just not a lot of good information available. Sure, you can find howtos to get it running, but that's not the hard part. Nobody running high-load hidden sites is interested in sharing how they do it, for obvious reasons.
Yep. In terms of optimizing for performance, the Torservers Wiki has a lot of good info for high bandwidth relays that also applies to hidden services, but in terms of security, there isn't much out there. I have seen one of the Tor developers say that if he ran a hidden service, he would put it in a VM so it doesn't know the public IP address of the server, and other people who have run hidden services support isolation techniques. Beyond that, you are left to figure it out yourself.
Personally, I wish we had an example of a popular, heavily-used site based around a hidden service that wasn't engaged in something that would get you thrown in jail somewhere. This forum is the closest thing to a community full of bright people that I've found on an onion site, and it's obviously closely aligned enough with SR that it gets maligned alongside SR. But if you want intelligent conversation and an address ending in ".onion", it's all I've found.
Agreed. There have been plenty of attempts at starting forums in onionland. Most of them never got more than a few users and went offline pretty quickly. There was Onionforum, which lasted about 5 years, but even it only had a few thousand users at its height, not tens of thousands like this one.
Here's a screenshot of it: http://toxicity.myftp.org/Share/Screenshots/OnionForum.png
That was considered the nexus of onionland activity in its day, and we have eclipsed it by one or two orders of magnitude.
Despite the spammers and trolls, this is a great forum. Personally, I came for the drugs and stayed for the community. :)
-
I know my ISP can see I am using TOR obviously if I simply run it from my home connection, but I never do. I am fortunate enough to have access to many people's wifi that is not password protected, as anyone that lives in a city usually does, so I never run it using my home connection
You should spoof your MAC address.
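If you want to script that, here's a small sketch for generating a plausible spoofed address (the helper name is mine; you'd assign the result with a tool like macchanger or `ip link set <iface> address <mac>` before associating):

```python
import random

def random_mac():
    first = random.randint(0, 255)
    # Set the locally-administered bit and clear the multicast bit,
    # so the result looks like a valid unicast MAC address.
    first = (first | 0x02) & 0xFE
    octets = [first] + [random.randint(0, 255) for _ in range(5)]
    return ":".join("%02x" % o for o in octets)

print(random_mac())  # e.g. "0a:1f:3c:9d:42:7e"
```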
Firstly, you say a hard drive can be COMPLETELY encrypted. Does this literally mean if my computer is seized by law enforcement, NO info at all will be able to be seen by them? Can this be done with TrueCrypt? From what I hear it can, and not even the FBI can crack it....I would REALLY appreciate a step by step guide on totally encrypting my hard drive so I would not even have to worry if my computer ended up in the wrong hands....
Almost no info can be seen by LE. There must be some unencrypted part that runs and decrypts the rest of the drive, so LE can know your drive is encrypted with Truecrypt, but they won't know much else about it.
Truecrypt can do full disk encryption on a running Windows system. If you want a guide, they have extensive documentation:
http://www.truecrypt.org/docs/system-encryption
I also need a recommendation on a good VPN to use as I know some (especially some of the free ones) cannot be trusted. I want to be 100% sure my ISP cannot even see I am running TOR. I understand they can still tell I am running a VPN, but tons of people run VPNs, and I just want to be 100% sure in a worst case scenario that even if my computer is seized, nothing will be able to be recovered linking me to the road.
A VPN will hide your Tor use from someone who is fishing for Tor users, but it probably won't hide your Tor use from someone who is specifically targeting you. Then again, if someone is specifically targeting you, revealing that you use Tor is the least of your problems.
As for specific providers, it's a bad idea to mention any on this forum. You'll have to figure that out for yourself or talk to people privately.
-
The second thing you're talking about is HiddenServiceAuthorizeClient in stealth mode, which requires a cookie/key/password to access the hidden service.
Descriptors are published using a distributed hash table type system. Donncha explains it well:
...
Source: http://donncha.is/2013/05/trawling-tor-hidden-services/
Fantastic link.. I think I mostly get it now. And it reaffirms my belief that good Introduction Point selection is a key ingredient in long-term hidden server anonymity. :)
Yep. In terms of optimizing for performance, the Torservers Wiki has a lot of good info for high bandwidth relays that also applies to hidden services, but in terms of security, there isn't much out there. I have seen one of the Tor developers say that if he ran a hidden service, he would put it in a VM so it doesn't know the public IP address of the server, and other people who have run hidden services support isolation techniques. Beyond that, you are left to figure it out yourself.
The architecture part actually shouldn't be that hard to build, it's maintaining it that's hard. Isolate the server(s) physically from the Tor instance, probably on a DMZ-style isolated leg off of the Tor server advertising the hidden service. Clean server hardware that can't be traced via supply chain. Private IP space only, attention to detail on L2-4 filtering to ensure all server traffic goes out via Tor.
I'm not even sure the server should be allowed to send a TCP packet with a SYN bit set, unless you want it updating over the network, which could be managed another way (updates on removable storage, connected to the server when updating, watching traceability of unique identifiers on the storage device too). The server is built cleanly, never touching non-Tor space. Make sure remote management can't leak source IPs at connect (probably a physical console cable from a term server).
Virtualizing on top of an IP-less hypervisor gives you some protection against hardware fingerprinting if compromised. Nothing else on the segment to leak addresses into the ARP table, just the Tor gateway MAC and the server's MAC. Both of which have to be fake, and the hardware MAC has to be untraceable as well. Database servers could be on another isolated network behind the server DMZ, same basic approach. Accept db connections in, no connections back out.
An epic pain in the ass to maintain. The day-to-day IT side of it is what would kill you. Backups, which are needed because it's open season on hidden services (what, are you gonna block their source IP, or file a complaint?). For every fifty fucktards trying out menu options in Kali there's going to be somebody who knows what they're doing. You can snapshot from the VM, but you still have to have a way to be sure you can still trust your hypervisor and restore/rebuild when you can't.
One thing that I've yet to figure out is an effective way to configure a filter between a Tor instance and the Internet to allow ONLY Tor traffic. Everyone solves the problem with iptables on the node running Tor itself, using a separate tor user and matching on which user is sending the traffic, but you really need something in front of it making sure it's only Tor traffic exiting the network. Guess you could script a regular download of relays and build rules that way, but that doesn't address an attacker just running a Tor relay and, upon hacking the node, sending a packet of a known value back to the relay on the Tor port. For Tor clients, you can filter to bridges if you're using them, but from a hidden service perspective, beats me.
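The "script a regular download of relays" idea might start with something like this rough Python sketch, which parses the `r`/`s` lines of a cached consensus file (the function name and flag filter are mine; the line layout is from Tor's dir-spec):

```python
def relay_or_addrs(consensus_text, required_flag="Guard"):
    # Each relay appears in the consensus as an "r" line:
    #   r nickname identity digest date time IP ORPort DirPort
    # followed by an "s" line listing its flags.
    addrs = []
    current = None
    for line in consensus_text.splitlines():
        if line.startswith("r "):
            parts = line.split()
            current = (parts[6], int(parts[7]))
        elif line.startswith("s ") and current is not None:
            if required_flag in line.split()[1:]:
                addrs.append(current)
            current = None
    return addrs

# Feed it /var/lib/tor/cached-consensus and turn the (IP, ORPort)
# pairs into ACCEPT rules; everything else gets dropped.
```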
Agreed. There have been plenty of attempts at starting forums in onionland. Most of them never got more than a few users and went offline pretty quickly. There was Onionforum, which lasted about 5 years, but even it only had a few thousand users at its height, not tens of thousands like this one.
Here's a screenshot of it: http://toxicity.myftp.org/Share/Screenshots/OnionForum.png
That was considered the nexus of onionland activity in its day, and we have eclipsed it by one or two orders of magnitude.
Despite the spammers and trolls, this is a great forum. Personally, I came for the drugs and stayed for the community. :)
Yeah, it's really a shame, but I can see why there's not much out there. It's a ton of work, and bluntly, most people come to hidden services for things they can't find on the clearnet. This place is basically a candle in the dark, but it's (for obvious reasons) a SR support group first with a few stragglers thrown in.
The shame is that there's no central hub of communication as a Tor hidden service. But throwing up a crappy piece of forum software (Javascript-free cramps your style, but who the hell would run Javascript these days?) and expecting some Field of Dreams-style arrival of visitors is a non-starter.
-
I feel stupid, my firewall answer dawned on me about five minutes after I hit submit.. The firewall in front of the hidden service's Tor instance only allows traffic to/from the hidden service's entry guards. Problem solved, because as a hidden service, if you can't trust your entry guards, you are fucked with a capital F.
-
1st step to security:
iptables -A INPUT -j DROP
iptables -A FORWARD -j REJECT
iptables -A OUTPUT -j REJECT
Then allow your entry guards or VPN connection and nothing else.
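Concretely, a default-deny version of that might look like the following config sketch (run as root; the guard address and ORPort are placeholders you'd fill in from your own torrc/state file):

```shell
# Default-deny in all directions.
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT DROP
# Keep loopback open so local apps can reach Tor's SocksPort.
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
# Allow return traffic for connections we initiated.
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# One rule per entry guard or bridge (placeholder address/port).
iptables -A OUTPUT -p tcp -d 203.0.113.10 --dport 9001 -j ACCEPT
```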
I haven't had time yet to read all 8 pages. But doing this on another machine is better: host -> eth0 -> iptables machine -> etc.
-
Here's another thought.
Why not use tor to connect to a remote shell sponsored by PRQ (clearnet: http://en.wikipedia.org/wiki/PRQ )
Then connect to your drug dealing site. You can subscribe to PRQ anonymously and pay with bitcoin...
I don't think the DEA or the FBI could get access to their logs and the Swedish courts have more important things to worry about.
Application Attacks: More likely to be noticed
Traffic Analysis: Much less likely to be noticed
All of your traffic would be coming from a PRQ anonymously registered IP address...
100% of traffic coming into or out of Sweden is logged by their signals intelligence agency.
kmfkewm - I'm having a hard time disagreeing with your concept of the long term threat of traffic analysis. The attractive part from a government perspective is that it's probably not "wiretapping", it's the equivalent of a pen register. "We're just looking at metadata, we never actually look at the user's payload. Just where it's going, when it's going there, how it fragments, and what the sequencing looks like." Hit the ISP/hosting company serving the bandwidth up to the node with an NSL for Netflow data. Shove it into a large database. Mine, rinse, and repeat.
It is certainly not wiretapping.
Thought about this some more. Assuming you're talking about identifying users accessing hidden services, the key is the attackers' ability to successfully deanonymize the hidden service. They have to be able to monitor the traffic going to the hidden service to correlate it with monitored user guard node traffic.
And if they can deanonymize the hidden service (AND intercept traffic directly to it for correlation) they'd just be choosing not to bring it down in order to perform traffic analysis. Or they'd perform that analysis while they were waiting to bring it down. I think I'm finally understanding your theory about the FH attack.
It's a fairly difficult scenario, though. They'd need to be able to monitor the traffic to the hidden service, but not be able to bring it down.
The payoff from a LE perspective would depend on the target. For something like SR, I'm not sure they get any value from long-term traffic analysis if they have the option to just bring it down and call a press conference (or backdoor it)...because looking at websites serving up a variety of possibly illegal goods isn't (easily) an overt criminal act. For FH, I'm betting the broad-ranging CP definitions and laws might make just accessing some of the hosted sites illegal, regardless of what was/wasn't transferred/downloaded.
Long-term, the solution is better hidden service anonymization (which is difficult). Because for broad traffic analysis, you can only correlate guard node traffic with the destination activities to get any useful information. Leaving either correlating with exit nodes or hidden services.
The feds could obviously have monitored all traffic coming from and going to the FH server. If they owned your entry guard during the time that they pwned the server, they could deanonymize you without application layer attacks, and if they didn't do this in addition to application layer attacks, they are idiots. I wouldn't be surprised if this is how they found the admin in the first place. I do not think it is by chance that they got him shortly after he made a post to the FH server, and he wasn't using Windows, so the application layer exploit wouldn't have gotten him anyway. They would correlate activity to vendors; there seems to be a misconception that traffic analysis cannot be used to tie users of SR to their accounts on SR, but this is not the case.
Most of the attacks on the Tor network that I've heard about involve surveillance at the edges. You have to run one of your target's entry nodes, and then you can pursue several different attacks.
I think every deanonymizing attack against Tor requires the attacker to own the target's entry guard, or at least be able to observe traffic between the user and an entry guard (ie, monitoring the user from their ISP, or the ISP of the entry guard).
There are more complex attacks, like brute forcing a relay identity key so it is close to the descriptor ID, so you can become a service directory for the hidden service. That's what Donncha did and it allowed him to count the number of descriptor fetches for Silk Road and other hidden services. That's how we know that Silk Road is about 100 times more popular than Atlantis, because it got 100 times as many descriptor fetches in the 24 hours that Donncha counted them. ;)
Yes, and this can allow the attacker to deanonymize all users of the hidden service who use an attacker-controlled entry guard, without the attacker needing to actually be able to monitor traffic to the hidden service.
If you run the service directory, you still need to become an entry node for your targets. Tor clients keep entry nodes for a month and semi-randomly select new ones. That's why most of these attacks are statistical in nature. They depend on randomly being selected by the target. They are expensive and time consuming if you have a specific target in mind, like a hidden service, but if your target is "all Silk Road users", it's easy to pwn a small random sample of them, because out of tens of thousands of people, some of them will choose your entry guard very quickly.
And when users are on Tails, which picks new entry guards every session, it takes much less time before they use one of your compromised entry guards. This is why I strongly suggest against using Tails unless you also use persistent bridges.
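A back-of-envelope sketch of why this matters (the 1% figure and session rate below are made-up assumptions, not measurements):

```python
# Rough sketch of why persistent entry guards matter (illustrative
# numbers only). Assume an attacker controls a fraction `p` of guard
# bandwidth, so each fresh guard pick is malicious with probability p.

def p_compromised(p, picks):
    """Chance that at least one of `picks` guard selections is malicious."""
    return 1 - (1 - p) ** picks

p = 0.01  # attacker runs 1% of guard capacity (assumed)

# Persistent guards: roughly one new pick per month -> 12 picks/year.
print(p_compromised(p, 12))   # ~0.11

# Tails without persistent bridges: a fresh guard every session.
# At one session per day -> 365 picks/year.
print(p_compromised(p, 365))  # ~0.97
```

The exact numbers don't matter; the point is that rolling new guards every session makes eventually landing on a malicious one nearly certain, while sticking with a few persistent guards caps the exposure.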
I don't think LE would be satisfied with simply bringing the site down. For one, DPR almost certainly has backups and could redeploy the site elsewhere within hours. They would want first to identify DPR and other admins, and second to identify top vendors. That seemed to be their MO in the FH attack -- to identify as many people visiting CP sites as possible, but more importantly to identify the admins of those sites and perhaps accounts that posted a lot of content (ie, major CP distributors).
I agree in regards to silk road. In regard to the CP sites to the best of my understanding they didn't even attempt to sort people based on what they were doing, but rather went for getting as many people as possible. Since the exploit was afaik delivered from a 'down for maintenance' page, they couldn't tell the people browsing jailbait from the people uploading self produced child rape photographs.
Yes, definitely. The Tor developers have said that hidden services are experimental. They are a proof of concept. Nobody is getting paid right now to improve the hidden service protocol and make it robust against attacks. The Tor developers work on things that people pay them to work on. They have sponsors who give them specific deliverables. Mostly they are getting paid to work on things that help people in censored countries. That's why they push for more bridges and why they've created the obfsproxy protocol. We need to pool money or find someone with deep pockets to anonymously sponsor hidden service development. :)
Hidden services suck at anonymity.
-
Here's another thought.
Why not use tor to connect to a remote shell sponsored by PRQ (clearnet: http://en.wikipedia.org/wiki/PRQ )
Then connect to your drug dealing site. You can subscribe to PRQ anonymously and pay with bitcoin...
I don't think the DEA or the FBI could get access to their logs and the Swedish courts have more important things to worry about.
One of the Tor developers draws the distinction between "privacy by design" and "privacy by policy". Tor gives you privacy by design. It's difficult for someone to know who you are and what you are doing, because of the design of the network. VPN providers offer privacy by policy. They "promise" not to log what you are doing. You have no way to verify their claim, they could change their minds, or they could be compelled by their authorities to start logging.
What you seem to be promoting is privacy by red tape. :)
Julian Assange is not confident in the Swedish government's ability to resist the US government's demands, so I don't know if I would base my safety on that.
If the VPN is in Sweden we already know all traffic to it from outside of Sweden and all traffic that exits it to outside of Sweden is being logged by Swedish signals intelligence.
I agree... but you have to have the hidden service side (if the attack is for "all SR users", they have to have SR) before owning the entry guard tells you more than "Sparky from Omaha is using Tor." Otherwise you still don't know who's using SR.
You don't need to have the hidden service if you have all of its HSDir nodes; you can do a timing attack from the target's entry guard to the HSDir node request for the hidden service. Actually tying users to their accounts on SR would then require a bit more handy work, but once you are pretty certain the user is surfing SR, fingerprinting attacks could be used to tie them to specific accounts with little hassle, if they make posts or send messages the attacker can view.
I was always working under the assumption that all HSDir nodes had access to all hidden service descriptors (telling them who the Introduction Points for the HS were). I know there was a new feature added in the past year or so that only allows clients who know a key the ability to find the Introduction Points for the service.. but that's a non-starter for "public" hidden sites, since all users have to know the key.
Cookies to access hidden services (and to tell if they are up without owning their HSDir nodes) are a pretty old feature.
Firstly, you say a hard drive can be COMPLETELY encrypted. Does this literally mean that if my computer is seized by law enforcement, NO info at all will be able to be seen by them? Can this be done with TrueCrypt? From what I hear it can, and not even the FBI can crack it....I would REALLY appreciate a step by step guide on totally encrypting my hard drive so I don't even have to worry if my computer ended up in the wrong hands....
How much info they can see depends on the implementation and the way you use it, but in the majority of cases FDE is not actually full disk encryption. At least the boot sector is usually on the drive without being encrypted, and often other things are not encrypted as well. This was news to me (other than the boot sector, which obviously cannot be encrypted), as I thought FDE meant that the entire drive looked like randomness. In most cases there are still non-encrypted areas, just no areas that you would normally have anything incriminating on, or write to at all for that matter. You can put the boot sector on a USB stick and boot from that, but there will still be some non-encrypted areas on the drive in most cases.
-
If you run the service directory, you still need to become an entry node for your targets. Tor clients keep entry nodes for a month and semi-randomly select new ones. That's why most of these attacks are statistical in nature. They depend on randomly being selected by the target. They are expensive and time consuming if you have a specific target in mind, like a hidden service, but if your target is "all Silk Road users", it's easy to pwn a small random sample of them, because out of tens of thousands of people, some of them will choose your entry guard very quickly.
And when users are on Tails, which picks new entry guards every session, it takes much less time before they use one of your compromised entry guards. This is why I strongly suggest against using Tails unless you also use persistent bridges.
... if the bridge you use isn't compromised ...
-
You don't need to have the hidden service if you have all of its HSDir nodes; you can do a timing attack from the target's entry guard to the HSDir node request for the hidden service. Actually tying users to their accounts on SR would then require a bit more handy work, but once you are pretty certain the user is surfing SR, fingerprinting attacks could be used to tie them to specific accounts with little hassle, if they make posts or send messages the attacker can view.
I'm trying to make sure I'm following.. Without an attacker owning the hidden service, I think the "with little hassle" part seems really, really difficult to me. Much harder than getting control of the HSDirs, actually. Would it look something like this?
1. Use control over all of a hidden service's HSDir nodes to identify a client making a request. But isn't the client's "direct" HSDir request following a traditional three hop Tor path? So the attacker needs the client's entry guard, or all he has is the middle node sending the request. And we'll assume he can analyze that (very brief) conversation well enough through the middle hop to tie it back. At this point, the attacker already owns both the entry guard and the destination (the HSDir node).
2. The attacker can then identify that a given client was looking up the descriptor for a specific hidden service. It's a single point of data. "Yes, Client X looked up EvilSite.onion at 00:00:00"
3. The attacker then uses his control over the entry guard to deeply analyze the traffic leaving Client X, and destined for.. (he doesn't know, because unless he owns the other end, it's just traffic. This stream could be headed for an exit node, a mylittlepony-themed onion site, or the Evil Site he's controlling HSDirs to target). Unless he has significantly more nodes on the path, he has one node out of six for the conversation to the hidden service.
At that point, the attacker can try to guess "see... big session to somewhere. And a big post got made on Evil Site. Probably the same", but he can't prove anything, other than that the client looked up the descriptor. Attacker can't measure timing because he can only see one end of the conversation. Could be the big post, could be a client whose mail client just managed to send a message of the same ballpark size in the background to Gmail out an exit node while the user was waiting for EvilSite to load. It's not a fingerprint, it's just proof that the guy had a finger.
What am I missing here? I feel like the slow kid at the back of the class.
-
These rankings seem to be biased towards systems that maximise security for individuals who will predominantly be committing their offences from a single location and/or using the same network repeatedly. More bluntly, people sitting at home ordering their drugs to be delivered to their door ;) While the set-ups you've described are brilliant, they're also involved and unwieldy, inelegant.
You're absolutely right. The first 5 setups are beyond the capabilities of the vast majority of people, but I've listed them because they really are the most secure. So now you have a fun challenge. Can you convert an old laptop into a Whonix Gateway, or install PORTAL on your router? If you never try anything hard, how will you ever grow?
In any case, I think Whonix on a Linux host or Tails with persistent bridges are safe enough for most people, and within their capabilities to set up. Either of these options is much safer than running TBB on Windows, which is what most people do right now. I want to lift the collective security of the community, and I've given them a variety of options.
I prefer Tails as not only is it a secure OS, it's also a means of encouraging secure behaviour. Used as recommended, the lack of persistent entry guards isn't really an issue. Used as recommended, I believe tor bridges may be less safe, or at best redundant, as you would want to randomise them as much as possible too. Spoof your mac address, briefly access random networks to conduct your business, ram wiped, away you go. Easy as... :)
If by "used as recommended" you mean used as a mobile operating system where you log on to different, random wifi spots, then you're correct, your bridges should be different each time so you aren't linked to other logons (of course, you should randomize your MAC address in that case too, which unfortunately Tails doesn't give you an option to do during boot).
However, the vast majority of Tails users in this community don't use it as a mobile OS. They repeatedly connect from home. In that case, you want persistent entry guards, because choosing different ones all the time increases the chances that you pick a malicious node.
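For anyone wondering what "persistent bridges" look like in practice, it's just a couple of lines in your torrc (the addresses below are placeholders; get real bridge addresses from bridges.torproject.org):

```
UseBridges 1
Bridge 203.0.113.5:443
Bridge 198.51.100.17:9001
```

With these lines, Tor connects through the same bridges every session instead of rolling the dice on new entry guards each time.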
It seems with each set up you have to give up something though. According to the Qubes documentation on their website, using virtual machines adds a bloated layer to your OS that increases your attack surface. That makes sense to a point. I'm not sure if they're just trying to promote their own OS, as it relies on Xen, which is itself a hypervisor, albeit one with a small source code compared to Virtualbox or VMware. But I think it's pointless to run Whonix on your host, because if someone was actually targeting you that would provide little resistance. Whonix with physical isolation is a different story, though it still relies on virtual machines for the set up, which according to Qubes is crap from what I understood reading the docs.
-
These rankings seem to be biased towards systems that maximise security for individuals who will predominantly be committing their offences from a single location and/or using the same network repeatedly. More bluntly, people sitting at home ordering their drugs to be delivered to their door ;) While the set-ups you've described are brilliant, they're also involved and unwieldy, inelegant.
You're absolutely right. The first 5 setups are beyond the capabilities of the vast majority of people, but I've listed them because they really are the most secure. So now you have a fun challenge. Can you convert an old laptop into a Whonix Gateway, or install PORTAL on your router? If you never try anything hard, how will you ever grow?
In any case, I think Whonix on a Linux host or Tails with persistent bridges are safe enough for most people, and within their capabilities to set up. Either of these options is much safer than running TBB on Windows, which is what most people do right now. I want to lift the collective security of the community, and I've given them a variety of options.
I prefer Tails as not only is it a secure OS, it's also a means of encouraging secure behaviour. Used as recommended, the lack of persistent entry guards isn't really an issue. Used as recommended, I believe tor bridges may be less safe, or at best redundant, as you would want to randomise them as much as possible too. Spoof your mac address, briefly access random networks to conduct your business, ram wiped, away you go. Easy as... :)
If by "used as recommended" you mean used as a mobile operating system where you log on to different, random wifi spots, then you're correct, your bridges should be different each time so you aren't linked to other logons (of course, you should randomize your MAC address in that case too, which unfortunately Tails doesn't give you an option to do during boot).
However, the vast majority of Tails users in this community don't use it as a mobile OS. They repeatedly connect from home. In that case, you want persistent entry guards, because choosing different ones all the time increases the chances that you pick a malicious node.
It seems with each set up you have to give up something though. According to the Qubes documentation on their website, using virtual machines adds a bloated layer to your OS that increases your attack surface. That makes sense to a point. I'm not sure if they're just trying to promote their own OS, as it relies on Xen, which is itself a hypervisor, albeit one with a small source code compared to Virtualbox or VMware. But I think it's pointless to run Whonix on your host, because if someone was actually targeting you that would provide little resistance. Whonix with physical isolation is a different story, though it still relies on virtual machines for the set up, which according to Qubes is crap from what I understood reading the docs.
Everything has advantages and disadvantages. Virtualbox is going to be fine for stopping most attackers from breaking isolation. In the FH attack the feds didn't even attempt to break isolation. If you use Virtualbox and Firefox, then to be pwnt without a zero day, you need to be running both of them without the latest security patches at the same time. It reduces your window of vulnerability, because when one has a public vulnerability the other may not, and vice versa. And getting a zero day for one or the other is much more expensive than using a known attack. Also, Virtualbox still gives you ASLR, which means a vulnerability in Firefox in Virtualbox could be harder to exploit than a vulnerability in Firefox in Xen.
On the other hand, Xen has a really minimal code base compared to Virtualbox, and it will probably be harder for an attacker to break out of it. But it might be easier for an attacker to break into it. Then again, Qubes lets you have so many domains that an attacker breaking into one of them shouldn't be a huge failure. If your Firefox domain is pwnt, well, you are using a Tor VM so Firefox doesn't know your IP address, and you are using a GPG VM so none of your plaintexts can be accessed by Firefox, and it also cannot access your private key.
Nothing gives you all of the advantages and none of the disadvantages yet. Hopefully Xen starts supporting ASLR and other security features in its guests. I don't think even dom0 can have ASLR, whereas Virtualbox on a host with ASLR gives you ASLR for Firefox in the VM and ASLR for Virtualbox on the host. Plus you can use mandatory access controls to isolate Virtualbox, and Virtualbox to isolate Firefox.
I would go with Qubes over Whonix and Xen over VBox. But Virtualbox has some advantages over Xen as well.
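The window-of-vulnerability argument above can be sketched with made-up numbers (the unpatched-days figures are purely illustrative assumptions):

```python
# Back-of-envelope sketch of the "window of vulnerability" argument
# (all numbers are made up for illustration). To break isolation with
# only known bugs, an attacker must catch BOTH the browser and the
# hypervisor unpatched at the same time.

days_ff_unpatched = 30    # assumed days/year Firefox has a known hole
days_vbox_unpatched = 20  # assumed days/year VirtualBox has one

p_ff = days_ff_unpatched / 365
p_vbox = days_vbox_unpatched / 365

# If the windows are independent, both are open at once far less often:
p_both = p_ff * p_vbox

print(round(p_ff, 3), round(p_vbox, 3), round(p_both, 4))
```

Independence is an optimistic assumption (a single Pwn2Own drop could open both windows at once), but it shows why layering two codebases shrinks the known-bug attack window so sharply.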
-
Whonix with physical isolation is a different story, though it still relies on virtual machines for the set up, which according to Qubes is crap from what I understood reading the docs.
In the physical isolation setup, they recommend running the Gateway on bare metal but the Workstation in a VM to hide hardware serial numbers. Makes sense, and I'm pretty sure Qubes touts that as a feature somewhere in their documentation.
-
On the one hand:
First, products such as VMWare Workstation or Fusion, or Virtual Box, are all examples of type II hypervisors (sometimes called “hosted VMMs”), which means that they run inside a normal OS, such as Windows, as ordinary processes and/or kernel modules. This means that they use the OS-provided services for all sorts of things, from networking, USB stacks, to graphics output and keyboard and mouse input, which in turn implies they can be only as secure as the hosting OS is. If the hosting OS got compromised, perhaps via a bug in its DHCP client, or USB driver, then it is a game over, also for all your VMs.
Second, those popular consumer type II VMM systems have not been designed with security as a primary goal. Instead, their main focus has been on ease of use, performance, and providing seamless integration of the guest OS(es) with the host OS. Especially the latter, which involves the lack of a good method to identify which domain a given application belongs to (so, lack of trusted Window Manager), support for shared clipboards which every other VM can steal, insecure file sharing methods, and others, all make it not a very desirable solution when strong domain isolation is important. (This is not to imply that Qubes doesn't support clipboard or file sharing between domains, it does – it's just that we do it in a secure way, at least so we believe). On the other hand, there are many usability improvements in Qubes that are specific to a multi-domain system, and which you won't find in the above mentioned products, such as a trusted Window Manager that, while maintaining great seamless integration of all the applications onto a common desktop, still allows the user to always know which domain owns which window, support for advanced networking setups, per-domain policies, the just mentioned secure mechanisms for clipboard and filesystem sharing, and many others. Qubes also focuses on making the VMs light-weight so that it was possible to run really a lot of them at the same time, and also on mechanisms to allow for secure filesystem sharing between domains (templates).
Finally, the commercial hosted VMMs are really bloated pieces of code. They support everything and the kitchen sink (e.g. Open GL exposed to VMs, and various additional interfaces to allow e.g. drag and drop of files to/from the VM), and so, the attack surface on such a VMM system is orders of magnitude bigger than in case of Qubes OS.
on the other hand
Anti-exploitation mechanisms in the hypervisor
Currently Xen doesn't make use of any well known anti-exploitation techniques, like Non-Executable memory (NX) or Address Space Layout Randomization (ASLR). Adding proper NX markings on all the pages that do not contain code is usually an obvious first step in making potential bugs exploitation harder. Particularly the combination of NX and ASLR is used most often, because NX protection alone can easily be circumvented using the so called return-into-lib exploitation technique, where the attacker jumps into the code snippets that are already present (as they are parts of the legal code) in the address space of the target being exploited.
However, in case of Xen, the potential benefits of using NX markings are questionable. This is because the IA32 architecture, as implemented on modern Intel and AMD processors, allows the CPU that executes in ring0 to jump and execute code kept on usermode pages. So the attacker can always keep the shellcode in usermode, in this case, e.g. in the VM's kernel or process, and can bypass all the NX protections implemented in the Xen hypervisor. The only solution to this problem would be to modify the IA32 architecture so that it would be possible to disable this mode of operation (e.g. via some MSR register).
ASLR does make sense though. Particularly, one might modify all the memory allocation functions and also attempt to make the Xen code relocatable, so that each time Xen is loaded it gets loaded at a different address. On the other hand such changes might be non-trivial, and perhaps might introduce some more complexity to the hypervisor. Further research is needed to decide if the addition of any anti-exploitation mechanisms is worth the effort.
-
Everything has advantages and disadvantages. Virtualbox is going to be fine for stopping most attackers from breaking isolation. In the FH attack the feds didn't even attempt to break isolation. If you use Virtualbox and Firefox, then to be pwnt without a zero day, you need to be running both of them without the latest security patches at the same time. It reduces your window of vulnerability, because when one has a public vulnerability the other may not, and vice versa. And getting a zero day for one or the other is much more expensive than using a known attack. Also, Virtualbox still gives you ASLR, which means a vulnerability in Firefox in Virtualbox could be harder to exploit than a vulnerability in Firefox in Xen.
On the other hand, Xen has a really minimal code base compared to Virtualbox, and it will probably be harder for an attacker to break out of it. But it might be easier for an attacker to break into it. Then again, Qubes lets you have so many domains that an attacker breaking into one of them shouldn't be a huge failure. If your Firefox domain is pwnt, well, you are using a Tor VM so Firefox doesn't know your IP address, and you are using a GPG VM so none of your plaintexts can be accessed by Firefox, and it also cannot access your private key.
Nothing gives you all of the advantages and none of the disadvantages yet. Hopefully Xen starts supporting ASLR and other security features in its guests. I don't think even dom0 can have ASLR, whereas Virtualbox on a host with ASLR gives you ASLR for Firefox in the VM and ASLR for Virtualbox on the host. Plus you can use mandatory access controls to isolate Virtualbox, and Virtualbox to isolate Firefox.
I would go with Qubes over Whonix and Xen over VBox. But Virtualbox has some advantages over Xen as well.
ASLR is a big plus, making buffer exploits near impossible to execute. Doesn't PaX for Gentoo include that in the hardened version? I'm surprised Gentoo wasn't mentioned here. From what I understand it's not really for the new Linux user, so maybe that's why it wasn't mentioned. In regards to Xen, I don't agree with the Qubes developer that a minimal code base means it is more secure though. How has that been proven? That's like saying a web form on my web site with 3 fields is safer than a web form that has 20 fields. All it takes is one entry point. It's possible to write an exploit that just uses the program's own code to change binary values to do what the attacker wants.
Here's a pdf regarding that, which is actually quite a good read and explains it way better than I can:
CLEARNET: http://www.cs.dartmouth.edu/~sergey/langsec/papers/Bratus.pdf
Also, I don't agree that Windows is less secure than Linux. I have never gotten a virus, rootkit, or any type of malware. Being a Windows user, I know where to look and how to dig deep into Windows to find anything that shouldn't be there. I'm just getting into Linux now, but my point is that if I use Linux, my level of knowledge on how to operate and secure the system is not at the same level as my Windows experience, so that in itself would make Linux more insecure in my case. However, given a user with the same experience level in both Linux and Windows, I would say that Linux is safer.
As well, I think it's important to leave no traces behind, especially if you're a vendor, and by installing Whonix on a main OS and then using, say, a laptop as the gateway, you don't really achieve that. You can encrypt the hard drive and create a hidden OS and install Whonix in there, but that option is only for Windows, so again we're back to square one. Personally I would just use a laptop with a custom hardened Gentoo OS with PaX on top of that, but still being new to Linux, it may be a while before I know how to do that.
The problem I see with Qubes is that a lot of it is based on theory, and while it looks good on paper, we never know till it gets put to the test. Currently looking at Fedora and I must say it's pretty nice. It seems like a good place to ease into Linux as it's kind of similar to Windows.
-
Whonix with physical isolation is a different story, though it still relies on virtual machines for the set up, which according to Qubes is crap from what I understood reading the docs.
In the physical isolation setup, they recommend running the Gateway on bare metal but the Workstation in a VM to hide hardware serial numbers. Makes sense, and I'm pretty sure Qubes touts that as a feature somewhere in their documentation.
Sorry my mistake I actually forget it does say that in the Whonix docs. :P
-
Qubes also focuses on making the VMs light-weight so that it was possible to run really a lot of them at the same time
This is a funny statement, because Qubes seems bloated as crap to me. I've been playing around with it lately. The netvm and firewallvm take up 500 MB of RAM each! For what? These VMs shouldn't be using 50 MB of RAM. The dom0 control stack takes a full 2 GB. So just to boot into the default desktop you need 3 GB, and each AppVM starts out using 500 MB and grows as you run more apps. You really need 6-8 GB to run Qubes, with 4 GB as the bare minimum.
You can probably reconfigure these VMs to use less RAM, but that is the default setup.
-
This is a funny statement, because Qubes seems bloated as crap to me.
Yeah, my experiment with it was fairly short lived. Had enough RAM, but wasn't on SSD, and it's i/o-intensive as well.
I loved the idea.. But after a while, on the hardware I was running it on, I started suspecting that it had another layer of security: even exploit code might take hours to execute. :)
-
In regards to Xen, I don't agree with the Qubes developer that a minimal code base means it is more secure though.
Pretty much nobody disagrees that less code means more secure. The amount of code correlates strongly with the number of bugs; I have seen security programmers measure their skill in number of bugs per thousand lines of code. If you average ten bugs per thousand lines of code, removing a thousand lines of code removes, on average, ten bugs. It also means people auditing your software can spend more time looking for bugs in the remaining code. So removing lines of code directly removes bugs, and also makes it more likely that bugs will be found and fixed in other parts of the program.
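The arithmetic here, spelled out (the 10 bugs/KLOC rate is just the illustrative figure from the post, and the codebase size is hypothetical):

```python
# The bugs-per-KLOC heuristic as plain arithmetic (illustrative only).

bugs_per_kloc = 10     # the rate used in the post above
codebase_kloc = 100    # hypothetical codebase: 100,000 lines

expected_bugs = bugs_per_kloc * codebase_kloc
print(expected_bugs)   # 1000 expected bugs

# Cut 1,000 lines (1 KLOC) and, on average, 10 bugs go with them:
after_trim = bugs_per_kloc * (codebase_kloc - 1)
print(after_trim)      # 990
```

It's a linear heuristic, not a law; the real payoff the post describes is the second-order effect, that auditors can spend more attention per remaining line.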
-
In regards to Xen, I don't agree with the Qubes developer that a minimal code base means it's more secure, though.
Pretty much nobody disagrees that less code means more secure. The amount of code correlates closely with the number of bugs. I have seen security programmers measure their skill in bugs per thousand lines of code. If you average ten bugs per thousand lines of code, removing a thousand lines of code removes ten bugs. Removing a thousand lines of code also means people auditing your software can spend more time looking for bugs in the remaining code. So removing lines of code directly removes bugs, and it makes it more likely that bugs will be found and fixed in other parts of the program.
That's assuming all programmers are at the same skill level, which is simply not true. So instead of fixing the ten bugs, the solution for said programmers would be to remove a thousand lines of code? Yes, your code shouldn't be verbose just for the sake of it. Some programmers do it, some don't, but my point again is that if your source code is smaller, someone who wants to exploit it needs less time to analyze it and figure out a weakness.
Did you look at the FH exploit code in JavaScript? They purposely made it very obscure, using binary and hex values for JavaScript return calls, and it was close to 1500 lines of code if I remember correctly. I'm pretty sure they could have achieved the same result with 500 lines of code, but their purpose, besides identifying people, was to make the exploit code as obscure as possible.
-
In regards to Xen, I don't agree with the Qubes developer that a minimal code base means it's more secure, though.
Pretty much nobody disagrees that less code means more secure. The amount of code correlates closely with the number of bugs. I have seen security programmers measure their skill in bugs per thousand lines of code. If you average ten bugs per thousand lines of code, removing a thousand lines of code removes ten bugs. Removing a thousand lines of code also means people auditing your software can spend more time looking for bugs in the remaining code. So removing lines of code directly removes bugs, and it makes it more likely that bugs will be found and fixed in other parts of the program.
That's assuming all programmers are at the same skill level, which is simply not true. So instead of fixing the ten bugs, the solution for said programmers would be to remove a thousand lines of code? Yes, your code shouldn't be verbose just for the sake of it. Some programmers do it, some don't, but my point again is that if your source code is smaller, someone who wants to exploit it needs less time to analyze it and figure out a weakness.
Did you look at the FH exploit code in JavaScript? They purposely made it very obscure, using binary and hex values for JavaScript return calls, and it was close to 1500 lines of code if I remember correctly. I'm pretty sure they could have achieved the same result with 500 lines of code, but their purpose, besides identifying people, was to make the exploit code as obscure as possible.
How did that work out for them? It took about 24 hours for their entire exploit to be fully analyzed. Rule number one of secure programming is that the less code you have, the better. Security via obscurity is an oxymoron.
-
You also need to keep in mind that the ten bugs might not be obvious, in that you cannot fix them because you don't know about them. But then when you remove 1,000 lines of code, you remove the 10 bugs you didn't even know about. All programs should be expressed in as little code as possible, the more code you put into a program the more bugs you put into it.
-
You also need to keep in mind that the ten bugs might not be obvious, in that you cannot fix them because you don't know about them. But then when you remove 1,000 lines of code, you remove the 10 bugs you didn't even know about. All programs should be expressed in as little code as possible, the more code you put into a program the more bugs you put into it.
Yeah, I agree with that, and I also expressed the same thing about having a minimal code base. It took 24 hours for that exploit code to be analyzed, but it wasn't one person analyzing it; it was a community effort. Back to my point, though: more lines of code doesn't always mean more bugs. To once again bring up the exploit code used: 1500 lines of code when it could have been written in a little over 500 lines. Does that mean there are more bugs in that code? It sure seemed to work properly to me. I think we're talking about code in two different contexts. In terms of software, yes, less code and a more minimal code base make it more manageable. However, your theory that more code equals more bugs is not always true in other contexts.
-
Exploit code might not be the best example against the "simpler is generally safer" argument. Setting things on fire generally requires a different approach than making something fire retardant. Mostly, nobody really cares how well or poorly an exploit is written. It has an operational lifespan of a fraction of a second and either does its job, or it fails.
There are plenty of examples of insecure smaller code, and of more secure larger code. But assuming good coding in both cases, the fewer lines of code you have, the fewer opportunities you have to screw something up.
-
You also need to keep in mind that the ten bugs might not be obvious, in that you cannot fix them because you don't know about them. But then when you remove 1,000 lines of code, you remove the 10 bugs you didn't even know about. All programs should be expressed in as little code as possible, the more code you put into a program the more bugs you put into it.
Yeah, I agree with that, and I also expressed the same thing about having a minimal code base. It took 24 hours for that exploit code to be analyzed, but it wasn't one person analyzing it; it was a community effort. Back to my point, though: more lines of code doesn't always mean more bugs. To once again bring up the exploit code used: 1500 lines of code when it could have been written in a little over 500 lines. Does that mean there are more bugs in that code? It sure seemed to work properly to me. I think we're talking about code in two different contexts. In terms of software, yes, less code and a more minimal code base make it more manageable. However, your theory that more code equals more bugs is not always true in other contexts.
The security community says that more code equals more bugs. More code means more complexity, more complexity means more bugs. People make on average a certain number of mistakes per X lines of code. Removing X lines of code removes those bugs. If you can remove code and still meet your goal, you should always do it. A really good programmer might average one bug per 500 lines of code; removing 500 lines of code will likely remove a security vulnerability. A shitty programmer might average one bug or more per 50 lines of code; removing 500 lines of code will likely remove 10 security vulnerabilities.
The book "Code Complete" by Steve McConnell has a brief section about error expectations. He basically says that the range of possibilities can be as follows:
(a) Industry average: "about 15 - 50 errors per 1000 lines of delivered code." He further says this is usually representative of code that has some level of structured programming behind it, but probably includes a mix of coding techniques.
(b) Microsoft applications: "about 10 - 20 defects per 1000 lines of code during in-house testing, and 0.5 defect per KLOC (1000 lines of code) in released product (Moore 1992)." He attributes this to a combination of code-reading techniques and independent testing (discussed further in another chapter of his book).
(c) "Harlan Mills pioneered 'cleanroom development', a technique that has been able to achieve rates as low as 3 defects per 1000 lines of code during in-house testing and 0.1 defect per 1000 lines of code in released product (Cobb and Mills 1990). A few projects - for example, the space-shuttle software - have achieved a level of 0 defects in 500,000 lines of code using a system of formal development methods, peer reviews, and statistical testing."
Seriously, there is not much of a debate; there are all kinds of studies showing that programmers tend to make an average number of errors per X lines of code (with more skilled programmers making fewer, and security-oriented, highly skilled programmers making very few), and that means the fewer lines of code your program has, the fewer bugs it will have. The number one rule of security programming is to express every program in as little code as required to meet your objective. Any additional code just introduces additional security vulnerabilities for no reason at all.
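The defect-density figures quoted above can be turned into a rough estimate. A minimal sketch (the per-KLOC rates are the ones cited from McConnell; treat them as rough averages, not laws):

```python
# Expected latent defects for a given code size at different defect
# densities. The per-KLOC rates are the figures cited above from
# McConnell's "Code Complete"; they are rough averages, not guarantees.
DEFECTS_PER_KLOC = {
    "industry average (low end)": 15.0,   # range cited is 15-50
    "released Microsoft app": 0.5,
    "cleanroom, released": 0.1,
}

def expected_defects(lines_of_code, density_per_kloc):
    """Expected number of defects in `lines_of_code` lines of code."""
    return lines_of_code / 1000.0 * density_per_kloc

for label, density in DEFECTS_PER_KLOC.items():
    print(f"{label}: ~{expected_defects(10_000, density):g} defects in 10 KLOC")
```

By this estimate, cutting 1,000 lines from an industry-average codebase removes on the order of 15 latent bugs, which is the core of the "less code, fewer bugs" argument.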
-
Yes, I understand that and never disagreed with you. My point was about the context of the code's purpose. That's a basic principle they teach in any programming course: the less code, the better. Python is very good at helping you do that; C is not; C++ can be, depending on the programmer. I'm not really debating that basic foundational principle of programming, but I was stating that at times a programmer finds it beneficial for his specific purpose to write more code, and I used the recent exploit as an example. Albeit there are two different purposes for exploit code and for software code, in the exploit code situation it was more beneficial to have more lines of code, as it would take longer for people to analyze it. Again, I think we're just viewing the issue from two polar opposite perspectives.
-
The security community says that more code equals more bugs. More code means more complexity, more complexity means more bugs. People make on average a certain number of mistakes per X lines of code. Removing X lines of code removes those bugs. If you can remove code and still meet your goal, you should always do it.
...
The number one rule of security programming is to express every program in as little code as required to meet your objective. Any additional code just introduces additional security vulnerabilities for no reason at all.
I've never understood the preoccupation people have with lines of code... I honestly don't think this should even be considered during development -- something tangential to the number of lines, if you will, of course; but certainly not the actual number of lines themselves. If you're actually attempting to limit your "lines" of code then you're doing yourself a disservice and training your mind to think in ways that make it harder to solve problems. Frankly I think there's far too much worrying about how much whitespace is on the screen going on here...
I don't know, something about the superficiality of these statements really rubs me the wrong way.
-
The security community says that more code equals more bugs. More code means more complexity, more complexity means more bugs. People make on average a certain number of mistakes per X lines of code. Removing X lines of code removes those bugs. If you can remove code and still meet your goal, you should always do it.
...
The number one rule of security programming is to express every program in as little code as required to meet your objective. Any additional code just introduces additional security vulnerabilities for no reason at all.
I've never understood the preoccupation people have with lines of code... I honestly don't think this should even be considered during development -- something tangential to the number of lines, if you will, of course; but certainly not the actual number of lines themselves. If you're actually attempting to limit your "lines" of code then you're doing yourself a disservice and training your mind to think in ways that make it harder to solve problems. Frankly I think there's far too much worrying about how much whitespace is on the screen going on here...
I don't know, something about the superficiality of these statements really rubs me the wrong way.
It really depends on what you're building. It's such a general statement to say that for every 1000 lines of code there will be 50 bugs because X number of security programmers I know said so. What are you making? An e-commerce site? A database? An email client? What language are you using? There are so many factors and variables that come into play that making an overgeneralized statement that less code is always better is a simplistic way to consider the subject.
-
Clearnet: http://gpg4usb.cpunk.de/download.html
Anyone know how to set up a 4096-bit RSA key?
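If gpg4usb's built-in key generator doesn't offer a 4096-bit option, you can generate the key with command-line GnuPG and import it. A sketch using GnuPG's unattended batch mode (parameter-file syntax from the GnuPG manual; the name/email here are placeholders you must change):

```shell
# Sketch: generate a 4096-bit RSA primary key plus a 4096-bit RSA subkey
# unattended with GnuPG batch mode. Placeholder name/email -- change them.
cat > keyparams <<'EOF'
%echo Generating a 4096-bit RSA key
Key-Type: RSA
Key-Length: 4096
Subkey-Type: RSA
Subkey-Length: 4096
Name-Real: anon
Name-Email: anon@example.invalid
Expire-Date: 0
Passphrase: replace-with-a-strong-passphrase
%commit
EOF
gpg --batch --gen-key keyparams
shred -u keyparams   # the parameter file contains your passphrase
gpg --list-keys
```

Once generated, the public key can be exported with `gpg --export -a <keyid>` (and the secret key with `gpg --export-secret-key -a <keyid>`) for import into gpg4usb.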
-
If you're under investigation and you've got drugs in your house, then you are fucked, end of story. And all this talk about NSA network layer attacks or whatever is bullshit; the day they do an attack like that on SR is the day Tor dies. If they do that, they'll have fucked their ability to watch their real enemies.
-
The security community says that more code equals more bugs. More code means more complexity, more complexity means more bugs. People make on average a certain number of mistakes per X lines of code. Removing X lines of code removes those bugs. If you can remove code and still meet your goal, you should always do it.
...
The number one rule of security programming is to express every program in as little code as required to meet your objective. Any additional code just introduces additional security vulnerabilities for no reason at all.
I've never understood the preoccupation people have with lines of code... I honestly don't think this should even be considered during development -- something tangential to the number of lines, if you will, of course; but certainly not the actual number of lines themselves. If you're actually attempting to limit your "lines" of code then you're doing yourself a disservice and training your mind to think in ways that make it harder to solve problems. Frankly I think there's far too much worrying about how much whitespace is on the screen going on here...
I don't know, something about the superficiality of these statements really rubs me the wrong way.
Yes, I agree with this. Lines of code doesn't matter taken at face value. It's used to mean "amount of code", though, or the complexity of the program. If you put your entire program on a single line, it doesn't make it more secure. :)
-
The security community says that more code equals more bugs. More code means more complexity, more complexity means more bugs. People make on average a certain number of mistakes per X lines of code. Removing X lines of code removes those bugs. If you can remove code and still meet your goal, you should always do it.
...
The number one rule of security programming is to express every program in as little code as required to meet your objective. Any additional code just introduces additional security vulnerabilities for no reason at all.
I've never understood the preoccupation people have with lines of code... I honestly don't think this should even be considered during development -- something tangential to the number of lines, if you will, of course; but certainly not the actual number of lines themselves. If you're actually attempting to limit your "lines" of code then you're doing yourself a disservice and training your mind to think in ways that make it harder to solve problems. Frankly I think there's far too much worrying about how much whitespace is on the screen going on here...
I don't know, something about the superficiality of these statements really rubs me the wrong way.
It really depends on what you're building. It's such a general statement to say that for every 1000 lines of code there will be 50 bugs because X number of security programmers I know said so. What are you making? An e-commerce site? A database? An email client? What language are you using? There are so many factors and variables that come into play that making an overgeneralized statement that less code is always better is a simplistic way to consider the subject.
If you want to make a program with as few security vulnerabilities as possible, best practice is to always make the program with as little code as possible.
-
Subb'n & Transfixed... Thank you.